22.04.2025 - 17.04.2025 (Week 1 - Week 4)
Janaan Ahmed (0353333)
Minor Project/ Bachelor of Design (Hons) in Creative Media
Exercises
Jump to: Reflective Writing | Exercises
Lectures
Reflections are included at the end of each lecture's notes.
W1: Sound Fundamentals
Nature of Sound
- Sound: vibration of air molecules that stimulates our eardrums
- Exists as variations of pressure in a medium
- 3 Phases:
- Production: Source of sound (eg. vibration of vocal cords)
- Propagation: the medium through which it travels. Molecules compress and rarefy (variations in pressure)
- Perception: Captured and translated by our brain
- Psychoacoustics: study of human perception of sound
Human Ear
- Intricate structure: outer -> middle -> inner
- Vibrations travel through the ear's structure; the fluid in the cochlea vibrates, generating an electrical signal that is transmitted to the brain, which helps us identify different sounds
- Different notes trigger different regions of the cochlea
Anatomy of a wave
- 2 forms of waves:
- transverse: particles vibrate perpendicular to the direction of the wave
- longitudinal: vibrations are parallel to the direction of the wave
- Sound waves: longitudinal
- Travels quickest through solids (particles are closer together)
- Travels in all directions
- Wavelength: distance between two compressions (or rarefactions)
- Amplitude: strength of a wave signal
- Frequency: no. of waves passing through a point per second (measured in hertz, Hz)
Properties of a sound wave
- Pitch: more vibrations per second = higher frequency = higher pitch
- Loudness: larger amplitude = louder sound
- Timbre: quality of the sound
- Perceived duration: pacing of sound (fast/slow)
- Envelope: structure of a sound (when does it get softer/louder)
- Spatialization: location of sound in space
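To tie pitch and loudness to something concrete, here is a minimal Python sketch (numpy and soundfile are my own tooling choice, not anything prescribed in the lecture) that synthesises two sine tones: a higher frequency gives a higher pitch, and a larger amplitude gives a louder sound.

```python
import numpy as np
import soundfile as sf  # assumed available; any WAV writer would do

SAMPLE_RATE = 44100  # samples per second

def sine_tone(freq_hz, amplitude, duration_s=1.0, sr=SAMPLE_RATE):
    """Generate a sine wave: freq_hz controls pitch, amplitude controls loudness."""
    t = np.linspace(0, duration_s, int(sr * duration_s), endpoint=False)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# A soft low tone vs a louder tone one octave higher (double the frequency).
low_soft = sine_tone(220.0, amplitude=0.2)   # A3, quiet
high_loud = sine_tone(440.0, amplitude=0.8)  # A4, louder

sf.write("tones.wav", np.concatenate([low_soft, high_loud]), SAMPLE_RATE)
```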
L1/ REFLECTION:
It is interesting to learn about the mechanism of sound as variations of pressure and our perception of it. This lecture was overall a refresher that took me back to some things I learned in physics class. It is also incredible how the human ear has developed and adapted so efficiently to detect these changes in pressure, and of course how our brain is able to interpret them as sound.
W2: Sound Design Tools
Digital Audio Workstations (DAWs) have a set of common tools useful for sound design.
Basic techniques (a rough code sketch of these follows the list):
- Layering
- Time stretching
- Pitch shifting
- Reversing
- Mouth it!
1. Layering
Layering different sounds over each other enables us to blend and mix them to create a new, unique sound.
2. Time Stretching / Time Compression / Elastic Audio
Changes the pacing/tempo/speed of the audio without altering its pitch.
3. Pitch Shifting
Changes the pitch without altering the pacing.
4. Reversing
Coupled with layering, reversing can yield interesting effects.
5. Mouth it!
If you're struggling to find the sound you're looking for, try making it with your mouth and recording that.
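As a rough illustration of time stretching, pitch shifting, reversing, and layering outside of a DAW (my own sketch using librosa and soundfile; the class itself works in Audition, and clip.wav is a hypothetical input file):

```python
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("clip.wav", sr=None)  # hypothetical file, keep its original sample rate

slower = librosa.effects.time_stretch(y, rate=0.5)         # half speed, pitch unchanged
higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)  # up 4 semitones, pacing unchanged
reverse = y[::-1]                                           # reversed copy

# Layering: mix the reversed clip quietly under the pitch-shifted one.
n = min(len(higher), len(reverse))
layered = higher[:n] + 0.4 * reverse[:n]
layered /= max(1.0, np.abs(layered).max())  # keep the sum from clipping

sf.write("layered.wav", layered, sr)
```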
Extra notes:
- Destructive editing: editing on the waveform
- Non-destructive editing: editing on the multitrack
- Make sure the sample rate of the multitrack session matches the files you have (a quick check is sketched below)
- Radio/music = 44,100 Hz (44.1 kHz); audio for visuals = 48,000 Hz (48 kHz) or higher
- Higher sample rate = better quality
- Bit depth: the range of loudness (dynamic range) that can be captured
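A quick sketch of the sample-rate check mentioned above (soundfile and librosa are my own choice of tools, and clip.wav is a hypothetical file; Audition shows the same information in its file properties):

```python
import soundfile as sf
import librosa

SESSION_RATE = 48000  # e.g. a 48 kHz multitrack session for work with visuals

info = sf.info("clip.wav")               # hypothetical file
print(info.samplerate, info.subtype)     # sample rate and bit depth, e.g. 44100, 'PCM_16'

if info.samplerate != SESSION_RATE:
    y, sr = librosa.load("clip.wav", sr=None)
    y = librosa.resample(y, orig_sr=sr, target_sr=SESSION_RATE)
    sf.write("clip_48k.wav", y, SESSION_RATE, subtype="PCM_24")  # resampled, 24-bit
```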
Sound design for impactful storytelling and atmosphere
Tip: Less is more
Understanding Reverb and its settings
- Reverb is basically the echo you hear in a room/space.
- Decay time: how long the reverb lasts before it fades away completely (e.g. shorter for a clap in a small room, longer for a clap in a large room)
- Pre-delay time: the delay between the original sound and when the reverb starts (e.g. shouting in a large room, the echo is heard a moment later)
- Diffusion: how 'spread out' the reverb is, i.e. how 'smooth' or 'choppy' the echo sounds
- High diffusion: smooth, blended echoes (like a sponge or soft clouds)
- Low diffusion: clear, sharp echoes (like tapping glass or drumsticks)
- Perception: changes how 'bright' or 'dull' the reverb sounds
- Absorbent = a room with carpets sounds more muted, i.e. dampens echoes for a softer feel
- Reflective = a room with tiles or glass bounces more sound, i.e. bright and lively
- Output level (wet and dry): the balance between the original sound (dry) and the reverb/echo effect (wet)
- High wet + low dry: large, spacious settings where the reverb is very apparent; brings out the echoes
- Low wet + high dry: intimate settings like recording studios; clarity and focus on the original sound
Key Takeaways:
- Decay and pre-delay: set the size of the space
- Diffusion and perception: define the texture of the surroundings
- Dry and wet: balance clarity and immersion in the space
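As a toy illustration of how decay, pre-delay, and wet/dry interact (a single feedback delay written by me in numpy; nowhere near a real reverb algorithm and not how Audition implements it):

```python
import numpy as np

def toy_reverb(dry, sr, pre_delay_s=0.02, decay_s=1.5, wet=0.3):
    """Crude reverb: one feedback delay line with an exponential decay.
    pre_delay_s = gap before the first echo, decay_s = how long the echoes ring,
    wet = balance between the effect and the original (dry) sound."""
    delay = max(1, int(pre_delay_s * sr))
    tail = int(decay_s * sr)
    out = np.concatenate([dry, np.zeros(tail)])
    # Feedback gain chosen so the echoes fall by ~60 dB over decay_s seconds.
    feedback = 10 ** (-3.0 * delay / (decay_s * sr))
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    dry_padded = np.concatenate([dry, np.zeros(tail)])
    mixed = (1 - wet) * dry_padded + wet * out
    return mixed / max(1.0, np.abs(mixed).max())

# e.g. a small bathroom: short decay, mostly dry; a stadium: long decay, mostly wet.
# bathroom = toy_reverb(clap, sr, pre_delay_s=0.01, decay_s=0.4, wet=0.25)
# stadium  = toy_reverb(clap, sr, pre_delay_s=0.08, decay_s=3.0, wet=0.7)
```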
L2/ REFLECTION:
Sound design often carries the emotional weight and atmosphere of a scene; however, I think we can all agree it doesn't receive as much praise as it deserves, despite doing the heavy lifting. Perhaps that's because sound is ingrained into a scene so effortlessly in the background. The world of sound and foley is full of endless possible combinations, and combined with sound design tools, it becomes a matter of utilising your creativity to achieve the desired outcome.
W3: Sound in Space
Diegetic and Non-Diegetic Sound
- Diegesis: the world of the film and everything in it
- Everything characters experience within their world is diegetic
- Everything that only the audience perceives is non-diegetic (e.g. title cards)
Diegetic Sound
- Sounds the characters can hear
- e.g. weather, vehicles, weapons, music within the film, dialogue, some forms of voiceover
- Establishes the world of the characters and informs the setting
- Can be manipulated to let us know what the characters hear (e.g. a fly)
Non-Diegetic Sound
- Everything the characters cannot hear
- e.g. SFX, musical score, some forms of narration
- Can enhance motion and movement to increase intensity (e.g. a fight scene)
- Can be used for comedy (the punchline of a joke)
- Shapes the film experience (e.g. how transcendent Interstellar feels)
Trans-Diegetic Sound
- Expectations are subverted
- A sound expected to be non-diegetic suddenly becomes diegetic (a musical score turns out to be a character playing an instrument)
- Switching between them can blur the line between fantasy and reality
L3/ REFLECTION:
Diegetic and non-diegetic sounds shape the world of a film in their own unique ways, so it is interesting to see how both can be manipulated to fit the story. Whether it is to convey a character's feelings, or the atmosphere or seriousness of a scene, the interplay of diegetic and non-diegetic sound contributes to a scene as much as the cinematography does. I suppose it is the director's preference and artistic take on a scene, and the overall tone of the movie, that determine how they're used.
PRACTICAL
- Track automation is time-specific, so if a clip is moved along the timeline it will sound different depending on where it sits within the timeframe.
- Clip automation applies to the clip itself (it follows the clip if it is moved).
- Use track automation for stories/films where scenes require specific timings.
- Volume, panning, and EQ can be manipulated under the drop-down panel.
In-class exercise
1. Jet plane passing by
2. Woman walking past us into a cave
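In Audition these exercises are done with automation envelopes; as a rough illustration of the same idea in code (my own sketch, not the class method), the jet pass-by can be approximated by automating volume and pan across the length of a clip:

```python
import numpy as np

def pan_and_fade(mono, pan_start=-1.0, pan_end=1.0):
    """Automate pan (-1 = hard left, +1 = hard right) and volume across a mono clip.
    Volume peaks mid-way to mimic the jet passing closest to the listener."""
    n = len(mono)
    t = np.linspace(0.0, 1.0, n)
    pan = pan_start + (pan_end - pan_start) * t          # linear pan automation
    vol = 0.2 + 0.8 * np.exp(-((t - 0.5) ** 2) / 0.05)   # swell in, fade out
    theta = (pan + 1) * np.pi / 4                         # constant-power pan law
    left = mono * vol * np.cos(theta)
    right = mono * vol * np.sin(theta)
    return np.stack([left, right], axis=1)               # stereo output, shape (n, 2)

# Usage with a hypothetical mono recording of an idling jet engine:
# import soundfile as sf
# jet, sr = sf.read("jet_idle.wav")
# sf.write("jet_passby.wav", pan_and_fade(jet), sr)
```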
Instructions
Exercise 1: Parametric Equaliser
For this exercise we are given 4 samples of an audio clip and are required to equalise each one to match the reference (flat) version of the audio. Bass, treble, and mid-range frequencies are adjusted for each sample accordingly.
Fig 1.13: EQ 4 (Parametric Equaliser)
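Audition's Parametric Equaliser applies this kind of boost/cut with bell (peaking) filters. A rough stand-in in Python (my own sketch using the well-known RBJ 'audio EQ cookbook' peaking biquad, not Audition's exact implementation):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, sr, freq_hz, gain_db, q=1.0):
    """Boost or cut a band centred on freq_hz by gain_db (RBJ cookbook peaking biquad)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq_hz / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return lfilter(b, a, x)  # lfilter normalises by a[0] internally

# Matching a sample to the flat reference might look like this (illustrative values only):
# y = peaking_eq(y, sr, freq_hz=200, gain_db=-4)   # tame a boomy low end
# y = peaking_eq(y, sr, freq_hz=4000, gain_db=+3)  # open up a dull top end
```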
Exercise 2: Sound Shaping
For this exercise we are provided with a sample voice recording and are required to shape it, using the parametric equaliser and reverb/echo, so that it sounds like it is coming from the following:
- Telephone
- Closet (muffled)
- Walkie Talkie (similar to phone, limited frequencies are transmitted)
- Bathroom
- Airport Announcement
- Stadium Announcement
1. Telephone
A thin sound with reduced bass and treble and more mid-range frequencies.
Fig 2.2: Telephone Audio
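Outside Audition, the usual way to fake this (my own sketch, not part of the exercise) is to band-limit the voice to roughly the 300-3400 Hz telephone band:

```python
from scipy.signal import butter, sosfilt
import soundfile as sf

def telephone(x, sr, low_hz=300.0, high_hz=3400.0):
    """Keep only the narrow 'telephone band', discarding the bass and treble."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, x)

# Hypothetical mono voice recording:
# voice, sr = sf.read("voice.wav")
# sf.write("voice_phone.wav", telephone(voice, sr), sr)
```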
2. Within a closet
The voice will be muffled and thus have more bass and lower
treble.
3. Walkie Talkie
Walkie-talkies transmit a limited range of frequencies and sound like a more exaggerated version of a telephone. The mid-range frequencies are boosted and the others lowered to get the distorted 'crackle' effect.
4. Bathroom
As with the closet, I first adjusted the EQ so the voice sounds muffled (as if coming from within a small room). Afterwards, the reverb is adjusted: the pre-delay, decay time, and diffusion are kept fairly low since the space is small. The dry level is also higher than the wet, since a bathroom is small despite being reflective.
Fig 2.8: Bathroom audio
5. Airport
Looking at the space and texture of an airport, we know that airports are spacious, typically with high ceilings. Given this, we would use a longer decay time, a longer pre-delay, and high diffusion. Since airports usually have reflective surfaces, we also know there will be strong reflections. The EQ is adjusted so it is similar to a telephone.
Fig 2.9: Airport EQ and reverb
Fig 2.10: Airport Audio
6. Stadium
As with an airport, a stadium, being even more spacious, will have a high decay time, pre-delay, and diffusion. However, it will not be as reflective as an airport. Moreover, the wet (reverb) level will be higher than the dry (original sound) since it is a large open space.
Fig 2.12: Stadium audio
Final Outcome:
Exercise 3: Environment Soundscape
For this exercise we have to create environmental soundscapes for the
reference images provided:
1. Eco/Biotech Lab
The first picture appears to be a bio-chamber or eco-lab of sorts
set in a futuristic setting. I imagine it to be set in a dystopian
post-apocalyptic world.
Fig 3.2: Environment #1 Audio
Process:
Some Key Scene Elements:
- Giant Tree
- Machinery
- Metal Railing
- Exhaust fans/pipes
- Soldiers
- Machine Guns
- Electrical circuits
My idea for the narrative is essentially for one of the
soldiers/researchers to walk around the room passing by the
different elements visible in the scene. It'll essentially be an
unfolding of the scene for the listener:
- Soldier walks down metal walkway
- Passes by mini bio chambers
- Sits at one of the controller panels
- Tree rustles/ ventilation
My process was pretty straightforward:
- I first set the 'base' for my canvas, a natural forest-like ambience
- I then set the basic skeletal structure: the 'path' of the soldier adding footsteps, starting from the metal walkway, down the stairs, and towards the controller panel.
- Details like beeping and button sounds were added in later; some sound files were merged and trimmed
- After the scene was set, I adjusted the effects, focusing mostly on reverb
- Panning and volume control were left for last (a rough layering sketch follows this list)
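The layering step amounts to placing clips at chosen offsets on a shared timeline and summing them, which Audition's multitrack does visually. A rough sketch of the same idea (my own, with hypothetical file names, assuming mono WAVs already at the session sample rate):

```python
import numpy as np
import soundfile as sf

def place(mix, clip, sr, start_s, gain=1.0):
    """Add a clip onto the mix buffer at start_s seconds, scaled by gain."""
    start = int(start_s * sr)
    end = min(len(mix), start + len(clip))
    if end <= start:
        return  # clip starts after the timeline ends
    mix[start:end] += gain * clip[: end - start]

sr = 48000
mix = np.zeros(int(30 * sr))  # 30-second mono timeline

# Hypothetical source files, layered like the soundscape: ambience as the base,
# then footsteps, then console beeps, each at its own time and level.
ambience, _ = sf.read("forest_ambience.wav")
footsteps, _ = sf.read("metal_footsteps.wav")
beeps, _ = sf.read("console_beeps.wav")

place(mix, ambience, sr, start_s=0.0, gain=0.5)
place(mix, footsteps, sr, start_s=2.0, gain=0.8)
place(mix, beeps, sr, start_s=12.0, gain=0.4)

mix /= max(1.0, np.abs(mix).max())  # simple peak normalisation
sf.write("soundscape_mix.wav", mix, sr)
```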
2. Laser Room
The second image appears to once again be set in a futuristic
research lab (possibly underground). It looks cold and
sterile.
- Holographic screens
- Scientists
- Wet floor
- Electric circuits
- Machinery
- Laser Beams
The idea for the narrative is to follow the movement and action
of the researcher:
- Entering the lab
- Keying in data
- Machinery turns on
- Laser Beam run
The process was the same for this environment; however, I used a lot more effects, such as Parametric Equalisers, to make the multiple laser files I downloaded sound more cohesive during the laser blast.
Final Outcome:
Fig 3.9: Sound Credit
Feedback
Ex 2: Sound Shaping
- The airport voice can be less muffled.
Ex 3: Soundscape
- The expectation was to make the audio from the viewpoint shown in the pictures, but I made it from a first-person POV. Even though the outcome is different, sir was okay with the sound.
Reflection
Experience
These exercises were a refresher on, and an expansion of, some basics we learned in previous modules this semester. Headphone quality definitely matters, and it was interesting to note the difference in sound quality between studio headphones and regular ones.
Observation
I initially went into the exercises and randomly played around with the controls to figure things out as I went. That is no doubt helpful, of course, especially for understanding the interface of Audition. But by taking the extra step to understand and consider the space the sound is travelling in, and the nature of the sound itself, the whole process becomes more efficient.
Findings
I guess it's cliché and obvious of me to say that sound plays a fundamental role in storytelling. But by actually being involved in the process, I learned to appreciate the details and choices that go into creating a soundscape that conveys what we want. Some key things I learned were to always consider the direction, atmosphere, and distance of the sound, and also whose perspective we are hearing the sound from (for storytelling).