Impressionistic Soundscapes

Claude Monet, “Le Grand Canal”.

What your dreams look like

The fascinating thing about Impressionism is that it assumes that a painting is never going to be able to recreate reality as accurately as a photograph. Once you leave behind the burden of precision, the artist is free to do what art does best: express a feeling, a mood, a state of mind. Impressionism relies more on movement and light than on shape and form. The composition is open and the boundary between foreground and background is blurred.

An impressionistic painting doesn’t look like a real place but a distant memory, the impression a place leaves deep in your mind. It looks like the blurry pictures from a dream that linger in your mind just before you forget them.

That’s pretty much as far as my artistic knowledge goes, but I hope you get the idea. I was thinking that it would be cool to try to translate that approach into sound design by creating soundscapes to go along with some Impressionist paintings. But before we do that, we can’t forget that, in a way, this already happened among a very specific sub-section of sound designers, the ones that limit themselves to a narrow set of defined pitches and timbres: music composers.

What your dreams sound like

I was introduced to Impressionist music by the amazing series Young People's Concerts by Leonard Bernstein, which I really can’t recommend enough. If this is the first time you’ve heard of it, just go watch it. There is a whole show about Impressionism.

He does a better job than me at explaining it but, basically, when Impressionism is translated into music, we are trying to express the feeling, the essence of something in a subtle and seductive way. We are not explaining, we are suggesting. This often results in dreamy melodies (whole tone scales are a staple) and the use of exotic and unresolved harmonies. For the most part, composers limit themselves to traditional instruments but try to get the most from them in terms of timbre. The piano is probably the instrument of choice for Impressionism, thanks to its large range, dynamics and polyphony (pedals are heavily used).

As an example, here is what musically happened when Manuel de Falla, who was born in Cádiz like myself, moved to Paris and met the Impressionists. Maybe this is not its best-known side, but sometimes flamenco has a dreamy, exotic quality that I think is perfect for this style of music.

And here is maybe a more canonical example by Debussy. Notice how the melody is usually unresolved. Like in a dream, you don’t really know how you got there and there is no clear conclusion. This music maybe doesn’t sound that different or special to you, as these traits have been assimilated into mainstream music (think jazz), but keep in mind that at the time it was quite a contrast to the musical establishment.

An acoustic impression

If Impressionism doesn’t want to be constrained by shapes, colours or composition, maybe the most logical way to translate this idea into sound would be to forgo concepts like harmony, melody or rhythm. When you do this, only timbre is left, and since shaping timbre is kind of my job, it sounds like a perfect fit.

My first approach to an Impressionistic soundscape is simple: just create an auditory complement to the visuals, extending the world within the painting to a new sense. Let’s lay down sounds that could exist in the scene and that go well with the feeling it transmits.

First, I’m using the one that gave the style its name (and was meant as an insult), “Impression, soleil levant” by Claude Monet:

Here is a second one, using “Woman in the Bath” by Edgar Degas:

At first, I thought I would use reverb to blur sounds together, in an analogy of how painters mix colours. But I soon discovered that doesn’t work very well. For the bath painting, I wanted to express a feeling of intimacy, a sense of “costumbrismo”, which actually was one of the other features of Impressionism: portraying everyday life.

Reverb doesn’t help with this because it creates an unnatural space that doesn’t complement the painting but opposes it. Monet’s Sunrise scene uses more reverb, but only enough to match the environment we are being presented with.

One more thing was apparent: it helps to have elements in the scene that suggest motion, since most things that make a sound are moving in some way.

Here is “Effect of Snow on Petit-Montrouge” by Édouard Manet.

Since this painting was created during the Franco-Prussian War, I decided it could be cool to also tell a little story within the soundscape. I wanted to capture the peaceful calm of a snowy winter day somewhere in Paris. The calm is then broken when distant cannons are heard and the French soldier who is contemplating the scene has to go back to his post.

Finally, here is “Gare Saint Lazare” by Monet again:

I chose this one because I liked the painting from an aesthetic point of view; it has movement and life. And of course trains are a nice sound design opportunity.

Going further

After working on these four soundscapes, I realized I was mostly describing the scene and maybe transmitting some of its essence by choosing certain sounds, but not being technically impressionistic. I was basically adding a soundtrack to the painting.

Their relaxing, atmospheric quality goes well with audio that borders on being ASMR. It’s somewhat ironic that the best complement to an impressionist painting is a soundscape that does the opposite: being descriptive, detailed and realistic. Maybe it makes sense in a way. These paintings suggest instead of being explicit so there is room for audio to add to the experience.

Of course this got me thinking about what it would be like to create soundscapes for other art styles. The ones that distort reality in different ways, like Expressionism or Cubism, could probably be good candidates. Maybe something worth exploring in the future.

But can we use audio in a way that gets to the core idea of Impressionism? To do this, we would need to go more experimental and abstract. We would need to stop using descriptive sound, forget about what you can see and focus on the feeling the painting creates.

Smearing sounds

I thought about using Paulstretch since, if you play with the window size, you can blur and smear sounds together, like painters mix colours. This worked nicely as Paulstretch tends to sound very dreamy. The following soundscape was created from only one audio sample, this recording of some wind chimes:

I created different layers in Paulstretch, playing with the window size, pitch shifting and adding harmonics. I refrained from using any “real” audio. Here is “The Cliff at Étretat after the Storm” by Monet.

As you can hear, I’m getting somewhere interesting. I tried to evoke a warm summer feeling, although I’m sometimes dangerously close to the line between being dreamy and being unsettling. My first instinct to solve this was to use music tricks, like pitching layers a fifth away from each other, but I didn’t want to rely on musicality too much.

Here is another darker example using a fantastic painting, “Winter, Midnight“ by Childe Hassam:

This one was created from a music stinger. If you listen to both closely, you can tell it’s the same base sound but in a drone, dream-like state. It works well because the musical impacts are stretched, creating some movement in the soundscape and some changes in tone.

And finally the last one turned out quite creepy, maybe too much for the painting but I like the result nevertheless. I used a combination of layers from Paulstretch, using the tonal / atonal slider to remove most of the “musicality” from the sounds (which were kind of musical). Here is “Moonlight, Isle of Shoals” by Childe Hassam.

If this got you interested in learning Paulstretch, I have a blog post about it that goes deep into how it works.

Conclusions

It’s cool to work with the concept of “pure sound design” without the burden of mere description, but at times it seems to feel too close to atonal music. That last soundscape got me thinking about Ligeti and Penderecki. But is this a bad thing? Maybe it’s atonal music that is too close to “pure sound design”. Maybe they are the same thing, looked at from different perspectives.

In any case, both approaches to the creation of a painting soundscape are valid and worth pursuing, I think. Just the idea of using visual art to inspire audio work is a good way to get your creative juices flowing and tackle things in a different way.

Other than that, I was also reminded that sound is not only simple description, it also conveys feelings and can somehow capture the very essence of a place, an action or a character. That’s something to always keep in mind.

Thoughts on buying gear

Hello! Here are some ideas and tips that I think could help you make better decisions while buying audio equipment.

Think long term

I like to see any piece of gear as an investment so I try to choose products that are known for being robust and durable. There are always cheaper options out there but I don’t mind paying a higher price if I have a better guarantee that the equipment is going to last for longer and be more reliable.

In order to determine durability, good hints are a manufacturer that offers a longer guarantee period than legally required and/or a good reputation among veteran users (some detective work in audio forums is a must). It is also a good sign when a product is manufactured in Europe or the US, although this is not very frequent and doesn’t necessarily guarantee higher quality.

Buying higher end gear is particularly relevant for audio since electronic components are quite important in determining quality and life expectancy. The use of cheap plastic instead of more durable materials like metal is also commonplace and something to avoid, especially in field equipment.

Something else to think about is that durable gear is usually well known in the industry and may give clients some extra confidence to hire you before others.

On the flip side, you can’t always afford to buy higher quality equipment and sometimes you may need to opt for entry level gear. This can also happen when you need a specific thing for a gig and don’t have the time or money to find the best possible option. In those cases, well, you probably need to bite the bullet, but in general my advice would be to wait if you can. Flip more burgers and sweep more floors. Once you have enough to at least access the mid tier, go for it. In my experience, those investments will pay off. Tenfold. You need to spend money to earn money.

I bought a Tascam HD-P2 in late 2011. I chose this model because of its reputation and quality. To this day, I still use it as my main recorder for sound effects. It has also accompanied me through features films and documentaries, on snowy cold exterior days and crazy hot Seville summers. It has never failed or died during a take.

I am not saying the HD-P2 is perfect. It only offers two microphone inputs, the pre-amps are not ultra clean (but they are quite good for their price range) and the powering options are limited. Nevertheless, it served me well throughout my first years working in audio, it gave me confidence and it allowed me to get a huge return on my investment.

The mighty HDP2. Respect.

Save on the features you don’t need

I think this is key. Don’t get dazzled with fancy stuff that you are never going to use. It is important that you think about the features that you actually need and then look for the best option the market has to offer.

Hopefully, I will have the chance to record more frequently now.

Of course, in order to do that, you need to know what your needs really are, which is the tricky part. Do you prefer more channels or a higher resolution? Bigger memory or longer battery life? If you know what kind of specific work you are going to do, this is going to be easier to decide. Try to narrow your needs and priorities.

I recently bought a Sony PCM D100 because I wanted something portable to record on the go. This recorder is quite expensive (for a handheld device) and doesn’t have XLR inputs, which for me is a big issue. But the thing is, my goal is to have something really portable so I can record in situations where a big rig would be cumbersome.

So I am losing the XLR feature in exchange for great audio quality, battery life, internal memory and construction; all of them features that are essential if I’m going to use this on the go.

Avoid audio elitism

Sound is something that can be objectively measured but, nevertheless, the way we experience it is quite subjective. People apply all sorts of descriptions to audio like “silky”, “airy” or “muddy”. I’m not saying these are not useful or that they don’t describe real properties, but sometimes I think we get caught up in these terms too much.

This problem is twofold. On one hand, sometimes people are so ready to justify their purchase that they start to hear mystical properties in a piece of gear. On the other hand, sometimes we can actually really tell the difference (in terms of clarity or timbre profile) between two pieces of gear but it is so small that it’s only noticeable while soloing and/or A-B testing. If the final consumer is probably not going to tell the difference, is it really that important?

Don’t get me wrong, I still think that audio quality should be a priority, but when investing in equipment, the very expensive stuff usually gives you diminishing returns. You need to spend a lot of cash to get from the professional to the “elite” level. Maybe you don’t need to.

So yeah, choose quality but don’t go crazy. Beware of mystical claims and 20K€ cables. I honestly think that if we forced people to take blind A-B tests comparing decent gear with very high end equivalents, they would be amazed at how close they can be.

Your sound is as good as your chain’s weakest link

Before buying a new fancy microphone, maybe stop for a second and think about the small stuff. There is always something outdated or in bad condition. Maybe it would be sensible to improve on those weak areas first.

Sure, you don’t need fancy solid gold cables, but get yourself some decent ones. Another good example of this could be battery management. If your gear uses batteries of any kind, invest in good chargers. I recommend you get familiar with the stuff that video and photography folks use. Smart chargers are a great option since they have independent charging cells and programs to keep batteries healthier.

Audio cases (I like Portabrace) are also a great option to make sure your equipment is safe while traveling or on location. I bought my Tascam HDP2 with a Portabrace case and it’s really a worthy investment. Eight years later, the velcro still works like on day one.

This Powerex charger is a very nice option if you need an army of batteries for your recorder and/or wireless kits.

Balance Risk and Personality

Some people are more risk averse than others and this is something you need to take into account. In my case, I don’t feel comfortable rushing things or spending large sums of money, so I try to avoid doing those two things at once. If you are similar to me, remember that at some point you have to take the leap and it is going to feel uncomfortable. But that’s good. That’s what they mean when they say “it’s good to step out of your comfort zone”.

When I bought the Rode Blimp v1, I could not afford anything better. It’s an OK starting point, but I would not recommend it for a long term investment. Not very durable.

If, on the other hand, you tend to rush things, well, take it easy. It may help to give yourself some time to make sure you make the right decision. Sharing your situation with friends or colleagues may help too; you’d be surprised by how much better you can see things when you articulate them out loud and get feedback.

Personally, I don’t like to buy second-hand stuff because I feel like I’m taking a big risk, but if you are comfortable with that, it’s definitely an option. It helps if you can check the condition in person, and knowing the seller is ideal. If you are buying online, using sites with a reputation system is a must. Other than that, second-hand is a risk that may pay off or end in disaster. So ask yourself: how much more money am I willing to pay to get peace of mind instead?

Reviews are spooky

Any piece of equipment that is reasonably popular is going to have some scary reviews. That’s the nature of the polarized online world: people only bother giving 1 or 5 stars, so there isn’t much nuance. Having said that, reviews are still a valuable resource when used with caution.

My approach is to focus on quality rather than quantity. Sure, you can find many reviews on Amazon nowadays, but I would prefer to check audio forums or specialized stores first. You can also check reviews for a product on online stores that you are not planning to use. If you are in Europe, B&H and Sweetwater are great. If you are in the US, Thomann is a fantastic source.

Other than that, your best bet is to join and participate in forums like Gearslutz. With time, you’ll get to know people there whose opinion will probably be more valuable than a random Amazon user’s.

Limit your tools

The Sennheiser MKH 416 was my first mic and almost the only one for some time, forcing me to use it in many different ways (on location, for foley, for SFX, for VO…)

Scarcity may sound like a bad thing, but I think you can learn a lot from it. Limiting yourself to a small number of tools forces you to be creative and try new things, and of course you will master them. It’s hard to do that if you have too much stuff, so my advice would be to really make the most of what you have before buying something new.

For me, a good example of this is audio libraries. If you already have a decent amount of sounds, there is probably a lot you can do with them. Doing sci-fi or fantasy sounds, for example, will force you to experiment with what you have around in terms of recording gear and plugins and you will learn far more than if you just buy yet another library.

Figuring out: Measuring Loudness

How loud is too loud?

There are many loudness standards nowadays and many types of media and platforms, so making sure audio is at the correct level everywhere can be tricky. In this post, I’m going to talk about the history of measuring loudness and the standards that we currently use.

The analogue days

The first step to measure loudness is to define and understand the fundamental nature of the decibel. Luckily, I wrote a post last year about this very subject so you may want to check that before diving into loudness.

So, now that you are familiar with the dB, let’s think about how we can best use it to measure how loud audio signals are.

In the analogue days, reading audio levels always implied measuring voltage or power in a signal and comparing it to a reference value. When trying to determine how loud an audio signal is, we can just measure these values across time but the problem is that levels are usually changing constantly. So how do we best represent the overall level?

A possible approach would be to just measure the highest value. This method of measuring loudness is called Peak and is handy when we want to make sure we are not working with levels above the system’s capacity, which would saturate our signals. But in terms of measuring the general level of a piece of audio, this approach can be very deceiving. For example, a very quiet signal with a sudden loud transient would register as loud despite being quiet as a whole.

As you are probably thinking, a much better method would be to measure an average value across a certain time window instead of the instant reading that peak meters provide. This is usually called RMS (root mean square) metering and it is much closer to how we humans perceive loudness.
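As a quick sketch of the difference (in Python, with made-up numbers): a signal that sits at a very low level but contains one loud transient reads high on a peak meter yet low on an RMS meter.

```python
import numpy as np

# One second of a very quiet signal (0.01 amplitude) with a single loud click
sr = 48000
signal = np.full(sr, 0.01)
signal[1000] = 0.9

# Peak: just the highest absolute sample value, in dBFS
peak_db = 20 * np.log10(np.max(np.abs(signal)))

# RMS: root mean square over the whole window, in dBFS
rms_db = 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

print(f"Peak: {peak_db:.1f} dBFS")  # around -0.9 dBFS: "loud!"
print(f"RMS:  {rms_db:.1f} dBFS")   # around -39 dBFS: quiet, as we perceive it
```

The peak meter screams while the RMS value tells the truer perceptual story.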

Let’s have a look at some of the meters that were created:

Real audio signal (grey) and how a VU meter would interpret it (black).

VU (Volume Unit) meters are probably the most used meters in analogue equipment. They were designed in the 1940s to measure voltage with a response time similar to how we naturally hear. The method is surprisingly simple: the needle’s own weight slows down its movement by around 300 ms on both the attack and the release, so very sudden changes are softened. The time that the meter needs to start moving is usually called the integration time. You will also hear the term “ballistics” for these response times.

The PPM (peak programme meter) is a different type of meter that has been widely used in the UK and Scandinavia since the 1930s. Unlike the VU meter, the PPM uses very short attack integration times (around 10 ms for type II and 4 ms for type I) while using relatively long times for the release (around 1.5 seconds for a 20 dB fall). Since these integration times are very short, PPMs were often considered quasi-peak meters. The long release time helped engineers see peaks for longer and get a feel for the overall levels of a programme, since levels would fall slowly after a loud section.
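The difference between VU-style and PPM-style ballistics can be sketched as a one-pole envelope follower with separate attack and release time constants. This is a toy model with made-up test values, not either meter’s actual specification:

```python
import math

def envelope_follower(samples, sr, attack_ms, release_ms):
    """One-pole envelope follower with separate attack/release ballistics."""
    attack = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coef = attack if x > env else release  # pick ballistics per direction
        env = coef * env + (1.0 - coef) * x
        out.append(env)
    return out

sr = 48000
burst = [1.0] * 100 + [0.0] * 1000        # a 2 ms click followed by silence
vu_like = envelope_follower(burst, sr, 300, 300)    # slow attack and release
ppm_like = envelope_follower(burst, sr, 10, 1500)   # fast attack, slow release

# The slow VU-style meter barely registers the click,
# while the PPM-style meter catches it and holds it
print(max(vu_like), max(ppm_like))
```

The fast attack lets the PPM-style reading rise far higher on the same click, which is exactly why PPMs were preferred for catching peaks.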

The Dorrough Loudness Meter is also worth mentioning. It combines an RMS and a peak meter in one unit and was very common in the 90s. As we will see, combining an RMS and a peak meter in a single unit is a trend that carries on to this day.

VU meter.

PPM

The dawn of Digital Audio

As digital audio started to become the new industry standard, new ways to measure audio levels needed to be adopted. But how do we define what 0 is in the digital realm? In analogue audio, the value we assign to 0 is usually some meaningful measure that helps us avoid saturating the audio chain. These values used to be measured in volts or watts and would vary depending on the context and type of gear. For example, for studio equipment in the US, 0VU corresponds to +4 dBu (1.228 V) while Europe’s 0VU is +6 dBu (1.55 V). Consumer equipment uses -10 dBV (0.3162 V) as its 0VU. As you can see, the meaning of 0VU is very context dependent.
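The voltage figures above follow directly from the definitions of the two units (dBu is referenced to 0.775 V RMS, dBV to 1 V RMS), as a quick Python check shows:

```python
def dbu_to_volts(dbu):
    # dBu is referenced to 0.775 V RMS
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    # dBV is referenced to 1 V RMS
    return 10 ** (dbv / 20)

print(round(dbu_to_volts(4), 3))    # 1.228 -> US studio 0VU
print(round(dbu_to_volts(6), 3))    # 1.546 -> European 0VU (~1.55 V)
print(round(dbv_to_volts(-10), 4))  # 0.3162 -> consumer 0VU
```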

In the case of digital audio, 0 dB is simply defined as the loudest level that can flow through the converters before clipping, that is, before the waveform is deformed and saturation is introduced. We call this definition of the decibel dBFS (decibels relative to full scale). How digital audio levels correspond to analogue levels depends on how your converters are calibrated, but usually 0VU is equated to around -20 dBFS on studio equipment.

Fletcher-Munson curves showing frequency sensitivity for humans. How cool would it be to see the equivalent curves for other animals, like bats?


The platonic loudness standard

Since dBFS is only a scale in the digital world, we still need to find a way to measure loudness in a human friendly way within digital audio. As we have seen, this is usually accomplished by averaging audio levels across a certain time window. On the other hand, digital audio also needs precision when measuring peaks if we want to avoid saturation when converting audio between analogue and digital and vice versa.

Something else that we need to take into consideration for our standard is the fact that, as the Fletcher–Munson curves show, we are not equally sensitive to all frequencies. As you can see, we are not very sensitive to low or very high frequencies, so if we want our audio levels to be accurate, this is something that needs to be accounted for.

So, I have laid out everything that we need our loudness standard to have. Does such a thing exist?


The ITU BS.1770 standard

This document was presented by the ITU (International Telecommunication Union) in 2006 and fits all the criteria we were looking for. The ITU BS.1770 is really a collection of technologies and protocols designed to measure loudness accurately in a digital environment; a set of recommendations, we could say.

Four revisions have been released at the time of this writing plus the ITU BS.1771 which also expands on the same ideas. For simplicity, I will refer to all of these documents as simply the ITU BS.1770 or just ITU.

The loudness unit defined by the ITU is the LKFS, which stands for “Loudness, K-weighted, relative to Full Scale”. This unit combines a weighting curve (named “K”) to account for frequency sensitivity with an averaged or RMS measurement that uses a 400 ms time window. The ITU also defines a “true peak” meter as a peak meter that uses oversampling for greater accuracy.
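To make the 400 ms averaging concrete, here is a rough Python sketch of the per-block loudness formula from BS.1770 for a mono signal. I’m skipping the K-weighting pre-filter (and block overlap) for brevity, so the numbers come out slightly different from a real meter:

```python
import numpy as np

def block_loudness(signal, sr, block_s=0.4):
    """Loudness of consecutive 400 ms blocks: -0.691 + 10*log10(mean square).
    Sketch only: the K-weighting pre-filter is omitted, and real meters
    use overlapping blocks."""
    n = int(sr * block_s)
    blocks = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return [-0.691 + 10 * np.log10(np.mean(b ** 2)) for b in blocks]

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 997 * t)  # full-scale 997 Hz tone

# Mean square of a full-scale sine is 0.5, so each block reads about -3.7
print(round(block_loudness(sine, sr)[0], 2))
```

With the K-weighting filter in place, a full-scale 997 Hz sine is the standard calibration signal and reads about -3.0 LKFS.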

Once the ITU released their recommendations, each region used them as the foundation for their own standards. As the ITU released new updates, each region would incorporate some of these ideas while expanding on them. Let’s see some regional standards.


EBU R128, Time Windows & Gates

This is the standard in use in Europe and it is released by the EBU (European Broadcasting Union).

Before I continue, a clarification. The EBU names the loudness unit LUFS (Loudness Units relative to Full Scale) instead of LKFS, as the former complies better with scientific naming conventions. So if you see LUFS, keep in mind that this is pretty much the same as LKFS. You will also see LU (Loudness Units); this is simply a relative unit that is used when comparing two LUFS or two LKFS values.

In the R128 standard, four different time windows are defined, based on the ITU BS.1771 recommendation. A meter needs to have all of these plus some other features (see below) to be considered capable of operating in “EBU Mode”.

  • True-Peak: Almost instantaneous window with sub-sample accuracy.

  • Momentary: 400 ms window. Useful to get an idea of how loud a particular sound is. Plugins usually offer different scale options.

  • Short Term: 3 seconds window. Gives a good feel of how loud a particular section is.

  • Integrated or Programme: Indicates how loud the programme is over its whole length. Sometimes it’s also called “Long Term”.

Why so many different time windows? In my opinion, they are useful when working on a mix since they give you information at different levels of resolution. True-peak tells you whether you would saturate the converters, and it is good practice to always keep some headroom here. The momentary measurement is more or less similar to what VU meters would indicate and gives you information on a particular short section. I personally don’t look at the momentary meter much because any mix with a decent amount of dynamic range is going to fluctuate here quite a bit. Nevertheless, it is useful to make sure that the mix is not very far away from the target levels on some specific sections.

Short term is maybe a better tool to get a solid feel of how loud a scene is. This measurement is going to fluctuate, but not as much as the momentary value. In order to get a mix within the standards, you need to make sure the short term value is usually around the target level, but you don’t need to be super accurate with this. What I try to do is find a compromise between the level that feels right and my target level and, when in doubt, I favor what feels right.

Finally, the integrated or long term value has a time window with the size of the whole show. This is the value that is going to tell you the overall level and measuring it in a faithful way is tricky as you will see below.

So, I was mentioning “target levels”. Which levels? The EBU standard recommends audio to be at -23 LUFS ±0.5 LU (±1 LU for live programmes). We are talking here about the integrated measurement, so the level for the entire show. Additionally, the maximum true peak value allowed is -1 dBTP. And that would be pretty much it, although there is one more issue as I was saying. Measuring levels throughout a long length of time in a consistent way comes with some challenges.

This is because there is usually a main element that we want to make sure is always easy to hear (usually dialogue or narration) and, since audio volume is logarithmic, that main element pretty much carries 90% of the show’s loudness weight. So we would naturally mix this element to already be at the desired loudness or slightly below. The problem comes when considering all the other elements around the dialogue. If there are too many quiet moments, that is going to make our integrated levels quite low, since everything is averaged.

The solution would be to either push the level of the whole show or re-mix the level of the dialogue louder so the integrated value is correct. Either way that would probably make the dialogue too loud and we would also risk saturating the peak meter. Not ideal.

Nugen’s VisLM plugin operating in EBU mode. You can see all the common EBU features including all time windows, loudness range and a gate indicator.

In order to fix this, R128 uses the recommendations from the revised ITU BS.1770-3. Integrated loudness is calculated using a relative gate that effectively pauses the measurement when levels drop below a threshold of -10 LU relative to an un-gated measurement. There is also an absolute gate at -70 LUFS; nothing below this value is considered for the measurement. These gates help us get a more meaningful result, since only the relevant audio in the foreground is considered when measuring integrated loudness.
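The two-stage gate can be sketched over a list of per-block loudness values. In a real meter, these would be K-weighted 400 ms blocks computed from the audio; the numbers here are made up to mimic dialogue at -23 LUFS surrounded by long quiet ambience stretches:

```python
import numpy as np

def integrated_loudness(block_loudness):
    """Two-stage gated integrated loudness (BS.1770-3 style sketch).
    `block_loudness` is a list of per-block values in LUFS."""
    blocks = np.asarray(block_loudness, dtype=float)

    def mean_loudness(vals):
        # Average in the power domain, then convert back to LUFS
        return -0.691 + 10 * np.log10(np.mean(10 ** ((vals + 0.691) / 10)))

    # Stage 1: absolute gate at -70 LUFS
    blocks = blocks[blocks > -70.0]
    # Stage 2: relative gate, 10 LU below the average of the remaining blocks
    threshold = mean_loudness(blocks) - 10.0
    return mean_loudness(blocks[blocks > threshold])

# Dialogue-level blocks plus an equal amount of quiet ambience
blocks = [-23.0] * 50 + [-45.0] * 50
print(round(integrated_loudness(blocks), 1))  # -23.0
```

Without the gates, the quiet ambience blocks would drag the average down to around -26 LUFS; the gated measurement stays at the dialogue level, which is exactly the behaviour the standard is after.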

The last concept I wanted to mention is loudness range or LRA. This is measured in LU and indicates how much the overall levels change throughout the programme, in a macroscopic view. You can think of this as an indication of the dynamic range of your mix: low values would indicate that the mix has a very constant level while higher values would appear when there is a larger difference between quiet and loud moments. The EBU doesn’t recommend any given target value for the loudness range since this would depend on the nature of the show but it is for sure a nice tool to have to get an idea of your overall mix dynamics.


ATSC A/85

This is the standard used in the US and is released by the ATSC (Advanced Television Systems Committee). It uses LKFS units (remember that LKFS and LUFS are virtually equivalent) and similar time windows to the European ones. The recommended integrated value is -24 LKFS while the maximum peak value allowed is -2 dBTP.

When the first version was released in 2009, this standard recommended a different method for calculating the integrated value. As you know, the EBU system uses a relative gate in order to only consider foreground audio for its measurements, but the ATSC took a different approach. Remember when I said before that mixes usually have some main element (often dialogue) that forms the center of the mix?

The ATSC called this main element an “anchor”. Since dialogue is usually this anchor, the system used an algorithm to detect speech and would only consider that to calculate the integrated level. I’ve done some tests with both Waves WLM and Nugen VisLM and the algorithm works pretty well; the integrated value doesn’t even budge when you are just monitoring non-dialogue content, although singing usually confuses it.

In fact, in the 2011 update, the ATSC standard started differentiating between regular programmes and commercials. Dialogue based gating would be used for the former, while all elements in the entire mix would be considered for the latter. This was actually one of the main goals of the ITU standard initially: to avoid commercials being excessively loud in comparison to the programmes themselves.

Nevertheless, the ATSC updated the standard again in 2013 to follow the ITU BS.1770-3 directives and, from then on, all content would be measured using the same two-gate method Europe uses. Because of this, I was tempted to just avoid mentioning all this ATSC history mess, but I thought it was important to explain it, so you can understand why some loudness plugins offer so many different ATSC options.

Here you can see the ATSC options in WLM. The first two are pre-2013, using either dialogue detection or the whole mix to calculate the integrated level. The third, called “2013”, uses the gated method à la Europe.

TV Regional and National Standards

Now that we have a good idea of all the different characteristics standards use, let’s see how they compare.

Country / Region | Standard | Units Used | Integrated Level | True Peak | Weighting | Integrated level method
Europe | EBU R128 | LUFS | -23 LUFS | -1 dBTP | K | Relative Gate
US | ATSC A/85 post 2013 | LKFS | -24 LKFS | -2 dBTP | K | Relative Gate
US | ATSC A/85 pre 2013 (Commercials) | LKFS | -24 LKFS | -2 dBTP | K | All elements are considered
US | ATSC A/85 pre 2013 (Programmes) | LKFS | -24 LKFS | -2 dBTP | K | Dialogue Detection
Japan | TR-B32 | LUFS | -24 LUFS | -2 dBTP | K | Relative Gate
Australia | OP-59 | LKFS | -24 LKFS | -2 dBTP | K | Relative Gate

As you can see, currently, there are only small differences between them.

Loudness for Digital Platforms

I have tried to find the specifications for some of the most used digital platforms, but I was only able to find the latest Netflix specs. Hulu, Amazon and HBO don’t specify their requirements, or at least not publicly. If you need to deliver a mix to these platforms, make sure they send you their desired specs. In any case, using the latest EBU or ATSC recommendations is probably a good starting point.

In the case of Netflix, their specs are very curious. They ask for an integrated level of -27 LKFS and a maximum true peak of -2 dBTP. The method for measuring the integrated level is dialogue detection, like the ATSC used to recommend, which is in a way a step back. Why would Netflix recommend this if the ATSC spec moved on to gate-based measurements? Netflix basically says that with the gated method, mixes with a large dynamic range tend to leave dialogue too low, so they propose a return to the dialogue detection algorithm.

The thing is, this algorithm is old and can be inaccurate, so this decision was controversial. A newer, more robust algorithm could be a possible solution for these high-dynamic-range mixes. Also, -27 LKFS may sound too low, but it wasn’t chosen arbitrarily: it is based on the fact that that was the level where dialogue would usually end up in these mixes. If you want to know more about this, you can check this, this and this article.

Loudness for Theatrical Releases

The case of cinema is very different from broadcast for a very simple reason: you can expect a certain homogeneity in the reproduction systems that you won’t find in home setups. For this reason there is no hard loudness standard that you have to follow.

Dolby Scale | SPL (dBC)
7 | 85
6.5 | 83.33
6 | 81.66
5.5 | 80
5 | 78.33
4.5 | 76.66
4 | 75
3.5 | 65

This lack of a general standard has resulted in a loudness war similar to the one in the music mixing world. The results are lower dynamic ranges and many complaints about cinemas being too loud. Shouldn’t cinema mixes offer a bigger dynamic range experience than TV? How are these levels determined?

Cinema screens have a Dolby box where the projectionist sets the general level. These levels are determined by the Dolby scale and correspond to SPL measurements under a C-weighting curve when using the “Dolby noise”. Remember that in the broadcast world the K curve is used instead, which doesn’t help when trying to translate between the two.
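Between 4 and 7, the table above maps linearly at about 3.33 dB per fader point, anchored at 7 = 85 dBC. A quick sketch of that conversion (only valid in that linear region; the 3.5 row shows the attenuation gets steeper below 4):

```python
def dolby_fader_to_spl(fader):
    """Approximate mapping from the Dolby fader scale to SPL (dBC),
    assuming the linear 3.33 dB-per-point relationship seen in the
    table for faders 4 to 7, anchored at fader 7 = 85 dBC."""
    return 85.0 - (7.0 - fader) * (10.0 / 3.0)
```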

Nowadays, more and more cinemas are automated, which means that levels are set via software or even remotely. At first, all cinemas used level 7, which is the one recommended by Dolby, but as movies got louder and people complained, projectionists started to use lower levels; 6, 5 and even 4.5 are used regularly. In turn, mixers started to work at those levels too, which resulted in louder mixes overall in order to get the same feel. This, again, made cinemas lower their levels even more.

You see where this is going. To give you an idea, Eelco Grimm, together with Michel Schöpping, analysed 24 movies available in Dutch cinemas and found levels that varied wildly. The integrated level went from -38 LUFS to -20 LUFS, the maximum short-term level varied from -29 LUFS to -8 LUFS, and the maximum true-peak level varied from -7 to +3.5 dBTP. Dialogue levels varied from -41 to -25 LUFS. That’s quite a big difference; imagine if that were the case in broadcast.

The thing is that despite these numbers being very different, we have to remember that all these movies were probably played at different levels on the Dolby scale. Eelco says in his analysis:

  • The average playback level for movies mastered at '7' is -28 LUFS (-29 to -25).

  • The average playback level for movies mastered at '6.3' is -23 LUFS (-25 to -21). They are projected 3 dB softer, so if we corrected the average to a '7' level, it would be -26 LUFS.

  • The average playback level for movies mastered at '5' is -20 LUFS (all were -20). They are projected 7 dB softer, so the corrected average would be -27 LUFS.

So, as you can see, in the end the dialogue level is equivalent to about -27 LUFS in all cases; the only difference is that the movies mixed at 7 (which is the recommended level) would have a greater dynamic range, something important to give the cinematic feel that TV can’t provide. The situation is quite unstable and I hope a solid solution based on the ITU recommendations is implemented at some point. If you want to know more about this whole issue and read the paper that Eelco Grimm released, check this comprehensive article.
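The correction in the bullet points above is just a subtraction: remove however many dB softer the movie was projected, and the measured level becomes comparable to a '7' playback. Using the offsets quoted in the analysis:

```python
def corrected_level(measured_lufs, playback_offset_db):
    """Normalize a measured integrated level to the reference '7'
    setting by removing how much softer the movie was projected,
    following the correction Eelco Grimm applies above."""
    return measured_lufs - playback_offset_db

# Figures from the analysis: masters at '6.3' were played 3 dB softer,
# masters at '5' were played 7 dB softer.
print(corrected_level(-23, 3))   # -26 LUFS
print(corrected_level(-20, 7))   # -27 LUFS
```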

Loudness standards for video games

Video games are everywhere: consoles, computers, phones, tablets, etc., so there is no clear standard to use. Having said that, some companies have established some guidelines. Sony, through their ASWG-R001 document, recommends the following:

  • -23 LUFS and -1 dBTP for PlayStation 3 and 4 games.

  • -18 LUFS and -1 dBTP for PS Vita games.

  • The maximum loudness range recommended is 20 LU.

But how do you measure the integrated loudness of a game? Integrated loudness was designed for linear media, so Sony’s document recommends making measurements in 30-minute sessions that are a good representation of different sections of the game.

So, despite games being so diverse in platforms and contexts, using the EBU recommendations for consoles and PC (-23 LUFS) and a louder spec for mobile and portable games (-18 LUFS) would be a great starting point.

Conclusions and some plugins

I hope you now have a solid foundation on the subject. Things will keep changing, so if you read this in the future, assume some of this information is outdated. Nevertheless, you will hopefully have learned the concepts you need to work with loudness now and in the future.

If you want to measure loudness, many DAWs (including Pro Tools) don’t have a built-in meter that can measure LUFS/LKFS, but there are plugins to solve this. I recommend that you try both Waves WLM and Nugen VisLM. If you can’t afford a loudness plugin, you can try Youlean, which has a free version and is a great one to start with.

Thanks for reading!

Exploring Sound Design Tools: Sound Particles

Sound Particles allows you to create soundscapes and sound design using virtual particles that can be associated with audio files. The results are then rendered using virtual microphones.

If you want to check it out or follow this review along, you can download the demo here. It has all the features of the paid version but is limited to non-commercial projects.

I won’t explain how to use the software in depth, but I will give an overview and show some practical uses for everyday sound design work. If you want a more in-depth explanation, you can also watch this tutorial.

Sound Particles interface. Nice, clean and responsive.

Features Overview

At the heart of the program are the particles. You can create them in three different ways:

  • A Particle Group will create any number of particles at the same time in an area or shape of your choice.

  • A Particle Emitter creates particles over time at a particular rate.

  • A single point source is just a single particle.

By default, particles are created as soon as you hit play, although you can also change the start time to delay their creation. Generally, they last as long as the audio file attached to them.

You can choose the coordinates used to create your particles and also move the individual particles around the scene to create different effects. Particle emitters can also be moved. The movements that you can apply to the particles stack with each other, giving you an amazing amount of options to create motion. Keyframes can also be used to match any movement to a reference video.

See the video below for an example with the three types of particles:

So in the video you can see:

  • A particle group (red) that generates particles in a square-shaped area. These particles are not created at the same time because we have also applied a random delay. They have firework sounds attached.

  • A particle emitter (orange) moving in a circular motion, while the particles it creates also have some small random movement. They have magical sounds attached.

  • A single point source (pink) with my voice paulstretched to infinity.

You can also apply audio modifiers to each particle group. These randomize certain parameters so you obtain more interesting and varied results. If you think about it, this is similar to how audio works in the real world. Each time you take a step, your shoe makes a slightly different sound: pitch, level and timing will all be different. Sound Particles lets you randomize the audio from each particle in a similar way. The audio modifiers are:

  • Gain: Basically, audio level.

  • Delay: This determines when the particle is created. It is very useful because usually you don’t want all the particles in a group to be created at the start. In the example above, the red particles are created with a random delay.

  • EQ: It applies different filters and bands of EQ to each particle so they don’t sound exactly the same.

  • Granular: This is kind of a special modifier. It slices the audio file and then plays each slice from a given particle. You can control how long the slices are or even leave that random. You can also control whether the slices are played in sequence or in a random order.

  • Pitch: It applies a different pitch shifting value to each particle.
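The granular modifier above can be pictured as cutting a buffer into slices and reordering them. Here is a toy illustration of that idea (my own sketch, not Sound Particles’ actual code):

```python
import random

def granular_slices(samples, slice_len, shuffle=True, seed=None):
    """Toy granular modifier: cut an audio buffer into fixed-length
    slices, then play them back in their original order or shuffled."""
    slices = [samples[i:i + slice_len]
              for i in range(0, len(samples), slice_len)]
    if shuffle:
        rng = random.Random(seed)
        rng.shuffle(slices)
    return [s for sl in slices for s in sl]  # flatten back to one stream
```

With shuffling off you get the original audio back; with it on, the same material plays in a scrambled order, which is roughly how the random-order option behaves.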

For any parameter that requires randomization, you can choose different probability distributions to get the result you want. A uniform distribution (all values have the same weight) and a normal distribution (most values will be around the mean) are probably the most useful ones. You can even create a custom distribution, which is pretty awesome.
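The difference between the two distributions is easy to demonstrate. This hypothetical helper draws per-particle pitch offsets either way (the function name and ranges are mine, just for illustration):

```python
import random

def random_pitch_offsets(n, dist="uniform", spread=2.0, seed=None):
    """Per-particle pitch offsets in semitones under the two
    distributions mentioned above: every value equally likely
    (uniform) or values clustered around zero (normal)."""
    rng = random.Random(seed)
    if dist == "uniform":
        return [rng.uniform(-spread, spread) for _ in range(n)]
    if dist == "normal":
        return [rng.gauss(0.0, spread / 2) for _ in range(n)]
    raise ValueError(dist)
```

Uniform offsets give an evenly smeared detuning; normal offsets keep most particles close to the original pitch with a few outliers.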

Uniform Distribution

Normal Distribution

Of course, once you have the particles ready, you need a virtual microphone to capture the result. In this area, the number of options is simply amazing. Not only can you place the microphone anywhere in the scene, but you can choose between many configurations, including M/S, X/Y and all sorts of surround and ambisonic setups.

If that wasn’t enough, you can also create several microphones in the same scene and render different stems per microphone. These stems can contain different combinations of particles so you have more control later in the mix.

Finally, the project settings page allows you to control how Sound Particles manages sound propagation and distance attenuation. You can change the speed of sound, simulate the delay of far-away sounds, change how much sounds attenuate with distance, or choose whether your scene uses the Doppler effect.
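The two distance effects are simple physics, and a small sketch makes them tangible. This assumes the textbook inverse-distance law (about -6 dB per doubling of distance), which may differ from the exact attenuation curves the software uses:

```python
import math

def propagation(distance_m, speed_of_sound=343.0, ref_m=1.0):
    """Sketch of the two distance effects the project settings control:
    arrival delay (distance divided by the speed of sound) and level
    drop relative to a reference distance (inverse-distance law)."""
    delay_s = distance_m / speed_of_sound
    gain_db = -20.0 * math.log10(max(distance_m, ref_m) / ref_m)
    return delay_s, gain_db
```

A source 343 m away arrives a full second late and roughly 50 dB quieter than at 1 m, which is why simulating the delay of far-away sounds matters for large scenes.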

Microphone configurations can follow a variety of speaker setups

Project Settings

Sound design examples

Enough with the theory, let’s hear some real applications. Since Sound Particles is much easier to understand when you see the particles in motion, I decided to create a video for every example instead of just audio.

Battlefield soundscape

This is very simple but could be very useful if you need to create a soundscape and don’t want to move every single sound into place by hand. As you can see, it is very easy and quick to create a randomized soundscape. Something I miss here is a bit more control over which sounds are triggered. When you have different types of sounds, it would be nice to be able to trigger some of them only occasionally, in the same way you can in FMOD or Wwise.

It would also be helpful to be able to eliminate a particular particle that moves too close to the mic, or at least to prevent particles from getting too close without using complex custom distributions.

Scifi Interface

Now let’s imagine we are building a somewhat cheesy ’80s computer interface with beeps and bloops and some folders flying around the screen.

As you can see, we are using two particle systems at the same time. One of them (blue) creates all the beeps in a circle around the listener, while the orange one is a particle emitter that throws particles horizontally to simulate things flying by.

Playing with pitch

Let’s explore how we can use the pitch randomization feature to create new, complex sounds from simple ones. In this example, I first use a uniform distribution for a more detuned and unsettling effect. We can also use a discrete distribution so the jumps in pitch stay strictly within certain semitones, obtaining a more musical result.

As you can see, just changing the distribution can produce very different results.

We can also automate pitch to create dynamic effects, for example making all the frequencies converge on a central one. The THX Deep Note was achieved with a similar method.
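The convergence idea can be sketched as a set of frequency trajectories, each voice gliding from its own starting pitch toward a shared target (a simple linear glide; the Deep Note used a related but more elaborate scheme):

```python
def converging_frequencies(start_freqs, target, steps):
    """Toy glissando: interpolate each voice's frequency from its own
    start toward one shared target, the kind of convergence described
    above."""
    paths = []
    for f0 in start_freqs:
        paths.append([f0 + (target - f0) * i / (steps - 1)
                      for i in range(steps)])
    return paths
```

Feeding each path to an oscillator would give you a cluster of detuned voices collapsing into a single pitch.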

Granular synthesis

This modifier offers many sound design possibilities. You can see an example below of building some sort of alien speech sound step by step.

We can also obtain a “voices in my head” effect by slicing up some speech and distributing it around the listener. As you can see, we can always re-create the particles to obtain new variations, which is very handy for video game work.

Doppler Effect

There are many plugins that recreate a Doppler effect, but this one certainly offers a unique visual approach. As you can see below, we can create a Doppler effect on a single particle or on many.
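For reference, the underlying physics is the classic Doppler formula for a moving source and a still listener, f' = f · c / (c - v):

```python
def doppler_shift(freq_hz, source_speed_ms, speed_of_sound=343.0):
    """Classic Doppler formula for a source moving straight toward
    (positive speed) or away from (negative speed) a still listener:
    f' = f * c / (c - v)."""
    return freq_hz * speed_of_sound / (speed_of_sound - source_speed_ms)
```

A 440 Hz source approaching at 30 m/s is heard noticeably sharp, then drops flat as it passes and recedes, which is the familiar fly-by sweep.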

Conclusion

I hope you found this software interesting. I think it is a very good tool to have in your arsenal, and I feel I have barely scratched the surface of the sonic possibilities it offers. I believe there is an update coming soon for Sound Particles, so I may have another look then and write a new post covering the new features.

You can also have a look at a couple of plugins that Nuno Fonseca, Sound Particles’ creator, has released. They give you the Doppler and air absorption simulations that Sound Particles has, but in a convenient plugin that you can use in your DAW.

Interview on La Bobina Sonora

I have been interviewed on the site “La Bobina Sonora”, which is dedicated to the Spanish and Latin American audio community. I thought it would be interesting to translate the interview into English in case you want to have a look. There are some insights into my career history, the way I approach sound design and mixing, and the projects I was working on at the time (October 2018). So, here we go!

LA BOBINA SONORA: Before starting with the interview, I just wanted to thank you for your presence here at labobinasonora.net.

JAVIER ZUMER: Thank you for the invitation. I’ve been reading the blog for years and I’m happy to be able to contribute.

LBS: You are currently based in Ireland, where you do most of your work. It’s interesting to ask: what are the main differences in the audio industry between Ireland and Spain?

JAVIER: The main difference is that Ireland enjoys a better economic situation. This brings more stability and specialization to the profession.

Having said that, Ireland is an interesting example because it shares some similarities with Spain. Both countries went under during the economic crisis (both with a property bubble). Also, both live in the shadow of other countries, like the UK, France or the US, since these have a more mature and established industry.

LBS: How are audio professionals treated by the Irish industry? Do any kind of associations or unions exist?

JAVIER: Personally, my experience has been positive. Maybe sound doesn’t get as much love and attention as other departments (that’s kind of universal, since we are visual creatures), but in my environment I usually have the time and resources needed to get the job done.

As for associations, I am not aware of any, but if they do exist they are probably based in Dublin, since the industry is mostly located there. (I’m currently in Galway.)

LBS: Those who work in this amazing profession usually share an appreciation for cinema, music and even other arts. What were the main reasons you ended up building sonic worlds? Maybe your experience in music production brought you there?

JAVIER: Like many other people, the thing that made me consider and appreciate sound was music. Reason was the first audio software I used in depth, and that was when I dropped out of college to study audio.

I still think Reason is a very unique starting point since its design imitates real hardware and it gave me my first notions of how the audio signal flows.

Later, I started to be more interested in audio for cinema and games. I think they offer a great balance of artistic and technical challenges.


LBS: At the start of your career you were getting some experience with music recording and mixing at Mundo Sinfónico. How do you think this time helped you in your career?

JAVIER: Mundo Sinfónico was my first professional audio experience. Héctor Pérez, who owns the place, was kind enough to let me join some projects during recording and mixing.

During that time I learned a lot about using microphones, Pro Tools and other software. It was pretty much like discovering how all these things are used in the real world, in real applications. It was also around this time that I started to learn how to face a mix.


LBS: So, how were your first steps as a sound designer?

JAVIER: At some point, I knew I needed to invest in my own gear in order to work on projects, and I had to make a decision: I could either invest in music recording or in location audio gear. I decided to go for the latter, since building a studio would lock me into a specific location but I could do location audio anywhere. Also, by that point, audio for cinema interested me as much as music production.

With this gear I did many, many short films, some documentaries and TV work. Naturally, I would also work on the audio post for some of these projects, and this was how I got into sound design and mixing.

LBS: Is there any specific moment in time when you feel you made a big leap forward on your career?

JAVIER: Maybe the way I got my current job. At that time, I was living in Galway, which is quite far away from Dublin (impossible to commute). Since the industry is really all in Dublin, this was an issue if I wanted to get work, but in those days I was just working on freelance projects here and there.

One day, I decided that it would be cool to find people in my city interested in going out to record sound effects. I sent some emails to local audio folks, and one of them was Ciarán Ó Tuairisc, the head of sound at Telegael, a company that was super close, like a five-minute drive from my place.

I went there to meet him and see the place, and he gave me some episodes so I could do a sound design test. Some days later, I came back with the results and was offered a job there. I was expecting, at best, that they would consider me for freelance work, but the whole thing was kind of a job interview where I was successful with no need for a CV or a tie.

LBS: What are your main goals when facing a sound design project? Which of them are essential to your workflow?

JAVIER: When doing sound design I like to first do a basic coverage pass: just have a sound for every obvious thing without spending much time on each. Once this is done, the real job begins when you start thinking about how the sounds you already have work together and which ones are important enough to spend more time and thought on.

LBS: When crafting a sonic world, which are the processes (artistic or technical) that deserve the most attention and detail?

JAVIER: The elements that drive the story forward definitely deserve the most attention. It is also very important to give detail to any element that helps with world-building.

If the story takes place in a special location, or there is a relevant object, it is important to think about how these should sound. Of course, ideally this should work on a subconscious level for the viewer.

LBS: Talking now about all the different processes that build a sonic world (dialogue editing, ambients/fx, foley, mixing…), which is the hardest for you and which one do you enjoy the most?

JAVIER: Foley is probably where I am the least comfortable. It is a true art that requires experience, coordination and sensitivity to get right. I don’t have a lot of experience doing it, and I am not into the physical part of the job, although I know that appeals to other people.

The process I enjoy the most is mixing since this is when all elements come together to create a cohesive whole that moves towards the same artistic direction.

LBS: Do you usually think about mixing when doing sound design? Do you use sub-mixes or pre-mixes on certain elements? Or do you prefer to start the mix completely from scratch?

JAVIER: It depends on the situation. When I’m just doing sound design, I try to give the mixer as much control and as many options as possible, so I don’t usually do sub-mixes, although sometimes they make sense.

If I’m mixing as well as doing sound design, I tend to pre-mix things as I go and even apply some EQ or compression here and there on elements that I know are going to need it. For this, clip effects in Pro Tools are great.

LBS: Talking about something as omnipresent and unavoidable as technology, what gear do you usually use for editing, sound design and mixing?

JAVIER: I use a Pro Tools Ultimate rig with an S6 M10 desk. In terms of software, I use the usual stuff; most of my plugins are either from Avid or from Waves. For dialogue editing, iZotope RX is a must.

LBS: What was the latest technological discovery that improved your workflow the most?

JAVIER: Probably Soundly, although that wasn’t that recent. It is library management software that maybe doesn’t offer as many features as Soundminer, but I think it is a great option. It is more affordable (in the short term) and also offers online libraries that are kept updated and growing. It offers more than enough metadata capabilities and good integration with Pro Tools.


LBS: A big portion of your work is focused on an area that is maybe a little unknown to some of us but very important and clearly rising in relevance. How did you get into video game sound design?

JAVIER: I grew up playing games, and this was always an area that interested me once I got into sound design.

One day I saw an ad for a crowdfunding campaign for a Spanish game, Unepic. They were looking for some money to record voice acting, and I emailed them asking whether they would also be interested in some help with sound design. I really had no idea how this kind of work would go, and surprisingly they were interested and we started to work together.

Six years later, Unepic has sold more than half a million copies across consoles and PC, and was the first Spanish indie game to get onto Steam. It was a project that taught me a lot, and I have kept working with its developer, Francisco Téllez de Meneses, and many others since.

LBS: What are the main differences between working on video game sound design and just working on traditional media?

JAVIER: The main difference is that traditional media is linear. Once you finish a mix, it is going to be the same for all viewers; the only differentiating factor is the reproduction system, but the mix itself will be the same forever.

Video games, on the other hand, are interactive, so there is no mix in the traditional sense. You just give the game engine every audio asset needed and the rules that govern how these sounds are played. The mix is then created in real time as the player interacts with the world of the game.

The real power in video game sound design comes from the fact that you can connect audio tools to parameters and states within the game world. For example, imagine that the music and dialogue are connected to a low-pass filter, a reverb and a delay, and they change as your health gets lower. Or a game where you build weapons that wear out as you use them, so their foley and FX become darker (via an EQ) and more distorted in the process.
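A parameter link like the health-driven filter can be sketched as a simple mapping function. Everything here (the name, the cutoff range) is made up for illustration; in practice middleware like FMOD or Wwise lets you draw this curve instead of coding it:

```python
def health_to_cutoff(health, min_hz=500.0, max_hz=20000.0):
    """Hypothetical game-parameter link like the one described above:
    as health drops from 1.0 to 0.0, a low-pass cutoff slides down and
    the mix gets darker. All names and ranges here are invented."""
    health = min(max(health, 0.0), 1.0)        # clamp to [0, 1]
    # interpolate on a log scale so the sweep feels even to the ear
    return min_hz * (max_hz / min_hz) ** health
```

At full health the filter is wide open at 20 kHz; near death it closes down to 500 Hz and everything sounds muffled.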

I have an article on my blog with more information for anyone who wants to start doing video game sound design.

LBS: Let’s talk about your work on field and SFX recording. We can find some interesting libraries made by you on your website, some of them dedicated to something you call “audio explorations”.

How important is field and SFX recording for you?

JAVIER: It’s something I consider very important, because once you have access to the big libraries the industry uses, you realize that many sounds are overused. Once you start to hear them, they are everywhere!

So, I think it is important to bring a more unique and personal approach to sound design. Also, when you record and create your own sound effects, you force yourself to be more adventurous and to experiment with techniques and ideas.

LBS: How do you usually plan a field recording session? Are they done within the context of a larger project or do you plan free sessions just to experiment and play around?

JAVIER: This is something I’ve been thinking about for a long time. On one hand, when something specific is needed, I just go out and get it. But over time I have realized that in those cases it is not very convenient to explore and record interesting stuff, since you have deadlines and many other things to work on.

As a solution, I’ve been going on what I call “explorations”. I just pick a technique, prop, place or piece of software, and I try to create interesting stuff while learning how it works. I’ve been blogging about them and also releasing free mini-libraries with the results.


LBS: Any particular piece of advice to keep in mind when doing field recording?

JAVIER: At the beginning of every take, always explain what you are doing with your own voice. Take videos and pictures if you can. I guarantee you won’t remember everything you were doing later, when you are editing.

LBS: What kind of gear (recorder, microphones…) and techniques do you usually use when doing field recording?

JAVIER: Nothing too special or obscure. I use a Tascam HD-P2 that works great after seven years of use and can record at 192 kHz, although it only has two pre-amps, so sometimes I need other recorders as reinforcement. The microphones I use are a 416, an Oktava 012, a Rode NT4, an SM57, a Sanken COS-11D and some more exotic mics from JRF (a hydrophone, a contact mic and a coil pickup).


LBS: Which project would you consider a highlight of your career in terms of technical or artistic merit?

JAVIER: Recently, I worked on the sound design and a good portion of the mix for a documentary series about the lighthouses of Ireland that premiered on RTÉ (the Irish BBC).

It was a very interesting project with beautiful helicopter footage. I needed to recreate the audio for 200 minutes of aerial shots, so loads of waves, wind, storms, seagulls and things like that. I tried to give each location and lighthouse its own personality and sound. Some of them are truly astonishing masterpieces of engineering, while others sit in amazing natural locations.

In summary, one of the most beautiful projects I have had the chance to work on.


LBS: Is there any cool anecdote in your almost decade as a professional that you would like to share?

JAVIER: While I was trying to remember an anecdote, I thought I could share something that happens to me from time to time; I wonder if other people experience it too.

Sometimes, when I’m looking for a particular sound, I bring in some audio just by chance, or even by mistake, and it works great just like that. I guess that when you spend many hours editing audio, these things are going to happen from time to time, but it always feels like you were touched by the gods of sound design for a moment.

LBS: Are there any projects in your near future?

JAVIER: I’m about to get immersed in Drop Dead Weird, a live-action comedy about three Australian teenagers who move to Ireland and whose parents turn into zombies. I am mixing the show, which is a co-production between Channel 7 (Australia) and RTÉ (Ireland).

It’s a cool, crazy project with a lot of action and sound design, and many people in each scene, which is always a challenge in terms of dialogue editing.


LBS: To wrap things up, any advice for someone who is mad enough to be interested in this beautiful profession?

JAVIER: When I look back at my career, there is a pattern that repeats itself: I was able to make a leap forward when I was in the right place at the right time. The problem is that you never know when and where this is going to happen; for each of these moments of success, I’ve had many more that were just unfruitful.

So the best way to go is to be persistent and throw as many seeds into the air as possible, while always improving as a professional. Something will bloom.

LBS: Thanks again for your time, Javier. Best of luck on your future projects which we will keep an eye on here on labobinasonora.net.

JAVIER: Thank you, Óscar, for having me. My pleasure.