Exploring Sound Design Tools: Igniter

Igniter is Krotos’ new engine sound design plugin. They were kind enough to send me a license so I could take a look and see what it can offer. Igniter lets you virtualize vehicle engines (real, sci-fi or fantastical) by combining a granular section, a set of synthesizers and two sample managers. It includes performance controls so you can automate the vehicle’s RPM, engine load and many FX (including Doppler) to get a realistic-sounding engine. It also comes with a large number of presets, including sports and utility cars, planes, helicopters, trucks, motorbikes and sci-fi vehicles.

So here is my in-depth look at the plugin’s features, with some examples here and there. I encourage you to follow along in your own DAW; you can find a full-featured demo here.

Interface / UI

The interface is clean and easy to read, and you can resize the window, which is very nice. A main section (left side) occupies most of the screen and includes all the audio sources we can use. These sources are divided into four tabs: Granular, Synth, One Shot and Loop, plus a file browser.

On the right-hand side we find the engine master on/off switch and, in the middle, the main revs knob, which acts as a gas pedal for the whole plugin. At the top we find the Mod system, where Igniter’s true power resides, since it allows you to dynamically link any parameter within the plugin to the revs knob using envelopes and LFOs. Lastly, at the bottom right, we find the FX and mixer sections. Let’s look at all of these in more detail.

If you need more info, most features are well covered in the manual and in Krotos’ videos. What follows is my own take on the plugin’s capabilities, plus some wish-list features that I would love to see in the future.

Granular Section

This is probably the most complex and important generator. It combines granular synthesis with real recordings to re-create a virtual engine with a revolutions (RPM) knob that you can “drive”. Each vehicle includes two mic perspectives, engine and exhaust, and we can easily mix between them with a slider.

When I saw this, it occurred to me that it would have been nice to also include an interior perspective, as this would be very useful for vehicle scenes like chases. After some looking around, I discovered that all vehicles have an “In-car” preset which solves the problem. This is not a true recording of the car interior, though, but a recreation of it using EQ and convolution reverb. Would this sound too different or unauthentic compared to a true interior recording? To be honest, I don’t know, since I don’t have a huge amount of experience doing car sound design, but I suspect these presets will work well for most applications and, of course, you can always tweak them to suit your needs or even do your own “in-car” processing outside of Igniter.

In terms of how the granular engine actually works, we can’t see what’s going on under the hood (see what I did there?), but I assume the plugin uses recordings at different steady RPMs and blends them together as you act on the engine. This is similar to the approach used in middleware like FMOD for video games. The result is pretty natural and smooth, and driving the RPM feels responsive and clean.
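My guess above can be sketched as a tiny crossfade model: given recordings captured at a few steady RPMs (the RPM values here are made up for illustration), pick the two loops that bracket the target RPM and blend them with equal-power gains. This is an assumption about how such engines generally work, not Krotos’ actual implementation:

```python
import bisect
import math

def crossfade_gains(target_rpm, loop_rpms):
    """Return {loop_rpm: gain} for an equal-power crossfade between the
    two steady-RPM recordings that bracket the target RPM."""
    rpms = sorted(loop_rpms)
    if target_rpm <= rpms[0]:
        return {rpms[0]: 1.0}
    if target_rpm >= rpms[-1]:
        return {rpms[-1]: 1.0}
    i = bisect.bisect_right(rpms, target_rpm)
    lo, hi = rpms[i - 1], rpms[i]
    t = (target_rpm - lo) / (hi - lo)       # 0 at lo, 1 at hi
    return {lo: math.cos(t * math.pi / 2),  # equal-power: squared gains sum to 1
            hi: math.sin(t * math.pi / 2)}
```

With loops at 1000/2000/3000 RPM, asking for 2500 RPM returns the 2000 and 3000 loops at roughly 0.707 gain each, so the perceived level stays constant through the blend.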

For now, you can’t add your own sounds to the granular section, as they would probably need to be edited in a very specific way to work here. At launch, Krotos offers 13 different vehicles that use the granular option, but I’m sure more will be added over time, or maybe made available for individual purchase in the future.

Driving Modes

As you can see in the interface above, there are basically two ways to control the engine simulation: manual and auto. In addition, every car comes with a set of three presets: two of them use manual mode and the third uses auto. Using the Dacia 1310 pictured on the right-hand side as an example:

-Dacia 1310: “Free” mode that uses manual driving.
-Dacia 1310 Manual Gears: Uses manual driving but with pre-determined gear shifts on the Revs progression.
-Dacia 1310 Auto Gearbox: Uses the auto mode.

Let’s look at the differences between these three.

In general, manual mode lets you freely change the engine’s RPM and also gives you a “load” knob. This parameter simulates whether you are putting pressure on the engine or, in other words, whether you are pressing the gas pedal, and it allows you to create more realistic-sounding gear shifts and decelerations.

The difference between the two manual presets is that in “free” mode, the relationship between the granular RPM and the master revs knob is completely linear by default, so you have to play with the RPM value yourself to imitate the act of shifting gears. Here is a video of me doing just that, with a revs pass first, followed by a load pass. As you can see, to achieve a natural result, you need to drive the parameters in a realistic way. It would take a bit of practice to follow onscreen action like this, but it feels very easy and responsive.

The other preset type, “Manual Gears”, has the gear shifting already soft-coded into the mod section, including on-load and off-load changes. Of course, you can tweak this as you please, but the preset gives you a nice starting point. In this mode you don’t need to imitate the engine revs with your automation; you can just use curves to describe how hard you want to accelerate or decelerate.

For the most part, this works quite well when going up the revs, but going down forces you to run through the whole set of gears, which doesn’t always feel natural, although sometimes you may want this (Formula 1 cars kind of do it). I tried different ways to avoid this, like staying within the boundaries of the same gear or jumping quickly from a higher to a lower point on the envelope, although this needs to be carefully drawn as automation. A potential solution would be for the RPM ramps to occur only on the way up the revs knob, not when decelerating.

You can also notice how the load drops are already coded into the revs progression, which is pretty handy and also shows that I was too subtle with it on my free test.

The third preset, Auto Gearbox, uses the auto option, which doesn’t let you directly control the granular RPM or load; it simply gives you a slider called “Power” that you can use to accelerate or brake, while the gear shifting is hard-coded and can’t be tweaked. This is similar to driving an automatic car.

Here is an example of me using this mode. Compared to the others, it feels a bit unresponsive at the start, but once you get up to speed it works well, although the gear shifting doesn’t always feel “in the right place”. As long as you don’t need very precise and fast changes in RPM, this mode can be useful for getting natural results quickly.

By the way, you may hear some clicks and pops in my examples above. I am not 100% sure if this is coming from Igniter or was an internal audio recording problem, but the Audi R8 definitely seems to be a bit more “clicky” on the exhaust than other cars I tried later.

Granular Advanced Controls

Lastly, the granular section also includes some other advanced controls:

-Shuffle Depth: Controls how narrow or wide the slice is that the granular engine uses to select samples. Higher values can help make the sound more natural and varied. Using the mods you can, for example, make this value rise as the RPM goes up.

-RPM Smoothing: Slows down the response to changes in RPM. You can try increasing this if the engine feels too wild, or decreasing it for a faster response, which could be useful in auto mode.

-Idle Fade: Use this to adjust the fade between the engine on idle and low revs.

-Crossfade: Controls the blending between different grains or audio slices, making it more abrupt or smooth.

-Lim Threshold & Kick: The documentation doesn’t cover these two but I suppose they are related to an internal limiter.
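If I’m right that RPM Smoothing is essentially a low-pass on the control signal, a one-pole smoother captures the idea. This is my assumption, not the parameter’s documented implementation:

```python
def smooth_rpm(targets, smoothing):
    """One-pole smoother over a stream of target RPM values.
    smoothing=0 follows the target instantly; values near 1 respond
    slowly, like a heavy flywheel."""
    assert 0.0 <= smoothing < 1.0
    out, current = [], targets[0]
    for target in targets:
        current = smoothing * current + (1.0 - smoothing) * target
        out.append(current)
    return out
```

A sudden jump in the revs knob then arrives at the granular RPM gradually, which is why lowering the value makes the engine feel more nervous and immediate.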

Synth Section

The synth section includes 5 oscillators, each with two different waveforms that you can blend together. You can also control the frequency and gain of each oscillator. Frequency and amplitude modulation are available per oscillator, plus a vibrato option.

And that’s pretty much it. It sounds basic, but it is genuinely powerful because you can link any of these parameters to the master revs knob, creating dynamic designs that grow in intensity and speed as the revs go up. You can also combine synth layers with real engines to create hybrid designs mixing real recordings and synths.
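To picture what “linking a parameter to the revs knob” means here, this is a toy rev-driven oscillator; the frequency range and the linear mapping are my own invented example, not Igniter’s internals:

```python
import math

def rev_to_freq(revs, base_hz=60.0, max_hz=480.0):
    """Map the master revs knob (0..1) to an oscillator frequency."""
    return base_hz + revs * (max_hz - base_hz)

def osc_sample(revs, t, blend=0.5):
    """One sample of a sine/square blend at the rev-linked frequency.
    blend=0 is pure sine, blend=1 is pure square."""
    phase = 2 * math.pi * rev_to_freq(revs) * t
    sine = math.sin(phase)
    square = 1.0 if sine >= 0 else -1.0
    return (1.0 - blend) * sine + blend * square
```

Push the revs up and the pitch rises with them; do the same for gain, FM depth or vibrato rate and the whole patch grows in intensity with the engine.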

Here are some examples of sci-fi designs I did from scratch. Something I missed is more options for the noise generator: it would be great to have more noise colours to create textures with, or maybe a filter to shape it. The ability to apply separate FX to different oscillators would also be amazing.

One Shot section

This tab allows you to trigger individual sounds at specific moments on the rev progression curve. Maybe the most obvious use for this section is to trigger tire skids as the revs go up, or screeching braking sounds as they go down. In any case, this section is great for adding sweeteners and flavour to the design.

There are four slots where you can drag and drop sounds. Unlike the granular engine, you can use your own sounds here, dragging and dropping them from Finder. Each slot can be monitored independently, and there are individual knobs to control volume and pitch. Both of these can also be controlled with an envelope instead of a knob, which opens up interesting possibilities.

On top of the sample area there are four “timelines”, each corresponding to one of the slots. Here is where you choose when you want the samples to be triggered, but the horizontal axis doesn’t represent time; it represents rev progression. In other words, you decide where on the acceleration curve you want each sample to be triggered.

Directionality is also accounted for. You can trigger samples as the revs go up or down, depending on which way the triangle points. You can also have a sample trigger both ways (diamond shape), or stop the currently playing samples in the slot (square shape).
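The trigger logic amounts to detecting threshold crossings of the revs value, with the marker shape deciding which direction counts. A minimal sketch (the marker format is mine, for illustration only):

```python
def check_triggers(prev_revs, revs, markers):
    """markers: list of (position, direction) with direction in
    {"up", "down", "both"}. Returns the markers fired by this revs move."""
    fired = []
    for pos, direction in markers:
        crossed_up = prev_revs < pos <= revs
        crossed_down = revs <= pos < prev_revs
        if crossed_up and direction in ("up", "both"):
            fired.append((pos, direction))
        elif crossed_down and direction in ("down", "both"):
            fired.append((pos, direction))
    return fired
```

An “up” skid marker at 0.5 fires only while accelerating through that point; a “down” brake marker at 0.3 fires only on the way back.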

In general, the system is clever and nice to use, but I feel you’d really need some playlist and randomisation controls to make it truly powerful. My idea would be to basically turn each of the slots into something like an FMOD event. This way, you could add a playlist of sounds and control how to cycle through them or randomly jump between them.

This would give you a much richer system, where you can use sets of skids, terrain or engine-pop sounds to choose from each time the event is triggered. For this to work well, you should be able to choose how deterministic the system is, in case you need predictability. Being able to tweak or re-shuffle the samples that were triggered after a pass would also be a good approach. I know Krotos is working on a run-time, middleware version of Igniter, so maybe something like this is already in mind.
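What I’m wishing for would look something like this FMOD-style event sketch (entirely hypothetical, not an Igniter feature): each slot holds a playlist, playback is either sequential or shuffled without immediate repeats, and a fixed seed keeps the “random” choices deterministic when you need predictability:

```python
import random

class OneShotEvent:
    """Hypothetical playlist-per-slot trigger, loosely modelled on FMOD events."""
    def __init__(self, samples, mode="shuffle", seed=None):
        self.samples = list(samples)
        self.mode = mode                 # "sequence" or "shuffle"
        self.rng = random.Random(seed)   # fixed seed -> reproducible passes
        self.index = -1
        self.last = None

    def trigger(self):
        if self.mode == "sequence":
            self.index = (self.index + 1) % len(self.samples)
            choice = self.samples[self.index]
        else:  # shuffle: never the same sample twice in a row
            options = [s for s in self.samples if s != self.last] or self.samples
            choice = self.rng.choice(options)
        self.last = choice
        return choice
```

Every pass through the marker would then pull a different skid from the set, and re-running the automation with the same seed would reproduce the exact same take.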

Loop Section

Although the One Shot section includes an option to loop its samples, this tab gives us much more power and control over sounds that need to be looped. It can be used in conjunction with the granular system, or by itself to create a completely new vehicle system.

This is pretty powerful. It allows you to build your own responsive car design, provided you have recordings of steady RPMs to use. You can also use the loop section to add texture or detail to the granular generator: things like gravel, dirt, snow, clattering, squeaking or engine pops, with their intensity linked to the master revs knob.

You have four slots for loops, and you can control their volume and pitch. The interesting and very handy part is the section at the top, which lets you customise how you want to blend your four loops together, giving you the tools to smooth out both the crossfades and the pitch changes between transitions.

To obtain a good result, you need to make sure you have audio clips that loop cleanly. The Amp section helps when determining the boundaries between the clips, but I’d like more control over the actual volume of each of the sounds when I need to balance them out. I’ve noticed that some of the factory presets use the mod section to control this via the general gain of the whole looping section, but this strikes me as a bit left-field. Shouldn’t I be able to control the gain of each sample with the Amp section? A gain parameter independent of the crossfades is needed here, I think.

On the other hand, the Pitch section is very nice to have and works well. It would be amazing to be able to analyse the pitch of each of the samples and get a “suggested pitch curve”. This could be just a starting point that you then tweak by ear.
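The “suggested pitch curve” I’m imagining follows directly from the RPM ratio, if we assume engine pitch scales linearly with RPM: a loop recorded at 1500 RPM played back to represent 3000 RPM should sit an octave up. A sketch:

```python
import math

def suggested_pitch_semitones(target_rpm, loop_rpm):
    """Semitones to shift a loop recorded at loop_rpm so it sounds like
    target_rpm (assumes engine pitch scales linearly with RPM)."""
    return 12.0 * math.log2(target_rpm / loop_rpm)
```

Evaluating this across the blend region between two adjacent loops would give the starting pitch curve automatically; the ear does the rest.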

The workflow in the loop section is a bit odd, since you can’t hear anything unless the main engine switch is on, but switching it on triggers the first loop, so you can’t audition what you want in isolation unless you manually mute the first slot. Additionally, when building the loop progression, sometimes a slot doesn’t emit sound and you need to manually hit its play button, which is kind of annoying.

So here is an example where I’ve built a Peugeot 307 engine from library recordings. Admittedly, the result is not as smooth as the granular presets and it sounds a bit “processed”; you can hear the artificial pitch bending too much. There are also many dropouts in the audio level, and I don’t know if this is my fault or if there is a way to remedy it. The factory presets that use the loop system are cleaner than this, but I can still hear some dropouts in those, so maybe this is a bug?

As for the sound in general, it depends on how you drive the RPM, and I assume creating a robust, good-sounding vehicle system takes more sample preparation and tinkering than my quick test did. I was also thinking that maybe I chose the wrong range of RPM loops, and I missed having more slots so I could use more RPM states and make the progression smoother.

File Browser

The browser is used to choose and monitor samples for the granular, One Shot and Loop sections. The tagging system is very nice. Igniter includes a nice selection of engines and sweeteners, and many cars also include recordings of doors, horns or wipers ready to use. I’ve noticed that you can’t drag and drop these sounds from Igniter to Pro Tools, which would probably be my first instinct if I just wanted a car door sound on the DAW’s timeline. The alternative is to load the sound in the One Shot section and trigger it either via Pro Tools automation or via the timeline system.

Other than the factory sounds, you can also use the “Files” tab to browse around your own computer files, including external drives, which is very nice.

Something I’ve noticed that is a bit counter-intuitive: in order to preview a sound in the browser, the engine button needs to be on. Maybe that’s because the button simply mutes the whole plugin internally, but it took me a minute to figure out.


Mod Section

As I have mentioned before, the Mod system is a very powerful and important part of Igniter, and probably the one I liked the most. It reminds me of Propellerhead’s Reason, where you can flip the rack and apply envelopes and LFOs to any parameter in the system.

Basically, the mod section allows you to link any parameter within Igniter to the master revs knob. You just drag the name of the desired parameter and drop it on the mod area. Then you can edit the envelope that governs the behaviour, and also use an LFO to add some randomness or movement to the relationship. The range or scale of the change can be adjusted with the sliders that appear to the right of each parameter. There are 8 mod slots, so you can create very different envelopes and very complex systems.
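Conceptually, each mod slot is just a breakpoint envelope evaluated at the current revs value; something like this simplified model of my own:

```python
def mod_value(revs, points):
    """Evaluate a piecewise-linear mod envelope at the current revs (0..1).
    points: sorted list of (revs, value) breakpoints."""
    if revs <= points[0][0]:
        return points[0][1]
    if revs >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= revs <= x1:
            t = (revs - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

An envelope like [(0.0, 0.0), (0.5, 1.0), (1.0, 0.2)] would push the linked parameter up through the mid revs and back down toward redline, and an LFO would simply be added on top of this value.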

By default, the RPM within the granular section is linked linearly to the master revs, and from there you can link all sorts of other things, including FX, to make the engine more dynamic and responsive. Have a look at the presets to get some ideas of what you can do with this; it really allows you to get creative.

I was also thinking that it would be very nice to be able to use the mod section for things other than the RPM. As an experiment, I tried to turn the master revs knob into a distance knob, decoupling it from the granular RPM and linking it in several ways to volume, reverb and EQ.

Why would I want to do this? Because controlling the distance and perspective between shots is probably one of the most time-consuming parts of a vehicle scene. My experiment kind of works, although when you do this you lose the ability to link other things to the vehicle RPM. So, for a really powerful, all-in-one vehicle design tool, I would love to have three master parameters: revs, distance and maybe a third custom one. This is perhaps outside the scope or workflow that Krotos had in mind, but that is at least how I would try to design it. Of course, you can create a similar effect in your DAW, but with this method you can link many things at once to the “distance knob” (engine/exhaust mix, granular FX, reverb sends, etc.), speeding up the workflow massively.
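The “distance knob” experiment boils down to driving several parameters from one control. Here is a sketch of the kind of mappings I mean, with invented curve shapes; in practice the real relationships would be tuned by ear:

```python
import math

def distance_mods(distance, max_distance=100.0):
    """Map one master 'distance' control (in metres, hypothetical) to several
    parameters at once: dry gain, reverb send and a high-frequency roll-off."""
    d = max(distance, 1.0)
    gain_db = -20.0 * math.log10(d)           # inverse-distance law (-6 dB per doubling)
    reverb_send = min(d / max_distance, 1.0)  # more reverb as the car gets farther
    lpf_hz = 20000.0 / d                      # crude air-absorption stand-in
    return {"gain_db": gain_db, "reverb_send": reverb_send, "lpf_hz": lpf_hz}
```

One automation pass on this single control then moves the level, the wet/dry balance and the tone coherently, which is exactly what mixing a car into a scene by hand takes several passes to do.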

FX & Mixer

This section is pretty straightforward, clean and nice to use. You can control the level of each of your audio generators, plus you have an FX send and a pan pot. While the sends and FX are pre-fader, the pan is post-fader. Each section has a rack with 5 slots where you can hook up FX. The available FX are:

-EQ: Very nice parametric EQ with everything you need. Works great.
-Compressor: Very good too, with a gain reduction meter and a limiter mode.
-Limiter: Simple and clean dedicated limiter, useful to make sure you don’t saturate the output at high RPMs.
-Saturation: Good for adding some extra nastiness to an engine with extensive controls and colour presets.
-Transient Shaper: An unusual addition to a plugin like this, since engine sounds don’t have many transients, but it could be cool for adding or removing dynamics in the granular section or on sweeteners.
-Flanger: Nice for sci-fi designs.
-Noise Gate: I suppose it could be useful if you have a noisy recording on your one-shot section.
-Ring Mod: Pretty cool and alien sounding and a nice addition for creating sci-fi stuff.
-Convolution Reverb: Very good to have for recreating distance or an “in-car” sound. The controls are quite simple, but you probably don’t need much more. I’d like more outdoor IRs in the factory library.
-Doppler: Very nice if you need to quickly cover pass-bys. You can control it independently or attach it to the main revs knob. Pass-by presets are already created for each vehicle, which is very handy.
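For reference, the pitch bend a Doppler effect models comes from the standard moving-source formula (stationary listener, speed of sound roughly 343 m/s); this is textbook physics, not Igniter’s specific implementation:

```python
def doppler_hz(source_hz, speed_mps, approaching, sound_speed=343.0):
    """Frequency heard from a moving source by a stationary listener:
    f' = f * c / (c - v), with v positive toward the listener."""
    v = speed_mps if approaching else -speed_mps
    return source_hz * sound_speed / (sound_speed - v)
```

A car approaching at motorway speed sounds roughly a semitone or two sharp, then drops below true pitch the instant it passes, which is the characteristic pass-by sweep.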

General Workflow

In terms of workflow, Igniter allows you to create the engine RPM movements in a very quick and flexible way and of course, you can always come back and tweak the automation to make it work better. Additional passes controlling other parameters (like load) can add extra realism and detail.

The loops are nice to have since you can, for example, make any car run on gravel or dirt just by adding a loop layer to the granular engine. The one shots are not that useful, in my opinion, since you can only have four individual sounds and you can’t assign probability or playlists to the triggers, so every time you pass through them on the RPM curve you hear the exact same sound. The way it works right now, I think you’d be better off editing sweeteners like skids manually in your DAW the old-fashioned way and using Igniter for the engine itself, but I’m open to being wrong about this.

You would probably need two instances of Igniter, one for exterior shots and one for interiors, unless you want to do the interior treatment outside the plugin. Once you have the basic RPM behaviour down, you would then need to mix it into the scene with fader work, panning and distance attenuation. That’s why I was thinking it would be cool to have a dedicated master distance knob, so you can tweak this in one go once you find a reverb that works with the scene. With the system I’m imagining, you would do an RPM pass, a distance pass, some tweaks here and there, and you would be done with that car. Rinse and repeat.

Lastly, it’s also important to mention that Igniter offers multi-output support, so you can get an individual signal from each layer and mix them any way you want in your DAW. This is very much appreciated.

Is Full Tank worth it?

Krotos offers an expanded version called “Igniter Full Tank” which includes all the unprocessed and processed recordings used to build the presets. You get a lot of coverage for every vehicle in Igniter, plus loads of foley and sweeteners. The recordings are a great library by themselves (75 GB of additional audio) and, in combination with Igniter, will let you cover every single detail and sound you may need. To clarify: these extra sounds come as separate audio files that you can browse within Igniter, but they don’t include new presets or vehicles.


I hope both you and I now have a good understanding of how Igniter works and what it can offer. I had a lot of fun testing the plugin. Krotos keeps giving us innovative tools for creating custom, unique soundscapes, and I feel that with them we can offer much more value to our clients, because the result is unique and personal.

Above all, the granular system sounds great, and I know how hard it is to make interactive engines sound good. I’m sure more content will come for the plugin in the future, and maybe some workflow quirks will be fixed with time. As for the features I’ve been suggesting, they are just my own take on how I would improve the software’s workflow and capabilities, and since I’m sure some concepts and perspectives have escaped me, I remain open to new and better ways of using Igniter as it spreads across studios worldwide.

Thanks for reading!

Impressionistic Soundscapes


What your dreams look like

The fascinating thing about Impressionism is that it assumes a painting is never going to be able to recreate reality as accurately as a photograph. Once you leave behind the burden of precision, the artist is free to do what art does best: express a feeling, a mood, a state of mind. Impressionism relies more on movement and light than on shape and form. The composition is open, and the boundary between foreground and background is blurred.

An impressionistic painting doesn’t look like a real place but a distant memory, the impression a place leaves deep in your mind. It looks like the blurry pictures from a dream that linger in your mind just before you forget them.

That’s pretty much as far as my artistic knowledge goes, but I hope you get the idea. I thought it would be cool to try to translate that approach into sound design by creating soundscapes to go along with some Impressionist paintings. But before we do that, we can’t forget that, in a way, this has already happened among a very specific subsection of sound designers, the ones who limit themselves to a narrow set of defined pitches and timbres: music composers.

What your dreams sound like

I was introduced to Impressionist music by the amazing series Young People's Concerts by Leonard Bernstein, which I really can’t recommend enough. If this is the first time you’ve heard of it, just go watch it. There is a whole episode about Impressionism.

He does a better job than I can of explaining it, but basically, when Impressionism is translated into music, we are trying to express the feeling, the essence of something, in a subtle and seductive way. We are not explaining; we are suggesting. This often results in dreamy melodies (whole-tone scales are a staple) and the use of exotic, unresolved harmonies. For the most part, composers limited themselves to traditional instruments but tried to get the most from them in terms of timbre. The piano is probably the instrument of choice for Impressionism, with its large range, dynamics and polyphony (the pedals are heavily used).

As an example, here is what happened musically when Manuel de Falla, who was born in Cádiz like myself, moved to Paris and met the Impressionists. Maybe it’s not his best-known side, but sometimes flamenco has a dreamy, exotic quality that I think is perfect for this style of music.

And here is maybe a more canonical example, by Debussy. Notice how the melody is usually unresolved: like in a dream, you don’t really know how you got there and there is no clear conclusion. This music may not sound that different or special to you, as these traits have been assimilated into mainstream music (think jazz), but keep in mind that at the time it was quite a contrast to the musical establishment.

An acoustic impression

If Impressionism doesn’t want to be constrained by shape, colour or composition, maybe the most logical way to translate this idea into sound would be to forgo concepts like harmony, melody or rhythm. When you do this, only timbre is left, and since shaping timbre is kind of my job, it sounds like a perfect fit.

My first approach to an Impressionistic soundscape is simple: just create an auditory complement to the visuals, extending the world within the painting to a new sense. Let’s lay down sounds that could exist in the scene and that go well with the feeling it conveys.

I’m starting with the one that gave the style its name (and it was meant as an insult), “Impression, soleil levant” by Claude Monet:

Here is a second one, using “Woman in the Bath” by Edgar Degas:

At first, I thought I would use reverb to blur sounds together, as an analogy for how painters mix colours, but I soon discovered that doesn’t work very well. For the bath painting, I wanted to express a feeling of intimacy, a sense of “costumbrismo”, which was actually another feature of Impressionism: portraying everyday life.

Reverb doesn’t help here because it creates an unnatural space that doesn’t complement the painting but opposes it. Monet’s sunrise scene uses more reverb, but only enough to match the environment we are being shown.

One more thing was apparent: it helps to have elements in the scene that suggest motion, since most things that make a sound are moving in some way.

Here is “Effect of Snow on Petit Montrouge” by Édouard Manet.

Since this painting was created during the Franco-Prussian War, I decided it would be cool to also tell a little story within the soundscape. I wanted to capture the peaceful calm of a snowy winter day somewhere in Paris. The calm is then broken when distant cannons are heard and the French soldier contemplating the scene has to go back to his post.

Finally, here is “Gare Saint-Lazare”, by Monet again:

I chose this one because I liked the painting from an aesthetic point of view; it has movement and life. And of course, trains are a nice sound design opportunity.

Going further

After working on these four soundscapes, I realized I was mostly describing the scene, and maybe transmitting some of its essence through the choice of sounds, but not being technically impressionistic. I was basically adding a soundtrack to the painting.

Their relaxing, atmospheric quality goes well with audio that borders on ASMR. It’s somewhat ironic that the best complement to an Impressionist painting is a soundscape that does the opposite: descriptive, detailed, realistic. But maybe it makes sense in a way: these paintings suggest instead of being explicit, so there is room for audio to add to the experience.

Of course, this got me thinking about what it would be like to create soundscapes for other art styles. The ones that distort reality in different ways, like Expressionism or Cubism, could be good candidates. Maybe something worth exploring in the future.

But can we use audio in a way that gets to the core idea of Impressionism? To do this, we would need to go more experimental and abstract: stop using descriptive sound, forget about what you can see and focus on the feeling the painting creates.

Smearing sounds

I thought about using Paulstretch, since if you play with the window size you can blur and smear sounds together, like painters mix colours. This worked nicely, as Paulstretch tends to sound very dreamy. The following soundscape was created from only one audio sample, this recording of some wind chimes:

I created different layers in Paulstretch, playing with the window size, pitch shifting and added harmonics. I refrained from using any “real” audio. Here is “The Cliff at Étretat after the Storm” by Monet.
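The core Paulstretch trick is simple to sketch: take a large analysis window, keep each frequency bin’s magnitude but randomise its phase, and resynthesise. Transients are destroyed, timbre survives. A naive, stdlib-only illustration (real Paulstretch uses big FFT windows with overlap-add; this only shows the spectral step):

```python
import cmath
import math
import random

def dft(x):
    """Naive DFT, fine for a short demo window."""
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
            for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * i / n) for k in range(n)).real / n
            for i in range(n)]

def smear_window(x, rng):
    """Keep every bin's magnitude, randomise its phase (Paulstretch's
    spectral step). Conjugate symmetry is preserved so the output stays real."""
    spec = dft(x)
    n = len(spec)
    out = [0j] * n
    out[0] = complex(abs(spec[0]))                # DC bin stays real
    for k in range(1, n // 2):
        phase = rng.uniform(0.0, 2.0 * math.pi)
        out[k] = abs(spec[k]) * cmath.exp(1j * phase)
        out[n - k] = out[k].conjugate()
    if n % 2 == 0:
        out[n // 2] = complex(abs(spec[n // 2]))  # Nyquist bin stays real
    return idft(out)
```

Each output window has the same spectrum as the input but no coherent phase, which is why the result sounds like a frozen, blurred version of the source.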

As you can hear, I’m getting somewhere interesting. I tried to evoke a warm summer feeling, although I’m sometimes dangerously close to the line between dreamy and unsettling. My first instinct to solve this was to use music tricks, like pitching layers a fifth apart, but I didn’t want to rely on musicality too much.

Here is another, darker example using a fantastic painting, “Winter, Midnight” by Childe Hassam:

This one was created from a music stinger. If you listen to both closely, you can tell it’s the same base sound, but in a droning, dream-like state. It works well because the musical impacts are stretched, creating some movement in the soundscape and some changes in tone.

And finally, the last one turned out quite creepy, maybe too much so for the painting, but I like the result nevertheless. I used a combination of layers from Paulstretch, using the tonal/atonal slider to remove most of the “musicality” from the sounds (which were kind of musical to begin with). Here is “Moonlight, Isle of Shoals” by Childe Hassam.

If this got you interested in learning Paulstretch, I have a blog post about it that goes deep into how it works.


It’s cool to work with the concept of “pure sound design” without the burden of mere description, but at times it seems to come too close to atonal music. That last soundscape got me thinking about Ligeti and Penderecki. But is that a bad thing? Maybe it’s atonal music that is too close to “pure sound design”. Maybe they are the same thing looked at from different perspectives.

In any case, I think both approaches to creating a painting soundscape are valid and worth pursuing. Just the idea of using visual art to inspire audio work is a good way to get your creative juices flowing and tackle things in a different way.

Other than that, I was also reminded that sound is not only simple description; it also conveys feelings and can somehow capture the very essence of a place, an action or a character. That’s something to always keep in mind.

Thoughts on buying gear

Hello! Here are some ideas and tips that I think could help you make better decisions while buying audio equipment.

Think long term

I like to see any piece of gear as an investment, so I try to choose products known for being robust and durable. There are always cheaper options out there, but I don’t mind paying a higher price if I have a better guarantee that the equipment will last longer and be more reliable.

When trying to determine durability, a good hint is a manufacturer that offers a longer warranty period than legally required, and/or a good reputation among veteran users (some detective work in audio forums is a must). It is also a good sign when a product is manufactured in Europe or the US, although this is not very common and doesn’t necessarily guarantee higher quality.

Buying higher-end gear is particularly relevant for audio, since the quality of the electronic components goes a long way in determining sound quality and life expectancy. The use of cheap plastic instead of more durable materials like metal is also commonplace and something to avoid, especially in field equipment.

Something else to think about is that durable gear is usually well known in the industry and may give clients some extra confidence to hire you before others.

On the flip side, you can’t always afford higher quality equipment and sometimes you may need to opt for entry level gear. This can also happen when you need a specific thing for a gig and don’t have the time or money to find the best possible option. In those cases, well, you probably need to bite the bullet, but in general my advice would be to wait if you can. Flip more burgers and sweep more floors. Once you have enough to at least access the mid tier, go for it. In my experience, those investments pay off. Tenfold. You need to spend money to earn money.

I bought a Tascam HD-P2 in late 2011. I chose this model because of its reputation and quality. To this day, I still use it as my main recorder for sound effects. It has also accompanied me through feature films and documentaries, on snowy cold exterior days and crazy hot Seville summers. It has never failed or died during a take.

I am not saying the HD-P2 is perfect. It only offers two microphone inputs, the pre-amps are not ultra clean (but they are quite good for their price range) and the powering options are limited. Nevertheless, it served me well throughout my first years working in audio, it gave me confidence and allowed me to get a huge return on my investment.

The mighty HD-P2. Respect.

Save on the features you don’t need

I think this is key. Don’t get dazzled with fancy stuff that you are never going to use. It is important that you think about the features that you actually need and then look for the best option the market has to offer.

Hopefully, I will have the chance to record more frequently now.

Of course, in order to do that, you need to know what your needs really are, which is the tricky part. Do you prefer more channels or a higher resolution? Bigger memory or longer battery life? If you know what kind of specific work you are going to do, this is going to be easier to decide. Try to narrow your needs and priorities.

I recently bought a Sony PCM D100 because I wanted something portable to record on the go. This recorder is quite expensive (for a handheld device) and doesn’t have XLR inputs, which for me would normally be a big issue. But the thing is, my goal is to have something really portable so I can record in situations where a big rig would be cumbersome.

So I am trading the XLR inputs for great audio quality, battery life, internal memory and build quality, all of them features that are essential if I’m going to use this on the go.

Avoid audio elitism

Sound is something that can be objectively measured but, nevertheless, the way we experience it is quite subjective. People apply all sorts of descriptions to audio like “silky”, “airy” or “muddy”. I’m not saying these are not useful or that these don’t describe real properties but sometimes I think we get caught up in these terms too much.

This problem is twofold. On one hand, sometimes people are so ready to justify their purchase that they start to hear mystical properties in a piece of gear. On the other hand, sometimes we can actually really tell the difference (in terms of clarity or timbre profile) between two pieces of gear but it is so small that it’s only noticeable while soloing and/or A-B testing. If the final consumer is probably not going to tell the difference, is it really that important?

Don’t get me wrong, I still think that audio quality should be a priority, but when investing in equipment the very expensive stuff usually gives you diminishing returns. You need to spend a lot of cash to get from the professional to the “elite” level. Maybe you don’t need to.

So yeah, choose quality but don’t get crazy. Beware of mystical claims and 20K€ cables. I honestly think that if we forced people to take blind A-B tests comparing decent gear with very high end equivalents, they would be amazed at how close they can be.

Your sound is as good as your chain’s weakest link

Before buying a new fancy microphone, maybe stop for a second and think about the small stuff. There is always something outdated or in bad condition. Maybe it would be sensible to improve on those weak areas first.

Sure, you don’t need fancy solid gold cables, but get yourself some decent ones. Another good example of this is battery management. If your gear uses batteries of any kind, invest in good chargers. I recommend you get familiar with the stuff that video and photography folks use. Smart chargers are a great option since they have independent charging cells and programs to keep batteries healthier.

Audio cases (I like Portabrace) are also a great way to make sure your equipment is safe while traveling or on location. I bought my Tascam HD-P2 with a Portabrace case and it’s really a worthy investment. The velcro still works like on day one, eight years later.

This Powerex charger is a very nice option if you need an army of batteries for your recorder and/or wireless kits.

Balance Risk and Personality

Some people are more risk averse than others and this is something you need to take into account. In my case, I don’t feel comfortable rushing things or spending large sums of money, so I try to avoid doing those two things at once. If you are similar to me, remember that at some point you have to take the leap and it is going to feel uncomfortable. But that’s good. That’s what they mean when they say “it’s good to step out of your comfort zone”.

When I bought the Rode Blimp v1, I could not afford anything better. It’s an OK starting point, but I would not recommend it for a long term investment. Not very durable.

If, on the other hand, you tend to rush things, well, take it easy. It may help to give yourself some time to make sure to make the right decision. Sharing your situation with friends or colleagues may help too, you’d be surprised by how much better you can see things when you articulate them out loud and get feedback.

Personally, I don’t like to buy second-hand because I feel like I’m taking a big risk, but if you are comfortable with that, it’s definitely an option. It helps if you can check the condition in person, and knowing the seller is ideal. If you are buying online, using sites with a reputation system is a must. Other than that, second-hand is a risk that may pay off or end up in disaster. So ask yourself: how much more money am I willing to pay to get peace of mind instead?

Reviews are spooky

Any piece of equipment that is reasonably popular is going to have some scary reviews. That’s the nature of the polarized online world: people only bother giving 1 or 5 stars, so there isn’t much nuance. Having said that, reviews are still a valuable resource when used with caution.

My approach is to focus on quality rather than quantity. Sure, you can find many reviews on Amazon nowadays, but I prefer to check audio forums or specialized stores first. You can also check reviews for a product on online stores that you are not planning to buy from. If you are in Europe, the US-based B&H and Sweetwater are great. If you are in the US, Germany’s Thomann is a fantastic source.

Other than that, your best bet is to join and participate in forums like Gearslutz. With time, you’ll get to know people there whose opinion will probably be more valuable than a random Amazon user’s.

Limit your tools

The Sennheiser MKH 416 was my first mic and almost the only one for some time, forcing me to use it in many different ways (on location, for foley, for SFX, for VO…)

Scarcity may sound like a bad thing, but I think you can learn a lot from it. Limiting yourself to a small number of tools forces you to be creative, try new things and, of course, master them. It’s hard to do that if you have too much stuff, so my advice would be to really make the most of what you have before buying something new.

For me, a good example of this is audio libraries. If you already have a decent amount of sounds, there is probably a lot you can do with them. Doing sci-fi or fantasy sounds, for example, will force you to experiment with what you have around in terms of recording gear and plugins and you will learn far more than if you just buy yet another library.

Figuring out: Measuring Loudness

How loud is too loud?

There are many loudness standards and many types of media and platforms nowadays, so making sure audio is at the correct level everywhere can be tricky. In this post, I’m going to talk about the history of measuring loudness and the standards we currently use.

The analogue days

The first step to measure loudness is to define and understand the fundamental nature of the decibel. Luckily, I wrote a post last year about this very subject so you may want to check that before diving into loudness.

So, now that you are acquainted with the dB, let’s think about how we can best use it to measure how loud audio signals are.

In the analogue days, reading audio levels always meant measuring the voltage or power of a signal and comparing it to a reference value. When trying to determine how loud an audio signal is, we can just measure these values across time, but the problem is that levels usually change constantly. So how do we best represent the overall level?

A possible approach would be to just measure the highest value. This method is called peak metering and is handy when we want to make sure we are not working above the system’s capacity, so our signals don’t saturate. But in terms of measuring the general level of a piece of audio, this approach can be very deceiving. For example, a very quiet signal with a sudden loud transient would register as loud despite being quiet as a whole.

As you are probably thinking, a much better method would be to measure an average value across a certain time window instead of the instant reading that peak meters provide. This is usually called RMS (root mean square) metering and it is much closer to how we humans perceive loudness.
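To make the peak vs RMS difference concrete, here is a minimal Python sketch (the function names are my own) comparing both readings on a quiet signal that contains a single loud transient:

```python
import math

def peak_db(samples):
    """Highest absolute sample value, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Root mean square of the whole signal, in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A quiet signal with one loud transient: peak metering says
# "loud", RMS says "quiet", which matches how we actually hear it.
signal = [0.01] * 999 + [1.0]
print(round(peak_db(signal)))  # 0 dBFS
print(round(rms_db(signal)))   # -30 dBFS
```

Real RMS meters average over a sliding window rather than the whole file, but the contrast between the two readings is the point here.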

Let’s have a look at some of the meters that were created:

Real audio signal (grey) and how a VU meter would interpret it. (black)

VU (Volume Unit) meters are probably the most used meters in analogue equipment. They were designed in the 1940s to measure voltage with a response time similar to how we naturally hear. The method is surprisingly simple: the needle’s own weight slows down its movement by around 300 ms on both the attack and the release, so very sudden changes are softened. The time that the meter needs to start moving is usually called the integration time. You will also hear the term “ballistics” for these response times.

The PPM (peak programme meter) is a different type of meter that has been widely used in the UK and Scandinavia since the 1930s. Unlike the VU meter, the PPM uses very short attack integration times (around 10 ms for type II and 4 ms for type I) while using relatively long times for the release (around 1.5 seconds for a 20 dB fall). Since these attack times are very short, PPMs were often considered quasi-peak meters. The long release time helped engineers see peaks for longer and get a feel for the overall levels of a programme, since the needle would fall slowly after a loud section.
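Out of curiosity, ballistics like these can be roughly simulated with a one-pole envelope follower. This is my own simplified sketch, not a calibrated model of either meter, but it shows why a VU barely registers a short click while a PPM nearly catches it:

```python
import math

def envelope(samples, sample_rate, attack_s, release_s):
    """One-pole envelope follower: rises with the attack time
    constant and falls with the release time constant."""
    a_coef = math.exp(-1.0 / (sample_rate * attack_s))
    r_coef = math.exp(-1.0 / (sample_rate * release_s))
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coef = a_coef if x > env else r_coef
        env = coef * env + (1 - coef) * x
        out.append(env)
    return out

sr = 1000  # 1 kHz is enough for a demo
burst = [1.0] * 50 + [0.0] * 950  # a 50 ms click, then silence

vu = envelope(burst, sr, 0.3, 0.3)    # VU-like: ~300 ms both ways
ppm = envelope(burst, sr, 0.01, 1.5)  # PPM-like: fast up, slow down

# The VU barely registers the click; the PPM nearly reaches
# full scale and then falls back slowly.
print(round(max(vu), 2), round(max(ppm), 2))  # 0.15 0.99
```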

The Dorrough Loudness Meter is also worth mentioning. It combines an RMS and a peak meter in one unit and was very common in the 90s. Combining RMS and peak metering in a single unit is a trend that carries on to this day.

VU meter.


The dawn of Digital Audio

As digital audio started to become the new industry standard, new ways to measure audio levels needed to be adopted. But how do we define where 0 sits in the digital realm? In analogue audio, the value we assign to 0 is usually some meaningful measure that helps us avoid saturating the audio chain. These values used to be measured in volts or watts and varied depending on the context and type of gear. For example, for studio equipment in the US, 0VU corresponds to +4 dBu (1.228 V) while Europe’s 0VU is +6 dBu (1.55 V). Consumer equipment uses -10 dBV (0.3162 V) as its 0VU. As you can see, the meaning of 0VU is very context dependent.

In the case of digital audio, 0 dB is simply defined as the loudest level that can flow through the converters before clipping, that is, before the waveform is deformed and distortion is introduced. We call this scale dBFS (decibels relative to full scale). How digital audio levels correspond with analogue levels depends on how your converters are calibrated, but usually 0VU is equated to around -20 dBFS on studio equipment.
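These reference levels are easy to play with in code. A small Python sketch (helper names are my own) converting between dBu, dBV, volts and dBFS:

```python
import math

def sample_to_dbfs(x):
    """Sample amplitude (0..1) to dBFS; 1.0 is full scale."""
    return 20 * math.log10(abs(x))

def dbu_to_volts(dbu):
    """dBu to volts RMS: 0 dBu is defined as 0.7746 V."""
    return 0.7746 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV to volts RMS: 0 dBV is defined as 1 V."""
    return 10 ** (dbv / 20)

print(sample_to_dbfs(1.0))   # 0.0: full scale
print(sample_to_dbfs(0.1))   # about -20
print(dbu_to_volts(4))       # about 1.228 V: US studio 0VU
print(dbu_to_volts(6))       # about 1.546 V: European 0VU
print(dbv_to_volts(-10))     # about 0.316 V: consumer 0VU

# With a typical calibration of 0VU = -20 dBFS, an analogue
# +4 dBu tone lines up with -20 dBFS in the DAW.
```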

Fletcher-Munson curves showing frequency sensitivity for humans. How cool would it be to see the equivalent curves for other animals, like bats?


The platonic loudness standard

Since dBFS is only a scale in the digital world, we still need a way to measure loudness in a human-friendly way within digital audio. As we have seen, this is usually accomplished by averaging audio levels across a certain time window. At the same time, digital audio also needs precision when measuring peaks if we want to avoid saturation when converting audio between analogue and digital and vice versa.

Something else our standard needs to take into consideration is that we are not equally sensitive to all frequencies, as the Fletcher–Munson curves show. As you can see, we are not very sensitive to low or very high frequencies. If we want our audio levels to be accurate, this needs to be accounted for.

So, I have laid out everything that our loudness standard needs. Does such a thing exist?


The ITU BS.1770 standard

This document was presented by the ITU (International Telecommunication Union) in 2006 and fits all the criteria we were looking for. ITU BS.1770 is really a collection of technologies and protocols designed to measure loudness accurately in a digital environment; a set of recommendations, we could say.

Four revisions have been released at the time of this writing plus the ITU BS.1771 which also expands on the same ideas. For simplicity, I will refer to all of these documents as simply the ITU BS.1770 or just ITU.

The loudness unit defined by the ITU is the LKFS, which stands for “Loudness, K-weighted, relative to Full Scale”. This unit combines a weighting curve (named “K”) to account for frequency sensitivity with an averaged (RMS-style) measurement that uses a 400 ms time window. The ITU also defines a “true peak” meter: a peak meter that uses oversampling for greater accuracy.
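For the curious, here is a rough Python sketch of a mono LKFS measurement. The two biquad stages use the 48 kHz K-weighting coefficients published in BS.1770, but the windowing is simplified: I measure the whole buffer instead of a sliding 400 ms window.

```python
import math

def biquad(x, b0, b1, b2, a1, a2):
    """Direct form I biquad filter (a0 normalized to 1)."""
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def loudness_lkfs(samples):
    """Mono loudness per ITU-R BS.1770 at 48 kHz: K-weighting
    (shelf + high-pass) then mean square with the -0.691 dB
    offset. Simplified: whole buffer, no 400 ms window."""
    # Stage 1: high-frequency shelf (head effects), 48 kHz
    s1 = biquad(samples, 1.53512485958697, -2.69169618940638,
                1.19839281085285, -1.69065929318241, 0.73248077421585)
    # Stage 2: RLB high-pass (low-frequency roll-off)
    s2 = biquad(s1, 1.0, -2.0, 1.0,
                -1.99004745483398, 0.99007225036621)
    ms = sum(v * v for v in s2) / len(s2)
    return -0.691 + 10 * math.log10(ms)

# BS.1770 sanity check: a full-scale 997 Hz sine should read
# about -3.01 LKFS.
sr = 48000
sine = [math.sin(2 * math.pi * 997 * n / sr) for n in range(sr)]
print(round(loudness_lkfs(sine), 2))
```

The -0.691 dB offset exists precisely so that a 997 Hz full-scale sine reads -3.01 LKFS, compensating for the K filter’s slight gain around 1 kHz.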

Once the ITU released its recommendations, each region used them as the foundation for its own standards. As the ITU released new updates, each region incorporated some of these ideas while expanding on them. Let’s look at some regional standards.


EBU R128, Time Windows & Gates

This is the standard in use in Europe and it is released by the EBU (European Broadcast Union).

Before I continue, a clarification. The EBU names the loudness unit LUFS (Loudness units relative to full scale) instead of LKFS as the former complies better with scientific naming conventions. So if you see LUFS, keep in mind that this is pretty much the same as LKFS. On the other hand you will also see LU (Loudness Units). This is simply a relative unit that is used when comparing two LUFS or two LKFS values.

In the R128 standard, four different time windows are defined, based on the ITU BS.1771 recommendation. A meter needs to have all of these plus some other features (see below) to be considered capable of operating in “EBU Mode”.

  • True-Peak: Almost instantaneous window with sub-sample accuracy.

  • Momentary: 400 ms window. Useful to get an idea of how loud a particular sound is. Plugins usually offer different scale options.

  • Short Term: 3 seconds window. Gives a good feel of how loud a particular section is.

  • Integrated or Programme: Indicates how loud the whole programme is over its entire length. Sometimes it’s also called “Long Term”.

Why so many different time windows? In my opinion, they are useful when working on a mix since they give you information at different levels of resolution. True-peak tells you whether you would saturate the converters, and it is good practice to always keep some headroom here. The momentary measurement is more or less similar to what VU meters would indicate, and gives you information on a particular short section. I personally don’t look at the momentary meter much because any mix with a decent amount of dynamic range is going to fluctuate here quite a bit. Nevertheless, it is useful to make sure that specific sections are not very far from the target levels.

Short term may be a better tool to get a solid feel of how loud a scene is. This measurement is going to fluctuate, but not as much as the momentary value. In order to get a mix within the standards, you need to make sure the short term value stays around the target level, but you don’t need to be super accurate with this. What I try to do is find a compromise between the level that feels right and my target level and, when in doubt, I favor what feels right.

Finally, the integrated or long term value has a time window with the size of the whole show. This is the value that is going to tell you the overall level and measuring it in a faithful way is tricky as you will see below.

So, I mentioned “target levels”. Which levels? The EBU standard recommends audio to be at -23 LUFS ±0.5 LU (±1 LU for live programmes). We are talking about the integrated measurement here, so the level for the entire show. Additionally, the maximum true peak value allowed is -1 dBTP. And that would be pretty much it, except for one remaining issue: measuring levels over a long stretch of time in a consistent way comes with some challenges.

This is because there is usually a main element that we want to make sure is always easy to hear (usually dialogue or narration) and, since loudness is averaged, that main element carries most of the show’s loudness weight. So we would naturally mix this element to already be at the desired loudness or slightly below. The problem comes when considering all the other elements around the dialogue. If there are too many quiet moments, that is going to make our integrated level quite low, since everything is averaged.

The solution would be to either push the level of the whole show or re-mix the level of the dialogue louder so the integrated value is correct. Either way that would probably make the dialogue too loud and we would also risk saturating the peak meter. Not ideal.

Nugen’s VisLM plugin operating in EBU Mode. You can see all the common EBU features including all time windows, loudness range and a gate indicator.

In order to fix this, R128 follows the recommendations of the revised ITU BS.1770-3. Integrated loudness is calculated using a relative gate that effectively pauses the measurement when levels drop below a threshold of -10 LU relative to an un-gated measurement. There is also an absolute gate at -70 LUFS; nothing below this value is considered for the measurement. These gates give us a more meaningful result since only the relevant foreground audio is taken into account when measuring integrated loudness.
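The gating logic is easier to grasp in code. This sketch applies the two gates to a list of pre-measured 400 ms block loudness values (a simplification of my own: the real algorithm works on K-weighted mean squares and overlapping blocks, but the idea is the same):

```python
import math

def integrated_lufs(block_loudness):
    """Two-stage gating over 400 ms block loudness values,
    in the spirit of ITU-R BS.1770-3 / EBU R128."""
    def mean_l(blocks):
        # Average on an energy basis, then back to dB.
        return 10 * math.log10(
            sum(10 ** (b / 10) for b in blocks) / len(blocks))

    # Absolute gate: drop anything below -70 LUFS.
    abs_gated = [b for b in block_loudness if b > -70.0]
    # Relative gate: drop blocks more than 10 LU below the
    # absolute-gated mean.
    threshold = mean_l(abs_gated) - 10.0
    rel_gated = [b for b in abs_gated if b > threshold]
    return mean_l(rel_gated)

# Dialogue around -23 LUFS plus silences and quiet room tone:
blocks = [-23.0] * 50 + [-80.0] * 20 + [-45.0] * 10
print(round(integrated_lufs(blocks), 1))  # -23.0: the gates
                                          # discard the quiet material
```

Without the gates, the same input would average out far below -23 LUFS, which is exactly the problem described above.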

The last concept I wanted to mention is loudness range or LRA. This is measured in LU and indicates how much the overall levels change throughout the programme, in a macroscopic view. You can think of this as an indication of the dynamic range of your mix: low values would indicate that the mix has a very constant level while higher values would appear when there is a larger difference between quiet and loud moments. The EBU doesn’t recommend any given target value for the loudness range since this would depend on the nature of the show but it is for sure a nice tool to have to get an idea of your overall mix dynamics.
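As a sketch of the idea (simplified by me from EBU Tech 3342, which derives LRA from the gated distribution of short-term values):

```python
import math

def loudness_range(short_term):
    """Loudness range (LRA) in the spirit of EBU Tech 3342:
    gate the short-term (3 s) loudness values, then take the
    spread between the 10th and 95th percentiles, in LU."""
    gated = [l for l in short_term if l > -70.0]  # absolute gate
    mean_l = 10 * math.log10(
        sum(10 ** (l / 10) for l in gated) / len(gated))
    # Relative gate at -20 LU below the gated mean.
    gated = sorted(l for l in gated if l > mean_l - 20.0)
    p10 = gated[int(0.10 * (len(gated) - 1))]
    p95 = gated[int(0.95 * (len(gated) - 1))]
    return p95 - p10

# A quiet scene followed by a loud one: the LRA reports the
# macroscopic spread between them.
quiet_scene = [-33.0] * 40
loud_scene = [-18.0] * 60
print(loudness_range(quiet_scene + loud_scene))  # 15.0 LU
```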



ATSC A/85

This is the standard used in the US and is released by the ATSC (Advanced Television Systems Committee). It uses LKFS units (remember that LKFS and LUFS are virtually equivalent) and time windows similar to the European ones. The recommended integrated value is -24 LKFS while the maximum peak value allowed is -2 dBTP.

When the first version was released in 2009, this standard recommended a different method for calculating the integrated value. As you know, the EBU system uses a relative gate in order to only consider foreground audio for its measurements, but the ATSC took a different approach. Remember when I said that mixes usually have some main element (often dialogue) that forms the center of the mix?

The ATSC called this main element an “anchor”. Since dialogue is usually this anchor, the system used an algorithm to detect speech and would only consider that when calculating the integrated level. I’ve done some tests with both Waves WLM and Nugen VisLM and the algorithm works pretty well: the integrated value doesn’t even budge when you are monitoring non-dialogue content, although singing usually confuses it.

In fact, in the 2011 update, the ATSC standard started differentiating between regular programmes and commercials. Dialogue-based gating would be used for the former, while all elements in the mix would be considered for the latter. This was actually one of the main goals of the ITU standard initially: to avoid commercials being excessively loud in comparison to the programmes themselves.

Nevertheless, the ATSC updated the standard again in 2013 to follow the ITU BS.1770-3 directives and, from then on, all content would be measured using the same two-gate method Europe uses. Because of this, I was tempted to skip all this ATSC history, but I thought it was important to explain it so you can understand why some loudness plugins offer so many different ATSC options.

Here you can see the ATSC options on WLM. The first two are pre-2013, using either dialogue detection or the whole mix to calculate the integrated level. The third, called “2013”, uses the gated method like Europe.

TV Regional and National Standards

Now that we have a good idea of the different characteristics these standards use, let’s see how they compare.

Country / Region | Standard | Units | Integrated Level | True Peak | Weighting | Integrated level method
Europe | EBU R128 | LUFS | -23 LUFS | -1 dBTP | K | Relative gate
US | ATSC A/85 (post 2013) | LKFS | -24 LKFS | -2 dBTP | K | Relative gate
US | ATSC A/85 (pre 2013, commercials) | LKFS | -24 LKFS | -2 dBTP | K | All elements considered
US | ATSC A/85 (pre 2013, programmes) | LKFS | -24 LKFS | -2 dBTP | K | Dialogue detection
Japan | TR-B32 | LUFS | -24 LUFS | -2 dBTP | K | Relative gate
Australia | OP-59 | LKFS | -24 LKFS | -2 dBTP | K | Relative gate

As you can see, currently, there are only small differences between them.

Loudness for Digital Platforms

I have tried to find the specifications for some of the most used digital platforms but I was only able to find the latest Netflix specs. Hulu, Amazon and HBO don’t specify their requirements, at least not publicly. If you need to deliver a mix to these platforms, make sure they send you their desired specs. In any case, using the latest EBU or ATSC recommendations is probably a good starting point.

In the case of Netflix, the specs are quite curious. They ask for an integrated level of -27 LKFS and a maximum true peak of -2 dBTP. The method to measure the integrated level is dialogue detection, like the ATSC used to recommend, which in a way is a step back. Why would Netflix recommend this if the ATSC spec moved on to gated measurements? Netflix basically says that when using the gated method, mixes with a large dynamic range tend to leave dialogue too low, so they propose a return to the dialogue detection algorithm.

The thing is, this algorithm is old and can be inaccurate, so the decision was controversial. A new, more robust algorithm could be a possible solution for these high dynamic range mixes. Also, -27 LKFS may sound too low, but it wasn’t chosen arbitrarily: it is based on the level where dialogue usually ended up on these mixes. If you want to know more about this, you can check this, this and this article.

Loudness for Theatrical Releases

The case of cinema is very different from broadcast for a very simple reason: you can expect a certain homogeneity in the reproduction systems that you won’t find in home setups. For this reason there is no hard loudness standard that you have to follow.

Dolby Scale | SPL (dBC)
7 | 85
6.5 | 83.33
6 | 81.66
5.5 | 80
5 | 78.33
4.5 | 76.66
4 | 75
3.5 | 65

This lack of a general standard has resulted in a loudness war similar to the one in the music mixing world. The results are lower dynamic ranges and many complaints about cinemas being too loud. Shouldn’t cinema mixes offer a bigger dynamic range than TV? How are these levels determined?

Cinema screens have a Dolby box where the projectionist sets the general level. These levels are set on the Dolby scale and correspond to SPL measurements using C-weighting and the standard Dolby noise. Remember that the broadcast world uses the K curve instead, which doesn’t help when trying to translate between the two.

Nowadays more and more cinemas are automated, meaning that levels are set via software or even remotely. At first, all cinemas used level 7, which is the one recommended by Dolby, but as movies got louder and people complained, projectionists started to use lower levels: 6, 5 and even 4.5 are used regularly. In turn, mixers started to work at those levels too, which resulted in louder mixes overall in order to get the same feel. This, again, made cinemas lower their levels even more.

You see where this is going. To give you an idea, Eelco Grimm, together with Michel Schöpping, analyzed 24 movies available in Dutch cinemas and found levels that varied wildly. The integrated level went from -38 LUFS to -20 LUFS, the maximum short-term level varied from -29 LUFS to -8 LUFS and the maximum true-peak level varied from -7 to +3.5 dBTP. Dialogue levels varied from -41 to -25 LUFS. That’s quite a big difference; imagine if that were the case in broadcast.

The thing is, despite these numbers being very different, we have to remember that all these movies were probably played at different levels on the Dolby scale. Eelco says in his analysis:

  • The average playback level for movies mastered at '7' is -28 LUFS (-29 to -25).

  • The average playback level for movies mastered at '6.3' is -23 LUFS (-25 to -21). They are projected 3 dB softer, so if we corrected the average to a '7' level, it would be -26 LUFS.

  • The average playback level for movies mastered at '5' is -20 LUFS (all were -20). They are projected 7 dB softer, so the corrected average would be -27 LUFS.

So, as you can see, in the end the dialogue level is equivalent to about -27 LUFS in all cases; the only difference is that the movies mixed at 7 (the recommended level) have greater dynamic range, which is important to deliver a cinematic feel that TV can’t provide. The situation is quite unstable and I hope a solid solution based on the ITU recommendations is implemented at some point. If you want to know more about this issue and read the paper that Eelco Grimm released, check this comprehensive article.
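The correction applied in those bullet points is simple arithmetic: add back the attenuation the projectionist dialed in, so all mixes can be compared at the reference fader setting. A trivial sketch (function name is mine):

```python
# Normalizing measured cinema levels back to the reference
# fader setting ('7'): every dB the playback fader is pulled
# down is a dB we subtract from the measured level to compare
# mixes fairly.
def corrected_to_fader7(measured_lufs, playback_attenuation_db):
    return measured_lufs - playback_attenuation_db

print(corrected_to_fader7(-23, 3))  # mastered at '6.3': -26 LUFS
print(corrected_to_fader7(-20, 7))  # mastered at '5':   -27 LUFS
```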

Loudness standards for video games

Video games are everywhere: consoles, computers, phones, tablets, etc., so there is no single standard to use. Having said that, some companies have established some guidelines. Sony, through their ASWG-R001 document, recommends the following:

  • -23 LUFS and -1 dBTP for PlayStation 3 and 4 games.

  • -18 LUFS and -1 dBTP for PS Vita games.

  • The maximum loudness range recommended is 20 LU.

But how do you measure integrated loudness in a game? Integrated loudness was designed for linear media, so Sony’s document recommends making measurements in 30-minute sessions that are a good representation of different sections of the game.

So, despite games being so diverse in platforms and contexts, using the EBU recommendation for consoles and PC (-23 LUFS) and a louder spec for mobile and portable games (-18 LUFS) is a great starting point.

Conclusions and some plugins

I hope you now have a solid foundation on the subject. Things will keep changing, so if you read this in the future, assume some of this information is outdated. Nevertheless, you will hopefully have learned the concepts you need to work with loudness now and in the future.

If you want to measure loudness, many DAWs (including Pro Tools) don’t have a built-in meter that can read LUFS/LKFS, but there are plugins to solve this. I recommend you try both Waves WLM and Nugen VisLM. If you can’t afford a loudness plugin, you can try Youlean, which has a free version and is a great one to start with.

Thanks for reading!

Exploring Sound Design Tools: Sound Particles

Sound Particles allows you to create soundscapes and sound design using virtual particles that can be associated with audio files. The results are then rendered using virtual microphones.

If you want to check it out or follow this review along, you can download the demo here. It has all the features of the paid version but is limited to non-commercial projects.

I won’t explain how to use the software in depth, but I will give an overview and show some practical uses for everyday sound design work. If you want a more in-depth explanation, you can also watch this tutorial.

Sound Particles interface. Nice, clean and responsive.

Features Overview

The heart of the program is its particles. You can create them in three different ways:

  • A Particle Group will create any number of particles at the same time in an area or shape of your choice.

  • A Particle Emitter creates particles over time at a particular rate.

  • A single point source is just a single particle.

By default, particles are created as soon as you hit play, although you can also change the start time to delay their creation. Generally, they last as long as the audio file attached to them.

You can choose the coordinates used to create your particles and also move the individual particles around the scene to create different effects. Particle emitters can also be moved. The movements that you can apply to the particles stack with each other, giving you an amazing amount of options to create motion. Keyframes can also be used to match any movement to a reference video.

See the video below for an example with the three types of particles:

So in the video you can see:

  • A particle group (red) that generates particles in a square shaped area. These particles are not created at the same time because we have also applied a random delay. They have fireworks sounds attached.

  • A particle emitter (orange) moving in a circular motion while the particles it creates also have some small random movement. They have magical sounds attached.

  • A single point source (pink) with my voice paulstretched to infinity.

You can also apply audio modifiers to each particle group. These randomize certain parameters so you get more interesting and varied results. If you think about it, this is similar to how audio works in the real world. Each time you take a step, your shoe makes a slightly different sound: pitch, level and timing will all be different. Sound Particles lets you randomize the audio from each particle in a similar way. The audio modifiers are:

  • Gain: Basically, audio level.

  • Delay: This determines when the particle is created. It is very useful because usually you don’t want all particles in a group to be created at the start. In the example above, the red particles are being created with a random delay.

  • EQ: It applies different filters and bands of EQ to each particle so they don’t sound exactly the same.

  • Granular: This is kind of a special modifier. It slices the audio file and then plays each slice from a certain particle. You can control how long the slices are, or even leave their length random. You can also control whether the slices are played in sequence or in random order.

  • Pitch: It applies a different pitch-shifting value to each particle.
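To illustrate the idea (this is a conceptual sketch, not how Sound Particles is implemented internally), here is how per-particle randomization might look in code, with every particle in a group receiving its own gain, delay and pitch offset; all ranges are made up for the example:

```python
import random

def randomize_particle(base_gain_db=0.0, max_delay_s=2.0, pitch_range=2.0):
    """Return hypothetical per-particle modifier values: each particle gets
    its own level, start time and pitch offset, like Sound Particles'
    audio modifiers do."""
    return {
        "gain_db": base_gain_db + random.uniform(-6.0, 6.0),  # level variation
        "delay_s": random.uniform(0.0, max_delay_s),          # staggered starts
        "pitch_semitones": random.uniform(-pitch_range, pitch_range),
    }

# A group of 100 particles, each with slightly different settings
particles = [randomize_particle() for _ in range(100)]
```

Even though every particle plays the same source file, the small per-particle differences are what make a group sound organic rather than like one file layered a hundred times.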

For any parameter that requires randomization, you can choose different probability distributions to get the result that you want. A uniform distribution (all values have the same weight) and a normal distribution (most values will be around the mean) are probably the most useful ones. You can even create a custom distribution, which is pretty awesome.

Uniform Distribution

Normal Distribution
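In code terms, these distribution types map neatly onto familiar random functions. This is only a conceptual sketch; the ranges, mean, values and weights below are arbitrary examples:

```python
import random

# Uniform: every value in the range is equally likely
uniform_pitch = random.uniform(-12.0, 12.0)

# Normal: most values cluster around the mean (0 semitones here)
normal_pitch = random.gauss(0.0, 3.0)  # mean 0, std dev 3 semitones

# Custom / discrete: only specific values, with hand-picked weights
values = [-7, -5, 0, 5, 7]   # musical intervals in semitones
weights = [1, 2, 4, 2, 1]    # favour the unshifted sound
custom_pitch = random.choices(values, weights=weights, k=1)[0]
```

The custom case is the interesting one: by drawing the distribution yourself you decide exactly which outcomes are possible and how often each one occurs.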

Of course, once you have the particles ready, you need a virtual microphone to capture the result. In this area, the number of options is simply amazing. Not only can you place the microphone anywhere in the scene, but you can also choose between many configurations, including M/S, X/Y and all sorts of surround and ambisonic setups.

If that wasn't enough, you can also create several microphones in the same scene and render different stems per microphone. These stems can contain different combinations of particles, giving you more control later in the mix.

Finally, the project settings page allows you to control how Sound Particles manages sound propagation and attenuation over distance. You can change the speed of sound, simulate the delay of far-away sounds, change how much sounds attenuate with distance or choose whether your scene uses the doppler effect.

Microphone configurations can follow a variety of speaker setups

Project Settings
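Conceptually, the two main ingredients of the propagation settings are a time-of-flight delay and a distance-based gain. The sketch below uses the common 1/d attenuation law, which may differ from the exact model Sound Particles applies:

```python
SPEED_OF_SOUND = 343.0  # m/s at 20 °C (adjustable in the project settings)

def propagation(distance_m, ref_distance_m=1.0):
    """Time-of-flight delay and inverse-distance gain for a particle
    `distance_m` metres from the virtual microphone. A simplified model,
    not necessarily the exact one Sound Particles uses."""
    delay_s = distance_m / SPEED_OF_SOUND
    gain = ref_distance_m / max(distance_m, ref_distance_m)  # 1/d law, clamped
    return delay_s, gain

delay, gain = propagation(343.0)  # a particle 343 m away arrives 1 s late
```

With the 1/d law, doubling the distance halves the amplitude (roughly -6 dB), which is why far-away particles fade so quickly when you pull the microphone back.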

Sound design examples

Enough with the theory, let's hear some real applications. Since Sound Particles is much easier to understand when you see the particles in motion, I decided to create a video for every example instead of just audio.

Battlefield soundscape

This is very simple but could be very useful if you need to create a soundscape and don't want to move every single sound into place by hand. As you can see, it is very easy and quick to create a randomized soundscape. Something I miss here is a bit more control over which sounds are triggered. When you have different types of sounds, it would be nice to be able to trigger some of them only occasionally, in the same way you can in FMOD or Wwise.

It would also be helpful to be able to eliminate a particular particle that moves too close to the mic, or at least to prevent particles from getting too close without resorting to complex custom distributions.

Scifi Interface

Now let’s imagine we are building a somewhat cheesy '80s computer interface, with beeps and bloops and some folders flying around the screen.

As you can see, we are using two particle systems at the same time. One of them (blue) creates all the beeps in a circle around the listener, while the orange one is a particle emitter that throws particles horizontally to simulate things flying by.

Playing with pitch

Let’s explore how we can use the pitch randomization feature to create new, complex sounds from simple ones. In this example, I first use a uniform distribution for a more detuned and unsettling effect. We can also use a discrete distribution so the jumps in pitch are constrained to specific semitones, obtaining a more musical result.

As you can see, just changing the distribution can produce very different results.
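To make the contrast concrete, here is a small sketch of the two approaches: a continuous uniform shift versus a discrete, semitone-only one. The ranges and interval set are my own example values:

```python
import random

def semitones_to_ratio(semitones):
    """Convert a pitch shift in semitones to a playback-speed ratio
    (12 semitones = one octave = double speed)."""
    return 2.0 ** (semitones / 12.0)

# Continuous uniform shift: in-between pitches, detuned and unsettling
detuned = [semitones_to_ratio(random.uniform(-3.0, 3.0)) for _ in range(50)]

# Discrete shift: only whole musical intervals, for a more tonal result
intervals = [-12, -7, -5, 0, 5, 7, 12]
musical = [semitones_to_ratio(random.choice(intervals)) for _ in range(50)]
```

The uniform version lands anywhere between the semitones, which is exactly what creates that thick, detuned cluster; the discrete version snaps every particle to a musical interval.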

We can also automate pitch to create dynamic effects, for example making all the frequencies converge on a central one. The THX Deep Note was achieved with a similar method.

Granular synthesis

This modifier offers many sound design possibilities. Below you can see an example of building some sort of alien speech sound, step by step.

We can also obtain a “voices in my head” effect by slicing up some speech and distributing it around the listener. As you can see, we can always re-create the particles to obtain new variations, which is very handy for video game work.
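A bare-bones version of this kind of slicing can be sketched in a few lines; the sample rate and grain length here are just example values, not anything Sound Particles prescribes:

```python
import random

def make_grains(num_samples, grain_len, shuffle=True):
    """Slice a buffer of `num_samples` samples into (start, end) grain
    ranges, optionally playing them back in random order -- a minimal
    take on what the granular modifier does per particle."""
    grains = [(start, min(start + grain_len, num_samples))
              for start in range(0, num_samples, grain_len)]
    if shuffle:
        random.shuffle(grains)  # scrambled order = "alien speech" effect
    return grains

# 1 second of speech at 48 kHz, chopped into 50 ms grains
grains = make_grains(48_000, 2_400)
```

Handing each shuffled grain to a different particle scattered around the listener is essentially the “voices in my head” recipe: the speech stays recognizable in timbre but loses its order and position.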

Doppler Effect

There are many plugins that recreate the doppler effect, but this one certainly offers a unique visual approach. As you can see below, we can create a doppler effect on a single particle or on many at once.
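For reference, the underlying physics is the classic Doppler formula for a moving source and a stationary listener, which a tool like this evaluates per particle as it travels through the scene:

```python
SPEED_OF_SOUND = 343.0  # m/s

def doppler_frequency(f_source, v_towards_mic):
    """Observed frequency for a source moving at `v_towards_mic` m/s
    toward (positive) or away from (negative) a stationary listener."""
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_towards_mic)

approaching = doppler_frequency(440.0, 30.0)   # ~108 km/h toward us: pitch up
receding = doppler_frequency(440.0, -30.0)     # moving away: pitch down
```

Because the shift depends on the velocity component along the line to the microphone, the pitch sweeps smoothly through the source frequency at the moment of closest approach, which is the familiar fly-by sound.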


I hope you found this software interesting. I think it is a very good tool to have in your arsenal, and I feel I have barely scratched the surface of the sonic possibilities it offers. I believe there is an update coming soon for Sound Particles, so I may have another look then and write a new post covering the new features.

You can also have a look at a couple of plugins that Nuno Fonseca, Sound Particles' creator, has released. They allow you to use the doppler and air absorption simulations from Sound Particles in a convenient plugin form that you can use in your DAW.