Dear Devs, 7 Reasons why your Game may need Audio Middleware.

There are several audio middleware programs on the market. You may have heard of the two main players: FMOD and Wwise. Both offer free licenses for smaller-budget games and paid licenses for bigger projects.

So, what is Audio Middleware? Does your game need it?

Audio middleware is a bridge between your game's engine and the game's music and sound effects. Although it is true that most game engines offer ready-to-use audio functionality (and some of it overlaps with the features explained below), middleware gives you more power and flexibility for creating, organizing and implementing audio.

Here are the seven main reasons to consider using middleware:

1. It gives Independence to the Audio Team.

Creating sound effects and music for a game is already a good amount of work, but that is barely half the battle. For these assets to work, they need to be implemented in the game and be connected to in-game states and variables like health or speed.

This connection will always need some collaboration between the audio team and the programming team. Middleware makes the process much easier: once the variables are created and associated, the audio team is free to tweak how the gameplay affects the audio without going into the code or the game engine.
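As a rough illustration of that split of responsibilities (this is not the FMOD or Wwise API, just a hypothetical sketch): the programmers only ever send a raw game variable, while the mapping curve belongs entirely to the audio team.

```python
# Illustrative sketch (not a real middleware API): a game variable bound
# to an audio parameter through a curve the audio team owns and can tweak.

class AudioParameter:
    """Maps a game-side variable (e.g. health 0-100) to an audio value."""
    def __init__(self, name, curve):
        self.name = name
        self.curve = curve  # mapping function, owned by the audio team

    def evaluate(self, game_value):
        return self.curve(game_value)

# Hypothetical design choice: low health muffles the music by lowering
# a low-pass filter cutoff, linearly between 500 Hz and 20000 Hz.
health_to_cutoff = AudioParameter(
    "health",
    lambda health: 500.0 + (20000.0 - 500.0) * (health / 100.0),
)

# The programming team only sends the raw variable; the audio team can
# later reshape the curve without touching game code.
print(health_to_cutoff.evaluate(100))  # full health: bright, open music
print(health_to_cutoff.evaluate(10))   # near death: heavily muffled
```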

2. Adaptive Music.

Music is usually linear and predictable, which is fine for linear media like movies. But in the case of video games, we have the power to make music adapt and react to the gameplay, giving the player a much more compelling experience.

Middleware plays an important role here because it gives the composer non-linear tools, encouraging them to think about the soundtrack not in terms of defined songs but of different layers or fragments of music that can be triggered, modified and silenced as the player progresses.

3. Better handling of Variation and Repetition.

Back when memory was a limiting factor, games had to get by with just a few sounds, which usually meant repetition, a lot of repetition. Although repetition is certainly still used to give an old school flavour, it is not very desirable in modern, more realistic games.

When something happens often enough in a game, the associated sound effect can get boring and annoying pretty fast. Middleware offers tools to avoid this, like randomly selecting the sound from a pool of different variations or randomly altering the pitch, volume or stereo position of the audio file each time it is triggered. When all these tools are combined, we end up with an audio event that is different each time but cohesive and consistent, offering the player a more organic and realistic experience.
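A hedged sketch of what such a randomized event might do under the hood (the file names and ranges here are made up for illustration):

```python
import random

# Each trigger picks a random file from a pool and applies small random
# pitch, volume and pan offsets, so repeats never sound identical.

FOOTSTEP_POOL = ["step_01.wav", "step_02.wav", "step_03.wav", "step_04.wav"]

def trigger_footstep(rng=random):
    return {
        "file": rng.choice(FOOTSTEP_POOL),
        "pitch_semitones": rng.uniform(-1.0, 1.0),  # subtle detune
        "volume_db": rng.uniform(-3.0, 0.0),        # slight level change
        "pan": rng.uniform(-0.2, 0.2),              # small stereo drift
    }

# Every call is slightly different, but always within cohesive bounds:
print(trigger_footstep())
```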

4. Advanced Layering.

Layering is how we sound designers usually build sounds: we combine different, modified individual sounds to create a new and unique one. Instead of mixing this combination down, middleware allows us to import all these layers into different tracks so we can apply different effects and treatments to each sound separately.

This flexibility is very important and powerful. It helps us better adapt the character and feel of a sound event to the context of the gameplay. For example, a sci-fi gun could have a series of layers (mechanical, laser, alien hum, low-frequency impact, etc.) and having all these layers separated would allow us to vary the balance between them depending on things like ammo, distance to the source or damage to the weapon.

5. Responsive Audio tools.

Sound effects are usually created using linear audio software like Pro Tools, Nuendo or Reaper, also called DAWs (Digital Audio Workstations). The tools found in DAWs allow us to transform and shape sounds: things like equalization, compression and other effects are the bread and butter of audio people. Most of the modern music, sound effects and movies that you've ever heard came from a DAW.

But the issue is that once you bounce or export your sound, it's kind of set in stone: that's how it will sound when you trigger it in your game. Middleware not only gives us the same tools that you can find in a DAW; more importantly, it also gives us the ability to make them interact with variables and states coming from the game engine.

How about a monster whose voice gets deeper as it gets larger? Or music and sound effects that get louder, brighter and more distorted as time runs out?
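The monster idea can boil down to a single mapping (an illustrative sketch, not any engine's or middleware's actual API): the creature's scale drives a playback-rate multiplier, and halving the rate drops the voice one octave.

```python
# Hypothetical mapping for "bigger monster, deeper voice": bigger
# creatures play their voice samples slower, hence lower in pitch.

def voice_playback_rate(size_scale, reference_scale=1.0):
    """Return the playback-rate multiplier for a creature of a given size."""
    return reference_scale / size_scale

print(voice_playback_rate(1.0))  # normal size -> original pitch
print(voice_playback_rate(2.0))  # twice as big -> one octave down
```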

6. Hardware & Memory Optimization.

Different areas of a game compete for processing power, and usually audio is not the first priority (or even the second). That's why it is very important to be able to optimize and streamline a game's audio as much as possible.

Middleware offers handy tools to keep the audio tidy, small and efficient. You can customize things like reverbs and other real-time effects and also choose how much quality you want from the final compression algorithm for the audio.

7. Platform flexibility & Localization.

If you need to prepare your game for different platforms, including PC, consoles or mobile phones, middleware makes this very easy. You can compile a different version of the game's audio for each platform. Memory or hardware requirements may differ between them, and you may need to simplify sound events, bake in effects or turn a surround mix into a stereo one.

You can also have a different version per language, so the voice acting would be different but the overall sound effects and treatment of the voices would be consistent.

I hope this gave you a bit of a taste of what middleware is capable of. When in doubt, don't hesitate to ask us audio folks!
Thanks for reading.

Exploring Audio Tools: Mammut


Mammut is a strange and unpredictable piece of software. It basically does a Fast Fourier Transform (FFT) of a sound file but unlike Paulstretch, which uses slices of the sound, Mammut uses the whole thing at once, creating more drastic results.

Mammut is not (in any way) a commercial tool but more of an experimental one, so I won't go into detail about what it is doing under the hood. Instead, I will focus on how it can be used to create interesting and cool sound design. If you want to follow along, you can download it here.

Software Features

Mammut has many processing tabs but I will only cover some of the most interesting ones.

Loading & Playing sounds.

Mammut works as standalone software. You need to load a sound (using the browse button) to be able to start fooling around. The "Duration Doubling" section adds extra space (technically, FFT zero padding) after the sound. This extra space gives some of the effects (like stretching) more time to develop and evolve.

Play and stop the sound on the Play section. There is also a timeline of sorts. Now that our sound is loaded, let's see what we can do with it.


This tab creates a non-linear frequency stretching with frequency-sweep effects. All frequencies are raised to the power of the selected exponent, so small changes are enough to produce very different results. Because of the frequency sweeps, it sounds quite sci-fi, like the classic Star Wars blaster sound. Here are some examples at different exponents:

As you can hear, as the values get further away from 1, the effect is more pronounced and it also starts sooner. Here are some results with values higher than 1:

And here are some interesting results with a servo motor sound.

These sounds remind me a bit of Japanese anime or video games; maybe this could be one of the steps for achieving that kind of style from scratch.
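My rough understanding of this tab can be sketched in a few lines of numpy (this is my own reconstruction, not Mammut's actual code): take a single FFT of the whole file, then move each bin's energy to its frequency raised to the exponent.

```python
import numpy as np

def exponent_warp(signal, exponent, sr=44100):
    """Warp the spectrum by remapping each frequency f to f ** exponent."""
    spectrum = np.fft.rfft(signal)
    warped = np.zeros_like(spectrum)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    bin_width = freqs[1]                 # Hz per FFT bin
    for i, f in enumerate(freqs):
        new_f = f ** exponent            # the non-linear remapping
        j = int(round(new_f / bin_width))
        if j < len(warped):
            warped[j] += spectrum[i]     # energy lands at the new frequency
    return np.fft.irfft(warped, n=len(signal))

# With exponent 1.0 the mapping is the identity and the sound is untouched;
# values away from 1 warp the spectrum more and more drastically.
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
identity = exponent_warp(tone, 1.0)
```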


This tab stretches and contracts the frequency spectrum following a sinusoidal transfer function. You can control the frequency and amplitude of this change.

This one is weird (no surprise) and it doesn't really do what I was expecting. It tends to create sounds that are increasingly dissonant and white-noise-like as you move towards more extreme parameters. Here are some examples:


Quite cool. This tab removes all the frequencies below a certain intensity threshold, which means you can kind of "extract" the fundamental timbre or resonance of a sound. Used on ambiences (third example below), it sounds dissonant at first and then, once you remove almost all frequencies, kind of dreamy and relaxing.
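The idea is simple enough to sketch in numpy (my own illustration, not Mammut's code): zero out every FFT bin whose magnitude falls below the threshold, then resynthesize. Only the strongest resonances survive.

```python
import numpy as np

def keep_loud_bins(signal, threshold):
    """Spectral gate: keep only FFT bins whose magnitude exceeds threshold."""
    spectrum = np.fft.rfft(signal)
    spectrum[np.abs(spectrum) < threshold] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A loud 440 Hz tone plus a faint 1000 Hz tone: gating at a threshold
# between their bin magnitudes keeps only the 440 Hz resonance.
t = np.arange(44100) / 44100
mix = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 1000 * t)
gated = keep_loud_bins(mix, threshold=1000.0)
```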

Block Swap

This one basically divides the frequency spectrum into chunks and interchanges their halves a given number of times. It is hard to wrap your head around, but it produces interesting results. First, the number of swaps seems to make the sound more "blurry" and abstract, as you can hear:

Then, the block size seems to create different resonances around different frequencies as you increase it.
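Here is my reading of what a single swap does, sketched in numpy (the real tool's details may well differ): cut the spectrum into fixed-size blocks and exchange each block's two halves.

```python
import numpy as np

def block_swap(spectrum, block_size):
    """Swap the first and second half of every block of FFT bins."""
    out = spectrum.copy()
    half = block_size // 2
    for start in range(0, len(spectrum) - block_size + 1, block_size):
        first = out[start:start + half].copy()
        out[start:start + half] = out[start + half:start + block_size]
        out[start + half:start + block_size] = first
    return out

# On a toy "spectrum" of 8 bins with block size 4, each block's halves trade places:
print(block_swap(np.arange(8, dtype=float), 4))  # [2. 3. 0. 1. 6. 7. 4. 5.]
```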


Simple but hard to predict. It reflects the whole spectrum around the specified frequency. The problem is that when you flip the spectrum around a low frequency, everything ends up under it and is mostly lost. On the other hand, if you use a higher frequency, too much of the energy ends up in the harsh 5-15 kHz area.

A couple of examples:

Keep Peaks


This one doesn't even have controls or an explanation in the documentation. It seems to extend the core timbre of the sound across time, which can be pretty useful. When using this option, the duration doubling function is especially handy.


Mammut is certainly original and unique. Since it only works standalone and is rather unpredictable and unstable, I don't feel it would be very easy to include in someone's workflow. Having said that, it is definitely a nice wild card to have whenever you need something different.

Figuring out: Dolby Atmos

Figuring out: About this series
They say the best way to really learn about something is to force yourself to explain it to someone. That is the goal of this series.
I will delve into a topic that I feel I don't know enough about and explain my findings. Hopefully, we'll both learn something useful!


More than a gimmick?

Up until a few months ago, Dolby Atmos was to me mostly about having speakers on the ceiling in the hope of attracting people back to the cinemas. After getting to know Atmos a little better, I wanted to see what it has to offer and whether it is really going to be the new standard in professional audio. Consider this a 101 introduction to Dolby Atmos.

Surround Systems

Before Atmos, let's start with something familiar. Surround systems have been used for decades to offer a more interesting audio experience for the listener. 5.1 and 7.1 are the most used formats for both cinemas and home setups.

Something important to understand about these systems is that they are channel-based. For example, a theatrical 7.1 system offers the following channels: left, centre, right, left side surround, right side surround, left rear surround, right rear surround and the LFE (low-frequency effects).

As you can see, these channels can be composed of just one speaker (like the central channel) or several of them (like the left surround channel in a theatre). We can send audio to any channel independently, but we have no control over how much is sent to each of the individual speakers that form a channel.

That is basically how all surround systems work; the only thing that varies is the number of channels.

Dolby Atmos brings two innovations to the table. Firstly, it uses an object-based approach on top of the previous channel-based system. Secondly, it expands the surround feel by adding speakers to the ceiling, unlocking 3D sound. Let's look at both of these features:


Dolby Atmos allows for 128 channels in total. We can use a certain amount of those for traditional channel-based stems and the rest for the new sound objects. 

Think about these sound objects as individual mono sounds that you can place and move around the room. If you place a sound object in a specific location, Dolby Atmos will play the sound at that location, addressing the nearby speakers individually as needed, regardless of how big the room is or how many speakers there are.

In other words, you are telling Atmos the coordinates of the sound instead of how much of the sound feeds each channel. This allows you to place sounds with great precision in big rooms, but at the same time the mix will translate well to smaller rooms or even headphones, since Atmos is just using the coordinates of each sound object in 3D space.
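A toy sketch helps to see the difference (this is a naive distance-based panner of my own, not Dolby's actual renderer; the speaker layout and weighting are made up for illustration). The object only carries coordinates; the per-speaker gains are derived at playback time from whatever layout exists in the room.

```python
import math

# Hypothetical room layout; a real installation could have any number
# of speakers, and the same object coordinates would still work.
SPEAKERS = {
    "front_left": (-1.0, 1.0),
    "front_right": (1.0, 1.0),
    "rear_left": (-1.0, -1.0),
    "rear_right": (1.0, -1.0),
}

def object_gains(x, y):
    """Derive per-speaker gains from an object's coordinates alone."""
    weights = {}
    for name, (sx, sy) in SPEAKERS.items():
        dist = math.hypot(x - sx, y - sy)
        weights[name] = 1.0 / (dist + 0.1)  # closer speaker -> louder
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# An object placed at the front-left corner is dominated by that speaker:
print(object_gains(-1.0, 1.0))
```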

3D Sound

The second innovation is probably the flashiest.

If you think about it, stereo is one-dimensional: sound moves along a horizontal line. Surround audio is 2D: the soundscape is around you, on a horizontal plane. 3D is the next step: sound lives in a cube or a sphere around you.

Before Atmos, some surround 9.1 systems tried to achieve this by placing two speakers on top of the front speakers in order to give some "height" to some elements of the mix.

Dolby Atmos goes one step further by adding speakers to the ceiling itself. Elements like ambiences, FX or music can now be placed overhead, opening the third dimension for the listener.

In theatres, these ceiling speakers usually go in two rows. There are also some extra surround speakers on the walls to make panning smoother when transitioning sounds between onscreen and offscreen. In total, up to 64 individual speakers are allowed on a theatrical Atmos installation.

At home, usually two or four overhead speakers are used, so you'll see configurations like 5.1.2 or 7.1.4. Note how the third number denotes the number of ceiling speakers. Up to 22 speakers are allowed in home setups.

Since installing ceiling speakers may not always be very practical in a home setting, sound is sometimes "fired" at the ceiling so that it bounces back to the listener, giving the impression that it comes from above.

Crafting a soundscape with Atmos in mind

Knowing that a project will be mixed in Atmos changes the approach in terms of sound design and mixing, giving us more tools and challenges to achieve a compelling soundtrack.

For example, building ambiences now has an additional dimension. Imagine a scene inside a car while it is raining: you could have different layers for the car engine and the city exterior, and then the sound of the rain falling on the roof featured on the overhead speakers. A forest ambience could have discrete mono birds chirping above and around you, some of them static, some of them moving throughout the 3D space.

It's also worth noting that Atmos setups usually include one or more extra subwoofers close to the surround and overhead speakers. Although low frequencies are not very directional, it still makes a difference in terms of sound placement to use the surround subwoofer instead of the one behind the screen.

Additionally, the Atmos standard makes sure that all surround speakers offer the same sound pressure level and frequency response as the onscreen ones. This means that while designing sound objects with a wide frequency range, like a fighter jet going by overhead, we have the whole spectrum at our disposal. This wasn't the case with previous systems, where the surround speakers did not have enough power and were best suited to simple atmospheric and background sounds.

Atmos makes you think more about where you want the audio to be in 3D space rather than about which channels and speakers to feed the audio to. It turns the mix into a full-frequency canvas on which to position your elements.

Encoding for Dolby Atmos.

When preparing audio for Atmos, there are two distinct uses we can give to each of the available 128 channels. We can have sound objects as discussed above, and we can also have channel-based submixes (beds). These beds can be created in any traditional channel-based configuration like 5.1 or 7.1 and are mapped to individual speakers or arrays of speakers the old-fashioned way. In contrast, objects are not mapped to any speaker but saved with metadata that describes their coordinates over time.

This double approach (beds + objects) makes Atmos backwards compatible since we are also creating a traditional channel-based version when creating the masters.

To put all this information together we use a renderer. I won't go into too much detail here, but Dolby basically offers two ways of doing this:

Dolby Mastering Suite + RMU:
This is the most advanced option; it is used for theatrical applications and Dolby-certified rooms. It combines the Dolby Mastering Suite software with the Dolby Rendering and Mastering Unit (RMU), a dedicated Dell server computer that communicates with Pro Tools via MADI and processes all the Atmos information while compensating for any delays in the system.

The RMU can be used for monitoring, authoring and recording Dolby Atmos print masters. It is also used for creating and loading room calibrations and configurations.

Note that the Dolby Mastering Suite software runs only on dedicated hardware (the RMU), while we still need a different software package for any Pro Tools systems involved in the Atmos workflow. This is the Dolby Production Suite, which I explain below. The Dolby Mastering Suite includes three Dolby Production Suite copies, but you can also buy the latter separately.


Dolby Production Suite:
This is the package that should be installed on the Pro Tools machines. It basically includes the renderer itself, a monitoring application and all the necessary Pro Tools plugins. In case you are using an RMU, this package will allow you to connect with it. If you are not, it will allow you to play, edit and record any Atmos mixes all within the same Pro Tools system.

While the Dolby Atmos Production Suite can render Atmos objects just like the RMU, it has significant limitations. The software is an "in the box" renderer that runs on the same system as your Pro Tools session, so if your project is large you may not be able to run it. Also, the software won't compensate for any delays produced in the system.

Having said that, the Dolby Production Suite, with its limit of 22 monitor outputs, may be powerful enough for Blu-ray, streaming and VR projects. For larger and/or theatrical projects an RMU, capable of up to 64 outputs, is necessary.

Dolby Atmos Everywhere

Atmos in home theatres is not rendered the same way as in cinemas because of limited bandwidth and processing power. Nearby objects and speakers are clustered together while conserving any relevant panning metadata. This simplified Atmos mix can then be played through a home Atmos setup, like a 7.1.2.

Since ceiling speakers are cumbersome, home setups are becoming more accessible with the inclusion of sound bars and upward-firing speakers.

Blu-rays can carry an Atmos soundtrack, and some broadcasting and streaming companies like Sky or Netflix are starting to offer Atmos content. The 2018 Winter Olympics were the first live event offered in Atmos.

In the world of video games, Dolby Atmos could be especially promising, enhancing the player's experience with immersive and expressive 3D audio. Currently, Xbox One, PC and (to some extent) PS4 offer Dolby Atmos options via either an AV receiver or headphones (behind a paywall). There are a handful of titles ready for Atmos, like Overwatch, Battlefield 1 or Star Wars: Battlefront.

Any Atmos mix can be scaled down to a pair of headphones. You don't need surround headphones for this: the Dolby algorithms convert all the Atmos channels into a binaural stereo signal that sounds around you in 360°. Some phones and tablets are starting to support this already.

Final Thoughts

It seems like Dolby Atmos is here to stay and become the new standard the same way stereo and surround sound replaced their older counterparts.

In my opinion, the key qualities of Atmos are its object-based technology and scalability. Overhead 3D audio is very cool, but it may not be game-changing enough or very accessible for the average user. It remains to be seen whether binaural headphone technology and upward-firing speakers will be good enough to recreate the 3D feel that theatres can currently provide.

Exploring Sound Design Tools: Pickup Coil Microphone

This post belongs to a series where I'm using unconventional microphones to get interesting sounds.
Please have a look at the other posts from the series:

Contact Microphone.

To finish up this three part series about unconventional microphones, here are my results while recording with a coil pickup.

This device records, through induction, the electromagnetic waves generated by any electronic device, allowing you to get all sorts of buzz, fuzz and hum-type sounds. This type of microphone is similar to the pickups used in electric guitars.

I have been recording everything in sight: computers, hard drives, screens, appliances and all sorts of audio equipment. I was very surprised by the vast array of different sounds that you can get. Sometimes just moving the mic a few centimetres gives you a completely different sound, which seems to be a recurring theme throughout this unconventional microphones series.

So, here are some of the sounds I've got. You can download every sound individually or download the whole package through this link.

Hum & Buzz

These are probably the most common sounds you are going to get, since almost any electronic device has a transformer that produces these kinds of sounds.

As you can hear, different devices produce different timbres:

Hum & Fuzz Effects

These two are interesting. The first one was produced by recording a microwave oven and moving the microphone back and forth to create these dopplery whooshes.

The second one was recorded on a blinking electric hob, which created this pulsating, alarm-like pattern.

Data & Glitching

Hard drives, printers, phones and computers produce very cool and interesting sounds. It's worth recording them while idling but also as they boot up.


I'm happy with the results and I've definitely got some cool sounds that I will be using in the future. These could be great for sci-fi, user interface or magical sound design. Thanks for stopping by,

Exploring Sound Design Tools: Hydrophone

This post belongs to a series where I'm using unconventional microphones to get interesting sounds.
Please have a look at the other posts from the series:

Contact Microphone.
Coil Pickup.

Continuing with the unconventional microphones theme, this time I've been fooling around with a hydrophone. As you may know, these are designed to capture sound in water rather than in the air.

I tried recording water movements and props in all sorts of small containers, the kitchen sink and the bathtub. I quickly learned that it is important to manage the cable properly, since moving or touching it can be quite noisy, especially when trying to get quiet sounds. I usually used one hand to keep the microphone and cable still and the other to perform the sound.

I also discovered that very small changes in mic placement usually produce vastly different results. On some occasions, just a few centimetres were the difference between a close, aggressive sound and a distant, atmospheric one. I don't know if this is because water is denser than air and sound waves move about 4.3 times faster in it, but it is certainly something to keep in mind.

Finally, I have to say I was surprised by how clean the sounds were, although when processing very quiet stuff I did some RX cleaning here and there.

So, on with the recordings. You can download every sound individually or download the whole package through this link.


I first tried to get some bubble sounds. I used a plastic drinking straw to get the small ones and then tried sinking a bowl or a mug with some air inside to get bigger ones.

I tried some effervescent tablets too and got some nice fizzy sounds. 


Next, I tried some water movements. I quickly found out that submerging the microphone and trying to create water sounds with hand movements doesn't work really well since not a lot of sound energy reaches the mic.

So I tried recording them with the mic just on the surface of the water and got better results, which you can hear in the first example below.

I also wanted to get some underwater movements and discovered that the easiest way was to move the microphone itself through a large mass of turbulent water. I did this in a filled bathtub (second recording below).

Steady Water Streams

For these sounds, I was trying to get long samples of flowing water that could then be used for underwater scenes.

To achieve this, you need some kind of water flow. In my case, since I didn't have access to a swimming pool or a jacuzzi, I just recorded the whole filling and emptying process of a kitchen sink and a bathtub.

While doing this, I experimented with different mic placements and amounts of water flowing in. You can get a vast array of results by just changing these two factors, as you can hear in these examples:

Metal Kitchen Sink

Here are some other sounds I got in the kitchen sink.

Again, the draining sounds show how important mic placement is. Those changes in sound intensity were produced just by getting closer to or further away from the vortex.


Here are some other random things I tried.

The first one is just me hitting a floating bowl with my finger. The resonance was captured with the mic underwater, close to the bowl but not touching it. As the bowl filled up more and more, the pitch changed in an interesting manner.

Lastly, the second recording below is what water impacting the hydrophone directly sounds like.


It was nice doing this recording session. I learned that mic placement is crucial when working with these microphones. A hydrophone is perhaps kind of a niche purchase, but it could be very useful if you need underwater sounds or want to record anything that involves too much water for a conventional microphone to be safe.