Interview on La Bobina Sonora

I was interviewed by the site “La Bobina Sonora”, which is dedicated to the Spanish and Latin American audio community. I thought it would be interesting to translate the interview into English in case you want to have a look. There are some insights into my career history, the way I approach sound design and mixing, and the projects I was working on at the time (October 2018). So, here we go!

LA BOBINA SONORA: Before starting with the interview, I just wanted to thank you for your presence here at labobinasonora.net.

JAVIER ZUMER: Thank you for the invitation. I’ve been reading the blog for years and I’m happy to be able to contribute.

LBS: You are currently based in Ireland, where you do most of your work. It’s interesting to ask: what are the main differences in the audio industry between Ireland and Spain?

JAVIER: The main difference is that Ireland is a country that enjoys a better economic situation. This brings more stability and specialization to the profession.

Having said that, Ireland is an interesting example because it shares some similarities with Spain. Both countries went under during the economic crisis (both had a property bubble). Also, both live in the shadow of countries like the UK, France or the US, since these have more mature and established industries.

LBS: How are audio professionals treated by the Irish industry? Do any kind of associations or unions exist?

JAVIER: Personally, my experience has been positive. Maybe sound doesn’t get as much love and attention as other departments (that’s kind of universal, since we are visual creatures), but in my environment I usually have the time and resources needed to get the job done.

As for associations, I am not aware of any, but if they do exist they are probably based in Dublin, since the industry is mostly located there. (I’m currently in Galway.)

LBS: Those who work in this amazing profession usually share an appreciation for cinema, music and even other arts. What were the main reasons you ended up building sonic worlds? Maybe your experience in music production brought you there?

JAVIER: Like many other people, the thing that made me consider and appreciate sound was music. Reason was the first audio software that I used in depth and that was when I dropped out of college to study audio.

I still think Reason is a unique starting point since its design imitates real hardware, and it gave me my first notions of how the audio signal flows.

Later, I started to be more interested in audio for cinema and games. I think they offer a great balance of artistic and technical challenges.


LBS: At the start of your career you were getting some experience with music recording and mixing at Mundo Sinfónico. How do you think this time helped you in your career?

JAVIER: Mundo Sinfónico was my first professional audio experience. Héctor Pérez, who owns the place, was kind enough to let me join some projects during recording and mixing.

During that time I learned a lot about using microphones, Pro Tools, and other software. It was pretty much like discovering how all these things are used in the real world and in real applications. Around this time, I also started to learn how to approach a mix.


LBS: So, how were your first steps as a sound designer?

JAVIER: At some point, I knew I needed to invest in my own gear in order to work on projects, and I had to make a decision. I could either invest in music recording or in location audio gear. I decided to go for the latter, since building a studio would lock me into a specific location while I could do location audio anywhere. Also, by that point audio for cinema interested me as much as music production.

With this gear I did many, many short films, some documentaries and TV stuff. Naturally, I would also work on the audio post for some of these projects, and that was how I got into sound design and mixing.

LBS: Is there any specific moment in time when you feel you made a big leap forward in your career?

JAVIER: Maybe the way I got my current job. At that time, I was living in Galway, which is quite far away from Dublin (impossible to commute). Since the industry is really all in Dublin, this was an issue if I wanted to find work, but in those days I was just working on freelance projects here and there.

One day, I decided that it would be cool to find people in my city interested in going out and recording sound effects. I sent some emails to local audio folks and one of them was Ciarán Ó Tuairisc, the head of sound for Telegael, a company that was super close, like a 5-minute drive from my place.

I went there to meet him and see the place, and he gave me some episodes so I could do a sound design test. Some days later, I came back with the results and I was offered a job there. At best, I was expecting that they would consider me for freelance work, but the whole thing turned out to be a kind of job interview that I passed with no need for a CV or a tie.

LBS: What are your main goals when facing a sound design project? Which of them are essential to your workflow?

JAVIER: When doing sound design I like to first do a basic coverage pass: just have a sound for every obvious thing without spending much time on each. Once this is done, the real job begins when you start thinking about how the sounds you already have work together and which ones are important enough to spend more time and thought on.

LBS: When crafting a sonic world, which processes (artistic or technical) deserve the most attention and detail?

JAVIER: The elements that drive the story forward definitely deserve the most attention. It is also very important to give detail to any element that helps with world-building.

If the story takes place in a special location or there is a relevant object, it is important to think about how these should sound. Of course, ideally this should work on a subconscious level for the viewer.

LBS: Talking now about all the different processes that build a sonic world (dialogue editing, ambients/fx, foley, mixing…), which is the hardest for you and which one do you enjoy the most?

JAVIER: Foley is probably where I am the least comfortable. It is a true art that requires experience, coordination and sensitivity to get right. I don’t have a lot of experience doing it and I am not into the physical part of the job, although I know that appeals to other people.

The process I enjoy the most is mixing since this is when all elements come together to create a cohesive whole that moves towards the same artistic direction.

LBS: Do you usually think about mixing when doing sound design? Do you use sub-mixes or pre-mixes on certain elements? Or do you prefer to start the mix completely from scratch?

JAVIER: It depends on the situation. When I’m just doing sound design I try to give the mixer as much control and as many options as possible, so I don’t usually do sub-mixes, although sometimes they make sense.

If I’m mixing and also doing sound design, I tend to pre-mix things as I go and even apply some EQ or compression here and there on elements that I know are going to need it. For this, clip effects in Pro Tools are great.

LBS: Talking about something omnipresent and unavoidable like technology, what gear do you usually use when doing editing, sound design and mixing?

JAVIER: I use a Pro Tools Ultimate rig with an S6 M10 desk. In terms of software, I use the usual stuff; most of my plugins are either from Avid or from Waves. For dialogue editing, iZotope RX is a must.

LBS: What was your latest technological discovery that improved your workflow the most?

JAVIER: Probably Soundly, although that wasn’t very recent. It is library management software that maybe doesn’t offer as many features as Soundminer, but I think it’s a great option. It is more affordable (in the short term) and also offers online libraries that are kept updated and growing. It offers more than enough metadata capabilities and good integration with Pro Tools.


LBS: A big portion of your work is focused on an area that is maybe a little unknown to some of us but very important and clearly rising in relevance. How did you get into video game sound design?

JAVIER: I grew up playing games and this was always an area that interested me when I got into sound design.

One day I saw an ad for a crowdfunding campaign for a Spanish game, Unepic. They were looking for money to record some voice acting, and I emailed them asking whether they would also be interested in some help with sound design. I really had no idea how this kind of work would go and, surprisingly, they were interested and we started to work together.

Six years later, Unepic has sold more than half a million copies across consoles and PC, and it was the first Spanish indie game to get onto Steam. It was a project that taught me a lot, and I have kept working with its developer, Francisco Téllez de Meneses, and many others since.

LBS: What are the main differences between working on video game sound design and working on traditional media?

JAVIER: The main difference is that traditional media is linear. Once you finish a mix, it is going to be the same for all viewers; the only differentiating factor would be the reproduction system, but the mix itself will be the same forever.

On the other hand, video games are interactive, so there is no mix in the traditional sense. You just give the game engine every audio asset needed and the rules that govern how these sounds are played. So the mix is created in real time as the player interacts with the game world.

The real power in video game sound design comes from the fact that you can connect audio tools with parameters and states within the game world. For example, imagine that the music and dialogue are connected to a low pass filter, a reverb and a delay and they change as your health gets lower. Or a game where you build weapons that wear out as you use them so their foley and FX become darker (via an EQ) and more distorted in the process.
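As a rough illustration of the idea (outside any specific middleware, with made-up names and curves), this is the kind of mapping the audio side ends up defining, sketched here in Python:

```python
def lowpass_cutoff_from_health(health: float) -> float:
    """Map player health (0.0 = dying, 1.0 = full) to a low-pass cutoff in Hz.

    Hypothetical curve: at full health the filter is wide open (20 kHz);
    as health drops, music and dialogue get progressively more muffled.
    """
    health = max(0.0, min(1.0, health))
    min_cutoff, max_cutoff = 500.0, 20_000.0
    # Interpolate on a logarithmic scale so the change feels perceptually even.
    return min_cutoff * (max_cutoff / min_cutoff) ** health

# The game engine would push this value to the filter whenever health changes.
for h in (1.0, 0.5, 0.1):
    print(f"health={h:.1f} -> cutoff={lowpass_cutoff_from_health(h):.0f} Hz")
```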

I have an article on my blog with more information for anyone who wants to start doing video game sound design.

LBS: Let’s talk about your work on field and SFX recording. We can find some interesting libraries that you have made on your website, some of them dedicated to something you call “audio explorations”.

How important is field and SFX recording for you?

JAVIER: It’s something I consider very important because once you have access to the big libraries the industry uses, you realize that many sounds are overused. Once you start to hear them, they are everywhere!

So, I think it’s important to bring a more unique and personal approach to sound design. Also, when you record and create your own sound effects you force yourself to be more adventurous and to experiment with techniques and ideas.

LBS: How do you usually plan a field recording session? Are they done within the context of a larger project or do you plan free sessions just to experiment and play around?

JAVIER: This is something I’ve been thinking about for a long time. On one hand, when something specific is needed, I just go out and get it. But over time I have realized that in those cases it’s not very convenient to explore and record interesting stuff, since you have deadlines and many other things to work on.

As a solution, I’ve been going on what I call “explorations”. I just pick a technique, prop, place or piece of software and I try to create interesting stuff while learning how it works. I’ve been blogging about them and also releasing free mini-libraries with the results.


LBS: Any particular piece of advice to keep in mind when doing field recording?

JAVIER: At the beginning of every take, always explain what you are doing with your own voice. Take videos and pictures if you can. I guarantee you won’t remember everything you were doing later when you are editing.

LBS: What kind of gear (recorder, microphones…) and techniques do you usually use when doing field recording?

JAVIER: Nothing too special or obscure. I use a Tascam HD-P2 that works great after seven years of use and is able to record at 192 kHz, although it only has two preamps, so sometimes I need other recorders as reinforcement. The microphones I use are a 416, Oktava 012, Rode NT4, SM57, Sanken COS-11D and some more exotic mics from JRF (hydrophone, contact mic and a coil pickup).


LBS: Which project would you consider a highlight of your career in terms of technical or artistic merit?

JAVIER: Recently, I worked on the sound design and a good portion of the mix for a documentary series about the lighthouses of Ireland that premiered on RTÉ (the Irish BBC).

It was a very interesting project with beautiful helicopter footage. I needed to recreate the audio for 200 minutes of aerial shots, so loads of waves, wind, storms, seagulls and things like that. I tried to give each location and lighthouse its own personality and sound. Some of them are truly astonishing masterpieces of engineering, while others sit in amazing natural locations.

In summary, one of the most beautiful projects I have had the chance of working on.


LBS: Is there any cool anecdote in your almost decade as a professional that you would like to share?

JAVIER: While I was trying to remember an anecdote, I thought I could share something that happens to me from time to time; I wonder if it’s something that other people experience too.

Sometimes, when I’m looking for a particular sound, I bring in some audio just by chance or even by mistake and it works great just as it is. I guess that when you spend many hours editing audio, these things are going to happen from time to time, but it always feels like you were touched by the gods of sound design for a moment.

LBS: Is there any project in your near future?

JAVIER: I’m about to get immersed in Drop Dead Weird, a live-action comedy about three Australian teenagers who move to Ireland, where their parents turn into zombies. I am mixing the show, which is a co-production between Channel 7 (Australia) and RTÉ (Ireland).

It’s a cool, crazy project with a lot of action and sound design, and many people in each scene, which is always a challenge in terms of dialogue editing.


LBS: To wrap things up, any advice for someone who is mad enough to be interested in this beautiful profession?

JAVIER: When I look back at my career there is a pattern that repeats itself: I was able to make a leap forward when I was in the right place at the right time. The problem is that you never know when or where this is going to happen; for each of these moments of success I’ve had many more that were simply unfruitful.

So the best way to go is to be persistent and throw as many seeds into the air as possible while always improving as a professional. Something will bloom.

LBS: Thanks again for your time, Javier. Best of luck with your future projects, which we will keep an eye on here at labobinasonora.net.

JAVIER: Thank you, Óscar, for having me. My pleasure.

Essential Bodyfalls: Sound Library Post-Mortem

Essential Bodyfalls is the second library that I’ve published. This is a brief account of what I learned during the process of creating it alongside fellow sound designers Grace Canavan and Pearse O' Caoimh.

Where to record

At first, we considered recording outdoors, somewhere desolate and quiet, but the Irish weather quickly encouraged us to go another route. It would be very tough to find enough days when the three of us were free and the weather was decent. So we considered finding an indoor space. After some looking around, we found that Grace’s family had a house under construction, with a room that we might be able to use.

The place was empty and echoey but fairly quiet, and mostly ours on the weekends, so we decided to turn it into our improvised foley studio. We couldn’t do anything permanent to the room, so we did some research to find possible solutions that would be easy to remove afterwards.

This is how the room looked at first…

We were able to get some help from the builders working on the house, and we built a wooden frame and two foley pits. The idea was to apply a poor man’s room-within-a-room concept. The frame, which spanned two thirds of the room, was then covered with old blankets and duvets, creating both a dream-like blanket castle and, hopefully, a recording studio.

The result, despite the low-tech approach, was pretty decent acoustically. The room was now very dry, although from a frequency balance perspective there were improvements to make. Firstly, the high-frequency absorption was maybe too much, so we removed some of the blankets to make the room a bit brighter.

The naked wooden frame.

A view from the outside.

This is how one of the corners looked with all the blankets on.

The biggest issue, as always with amateur acoustic work, was the low frequencies. We had some big resonant modes in several places. To solve this (or at least try to), we built some DIY bass traps in the corners. There was an improvement, but it wasn’t very dramatic. We decided to continue anyway, knowing that we might need to do some EQ work on the resulting sounds.

Props: Building dummies

Although the idea of using your own body to record is tempting, it may not be very practical from a medical point of view. We knew we had to build some kind of dummy that we could use as an action double. Something durable, heavy enough and of course realistic sounding.

We tried several approaches to get the right weight and sound.

Mark 1 (Fat Tony): Our first approach was to use sandbags covered with clothes. A big one would be the torso plus two smaller cylindrical ones for limbs. The resulting dummy was heavy (maybe too heavy) and it sounded quite dull.

Mark 2 (Potato Man): A different approach was to stuff some old dungarees with a mix of potatoes and foam. The result was a brighter sound that maybe needed more weight.

Mark 3 (Punching Bag): This time we bought a punching bag and stuffed it with old clothes and foam. This one sounded somewhere in between the two previous ones: it had a good amount of weight to it without being too dull.

We also used other smaller props, like toys and stuffed animals, to give the sounds more variability and to interact with the different materials and surfaces we had. In the end, the best results were achieved by combining two or more props in a single action; we were usually using two of our dummies at a time.


Our collection of dummies during some initial testing.

Surfaces & Materials

Although we considered some others, the final library ended up having body falls on: dirt, gravel, sand, concrete, metal, grass and wood.

We were able to find some of the materials in construction sites where builders were kind enough to let us grab a bucket full of different types of dirt, sand and gravel.

For the concrete, we just used the bare floor of the room, since it had no carpet or tiles. For metal, we used different pieces that we found lying around: a solid one and a more hollow-sounding one.

The grass was recorded using combinations of dry grass and VHS tapes to achieve both short and tall grass. Finally, the wood falls were recorded on an old door and an abandoned pallet.

We used a piece of cloth to contain the materials and easily swap them when needed. Something we quickly discovered was that, to get more interesting results, it’s a good idea to combine different materials. The dirt, for example, had a bit of gravel mixed in to enhance the crunchiness.

Here you can see our buckets, the cloths we were using to contain each material, and the old wooden door on the left.

Recording sessions

Something we learned while working on this project was that at first we were being too ambitious. We were planning to record several falls from each of the dummies with three different intensities on each variation of every surface. This would have taken forever.

In the end, we decided to streamline the process, focusing on getting nice sounds for each surface regardless of the prop used and mixing up intensities. The best results were probably achieved by combining the dummies and using two of them at the same time.

Since we were a team of three, two of us would be recording while the third edited and checked takes on a Pro Tools rig that we set up in another room. This way, we had quick feedback on what was working best.

After we had recorded enough falls on any given surface, we would record some isolated interactions with the material, like drags, impacts, debris, etc. This proved to be essential in the editing phase.

The gear used was quite simple: a Sennheiser MKH416 and a Shure SM57 into my faithful Tascam HD-P2.

Gear in action.

Editing, mixing & Mastering

This is probably one of the most gruelling steps of the process. We needed to process and combine hundreds of sounds to get to the final product. The approach we used was to have a master Pro Tools session with every single dummy and surface combination. We then did a selection of the best sounds from each of the takes.

Our glorious master session.

We then created a new session per final bodyfall type where we combined all the different layers of sounds to achieve a nice range of intensities and complexity. In some cases, we could even use a dull, neutral fall recorded on concrete and for example add a gravel impact and debris to create a gravel body fall.

iZotope RX was used to clean up takes, and EQ and compression were applied all around. We were also mindful of audio levels and applied the same mastering process to all the final sounds, so they have a comfortable level of loudness to work with.

Conclusions

In my opinion, the main lesson learned from this project was that it’s important to set a realistic goal and focus on getting it done to the best of your abilities, instead of planning something too ambitious that you will probably never finish.

Another lesson was that sometimes it’s easier to just pay for something instead of spending a lot of time trying to get it for free. Every problem can be solved with either time or money, and knowing when to use each is key if you want to get things done.

If you work on any library creation project, something you should always keep in mind is that the editing and mixing process is tough and very time-consuming. Try dividing it into smaller chunks or assigning different sections to different people to make it easier.

With all this work now behind us, we are very happy with the results and with how the library is doing. We are definitely looking forward to tackling new projects and applying the lessons learned, but in the meantime, you can check out the library here:

Figuring out: Gain Staging

What is it?

Gain staging is all about managing the audio levels at the different stages of an audio system. In other words, when you need to make something louder, good gain staging is knowing where in the signal chain it is best to do this.

I will focus this article on the realm of mix and post-production work in Pro Tools, since this is what I do daily, but these concepts can be applied in any other audio-related situation, like recording or live sound.

Pro Tools Signal Chain

To start with, let's have a look at the signal chain in Pro Tools:

[Diagram: the Pro Tools signal chain: clip gain → clip effects → inserts → fader → sends / trim / VCA → sub mix bus]

Knowing and understanding this chain is very important when setting your session up for mixing. Note that other DAWs vary in their signal chain. Cubase, for example, offers pre- and post-fader inserts, while in Pro Tools every insert is always pre-fader except for the ones on the master channel.

Also, I've added a Sub Mix Bus (an aux) at the end of the chain because this is how mixing templates are usually set up, and it's important to keep it in mind when thinking about signal flow.

So, let's dive into each of the elements of the chain and see their use and how they interact with each other.

Clip gain & Inserts

As I was saying, in Pro Tools, inserts are pre-fader. No matter how much you lower your track's volume, the audio clip always hits the plugins at its "original" level. This makes clip gain very handy, since we can use it to control clip levels before they hit the insert chain.

You can use clip gain to make sure you don't saturate your first insert's input and to keep the level consistent between different clips on the same track. This last use is especially important when audio is going through a compressor, since you want roughly the same amount of signal being compressed across all the clips on a given channel.

So what if you want a post-fader insert? As I said, you can't directly change an insert to post-fader, but there is a workaround. If you want to affect the signal after the track's volume, you can always route that track (or tracks) to an aux and put the inserts there. In this case, these inserts would be post-fader from the audio channel's perspective, but don't forget they are still pre-fader from the aux channel's own perspective.

Signal flow within the insert chain

Since the audio signal flows from the first to the last insert, when choosing the order of these plugins it's always important to think about the goal you want to achieve. Should you EQ first? Compress first? What if you want a flanger: should it be at the end of the chain or maybe at the beginning?

I don't think there is a definitive answer and, as I was saying, the key is to think about the goal you have in mind and whichever way makes conceptual sense to your brain. EQ and compression order is a classic example of this.

The way I usually work is that I use EQ first to reduce any annoying or problematic frequencies, usually with a high-pass filter as well to remove unnecessary low end. Once this is done, I use the compressor to control the dynamic range as desired. The idea behind this approach is that the compressor only works with the desired part of the signal.

I sometimes add a second EQ after the compressor for further enhancements, usually boosting frequencies if needed. Any other special effects, like a flanger or a vocoder, would go last in the chain.

Please note that, if you use the new Pro Tools clip effects (which I do use), these are applied to the clip before the fader and before the inserts.

Channel Fader

After the insert chain, the signal goes through the channel fader, or track volume. This is where you usually do most of the automation and levelling work. Good gain staging makes working with the fader much easier: you want to be working close to unity, that is, close to 0.

This means that, after clip gain, clip effects and all inserts, you want the signal to be at your target level when the fader is hovering around 0. Why? This is where you have the most control, headroom and comfort. If you look closely at the fader you'll notice it has a logarithmic scale: a small movement near unity means a 1 or 2 dB change, but the same movement down at the bottom could be a 10 dB change. Mixing close to unity makes subtle, precise fader moves easy and comfortable.

Sends

Pro Tools sends are post-fader by default, and this is the behaviour you want most of the time. Sending audio to a reverb or delay is probably the most common use for a send, since you want to keep 100% of the dry signal and just add some wet, processed signal that changes in level as the dry signal changes.

Pre-fader sends are mostly useful for recording and live mixing (sending a headphone mix is a typical example) and I don't find myself using them much in post. Nevertheless, a possible use in a post-production context could be when you want to work with 100% of the wet signal regardless of how much dry signal is coming through. Examples of this could be special effects and/or very distant or echoey reverbs where you don't want to keep much of the original dry signal.

Channel Trim

Trim effectively gives you a second volume lane per track. Why would this be useful? I use trim when I already have an automation curve that I want to keep but I just want to make the whole thing louder or quieter in a dynamic way. Once you finish a trim pass, both curves coalesce into one. This is the default behaviour, but you can change it in Preferences > Mixing > Automation.

VCAs

VCAs (Voltage Controlled Amplifiers) are a concept that comes from analogue consoles, where they allow you to control the level of several tracks with a single fader by controlling the voltage reaching each channel. In Pro Tools, VCAs are a special type of track that has no audio, inserts, inputs or outputs. VCA tracks just have a volume lane that can be used to control the volume of any group of tracks.

So, VCAs are something you usually use when you want to control the overall level of a section of the mix as a whole, like the dialogue or sound effects tracks. In terms of signal flow, VCAs just change a track's level via the track's fader, so you could say they act as a third fader (the second being trim).

Why is this better than just routing the same tracks to an aux and changing the volume there? Auxes are also useful, as you will see in the next section, but if the goal is just level control, VCAs have a few advantages:

  • Coalescing: After every pass, you are able to coalesce your automation, changing the target tracks' levels and leaving your VCA track flat and ready for the next pass.
  • More information: When using an aux instead of a VCA track, there is no way to know if a child track is being affected by it. If you accidentally move that aux fader you may go crazy trying to figure out why your dialogue tracks are all slightly lower (true story). VCAs, on the other hand, show a blue outline (see picture below) with the actual volume lane that would result after coalescing both lanes, so you can always see how a VCA is affecting a track.
  • Post-fader workflow: Another problem with using an aux to control the volume of a group of tracks is that, if you have post-fader sends on those tracks, you will still send that audio away regardless of the parent aux's level. This is because the audio is sent away before it reaches the aux. VCAs avoid this problem by directly affecting the child track's volume and thus also how much is sent post-fader.
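Since clip gain, fader, trim and VCA all just add or subtract decibels (ignoring non-linear inserts like compressors), it can help to remember that gains sum in dB, which is the same as multiplying the linear factors. A quick sketch with made-up values:

```python
def db_to_linear(db: float) -> float:
    """Convert a gain in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

# Hypothetical values for one clip on one dialogue track (all in dB).
clip_gain = -3.0   # trimmed down so it hits the compressor consistently
fader     = -1.5   # mix move
trim      = +2.0   # trim pass lifting the whole scene slightly
vca       = -4.0   # VCA pulling the entire dialogue group down

total_db = clip_gain + fader + trim + vca
print(f"Total gain: {total_db:+.1f} dB (x{db_to_linear(total_db):.3f} linear)")
# Total gain: -6.5 dB (x0.473 linear)
```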

Sub Mix buses

This is the final step of the signal chain. After all inserts, faders, trim and VCAs, the resulting audio signals can be routed directly to your output, or you can use a sub mix bus instead. This is an aux track that sums all the signals from a specific group of channels (like the dialogue tracks) and allows you to control and process each sub mix as a whole.

These are the kind of aux tracks I was talking about in the VCA section. They may not be ideal for controlling the level of a sub mix, but they are useful when you want to process a group of tracks with the same plugins or when you need to print different stems.

An issue you may run into when using them is finding yourself "fighting" for a sound to be loud enough: you push the fader more and more but you barely hear the difference. When this happens, you've probably run out of headroom. Pushing the volume doesn't seem to help because a compressor or limiter further down the signal chain (that is, acting as a post-fader insert) is squashing the signal.

When this happens, you need to go back and give yourself more headroom, either by making sure you are not over-compressing or by lowering every track's volume until you are working at a manageable level. Ideally, you should be metering your mix from the start so you know where you are in terms of loudness. If you mix to a loudness standard like EBU R128, that should give you a nice, comfortable amount of headroom.

Final Thoughts

Essentially, mixing is about making things louder or quieter to serve the story that is being told. As you can see, it's important to know where in the audio chain the best place to do this is. If you keep your chain in order, from clip gain to the sub mix buses, making sure levels are optimal every step of the way, you'll be in control and have a better idea of where to act when issues arise. Happy mixing.

Dear Devs, 7 Reasons why your Game may need Audio Middleware.

There are several audio middleware programs on the market. You may have heard of the two main players: FMOD and Wwise. Both offer free licenses for smaller-budget games and paid licenses for bigger projects.

So, what is Audio Middleware? Does your game need it?

Audio middleware is a bridge between your game's engine and the game's music and sound effects. Although it's true that most game engines offer ready-to-use audio functionality (some of which overlaps with the features explained below), middleware gives you more power and flexibility for creating, organizing and implementing audio.

Here are the seven main reasons to consider using middleware:

1. It gives Independence to the Audio Team.

Creating sound effects and music for a game is already a good amount of work, but that is barely half the battle. For these assets to work, they need to be implemented in the game and be connected to in-game states and variables like health or speed.

This connection will always need some collaboration between the audio team and the programming team. When using middleware, this is a much easier process: once the variables are created and associated, the audio team is free to tweak how the gameplay affects the audio without needing to go into the code or the game engine.

2. Adaptive Music.

Music is usually linear and predictable, which is fine for linear media like movies. But in the case of video games, we have the power to make music adapt and react to the gameplay, giving the player a much more compelling experience.

Middleware plays an important role here because it gives the composer non-linear tools to work with, and lets them think about the soundtrack not in terms of fixed songs but of different layers or fragments of music that can be triggered, modified and silenced as the player progresses.

3. Better handling of Variation and Repetition.

Back when memory was a limiting factor, games had to get by with just a few sounds, which usually meant repetition, a lot of repetition. Although repetition is certainly still used to give an old-school flavour, it's not very desirable in modern, more realistic games.

When something happens often enough in a game, the associated sound effect can get boring and annoying pretty fast. Middleware offers tools to avoid this, like randomly selecting the sound from a pool of different variations or randomly altering the pitch, volume or stereo position of the audio file each time it is triggered. When all these tools are combined, we end up with an audio event that is different each time yet cohesive and consistent, offering the player a more organic and realistic experience.
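In spirit (and with invented names, since the exact setup depends on the middleware and how the sounds are authored), the behaviour these tools give you looks something like this sketch:

```python
import random

footstep_variations = ["footstep_01.wav", "footstep_02.wav",
                       "footstep_03.wav", "footstep_04.wav"]
_last_pick = None

def trigger_footstep():
    """Pick a random variation (avoiding immediate repeats) and randomize
    pitch and volume slightly so no two triggers sound exactly the same."""
    global _last_pick
    clip = random.choice([f for f in footstep_variations if f != _last_pick])
    _last_pick = clip
    pitch_semitones = random.uniform(-1.0, 1.0)  # up to a semitone up or down
    volume_db = random.uniform(-2.0, 0.0)        # up to 2 dB quieter
    # A real engine or middleware would now play `clip` with these offsets.
    return clip, pitch_semitones, volume_db

for _ in range(3):
    print(trigger_footstep())
```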

4. Advanced Layering.

Layering is how we sound designers usually build sounds: we use different, modified individual sounds to create a new and unique one. Middleware allows us, instead of mixing down this combination of sounds, to import all the layers onto different tracks so we can apply different effects and treatments to each sound separately.

This flexibility is very important and powerful. It helps us better adapt the character and feel of a sound event to the context of the gameplay. For example, a sci-fi gun could have a series of layers (mechanical, laser, alien hum, low-frequency impact, etc.), and having all these layers separated allows us to vary the balance between them depending on things like ammo, distance to the source or damage to the weapon.
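As a hedged sketch of that idea (the layer names and curves below are made up for illustration), each layer of a hypothetical sci-fi gun could get its own gain driven by game-side values:

```python
def gun_layer_gains(ammo: float, distance_m: float, damage: float) -> dict:
    """Return a linear gain (0.0-1.0) per layer of a hypothetical sci-fi gun.

    ammo and damage are normalized 0.0-1.0; distance_m is metres to the listener.
    """
    close = max(0.0, 1.0 - distance_m / 50.0)       # detail fades out over ~50 m
    return {
        "mechanical": 0.4 + 0.6 * close,            # mechanical detail, mostly close-up
        "laser":      1.0,                          # core layer, always present
        "alien_hum":  0.3 + 0.7 * ammo,             # hum fades as the charge empties
        "lf_impact":  close * (1.0 - 0.5 * damage), # a worn weapon hits softer
    }

print(gun_layer_gains(ammo=0.2, distance_m=5.0, damage=0.8))
```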

5. Responsive Audio tools.

Sound effects are usually created using linear audio software like Pro Tools, Nuendo or Reaper, also called DAWs (Digital Audio Workstations). The tools we find in DAWs allow us to transform and shape sounds: things like equalization, compression and other effects are the bread and butter of audio people. Most of the modern music, sound effects and films you've ever heard came out of a DAW.

But the issue is that once you bounce or export your sound, it's kind of set in stone: that's how it will sound when you trigger it in your game. Middleware not only gives us the same tools you can find in a DAW; more importantly, it also gives us the ability to make them interact with variables and states coming from the game engine.

How about a monster whose voice gets deeper as it gets larger? Or music and sound effects that get louder, brighter and more distorted as time runs out?

6. Hardware & Memory Optimization.

Different areas of a game compete for processing power, and audio is usually not the first priority (or even the second). That's why it's very important to be able to optimize and streamline a game's audio as much as possible.

Middleware offers handy tools to keep the audio tidy, small and efficient. You can customize things like reverbs and other real-time effects, and also choose how much quality you want from the final compression algorithm applied to the audio.

7. Platform flexibility & Localization.

If you need to prepare your game for different platforms, including PC, consoles or mobile phones, middleware makes this very easy. You can compile a different version of the game's audio for each platform. Memory or hardware requirements may differ for each of them, and you may need to simplify sound events, bake in effects or turn a surround mix into a stereo one.

You can also have a different version per language, so the voice acting would be different but the overall sound effects and treatment of the voices would be consistent.


I hope this gave you a bit of a taste of what middleware is capable of. When in doubt, don't hesitate to ask us audio folks!
Thanks for reading.

Exploring Sound Design Tools: Mammut


Mammut is a strange and unpredictable piece of software. It basically performs a Fast Fourier Transform (FFT) of a sound file but, unlike Paulstretch, which works on slices of the sound, Mammut transforms the whole thing at once, creating more drastic results.

Mammut is not (in any way) a commercial tool but more of an experimental one, so I won't go into detail about what it is doing under the hood. Instead, I will focus on how it can be used to create interesting and cool sound design. If you want to follow along, you can download it here.

Software Features

Mammut has many processing tabs but I will only cover some of the most interesting ones.

Loading & Playing sounds.

Mammut works as standalone software. You need to load a sound (using the browse button) to be able to start fooling around. The "Duration Doubling" section adds extra space (technically, FFT zero padding) after the sound. This extra space gives some of the effects (like stretching) more time to develop and evolve.

You can play and stop the sound in the Play section, and there is also a timeline of sorts. Now that our sound is loaded, let's see what we can do with it.

Stretch

This creates a non-linear frequency stretch with frequency-sweep effects. All frequencies are raised to the power of the selected exponent, so small changes are enough to produce very different results. Because of the frequency sweeps, it sounds quite sci-fi, like the classic Star Wars blaster sound. Here are some examples at different exponents:

As you can hear, as the values get further away from 1, the effect is more pronounced and it also starts sooner. Here are some results with values higher than 1:

And here are some interesting results with a servo motor sound.

These sounds remind me a bit of Japanese anime or video games; maybe this could be one of the steps towards achieving that kind of style from scratch.
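Mammut's exact maths isn't documented, so the following is only a rough approximation of the idea ("raise every frequency to a power, then re-bin the spectrum") using numpy, and assuming the soundfile package for reading and writing WAVs:

```python
import numpy as np
import soundfile as sf  # assumed available for audio I/O

x, sr = sf.read("input.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                      # fold to mono for simplicity

X = np.fft.rfft(x)                          # FFT of the whole file, as Mammut does
freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)

p = 1.1                                     # exponent; further from 1 = more drastic
new_freqs = freqs ** p                      # raise every frequency to the power p

# Re-bin: move each original bin to its new frequency (anything past Nyquist is lost).
Y = np.zeros_like(X)
idx = np.searchsorted(freqs, new_freqs)
keep = idx < len(Y)
np.add.at(Y, idx[keep], X[keep])

y = np.fft.irfft(Y, n=len(x))
sf.write("stretched.wav", y / (np.max(np.abs(y)) + 1e-9), sr)
```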

Wobble

This stretches and contracts the frequency spectrum following a sinusoidal transfer function. You can control the frequency and amplitude of this change.

This one is weird (no surprise) and it doesn't really do what I was expecting. It tends to create sounds that are increasingly dissonant and "white noise"-like as you move to more extreme parameters. Here are some examples:

Threshold

Quite cool. Removes all the frequencies below a certain intensity threshold. This means that you can kind of "extract" the fundamental timbre or resonance of a sound. Used on ambiences (3rd example below), it sounds dissonant at first and then, once you remove almost all frequencies, kind of dreamy and relaxing.
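This is the easiest effect to approximate outside Mammut. A rough sketch of the idea (again assuming numpy and soundfile, with an arbitrary threshold value):

```python
import numpy as np
import soundfile as sf  # assumed available for audio I/O

x, sr = sf.read("ambience.wav")
if x.ndim > 1:
    x = x.mean(axis=1)

X = np.fft.rfft(x)                       # whole-file FFT
mag = np.abs(X)

# Zero every bin quieter than a fraction of the loudest bin;
# raise the fraction to keep only the strongest resonances.
X[mag < 0.05 * mag.max()] = 0.0

y = np.fft.irfft(X, n=len(x))
sf.write("ambience_thresholded.wav", y / (np.max(np.abs(y)) + 1e-9), sr)
```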

Block Swap

This one basically divides the frequency spectrum into chunks and swaps their halves a given number of times. It's hard to wrap your head around, but it produces interesting results. First, the number of swaps seems to make the sound more "blurry" and abstract, as you can hear:

Then, the block size seems to create different resonances around different frequencies as you increase it.

Mirror

Simple but hard to predict. It reflects the whole spectrum around the specified frequency. The problem is that when you flip the spectrum around a low frequency, everything ends up below it and is mostly lost. On the other hand, if you use a higher frequency, too much of the energy ends up in the harsh 5-15 kHz area.
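The same whole-file FFT trick can approximate this one too; a sketch (the 1 kHz mirror point is just an example value):

```python
import numpy as np
import soundfile as sf  # assumed available for audio I/O

x, sr = sf.read("input.wav")
if x.ndim > 1:
    x = x.mean(axis=1)

X = np.fft.rfft(x)
k0 = int(round(1000 * len(x) / sr))      # bin index of the 1 kHz mirror point

# Reflect every bin around k0; bins that land outside the spectrum are dropped.
Y = np.zeros_like(X)
src = np.arange(len(X))
dst = 2 * k0 - src
keep = (dst >= 0) & (dst < len(X))
Y[dst[keep]] = X[src[keep]]

y = np.fft.irfft(Y, n=len(x))
sf.write("mirrored.wav", y / (np.max(np.abs(y)) + 1e-9), sr)
```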

A couple of examples:

Keep Peaks


This one doesn't even have controls or an explanation in the documentation. It seems to extend the core timbre of the sound across time, which can be pretty useful. When using this option, the duration doubling function is especially handy.

Conclusions

Mammut is certainly original and unique. Since it only works standalone and is rather unpredictable and unstable, I don't feel it would be easy to include in someone's workflow. Having said that, it's definitely a nice wild card to have whenever you need something different.