Exploring Sound Design Tools: Paulstretch

Have you heard this?

That video was, years ago, my introduction to "Paul's Extreme Sound Stretch", or just Paulstretch for short: a tool created by Paul Nasca that allows you to stretch audio to ridiculously cosmic lengths.

Some years ago it was fashionable to grab almost anything, from pop music to audio snippets from The Simpsons, stretch it 800% and upload it to YouTube. When the dust settled, we were left with an amazing free tool that has been extensively used by musicians and sound designers. Let's see what it can do.

I encourage you to download Paulstretch and follow along:

Windows - (Source)
Mac - (Source)

The stretch engine

The user interface may seem a bit cryptic at first glance but it is actually fairly simple to use. Instead of going through every section one by one, I will show how different settings affect your sounds with actual examples. For a more exhaustive view, you can read the official documentation and this tutorial before diving in.

As you can see above, there are four main tabs on the main window: Parameters, Process, Binaural beats and Write to file. I'm just going to focus on the most useful and interesting settings from the first two tabs.

Under Parameters, you can find the most basic tools to stretch your sounds. The screenshot shows the default parameters when you open the software and import some audio. 8x is the default stretch value, which may explain why so many of those YouTube videos used an 800% stretch.

The stretch value lets you set how much you want to stretch your sound. You have three modes here. Stretch and Hyperstretch will make sounds longer. Be careful with Hyperstretch because you can create crazily long files with it. There is also a Shorten mode that does the opposite and makes sounds shorter. If you want to make a sound last forever, you can freeze it in place to create an infinite soundscape with the "freeze" button just to the right of the play button.

Below the stretch slider, you can see the window size in samples. This parameter can have quite a profound impact on the final result. Paulstretch breaks up the audio file into multiple slices and this parameter changes the size of those slices, affecting the character of the resulting sound as you will hear below.
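
If you are curious about what is going on under the hood, the rough idea is: take overlapping windows from the input, randomize the phases of each window's spectrum, and overlap-add the results using a bigger hop on the output side than on the input side. Here is a very simplified numpy sketch of that idea (my own approximation, not Paul Nasca's actual code):

```python
import numpy as np

def paulstretch_sketch(audio, stretch=8.0, window_size=7324):
    # audio: mono float array. window_size is the same "window size in samples"
    # setting discussed below: bigger windows = better frequency resolution,
    # more time smearing.
    window = np.hanning(window_size)
    out_hop = window_size // 2                 # hop between output windows
    in_hop = out_hop / stretch                 # smaller input hop = longer output
    out = np.zeros(int(len(audio) * stretch) + window_size)
    pos_in, pos_out = 0.0, 0
    while int(pos_in) + window_size <= len(audio) and pos_out + window_size <= len(out):
        chunk = audio[int(pos_in):int(pos_in) + window_size] * window
        magnitudes = np.abs(np.fft.rfft(chunk))                  # keep the magnitudes...
        phases = np.random.uniform(0.0, 2.0 * np.pi, magnitudes.shape)
        chunk = np.fft.irfft(magnitudes * np.exp(1j * phases))   # ...but randomize the phases
        out[pos_out:pos_out + window_size] += chunk * window
        pos_in += in_hop
        pos_out += out_hop
    return out / (np.max(np.abs(out)) + 1e-9)                    # crude normalisation
```

The random phases are what give the stretched sound its smooth, wash-like quality instead of the grainy repetition you get from simpler time-stretch tricks.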

Let's explore how all these settings will affect different audio samples. First, here is a recording of my voice on the left and the stretched version with default values on the right hand side:

Cool. As you can see in the file name above, 8X is the stretch value while 7.324K is the window size in samples. Notice that the end of the file that Paulstretch created cuts off abruptly. This can be fixed by using lower window size values to create a smoother fade out. This is the classic Paulstretch sound: kind of dreamy, clean and with no noticeable artefacts. You will also notice that, although the original is mono, the stretched version feels more open and stereo.

Just for fun, let's see how the Pro Tools and iZotope RX 6 stretch algorithms deal with an 8x time stretch:

This kind of "artefacty" sound is interesting, useful and even beautiful in its own way. But in terms of cleanly stretching a sound without massively changing its timbre, Paulstretch is clearly the way to go.

Let's now play with the window size value and see how this affects the result. Intermediate values seem to be the cleanest: we are just extending the sound in the most neutral way possible. Lower values (under roughly 3K) will have poor frequency resolution, introducing all sorts of artefacts and a flanger-like character. Here are a couple of examples of applying low values to the same vocal sample:

Using a different recording we get a whole new assortment of artefacts. Below, you can see the original recording on the left, the processed version with the default, dreamy settings in the centre and lastly, on the right, a version with a low window value that seems to summon Beelzebub himself. Awesome.

On the other hand, higher values (over roughly 15K) are better at frequency resolution but the time resolution suffers. This means that, since the chunks are going to be bigger, the frequency response is more accurate and faithful to the original sound, but in terms of time, everything is smeared together into a uniform texture, with timbres and characters from different sections of the original sample blending together. So, it doesn't really make sense to use high values with short, homogeneous sounds. Longer and more heterogeneous sounds will yield more interesting results, as in this case different frequencies will be mixed together.

You can hear below an example with speech. Again, the original is on the left, dreamy default values in the centre and high values on the right. You can still understand syllables and words with a lower window value (centre sample), but with a 66K value the slices in this case are around 2 seconds long, so different vocal sounds blend together into an unintelligible texture.
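
If you want to translate a window size into seconds, just divide the number of samples by the file's sample rate. A tiny sketch (the 44.1 kHz rate here is my assumption, since the post doesn't state the file's rate; lower sample rates push the 66K figure closer to those 2 seconds):

```python
def slice_length_seconds(window_size_samples, sample_rate_hz):
    # window size in samples divided by the sample rate gives the slice length in seconds
    return window_size_samples / sample_rate_hz

print(slice_length_seconds(7_324, 44_100))   # default window at 44.1 kHz: ~0.17 s
print(slice_length_seconds(66_000, 44_100))  # a 66K window at 44.1 kHz: ~1.5 s
```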

Basically, high window values are great for creating smearing textures from heterogeneous audio. Here is another example to help you visualize what the window size does.

On the left, you have a little piece of music with two very different sections: a music box and a drum and bass loop. Each of them is around 3-4 seconds long. If we use a moderate window size (centre sample below) we will hear a music box texture and then a drum texture. The different musical notes are blended together but we still get a sense of the overall harmony. On the third sample (right) we use a window size that yields a slice bigger than 4 seconds, resulting in a blended texture of both the music box and the drums.

Not only can you choose the window size, but also the type of window, which is essentially the shape of the slices. Rectangular/Hamming windows deal better with frequency but they introduce more noise and distortion. Blackman types produce much less noise but they go nuts with the frequency response. See some examples below:

Adding flavour

Jumping now to the Process tab, here we have several very powerful settings to do sound design with.

Harmonics will remove all frequencies from the sample except for a fundamental frequency and a number of harmonics that you can set. You can also change the bandwidth of these harmonics. A lower number of harmonics and a lower bandwidth will yield more tonal results, since a fundamental frequency will dominate the sound, while higher values will be closer to the original source, keeping more of its frequency and noise content.
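
My mental model of this setting, expressed as a sketch (this is only my reading of the feature, not Paulstretch's actual code): build a spectral mask with a bump at each multiple of the fundamental and multiply it into each window's spectrum. A narrow bandwidth leaves almost pure tones; a wide one lets more of the original content through.

```python
import numpy as np

def harmonic_mask(n_bins, sample_rate, f0=110.0, n_harmonics=10, bandwidth_hz=20.0):
    # One Gaussian bump per harmonic of f0; wider bandwidth_hz = less tonal result.
    freqs = np.linspace(0.0, sample_rate / 2.0, n_bins)
    mask = np.zeros(n_bins)
    for k in range(1, n_harmonics + 1):
        mask += np.exp(-0.5 * ((freqs - k * f0) / bandwidth_hz) ** 2)
    return np.clip(mask, 0.0, 1.0)   # multiply this into each window's magnitude spectrum
```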

See the samples below: the first two are the original recording on the left and the stretched version with no harmonic processing on the right. I left the window size kind of low so we have some interesting frequency warping there. Further below, you can hear several versions with harmonic processing applied and increasingly higher bandwidths. Hear how the first one is almost completely tonal and then more and more harmonic and noise content creeps in. It's surprising how different they are from each other.

Definitely very interesting for creating drones and soundscapes. Paulstretch behaves here almost like a synthesizer; it seems like it creates frequencies that were not there before. For example:

Also worth mentioning are the pitch controls. Pitch shift will just tune the pitch like any other pitch shift plugin. Frequency shift creates a dissonant effect by shifting all frequencies by a certain amount. Very cool for scary and horror SFX.

The octave mixer creates copies of your sound and shifts them to certain octaves that you can blend in. Great for calming vibes. See examples below:

 

Lastly, the spread value is supposed to increase the bandwidth of each harmonic, which results in a progressive blend of white noise into the signal as you push the setting further. The cool thing about this is that the white noise will follow the envelope of your sound. This could be used to create ghostly/alien speech. Here are some examples with no spread on the left and spread applied on the right:

And that's it from me! I hope you now have a good idea of what Paulstretch can do. I see a lot of potential to create drones, ghostly horror soundscapes, sci-fi sounds and cool effects for the human voice. Oh, and just stretching things up to 31 billion years is nice too.

Mini Library

Here is a mini library I've put together with some of the example sounds, some extended versions and a bunch of new ones. It includes creatures, drones, voices and alien winds. Feel free to use them in your projects.

Soundly: An unofficial user's guide

This current version of the guide covers Soundly 2.0. I will do my best to keep it updated.

Disclaimer: I decided to make this guide because there are almost no resources online for Soundly and I thought it would be something that some people may find useful. This is not intended to be a strictly objective guide; I've included some of my own opinions and suggestions. I don't have any working or commercial relationship with the creators of Soundly.

Introduction
Soundly is an audio library management application that lets you organise and tag your sound effects and add them to your projects in a fast and convenient way. It also includes an online proprietary library that you can access in conjunction with your own local files. I'll be using the Mac version and shortcuts. Any Mac shortcut using CMD can be used on Windows with CTRL.

Soundly Overview

Pricing and Accounts
Soundly uses a freemium model. Using the app is free with some limitations: you will only be able to import up to 2500 sounds from your local drive and you will only have access to a selection of sounds from the cloud-based Soundly library.

The Pro version allows you to import unlimited local files and lets you access the Soundly online library in its entirety. Additionally, this version allows you to access other third party online libraries within the app. Some of these are free, like the whole catalogue from freesound.org, and others are paid libraries that you can buy via asoundeffect.com.

As you can see below, there is also a 24hr pass option that gives you all the Pro features in case you just need them for a short period of time.

There is also a multi-user option for companies that gives you the pro features for multiple people and includes shared cloud storage so everyone in the team can access the same libraries. 

Audio library sources

Your sounds can come from three fundamental sources:

  • Local: These are the files that you have on your own computer.
  • Cloud: These are files that are always online and that you can access from anywhere.
  • Network: These are files that live on your local network (but not necessarily on your computer).

Let's see how you can manage files from these three sources.

Importing your local libraries

To import your libraries, you just drag and drop any audio files or folders. Make sure you drop them on the "Local" blue box. The best way to keep things organized is to have each of your libraries in its own folder and drop them all into Soundly, instead of just importing the parent folder that contains all these libraries.

This way, Soundly will list all your libraries one by one and you will be able to select which ones you want to include in your searches. You will also be able to browse through subfolders within each library.

If you bring in loose sound files instead of folders, they will go into a "Loose sounds" folder that will be automatically created.

Managing your own sounds on the cloud

For now, this option is only available for multi-user accounts. When you drag and drop files or folders into Soundly, you will see that one of the blue boxes says "Cloud Storage". Once these files are uploaded, you will be able to access them anywhere.

Managing third party cloud libraries

Soundly allows you to access some online cloud libraries without needing to have the files locally. If you are a Pro user you will be able to access all the libraries listed below. For free users, only a selection of sounds from The Soundly Library is available.

  • The Soundly Library: This is a general-purpose library built by sound designer Christian Schaanning. You will be able to access the whole set of sounds (currently around 10,400 files) if you are a Pro user. Free users will just be able to access a selection of around 300 sounds. This is a fairly complete and well tagged library that will probably be enough for an editor or a student, and a nice additional bonus resource for sound designers.
     
  • Freesound.org library: Soundly lets you access the huge catalogue hosted on freesound.org (around 300K sounds!). It's very handy to have this vast amount of material available without having to jump to your browser to search and download sounds by hand. Having said that, freesound.org doesn't always offer the highest quality, and the content is under different Creative Commons licenses. Soundly will show all these different licenses and you can even set things up so Soundly will only show you Creative Commons 0 licensed sounds, which is a great way to make sure you are only using public domain material. To do this, just right click on the freesound folder (under "Cloud") and select Creative Commons 0 Only.
  • The Free Firearm/Medieval Weapon Library: These libraries were produced by Still North Media and financed via Kickstarter. You need Soundly Pro to access them via the cloud, but if you are a free user you can always download them and access them locally.
     
  • Paid asoundeffect.com Libraries: Pro users can browse, purchase and access third-party libraries from the vast asoundeffect.com catalogue and access them anywhere from the cloud section.

So, as you can see, Soundly's cloud capabilities are really about convenience. They give you the ability to search all these cloud-based sources plus your local files, all in a single place, saving time and improving your workflow.

Using shared network databases

The 2.0 version introduced this new feature, aimed at companies or just groups of people using different Soundly clients locally. To use this feature, you will need the computers to be connected via a local network (LAN). Keep in mind that the database and audio files could be on one of the users' computers or on a server or machine-room computer that all other users will access.

So, why use this and not just local copies for each user? This option allows you to have a unified and centralised database that any user can access, edit and improve upon, instead of a fragmented database that is going to be different for every member of the team. Additionally, this database will update its metadata in real time, without the need for users to restart Soundly or re-import the audio libraries when another user makes a metadata change.

To set up a database, you will need to create it with Database > New shared network database. You will then see the following screen:

As the text above says, the database is stored in a .sdb file and this file should be stored on the same network disk as the audio files forming the library. Click on the folder icon to name the database and choose where you want to save the database file. As mentioned, it should be saved in your library root folder and on the same disk where the audio files are. Then, you will find these three options:

  • Duplicate local database: This option will add your local library folders to your otherwise empty network library.
  • Password protect: This lets you add a password that any users trying to connect to the database will need to know.
  • Restrict editing: If selected, unauthorised users won't be able to edit the database content or metadata. 

Once your database is created, any user can access it by going to Database > Connect to shared network database. As mentioned before, the user will need to have access, via the local network, to the computer or hard drive where the files are. Once connected to the database, the user will be able to search within the library and see any new changes, like new folders, files or metadata, made by any other user with permission to edit.

Using Metadata

Having solid metadata is very important if you want to find the sound you need easily. It's no use having a great-sounding library if the metadata is vague or incorrect.

There are other audio library management tools with far more options to manage metadata but, in my opinion, Soundly offers enough features to keep your files well labeled and easy to find. I think most users won't need much more.

When browsing local files you will see the following fields: 

  • Name: The name of the audio file.
  • Time: The duration of the file.
  • Format: Sampling frequency + bit depth when browsing WAV files. For other formats, it will show the name of the format.
  • Channels: The number of channels. Very handy when you just want to see surround files, for example.
  • Library: The name of the parent folder containing the audio file. Ideally, it should be the name of the library. 
  • Description: Additional information about the file to make it easier to find.

If you want to change which fields are visible, right click on any of them to add or remove them. The only editable fields within Soundly are the "Name" and "Description". For my own libraries, I personally use the name field to just state what the sound is in terms of materials, moods and/or actions. I then use the description field to add any additional information that may help locate the sound when searching in the future, even if this information is very different to the original purpose of the sound.

Shortcuts:

  • CMD + E: Edit the name of the file.
  • CMD + T: Edit the description of the file.

If you select more than one file at the same time and press CMD + T, you will be able to access the following dialogue window (see picture on the right hand side).

This window will allow you to modify the names, descriptions or originators of all the selected files at the same time.

The originator is an additional field that appears when editing a file's description and just indicates where the original metadata is coming from.

These are the commands accessible in this window. Keep in mind that any of them can be applied to either the name, description or originator field:

  • Add to start: Adds any text to the start of the field. You may need to end with a space to keep things clear.
  • Add to end: Adds any text to the end of the field. You may need to start with a space to keep things clear.
  • Replace with: Replaces occurrences of a given string of characters with a new one of your choice. Super useful for removing things like underscores and replacing them with spaces, as you will see below.
  • Replace whole: Replaces the whole field with a new text.

Here are some general tips and tricks to keep in mind when editing metadata. Not all of them are necessarily related to Soundly and some are definitely a matter of personal preference but they may be useful to you:

  • When possible, use WAV and interleaved files. Metadata on some other file types will be saved in Soundly but not in the file itself, so it will be lost if the files are moved or renamed. In my experience, sticking to WAV and interleaved is the safest option.
     
  • Searches are not case-sensitive but I like to keep everything lower case for the sake of simplicity.
     
  • You can use dashes "-", underscores "_" and commas "," to separate words (like sci-fi). Soundly will treat them as spaces.
     
  • Self-contained words: Looking for "cars" won't give you anything labeled as "car", but looking for "car" will give you both "car" and "cars". So, in general, it's better to search for the shortest form of a word (usually the singular). Another example of this would be "plane": it will give you files labeled as "plane", "planes" and "airplane", but also "planer" and "planet".
     
  • Establish a set of words used to describe certain sounds and be consistent with them, especially when two options are possible, like "impact" and "hit". Choose one of them and stick with it.

Searching

Searching is very straightforward but I wanted to share some tips:

  • You can use the minus symbol "-" to remove results from your search. For example, if you search for "rock" you may also end up with results referring to rock music. You can then use "rock -music" and hopefully you will filter those unwanted results out.
     
  • You can see your search history (and go back to any previous search) by clicking on the magnifying glass icon to the left of the search box or using the arrows on the right of the search box.
     
  • When you do a search, Soundly will give you suggestions based on global sound effect searches.
     
  • If you want to search just within a group of libraries or folders, you can uncheck the rest of the folders and leave only those you want to search in.
     
  • If you want to search in just one library, you can use the "Search in this library" command that you will see by right-clicking on any of your libraries.
     
  • You can save a particular selection of folders to be used while searching. To set this up, first select the folders you would like to include and then click on the three dots on the right hand side of the SOUNDS tab (see picture below). In that dialogue, you can name and save your selection or delete previous selections. You can then access these selections using the shortcut CMD + Number.

Audio Operations

There are several audio operations that you can perform on your sounds within Soundly. Keep in mind that you won't be affecting the original audio file when using these operations. Let's have a look at them:

  • You can change the sound's pitch, and this will affect its duration too, so this is an old-school pitch change where pitch and length are linked. You can do this by using the big slider on the bottom left hand side. There is also an amount setting (2X, 4X or 8X) that you can use to make the change bigger. Note that if you change the pitch and then move the sound to another app, you will be moving the pitch shifted version of the file.
     
  • There is a volume slider next to the pitch control. This will just change your auditioning level and will not affect the sound's level when importing it into other software.
     
  • You can change the waveform size or zoom with a slider that is on the right hand side of the waveform display. Again, this will just change how you view the waveform, not the actual level of the sound.
     
  • Use CMD + R to reverse the audio file. This change will be carried forward to other software.
     
  • Use CMD + I to invert the channels on a stereo audio file. This change will be carried forward to other software.
     
  • Use CMD + N to normalize the audio file. This change will be carried forward to other software.
     
  • Use SHIFT + CMD + M to sum a stereo file to mono. This change will be carried forward to other software.
     
  • Use SHIFT + CMD + LEFT ARROW to use the left channel only on a stereo file. This change will be carried forward to other software.
     
  • Use SHIFT + CMD + RIGHT ARROW to use the right channel only on a stereo file. This change will be carried forward to other software.

Exporting sounds to other apps

You can export sounds to any audio or video editing software. To do this just select a section of a sound (or the entire file) and drag and drop.

You can also work in Dock Mode; this compact mode allows you to see Soundly and your editing software at the same time. To use this mode, just click on "Dock Mode" at the top right hand side of Soundly's window.

Using Dock Mode

Using Soundly with Pro Tools

To spot sound directly to Pro Tools you can either use "S" to spot to the cursor or "B" to spot to the Pro Tools bin. This is the best way to ensure that the sounds end up in the session's audio files folder. When spotting to the cursor, make sure you have selected the proper track beforehand, since Soundly will spot the file to the selected track in Pro Tools.

When spotting a stereo file to a mono track, Soundly will first bring in the left channel and then the right channel on top of it, so you will end up with just the right channel, which is OK if you just want a mono effect. If you want to use the other channel, you can either undo in Pro Tools (this will undo bringing in the right channel, so you will be left with just the left channel) or you can use the commands explained above to bring in just the left channel or a mono sum of both channels.

If your selection doesn't span the whole file and you spot to Pro Tools, you will just bring in that particular selection, but it will include handles so you will be able to extend the clip if needed.

Using Playlists

Playlists are a great way of saving your favourite sounds per category (Whooshes, Grabs, Night) or per client or show you are working on. In the picture on the right, you can see some of the playlists I'm currently using to give you an idea of categories you could use.

To create a new playlist, just click on "New Playlist" just below your libraries. Name your playlist and you can start adding sounds by right-clicking on any sound and selecting "Add to playlist" and your desired playlist.

The Starred playlist is a special playlist that comes by default and that you can use to tag your favourite sounds.

You can also share any playlist with other Soundly users. This is really handy for teams working on the same set of projects. Right click on any playlist and select Share. You can then add any users using their emails. You can also choose to let them add files to the playlist and even give them permission to manage it.

Settings Page

Access the settings page by clicking on "SETTINGS" at the top right hand side, going to Soundly > Settings or with the shortcut CMD + [COMMA].

  • Output Device: Your Audio output device.
  • Auto play: When browsing, selecting a new audio file will play it automatically.
  • Loop play: Files will be played on loop. Shortcut: SHIFT + CMD + L
  • Hide on drag out: Soundly window will hide when dragging and dropping a sound.
  • Auto resize search result column: When searching, columns will automatically resize to fit the content.
  • Window always on top: Soundly window will always stay on top, useful when using Dock Mode.
  • File name on export: You can choose whether the file name or the description will be carried over when exporting to other software.
  • Output format: The format that exported files will have. It can be the same as the original file ("Same as input") or a custom one.
  • Reset local database: Deletes the local database.
  • Offline License: This option allows you to export and import an offline license if you need to work on a computer not connected to the internet.
  • Audio storage location: If you work with video editing software, Soundly will not automatically save exported audio to the project's audio folder. With this feature, you can manually set that path so Soundly saves the files you export to that folder.
  • File transfer quality: If you have a slow internet connection you can use the low quality settings to load the files faster.
  • Update: Checks if you are using the latest version of Soundly.
  • Networking: Use this option if you are accessing the internet through a proxy server.
  • ReWire: ReWire lets you audition Soundly through a track in your DAW. Especially useful if you use Pro Tools HD.

Soundly shortcuts

  • CMD + E: Edit the name of the file.
  • CMD + T: Edit the description of the file.
  • CMD + Number: Activates a saved folder selection preset.
  • CMD + [COMMA]: Preferences page.
  • CMD + R: Reverse an audio file. 
  • CMD + I: Invert the channels on a stereo audio file.
  • CMD + N: Normalize the audio file.
  • SHIFT + CMD + M: Sum a stereo file to mono.
  • SHIFT + CMD + LEFT ARROW: Use the left channel only on a stereo file.
  • SHIFT + CMD + RIGHT ARROW: Use the right channel only on a stereo file.
  • SHIFT + CMD + L: Toggle loop play on and off.

Features for the future

This is just a compilation of ideas from myself and other colleagues that we would love to see in Soundly in the future. If there is anything that you think is missing here, leave a comment and I'll add it to the list. As you can see, some features have already been added!

  • Being able to use "" to search for exact words but not words containing them. So, if you search for "car", you won't find cartoon or cardboard.
  • File descriptions get updated in real time between different users accessing the same files. Added in version 2.0
  • Being able to edit more than one rule at once for a given field. For example, you might want to add "speed" to the end of the description and also replace all instances of "vehicle" with "car" within the description. This would be very handy when renaming big libraries.
  • Search statistics. Being able to see what you search the most would be awesome.
  • Multi-edit is great because of the replacing tools; sometimes I'd love to be able to use it on just one file.
  • Sort playlists in folders and share those folders.
  • More boolean logic for searches, like looking for anything with the words "dog" or "wolf".
  • More metadata fields or even custom metadata fields.
  • Ability to spot to Pro Tools with handles. Added in version 2.0
  • A button allowing you to "Go back" to previous page. Added in version 2.0
  • Being able to do conditional formatting on metadata. For example, being able to tell Soundly: within this selection, if you see the term "wind", add "ambience".
  • Add local tags tied to your account to cloud libraries.
  • Spell correction in search bar. Added in version 1.2
  • Being able to add tags in a particular time along a file's waveform.
  • An option when right-clicking on a Freesound track to open that sound's page in a browser.

An Introduction to Game Audio

Talking to a fellow sound designer about game audio, I realised that he wasn't aware of some of the differences between working on audio for linear media (film, animation, TV, etc) and for interactive media (video games).

So this post is kind of my answer to that: a brief introduction to the creative depth of video game sound design. It is aimed at audio people who are maybe not very familiar with the possibilities this world offers, or who just want to see how it differs from, say, working on film sound design.

Of course there are many differences between linear and interactive sound design, but perhaps the most fundamental, and the most important for somebody new to interactive sound design, is the concept of middleware. In this post, I’ll aim to give beginners a first look at this unfamiliar tool.

I'll use screenshots from Unearned Bounty, a project I've been working on for around a year now. Click on them to enlarge. This game runs with Unity as the engine and Fmod as the audio middleware.

Linear Sound vs Interactive Sound

Video games are an interactive medium and this is going to influence how you approach the sound design work. In traditional linear media, you absolutely control what happens in your timeline. You can be sure that, once you finish your job, every time anyone presses play that person is going to have the same audio experience you originally intended, provided that their monitoring system is faithful.

Think about that. You can spend hours and hours perfecting the mix to match a scene since you know it is always going to look the same and it will always be played in the same context. Far away explosion? Let's drop a distant explosion there or maybe make a closer FX sound further away. No problem.

In the case of interactive media, this won't always be the case. Any given sound effect could be affected by game-world variables, states and context. Let me elaborate on those three factors using the example of the explosion again. In the linear case, you can design the perfect explosion for the shot, because it is always going to be the same. Let's see what happens in the case of a game:

  • The player could be just next to the explosion or miles away. In this case, the distance would be a variable that is going to affect how the explosion is heard. Maybe the EQ, reverb or compression should be different depending on this.
  • At the same time, you probably don't want the sound effect to be exactly the same if it comes from an ally instead of the player. In that case, you'd prefer to use a simpler, less detailed SFX. One reason for this could be that you want to enhance the sound of what the player does so her actions feel clearer and more powerful. In this case, who the effect belongs to would be a state.
  • Lastly, it is easier to make something sound good when you always know the context. In video games, you may not always know or control which sounds will play together. This forces you to play-test to make sure that sounds not only work in isolation but also together, and in the proportions the player is usually going to hear them. Also, different play styles will alter these proportions. So, following our example, your explosion may sound awesome, but maybe dialogue is usually playing at the same time and getting lost in the mix, and you'd need to account for that.

After seeing this, linear sound design may feel more straightforward, almost easy in comparison. Well, not really. I'll explain with an analogy. Working on linear projects, particularly movies, is like writing a book. You can really focus on developing the characters, plot and style. You can keep improving the text and making rewrites until you are completely satisfied. Once it's done, your work is always going to deliver the same experience to anyone who reads the book.

Interactive media, on the other hand, is closer to being a game master preparing a D&D adventure for your friends. You may go into a lot of detail with the plot, characters and setting but, as any experienced GM knows, players will be somewhat unpredictable. They will spend an annoying amount of time exploring some place that you didn't give enough attention to and then they will circumvent the epic boss fight through some creative rule bending or a clever outside-the-box idea.

So, as you can see, being a book writer or working in linear sound design gives you the luxury of really focusing on the details you want, since the consumer experience and interaction with your creation is going to be closed and predictable. In both D&D and interactive media, you are not really giving the final experience to the players, you are just providing the ingredients and the rules that will create a unique experience every time.

Creating those ingredients and rules is our job. Let's explore the tools that will help us with this epic task.

Audio Middleware and the Audio Event

Here you can see code being scary.

Games, or any software for that matter, are built from a series of instructions that we call code. This code manages and keeps track of everything that makes a game run: graphics, internal logic, connecting to other computers through the internet and, of course, audio.

The simplest way of connecting a game with some audio files is to call them from the code whenever we need them. Let's think about an FPS game. We would need a sound every time the player shoots her shotgun. So, in layman's terms, the code would say something like: "every time the player clicks her mouse to shoot, please play this shotgun.wav file that you will find in this particular folder". And we don't even need to say please, since computers don't usually care about such things.
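
In code, that rudimentary approach looks roughly like the toy sketch below. The play_file helper and the player object are hypothetical stand-ins for whatever the engine actually exposes:

```python
# A toy version of the "call the file straight from the code" approach.
def play_file(path):
    print("playing", path)               # a real engine would decode and mix the file here

def on_mouse_click(player):
    if player["weapon"] == "shotgun":
        play_file("sfx/shotgun.wav")     # hard-coded file: no variation, no mixing logic
```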

This is how all games used to be made and this approach is still very much in use. It is very straightforward but also very limited. Incorporating the audio files into the game is a process usually called implementation, and this is its most rudimentary form. The thing is, code can be a little scary at first, especially for us audio people who are not very familiar with it. Of course, we can learn it, and it is an awesome tool if you plan to work in the video game industry, but at the end of the day we want to be focusing on our craft.

Middleware was created to help us with this and to fill the gap between the game code and the audio. It serves as a middle man, hence the name, allowing sound designers to just focus on the sound design itself. In our previous example, the code was pointing to specific audio files that were needed at any given moment. Middleware does essentially the same thing but puts an intermediary in the middle of the process. This intermediary is what we call an audio event.

An example of audio events managing the behaviour of the pirate ships.

An audio event is the main functional unit that the code will call whenever it needs a sound. It could be a gunshot, a forest ambience or a line of dialogue. It could contain a single sound file or dozens of them. Any time something makes a sound, it is triggering an event. The key thing is that, once the code is pointing to an event, we have control: we can make it sound the way we want, we are in our territory.
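
To make the contrast with the earlier sketch concrete, here is the same shotgun going through an event instead of a file path. This is a toy stand-in, loosely inspired by how FMOD or Wwise calls look from game code; the class and method names are hypothetical:

```python
class AudioEngine:
    def post_event(self, event_name, **game_params):
        # In real middleware, the event's layers, variations and mixing decisions
        # live in the middleware project, not in the game code.
        print("triggering event", event_name, game_params)

audio = AudioEngine()

def on_shotgun_fired(player_position):
    audio.post_event("Play_Shotgun", position=player_position)   # the code only names the event
```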

And this is because middleware uses tools that we audio people are familiar with. We'll find tracks, faders, EQs and compressors. Keep in mind that these tools are still essentially code; middleware is just offering us the convenience of having them in a comfortable and familiar environment. It brings the DAW experience to the game development realm.

Audio middleware can be complex and powerful, and I'd need a whole series of posts to tell you what it can do and how. So, for now, I'm just going to go through three main features that should give you an idea of what it can offer.

I - Conventional Audio Tools within Middleware

Middleware offers a familiar environment with tracks, timelines and tools similar to the ones found in your DAW. Things like EQ, dynamics, pitch shifters or flangers are common.

This gives you the ability to tweak your audio assets without needing to go back and forth between different programs. You are probably still going to start in your DAW and build the base sounds there using conventional plugins, but being able to also do processing within the middleware gives you flexibility and, more importantly, a great amount of power, as you'll see later.

II - Dealing with Repetition and Variability

The player may perform some actions over and over again. For example, think about footsteps. You generally don't want to just play the same footstep sound every single time. Even having a set of, say, 4 different footsteps is going to feel repetitive eventually. This repetitiveness is something that older games suffer from and that modern games generally try to avoid. The original 1998 Half-Life, for example, uses a set of 4 footstep sounds per surface. Having said that, it may still be used when looking for a nostalgic or retro flavour, the same way pixel art is still used.

Middleware offers us tools to make several instances of the same audio event sound cohesive but never exactly identical. The most important of these tools are variations, layering and parameter randomization.

The simplest approach to avoiding repetition is just recording or designing several variations of the same effect and letting the middleware choose randomly between them every time the event is triggered. If you think about it, this imitates how reality behaves. A sword impact or a footstep is not going to sound exactly the same every single time, even if you really try to use the same amount of force and hit in the same place.

You could also break up a sound into different components or layers. For example, a gunshot could be divided into a shot impact, its tail and the bullet shell hitting the ground. Each of these layers could also have its own variations. So now, every time the player shoots, the middleware is going to randomly choose an impact, a tail and a bullet shell sound, creating a unique combination.

Another cool thing to do is to have an event with a special layer that is triggered very rarely. By default, every layer in an event has a 100% probability of being heard, but you can tweak this value to make it more infrequent. Imagine, for example, a power-up sound that has an exciting extra sound effect but is only played 5% of the time the event is called. This is a way to spice things up and also reward players who spend more time playing.

An additional way of adding variability would be to randomize not only which sound clip will be played, but also its parameters. For example, you could randomize volume, pitch or pan within a certain range of your choice. So, every time an audio clip is called, a different pitch and volume value will be randomly picked.

Do you see the possibilities? If you combine these three techniques, you can achieve an amazing degree of variability, detail and realism while using a relatively small number of audio files.
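
As a rough sketch of how those three techniques combine (all file names and the play_clip helper are made up; in practice the middleware does this for you):

```python
import random

def play_clip(path, pitch_semitones, volume_db):
    # Stand-in for the engine/middleware playback call.
    print(f"{path} (pitch {pitch_semitones:+.1f} st, {volume_db:+.1f} dB)")

LAYERS = {
    "impact":    (1.0,  ["sword_impact_01.wav", "sword_impact_02.wav", "sword_impact_03.wav"]),
    "ring":      (1.0,  ["sword_ring_01.wav", "sword_ring_02.wav"]),
    "sweetener": (0.05, ["sword_shimmer_01.wav"]),   # rare layer, only heard 5% of the time
}

def trigger_sword_impact():
    for probability, variations in LAYERS.values():
        if random.random() > probability:
            continue                                       # skip the rare layer most of the time
        play_clip(
            random.choice(variations),                     # pick a random variation per layer
            pitch_semitones=random.uniform(-2.0, 2.0),     # small random detune
            volume_db=random.uniform(-3.0, 0.0),           # small random level change
        )

trigger_sword_impact()
```

Run trigger_sword_impact a few times and you get a slightly different combination every time, which is exactly the point.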

Above, you can see the collision_ashore event, which is triggered whenever a ship collides with an island. It contains 4 different layers:

  • A wood impact.  (3 variations)
  • Sand & dirt impacts with debris. (3 variations)
  • Wooden creaks (5 variations)
  • A low frequency impact.

As I said, each time the event is triggered, one of these variations within each layer will be chosen. If we then combine this with some pitch, volume and EQ randomization, we ensure that every instance of the event will be unique but cohesive with the rest.

III - Connecting audio tools to in-game variables and states.

This is where the real power resides.

Remember the audio tools built into middleware that I mentioned before? In the first section I showed you how we can use these audio tools the same way we use them in any DAW. Additionally, we can also randomize their values, as I showed you in the second section. So here comes the big one.

We can also automate any parameter like volume, pitch, EQ or delay in relation to anything going on inside the game. In other words, we will have a direct connection between the language of audio and the language the game speaks, the code. Think about the power that gives you. Here are some examples:

  • Apply an increasing high pass filter to the music and FX as the protagonist's health gets lower.
  • Apply a delay to cannon shots that gets longer the further away the shot is, creating a realistic depiction of how light travels faster than sound.
  • Make the tempo of a song get faster and its EQ brighter as you approach the end of the level.
  • As your sci-fi gun wears out, its sounds get more distorted and muffled. You feel so relieved when you can repair it and get all its power back.
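
To make the second example above concrete, here is a rough sketch of deriving a playback delay and a low-pass cutoff from the in-game distance. The numbers and the filter curve are made up for illustration:

```python
SPEED_OF_SOUND_MS = 343.0   # metres per second

def cannon_shot_params(distance_m):
    delay_s = distance_m / SPEED_OF_SOUND_MS              # the bang arrives after the flash
    cutoff_hz = max(800.0, 16_000.0 - 10.0 * distance_m)  # farther away = duller sound
    return delay_s, cutoff_hz

delay, cutoff = cannon_shot_params(500.0)
print(f"start after {delay:.2f}s, low-pass at {cutoff:.0f} Hz")  # ~1.46 s, 11000 Hz
```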

Do you see the possibilities this opens? You can express ideas in the game's plot and mechanics with dynamic and interactive sound design! Isn't that exciting? The takeaway concept that I want you to grasp from this post is that you would never be able to do something this powerful with just linear audio. Working on games makes you think much harder about how the sound coming from objects and creatures behaves, evolves and changes.

As I said before, you are just providing the ingredients and the rules; the sound design itself only materializes when the player starts the game.

You can see in the above screenshot how an in-game parameter, distance in this case, affects an event layer's volume, reverb send and EQ.

How to get started

If I have piqued your interest, here are some resources and information to start with.

Fmod and Wwise are currently the two main middleware tools used by the industry. Both are free to use and not very hard to get into. You will need to re-wire your brain a bit to get used to the way they work, though. Hopefully, reading this post gave you a solid introduction to some of the concepts and tools they use.

If I had to choose one of them, Fmod may look less intimidating at first and maybe more "DAW user friendly". Of course, there are other options, but if you just want to have a first contact, Fmod does the job.

There are loads of resources and tutorials online to learn both Fmod and Wwise, but since I think that the best way to really learn is to jump in and make something yourself, I'll leave you with something concrete to start from for each of them:

Fmod has these very nice tutorials with example projects that you can download and play with.

Wwise has official courses and certifications that you can do for free and that also include example projects.

And of course, don't hesitate to contact me if you have further questions. Thanks for reading!

Pro Tools Functions, Tips and Tricks for Sound Design

Here is a compilation of tips and tricks for sound design and editing with Pro Tools. Some of these shortcuts could be obvious to the seasoned sound designer, but you never know, you might learn a new trick or two. For the purposes of this post, I'll assume that most readers are Mac users, although most of this content should also be PC-compatible.

The post contains short videos which demonstrate the shortcuts discussed in each section. If you have Pro Tools at hand, I would recommend that you open it and follow along. Let's go!

Use memory locations to mark sync points and scene changes.
Pretty basic but worth mentioning. You can add memory locations to your timeline and use them to mark certain key moments. As you import new clips to your session, these markers will be very helpful in lining up different layers. You can also jump between markers with shortcuts, which is particularly useful in long sessions with multiple scene changes.

Shortcuts:

  • Enter (numeric keyboard): Create a new marker at the playhead position.
  • Cmd + 5 (numeric keyboard): Open the memory location window.
  • Opt + Click on a marker: Deletes the marker.
  • "." (numeric keyboard) + marker number + "." (numeric keyboard): Jump to a marker location.

Note: If your numeric keypad is on Classic Mode you can skip the first "." when recalling a memory location.


Using X-Form + Elastic Properties
This is my preferred method for quick clip pitch and length changes. It doesn't work very well for big changes, but it is good enough for small adjustments and you can tweak both parameters independently. This is not a real-time process: the changes are rendered offline and you keep the original version if you need to go back to it.

To use this method, you will need to activate X-Form in the track's elastic properties; it's just under the track automation mode. (Video below)

X-Form pitch changes work great when you have a sound that is similar to what you need but you feel it needs to be a little bigger or smaller in weight. The results are not always natural sounding, but adjusting the pitch can sometimes bring a clip close to the sound you're looking for.

Also, being able to change the length of a clip makes your library instantly bigger. Now your clips may work in more situations, as you can make them shorter or longer to fit the context.

And don't forget this tool can also be used as a creative resource. Listen to the following clip, where some plastic bag impacts are extremely slowed down, creating a weird, distorted kind of sci-fi sound. You can hear below the original sound first and then the processed one.

Shortcut:

  • Alt + 5 (numeric keyboard): Opens the elastic properties for the selected clip.

Tab to transients. 

Again, pretty basic but super useful. When activated (under the trim tool or with the fancy shortcut), you can use the Tab key to jump between transients instead of clip boundaries. Very handy when editing footsteps, gunshots or impacts.

This function is great when working with just a few short clips, but if you want to create clip separations on transients in a long file with loads of steps, the best way to do this is the "Separate Clip on Transients" function.

Shortcuts:

  • Opt + Cmd + TAB: Toggles "Tab to Transients" on and off.
  • TAB: Jump between transients or clip boundaries.
  • B: Separate Clip.
  • Separate Clip on Transients: No official shortcut, but keep reading for a workaround!

Shortcuts in the ASDFG keys (cuts and fades). 

Naturally, my left hand usually sits in a WASD position, but when editing in Pro Tools your fingers should be on the ASDFG keys so you can quickly trim and fade clips. It might take some time to get used to these, but in no time at all it will become second nature. Remember that you need to be in Keyboard Focus mode to use them.

Shortcuts:

  • A: Trim Start
  • S: Trim End
  • D: Fade In
  • F: Cross Fade
  • G: Fade Out

Ctrl + click to move a clip to the playhead position.

To align two clips, select the clip to which you want to align the other(s) and then Ctrl + click on the second clip to align them. This also works on markers. Simple and neat.


Move a clip from one track to another without changing its sync.

Just press and hold Ctrl while moving a clip from one track to another and it will keep its timeline position no matter how much you move your mouse horizontally. Add Opt to the shortcut to also duplicate the clip.


Fill gaps. 

This is very useful when you have an unwanted noise on an ambience track.

To eliminate the offending noise, first select it and press Cmd + B to remove it. Then select and copy another similar region in the audio clip, ideally longer than the gap you have removed. Lastly, select the area to fill and do a "paste special repeat to fill selection". You can then crossfade the boundaries to make it seamless.

You can use this method rather than copying a section of audio and having to adjust the clip manually to fill the gap; this shortcut does that tedious work for you!

One last thing: if the selection you copy is smaller than the gap itself, Pro Tools is going to paste the same clip several times until the gap is filled. It will also create crossfades between these copies of your selection. This probably won't sound very smooth, but it may work if the gap is not very big and/or the scene is busy.

Shortcuts:

  • Cmd + B: Clear or remove selection from clip.
  • Opt + Cmd + V: Paste special repeat to fill selection.

Easy access to your most used plugins 

You can select your preferred EQ and Compressor plug-ins by going to Setup > Preferences > Mixing.

You can also select your most commonly used plug-ins to appear at the top of your inserts list by holding Cmd and then selecting the relevant plug-in in an insert slot. This also works with AudioSuite plug-ins.


Have Audiosuite plugins at hand with window configurations.

The previous trick will allow you to have your AudioSuite plug-ins at hand, but there is an even quicker and better way to access AudioSuite plugins.

First, open the AudioSuite plugin of your choice; you can even do this with more than one plugin at the same time. Now, create a new window configuration using the Window Configurations window or the shortcut. You can then call that window configuration to summon the plugin, or even incorporate it into a memory location, as you can see in the picture on the right.

Keep in mind that window configurations can do much more than that: you can save any edit and/or mix window layout and easily toggle between them.

More info.

Shortcuts:

  • Opt + Cmd + J: Open Window Configurations.
  • "," + number from 1 to 99 + "+" (numeric keyboard): Create new window configuration.
  • "," + window configuration number + "*" (numeric keyboard): Recall window configuration.

Or impress your friends with custom shortcuts...

Memory locations + window configurations are very powerful. But there is a hidden feature that may be even better for accessing Pro Tools functions: you can create your own custom shortcuts for unmapped commands, without any external macro software.

As far as I know, this only works on Mac. Just go to Apple > System Preferences > Keyboard > Application Shortcuts and add Pro Tools to the list if it isn't there already. Now you can create (plus symbol button) a new shortcut for any Pro Tools function your heart desires, as long as that function appears in any Pro Tools menu, even AudioSuite plugin names. You just need to type the exact name and then add the shortcut you want to assign to that function.

This blew my mind when I discovered it: you can now access loads of functions even if they are buried under three sub-menus.
Here is a list of some of the custom shortcuts I'm currently using (shortcut, exact function name, description). I tend to use Control as the modifier key since Pro Tools doesn't use it much:

  • Ctrl + C: Color Palette. Pimp those tracks!
  • Ctrl + S: Izotope RX 6 Connect. Opens the window to send audio to RX.
  • Ctrl + R: Reverse. AudioSuite Reverse plugin.
  • Ctrl + V: Vari-Fi. AudioSuite Vari-Fi plugin.
  • Ctrl + T: At Transients. Separates a clip on its transients.
  • Ctrl + E: EQ3 7-Band. AudioSuite EQ plugin.
  • Ctrl + G: Render. Renders clip gain.
  • Ctrl + P: Preferences...
  • Ctrl + D: Delete. Deletes empty tracks.
  • Ctrl + Opt + D: Delete... Deletes non-empty tracks.
  • Opt + Cmd + S: Save Copy In...
  • Ctrl + Opt + Cmd + B: QuickTime... Bounce to QuickTime.

As you can see, you need two distinct shortcuts to delete tracks, since the dialog is different depending on the contents of the track. With the setup I have, using "Ctrl + Opt + D" will always delete the track regardless of its content, but it will only show the warning window if the track has content on it.

Of course, these are only the ones I currently have; I change them all the time. There are many other functions that you could hook up, like I/O, Playback Engine, Hardware or Make Inactive (for tracks). Go nuts!


Automation Follows Edit is your friend.

By default, it is a good idea to keep this option on, so when you move a clip, its automation moves with it. But sometimes, especially when doing sound design, you want to swap a clip with another without moving the automation, so you can hear how the same processing affects a different clip.

Just remember to turn this back on when you finish or you may mess things up badly. In newer Pro Tools versions like 12, the button will go bright orange to remind you that the function is off, which is very handy.


Moving through the session

My workflow is based on the mouse wheel because that's what I had when I started with Pro Tools. It might not be the fastest or most efficient way of working but it’s what I’m used to and I can move pretty fast through a session with this method. 

I use the mouse wheel to move vertically from track to track, and I like to use the "Mouse Wheel Scrolling Snaps to Track" feature (Pro Tools preferences > Operation > Misc) so every mouse wheel click moves exactly one track.

To move horizontally, I use Shift + the mouse wheel. To zoom, I use Alt + the mouse wheel or the R/T keys.

Shortcuts:

  • Shift + Mousewheel: Move horizontally in the timeline.
  • Alt + Mousewheel: Zoom in and out.
  • R: Zoom out.
  • T: Zoom in.

Automation

You don't usually need to do complex automation while designing, but here are some handy shortcuts to speed you up so you can focus on the actual sound design. One of the trickiest and most annoying things is moving automation around and automating one or more parameters on a plugin. These shortcuts may help. For accessibility, I'll avoid talking about HD-only features.

Shortcuts:

  • "," and ".": Nudge automation. Select a section of an automation curve and nudge it into position. Really useful when you are early or late on an automation pass (pictured in the video below).
  • Ctrl + Opt + Cmd + Click: Enable automation. Use it on a plugin parameter to make it automatable, or on the "Plug-in Automation Enable" button to enable everything.
  • Ctrl + Cmd + Click: Show automation lane. Shows the automation lane for the selected parameter. Saves me hours.
  • Ctrl + Cmd + Left & Right Arrow Keys: Change track view. Flip through track views. Very handy to go between waveform, volume and pan views.

Import Session Data

This is a very powerful and somewhat overlooked feature that allows you to import audio and other information from other sessions.

The key concept is that you can bring different elements independently. You could, for example, bring one track's plugins or its I/O without bringing any audio. 

If you decide to import audio, you can choose between just referencing the audio from the other session or copying it into your current session's audio folder, which is the safer option.

You can also bring in memory locations and window configurations. Remember those fancy window configuration shenanigans I was talking about above? You can import your little creations from other sessions with this!


Show number of tracks

Just go to View > Track Number. Very simple, but sometimes you want to know how many tracks you have used so you can brag about your unreasonable layering needs.


Miscellaneous useful shortcuts

And finally, here are some random shortcuts and functions:

- "*" (numeric keypad): Enter Timecode. Lets you type in the counter window so you can jump to any point in the timeline. This counter also acts as a calculator, so you can type + or - to jump a certain amount of timecode forwards or backwards.
- Shift + Cmd + K: Export Clips as Files. A somewhat hidden feature (it lives in the Clip List, not the main menus) that lets you quickly export any clip as a separate file. You can choose the settings of the new audio file, but remember this is not the same as a bounce: inserts and sends won't be considered.
- Alt + Cmd + "[" or "]": Waveform Zoom. Zooms the waveform in or out. Very useful when you need to work at a zoom level that makes sense for the material you have.
- Ctrl + Opt + Cmd + "[": Reset Waveform Zoom. Resets the waveform zoom in case you want to see how loud a clip really looks.
- Ctrl + Shift + Arrow Up/Down (or mousewheel): Nudge Clip Gain. It even works on several clips at the same time. A must if you use clip gain a lot. Keep in mind that you will jump by a set amount of dB, which you can change at Preferences > Editing > Clips > Clip Gain Nudge Value. I usually use 0.5 dB, but sometimes I like a smaller value.
- No shortcut (double-click a crossfade and select Equal Power): Equal Power crossfades. Use this when there is a drop in volume on a crossfade; this setting will make the transition much smoother.

That's it. Questions? Suggestions? Did I forget your favourite trick? Leave a comment!

Ghost 1.0 Sound Design Postmortem

Hi there! Here is a brief summary of my sound design work on the game Ghost 1.0.
Since this is the first postmortem I've written, I'll go into some extra detail about my workflow, but I'll try to keep it all related to Ghost 1.0.

The Project

The game was developed by Francisco Téllez de Meneses (Fran). We had already worked together on Unepic, a pretty successful metroidvania RPG published on Steam and consoles.

I worked on Ghost 1.0 on and off between April 2014 and June 2016. To give you an idea of the size of the project, here are some numbers from the audio folder where I did all the work:

- 23K files in total.
- 37 GB in size.
- Around 200 Pro Tools folders.
- 386 unique audio files in the final game (not including voice overs).
- Over 1,000 lines of dialogue per language (English, Spanish and Russian localizations were made).
- Around 230 sound events covered in the game.

 

Game & Audio Engine

For this game, we used the Unepic engine (written in C), which already handles audio through DirectSound. The engine allowed us to port Unepic to a big variety of systems including Wii U, PlayStation 4, Xbox One, PC, Mac and Linux, and Ghost 1.0 is also planned to be ported to some of these systems in the near future.

DirectSound works nicely but sometimes I missed some middleware features like "live" layering, more complex randomization capabilities and better loop management. I realised that not using middleware forces me to really work on the SFX within Pro Tools and only bounce when everything seems perfect. Going back and forth between the game and the DAW is very common. On the other hand, with FMOD or Wwise, I often find myself just casually creating the SFX layers knowing that I can put them together and even EQ or compress them within the middleware environment.
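Just to illustrate the kind of randomization middleware gives you for free, here is a minimal C sketch of picking between pre-bounced variations with small random gain and pitch offsets. The play_sound() helper, the file names and the value ranges are made up for the example; this is not Unepic or Ghost 1.0 code.

/* Minimal, made-up example: simple in-engine SFX variation.
 * play_sound() is a stand-in for whatever the engine's real
 * playback call would be (e.g. a DirectSound buffer wrapper). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void play_sound(const char *file, float gain_db, float pitch)
{
    /* Stub: a real engine would set the buffer's volume/frequency and play it. */
    printf("play %s  gain %+.1f dB  pitch x%.2f\n", file, gain_db, pitch);
}

static float frand(float min, float max)
{
    return min + (max - min) * ((float)rand() / (float)RAND_MAX);
}

/* Pick one of several pre-bounced variations, avoiding back-to-back repeats,
 * and add a small random gain and pitch offset so repetition feels less mechanical. */
static void play_variation(const char *files[], int count, int *last)
{
    int pick = rand() % count;
    if (count > 1 && pick == *last)
        pick = (pick + 1) % count;
    *last = pick;
    play_sound(files[pick], frand(-1.5f, 0.0f), frand(0.97f, 1.03f));
}

int main(void)
{
    const char *blaster[] = { "blaster_a.wav", "blaster_b.wav", "blaster_c.wav" };
    int last = -1;
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; i++)
        play_variation(blaster, 3, &last);
    return 0;
}

Without middleware, even something this basic has to live in the engine (or you bounce the variation into the file itself), which is part of why I ended up doing so much of the work inside Pro Tools.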

This difference in the workflow is something to keep in mind while switching between projects with or without middleware.

Sonic Style

I began by working on some general sounds to define the game's tone and style. Footsteps, the starting weapons and the initial character skills were the very first things we focused on. The basic primary blaster pistol was one of the SFX that went through the most iterations: it's the gun every player starts with, so we wanted to make sure we had the right sound for it.

Here you can have a listen to how the starting gun SFX evolved. At the beginning, we were just trying out different styles until settling on a simpler, more mechanical sound. Version 9.3 was the one used in the game.

One of the main priorities when creating this sound was to avoid it becoming annoying through repetition. Sometimes you think you have a cool sound, but if the SFX is going to be played over and over again, you are often better off with something simpler and shorter. We also made it feel a little weak on purpose, with the idea of making the player feel more powerful when switching to better weapons.

The room where you start the game and try your primary weapon. I've been here too many hours.

After this initial phase, Fran would ask for new sound effects as the game grew and expanded over time. The nice thing about this way of working is that the project doesn't require constant attention; what it does require is the ability to quickly adapt to the tone of the game again whenever work does need to be done.

In this way, it was a very organic process, usually without strict deadlines, that allowed us to spend the proper amount of time creating the right SFX.

Keeping track

I used a spreadsheet to gather every relevant piece of information, including the name of the SFX, its duration, how it was triggered (one shot or loop), its location in the game, a description and examples. This was crucial for keeping track of dozens of sound effects at the same time; I found myself constantly going back to the spreadsheet to note down ideas or check on things.

Whenever a sound was going to be triggered by an animation and needed precise timing, I'd record a short video of the animation using OBS and then use it as a reference in Pro Tools.

SFX Approval

Any given SFX would need, on average, around 2-3 versions to be approved. Language is usually not very good at describing the abstract world of sound, so I always tried to get as much information as possible (the spreadsheet is great for this) to get close to the idea Fran had in mind.

Weapons and power-ups are a good example of SFX that can vary greatly in style. Sometimes just knowing it's a "cloaking device SFX" is not enough information to get it right. Should it feel powerful? Quiet? Electric? High tech? Is there an example from another game or movie? Asking these kinds of questions is crucial.

I always took extensive notes during meetings or, even better, recorded the whole thing. Having all the information and context for a given SFX in a video you can go back to was a blessing. I never trust my memory, and you probably shouldn't either.

Versions, versions, versions...

File Management & Version Control

I had a separate Pro Tools session per sound effect, and usually even separate sessions per version of a given SFX. I often had to go back to a previous version or combine layers from several of them, so giving every version its own session is, in my opinion, the safest way to go about this.

For version control, I keep a very simple but effective system: I use the suffix "_v1", "_v2", "_v3", etc. at the end of a sound file name. I recommend this system or a similar one, and I'd avoid using terms like "_final" or "_last" because you never know when you are going to need to go back and change something, even months later.

A sequential numeric system keeps everything tidy and clear. I would sometimes use sub versions like "_2.1" or "_2.2" when the sound design was essentially the same but there was a small difference in volume or EQ, usually to make the sound sit better in its context.

 

Implementation

Basically, testing new sounds was as easy as swapping the audio files in the game folders. To make work easier, I had access to a developer copy of the game that gave me instant access to the whole map and to cheats like being invincible or giving myself any weapon or power-up.

Having access to these kinds of tools is key to speeding up work, and it helped me a lot to focus on designing and testing the SFX.

Developer map screenshot

I first worked on Ghost 1.0 using a Mac mini to do the sound design and run Pro Tools, with a Boot Camp partition running Windows for the game. I also had the option of running Windows in a virtual machine, but a game still in development can be unstable at times, so running it natively on Windows was the most reliable option.

Going back and forth between two operating systems is quite slow and usually breaks your creative momentum when working and testing new sounds in the game.

So I later upgraded to a dual-computer setup. I still use my Mac mini as the Pro Tools and Soundly machine, and I custom-built a PC to run the games I'm working on. I share files between both computers via Ethernet, and share the keyboard and mouse using Synergy (by Symless), which can sometimes be a little problematic, especially if you are using an old macOS version, but in general I would recommend this setup.

Distance, Panorama & Reverb

A comparison between Unepic and Ghost 1.0. Both have a similar point of view.

Both Unepic and Ghost 1.0 use a classic Metroid-style 2D side-scrolling view that is particularly wide, so our hero looks pretty small on the screen. This presented some challenges for stereo placement and for getting the sense of distance right.

With a completely realistic approach, with the listener sitting on the camera, all sounds would be pretty much mono and somewhat far away. That would of course be quite dull, and we wanted to convey information about enemy placement using the stereo field. So what we did was imagine that the listener was somewhere between the camera and the character. This way we get some stereo width while keeping a sonic perspective that works with a wide camera angle where you can see several enemies and platforms at the same time.
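To make the idea concrete, here is a small, purely illustrative C sketch of that approach: place a virtual listener partway between the camera and the player, then derive pan from the horizontal offset and gain from the distance. The function names, the bias value and the attenuation curve are made up for the example; this is not the actual Ghost 1.0 code.

#include <math.h>
#include <stdio.h>

typedef struct { float x, y; } vec2;

#define LISTENER_BIAS  0.5f    /* 0 = listener on the camera, 1 = on the character */
#define MAX_HEAR_DIST  900.0f  /* world units after which a sound is silent (arbitrary) */

/* Compute pan (-1..1) and gain (0..1) for an emitter, using a virtual
 * listener placed between the camera centre and the player. */
static void position_sound(vec2 camera, vec2 player, vec2 emitter,
                           float half_screen_width, float *pan, float *gain)
{
    vec2 listener = {
        camera.x + (player.x - camera.x) * LISTENER_BIAS,
        camera.y + (player.y - camera.y) * LISTENER_BIAS
    };

    /* Pan from the horizontal offset, clamped to the stereo field. */
    *pan = (emitter.x - listener.x) / half_screen_width;
    if (*pan < -1.0f) *pan = -1.0f;
    if (*pan >  1.0f) *pan =  1.0f;

    /* Simple linear distance attenuation. */
    float dx = emitter.x - listener.x;
    float dy = emitter.y - listener.y;
    float dist = sqrtf(dx * dx + dy * dy);
    *gain = (dist >= MAX_HEAR_DIST) ? 0.0f : 1.0f - dist / MAX_HEAR_DIST;
}

int main(void)
{
    vec2 camera = { 0.0f, 0.0f }, player = { 200.0f, 0.0f }, enemy = { 650.0f, 120.0f };
    float pan, gain;
    position_sound(camera, player, enemy, 640.0f, &pan, &gain);
    printf("pan %+.2f  gain %.2f\n", pan, gain);
    return 0;
}

With the listener biased toward the player, distant enemies still pan clearly left or right, which is exactly the positional information we wanted to keep.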

We used baked-in reverbs since all the action happens in very similar environments: the space station's hallways and chambers. I may have used bigger reverbs for certain SFX that only play in large rooms, like boss rooms.

Sample sounds

I'll leave you with some weapon, user interface and environment sounds from the game. If you have any questions or want me to expand on anything, feel free to leave a comment.

Thanks for reading!