An Introduction to Game Audio

Talking to a fellow sound designer about game audio, I realised that he wasn't aware of some of the differences between working on audio for linear media (film, animation, TV, etc.) and for interactive media (video games).

So this post is kind of my answer to that: a brief introduction to the creative depth of video game sound design. It's aimed at audio people who may not be very familiar with the possibilities this world offers, or who just want to see how it differs from, say, working on film sound design.

Of course there are many differences between linear and interactive sound design, but perhaps the most fundamental, and the most important for somebody new to interactive sound design, is the concept of middleware. In this post, I’ll aim to give beginners a first look at this unfamiliar tool.

I'll use screenshots from Unearned Bounty, a project I've been working on for around a year now. Click on them to enlarge. The game runs on Unity as the engine, with FMOD as the audio middleware.

Linear Sound vs Interactive Sound

Video games are an interactive medium, and this is going to influence how you approach the sound design work. In traditional linear media, you have absolute control over what happens in your timeline. You can be sure that, once you finish your job, every time anyone presses play, that person is going to have the same audio experience you originally intended, provided that their monitoring system is faithful.

Think about that. You can spend hours and hours perfecting the mix to match a scene, since you know it's always going to look the same and it will always be played in the same context. A far-away explosion? Let's drop a distant explosion there, or maybe make a closer effect sound further away. No problem.

In the case of interactive media, this won't always be the case. Any given sound effect could be affected by game world variables, states and context. Let me elaborate on those three factors, using the example of the explosion again. In the linear case, you can design the perfect explosion for the shot, because it's always going to be the same. Let's see what happens in the case of a game:

  • The player could be right next to the explosion or miles away. In this case, the distance would be a variable that affects how the explosion is heard. Maybe the EQ, reverb or compression should be different depending on it.

  • At the same time, you probably don't want the sound effect to be exactly the same if it comes from an ally instead of the player. In that case, you'd prefer to use a simpler, less detailed SFX. One reason for this could be that you want to enhance the sound of what the player does, so her actions feel clearer and more powerful. In this case, who the effect belongs to would be a state.

  • Lastly, it's easier to make something sound good when you always know the context. In video games, you may not always know or control which sounds will play together. This forces you to play-test to make sure that sounds work not only in isolation but also together, and in the proportions the player is usually going to hear them. Different play styles will also alter these proportions. So, following our example, your explosion may sound awesome, but if dialogue usually plays at the same time and gets lost in the mix, you'll need to account for that.

After seeing this, linear sound design may feel more straightforward, almost easy in comparison. Well, not really. I'll explain with an analogy. Working on linear projects, particularly movies, is like writing a book. You can really focus on developing the characters, plot and style. You can keep improving the text and making rewrites until you are completely satisfied. Once it's done, your work is always going to deliver the same experience to anyone who reads the book.

Interactive media, on the other hand, is closer to being a game master preparing a D&D adventure for your friends. You may go into a lot of detail with the plot, characters and setting, but as any experienced GM knows, players will be somewhat unpredictable. They will spend an annoying amount of time exploring some place that you didn't give enough attention to, and then they will circumnavigate the epic boss fight with some creative rule bending or a clever outside-the-box idea.

So, as you can see, being a book writer or working in linear sound design gives you the luxury of really focusing on the details you want, since the consumer's experience of and interaction with your creation is going to be closed and predictable. In both D&D and interactive media, you are not really giving the final experience to the players; you are just providing the ingredients and the rules that will create a unique experience every time.

Creating those ingredients and rules is our job. Let's explore the tools that will help us with this epic task.

Audio Middleware and the Audio Event

Here you can see code being scary.

Games, or any software for that matter, are built from a series of instructions that we call code. This code manages and keeps track of everything that makes a game run: graphics, internal logic, connecting to other computers through the internet and, of course, audio.

The simplest way of connecting a game with some audio files is just calling them from the code whenever we need them. Let's think about an FPS game. We would need a sound every time the player shoots her shotgun. So, in layman's terms, the code would say something like: "every time the player clicks her mouse to shoot, please play this shotgun.wav file that you will find in this particular folder". And we may not even need to say please, since computers don't usually care about such things.
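In code, that direct approach could look something like the following minimal sketch. Note that PlayWavFile is a hypothetical helper standing in for whatever low-level call the engine actually exposes:

```cpp
#include <string>

// Hypothetical engine-provided call: load a file from disk and play it once.
void PlayWavFile(const std::string& path);

void OnPlayerShoot() {
    // Hard-wired, one-to-one mapping: this input always plays this exact file.
    PlayWavFile("assets/audio/shotgun.wav");
}
```

The mapping is rigid: one trigger, one file, and any change to how the sound behaves means changing the code.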

This is how all games used to be made, and it's still very much in use. This method is very straightforward but also very limited. Incorporating the audio files into the game is a process usually called implementation, and this is its most rudimentary form. The thing is, code can be a little scary at first, especially for us audio people who are not very familiar with it. Of course, we can learn it, and it's an awesome tool if you plan to work in the video game industry, but at the end of the day we want to be focusing on our craft.

Middleware was created to help us with this and to fill the gap between the game code and the audio. It serves as a middle man, hence the name, allowing sound designers to just focus on the sound design itself. In our previous example, the code was pointing to the specific audio files that were needed at any given moment. Middleware does essentially the same thing but puts an intermediary in the middle of the process. This intermediary is what we call an audio event.

An example of audio events managing the behaviour of the pirate ships.

An audio event is the main functional unit that the code will call whenever it needs a sound. It could be a gunshot, a forest ambience or a line of dialogue. It could contain a single sound file or dozens of them. Any time something makes a sound, it's triggering an event. The key thing is that, once the code is pointing to an event, we have control: we can make it sound the way we want. We are in our territory.
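To make that concrete, here is a minimal sketch using FMOD's Studio C++ API (the Unity integration wraps the same concepts in C#). The event path is made up for illustration, and error checking is omitted:

```cpp
#include "fmod_studio.hpp"

void OnCannonFired(FMOD::Studio::System* system) {
    // The code only knows the event by name; the files, layers, randomization
    // and effects behind that name all live inside the middleware project.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Weapons/CannonShot", &description);

    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);

    instance->start();
    instance->release();  // let FMOD clean the instance up when it finishes
}
```

Everything behind that event name is then edited in the middleware, not in code.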

And this is because middleware uses tools that we audio people are familiar with. We'll find tracks, faders, EQs and compressors. Keep in mind that these tools are still essentially code; middleware is just offering us the convenience of having them in a comfortable and familiar environment. It brings the DAW experience to the game development realm.

Audio middleware can be complex and powerful, and I'd need a whole series of posts to tell you what it can do and how. So, for now, I'm just going to go through three main features that should give you an idea of what it can offer.

I - Conventional Audio Tools within Middleware

Middleware offers a familiar environment with tracks, timelines and tools similar to the ones found in your DAW. Things like EQ, dynamics, pitch shifters or flangers are common.

This gives you the ability to tweak your audio assets without needing to go back and forth between different programs. You'll probably still start from your DAW and build the base sounds there using conventional plugins, but being able to also do processing within the middleware gives you flexibility and, more importantly, a great amount of power, as you'll see later.

II - Dealing with Repetition and Variability

The player may perform some actions over and over again. Think about footsteps, for example. You generally don't want to just play the same footstep sound every single time. Even a set of, say, 4 different footsteps is going to feel repetitive eventually. This repetitiveness is something that older games suffer from and that modern games generally try to avoid. The original 1998 Half-Life, for example, uses a set of 4 footstep sounds per surface. Having said that, it may still be used when looking for a nostalgic or retro flavour, the same way pixel art is still used.

Middleware offers us tools to make several instances of the same audio event sound cohesive but never exactly identical. The most important of these tools are variations, layering and parameter randomization.

The simplest approach to avoiding repetition is just recording or designing several variations of the same effect and letting the middleware choose randomly between them every time the event is triggered. If you think about it, this imitates how reality behaves. A sword impact or a footstep is not going to sound exactly the same every single time, even if you really try to use the same amount of force and hit the same place.

You could also break a sound up into different components or layers. For example, a gunshot could be divided into the shot impact, its tail and the bullet shell hitting the ground. Each of these layers could also have its own variations. So now, every time the player shoots, the middleware is going to randomly choose an impact, a tail and a bullet shell sound, creating a unique combination.

Another cool thing to do is to have an event with a special layer that is triggered very rarely. By default, every layer in an event has a 100% probability of being heard, but you can tweak this value to make it more infrequent. Imagine, for example, a power-up sound with an exciting extra sound effect that is only played 5% of the times the event is called. This is a way to spice things up and also reward players who spend more time playing.

An additional way of adding variability is to randomize not only which sound clip will be played, but also its parameters. For example, you could randomize volume, pitch or pan position within a certain range of your choice. So, every time an audio clip is called, different pitch and volume values are going to be randomly picked.

Do you see the possibilities? If you combine these three techniques, you can achieve an amazing degree of variability, detail and realism while using a relatively small number of audio files.

Above you can see the collision_ashore event, which is triggered whenever a ship collides with an island. It contains 4 different layers:

  • A wood impact (3 variations).

  • Sand & dirt impacts with debris (3 variations).

  • Wooden creaks (5 variations).

  • A low-frequency impact.

As I said, each time the event is triggered, one of the variations within each layer will be chosen. If we then combine this with some pitch, volume and EQ randomization, we can be sure that every instance of the event will be unique but still cohesive with the rest. The sketch below shows the idea in code.
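Here is a conceptual sketch of what the middleware is doing for us when collision_ashore fires. This is not real FMOD code; the layer list mirrors the event above, but the probabilities and randomization ranges are illustrative:

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

struct Layer {
    std::string name;
    int variations;     // how many alternative recordings this layer has
    float probability;  // chance the layer plays at all (1.0 = always)
};

std::mt19937 rng{std::random_device{}()};

float RandomRange(float lo, float hi) {
    return std::uniform_real_distribution<float>(lo, hi)(rng);
}

void TriggerCollisionAshore() {
    const std::vector<Layer> layers = {
        {"wood_impact",      3, 1.0f},
        {"sand_dirt_debris", 3, 1.0f},
        {"wooden_creak",     5, 1.0f},
        {"low_freq_impact",  1, 1.0f},  // set to e.g. 0.05f for a rare bonus layer
    };

    for (const Layer& layer : layers) {
        if (RandomRange(0.f, 1.f) > layer.probability)
            continue;  // the dice said this layer stays silent this time

        // Pick one variation, then apply small random offsets, like the
        // ranges you would dial in on the event in the middleware.
        int pick = std::uniform_int_distribution<int>(1, layer.variations)(rng);
        float pitchSemitones = RandomRange(-1.f, 1.f);
        float volumeDb = RandomRange(-2.f, 0.f);

        std::printf("play %s_%02d  pitch %+.2f st  volume %+.2f dB\n",
                    layer.name.c_str(), pick, pitchSemitones, volumeDb);
    }
}
```

In the actual tool you set all of this up visually, but under the hood it boils down to exactly this kind of dice rolling on every trigger.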

III - Connecting Audio Tools to In-Game Variables and States

This is where the real power resides.

Remember the audio tools built into middleware that I mentioned before? In the first section I showed you how we can use these audio tools the same way we use them in any DAW. Additionally, we can randomize their values, as I showed you in the second section. So here comes the big one.

We can also automate any parameter, like volume, pitch, EQ or delay, in relation to anything going on inside the game. In other words, we have a direct connection between the language of audio and the language the game speaks: the code. Think about the power that gives you. Here are some examples (with a code sketch after the list):

  • Apply an increasing high-pass filter to the music and FX as the protagonist's health gets lower.

  • Apply a delay to cannon shots that gets longer the further away the shot is, realistically depicting how light travels faster than sound: you see the flash before you hear the bang.

  • Make the tempo of a song get faster and its EQ brighter as you approach the end of the level.

  • As your sci-fi gun wears out, its sounds get more distorted and muffled. You feel so relieved when you can repair it and get all its power back.
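Here is a hedged sketch of what the game-code side of this looks like with FMOD's Studio C++ API. The parameter names ("Health" and "Distance") are made up, and the interesting part, the mapping from each parameter to a filter cutoff or delay time, is authored as curves inside the middleware, not in code:

```cpp
#include "fmod_studio.hpp"

void UpdateAudioParameters(FMOD::Studio::EventInstance* music,
                           FMOD::Studio::EventInstance* cannonShot,
                           float playerHealth, float shotDistanceMeters) {
    // Inside the middleware, "Health" can drive a high-pass filter cutoff,
    // thinning out the music as the protagonist gets weaker.
    music->setParameterByName("Health", playerHealth);

    // Sound travels at roughly 343 m/s, so a shot 686 m away should arrive
    // about 2 seconds after the muzzle flash; mapping "Distance" to a delay
    // on the event recreates that. (Older FMOD versions use setParameterValue.)
    cannonShot->setParameterByName("Distance", shotDistanceMeters);
}
```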

Do you see the possibilities this opens? You can express ideas from the game's plot and mechanics with dynamic, interactive sound design! Isn't that exciting? The takeaway I want you to grasp from this post is that you would never be able to do something this powerful with just linear audio. Working on games makes you think much harder about how the sound coming from objects and creatures behaves, evolves and changes.

As I said before, you are just providing the ingredients and the rules; the sound design itself only materializes when the player starts the game.

You can see in the above screenshot how an in-game parameter (distance, in this case) affects an event's layer volumes, reverb send and EQs.

How to get started

If I have piqued your interest, here are some resources and information to start with.

FMOD and Wwise are currently the two main middleware tools used by the industry. Both are free to use and not very hard to get into. You will need to re-wire your brain a bit to get used to the way they work, though. Hopefully, reading this post gave you a solid introduction to some of the concepts and tools they use.

If I had to choose one of them, FMOD may look less intimidating at first and maybe more "DAW user friendly". Of course, there are other options, but if you just want to have a first contact, FMOD does the job.

There are loads of resources and tutorials online for learning both FMOD and Wwise, but since I think the best way to really learn is to jump in and make something yourself, I'll leave you with something concrete to start from for each of them:

FMOD has these very nice tutorials with example projects that you can download and play with.

Wwise has official courses and certifications that you can do for free and that also include example projects.

And of course, don't hesitate to contact me if you have further questions. Thanks for reading!

Pro Tools Functions, Tips and Tricks for Sound Design

Here is a compilation of tips and tricks for sound design and editing with Pro Tools. Some of these shortcuts may be obvious to the seasoned sound designer, but you never know, you might learn a new trick or two. For the purposes of this post, I'll assume that most readers are Mac users, although most of this content should also work on PC.

The post contains short videos which demonstrate the shortcuts discussed in each section. If you have Pro Tools at hand, I would recommend that you open it and follow along. Let's go!

Use memory locations to mark sync points and scene changes.
Pretty basic but worth mentioning. You can add memory locations to your timeline and use them to mark certain key moments. As you import new clips to your session, these markers will be very helpful in lining up different layers. You can also jump between markers with shortcuts, which is particularly useful in long sessions with multiple scene changes.

  • Enter (numeric keypad): Creates a new marker at the playhead position.
  • Cmd + 5 (numeric keypad): Opens the Memory Locations window.
  • Opt + click on a marker: Deletes the marker.
  • "." (numeric keypad) + marker number + "." (numeric keypad): Jumps to a marker location.

Note: If your numeric keypad is on Classic Mode you can skip the first "." when recalling a memory location.


Using X-Form + Elastic Properties
This is my preferred method for quick clip pitch and length changes. It doesn't work very well for big changes, but it's good enough for small adjustments, and you can tweak both parameters independently. This is not a real-time process; the changes are rendered offline and you'll keep the original version if you need to go back to it.

To use this method, you will need to activate X-Form in the track's Elastic Properties, just under the track automation mode. (Video below.)

X-Form pitch changes work great when you have a sound that is similar to what you need but you feel it needs to be a little bigger or smaller in weight. The results are not always natural sounding, but adjusting the pitch can sometimes bring a clip close to the sound you're looking for.

Also, being able to change the length of a clip makes your library instantly bigger. Your clips may now work in more situations, as you can make them shorter or longer to fit the context.

And don't forget this tool can also be used as a creative resource. Listen to the following clip, where some plastic bag impacts are extremely slowed down, creating a weird, distorted kind of sci-fi sound. You can hear below the original sound first and then the processed one.

  • Alt + 5 (numeric keypad): Opens the Elastic Properties for the selected clip.

Tab to transients. 

Again, pretty basic but super useful. When activated (under the Trim tool or with the fancy shortcut), you can use the Tab key to jump between transients instead of clip boundaries. Very handy when editing steps, gunshots or impacts.

This function is great when working with just a few short clips, but if you want to create clip separations on the transients of a long file with loads of steps, the best way to do it is the "Separate Clip on Transients" function.

  • Opt + Cmd + Tab: Toggles "Tab to Transients" on and off.
  • Tab: Jumps between transients or clip boundaries.
  • B: Separate Clip.
  • Separate Clip on Transients: No official shortcut, but keep reading for a workaround!

Shortcuts in the ASDFG keys (cuts and fades). 

Naturally, my left hand is usually in a WASD position, but when editing in Pro Tools, your fingers should be on the ASDFG keys to allow you to quickly trim and fade clips. It might take some time to get used to them, but in no time at all it will become second nature. Remember that you need to be in Keyboard Focus mode to use these.

  • A: Trim Start.
  • S: Trim End.
  • D: Fade In.
  • F: Crossfade.
  • G: Fade Out.

Ctrl + click to move a clip to the playhead position.

To align two clips, select the clip you want to align the other(s) to, then Ctrl + click on the second clip to snap it into place. It also works with markers. Simple and neat.


Move a clip from one track to another without changing its sync.

Just press and hold Ctrl while moving a clip from one track to another and it will keep its timeline position, no matter how much you move your mouse horizontally. Add Opt to the shortcut to also duplicate the clip.


Fill gaps. 

This is very useful when you have an unwanted noise on an ambience track.

To eliminate the offending noise, first select it and press Cmd + B to remove it. Then select and copy another, similar region in the audio clip, ideally longer than the gap you have removed. Lastly, select the area to fill and do a Paste Special > Repeat to Fill Selection. You can then crossfade the boundaries to make it seamless.

You can use this method rather than copying a section of audio and having to adjust the clip manually to fill the gap; this shortcut does that tedious work for you!

One last thing: if the selection you copy is smaller than the gap itself, Pro Tools is going to paste the same clip several times until the gap is filled, creating crossfades between the copies of your selection. This probably won't sound very smooth, but it may work if the gap is not very big and/or the scene is busy.

  • Cmd + B: Clears (removes) the selection from the clip.
  • Opt + Cmd + V: Paste Special > Repeat to Fill Selection.

Easy access to your most used plugins 

You can select your preferred EQ and Compressor plug-ins by going to Setup > Preferences > Mixing.

You can also make your most commonly used plug-ins appear at the top of your inserts list by holding Cmd and then selecting the relevant plug-in in an insert slot. This also works with AudioSuite plug-ins.


Have AudioSuite plugins at hand with window configurations.

The previous trick will keep your AudioSuite plug-ins at hand, but there is an even quicker way to access them.

First, open the AudioSuite plugin of your choice; you can even do this with more than one plugin at the same time. Now, create a new window configuration using the Window Configurations window or the shortcut. You can then recall that window configuration to summon the plugin, or even incorporate it into a memory location, as you can see in the picture on the right.

Keep in mind that window configurations can do much more than that: you can save any edit and/or mix window layout and easily toggle between them.

More info.

  • Opt + Cmd + J: Opens Window Configurations.
  • "," + number from 1 to 99 + "+" (numeric keypad): Creates a new window configuration.
  • "," + window configuration number + "*" (numeric keypad): Recalls a window configuration.

Or impress your friends with custom shortcuts...

Memory locations + window configurations are very powerful. But there is a hidden feature that may be even better for accessing Pro Tools functions: you can create your own custom shortcuts for unmapped commands, without any external macro software.

As far as I know, this only works on Mac. Just go to Apple > System Preferences > Keyboard > Application Shortcuts and add Pro Tools to the list if it isn't there already. Now you can create (with the plus-symbol button) a new shortcut for any Pro Tools function your heart desires, as long as that function appears in a Pro Tools menu, even AudioSuite plugin names. You just need to type the exact name and then add the shortcut you want to assign to that function.

This blew my mind when I discovered it; you can now access loads of functions even if they are buried under 3 sub-menus. Here is a list of some of the custom shortcuts I'm currently using. I tend to use Control as the modifier key, since Pro Tools doesn't use it much:

  • Ctrl + C: "Color Palette". Pimp those tracks!
  • Ctrl + S: "Izotope RX 6 Connect". Opens the window to send audio to RX.
  • Ctrl + R: "Reverse". AudioSuite Reverse plugin.
  • Ctrl + V: "Vari-Fi". AudioSuite Vari-Fi plugin.
  • Ctrl + T: "At Transients". Separates a clip on its transients.
  • Ctrl + E: "EQ3 7-Band". AudioSuite EQ plugin.
  • Ctrl + G: "Render". Renders clip gain.
  • Ctrl + P: "Preferences...".
  • Ctrl + D: "Delete". Deletes empty tracks.
  • Ctrl + Opt + D: "Delete...". Deletes non-empty tracks.
  • Opt + Cmd + S: "Save Copy In...".
  • Ctrl + Opt + Cmd + B: "QuickTime...". Bounce to QuickTime.

As you can see, you need two distinct shortcuts to delete tracks, since the dialog is different depending on the contents of the track. With the setup I have, "Ctrl + Opt + D" will always delete the track, regardless of its content, but it will only show the warning window if the track has content on it.

Of course, these are only the ones I currently have; I change them all the time. There are many other functions you could hook up, like I/O, Playback Engine, Hardware or Make Inactive (for tracks). Go nuts!


Automation Follows Edit is your friend.

It's a good idea to keep this option on by default, so that when you move a clip, its automation moves with it. But sometimes, especially when doing sound design, you want to swap one clip for another without moving the automation, so you can hear how the same processing affects a different clip.

Just remember to turn this back on when you finish, or you may mess things up badly. In newer Pro Tools versions, like 12, the button goes bright orange to remind you the function is off, which is very handy.


Moving through the session

My workflow is based on the mouse wheel because that's what I had when I started with Pro Tools. It might not be the fastest or most efficient way of working but it’s what I’m used to and I can move pretty fast through a session with this method. 

I use the mouse wheel to move vertically, and I like to use the "Mouse Wheel Scrolling Snaps to Track" feature (Pro Tools Preferences > Operation > Misc) so every mouse wheel click moves the view by one track.

To move horizontally, I use Shift + the mouse wheel. To zoom, I use Alt + the mouse wheel or the R/T keys.

  • Shift + mouse wheel: Move horizontally in the timeline.
  • Alt + mouse wheel: Zoom in and out.
  • R: Zoom out.
  • T: Zoom in.

Automation

You don't usually need to do complex automation while designing, but here are some handy shortcuts to speed you up so you can focus on the actual sound design. One of the trickiest and most annoying things is moving automation around and automating one or more parameters on a plugin. These shortcuts may help. For accessibility, I'll avoid talking about HD features.

  • "," and ".": Nudge automation. Select a section of an automation curve and nudge it into position. Really useful when you are early or late on an automation pass. (Pictured in the video below.)
  • Ctrl + Opt + Cmd + click: Enable automation. Use it on a plugin parameter to make it automatable, or on the "Plugin Automation Enable" button to enable everything.
  • Ctrl + Cmd + click: Show automation lane. Shows the automation lane for the selected parameter. Saves me hours.
  • Ctrl + Cmd + left/right arrow keys: Change track view. Flips through track views. Very handy to go between waveform, volume and pan views.

Import Session Data

This is a very powerful and somewhat overlooked feature that allows you to import audio and other information from any other session.

The key concept is that you can bring different elements independently. You could, for example, bring one track's plugins or its I/O without bringing any audio. 

If you decide to import audio, you can choose between just referencing the audio from the other session or copying it into your current session's audio folder, which is the safer option.

You can also bring in memory locations and window configurations. Remember those fancy window configuration shenanigans I was talking about above? You can import your little creations from other sessions with this!


Show number of tracks

Just go to View > Track Number. Very simple, but sometimes you want to know how many tracks you have used so you can brag about your unreasonable layering needs.


Miscellaneous useful shortcuts

And finally, here are some random shortcuts and functions:

  • "*" (numeric keypad): Enter timecode. Lets you type in the counter window so you can jump to any point in the timecode. This counter also acts as a calculator, so you can type + or - to jump a certain amount of timecode forwards or backwards.
  • Shift + Cmd + K: Export Clips as Files. A somewhat hidden feature (it's in the Clip List window, not the main menus) that allows you to quickly export any clip as a separate file. You'll be able to choose the settings of the new audio file, but remember this is not the same as a bounce; inserts and sends won't be considered.
  • Alt + Cmd + "[" or "]": Waveform Zoom. Zooms the waveform in or out. Very useful when you need to work at a zoom level that makes sense for the material you have.
  • Ctrl + Opt + Cmd + "[": Reset Waveform Zoom. Resets the waveform zoom in case you want to see how loud a clip really looks.
  • Ctrl + Shift + arrow up/down (or mouse wheel): Nudge Clip Gain. It even works on several clips at the same time. A must if you use clip gain a lot. Keep in mind that you'll jump by a set amount of dB, which you can change at Preferences > Editing > Clips > Clip Gain Nudge Value. I usually use 0.5 dB, but sometimes I like a smaller value.
  • No shortcut; double-click on a crossfade and select Equal Power: Equal Power crossfades. Use this when there is a drop in volume on a crossfade; this setting will make the transition much smoother.

That's it. Questions? Suggestions? Did I forget your favourite trick? Leave a comment!

Ghost 1.0 Sound Design Postmortem

Hi there! Here is a brief summary of my sound design work on the game Ghost 1.0. Since this is the first postmortem I've written, I'll go into some extra detail about my workflow, but I'll try to always keep it related to Ghost 1.0.

The Project

The game was developed by Francisco Téllez de Meneses (Fran). We had already worked together on Unepic, a pretty successful metroidvania RPG published on Steam and consoles.

I worked on Ghost 1.0 on and off between April 2014 and June 2016. To give you an idea of the size of the project, here are some numbers from the audio folder where I did all the work:

- 23K files in total.
- 37 GB in size.
- Around 200 Pro Tools folders.
- 386 unique audio files in the final game (not including voice-overs).
- Over 1,000 lines of dialogue for each language (English, Spanish and Russian localizations were made).
- Around 230 sound events covered in the game.

 

Game & Audio Engine

For this game, we were using the Unepic engine (written in C), which already manages audio using DirectSound. This allowed us to port the game to a big variety of systems, including Wii U, PlayStation 4, Xbox One, PC, Mac & Linux. Ghost 1.0 is also planned to be ported to some of these systems in the near future.

DirectSound works nicely, but I sometimes missed middleware features like "live" layering, more complex randomization capabilities and better loop management. Not using middleware forced me to really finish the SFX within Pro Tools and only bounce when everything seemed perfect, so going back and forth between the game and the DAW was very common. With FMOD or Wwise, on the other hand, I often find myself just casually creating the SFX layers, knowing that I can put them together and even EQ or compress them within the middleware environment.

This difference in the workflow is something to keep in mind while switching between projects with or without middleware.
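To illustrate the difference, here is a hedged sketch of the kind of hand-rolled variation you end up writing in engine code when there is no middleware, using the DirectSound buffer API. The wrapper function and the ranges are made up, and buffer creation (which needs the DSBCAPS_CTRLFREQUENCY and DSBCAPS_CTRLVOLUME flags) is omitted:

```cpp
#include <dsound.h>
#include <random>

std::mt19937 g_rng{std::random_device{}()};

// Hypothetical helper: replay an already-created buffer with slight variation.
void PlayWithVariation(IDirectSoundBuffer* buffer, DWORD baseFrequencyHz) {
    // Roughly +/- one semitone of pitch variation by changing the playback rate.
    float ratio = std::uniform_real_distribution<float>(0.94f, 1.06f)(g_rng);
    buffer->SetFrequency(static_cast<DWORD>(baseFrequencyHz * ratio));

    // Up to -3 dB of attenuation; DirectSound volumes are in hundredths of dB.
    LONG volume = std::uniform_int_distribution<LONG>(-300, 0)(g_rng);
    buffer->SetVolume(volume);

    buffer->SetCurrentPosition(0);  // rewind before replaying the same buffer
    buffer->Play(0, 0, 0);          // play once, no looping flags
}
```

In middleware, that whole block is a couple of checkboxes and range sliders on the event.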

Sonic Style

I began by working on some general sounds to define the game's tone and style. Footsteps, the starting weapons and the initial character skills were the very first things we focused on. The basic, primary blaster pistol was one of the SFX that went through the most iterations. It's the gun every player starts with, so we wanted to make sure we had the right sound for it.

Here you can have a listen to how the starting gun SFX evolved. At the beginning, we were just trying out different styles until settling on a more mechanical, simpler sound. Version 9.3 was the one used in the game.

One of the main priorities when creating this sound was to avoid it becoming annoying through repetition. Sometimes you think you have a cool sound, but you have to remember that if an SFX needs to be played over and over again, you are often better off with a simpler, shorter one. Also, we made it feel a little weak, with the idea of making the player feel powerful when switching to better weapons.

The room where you start the game and try your primary weapon. I've been here too many hours.

After this initial phase, Fran would ask for new sound effects as the game grew and expanded over time. The nice thing about this way of working is that the project doesn't require constant attention; what it does require is the ability to quickly re-adapt to the tone of the game when work does need to be done.

In this way, it was a very organic process, usually without strict deadlines, which allowed us to spend the proper amount of time creating the right SFX.

Keeping track

I used a spreadsheet to gather every relevant piece of information, including the name of each SFX, its duration, how it was triggered (one-shot or loop), its location in the game, a description and examples. This was crucial for keeping track of dozens of sound effects at the same time; I found myself constantly going back to the spreadsheet to note down ideas or check on things.

Whenever a sound was going to be triggered by an animation and needed precise timing, I'd record a short video of the animation using OBS and then use it as a reference in Pro Tools.

SFX Approval

Any given SFX would need, on average, around 2-3 versions to be approved. Language is usually not very good at describing the abstract world of sound, so I always tried to get as much information as possible (the spreadsheet is great for this) to get close to the idea Fran had in his mind.

Weapons and power-ups are a good example of SFX that can vary greatly in style. Sometimes just knowing it's a "cloaking device SFX" is not enough information to get it right. Should it feel powerful? Quiet? Electric? High-tech? Is there an example from another game or movie? Asking these kinds of questions is crucial.

I always took extensive notes during meetings or, even better, recorded the whole thing. Having all the information and context for a given SFX in a video that you can go back to was a blessing. I never trust my memory, and you probably shouldn't either.

Versions, versions, versions...

File Management & Version Control

I had a separate Pro Tools session per sound effect, and usually even separate sessions per version of a given SFX. It would often happen that I had to go back to a previous version or combine layers from several of them, so giving every version its own session is the safest way to go about this, in my opinion.

For version control, I kept a very simple but effective system: I would use the suffix "_v1", "_v2", "_v3", etc. at the end of a sound file's name. I recommend this system or a similar one, and I'd avoid using terms like "_final" or "_last", because you never know when you are going to need to go back and change something, even months later.

A sequential numeric system keeps everything tidy and clear. I would sometimes use sub-versions like "_2.1" or "_2.2" when the sound design was essentially the same but there was a small difference in volume or EQ, usually to make the sound sit better in its context.

 

Implementation

Basically, testing new sounds was as easy as swapping the audio files in the game folders. To make work easier, I had access to a developer copy of the game that gave me instant access to the whole map and to cheats like being invincible or giving myself any weapon or power-up.

Having access to these kinds of tools is key to speeding up work, and it helped me a lot to focus on the sound design and on testing SFX.

Developer map screenshot

I first worked on Ghost 1.0 using a Mac mini to do the sound design and run Pro Tools, with a Boot Camp partition running Windows for the game. I also had the option of emulating Windows, but a game still in development can be unstable at times, so running it natively on Windows was the most reliable option.

Going back and forth between two operating systems is quite slow, though, and usually breaks your creative momentum when working on and testing new sounds in the game.

So I later upgraded to a dual-computer system. I still use my Mac mini as my Pro Tools and Soundly machine, and I custom-built a PC to run the games I'm working on. I share files between both computers via Ethernet, and the keyboard and mouse using Synergy, which can sometimes be a little problematic, especially if you are using an old Mac OS version, but in general I would recommend this setup.

Distance, Panorama & Reverb

A comparison between Unepic and Ghost 1.0. Both have a similar point of view.

Both Unepic and Ghost 1.0 use a classic Metroid-style 2D side-scrolling view that is particularly wide, so our hero looks pretty small on the screen. This presented some challenges in stereo sound placement and also in getting the sense of distance right.

With a completely realistic approach, listener on the camera, all sounds would be pretty much mono and somewhat far away. Of course, this would be quite dull, and we wanted to convey information about enemy placement using the stereo field. So what we did was imagine that the listener was somewhere between the camera and the character. This way, we get some stereo depth while keeping a sonic perspective that works with a wide camera angle, where you can see several enemies and platforms at the same time.
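In code, that idea is a simple interpolation. This is an illustrative sketch, not the actual engine code; the Vec2 type, the Lerp helper and the 0.6 blend factor are all made up:

```cpp
struct Vec2 { float x, y; };

Vec2 Lerp(Vec2 a, Vec2 b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

Vec2 ComputeListenerPosition(Vec2 cameraPos, Vec2 heroPos) {
    // 0.0 = listener on the camera (realistic but mono and distant),
    // 1.0 = listener on the hero. Somewhere in between keeps stereo interest
    // while still matching the wide camera perspective.
    const float blend = 0.6f;
    return Lerp(cameraPos, heroPos, blend);
}
```

Tuning that single blend value is essentially tuning how wide the game feels.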

We used baked-in reverbs, since all the action happens in a very similar environment: the space station's hallways and chambers. I may have used bigger reverbs for certain SFX that only play in large rooms, like boss rooms.

Sample sounds

I'll leave you with some weapon, user interface and environment sounds from the game. If you have any questions or want me to expand on anything, feel free to leave a comment.

Thanks for reading!