Thoughts on buying gear

Hello! Here are some ideas and tips that I think could help you make better decisions while buying audio equipment.

Think long term

I like to see any piece of gear as an investment, so I try to choose products that are known for being robust and durable. There are always cheaper options out there, but I don’t mind paying a higher price if I have a better guarantee that the equipment will last longer and be more reliable.

When it comes to judging durability, good hints are a warranty period longer than legally required and/or a good reputation among veteran users (some detective work in audio forums is a must). It is also a good sign when a product is manufactured in Europe or the US, although this is not very frequent and doesn’t necessarily guarantee higher quality.

Buying higher end gear is particularly relevant for audio, since electronic components are quite important in determining quality and life expectancy. The use of cheap plastic instead of more durable materials like metal is also commonplace and something to avoid, especially in field equipment.

Something else to think about is that durable gear is usually well known in the industry and may give clients some extra confidence to hire you over others.

On the flip side, you can’t always afford higher quality equipment, and sometimes you may need to opt for entry level gear. This can also happen when you need a specific thing for a gig and don’t have the time or money to find the best possible option. In those cases, well, you probably need to bite the bullet, but in general my advice would be to wait if you can. Flip more burgers and sweep more floors. Once you have enough to at least access the mid tier, go for it. In my experience, those investments will pay off tenfold. You need to spend money to earn money.

I bought a Tascam HD-P2 in late 2011. I chose this model because of its reputation and quality. To this day, I still use it as my main recorder for sound effects. It has also accompanied me through feature films and documentaries, on snowy cold exterior days and crazy hot Seville summers. It has never failed or died during a take.

I am not saying the HD-P2 is perfect. It only offers two microphone inputs, the pre-amps are not ultra clean (but they are quite good for their price range) and the powering options are limited. Nevertheless, it served me well throughout my first years working in audio, it gave me confidence and it allowed me to get a huge return on my investment.

The mighty HD-P2. Respect.

Save on the features you don’t need

I think this is key. Don’t get dazzled with fancy stuff that you are never going to use. It is important that you think about the features that you actually need and then look for the best option the market has to offer.

Hopefully, I will have the chance to record more frequently now.

Of course, in order to do that, you need to know what your needs really are, which is the tricky part. Do you prefer more channels or higher resolution? Bigger memory or longer battery life? If you know what kind of specific work you are going to do, this will be easier to decide. Try to narrow down your needs and priorities.

I recently bought a Sony PCM-D100 because I wanted something portable to record on the go. This recorder is quite expensive (for a handheld device) and doesn’t have XLR inputs, which would normally be a big issue for me. But my goal is to have something truly portable, so I can record in situations where a big rig would be cumbersome.

So I am losing the XLR feature in exchange for great audio quality, battery life, internal memory and build quality, all of them essential if I’m going to use this on the go.

Avoid audio elitism

Sound is something that can be objectively measured, but the way we experience it is quite subjective. People apply all sorts of descriptions to audio like “silky”, “airy” or “muddy”. I’m not saying these are not useful or that they don’t describe real properties, but sometimes I think we get too caught up in these terms.

This problem is twofold. On one hand, sometimes people are so eager to justify their purchase that they start to hear mystical properties in a piece of gear. On the other hand, sometimes we really can tell the difference (in terms of clarity or timbre) between two pieces of gear, but it is so small that it’s only noticeable while soloing and/or A-B testing. If the final consumer is probably not going to tell the difference, is it really that important?

Don’t get me wrong, I still think that audio quality should be a priority, but when investing in equipment the very expensive stuff usually gives you diminishing returns. You need to spend a lot of cash to get from the professional to the “elite” level. Maybe you don’t need to.

So yeah, choose quality but don’t go crazy. Beware of mystical claims and 20K€ cables. I honestly think that if we forced people to take blind A-B tests comparing decent gear with very high end equivalents, they would be amazed at how close they can be.

Your sound is only as good as your chain’s weakest link

Before buying a new fancy microphone, maybe stop for a second and think about the small stuff. There is always something outdated or in bad condition. Maybe it would be sensible to improve on those weak areas first.

Sure, you don’t need fancy solid gold cables, but get yourself some decent ones. Another good example of this is battery management. If your gear uses batteries of any kind, invest in good chargers. I recommend you familiarize yourself with the stuff that video and photography folks use. Smart chargers are a great option since they have independent charging channels and programs to keep batteries healthier.

Audio cases (I like Portabrace) are also a great way to make sure your equipment is safe while traveling or on location. I bought my Tascam HD-P2 with a Portabrace case and it has really been a worthy investment. Eight years later, the velcro still works like it did on day one.

This Powerex charger is a very nice option if you need an army of batteries for your recorder and/or wireless kits.

Balance Risk and Personality

Some people are more risk averse than others, and this is something you need to take into account. In my case, I don’t feel comfortable rushing things or spending large sums of money, so I try to avoid doing those two things at once. If you are similar to me, remember that at some point you have to take the leap, and it is going to feel uncomfortable. But that’s good. That’s what they mean when they say “It’s good to step out of your comfort zone”.

When I bought the Rode Blimp v1, I could not afford anything better. It’s an OK starting point, but I would not recommend it as a long term investment. Not very durable.

If, on the other hand, you tend to rush things, well, take it easy. It may help to give yourself some time to make sure you make the right decision. Sharing your situation with friends or colleagues may help too; you’d be surprised by how much better you can see things when you articulate them out loud and get feedback.

Personally, I don’t like to buy second-hand stuff because I feel like I’m taking a big risk, but if you are comfortable with that, it’s definitely an option. It helps if you can check the condition in person, and knowing the seller is ideal. If you are buying online, using sites with a reputation system is a must. Other than that, second-hand is a risk that may pay off or end in disaster. So ask yourself: how much more money am I willing to pay to get peace of mind instead?

Reviews are spooky

Any piece of equipment that is reasonably popular is going to have some scary reviews. That’s the nature of the polarized online world: people only bother giving 1 or 5 stars, so there isn’t much nuance. Having said that, reviews are still a valuable resource when used with caution.

My approach is to focus on quality rather than quantity. Sure, you can find many reviews on Amazon nowadays, but I prefer to check audio forums or specialized stores first. You can also check reviews for a product on online stores that you are not planning to buy from: if you are in Europe, the American stores B&H and Sweetwater are great; if you are in the US, Germany’s Thomann is a fantastic source.

Other than that, your best bet is to join and participate in forums like Gearslutz. With time, you’ll get to know people there whose opinions will probably be more valuable than a random Amazon user’s.

Limit your tools

The Sennheiser MKH 416 was my first mic and almost the only one for some time, forcing me to use it in many different ways (on location, for foley, for SFX, for VO…)

Scarcity may sound like a bad thing, but I think you can learn a lot from it. Limiting yourself to a small number of tools forces you to be creative and try new things, and of course you will master them. It is hard to do that if you have too much stuff, so my advice would be to really make the most of what you have before buying something new.

For me, a good example of this is sound libraries. If you already have a decent amount of sounds, there is probably a lot you can do with them. Doing sci-fi or fantasy sounds, for example, will force you to experiment with what you have around in terms of recording gear and plugins, and you will learn far more than if you just buy yet another library.

Dear Devs, 7 Reasons why your Game may need Audio Middleware.

There are several audio middleware programs on the market. You may have heard of the two main players: Fmod and Wwise. Both offer free licenses for smaller budget games and paid licenses for bigger projects.

So, what is Audio Middleware? Does your game need it?

Audio middleware is a bridge between your game's engine and the game's music and sound effects. Although it is true that most game engines offer ready to use audio functionality (and some of it overlaps with the features explained below), middleware gives you more power and flexibility for creating, organizing and implementing audio.

Here are the seven main reasons to consider using middleware:

1. It gives Independence to the Audio Team.

Creating sound effects and music for a game is already a good amount of work, but that is barely half the battle. For these assets to work, they need to be implemented in the game and be connected to in-game states and variables like health or speed.

This connection will always need some collaboration between the audio team and the programming team. With middleware, the process becomes much easier: once the variables are created and associated, the audio team is free to tweak how the gameplay affects the audio without having to go into the code or the game engine.
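To make that concrete, here is a minimal sketch of what the programmer's side can boil down to, using the FMOD Studio C++ API (the event path and parameter name are invented for this example, and error checking is omitted):

```cpp
#include <fmod_studio.hpp>

// All the programmers do: start the event and feed it a game variable.
// What "Health" actually does to the sound is decided by the audio team
// inside the middleware project, with no code changes needed.
void OnPlayerDamaged(FMOD::Studio::System* audio, float health)
{
    FMOD::Studio::EventDescription* description = nullptr;
    audio->getEvent("event:/Player/Heartbeat", &description);

    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);

    instance->setParameterByName("Health", health);
    instance->start();
    instance->release(); // one-shot: freed automatically when it finishes
}
```

Everything else, which files play and how the parameter bends the sound, lives in the middleware project that the audio team owns.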

2. Adaptive Music.

Music is usually linear and predictable, which is fine for linear media like movies. But in the case of video games, we have the power to make music adapt and react to the gameplay, giving the player a much more compelling experience.

Middleware plays an important role here because it gives the composer non-linear tools: a way to think about the soundtrack not in terms of fixed songs but in terms of layers or fragments of music that can be triggered, modified and silenced as the player progresses.
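To give you an idea of what these layers can mean, here is a hand-rolled sketch of "vertical" layering: three music stems whose gains are computed from a single intensity value. The stems and curves are invented; in middleware the composer draws these mappings instead of coding them.

```cpp
#include <algorithm>

struct MusicStemGains { float pads; float percussion; float brass; };

// intensity goes from 0 (calm exploration) to 1 (full combat).
MusicStemGains GainsForIntensity(float intensity)
{
    MusicStemGains g;
    g.pads       = 1.0f; // always present
    g.percussion = std::clamp((intensity - 0.2f) / 0.3f, 0.0f, 1.0f); // fades in early
    g.brass      = std::clamp((intensity - 0.7f) / 0.3f, 0.0f, 1.0f); // only near full combat
    return g;
}
```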

3. Better handling of Variation and Repetition.

Back when memory was a limiting factor, games had to get by with just a few sounds, which usually meant repetition, a lot of repetition. Although repetition is certainly still used to give an old school flavour, it is not very desirable in modern, more realistic games.

When something happens often enough in a game, the associated sound effect can get boring and annoying pretty fast. Middleware offers tools to avoid this, like randomly selecting the sound from a pool of different variations, or randomly altering the pitch, volume or stereo position of the audio file each time it is triggered. When all these tools are combined, we end up with an audio event that is different each time but cohesive and consistent, offering the player a more organic and realistic experience.
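Under the hood, the idea is roughly this (file names and randomization ranges are made up):

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

struct PlaybackChoice { std::string file; float pitchSemitones; float gainDb; };

// Pick a random variation, then randomize pitch and volume a little,
// so no two footsteps are ever exactly identical.
PlaybackChoice ChooseFootstep()
{
    static std::mt19937 rng{std::random_device{}()};
    static const std::vector<std::string> variations =
        {"footstep_01.wav", "footstep_02.wav", "footstep_03.wav", "footstep_04.wav"};

    std::uniform_int_distribution<std::size_t> pick(0, variations.size() - 1);
    std::uniform_real_distribution<float> pitch(-1.0f, 1.0f); // +/- one semitone
    std::uniform_real_distribution<float> gain(-2.0f, 0.0f);  // up to 2 dB quieter

    return {variations[pick(rng)], pitch(rng), gain(rng)};
}
```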

4. Advanced Layering.

Layering is how we sound designers usually build sounds: we combine different, individually modified sounds to create a new and unique one. Instead of mixing this combination down to a single file, middleware allows us to import all these layers into different tracks, so we can apply different effects and treatments to each sound separately.

This flexibility is very important and powerful. It helps us better adapt the character and feel of a sound event to the context of the gameplay. For example, a sci-fi gun could have a series of layers (mechanical, laser, alien hum, low frequency impact, etc.) and having all these layers separate would allow us to vary the balance between them depending on things like ammo, distance to the source or damage to the weapon.
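As a sketch of that sci-fi gun (the curves and numbers are invented for illustration), per-layer gains driven by game context could look like this:

```cpp
#include <algorithm>

struct GunLayerGains { float mechanical; float laser; float alienHum; float lowImpact; };

// distance in metres, damage01 from 0 (pristine) to 1 (wrecked).
GunLayerGains GunGains(float distanceMeters, float damage01)
{
    float proximity = std::clamp(1.0f - distanceMeters / 50.0f, 0.0f, 1.0f);

    GunLayerGains g;
    g.mechanical = proximity;                     // fine detail dies off with distance
    g.laser      = 1.0f;                          // the core layer always reads
    g.alienHum   = 0.5f + 0.5f * proximity;
    g.lowImpact  = proximity * (1.0f - damage01); // a damaged gun loses its punch
    return g;
}
```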

5. Responsive Audio tools.

Sound effects are usually created using linear audio software like Pro Tools, Nuendo or Reaper, also called DAWs (Digital Audio Workstations). The tools found in DAWs allow us to transform and shape sounds: equalization, compression and other effects are the bread and butter of audio people. Most of the modern music, sound effects and movies that you’ve ever heard came from a DAW.

But the issue is that once you bounce or export your sound, it’s kind of set in stone: that’s how it will sound when you trigger it in your game. Middleware not only gives us the same tools you can find in a DAW; more importantly, it gives us the ability to make them interact with variables and states coming from the game engine.

How about a monster whose voice gets deeper as it gets larger? Or music and sound effects that get louder, brighter and more distorted as time runs out?
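That monster is just a mapping from a game variable to an audio parameter. A made-up but plausible version of the formula:

```cpp
#include <cmath>

// Every doubling in size drops the voice by four semitones.
// The numbers are invented; in practice you would tune them by ear.
float MonsterPitchSemitones(float sizeMeters)
{
    const float neutralSize = 2.0f; // a 2 m monster plays at recorded pitch
    return -4.0f * std::log2(sizeMeters / neutralSize);
}
```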

6. Hardware & Memory Optimization.

Different areas of a game compete for processing power, and audio is usually not the first priority (or even the second). That’s why it is very important to be able to optimize and streamline a game’s audio as much as possible.

Middleware offers handy tools to keep the audio tidy, small and efficient. You can customize things like reverbs and other real time effects, and also choose how much quality you want in the final compression algorithm for the audio.

7. Platform flexibility & Localization.

If you need to prepare your game for different platforms, including PC, consoles or mobile phones, middleware makes this very easy. You can compile a different version of the game’s audio for each platform. Memory and hardware requirements may differ for each of them, and you may need to simplify sound events, bake in effects or turn a surround mix into a stereo one.

You can also have a different version per language, so the voice acting changes but the overall sound effects and treatment of the voices stay consistent.


I hope this gave you a taste of what middleware is capable of. When in doubt, don’t hesitate to ask us audio folks!
Thanks for reading.

An Introduction to Game Audio

Talking to a fellow sound designer about game audio, I realised that he wasn't aware of some of the differences between working on audio for linear media (film, animation, TV, etc) and for interactive media (video games).

So this post is kind of my answer to that: a brief introduction to the creative depth of video game sound design. It is aimed at audio people who are maybe not very familiar with the possibilities this world offers, or who just want to see how it differs from, say, working on film sound design.

Of course there are many differences between linear and interactive sound design, but perhaps the most fundamental, and the most important for somebody new to interactive sound design, is the concept of middleware. In this post, I’ll aim to give beginners a first look at this unfamiliar tool.

I'll use screenshots from Unearned Bounty, a project I've been working on for around a year now. Click on them to enlarge. This game runs with Unity as the engine and Fmod as the audio middleware.

Linear Sound vs Interactive Sound

Video games are an interactive medium, and this is going to influence how you approach the sound design work. In traditional linear media, you absolutely control what happens in your timeline. You can be sure that, once you finish your job, every time anyone presses play that person is going to have the same audio experience you originally intended, provided that their monitoring system is faithful.

Think about that. You can spend hours and hours perfecting the mix to match a scene, since you know it is always going to look the same and it will always be played in the same context. Far away explosion? Let's drop a distant explosion there, or maybe make a closer FX sound further away. No problem.

In the case of interactive media, this won't always be the case. Any given sound effect could be affected by game world variables, states and context. Let me elaborate on those three factors, using the example of the explosion again. In the linear case, you can design the perfect explosion for the shot, because it is always going to be the same. Now let's see the case of a game:

  • The player could be just next to the explosion or miles away. In this case, the distance would be a variable that is going to affect how the explosion is heard. Maybe the EQ, reverb or compression should be different depending on this.

  • At the same time, you probably don't want the sound effect to be exactly the same if it comes from an ally instead of the player. In that case, you'd prefer to use a simpler, less detailed SFX. One reason for this could be that you want to enhance the sound of what the player does, so her actions feel clearer and more powerful. In this case, who the effect belongs to would be a state.

  • Lastly, it is easier to make something sound good when you always know the context. In video games, you may not always know or control which sounds will play together. This forces you to play-test to make sure that sounds work not only in isolation but also together, and in the proportions the player is usually going to hear them. Different play styles will also alter these proportions. Following our example, your explosion may sound awesome, but if dialogue is usually playing at the same time and getting lost in the mix, you'll need to account for that.

After seeing this, linear sound design may feel more straightforward, almost easy in comparison. Well, not really. I'll explain with an analogy. Working on linear projects, particularly movies, is like writing a book. You can really focus on developing the characters, plot and style. You can keep improving the text and making rewrites until you are completely satisfied. Once it is done, your work will always deliver the same experience to anyone who reads the book.

Interactive media, on the other hand, is closer to being a game master preparing a D&D adventure for your friends. You may go into a lot of detail with the plot, characters and setting, but as any experienced GM knows, players will be somewhat unpredictable. They will spend an annoying amount of time exploring some place that you didn't give enough attention to, and then they will circumvent the epic boss fight with some creative rule bending or a clever outside the box idea.

So, as you can see, being a book writer or working in linear sound design gives you the luxury of really focusing on the details you want, since the consumer experience and interaction with your creation is going to be closed and predictable. In both D&D and interactive media, you are not really giving the final experience to the players, you are just providing the ingredients and the rules that will create a unique experience every time.

Creating those ingredients and rules is our job. Let's explore the tools that will help us with this epic task.

Audio Middleware and the Audio Event

Here you can see code being scary.

Games, or any software for that matter, are built from a series of instructions that we call code. This code manages and keeps track of everything that makes a game run: graphics, internal logic, connecting to other computers through the internet and, of course, audio.

The simplest way of connecting a game with some audio files is just calling them from the code whenever we need them. Let's think about an FPS game. We would need a sound every time the player shoots her shotgun. So, in layman's terms, the code would say something like: "every time the player clicks her mouse to shoot, please play this shotgun.wav file that you will find in this particular folder". And we don't even need to say please, since computers don't usually care about such things.
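In code, that rudimentary approach is literally a one-liner. The engine call here is hypothetical, but most engines have some version of it:

```cpp
// Hypothetical engine function; every engine exposes something similar.
namespace engine { void PlaySoundFile(const char* path); }

void OnShoot()
{
    // "Every time the player clicks to shoot, please play this file."
    engine::PlaySoundFile("assets/sounds/shotgun.wav");
}
```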

This is how all games used to be made, and this method is still very much in use. It is very straightforward but also very limited. Incorporating the audio files into the game is a process usually called implementation, and this is its most rudimentary form. The thing is, code can be a little scary at first, especially for us audio people who are not very familiar with it. Of course, we can learn it, and it is an awesome tool if you plan to work in the video game industry, but at the end of the day we want to be focusing on our craft.

Middleware was created to help us with this and fill the gap between the game code and the audio. It serves as a middle man, hence the name, allowing sound designers to just focus on the sound design itself. In our previous example, the code was pointing to the specific audio files needed at any given moment. Middleware does essentially the same thing but puts an intermediary in the middle of the process. This intermediary is what we call an audio event.

An example of audio events managing the behaviour of the pirate ships.

An audio event is the main functional unit that the code will call whenever it needs a sound. It could be a gunshot, a forest ambience or a line of dialogue. It could contain a single sound file or dozens of them. Anytime something makes a sound, it is triggering an event. The key thing is that, once the code is pointing to an event, we have control. We can make it sound the way we want; we are in our territory.
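Here is the same shotgun, now going through an audio event, sketched with the FMOD Studio C++ API (the event path is made up and error checking is omitted). Notice that the code no longer knows or cares which .wav files are involved:

```cpp
#include <fmod_studio.hpp>

void OnShoot(FMOD::Studio::System* audio)
{
    FMOD::Studio::EventDescription* description = nullptr;
    audio->getEvent("event:/Weapons/Shotgun", &description);

    FMOD::Studio::EventInstance* shot = nullptr;
    description->createInstance(&shot);

    shot->start();
    shot->release(); // one-shot: freed automatically when playback ends
}
```

Everything about how that event actually sounds now lives on our side of the fence.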

And this is because middleware uses tools that we audio people are familiar with. We'll find tracks, faders, EQs and compressors. Keep in mind that these tools are still essentially code; middleware is just offering us the convenience of having them in a comfortable and familiar environment. It brings the DAW experience to the game development realm.

Audio middleware can be complex and powerful, and I'd need a whole series of posts to tell you everything it can do and how. So, for now, I'm just going to go through three main features that should give you an idea of what it can offer.

I - Conventional Audio Tools within Middleware

Middleware offers a familiar environment with tracks, timelines and tools similar to the ones found in your DAW. Things like EQ, dynamics, pitch shifters or flangers are common.

This gives you the ability to tweak your audio assets without needing to go back and forth between different programs. You are probably still going to start in your DAW and build the base sounds there using conventional plugins, but being able to also process audio within the middleware gives you flexibility and, more importantly, a great amount of power, as you'll see later.

II - Dealing with Repetition and Variability

The player may perform some actions over and over again. For example, think about footsteps. You generally don't want to play the same footstep sound every single time. Even having a set of, say, 4 different footsteps is going to feel repetitive eventually. This repetitiveness is something that older games suffer from and that modern games generally try to avoid. The original 1998 Half-Life, for example, uses a set of 4 footstep sounds per surface. Having said that, repetition may still be used when looking for a nostalgic or retro flavour, the same way pixel art is still used.

Middleware offers us tools to make several instances of the same audio event sound cohesive but never exactly identical. The most important of these tools are variations, layering and parameter randomization.

The simplest approach to avoiding repetition is just recording or designing several variations of the same effect and letting the middleware choose randomly between them every time the event is triggered. If you think about it, this imitates how reality behaves: a sword impact or a footstep is not going to sound exactly the same every single time, even if you really try to use the same amount of force and hit in the same place.

You could also break a sound up into different components or layers. For example, a gunshot could be divided into the shot impact, its tail and the bullet shell hitting the ground. Each of these layers could also have its own variations. So now, every time the player shoots, the middleware is going to randomly choose an impact, a tail and a bullet shell sound, creating a unique combination.

Another cool thing to do is to have an event with a special layer that is triggered very rarely. By default, every layer on an event has a 100% chance of being heard, but you can tweak this value to make it less frequent. Imagine, for example, a power-up sound with an exciting extra sound effect that is only played 5% of the time the event is called. This is a way to spice things up and also reward players who spend more time playing.

An additional way of adding variability is to randomize not only which sound clip will be played, but also its parameters. For example, you could randomize volume, pitch or pan position within a certain range of your choice. So, every time an audio clip is called, different pitch and volume values are randomly picked.

Do you see the possibilities? If you combine these three techniques, you can achieve an amazing degree of variability, detail and realism while using a relatively small number of audio files.

See above the collision_ashore event, which is triggered whenever a ship collides with an island. It contains 4 different layers:

  • A wood impact (3 variations).

  • Sand & dirt impacts with debris (3 variations).

  • Wooden creaks (5 variations).

  • A low frequency impact.

As I said, each time the event is triggered, one of the variations within each layer will be chosen. If we then combine this with some pitch, volume and EQ randomization, we ensure that every instance of the event is unique but cohesive with the rest.
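Hand-rolled, the logic behind an event like collision_ashore would look roughly like this (names, probabilities and ranges are invented):

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

struct Layer {
    std::vector<std::string> variations;
    float probability = 1.0f; // 1.0 = always plays, 0.05 = rare sweetener
};

struct Clip { std::string file; float pitchSemitones; float gainDb; };

// One trigger of the event: for each layer that passes its probability
// check, pick a random variation and randomize its pitch and volume.
std::vector<Clip> TriggerEvent(const std::vector<Layer>& layers)
{
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> chance(0.0f, 1.0f);
    std::uniform_real_distribution<float> pitch(-0.5f, 0.5f);
    std::uniform_real_distribution<float> gain(-1.5f, 0.0f);

    std::vector<Clip> toPlay;
    for (const Layer& layer : layers) {
        if (chance(rng) > layer.probability) continue; // rare layer skipped this time
        std::uniform_int_distribution<std::size_t> pick(0, layer.variations.size() - 1);
        toPlay.push_back({layer.variations[pick(rng)], pitch(rng), gain(rng)});
    }
    return toPlay; // a unique but cohesive combination every time
}
```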

III - Connecting audio tools to in-game variables and states.

This is where the real power resides.

Remember the audio tools built into middleware that I mentioned before? In the first section I showed you how we can use them the same way we use them in any DAW. Additionally, we can randomize their values, as I showed you in the second section. So here comes the big one.

We can also automate any parameter like volume, pitch, EQ or delay in relation to anything going on inside the game. In other words, we have a direct connection between the language of audio and the language the game speaks, the code. Think about the power that gives you. Here are some examples (a sketch of one of them follows the list):

  • Apply an increasing high pass filter to the music and FX as the protagonist's health gets lower.

  • Apply a delay to cannon shots that gets longer the further away the shot is, creating a realistic depiction of how light travels faster than sound.

  • Make the tempo of a song get faster and its EQ brighter as you approach the end of the level.

  • As your sci-fi gun wears out, its sounds get more distorted and muffled. You feel so relieved when you can repair it and get all its power back.
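The cannon-shot delay from the list above is simple arithmetic: sound travels at roughly 343 m/s in air, so the delay grows by about 3 ms per metre of distance:

```cpp
float CannonDelaySeconds(float distanceMeters)
{
    const float speedOfSound = 343.0f;     // m/s in air
    return distanceMeters / speedOfSound;  // e.g. 500 m => ~1.46 s
}
```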

Do you see the possibilities this opens? You can express ideas in the game's plot and mechanics with dynamic and interactive sound design! Isn't that exciting? The takeaway I want you to grasp from this post is that you would never be able to do something this powerful with just linear audio. Working on games makes you think much harder about how the sound coming from objects and creatures behaves, evolves and changes.

As I said before, you are just providing the ingredients and the rules; the sound design itself only materializes when the player starts the game.

You can see in the above screenshot how an in-game parameter, distance in this case, affects an event layer's volume, reverb send and EQ.

How to get started

If I have piqued your interest, here are some resources and information to start with.

Fmod and Wwise are currently the two main middleware tools used by the industry. Both are free to get started with and not very hard to get into. You will need to re-wire your brain a bit to get used to the way they work, though. Hopefully, reading this post gave you a solid introduction to some of the concepts and tools they use.

If I had to choose one of them, Fmod looks less intimidating at first and maybe more "DAW user friendly". Of course, there are other options, but if you just want to have a first contact, Fmod does the job.

There are loads of resources and tutorials online to learn both Fmod and Wwise, but since I think that the best way to really learn is to jump in and make something yourself, I'll leave you with something concrete to start from for each of them:

Fmod has these very nice tutorials with example projects that you can download and play with.

Wwise has official courses and certifications that you can do for free and that also include example projects.

And of course, don't hesitate to contact me if you have further questions. Thanks for reading!