Dear Devs, 7 Reasons Why Your Game May Need Audio Middleware.
There are several audio middleware programmes on the market. You may have heard of the two main players: FMOD and Wwise. Both offer free licenses for smaller budget games and paid licenses for bigger projects.
So, what is Audio Middleware? Does your game need it?
Audio Middleware is a bridge between your game's engine and the game's music and sound effects. Although it is true that most game engines offer ready-to-use audio functionality (some of which overlaps with the features explained below), middleware gives you more power and flexibility for creating, organizing and implementing audio.
Here are the seven main reasons to consider using middleware:
1. It gives Independence to the Audio Team.
Creating sound effects and music for a game is already a good amount of work, but that is barely half the battle. For these assets to work, they need to be implemented in the game and be connected to in-game states and variables like health or speed.
This connection will always need some collaboration between the audio team and the programming team. Middleware makes this process much easier: once the variables are created and hooked up, the audio team is free to tweak how the gameplay affects the audio without having to go into the code or the game engine.
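To make that concrete, here is a minimal sketch of what the programmer's side of this wiring can look like, using the FMOD Studio C++ API (targeting FMOD Studio 2.x; the bank files, the event path and the "Health" parameter are hypothetical, and error checking is omitted for brevity). Once this exists, everything about how "Health" actually changes the sound is authored by the audio team inside the tool:

    #include <fmod_studio.hpp>

    int main()
    {
        // Create and initialize the FMOD Studio system.
        FMOD::Studio::System* system = nullptr;
        FMOD::Studio::System::create(&system);
        system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

        // Load the banks exported from FMOD Studio by the audio team.
        FMOD::Studio::Bank* masterBank = nullptr;
        FMOD::Studio::Bank* stringsBank = nullptr;
        system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
        system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);

        // Look up an event by its path and create a playable instance of it.
        FMOD::Studio::EventDescription* description = nullptr;
        system->getEvent("event:/Player/Heartbeat", &description);
        FMOD::Studio::EventInstance* instance = nullptr;
        description->createInstance(&instance);
        instance->start();

        // Game loop: the programmer's only ongoing duty is to keep
        // feeding gameplay values into the parameter.
        float playerHealth = 100.0f;
        for (int frame = 0; frame < 600; ++frame)
        {
            playerHealth -= 0.1f;  // stand-in for real gameplay
            instance->setParameterByName("Health", playerHealth);
            system->update();      // must be called regularly
        }

        instance->release();
        system->release();
        return 0;
    }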
2. Adaptive Music.
Music is usually linear and predictable, which is fine for linear media like movies. But in the case of video games, we have the power to make music adapt and react to the gameplay, giving the player a much more compelling experience.
Middleware plays an important role here because it gives the composer non-linear tools to work with, encouraging them to think about the soundtrack not in terms of fixed songs but of layers or fragments of music that can be triggered, modified and silenced as the player progresses.
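As a sketch of the pattern, continuing from the FMOD setup above (the event path, the "Intensity" parameter and the heuristic are all invented for illustration): the game reports a single value, and the composer decides inside the tool which layers fade in, which transitions fire and when.

    // One music event for the whole level; the composer maps an
    // "Intensity" parameter (0..1) to layer volumes and transitions.
    FMOD::Studio::EventDescription* musicDescription = nullptr;
    system->getEvent("event:/Music/Level01", &musicDescription);
    FMOD::Studio::EventInstance* music = nullptr;
    musicDescription->createInstance(&music);
    music->start();

    // Somewhere in the game loop: derive intensity from gameplay.
    float enemiesNearby = 4.0f;  // stand-in for a real gameplay query
    float intensity = enemiesNearby / 10.0f;
    if (intensity > 1.0f) intensity = 1.0f;
    music->setParameterByName("Intensity", intensity);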
3. Better handling of Variation and Repetition.
Back when memory was a limiting factor, games had to get by with just a few sounds, which usually meant repetition, a lot of repetition. Although repetition is certainly still used to give an old school flavour, it is not very desirable in modern, more realistic games.
When something happens often enough in a game, the associated sound effect can get boring and annoying pretty fast. Middleware offers tools to avoid this, like randomly selecting the sound from a pool of different variations or randomly altering the pitch, volume or stereo position of the audio file each time it is triggered. When all these tools are combined, we end up with an audio event that is different each time yet cohesive and consistent, offering the player a more organic and realistic experience.
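The technique itself is not mysterious; this small, self-contained C++ sketch (plain code, no middleware involved) shows the kind of selection and randomization that FMOD or Wwise let you configure with a few sliders and checkboxes instead:

    #include <iostream>
    #include <random>
    #include <string>
    #include <vector>

    int main()
    {
        // A pool of recorded variations of the same footstep.
        std::vector<std::string> footsteps = {
            "footstep_01.wav", "footstep_02.wav",
            "footstep_03.wav", "footstep_04.wav"
        };

        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<std::size_t> pick(0, footsteps.size() - 1);
        std::uniform_real_distribution<float> pitch(0.95f, 1.05f);   // slight detune
        std::uniform_real_distribution<float> volume(0.9f, 1.0f);    // slight level change

        std::size_t last = footsteps.size();  // "nothing played yet"
        for (int step = 0; step < 8; ++step)
        {
            // Never play the exact same file twice in a row.
            std::size_t i;
            do { i = pick(rng); } while (i == last);
            last = i;

            std::cout << "play " << footsteps[i]
                      << " pitch=" << pitch(rng)
                      << " volume=" << volume(rng) << "\n";
        }
        return 0;
    }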
4. Advanced Layering.
Layering is how we sound designers usually build sounds: we combine different, individually processed sounds to create a new and unique one. Instead of mixing this combination down to a single file, middleware allows us to import all these layers onto different tracks so we can apply different effects and treatments to each sound separately.
This flexibility is very important and powerful. It helps us to better adapt the character and feel of a sound event to the context of the gameplay. For example, a sci-fi gun could have a series of layers (mechanical, laser, alien hum, low frequency impact, etc.), and having all these layers separated would allow us to vary the balance between them depending on things like ammo, distance to the source or damage to the weapon.
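To illustrate what happens when a parameter is mapped to layer volumes, here is the kind of crossfade math involved, in plain C++ (the two layers and the normalized distance value are invented for illustration): as the value grows, the close-up mechanical detail gives way to the low-frequency impact.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // One normalized control value (0 = point blank, 1 = far away),
        // like a "Distance" parameter on a middleware event.
        for (float distance = 0.0f; distance <= 1.0f; distance += 0.25f)
        {
            // An equal-power crossfade keeps perceived loudness steady
            // while the balance between the two layers shifts.
            const float halfPi = 1.5707963f;
            float mechanicalGain = std::cos(distance * halfPi);
            float impactGain     = std::sin(distance * halfPi);

            std::printf("distance %.2f -> mechanical %.2f, impact %.2f\n",
                        distance, mechanicalGain, impactGain);
        }
        return 0;
    }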
5. Responsive Audio tools.
Sound effects are usually created using linear audio software like Pro Tools, Nuendo or Reaper. These are also called DAWs (Digital Audio Workstations). The tools found in DAWs allow us to transform and shape sounds: equalization, compression and other effects are the bread and butter of audio people. Most of the modern music, sound effects and movies you've ever heard came out of a DAW.
But the issue is that once you bounce or export your sound, it's set in stone: that's how it will sound when you trigger it in your game. Middleware not only gives us many of the same tools you can find in a DAW; more importantly, it also gives us the ability to make them interact with variables and states coming from the game engine.
How about a monster whose voice gets deeper as it gets larger? Or music and sound effects that get louder, brighter and more distorted as time runs out?
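Both examples are the same pattern as in the first sketch: the sound designer maps a parameter to pitch, filtering or distortion inside the tool, and the game only reports a number. Continuing with the hypothetical FMOD setup from before (the event path and the "Size" parameter are again invented):

    // A monster voice event with a "Size" parameter: in the tool, the
    // sound designer maps larger values to a pitch drop and extra low end.
    FMOD::Studio::EventDescription* roarDescription = nullptr;
    system->getEvent("event:/Monster/Roar", &roarDescription);
    FMOD::Studio::EventInstance* roar = nullptr;
    roarDescription->createInstance(&roar);
    roar->start();

    // As the monster grows, the game just keeps reporting its scale.
    float monsterScale = 2.5f;  // stand-in for the monster's actual size
    roar->setParameterByName("Size", monsterScale);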
6. Hardware & Memory Optimization.
Different areas of a game compete for processing power, and usually audio is not the first priority (or even the second). That's why it is very important to be able to optimize and streamline a game's audio as much as possible.
Middleware offers handy tools to keep the audio tidy, small and efficient. You can customize things like reverbs and other real-time effects, and also choose how much quality you want from the final compression applied to the audio.
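As one example of this kind of control, FMOD lets you keep a bank's metadata loaded while pulling its actual waveform data in and out of memory on demand, and lets you ask what the audio is currently costing you. A sketch continuing the earlier setup (the "Cave.bank" file is hypothetical; error checking omitted):

    // Only hold the heavy sample data in memory while the player is
    // in the area that needs it.
    FMOD::Studio::Bank* caveBank = nullptr;
    system->loadBankFile("Cave.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &caveBank);
    caveBank->loadSampleData();    // entering the cave
    // ... gameplay inside the cave ...
    caveBank->unloadSampleData();  // leaving the cave

    // Ask FMOD how much memory it is using right now.
    int currentBytes = 0, maxBytes = 0;
    FMOD::Memory_GetStats(&currentBytes, &maxBytes, false);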
7. Platform flexibility & Localization.
If you need to prepare your game for different platforms, including PC, consoles or mobile phones, middleware makes this very easy: you can compile a different version of the game's audio for each platform. Memory or hardware requirements may differ for each of them, and you may need to simplify sound events, bake in effects or turn a surround mix into a stereo one.
You can also have a different version per language, so the voice acting changes while the overall sound effects and the treatment of the voices stay consistent.
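In FMOD, for instance, this is usually done by exporting one dialogue bank per language, all exposing the same events; at startup you load the bank that matches the player's language and nothing else changes. A sketch, again continuing the earlier setup (the bank naming scheme is hypothetical):

    // Same events, different recordings: each language gets its own
    // dialogue bank exported from the middleware project.
    std::string language = "FR";  // from the player's settings
    std::string bankName = "Dialogue_" + language + ".bank";

    FMOD::Studio::Bank* dialogueBank = nullptr;
    system->loadBankFile(bankName.c_str(), FMOD_STUDIO_LOAD_BANK_NORMAL, &dialogueBank);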
I hope this gave you a bit of a taste of what middleware is capable of. When in doubt, don't hesitate to ask us, audio folks!
Thanks for reading.