Differences between FMOD & Wwise: Part 2
Here are some more thoughts about how FMOD and Wwise do things differently:
Game Syncs
This is a concept that works quite differently in Wwise, compared to FMOD. Let’s have a look:
RTPC (Real Time Parameter Control): This is a direct equivalent to FMOD parameters and they work pretty much in the same way. I believe both are floats under the hood.
Something that I feel is clearer in FMOD is parameter scope. All parameters are either local or global and you choose this when creating them. In Wwise, as far as I have seen, parameters are scope-agnostic: you really choose the scope when you send values to them via code or Blueprints. I guess that’s handy? I can also see how it can lead to confusion and errors, since most parameters won’t change scope during runtime.
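To make the scope difference concrete, here is a toy Python model (my own sketch, not the actual FMOD or Wwise API): global parameters live on the system and are visible to every event instance, while local ones live on a single instance.

```python
# Toy model of parameter scope (not a real middleware API).
class System:
    """Holds global parameters shared by all event instances."""
    def __init__(self):
        self.global_params = {}

    def set_parameter_by_name(self, name, value):
        self.global_params[name] = value


class EventInstance:
    """Holds local parameters that only affect this instance."""
    def __init__(self, system):
        self.system = system
        self.local_params = {}

    def set_parameter_by_name(self, name, value):
        self.local_params[name] = value

    def get_parameter(self, name):
        # A local value wins for this instance; otherwise fall back to globals.
        if name in self.local_params:
            return self.local_params[name]
        return self.system.global_params.get(name)


system = System()
a, b = EventInstance(system), EventInstance(system)
system.set_parameter_by_name("TimeOfDay", 0.5)  # global: both instances see it
a.set_parameter_by_name("Health", 20.0)         # local: only instance `a` sees it
```

The parameter names here are invented for illustration; the point is just that in FMOD the scope is fixed at creation time, while in Wwise it is effectively decided by which call site you use.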
It’s also interesting to note that both Wwise and FMOD have built-in parameters for things like distance, elevation, etc. Very handy.
Switches: So on Wwise you can create switch groups which in turn contain individual switches. These basically work as enum type parameters. The classic example is to have a switch for material types so you can then change footsteps sounds using a switch container.
We would achieve the same in FMOD by just using labelled parameters. This is another key difference: FMOD offers different parameter types and each of them is useful in a different context. It’s important to remember that under the hood every parameter is always a float variable, so these types are there just to make things easier for the user on the FMOD Studio side.
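Since every parameter is a float underneath, a labelled parameter boils down to a label-to-index translation. A minimal sketch (the surface list and helper are hypothetical, not FMOD names):

```python
# Hypothetical surface labels, in the order they were authored.
SURFACE_LABELS = ["dirt", "grass", "wood", "metal"]

def set_parameter_with_label(params, name, label):
    # A labelled parameter is still a float: the label becomes its index.
    params[name] = float(SURFACE_LABELS.index(label))

params = {}
set_parameter_with_label(params, "Surface", "wood")
print(params["Surface"])  # -> 2.0
```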
States: These work in a similar way to FMOD snapshots. You can call different states from code and then change values on Sound SFX objects or buses.
Something to consider is that FMOD uses the concepts of overriding and blending snapshots. The former will always “force” whatever value you have set, following the snapshot priority order, while the latter will just apply its changes additively (+3 dB, for example) on top of wherever the fader or knob currently sits. As far as I can see, Wwise states are basically always in “blending mode”, since you can nudge values but not force them. As a consequence, there is no concept of state priorities in Wwise like we have in FMOD.
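The overriding-versus-blending behaviour can be sketched like this (assumed semantics based on the description above, not FMOD’s actual mixing code):

```python
def resolve_fader(base_db, snapshots):
    """Toy resolver for a fader value under active snapshots.

    snapshots: list of (mode, value_db, priority) tuples.
    The highest-priority overriding snapshot forces the value;
    blending snapshots then stack their offsets additively.
    """
    overrides = [s for s in snapshots if s[0] == "override"]
    if overrides:
        base_db = max(overrides, key=lambda s: s[2])[1]
    for mode, value_db, _ in snapshots:
        if mode == "blend":
            base_db += value_db
    return base_db

fader = resolve_fader(-6.0, [("override", -12.0, 1),
                             ("override", -20.0, 2),  # higher priority wins
                             ("blend", 3.0, 0)])      # +3 dB applied on top
print(fader)  # -> -17.0
```

A Wwise-style state, in this model, would only ever use the "blend" branch.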
Triggers: I don’t know much about these because I haven’t done music work in Wwise, but as far as I can see FMOD would accomplish the same with just normal parameters. I’m sure that if Wwise has its own bespoke system to play music, it must have cool features to blend and mix music, but I still need to learn about this.
Positioning
On FMOD all events are 2D until you add a spatializer, while on Wwise Sound SFX objects (or containers, or Actor-Mixers…) usually inherit values from their parents, but I believe they are 2D by default too. What I mean is that if you create a Sound SFX object on a new Work Unit, the 3D spatialization will default to “None”.
Regarding other positioning options, things are organized and even named quite differently but roughly we can see that:
Wwise Speaker Panning / 3D Spatialization modes:
Direct Assignment: You would normally use this if you want your SFX to be 3D and update its position when the associated emitter moves. 3D Spatialization needs to be on “Position” or “Position + Orientation”.
On FMOD: This is the normal behaviour of the vanilla spatializer.
Balance-Fade: You usually use this for 2D SFXs that you just play on regular stereo, mono or even surround. 3D Spatialization needs to be on “None”. These settings won’t update position in any way when things move around in the game world so they don’t need an associated emitter.
On FMOD: This is how an event without a spatializer basically works. You can also have a spatializer but use the pan override function to directly play audio on specific channels.
Steering: As far as I can see, this works like Balance-Fade but also allows you to distribute the sound on the Z (vertical) coordinate. I’m not sure why this is a separate mode; maybe I haven’t really understood why it’s there.
On FMOD: You can’t control verticality as far as I know.
FMOD Envelopment is an interesting concept that I don’t see in Wwise, at least not directly. This gives you the ability to make an SFX source wider or narrower in terms of directionality. Let’s see how it works:
Before anything else: in FMOD every event has a min and max distance. This used to be at the spatializer level but now lives on the event, which has huge advantages when building automated systems. Anyway, the min distance is the distance at which the attenuation starts taking effect, while the max distance is where the attenuation reaches its end (the volume no longer changes beyond it).
Now let’s look at the envelopment variables, which we can find on the spatializer. We have two to play with: sound size and min extent. As the listener gets closer to the source, the sound will increasingly be all around us. When the distance is equal to or smaller than the sound size, we are “inside” the SFX, so it has no directionality. On the other hand, as we get further away, the SFX becomes a smaller point in space, so directionality increases. Min extent defines how small the sound can get when you are far away or, in other words, how narrowly directional it becomes.
I assume you can re-create this behaviour by hand on Wwise using a distance parameter and automating the Speaker Panning / 3D Spatialization Mix value but I haven’t tried this yet.
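My mental model of envelopment, as a sketch (the linear interpolation is my own assumption; FMOD’s actual extent curve may well differ):

```python
def extent_degrees(distance, sound_size, min_extent, max_distance):
    """Toy envelopment model: how wide the source appears, in degrees."""
    if distance <= sound_size:
        return 360.0  # listener is "inside" the sound: fully enveloping
    # Shrink linearly from fully enveloping down to min_extent at max distance.
    t = min((distance - sound_size) / (max_distance - sound_size), 1.0)
    return 360.0 + t * (min_extent - 360.0)

print(extent_degrees(1.0, 2.0, 20.0, 30.0))   # -> 360.0 (inside the source)
print(extent_degrees(30.0, 2.0, 20.0, 30.0))  # -> 20.0  (far away: narrow point)
```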
Now let’s look at something Wwise offers but FMOD doesn’t have. In Wwise, we can find a way to automate movement on the 3D position based on where the emitter or listener is. As far as I know, this is not possible in FMOD, at least not with FMOD Studio alone. It can probably be done with the FMOD API, though; that sounds like an interesting thing to try to build.
Anyway, let’s have a quick look at what Wwise offers. This lives under the 3D Position options (Positioning tab).
Emitter: This is just normal positioning defined by the game engine itself. We would use this most of the time.
Emitter with Automation: You start with the same game-driven position as above, but you can add some extra movement on top of it. Remember that the game engine is completely unaware of this movement; it’s not like Wwise is moving GameObjects/Actors or anything like that. The movement exists purely at the Wwise audio level.
Listener with Automation: This is a similar concept to the previous option, but the movement is based on where the listener is instead of the emitter. This is useful for SFXs that we want to move in relative positions around the player or camera (depending on where the listener is). A perfect example would be ambience spots.
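The three modes above mostly differ in where the reference position comes from. A toy sketch of the listener-relative case (orientation is ignored here for simplicity; a real implementation would also rotate the offset with the listener):

```python
def automated_position(listener_pos, local_offset):
    """Toy "Listener with Automation": an authored offset applied relative
    to the listener. The game engine never sees this position; only the
    audio engine does."""
    return tuple(l + o for l, o in zip(listener_pos, local_offset))

# An ambience spot authored 5 units to the listener's right follows them around:
print(automated_position((0.0, 0.0, 0.0), (5.0, 0.0, 0.0)))   # -> (5.0, 0.0, 0.0)
print(automated_position((10.0, 0.0, 2.0), (5.0, 0.0, 0.0)))  # -> (15.0, 0.0, 2.0)
```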
Attenuation & Orientation
Wwise bases its attenuation workflow on ShareSets. These are essentially presets that can be used by any Sound SFX, container or Actor-Mixer. Each attenuation preset allows you to change different properties as distance increases.
In contrast, FMOD offers distance attenuation built into the spatializer, and it only affects volume. If you want to achieve something similar to Wwise attenuations and be able to re-use the same attenuation for several events, you would need to save your spatializer as an effect preset, although that is not going to give you a lot of options, just the curve shape.
If you want a much more flexible system in FMOD, you need to turn off the distance attenuation on the spatializer and directly add a built-in distance parameter to the event. Then you can use the parameter to automate levels, EQs, reverb sends, etc. If you then save all this automation within an effect chain preset, you would have something as powerful and flexible as Wwise attenuations (although with quite a bit more work). If you want to know more about FMOD effect presets, check my article about it.
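The workaround described above amounts to a hand-authored distance-to-gain curve driven by the built-in distance parameter. A sketch (the curve points are made up for illustration):

```python
# Hypothetical authored curve: (distance, gain_db) pairs, like automation
# points on a distance parameter in FMOD Studio.
CURVE = [(0.0, 0.0), (10.0, -6.0), (40.0, -80.0)]

def gain_db_at(distance):
    """Linearly interpolate the authored curve at a given distance."""
    if distance <= CURVE[0][0]:
        return CURVE[0][1]
    for (d0, g0), (d1, g1) in zip(CURVE, CURVE[1:]):
        if distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return g0 + t * (g1 - g0)
    return CURVE[-1][1]  # beyond the last point: hold the final value

print(gain_db_at(5.0))   # -> -3.0
print(gain_db_at(25.0))  # -> -43.0
```

The same idea extends beyond volume: any automatable property (EQ, sends…) can hang off the same distance parameter, which is what makes this approach comparable to a Wwise attenuation ShareSet.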
It’s funny that I prefer Wwise’s attenuation system but, in contrast, I think FMOD’s orientation system is more powerful. In Wwise, orientation automation lives on the attenuation ShareSets, which I think makes a lot of sense. You can set how levels and filters change as the listener moves in front of, to the side of, or behind the source. Basically, think of the audio emitter as a directional speaker: the frequency response and levels change as we move behind it.
If you know FMOD’s orientation options, Wwise cone attenuation may remind you of FMOD’s Event Cone Angle built-in parameter. They are basically the same. The thing is that FMOD offers two more orientation parameters, also taking the listener’s angles into account. If you want to know more about how this works in FMOD, check my article here. As far as I know, there is no equivalent to these advanced orientation parameters in Wwise.
Hi there! November 2022 Javier here again. I was wrong about this! You can actually find something similar to FMOD’s orientation parameters by using Wwise built-in parameters. This means that you won’t find all of them on the attenuation ShareSet but on the RTPCs that you can apply to the object. Let’s see how these correspond to the ones in FMOD (reading the article linked above first may help):
Angle formed by the line between listener and emitter and the listener’s orientation. Emitter orientation is irrelevant:
FMOD: Direction
Wwise: Listener Cone
Angle formed by the line between listener and emitter and the emitter orientation. Listener orientation is irrelevant:
FMOD: Event Cone
Wwise: This is the cone attenuation that you can find on the attenuation share set.
Angle formed between the emitter orientation and the listener orientation (projected in 2D; verticality is ignored in both cases):
FMOD: Event Orientation
Wwise: Azimuth
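To make the three definitions above concrete, here is a small 2D sketch of how each angle could be computed from positions and forward vectors (my own geometry, not code from either SDK):

```python
import math

def angle_between(u, v):
    """Unsigned angle in degrees between two 2D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def direction_angle(listener_pos, listener_fwd, emitter_pos):
    # FMOD "Direction" / Wwise "Listener Cone": listener->emitter line
    # versus the listener's orientation. Emitter orientation is irrelevant.
    to_emitter = (emitter_pos[0] - listener_pos[0], emitter_pos[1] - listener_pos[1])
    return angle_between(listener_fwd, to_emitter)

def event_cone_angle(listener_pos, emitter_pos, emitter_fwd):
    # FMOD "Event Cone" / Wwise cone attenuation: emitter->listener line
    # versus the emitter's orientation. Listener orientation is irrelevant.
    to_listener = (listener_pos[0] - emitter_pos[0], listener_pos[1] - emitter_pos[1])
    return angle_between(emitter_fwd, to_listener)

def event_orientation_angle(listener_fwd, emitter_fwd):
    # FMOD "Event Orientation" / Wwise "Azimuth": the two orientations,
    # compared directly; positions don't matter.
    return angle_between(listener_fwd, emitter_fwd)

# Emitter straight ahead of a forward-facing listener, the two facing each other:
print(direction_angle((0, 0), (0, 1), (0, 10)))    # -> 0.0
print(event_cone_angle((0, 0), (0, 10), (0, -1)))  # -> 0.0
print(event_orientation_angle((0, 1), (0, -1)))    # -> 180.0
```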