Fmod Built-in Parameters

Here is a quick look at the parameters that Fmod offers us out of the box. The great thing about them is that you don’t need to explicitly send values to the event instance; the Fmod engine takes care of this for us.

So let’s have a look at how the Fmod docs describe them, and then I will go into detail on how they can be useful and how I use them myself:

  • Distance built-in parameters track the distance between the listener and the event instance in game distance units.

  • Distance (Normalized) built-in parameters track the distance between the listener and the event instance as a fraction of the difference between the event's minimum and maximum distance. This allows anything automated or triggered by a distance (normalized) parameter to respond to an event's minimum and maximum distance properties changing as your game is played.

  • Direction built-in parameters track which way the listener is facing relative to the event instance.

  • Elevation built-in parameters track whether the event instance is above or below the listener.

  • Event cone angle built-in parameters track which way the event instance is facing relative to the listener.

  • Event orientation built-in parameters track which way the event instance is facing relative to the way the listener is facing.

  • Speed built-in parameters track how quickly the event instance is moving relative to the listener.

Distance parameters

Distance is an obvious and clear one but I have three thoughts or tips anyway. Firstly, this is measured in game units, so make sure you know how much a unit “feels” in your project. It could be metres, inches or light-years, it all depends on the scale of the game.

Secondly, you may think this parameter is kind of useless since the spatializer already attenuates audio with distance, but sometimes you want to change the audio in other ways as the distance increases. For example, you could have a lowpass filter, a compressor or other effects that change your event with distance. You could also have aural LODs, that is, audio sources that play different loops depending on how close you are to the source. So imagine a machine: if you are very close, the audio is very detailed and intricate, but as you move away from it, only simpler layers play. Once you are far away, you can only hear the machine humming faintly.

Thirdly, sometimes you don’t want to attenuate audio using the spatializer. In this case, you can add a distance parameter and automate the master fader based on the distance. This allows for far more intricate curves and also has an additional advantage: if you do this on a gain plugin instead of on the fader, you can save everything as a preset to use on all similar events. Don’t forget to turn off the distance attenuation on the spatializer though, or you would be attenuating twice!

Distance (Normalized) is a cool one. This parameter always goes from 0 to 1, where 0 represents the event’s min distance and 1 the max distance. So you can automate your event using distance as I explained above, with the additional advantage that if you need to change the min or max distance after the fact, you don’t need to tweak all your curves, since they are already relative to those values.

Elevation

This one is useful if you want to change your event’s audio based on the vertical distance between source and listener. Values go from -90 to 90. Negative values mean the sound is below you, while positive ones indicate the sound is above you. The range is -90 to 90 because the value is really an elevation angle in degrees: -90 means the source is directly below the listener and 90 directly above.

This parameter can be useful to achieve some poor man’s vertical occlusion. If you have some sounds coming from a different floor in a building, you can use this to approximate some occlusion although it has obvious limitations.

Speed

This one, despite what the docs say, is now divided into two versions: relative and absolute. The first one is the one the docs mention, the relative speed between the listener and the source. As you may imagine, the absolute version ignores the listener and just measures the source’s own speed.

It is important to note that this only works if the audio event is attached to a GameObject that is moved by the physics engine; in the case of Unity, that means it needs a Rigidbody. If you move an object from code using its transform position, or you move it with an animation, the speed will always read 0. Remember that the same also applies to Fmod’s built-in Doppler effect.
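Below is a minimal sketch of this in the Unity integration. The class name and event path are made up, and the exact RuntimeManager signatures vary a little between FMOD for Unity versions, so treat it as an illustration rather than a drop-in:

using UnityEngine;
using FMODUnity;

public class EngineAudio : MonoBehaviour
{
    FMOD.Studio.EventInstance m_instance;

    void Start()
    {
        //"event:/Vehicles/Engine" is a hypothetical event path.
        m_instance = RuntimeManager.CreateInstance("event:/Vehicles/Engine");

        //Passing the Rigidbody is what lets the built-in Speed parameter
        //(and Doppler) read a velocity; moving the transform directly
        //would leave it at 0.
        RuntimeManager.AttachInstanceToGameObject(m_instance, transform, GetComponent<Rigidbody>());
        m_instance.start();
    }

    void OnDestroy()
    {
        m_instance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        m_instance.release();
    }
}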

Orientation parameters

These can be tricky to wrap your head around. They basically define the relative facings of the listener and audio source. We have three different parameters: Direction, Event Cone Angle and Event Orientation. Let’s try to understand the difference between them.

Direction takes into account where the listener is facing, regardless of the orientation of the audio source. To see what direction really measures, let’s draw a line from our audio source to the listener (shown in green). Direction is the angle between this line and where the listener is facing (red arrow).

It is important to note that direction doesn’t care about where the audio source is facing. It is measured in degrees, so as you can see, 0 means that the listener is looking directly at the source. Negative values indicate that the source is to the left of the listener, -90 being directly to the left; 90 is just the opposite, directly to the right. To represent the listener looking in the opposite direction we use 180 or -180, which mean the same thing.

Direction is useful for making things louder when the listener is facing them, particularly when an audio source emits audio in all directions. A campfire, for example, would not be louder from any particular direction; the only thing that makes it louder for a listener, apart from distance of course, is the way the listener is facing. From the same position, the fire sounds louder if you are looking directly at it than if your back is to it.

Event Cone Angle is almost the reverse of Direction. We draw a line between audio source and listener (again in green). The event cone angle is the angle between this green line and where the audio source is facing.

Again, take into consideration that we don’t care about where the listener is facing here. Something important to keep in mind is that the event cone angle doesn’t differentiate between listeners being to the left or right of the source, which is why the values go from 0 (the listener is directly in front of the source) to 180 (the listener is directly behind it). Thus, 90 represents the listener being to the side of the source, no matter which side.

Event cone angle is usually used to make events louder when the source is facing the listener, and quieter or more muffled when the listener is off to the side of or behind the source. This could initially sound similar, maybe too similar, to how we use direction. It did to me. The key to seeing the difference is that here we don’t care about the listener’s orientation, only about where the audio source is facing. So let’s say our audio source is a person talking. The sound definitely changes as you move around the person, since the human voice is somewhat directional, especially in the higher frequencies. Being in front of the person would be the loudest, with an angle of 0, while being behind them would be the quietest, with an angle of 180.

Event Orientation is finally the last parameter we can use. This one takes into account the angle between where both the source and listener are facing. This may sound like a useful thing but it really isn’t if you think about it. Let’s have a look:

As you can see, here there is no green line. That means that we don’t care about the relative positions of the source and listener, we just care about where they are facing, relative to each other. If you look at the two 180 examples, you will see that two very different situations (opposites, actually) share the same value which may feel odd.

This is why this parameter is not used very much. I have never found a situation to use it myself.

Orientation TLDR: Use direction to describe how the listener’s orientation relative to the source changes the audio. It makes sense with non-directional audio sources (like a campfire) and directional listeners (like human ears). On the other hand, use event cone angle to describe how the audio source’s orientation relative to the listener affects the audio. It is useful for directional audio sources, like the human voice. Of course, if you have a directional audio source AND a directional listener, you should use a combination of both parameters. Event orientation, meanwhile, can be ignored in most situations.

Learning C# Notes - Part VI: Handy Code Operators & Shorthands

Here is a compilation of abbreviations that are frequently used in Unity C# code. It is good to know them so you can understand other people’s code and also make your own more compact.

Basic Arithmetic

  • They are generally used with float, int and their related types.

  • You can use the usual math operators, shown in the quick example after this list:
    + for addition (Can also be used for string concatenation)
    - for subtraction
    * for multiplication
    / for division
    % for remainder
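For instance, a quick illustrative example (the variable names are just made up):

int apples = 7;
int pairs = apples / 2;       //Integer division: 3
int leftOver = apples % 2;    //Remainder: 1
float half = apples / 2f;     //Float division: 3.5
string label = "Apples: " + apples;  //Here + concatenates a string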

Incrementing & Decrementing

  • Allow you to quickly modify values one unit at a time, useful for iterators or counters.

  • ++ Increments a variable by one.
    Example: number++;

  • -- Decrements a variable by one.
    Example: i--;

Arithmetic Assignments

  • These are a compact way to change a variable value and anger mathematicians at the same time.

  • Note that the same syntax is also used for subscribing and unsubscribing to delegates, see all about it here.

  • “+=” is an addition assignment operator.

    • “x += y” would be the same as saying “x = x + y”.

  • “-=” works in exactly the same way for subtractions.

  • You can also find “*=” for multiplications and “/=” for divisions. See a quick example below:
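A small sketch of how these read in practice:

int score = 10;
score += 5;  //Same as score = score + 5; score is now 15
score -= 3;  //score is now 12
score *= 2;  //score is now 24
score /= 4;  //score is now 6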

Logic and Comparison

  • We use “&&” as an AND statement when we want to check that two or more conditions are true.

  • We use “||” as an OR statement if we want to do something when either condition is true. Only one needs to be true.

  • We use “!” as a NOT statement to signify that we want to check the opposite of a condition. In other words, the “!” operator inverts a bool, turning a true into false.

  • When checking whether something is bigger or smaller than a value, we use “<” and “>”.

  • We can also use “<=” to check if something is smaller than or equal, and “>=” to check if it is bigger or equal.

  • We use “==” to check if a variable equals another. See the small example below.
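Here is a small made-up example combining these operators:

int health = 50;
bool hasShield = false;

//&& : both conditions must be true.
if (health > 0 && health < 100)
{
    //We are alive but not at full health.
}

//|| : only one condition needs to be true. ! inverts a bool.
if (health == 100 || !hasShield)
{
    //Either we are at full health or we have no shield.
}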

Ternary expressions

  • This is a compact way to write conditional statements. A boolean expression is evaluated and two possible result expressions are chosen.

  • The basic syntax is: “condition ? consequent : alternative;”

  • If the condition evaluates to true, the consequent expression is used, while the alternative expression is used when the condition evaluates to false.

  • A nice way to remember how ternary operations work is to ask yourself:

    • Is this condition true ? yes : no

  • Example: See below how the if statement has the same meaning as the ternary expression at the bottom.

int input = new Random().Next(-5, 5);
string classify;

//Old fashioned if statement:
if (input >= 0)
{
    classify = "nonnegative";
}
else
{
    classify = "negative";
}

//Ternary Expression shorthand:
classify = (input >= 0) ? "nonnegative" : "negative";

Properties

  • When we want to access a variable from another class, we can make the variable public. That works, but can lead to unintended consequences like a variable being changed from the outside when we don’t want this to happen.

  • Properties act as gatekeepers to variables. We can even add logic within them, and we can make them read-only or write-only, giving us much more control than a simple public variable.

  • See an example below with the standard syntax and then the shorthand version.

//Long way to create a property but allows us additional logic.
public int Health 
{
  get 
  {
    return health;
  }
  set 
  {
    health = value;
  }
}

//Shorthand way to create a property
public int Health { get; set; }

  • You can also create a quick read-only accessor to any private variable that you already have by using the following lambda expression:

        private EventInstance m_fmodInstance;
        public EventInstance FmodInstance => m_fmodInstance;
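As a quick sketch of that “additional logic” idea, here is a hypothetical Health property whose setter clamps incoming values (using Unity’s Mathf.Clamp), next to a read-only lambda property:

private int health;

//The setter clamps incoming values, something a plain public field can't do.
public int Health 
{
  get { return health; }
  set { health = Mathf.Clamp(value, 0, 100); }
}

//Read only: no setter, so other classes can read it but never write to it.
public bool IsDead => health <= 0;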

Adding variable values to strings for debugging (String interpolation)

  • Sometimes it is very useful to print some info containing variable values to the console for debugging.

  • In order to make this more compact and human readable, we can prefix the string with “$” so we can add our variables between curly braces “{}” and avoid the need to break up our string. See an example below:

//Long form
UnityEngine.Debug.LogWarning("The value" + parameter + "is a local parameter.");

//Shorthand form
UnityEngine.Debug.LogWarning($"The value  is a local parameter.");

Learning C# Notes - Part V: Delegates, Events, Actions and Funcs

The Observer Pattern

Usually when we start using Unity and C#, we have to tackle the question of how classes should send information between each other. One of the most common solutions is to just make methods public and call them from one class to another.

This approach is simple and intuitive, but the problem is that it creates dependencies in the code: if we want to remove a class from the project, this can create errors in many others. The Observer pattern solves this issue by separating communication from functionality.

So the basic idea behind this pattern is that some classes will announce that something has happened while others will receive these messages and act on them. This creates a more modular system where an event can be broadcasted and very different systems can then use that information to trigger specific methods. For example, the event of the player dying could be registered by the audio, scoring and achievement systems.

C# and Unity offer a few different ways of using this pattern. It is quite easy to use but, to be honest, not always very intuitive at the start, when you are not familiar with the syntax. Let’s see how it works.

Delegate Basics

You can think of delegates as variable types that can be assigned a method as a value. It is also important to know that multiple methods can be assigned to the same delegate.

You can declare them in a similar way to how you declare methods. They also have a return type and optional parameters. See an example below:

delegate void DelegateTest(float n);

If a method matches the return type and parameters of a delegate we can say they are compatible.

void CompatibleMethod(float n) 
{
    //Method Functionality
}

We can create a delegate instance and set it equal to any compatible method. We can then call that method by using Invoke() after the delegate instance. There is also a shorthand where we skip Invoke() and use the delegate instance directly.

delegate void DelegateTest(float n);

void Start() 
{
  DelegateTest myDelegate = CompatibleMethod;
  myDelegate.Invoke(5f);
  myDelegate(5f); //Shorthand form
}

void CompatibleMethod(float n) 
{
    //Method Functionality
}

Notice how on the first line of the Start() method we assigned our method to the delegate instance without using () after its name. This syntax looks weird for sure, but it hints at the idea that we are not calling the method itself; we are just assigning it, or rather subscribing it, to the delegate.

So, basically, a delegate allows us to store references to methods inside a variable of type delegate. This, in turn, allows us to pass references to methods into other methods. See the example below: we are calling CompatibleMethod in an indirect way, through AnotherMethod and its delegate.

delegate void DelegateTest(float n);

void Start() 
{
  AnotherMethod(CompatibleMethod);
}

void AnotherMethod(DelegateTest myDelegate) 
{
  myDelegate(10f);
}

void CompatibleMethod(float n) 
{
    //Method Functionality
}

So why complicate things like this? Having the ability to pass methods to other methods by using delegates can allow us to write methods in a more compact and flexible way. If we have a few large methods which contain basically the same instructions but they only differ in a small consistent way, that’s a good candidate for using delegates.

We can use a delegate to turn those multiple methods into a single one whose functionality changes as we pass in different small helper methods that are compatible with it. Furthermore, we can then use lambda expressions to make the code even more compact, as in the sketch below.
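As a minimal sketch of that idea, reusing the DelegateTest delegate and AnotherMethod from the example above, we can pass a lambda expression directly instead of declaring a named helper method:

delegate void DelegateTest(float n);

void Start() 
{
  //No named helper method needed: the lambda itself is the compatible method.
  AnotherMethod(n => UnityEngine.Debug.Log("Received: " + n));
}

void AnotherMethod(DelegateTest myDelegate) 
{
  myDelegate(10f);
}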

When I first looked at this kind of code wizardry, it all seemed very strange and cryptic, but if you slowly take apart all the components, you can start to understand it. Having very compact code that uses lambda expressions is cool and all, but the main point of delegates is to give flexibility and scalability to the way we design and build code.

See an example of all this in Sebastian Lague’s video below:

Using Delegates with the Observer Pattern

Sebastian’s example is confined to a single class, but it is not hard to see how making delegates static, or accessible to other classes in some other way, starts to give us a path to using the observer pattern.

Something else to keep in mind is that when we invoke a delegate, the delegate MUST have a method already subscribed to it. Otherwise, we will get a null reference exception. This is why it is good practice to do a null check when invoking:

if (myDelegate != null) 
{
    myDelegate.Invoke(10f);
}

//Or we can use the following shorthand:

myDelegate?.Invoke(10f);

We should also consider delegate return types. When working with the observer pattern, it usually doesn’t make sense to use delegates with a return type, since many different systems could potentially subscribe to the delegate and we would only get the return value of the last method called. It is usually inconvenient to know or keep track of which method subscribed last, so void delegates are used to implement the observer pattern.

There is another useful shorthand that we can use to subscribe and unsubscribe methods to delegates. It is good practice to always unsubscribe when we know we don’t need to listen anymore, which in Unity usually happens when the component is disabled or the game object destroyed.

public delegate void ExampleDelegate();
public ExampleDelegate exampleDelegate;

private void OnEnable() 
{
  exampleDelegate += MyMethod;
  exampleDelegate += ADifferentMethod;
}

private void OnDisable()
{
  exampleDelegate -= MyMethod;
  exampleDelegate -= ADifferentMethod;
}

Events

Now that we have seen how delegates work, we are ready to look at events. In a nutshell, events are special delegates with some specific restrictions which usually reduce the probability of us making mistakes:

  • You can’t directly assign a method to the delegate from an external class using the = operator. The only thing you can do is subscribe to it using the += syntax.

  • You can’t directly call the delegate from another class. So this other class can only receive information from the class containing the delegate but can’t send any to it through the delegate.

As you can see, these restrictions make events a good option for the observer pattern. What they really do is make sure information only flows in the direction we want: from the class broadcasting that something happened to the subscribed classes listening for it. To use an event, you just add the event keyword when you define the delegate instance.

See an example below where a Player class broadcasts when the player has died and an audio class subscribes to this event to trigger some audio. Notice how, since we are using the event keyword, the audio class can never trigger the delegate or assign one of its methods directly with =, which would wipe out any methods subscribed from other classes.

public class Player : MonoBehaviour
{
  public delegate void DeathDelegate();
  public event DeathDelegate deathEvent;

  void Die() 
  {
    deathEvent?.Invoke();
  }
}

public class AudioClass : MonoBehaviour
{
  void Start() 
  {
    FindObjectOfType<Player>().deathEvent += OnPlayerDeath;
  }

  public void OnPlayerDeath() 
  {
    //Play some player death audio
  }

  void OnDestroy() 
  {
    FindObjectOfType<Player>().deathEvent -= OnPlayerDeath;
  }
}

The above example gives a closer look at how the observer pattern would work. Having said that, we still need to see a couple of other delegate types that can be even more useful and convenient to use.

Actions & Funcs

You can think of these as quick ways to create delegates with restrictions that we may find useful.

  • Actions can have input parameters but can’t have return values.

  • Funcs can have both input parameters AND a return value; the return type is specified as the last type parameter between the angle brackets. See the sketch below.
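Here is a quick sketch of the difference (both types live in the System namespace):

//Action: input parameters, no return value.
Action<int> printScore = score => Console.WriteLine("Score: " + score);

//Func: the LAST type parameter is the return type.
//This one takes an int and returns a string.
Func<int, string> classify = n => n >= 0 ? "nonnegative" : "negative";

printScore(42);
string result = classify(-3); //"negative"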

When you look at code examples of people implementing the observer pattern, you will usually see that Actions are the most widely used delegate type. This is because we usually don’t need return values, and Actions can be declared in a compact and convenient way: we use the event keyword followed by Action, which already works as the instance declaration, all in one line. We can also specify parameters by using <> after the Action keyword.

See an example below of two actions, one of them taking an int parameter:

public static event Action myStaticEvent;
public static event Action<int> myStaticEventWithInt;

private void Update() 
{
  myStaticEvent?.Invoke();
  myStaticEventWithInt?.Invoke(12);
}

In Closing

I know all this looks scary at first but once you understand how it works and know the syntax, it can become a very useful tool for sure. See an additional video below that rounds up all the concepts we have talked about and good luck!

Figuring out: Headphones Impedance

I have always wondered about impedance in the context of pro audio vs consumer audio. Don’t get me wrong, this is a deep topic for the audiophile crowd, but that’s not going to be my approach. If you want to get deep into it and chase the absolutely clearest listening experience, have a look at this article that goes deeper into the technical details.

In the meantime, I just wanted to get an overview of the situation so that when I look at my headphones, I have some understanding of what is going on. As you can see, my daily headphones, which I use for almost everything, are a pair of Sennheiser HD25. And there is the impedance staring at us: 70 ohms. Let’s see what this number means.

About Impedance

Remember Ohm’s Law? Voltage = Current * Resistance. You may be tempted to think about impedance in a similar fashion, and it kind of makes sense because impedance also reflects how hard it is for a current to run through a circuit… kinda. The problem is that Ohm’s law in this simple form only applies to DC, that is, direct current. DC is what most electric appliances and machines use, and it was the technology championed by Edison in the War of the Currents of the 1880s.

On the other hand, Tesla defended AC, alternating current, as a better alternative for transporting energy across distances. In an AC circuit, electrons change their direction at a certain frequency, usually 50 or 60Hz. Long story short, Tesla ended up winning, and today we use AC to transport electricity. It is then converted to DC so machines can use it. That’s why there is a transformer on anything that you plug into the wall, and also why the band AC/DC bears that name.

So why do we care, and what’s the relation to audio? Well, since sound is air particles moving back and forth at certain frequencies, all analogue audio signals are AC. So whenever an audio signal goes into a pair of headphones, comes out of a microphone or moves through an analogue mixing desk, that’s AC. Only the audio signal is AC; a mixing desk itself is powered with DC electricity, so don’t confuse the two.

So since audio signals are AC, we can’t just use resistance to measure how hard it is to run them through a system, we must use impedance which takes into account resistance, capacitance and inductance. I won’t go into detail since the math gets much trickier than just Ohm’s law but you can learn more in the article I linked at the beginning.

Implications on Headphones

You can think of impedance as a measure of how inefficient your headphones are at turning a signal into audio level. Generally, the lower the impedance, the easier it is for your headphones to produce a loud signal. Does that mean you should just get headphones with the lowest impedance possible? Not at all!
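To put rough numbers on it (a back-of-the-envelope sketch that treats impedance as plain resistance, which it isn’t quite): the power delivered is P = V² / Z, so from a ballpark 1 volt of phone output:

P = V² / Z
1² / 32Ω  ≈ 31 mW  (low impedance: plenty of level)
1² / 250Ω =  4 mW  (high impedance: much quieter from the same source)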

Higher audio levels are not the only thing we are after; sound quality should also be a big factor. Lower impedance headphones usually have lower audio quality, while higher impedances are better if we want to avoid distortion and improve frequency response and faithfulness to the original source.

So high impedance headphones will be the most crystalline BUT (and this is a big but) you will need much more power to drive them and get a proper signal out of them. This means that in an ideal world, you would have high impedance headphones and a headphone amplifier designed to drive them for the best audiophile experience possible. This is what people call “Impedance Matching”: making sure the impedance levels at the source and destination aren’t too far apart.

This means that you may go and buy expensive high impedance headphones and then find out that they give you a tiny, quiet signal on your phone, your computer or any other consumer level audio product. Not good, particularly if you want or need a loud signal. For those consumer uses, you would be much better off with lower impedance headphones, which is why almost all normie headphones have relatively low impedance.

Again, things are more complicated. Impedance varies across the frequency spectrum and there are other factors like distortion and sensitivity. All of this will contribute to how loud a pair of headphones will be and how accurate their response will be. But for now, let’s just get an intuitive sense of how impedance works in different contexts.

Some Numbers & Perspective

So how do my 70Ω Sennheiser headphones fit on the impedance scale? A good reference is the Beyerdynamic DT770, one of the few pairs of headphones you can buy in 3 different versions which only differ in impedance. These versions are:

  • 32Ω: This is consumer level impedance. It will give good audio levels on phones and computers. As a reference, Apple Earpods are 42.2Ω, which, by the way, doesn’t mean they are better; as I said before, there are other factors to consider, like distortion.

  • 80Ω: This is a good mid level which will still be loud enough with consumer products but offers an overall better audio quality. As you can see, my headphones fit here.

  • 250Ω: And this is now getting into audiophile levels of impedance, where you need to make sure you are using a good headphone amplifier if you want to get enough audio level.

I also wanted to say that in pro audio, headphones are usually not that crucial, since we don’t really use them to mix music or cinema. For that, speakers and a good sounding room are always preferable, since we want a more natural listening experience, and at the end of the day, sound is supposed to propagate through the air. This is why you won’t see studios buying crazy high impedance headphones; that is mostly reserved for the audiophile tribe.

Conclusions

Now you know: lower impedance will produce higher audio levels but with lower audio quality. It is important, then, to match the impedance of the source and the headphones if we want to achieve the best quality/level balance possible.

5 Tips to Improve in Game Audio

Here are some thoughts I had about working in game audio.

Finding another way

It is not unusual for audio to not have as many resources as we would like. The reality is that our discipline is not appreciated in the same way fancy graphics or cool social features are. But this doesn’t mean we can just do mediocre work. Bad audio is noticeable, while good audio is often invisible but really enhances the player experience.

Finding another way means that, as a sound designer, you must work within the constraints you have to make things work, and this usually means making compromises on quality, level of detail and performance. So maybe you can’t really do things the way you initially planned, but you must be resilient enough to go around the obstacles and deliver something great anyway.

Take time to experiment

Audio has personality; it has a spirit. Sounds connect us to nature in an instinctive way; they remind us of animals and weather. When creating audio for a machine, a creature, UI or an environment, we are tasked with giving them a personality, a certain flavour. For this, it can be very helpful to think about what you want to convey and what the function of this thing is in the story and in the world.

Sometimes that’s not enough and you just need to try crazy things, random stuff, and see what sticks. I have created some great sounds like this, but it certainly means you need to be willing to experiment freely, which is not always possible when you need to meet deadlines. So remember to take time to stop and smell the roses, even aimlessly. You will get to results that can’t be achieved any other way.

Use limitations to boost creativity

Don’t see limitations as an obstacle; see them as a way to thrive. Less is more, sure, but it goes deeper than that. When you are limited to, say, a single synth or instrument, or just a few tracks or voices, you are forced to learn the few resources you have deeply, gaining a knowledge and mastery you would never get if you had an arsenal of dozens of plugins to choose from.

Keep in mind the big picture

It is easy to over-focus on what you need to do each day. You make sounds and implement them following a plan, like ticking boxes. This usually happens when you base your work on lists, spreadsheets or Jira tickets. It seems like as long as you tick boxes and cross off tasks, you are progressing. This is needed, sure, but never forget that none of it matters if the overall result is not working.

Always remember the final user and their experience. At the end of the day, nobody cares about how you made that sound, how brilliant that bit of code is, or the fact that you are knocking down tickets. Take a step back and play as a naive player; see what works and what doesn’t.

Be in flux with information

Things are going to change, and fast. Features come and go; they are transformed and expanded. It can be tough to keep track of all of this, particularly since audio is usually left out of these decisions. Setting up good communication and expectations with the team is important, but also remember that game development moves fast and you can’t possibly know every single thing.

You need to find the proper bandwidth of information for each phase of development and keep on top of things, without ever compromising the actual work you need to do. For me, it is helpful to remember that things must be flexible, that nothing is set in stone.