Learning C# Notes - Part VI: Handy Code Operators & Shorthands

Here is a compilation of operators and shorthands that are frequently used in Unity C# code. It's good to know them so you can understand other people's code and also make your own more compact.

Basic Arithmetic

  • These are generally used with float, int and related numeric types.

  • You can use the usual math operators (see the quick sketch after this list):
    + for addition (can also be used for string concatenation)
    - for subtraction
    * for multiplication
    / for division
    % for remainder
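
A minimal sketch of these in action (the variable names are just made up for illustration):

int apples = 7;
int pairs = apples / 2;             // integer division: 3
int leftOver = apples % 2;          // remainder: 1
string label = "Apples: " + apples; // + also concatenates strings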

Incrementing & Decrementing

  • These allow you to quickly modify a value one unit at a time, which is useful for iterators or counters.

  • ++ Increments a variable by one.
    Example: number++;

  • -- Decrements a variable by one. See a quick loop example below.
    Example: i--;
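
A minimal sketch of a typical counter loop:

for (int i = 0; i < 3; i++)
{
    UnityEngine.Debug.Log(i); // prints 0, 1, 2
}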

Arithmetic Assignments

  • These are a compact way to change a variable's value and anger mathematicians at the same time.

  • Note that the same syntax is also used for subscribing and unsubscribing to delegates, see all about it here.

  • “+=” is an addition assignment operator.

    • “x += y” would be the same as saying “x = x + y”.

  • “-=” works in exactly the same way for subtractions.

  • You can also find “*=” for multiplications and “/=” for divisions. See a quick sketch below.
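
A minimal sketch (the score variable is made up for illustration):

int score = 10;
score += 5; // score is now 15
score -= 3; // 12
score *= 2; // 24
score /= 4; // 6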

Logic and Comparison

  • We use “&&” as an AND statement when we want to check that two or more conditions are true.

  • We use “||” as an OR statement if we want to do something when either condition is true. Only one needs to be true.

  • We use “!” as a NOT statement to signify that we want to check the opposite of a condition. In other words, the “!” operator inverts a bool, turning a true into false.

  • When checking that something is bigger or smaller than another value, we use “<” and “>”.

  • We can also use “<=” to check if something is smaller or equal and “>=” to check if it is bigger or equal.

  • We use “==” to check if a variable equals another. See a combined sketch after this list.
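
A minimal sketch combining these (variables made up for illustration):

int health = 50;
bool hasShield = false;

if (health <= 50 && !hasShield)
{
    // Both conditions are true, so this block runs.
}

if (health > 80 || hasShield)
{
    // Neither condition is true, so this block is skipped.
}

bool isFullHealth = health == 100; // comparisons evaluate to bools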

Ternary expressions

  • This is a compact way to write conditional statements. A boolean expression is evaluated and one of two possible result expressions is chosen.

  • The basic syntax is: “condition ? consequent : alternative;”

  • If the condition evaluates to true, the consequent expression is used; if it evaluates to false, the alternative expression is used instead.

  • A nice way to remember how ternary operations work is to ask yourself:

    • Is this condition true ? yes : no

  • Example: See below how the if statement has the same meaning as the ternary expression at the bottom.

int input = new System.Random().Next(-5, 5); // System.Random, since Unity has its own Random class
string classify;

//Old fashioned if statement:
if (input >= 0)
{
    classify = "nonnegative";
}
else
{
    classify = "negative";
}

//Ternary Expression shorthand:
classify = (input >= 0) ? "nonnegative" : "negative";

Properties

  • When we want to access a variable from another class, we can make the variable public. That works, but can lead to unintended consequences like a variable being changed from the outside when we don’t want this to happen.

  • Properties are class members that act as gatekeepers to variables. We can even add logic within them and make them read-only or write-only, giving us much more control than a simple public variable.

  • See an example below with the standard syntax and then the shorthand version.

//Long way to create a property but allows us additional logic.
public int Health 
{
  get 
  {
    return health;
  }
  set 
  {
    health = value;
  }
}

//Shorthand way to create a property
public int Health { get; set; }

  • You can also create a quick read-only accessor to any private variable that you already have by using an expression-bodied property (the lambda-style “=>” syntax):

        private EventInstance m_fmodInstance;
        public EventInstance FmodInstance => m_fmodInstance;

Adding variable values to strings for debugging (String interpolation)

  • Sometimes it is very useful to print some info containing variable values to the console for debugging.

  • In order to make this more compact and human readable, we can use the “$” symbol so we can add our variables between curly braces “{}” and avoid the need to break up our string. See an example below:

//Long form
UnityEngine.Debug.LogWarning("The value " + parameter + " is a local parameter.");

//Shorthand form
UnityEngine.Debug.LogWarning($"The value {parameter} is a local parameter.");

Learning C# Notes - Part V: Delegates, Events, Actions and Funcs

The Observer Pattern

Usually when we start using Unity and C#, we have to tackle the question of how classes should send information between each other. One of the most common solutions is to just make methods public and call them from one class to another.

This approach is simple and intuitive, but the problem is that we create dependencies in the code: if we want to remove a class from the project, this could create errors in many others. The Observer pattern solves this issue by separating communication from functionality.

So the basic idea behind this pattern is that some classes will announce that something has happened while others will receive these messages and act on them. This creates a more modular system where an event can be broadcasted and very different systems can then use that information to trigger specific methods. For example, the event of the player dying could be registered by the audio, scoring and achievement systems.

C# and Unity offer a few different ways of using this pattern. It's quite easy to use but, to be honest, not always very intuitive at the start, when you are not familiar with the syntax. Let's see how it works.

Delegate Basics

You can think of delegates as variable types that can be assigned a method as a value. It's also important to know that multiple methods can be assigned to the same delegate.

You can declare them in a similar way to how you declare methods. They also have a return type and optional parameters. See an example below:

delegate void DelegateTest(float n);

If a method matches the return type and parameters of a delegate we can say they are compatible.

void CompatibleMethod(float n) 
{
    //Method Functionality
}
}

We can create a delegate instance and set it equal to any compatible method. We can then call that method by using Invoke() on the delegate instance. There is also a shorthand where we skip Invoke() and call the delegate instance directly.

delegate void DelegateTest(float n);

void Start() 
{
  DelegateTest myDelegate = CompatibleMethod;
  myDelegate.Invoke(5f);
  myDelegate(5f); //Shorthand form
}

void CompatibleMethod(float n) 
{
    //Method Functionality
}

Notice how on the first line of the Start() method we assigned the method to our delegate instance without using () after its name. This syntax looks weird for sure, but it hints at the idea that we are not calling the method itself; we are just assigning it, or rather subscribing it, to the delegate.

So, basically a delegate allows us to store references to methods inside a variable of type delegate. This, in turn, allows us to pass references to methods inside other methods. See the example below. We are calling CompatibleMethod in an indirect way through AnotherMethod and its delegate.

delegate void DelegateTest(float n);

void Start() 
{
  AnotherMethod(CompatibleMethod);
}

void AnotherMethod(DelegateTest myDelegate) 
{
  myDelegate(5f);
}

void CompatibleMethod(float n) 
{
    //Method Functionality
}

So why complicate things like this? Having the ability to pass methods to other methods by using delegates can allow us to write methods in a more compact and flexible way. If we have a few large methods which contain basically the same instructions but they only differ in a small consistent way, that’s a good candidate for using delegates.

We can use a delegate to turn those multiple methods into a single one whose functionality changes as we pass in different small helper methods that are compatible with it. Furthermore, we can then use lambda expressions to make the code even more compact. See a sketch of this below.
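
Here is a minimal sketch of that idea; the MathOperation delegate and the Apply method are made-up names for illustration:

delegate float MathOperation(float a, float b);

float Apply(MathOperation operation, float x, float y)
{
    return operation(x, y);
}

void Start()
{
    // Instead of writing a separate named method for each case,
    // we pass small lambdas that are compatible with the delegate.
    float sum = Apply((a, b) => a + b, 2f, 3f);     // 5
    float product = Apply((a, b) => a * b, 2f, 3f); // 6
}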

When I first looked at this kind of code wizardry, it all seemed very strange and cryptic, but if you slowly take apart all the components, you can start to understand it. Having very compact code that uses lambda expressions is cool and all, but the main point of delegates is to give flexibility and scalability to the way we design and build code.

See an example of all this in Sebastian Lague’s video below:

Using Delegates with the Observer Pattern

Sebastian's example is confined to a single class, but it's not hard to see how making delegates static, or just accessible to other classes in different ways, can start to give us a path to using the observer pattern.

Something else to keep in mind is the important fact that when we invoke a delegate, the delegate MUST have a method already subscribed to it. Otherwise, we will get a null reference exception. This is why it's good practice to do a null check when invoking:

if (myDelegate != null) 
{
    myDelegate.Invoke(10f);
}

//Or we can use the following shorthand:

myDelegate?.Invoke(10f);

We should also consider delegate return types. When working with the observer pattern, it usually doesn't make sense to use delegates with a return type, since many different methods could potentially subscribe and only the value returned by the last one invoked would be received. It's usually inconvenient to know or keep track of which method subscribed to our delegate last, and so, to implement the observer pattern, void delegates are used.

There is another useful shorthand that we can use to subscribe and unsubscribe methods to delegates. It is good practice to always unsubscribe when we know we don’t need to listen anymore, which in Unity usually happens when the component is disabled or the game object destroyed.

public delegate void ExampleDelegate();
public ExampleDelegate exampleDelegate;

private void OnEnable() 
{
  exampleDelegate += MyMethod;
  exampleDelegate += ADifferentMethod;
}

private void OnDisable()
{
  exampleDelegate -= MyMethod;
  exampleDelegate -= ADifferentMethod;
}

Events

Now that we have seen how delegates work, we are ready to look at events. In a nutshell, events are special delegates with some specific restrictions which usually reduce the probability of us making mistakes:

  • You can't directly assign a method to the delegate from an external class using the = operator. The only things you can do are subscribe to it using the += syntax and unsubscribe using -=.

  • You can’t directly call the delegate from another class. So this other class can only receive information from the class containing the delegate but can’t send any to it through the delegate.

As you can see, these restrictions make events a good option to use with the observer pattern. What they really do is to make sure information only flows in the direction we want, which is from the class broadcasting that something happened to the subscribed classes listening to this. To use an event, you just use the event keyword when you define the delegate instance.

See an example below where a Player class broadcasts when the player has died and an audio class subscribes to this event to trigger some audio. Notice how, since we are using the event keyword, the audio class would never be able to trigger the delegate or assign it directly to one of its methods, which would unsubscribe any other methods assigned from other classes.

public class Player 
{
  public delegate void DeathDelegate();
  public event DeathDelegate deathEvent;

  void Die() 
  {
    deathEvent?.Invoke();
  }
}

public class AudioClass 
{
  void Start() 
  {
    FindObjectOfType<Player>().deathEvent += OnPlayerDeath;
  }

  public void OnPlayerDeath() 
  {
    //Play some player death audio
  }

  void OnDestroy() 
  {
    FindObjectOfType<Player>().deathEvent -= OnPlayerDeath;
  }
}

The above example gives a closer look at how the observer pattern would work. Having said that, we still need to see a couple of other delegate types that can be even more useful and convenient to use.

Actions & Funcs

You can think of these as quick ways to create delegates with restrictions that we may find useful.

  • Actions can have input parameters but can’t have return values.

  • Funcs can have both input parameters AND a return value. The return type is specified as the last type parameter between the angle brackets, so a Func<int, string> takes an int and returns a string (see the sketch after this list).
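
A minimal sketch of both (note they live in the System namespace, so you need "using System;"):

// Action: input parameters but no return value
Action<string> log = message => UnityEngine.Debug.Log(message);
log("hello");

// Func: the LAST type parameter is the return type
Func<int, int, int> add = (a, b) => a + b;
int result = add(2, 3); // 5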

When you look at code examples of people implementing the observer pattern, you will usually see that Actions are the most widely used delegate type. This is because we usually don't need return values and we can declare them in a compact and convenient way. To declare an Action, we use the keyword event AND then Action. This already works as the instance declaration, all in one line. We can also specify parameter types by using <> after the Action keyword.

See an example below of two actions, one of them taking an int parameter:

public static event Action myStaticEvent;
public static event Action<int> myStaticEventWithInt;

private void Update() 
{
  myStaticEvent?.Invoke();
  myStaticEventWithInt?.Invoke(12);
}
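
And here is a minimal sketch of how another class could listen to those static Actions; the Broadcaster class name is hypothetical, so use whatever class actually declares them:

public class Subscriber : MonoBehaviour
{
  private void OnEnable()
  {
    Broadcaster.myStaticEvent += OnSomethingHappened;
    Broadcaster.myStaticEventWithInt += OnNumberReceived;
  }

  private void OnDisable()
  {
    Broadcaster.myStaticEvent -= OnSomethingHappened;
    Broadcaster.myStaticEventWithInt -= OnNumberReceived;
  }

  private void OnSomethingHappened()
  {
    //React to the event
  }

  private void OnNumberReceived(int value)
  {
    //React to the event using the int value
  }
}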

In Closing

I know all this looks scary at first but once you understand how it works and know the syntax, it can become a very useful tool for sure. See an additional video below that rounds up all the concepts we have talked about and good luck!

Figuring out: Headphones Impedance

I have always wondered about impedance in the context of pro audio vs consumer audio. Don’t get me wrong, this is a deep topic for the audiophile crowd, but that’s not going to be my approach. If you want to get deep into it and chase the absolutely clearest listening experience, have a look at this article that goes deeper into the technical details.

In the meantime, I just wanted to get an overview of the situation so that when I look at my headphones, I have some understanding of what is going on. My daily headphones, which I use for almost everything, are a pair of Sennheiser HD25, and there is the impedance printed right on them, staring at us: 70 ohms. Let's see what this number means.

About Impedance

Remember Ohm's Law? Voltage = Current * Resistance. You may be tempted to think about impedance in a similar fashion, and it kind of makes sense because impedance also reflects how hard it is for a current to run through a circuit… kinda. The catch is that plain resistance only tells the whole story for DC, that is, direct current. DC is what most electric appliances and machines run on internally, and it was the technology championed by Edison in the War of the Currents of the 1880s.

On the other hand, Tesla defended AC, alternating current, as a better alternative for transporting energy across distances. In an AC circuit, the current changes direction at a certain frequency, usually 50 or 60 Hz for mains power. Long story short, Tesla ended up winning, and today we use AC to transport electricity. It is then converted to DC so machines can use it. That's why there is a transformer on anything that you plug into the wall, and also why the band AC/DC bears that name.

So why do we care, and what's the relation with audio? Well, since sound is air particles moving back and forth at certain frequencies, all analogue audio signals are AC. So whenever an audio signal goes into a pair of headphones, comes out of a microphone or moves through an analogue mixing desk, that's AC. Note that only the audio signal is AC; the mixing desk itself will be powered with DC electricity. Don't confuse the two.

So since audio signals are AC, we can't just use resistance to measure how hard it is to run them through a system; we must use impedance, which takes into account resistance, capacitance and inductance. I won't go into detail since the math gets much trickier than just Ohm's law, but you can learn more in the article I linked at the beginning.
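
Just to give a flavour of it: for a simple series circuit with resistance R, inductance L and capacitance C, the impedance magnitude at angular frequency ω is |Z| = √(R² + (ωL − 1/(ωC))²). That frequency term is why impedance changes depending on the signal you run through the system.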

Implications on Headphones

You can think of impedance as a measure of how inefficient your headphones are at generating an audio level. Generally, the lower the impedance, the easier it is for your headphones to create a loud signal. Does that mean that you should just get headphones with the lowest impedance possible? Not at all!

Higher audio levels are not the only thing we are after. Sound quality should also be a big factor. Lower impedance headphones usually have lower audio quality while higher impedances are best if we want to avoid distortion and improve frequency response and faithfulness to the original source.

So high impedance headphones will be the most crystalline BUT (and this is a big but) you will need much more power to drive them and get a proper signal out of them. This means that in an ideal world, you would pair high impedance headphones with a headphone amplifier capable of driving them for the best audiophile experience possible. This is what people call “Impedance Matching”: making sure the impedance levels at the source and destination aren't too far apart.

This means that you may go and buy expensive high impedance headphones and then find out that they give you a tiny, quiet signal on your phone, your computer or any other consumer level audio product. Not good, particularly if you want or need a loud signal. For those consumer uses, you would be much better off with lower impedance headphones, which is why almost all normie headphones are relatively low impedance.
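
To put rough numbers on it (ignoring sensitivity differences between models): the power delivered is V²/Z, so at a fixed 1 V RMS source a 32 Ω headphone receives about 31 mW while a 250 Ω one only gets about 4 mW, roughly a 9 dB drop coming from impedance alone.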

Again, things are more complicated. Impedance varies across the frequency spectrum and there are other factors like distortion and sensitivity. All of this will contribute to how loud a pair of headphones will be and how accurate their response will be. But for now, let’s just get an intuitive sense of how impedance works in different contexts.

Some Numbers & Perspective

So how do my 70Ω Sennheiser headphones fit on the impedance scale? A good reference to use is the Beyerdynamic DT770, one of the few pairs of headphones that you can buy in three different versions which only differ in impedance. These versions are:

  • 32Ω: This is consumer level impedance. It will give good audio levels on phones and computers. As a reference, Apple Earpods are 42.2Ω, which by the way doesn't mean they are better; as I said before, there are other factors to consider, like distortion.

  • 80Ω: This is a good mid level which will still be loud enough with consumer products but offers an overall better audio quality. As you can see, my headphones fit here.

  • 250Ω: And this is getting into audiophile levels of impedance, where you need to make sure you are using a good headphone amplifier if you want to get enough audio level.

I also wanted to say that in pro audio, headphones are usually not that crucial, since we don't really use them to mix music or cinema. For this, speakers and a good sounding room are always preferable, since we want a more natural listening experience and, at the end of the day, sound is supposed to propagate through the air. This is why you won't see studios buying crazy high impedance headphones; that is mostly reserved for the audiophile tribe.

Conclusions

Now you know. Lower impedance will produce higher audio levels but with lower audio quality. It is important, then, to match impedance between the source and the headphones if we want to achieve the best quality/level relation possible.

5 Tips to Improve in Game Audio

Here are some thoughts I had about working in game audio.

Finding another way

It's not unusual for audio to not have as many resources as we would like. The reality is that our discipline is not appreciated in the same way fancy graphics or cool social features are. But this doesn't mean we can just do mediocre work. Bad audio is noticeable, while good audio is often invisible but really enhances the player experience.

Finding another way means that, as a sound designer, you must work within the constraints you have to make things work, and this usually means making compromises on quality, level of detail and performance. So maybe you can't really do things the way you initially planned, but you must be resilient enough to go around the obstacles and deliver something great anyway.

Take time to experiment

Audio has personality, it has a spirit. Sounds connect us to nature in an instinctive way; they remind us of animals and weather. When creating audio for a machine, a creature, UI or an environment, we are tasked with giving them a personality, a certain flavour. For this, it can be very helpful to think about what you want to convey and what the function of this thing is in the story and in the world.

Sometimes that’s not enough and you just need to try crazy things, random stuff and see what sticks. I have created some great sounds like this but this certainly means you need to be willing to experiment freely which is not always possible when you need to meet deadlines. So remember to take time to stop and smell the roses, even aimlessly. You will get to results that can’t be achieved any other way.

Use limitations to boost creativity

Don't see limitations as an obstacle, see them as a way to thrive. Less is more, sure, but it's deeper than that. When you are limited to, say, a single synth or instrument, or just a few tracks or voices, you are really forced to learn the few resources you have deeply, and you gain a knowledge and mastery that you would never get if you had an arsenal of dozens of plugins to choose from.

Keep in mind the big picture

It's easy to over-focus on what you need to do each day. You make sounds and implement them following a plan, like ticking boxes. This usually happens when you base your work on lists, spreadsheets or Jira tickets. It seems like as long as you tick boxes and cross off tasks, you are progressing. This is needed, sure, but never forget that none of it matters if the overall result is not working.

Always remember the final user and their experience. At the end of the day, nobody cares about how you made that sound, how brilliant that bit of code is, or the fact that you are knocking down tickets. Take a step back and play as a naive player; see what works and what doesn't.

Be in flux with information

Things are going to change, and fast. Features come and go; they are transformed and expanded. It can be tough to keep track of all of this, particularly since audio is usually left out of these decisions. Setting up good communication and expectations with the team is important, but also remember that game development moves fast and you can't possibly know every single thing.

You need to find the proper bandwidth of information for each phase of development and keep on top of things without compromising the actual work you need to do. For me, it's helpful to remember that things must be flexible, that nothing is set in stone.

Figuring out: Ambisonics

Here are my notes about the world of Ambisonics. This is a new area to me so, following this blog's philosophy, I will try to learn by explaining. Take this as an introduction to the subject.

The basic idea

We usually think about audio formats in terms of channels, with mono and stereo being the most basic and widely used ones. If we open up the 2D space even more, we get surround audio like 5.1 and 7.1. Finally, the last step is to use the full 3D space, and that's where Ambisonics comes in.

The more complexity and channels we have, the harder it is to make systems compatible with each other. In order to solve this, Ambisonics transcends the idea of channels and uses the concept of sound fields, which represent planes of audio in 3D space.

This then allows us to keep the aural information in a “speaker arrangement agnostic” format that can be decoded to any number of speakers at the time of reproduction.

M/S Format

These planes of audio are represented in a special format called B-Format. You can think of this format as a natural extension of the M/S format so let’s start with that.

To get an M/S recording, we first use a figure-of-eight microphone facing sideways to the source (this is the “side”). This microphone will pick up the stereo information. At the same time, we use a cardioid microphone facing the source (this is the “mid”).

When we want to decode these signals into stereo, we just sum mid and side to obtain the left channel, and sum mid with the polarity-reversed side to obtain the right channel. If you think about it, you realize that the “side” signal is basically a representation of the difference between left and right.
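
In formula form: Left = Mid + Side and Right = Mid − Side (usually with some gain normalisation on top). This also shows why mono folds down so cleanly: Left + Right = 2 × Mid, so the side signal cancels out completely.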

But why would we want to record things this way? Why not just record in stereo with an X/Y technique or similar? Recording in M/S has a few advantages. Firstly, we get automatic mono compatibility, since we have the mid signal, which we can use without fear of the phase cancellations that could happen if we summed the channels of an X/Y recording. Additionally, since we can decode the M/S recording into stereo after the fact, we can control how wide the resulting stereo signal is by just adjusting the balance between mid and side during decoding.

B-Format

Ambisonics takes this concept and pushes it into the next dimension, making it 3D by using additional channels to represent height and depth. B-Format is then built from the following channels:

  • W: Contains the sound pressure information, similar to the mid signal in M/S. This is recorded with an omnidirectional microphone.

  • X: Contains the front minus back pressure gradient. Recorded by a figure of eight microphone.

  • Y: Contains the left minus right pressure gradient, similar to the side signal in M/S. Recorded by a figure of eight microphone.

  • Z: Contains the top minus bottom pressure gradient. Recorded by a figure of eight microphone.

Note: A-Format is the name for the raw audio from an ambisonic recording, that is, the individual signals from each microphone capsule, while B-Format is used once all these signals have been combined into a single set.

Ambisonic Orders

(Image: the spherical harmonic components per ambisonic order. The top row shows the W component, the second row shows X, Y and Z, and additional rows show higher ambisonic orders for higher resolutions.)

Using the B-Format described above works but comes with some drawbacks. The optimal listener position is quite small, and results won't sound very natural outside it. Also, diagonal information is not very accurate, since it has to be inferred from the boundaries between planes.

A solution to these issues is to increase resolution by adding more selective directional components which, instead of using traditional polar patterns, use other specific ones, resulting in a signal set that contains denser aural information.

There is really no theoretical limit to how many additional components we can add to improve the resolution, but of course there are clear practical limits. An order-n set needs (n + 1)² channels, so a third order ambisonics set would use 16 tracks, and it's easy to see how hard drive space and microphone placement can quickly become a problem.

Decoding B-Format

Regardless of the ambisonic order we use, the important thing to keep in mind is that the resulting recording is not channel dependent. We can build the sonic information at any point on a 3D sphere just by knowing the angles to that point.

This allows us to create virtual microphones in this 3D space, which we can then map to any number of speakers. This is very powerful because once we have an ambisonics recording, we can play it on any speaker configuration, preserving as much of the spatial information as the reproduction system allows. See a sketch of the math below.
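
As a sketch under the traditional first-order B-Format convention (where W is recorded 3 dB down), a virtual microphone pointing at azimuth θ and elevation φ, with a polar pattern parameter p (0 = figure-of-eight, 0.5 = cardioid, 1 = omni), can be derived as: S = p·√2·W + (1 − p)·(X·cosθ·cosφ + Y·sinθ·cosφ + Z·sinφ).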

If the final user is on headphones, a binaural signal would be the result of the decoding, while the same source files could be used to decode a 3D Dolby Atmos mix for cinemas.

Nowadays, you can find a big selection of ambisonic plugins for your DAW, so you can play around with B-Format files, including encoding and decoding them to and from any multichannel format you can imagine.

Use in media

Ambisonics was created in the 70s but never saw much use in mainstream media. This is now changing with the advent of VR experiences, where 3D audio makes a lot of sense since the user can move around the scene, focusing on different areas of the soundscape.

In the area of cinematic experiences, ambisonics achieves a similar result to Dolby Atmos or Auro-3D, but using different methods. See my article about Atmos to learn more about this.

(Image: Google's Resonance Audio allows you to use ambisonics in FMOD.)

Regarding video games, or interactive audio in general, ambisonics is a great fit. You can implement B-Format files in middleware like FMOD or Wwise and also in game engines like Unity. This gives you the most flexibility, since the ambisonics format will be decoded in real time into whatever the user is using to reproduce audio, and this decoding will react in real time to their position and orientation, which is particularly awesome for VR.

In closing

There is much more to learn about this, so I hope I get the chance to work with Ambisonics soon. I'm sure there are many details to keep in mind once you are hands-on with these formats, and I will try to document what I learn on this blog as I go.