Exploring Sound Design Tools: Reformer Pro

Krotos Audio's Reformer Pro is a unique tool for sound design. Here is a look at the software from a practical, everyday perspective: how it can improve your workflow and how it can spice things up on the creative side of sound design. I encourage you to grab the demo version and follow along.

Technology

Basically, Reformer takes an input (another recording or a live microphone signal) and uses its frequency and dynamics content to trigger samples from a certain library, creating a new hybrid output.

[Diagram: Reformer signal flow]

In other words, it allows you to "perform" audio libraries in real time like a foley artist performs props.
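Krotos doesn't publish the details of its analysis engine, so here is only a minimal, hypothetical sketch of the general idea: follow the input's amplitude envelope and use it to decide when, and how loudly, to fire grains from a library. None of the names or thresholds below come from Reformer itself; it is just a way to picture the input-drives-library concept.

```python
import numpy as np

def envelope(signal, sr, attack_ms=5.0, release_ms=50.0):
    """Simple one-pole envelope follower (an assumption, not Krotos's analysis)."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = att if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def reform(input_sig, library, sr, hop=1024, gate=0.05):
    """Fire a random grain from the library wherever the input is loud enough,
    scaled by the input level: a toy 'perform the library with another sound'."""
    env = envelope(input_sig, sr)
    longest = max(len(grain) for grain in library)
    out = np.zeros(len(input_sig) + longest)
    rng = np.random.default_rng()
    for start in range(0, len(input_sig), hop):
        level = env[start]
        if level > gate:
            grain = library[rng.integers(len(library))]
            out[start:start + len(grain)] += grain * level
    return out[:len(input_sig)]
```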

Versions

Reformer uses a freemium pricing model and comes in two flavours: vanilla and Pro. The first is completely free, but it only works with certain official paid libraries. These can be purchased from the Krotos Audio Store, where you'll find a huge selection of different libraries.

The Pro version uses a paid subscription model and offers more advanced features. This is the one I'll be covering in this post. It allows you to load up to four libraries at the same time and mix between them in real time (the free version only lets you load one library per plugin instance). More importantly, it also gives you the power to create your own Reformer libraries using your own sounds.

Interface

As you can see on the right-hand side, Reformer Pro's controls are quite simple and self-explanatory. Nevertheless, here are some features worth mentioning:

Since you can load four libraries at a time, the X/Y pad on the left-hand side of the plugin allows you to mix and mute them independently.

The Response value (bottom left corner) changes how quickly Reformer processes incoming audio. In general, faster responses work better with sudden transients and impacts, while slower values work better with longer sounds. If you notice undesired clicks or pops, this is the first thing you should try tweaking.

The Playback Speed works as a sort of pitch control, allowing you to change the character and size of the resulting signal.

Reformer Workflows

As you can see, Reformer offers an imaginative way of manipulating sounds, but how can this be helpful in the context of everyday sound design and mix work? Here are some ideas:

  • Sync: To quickly lay down effects in sync with the picture using your voice or some foley props. For example, covering a creature's vocalizations by hand is always very time-consuming, and these kinds of tasks are probably where Reformer shines the most.

  • Substitute: Imagine you have all the FX laid down for a certain object or character and now you have to change all of them to a different material or style. In this case, you could keep the original audio, since it has the correct timing, and use it to drive a Reformer library with the proper sounds.

  • Layer: Once you've established a first layer for a sound, you can use Reformer to add more layers that will be perfectly in sync with no effort.

  • Make the most of a limited set of sounds: Sometimes you find the perfect sound for something but you don't have enough iterations to cover everything. You can create a Reformer library with these few sounds and, by playing with the playback speed, response time and wet/dry controls, get the most out of them in terms of different articulations and variations.


Creating your own libraries

Reformer Pro includes an Analysis Tool that allows you to create custom libraries with your own audio content. I won't go into much detail about how to do this, since the manual and this video cover the topic perfectly and the whole process is surprisingly fast and easy. I encourage you to try creating your very own library too.

Ideally, you should use several sounds that follow a sonic theme so you can have a cohesive library. At the same time, these sounds need to be varied enough in terms of frequency and volume content so you can cover as many articulations as possible.

From a technical standpoint, make sure your files are high resolution, clean, close-mic'd and normalized.

As an example, I created a ghostly, sci-fi monster voice library using sounds I made with Paulstretch (see my Paulstretch tutorial here).

Below are some of the original samples that I used to build the library. As you can hear, I tried to mix different vocalizations and frequencies:

And here is how the finished library behaves and sounds when throwing different stuff at it. The first sounds are the result of monster-like vocalizations, and you can hear how the library responds with different combinations of timbres. The last sound in the clip is interesting because it is the result of the library responding to a ratchet or clicking sound. As you can see, it is always worth throwing weird stuff at Reformer to see how it responds.

You can find this library, ready to use with Reformer, in the link below, so give it a go:
Ghostly Monster Reformer Library.

Reformer as a creative tool for sound design

In my view, Reformer is not specifically designed for creative sound design, as it lacks depth in terms of how much you can manipulate and control the final results. I miss having some control over how the algorithm creates the output signal, in a similar way to Zynaptiq's Morph plugin. But then again, I understand sonic exploration is not the main aim of Reformer. Having said that, you can still achieve interesting designs by mixing together elements from different kinds of sounds.

For example, we can use a recording with some interesting transients, like a rattling noise, to drive different libraries. Here is the result with a bell:

As you can hear, Reformer takes the volume information and applies it to the bell timbre. And here is a hum plus the same rattle creating some sort of fluttering engine or mechanical insect sound. Just for fun, I also added a Doppler effect for movement:

Being able to control any sound with your own source of transients opens a huge window of possibilities. For example, you could use a bicycle wheel as an instrument to perform different movements and articulations. Pretty cool.

I'm just scratching the surface here. There are many more creative ideas that I would like to try. The demo version only runs for 10 days, so make sure you can really go for it during that time.

Conclusions

Reformer is a very innovative tool that definitely makes you think about sound design in a different way. Being able to sync and swap sounds on the fly is probably where Reformer shines the most, allowing you to perform recorded libraries live as a foley artist would. Definitely worth a try.

Using Shotgun Microphones Indoors

Note: This is an entry I recovered from the old version of this blog and, although it is around 5 years old (!), I still think the information is relevant and interesting. So here is the original post with some grammar and punctuation fixes. Enter 2012 me:

So I have been researching an idea that I have been hearing for a while:

"It’s not a good idea to use a shotgun microphone indoors."

Shotgun microphones

The main goal of these devices is to enhance the on-axis signal and attenuate the sound coming from the sides. In other words, to make the microphone as directional as possible in order to avoid unwanted noise and ambience.

To achieve this, the system cancels unwanted side audio by delaying it; the operating principle is based on phase cancellation. At first, the system had a series of tubes of different sizes that allowed the on-axis signal to arrive directly while forcing the off-axis signals to arrive delayed. This design, created by the prolific Harry Olson, eventually evolved into the modern shotgun microphone.

Indirect signals arrive delayed. Sketch by http://randycoppinger.com/

In Olson's original design, improving directivity meant adding more and more tubes, making the microphone too big and heavy to be practical. To solve this, the design evolved into a single tube with several slots that behaved in an equivalent manner to the old additional tubes. These slots made the off-axis sound waves hit the diaphragm later, so when they were combined with the direct sound, cancellation occurred, effectively boosting the on-axis signal.
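To see why delaying and summing causes cancellation (and also the coloration mentioned below), here is a small sketch that plots the magnitude response of a direct wave summed with a single delayed copy. A real interference tube combines many different path lengths at once, so this is a drastic simplification, and the 10 cm extra path is just an assumed example:

```python
import numpy as np
import matplotlib.pyplot as plt

c = 343.0                        # speed of sound in m/s
extra_path = 0.10                # assumed extra distance travelled by an off-axis wave (m)
tau = extra_path / c             # resulting arrival delay in seconds

freqs = np.linspace(20, 20_000, 4000)
# Direct wave plus one delayed copy: H(f) = 1 + e^(-j 2*pi*f*tau)
magnitude = np.abs(1 + np.exp(-1j * 2 * np.pi * freqs * tau))

plt.semilogx(freqs, 20 * np.log10(magnitude + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB)")
plt.title("Direct + delayed copy: cancellation notches (comb filtering)")
plt.show()
```

The deep notches fall where the delay equals half a wavelength; those are the frequencies the tube attenuates off axis, and the same mechanism is what colours the sound when you drift off axis.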

This system has its limitations. The tube needs to be long if we want to cancel low enough frequencies. For example, a typical 30 cm (12″) microphone will start behaving like a cardioid (with a rear lobe) below 1,413 Hz. If we want to go lower, the microphone needs to become impractically big and heavy. Like this little fellow:

Electro-Voice 643, a 2-metre beast that kept its directionality as low as 700 Hz. Call for a free home demonstration!

On the other hand, making the microphone longer makes the on-axis angle narrower, so the more directional the microphone is, the more important correct axis alignment becomes. The phase cancellation principle also brings consequences like comb filtering and undesirable coloration when we go off axis. This can work against us when it is hard to keep the microphone in place, which is why these microphones are usually operated by hand or on cranes or boom poles.

In this simplified Sennheiser 416 polar pattern, we can see the directional high frequencies (in red) curling in at the sides. The mid frequencies (in blue) show a behaviour somewhere between the highs and a typical cardioid pattern (pictured in green) with a rear lobe.

[Polar pattern diagram]

This other pattern shows an overall shotgun microphone polar pattern. The side irregularities and the rear lobe are a consequence of the interference system.

Indoor usage

The multiple reflections in a reverberant space, especially the early reflections, will alter how the microphone interprets the signals that reach it. Ideally, the microphone, depending on the angle of incidence, will determine whether a sound is relevant (wanted signal) or just unwanted noise. When both the signal and the noise get reflected by nearby surfaces, they enter the microphone at "unnatural" angles (if we consider the direct sound trajectory to be the natural one). The noise is then not properly cancelled, since it is not correctly identified as noise. Moreover, part of the useful signal will be cancelled because it is identified as noise.

For that reason, shotgun microphones will work best outdoors or at least in spaces with good acoustic treatment.

Another aspect to keep in mind is the rear lobe that these microphones have. As we saw earlier, this lobe mainly captures low frequencies, so, again, a bad-sounding room that reinforces certain low frequencies is something we want to avoid when using a shotgun microphone. With a low ceiling, we are sometimes forced to keep the microphone very close to it, so the rear lobe and the proximity effect combine and can make the microphone sound nasty. This is not a problem on a professional movie set, where you have high ceilings and good acoustics. In fact, shotgun microphones are a popular choice in these places.

Lastly, a shotgun's size can be problematic to handle in small places, especially when we want the precision to stay on axis.

The alternative

So, for indoor work, a better option would be a hypercardioid pencil microphone. They are considerably smaller and easier to handle in tight spaces, and more forgiving in axis placement. Moreover, they don't have an interference tube, so we won't get unwanted coloration from room reflections.

It is worth noting that these microphones still have a rear lobe that affects even the mid-high frequencies, although it is not as pronounced.

So hypercardioid pencil microphones are a great choice for indoor recording. Compared to shotguns, we are basically trading directionality for a better frequency response and a smaller size.

Exploring Sound Design Tools: Paulstretch

Have you heard this?

That video was, years ago, my introduction to "Paul's Extreme Sound Stretch", or just Paulstretch for short, a tool created by Paul Nasca that allows you to stretch audio to ridiculously cosmic lengths.

Some years ago it was fashionable to grab almost anything, from pop music to Simpsons audio snippets, stretch it 800% and upload it to YouTube. When the dust settled, we were left with an amazing free tool that has been used extensively by musicians and sound designers. Let's see what it can do.

I encourage you to download Paulstretch and follow along:

Windows - (Source)
Mac - (Source)

The stretch engine

The user interface may seem a bit cryptic at first glance, but it is actually fairly simple to use. Instead of going through every section one by one, I will show how different settings affect your sounds with actual examples. For a more exhaustive view, you can read the official documentation and this tutorial before diving in.

As you can see above, there are four main tabs on the main window: Parameters, Process, Binaural beats and Write to file. I'm just going to focus on the most useful and interesting settings from the first two tabs.

Under Parameters, you can find the most basic tools to stretch your sounds. The screenshot shows the default parameters when you open the software and import some audio. 8x is the default stretch value, which may explain why so many of those YouTube videos were using an 800% stretch.

The stretch value lets you set how much you want to stretch your sound. There are three modes here: Stretch and Hyperstretch make sounds longer (be careful with Hyperstretch, because you can create crazily long files with it), while Shorten does the opposite and makes sounds shorter. If you want to make a sound infinite, you can freeze it in place to create an endless soundscape with the "freeze" button just to the right of the play button.

Below the stretch slider, you can see the window size in samples. This parameter can have quite a profound impact on the final result. Paulstretch breaks the audio file up into multiple slices, and this parameter sets the size of those slices, affecting the character of the resulting sound, as you will hear below.
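Paul Nasca has described the algorithm publicly, and the core idea is compact: take overlapping windowed slices, keep each slice's magnitude spectrum but randomise its phases, and overlap-add the results while stepping through the input more slowly than through the output. Here is a stripped-down sketch of that idea (not the actual implementation; it skips Nasca's exact window shape, onset handling and other refinements):

```python
import numpy as np

def paulstretch_lite(samples, stretch=8.0, window_size=7324):
    """Rough phase-randomisation stretch in the spirit of Paulstretch."""
    window_size += window_size % 2              # keep the FFT size even
    win = np.hanning(window_size)
    out_hop = window_size // 2                  # 50% overlap on the output
    in_hop = out_hop / stretch                  # advance the input 'stretch' times slower
    out = np.zeros(int(len(samples) * stretch) + 2 * window_size)
    rng = np.random.default_rng()

    in_pos, out_pos = 0.0, 0
    while int(in_pos) + window_size <= len(samples):
        slice_ = samples[int(in_pos):int(in_pos) + window_size] * win
        spectrum = np.abs(np.fft.rfft(slice_))              # keep the magnitudes only...
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))   # ...and invent new phases
        slice_ = np.fft.irfft(spectrum * np.exp(1j * phases)) * win
        out[out_pos:out_pos + window_size] += slice_
        in_pos += in_hop
        out_pos += out_hop
    return out[:out_pos] / (np.max(np.abs(out)) + 1e-12)
```

The window_size argument is exactly the slice size discussed above: it decides how much of the original survives each phase scramble.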

Let's explore how all these settings will affect different audio samples. First, here is a recording of my voice on the left and the stretched version with default values on the right hand side:

Cool. As you can see in the file name above, 8X is the stretch value, while 7.324K is the window size in samples. Notice that the end of the file Paulstretch created cuts off abruptly. This can be fixed by using lower window size values to create a smoother fade-out. This is the classic Paulstretch sound: kind of dreamy, clean and with no noticeable artefacts. You will also notice that, although the original is mono, the stretched version feels more open and stereo.

Just for fun, let's see how the Pro Tools and iZotope RX 6 time-stretch algorithms deal with an 8x stretch:

This kind of "artefacty" sound is interesting, useful and even beautiful in its own way. But in terms of cleanly stretching a sound without massively changing its timbre, Paulstretch is clearly the way to go.

Let's now play with the window size value and see how it affects the result. Intermediate values seem to be the cleanest: we are just extending the sound in the most neutral way possible. Lower values (under approximately 3K) have poor frequency resolution, introducing all sorts of artefacts and a flanger-like character. Here are a couple of examples of low values applied to the same vocal sample:

Using a different recording, we get a whole new assortment of artefacts. Below, you can see the original recording on the left, the processed version with the default, dreamy settings in the centre and, lastly, on the right, a version with a low window value that seems to summon Beelzebub himself. Awesome.

On the other hand, higher values (over approximately 15K) are better at frequency resolution, but the time resolution suffers. Since the chunks are bigger, the frequency content is more accurate and faithful to the original sound, but in terms of time everything is smeared into a uniform texture, with timbres and characters from different sections of the original sample blending together. So it doesn't really make sense to use high values with short, homogeneous sounds. Longer and more heterogeneous sounds will yield more interesting results, as in that case different frequencies will be mixed together.
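The trade-off is the usual FFT one: a slice lasts window_size / sample_rate seconds, while the frequency resolution is sample_rate / window_size. Assuming 44.1 kHz material (the exact figures scale with your file's sample rate):

```python
sample_rate = 44_100                       # assumed sample rate
for window in (2_048, 7_324, 16_384, 65_536):
    slice_seconds = window / sample_rate   # how much audio gets smeared together
    bin_hz = sample_rate / window          # how finely pitch is resolved
    print(f"{window:6d} samples -> {slice_seconds:4.2f} s slices, ~{bin_hz:5.1f} Hz per bin")
```

Small windows give sharp timing but blurry pitch; huge windows give precise pitch but smear everything inside the slice together.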

You can hear below an example with speech. Again, original on the left, dreamy default values in the centre and high values on the right. You can still understand syllables and words with a lower window value (centre sample), but with a 66K value the slices are around 2 seconds long, so the different vocal sounds blend together into an unintelligible texture.

Basically, high window values are great for creating smearing textures from heterogeneous audio. Here is another example to help you visualize what the window size does.

On the left, you have a little piece of music with two very different sections: a music box and a drum and bass loop, each around 3-4 seconds long. If we use a moderate window size (centre sample below), we hear a music box texture and then a drum texture. The individual notes are blended together, but we still get a sense of the overall harmony. On the third sample (right), we use a window size that yields a slice longer than 4 seconds, resulting in a blended texture of both the music box and the drums.

Not only can you choose the window size, but also the window type: sort of the shape of the slices. Rectangular and Hamming windows deal better with frequency but introduce more noise and distortion. Blackman types produce much less noise but go nuts with the frequency response. See some examples below:

Adding flavour

Jumping now to the Process tab, here we have several very powerful settings for sound design.

Harmonics removes all frequencies from the sample except for a fundamental frequency and a number of harmonics that you can set. You can also change the bandwidth of these harmonics. A lower number of harmonics and a lower bandwidth will yield more tonal results, since the fundamental frequency will dominate the sound, while higher values will be closer to the original source, with more frequency and noise content.
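I don't know how Paulstretch implements this internally, but the audible result is close to keeping only narrow bands around a fundamental and its multiples and discarding everything else. A hypothetical sketch of that idea, applied to one slice:

```python
import numpy as np

def keep_harmonics(slice_, sr, f0=110.0, n_harmonics=8, bandwidth_hz=20.0):
    """Zero every FFT bin except narrow bands around f0 and its harmonics.
    A conceptual illustration only, not Paulstretch's actual processing."""
    spectrum = np.fft.rfft(slice_)
    freqs = np.fft.rfftfreq(len(slice_), d=1.0 / sr)
    mask = np.zeros(len(freqs))
    for k in range(1, n_harmonics + 1):
        mask += np.abs(freqs - k * f0) < bandwidth_hz / 2
    return np.fft.irfft(spectrum * np.clip(mask, 0.0, 1.0), n=len(slice_))
```

With few harmonics and a narrow bandwidth almost nothing but the fundamental survives, which is why those settings sound so tonal.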

See the samples below: the first two are the original recording on the left and the stretched version with no harmonic processing on the right. I left the window size fairly low so we get some interesting frequency warping there. Further below, you can hear several versions with harmonic processing applied and increasingly higher bandwidths. Hear how the first one is almost completely tonal and then more and more harmonic and noise content creeps in. It's surprising how different they are from each other.

This is definitely very interesting for creating drones and soundscapes. Paulstretch behaves here almost like a synthesizer; it seems to create frequencies that were not there before. For example:

Also worth mentioning are the pitch controls. Pitch shift simply tunes the pitch, like any other pitch shift plugin. Frequency shift creates a dissonant effect by shifting all frequencies by a fixed amount. Very cool for scary and horror SFX.

The octave mixer creates copies of your sound and shifts them to certain octaves that you can blend in. Great for calming vibes. See examples below:

 

Lastly, the spread value is supposed to increase the bandwidth of each harmonic, which results in a progressive blend of white noise into the signal as you push the setting further. The cool thing is that the white noise follows the envelope of your sound, which could be used to create ghostly or alien speech. Here are some examples with no spread on the left and spread applied on the right:
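Whatever the exact internals, the audible result is white noise riding the envelope of the source, which you can approximate yourself with a few lines — assuming a simple RMS envelope, so again a sketch rather than Paulstretch's real spread processing:

```python
import numpy as np

def ghost_spread(signal, amount=0.5, frame=1024):
    """Blend in white noise shaped by the signal's own envelope (rough approximation)."""
    rng = np.random.default_rng()
    noise = rng.uniform(-1.0, 1.0, len(signal))
    env = np.sqrt(np.convolve(signal ** 2, np.ones(frame) / frame, mode="same"))
    return (1.0 - amount) * signal + amount * noise * env
```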

And that's it from me! I hope you now have a good idea of what Paulstretch can do. I see a lot of potential for creating drones, ghostly horror soundscapes, sci-fi sounds and cool effects for the human voice. Oh, and just stretching things up to 31 billion years is nice too.

Mini Library

Here is a mini library I've put together with some of the example sounds, some extended versions and a bunch of new ones. It includes creatures, drones, voices and alien winds. Feel free to use them in your projects.

An Introduction to Game Audio

Talking to a fellow sound designer about game audio, I realised that he wasn't aware of some of the differences between working on audio for linear media (film, animation, TV, etc) and for interactive media (video games).

So this post is kind of my answer to that: a brief introduction to the creative depth of video game sound design. It is aimed at audio people who are maybe not very familiar with the possibilities this world offers, or who just want to see how it differs from, say, working on film sound design.

Of course there are many differences between linear and interactive sound design, but perhaps the most fundamental, and the most important for somebody new to interactive sound design, is the concept of middleware. In this post, I’ll aim to give beginners a first look at this unfamiliar tool.

I'll use screenshots from Unearned Bounty, a project I've been working on for around a year now. Click on them to enlarge. This game runs on Unity as the engine and Fmod as the audio middleware.

Linear Sound vs Interactive Sound

Video games are an interactive medium, and this is going to influence how you approach the sound design work. In traditional linear media, you absolutely control what happens in your timeline. You can be sure that, once you finish your job, every time anyone presses play that person is going to have the same audio experience you originally intended, provided their monitoring system is faithful.

Think about that. You can spend hours and hours perfecting the mix to match a scene, since you know it is always going to look the same and it will always be played in the same context. Faraway explosion? Let's drop a distant explosion there, or maybe make a closer FX sound further away. No problem.

In the case of interactive media, this won't always be true. Any given sound effect can be affected by game world variables, states and context. Let me elaborate on those three factors using the explosion example again. In the linear case, you can design the perfect explosion for the shot, because it is always going to be the same. Now let's look at the case of a game:

  • The player could be right next to the explosion or miles away. In this case, the distance would be a variable that affects how the explosion is heard. Maybe the EQ, reverb or compression should be different depending on it.

  • At the same time, you probably don't want the sound effect to be exactly the same if it comes from an ally instead of the player. In that case, you'd prefer a simpler, less detailed SFX. One reason for this could be that you want to enhance the sound of what the player does so her actions feel clearer and more powerful. Here, who the effect belongs to would be a state.

  • Lastly, it is easier to make something sound good when you always know the context. In video games, you may not always know or control which sounds will play together. This forces you to play-test to make sure that sounds work not only in isolation but also together, and in the proportions the player will usually hear them. Different play styles will also alter these proportions. So, following our example, your explosion may sound awesome, but if dialogue is usually playing at the same time and getting lost in the mix, you'd need to account for that.

After seeing this, linear sound design may feel more straightforward, almost easy in comparison. Well, not really. I'll explain with an analogy. Working on linear projects, particularly movies, is like writing a book. You can really focus on developing the characters, plot and style. You can keep improving the text and making rewrites until you are completely satisfied. Once it is done, your work will always deliver the same experience to anyone who reads the book.

Interactive media, on the other hand, is closer to being a game master preparing a D&D adventure for your friends. You may go into a lot of detail with the plot, characters and setting, but as any experienced GM knows, players will be somewhat unpredictable. They will spend an annoying amount of time exploring some place you didn't give enough attention to, and then they will circumvent the epic boss fight with some creative rule bending or a clever outside-the-box idea.

So, as you can see, being a book writer or working in linear sound design gives you the luxury of really focusing on the details you want, since the consumer's experience and interaction with your creation is going to be closed and predictable. In both D&D and interactive media, you are not really handing the final experience to the players; you are just providing the ingredients and the rules that will create a unique experience every time.

Creating those ingredients and rules is our job. Let's explore the tools that will help us with this epic task.

Audio Middleware and the Audio Event

Here you can see code being scary.

Games, or any software for that matter, are built from a series of instructions that we call code. This code manages and keeps track of everything that makes a game run: graphics, internal logic, connecting to other computers through the internet and, of course, audio.

The simplest way of connecting a game with some audio files is just calling them from the code whenever we need them. Let's think about an FPS game. We would need a sound every time the player shoots her shotgun. So, in layman's terms, the code would say something like: "every time the player clicks her mouse to shoot, please play this shotgun.wav file that you will find in this particular folder". And we don't even need to say please, since computers don't usually care about such things.
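In code, that rudimentary form of implementation really is about this short. A Python/pygame example purely for illustration (a real game would likely be doing the same thing from C++ or C#):

```python
import pygame

pygame.mixer.init()
shotgun = pygame.mixer.Sound("shotgun.wav")   # one hard-coded file in a particular folder

def on_shoot_click():
    # The exact same file, volume and pitch, every single time the player shoots
    shotgun.play()
```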

This is how all games used to be made, and it is still pretty much in use. This method is very straightforward but also very limited. Incorporating the audio files into the game is a process usually called implementation, and this is its most rudimentary form. The thing is, code can be a little scary at first, especially for us audio people who are not very familiar with it. Of course, we can learn it, and it is an awesome tool if you plan to work in the video game industry, but at the end of the day we want to be focusing on our craft.

Middleware was created to help us with this and to fill the gap between the game code and the audio. It serves as a middle man, hence the name, allowing sound designers to focus on the sound design itself. In our previous example, the code pointed to the specific audio files needed at any given moment. Middleware does essentially the same thing but puts an intermediary in the middle of the process. This intermediary is what we call an audio event.

An example of audio events managing the behaviour of the pirate ships.

An audio event is the main functional unit that the code will call whenever it needs a sound. It could be a gunshot, a forest ambience or a line of dialogue. It could contain a single sound file or dozens of them. Any time something makes a sound, it is triggering an event. The key thing is that, once the code is pointing to an event, we have control: we can make it sound the way we want, we are in our territory.

And this is because middleware uses tools that we audio people are familiar with. We'll find tracks, faders, EQs and compressors. Keep in mind that these tools are still essentially code; middleware just offers us the convenience of having them in a comfortable and familiar environment. It brings the DAW experience into the game development realm.
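Conceptually, the change is just one level of indirection: the game stops asking for shotgun.wav and starts asking for a named event, and everything behind that name belongs to the sound designer. A toy sketch with made-up names (this is not the Fmod or Wwise API):

```python
import random

def play_file(path):
    print(f"playing {path}")      # stand-in for the engine's actual playback call

# The sound designer owns this mapping and can change it without touching game code.
audio_events = {
    "player_shotgun": ["shotgun_01.wav", "shotgun_02.wav", "shotgun_03.wav"],
}

def trigger(event_name):
    play_file(random.choice(audio_events[event_name]))

# The game code now only knows the event name:
trigger("player_shotgun")
```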

Audio middleware can be complex and powerful, and I'd need a whole series of posts to tell you what it can do and how. So, for now, I'm just going to go through three main features that should give you an idea of what it has to offer.

I - Conventional Audio Tools within Middleware

Middleware offers a familiar environment with tracks, timelines and tools similar to the ones found in your DAW. Things like EQ, dynamics, pitch shifters or flangers are common.

This gives you the ability to tweak your audio assets without going back and forth between different applications. You will probably still start in your DAW and build the base sounds there using conventional plugins, but being able to also do processing within the middleware gives you flexibility and, more importantly, a great amount of power, as you'll see later.

II - Dealing with Repetition and Variability

The player may perform some actions over and over again. Think about footsteps, for example. You generally don't want to play the exact same footstep sound every single time. Even a set of, say, 4 different footsteps is going to feel repetitive eventually. This repetitiveness is something older games suffer from and that modern games generally try to avoid. The original 1998 Half-Life, for example, uses a set of 4 footstep sounds per surface. Having said that, it can still be used when looking for a nostalgic or retro flavour, the same way pixel art is still used.

Middleware offers us tools to make several instances of the same audio event sound cohesive but never exactly identical. The most important of these tools are variations, layering and parameter randomization.

The simplest approach to avoiding repetition is just recording or designing several variations of the same effect and letting the middleware choose randomly between them every time the event is triggered. If you think about it, this imitates how reality behaves. A sword impact or a footstep is not going to sound exactly the same every single time, even if you really try to use the same amount of force and hit the same place.

You could also break a sound up into different components or layers. For example, a gunshot could be divided into the shot impact, its tail and the bullet shell hitting the ground. Each of these layers could also have its own variations. So now, every time the player shoots, the middleware randomly chooses an impact, a tail and a bullet shell sound, creating a unique combination.

Another cool thing to do is to have an event with a special layer that is triggered very rarely. By default, every layer in an event has a 100% probability of being heard, but you can lower this value to make it less frequent. Imagine, for example, a power-up sound with an exciting extra sound effect that only plays 5% of the times the event is called. This is a way to spice things up and also reward players who spend more time playing.

An additional way of adding variability is to randomize not only which sound clip is played, but also its parameters. For example, you could randomize volume, pitch or panorama within a range of your choice. So, every time an audio clip is called, different pitch and volume values are randomly picked.

Do you see the possibilities? If you combine these three techniques, you can achieve an amazing degree of variability, detail and realism while using a relatively small number of audio files.
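As a concrete, if toy, illustration of those three techniques combined, here is roughly what a middleware event does for you under the hood: pick a variation per layer, skip rare layers most of the time, and randomise volume and pitch per instance. The structure and names are hypothetical, not Fmod's or Wwise's actual object model:

```python
import random

class Layer:
    def __init__(self, variations, chance=1.0, vol=(0.8, 1.0), pitch=(0.95, 1.05)):
        self.variations = variations    # alternative audio files for this layer
        self.chance = chance            # probability that this layer plays at all
        self.vol = vol                  # random volume range
        self.pitch = pitch              # random pitch range

    def trigger(self):
        if random.random() > self.chance:
            return None                 # the rare layer stays silent this time
        return {
            "file": random.choice(self.variations),
            "volume": random.uniform(*self.vol),
            "pitch": random.uniform(*self.pitch),
        }

footstep_event = [
    Layer(["step_grass_01.wav", "step_grass_02.wav", "step_grass_03.wav", "step_grass_04.wav"]),
    Layer(["cloth_rustle_01.wav", "cloth_rustle_02.wav"], vol=(0.3, 0.6)),
    Layer(["lucky_squeak.wav"], chance=0.05),   # the 5% 'reward' layer
]

def play_event(layers):
    # Every call yields a slightly different combination of files and settings
    return [hit for layer in layers if (hit := layer.trigger()) is not None]

print(play_event(footstep_event))
```

Run it a few times and no two results are identical, yet they all clearly belong to the same footstep.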

In the screenshot above, you can see the collision_ashore event, which is triggered whenever a ship collides with an island. It contains 4 different layers:

  • A wood impact. (3 variations)

  • Sand & dirt impacts with debris. (3 variations)

  • Wooden creaks (5 variations)

  • A low frequency impact.

As I said, each time the event is triggered, one of these variations within each layer will be chosen. If we then combine this with some pitch, volume and EQ randomization, we ensure that every instance of the event is unique but cohesive with the rest.

III - Connecting audio tools to in-game variables and states.

This is where the real power resides.

Remember the audio tools built into middleware that I mentioned before? In the first section, I showed you how we can use these tools the same way we use them in any DAW. Additionally, we can randomize their values, as I showed you in the second section. So here comes the big one.

We can also automate any parameter, like volume, pitch, EQ or delay, in relation to anything going on inside the game. In other words, we have a direct connection between the language of audio and the language the game speaks: the code. Think about the power that gives you. Here are some examples:

  • Apply an increasing high pass filter to the music and FX as the protagonist's health gets lower.

  • Apply a delay to cannon shots that gets longer the further away the shot is, creating a realistic depiction of how light travels faster than sound.

  • Make the tempo of a song get faster and its EQ brighter as you approach the end of the level.

  • As your sci-fi gun wears out, its sounds get more distorted and muffled. You feel so relieved when you can repair it and get all its power back.

Do you see the possibilities this opens? You can express ideas in the game's plot and mechanics with dynamic and interactive sound design! Isn't that exciting? The takeaway concept that I want you to grasp from this post is that you would never be able to do something this powerful with just linear audio. Working on games makes you think much harder about how sound coming from objects and creatures behaves, evolves and changes. 
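As a sketch of what such a connection looks like, here are the cannon-delay and low-health examples from the list above written as plain mappings from game variables to audio parameters (illustrative Python with made-up ranges; in Fmod or Wwise you would draw these as automation curves on a parameter instead):

```python
SPEED_OF_SOUND = 343.0      # m/s

def cannon_delay_seconds(distance_m):
    """The further away the shot, the later you hear it."""
    return distance_m / SPEED_OF_SOUND

def health_to_highpass_hz(health, max_health=100.0, open_hz=20.0, closed_hz=800.0):
    """The lower the health, the higher the high-pass cutoff: the mix gets thin and weak."""
    t = max(0.0, min(1.0, health / max_health))
    return closed_hz - t * (closed_hz - open_hz)

print(cannon_delay_seconds(686))     # a shot 686 m away arrives about 2 s late
print(health_to_highpass_hz(15))     # badly hurt: everything below ~683 Hz is filtered out
```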

As I said before, you are just providing the ingredients and the rules; the sound design itself only materializes when the player starts the game.

In the screenshot above, you can see how an in-game parameter, distance in this case, affects an event layer's volume, reverb send and EQ.

How to get started

If I have piqued your interest, here are some resources and information to start with.

Fmod and Wwise are currently the two main middleware solutions used by the industry. Both are free to use and not very hard to get into. You will need to re-wire your brain a bit to get used to the way they work, though. Hopefully, reading this post has given you a solid introduction to some of the concepts and tools they use.

If I had to choose one of them, Fmod may look less intimidating at first and maybe more "DAW-user friendly". Of course, there are other options, but if you just want to have a first contact, Fmod does the job.

There are loads of resources and tutorials online to learn both Fmod and Wwise, but since I think that the best way to really learn is to jump in and make something yourself, I'll leave you with something concrete to start from for each of them:

Fmod has these very nice tutorials with example projects that you can download and play with.

Wwise has official courses and certifications that you can do for free and that also include example projects.

And of course, don't hesitate to contact me if you have further questions. Thanks for reading!