Exploring Sound Design Tools: Reformer Pro

Krotos Audio's Reformer Pro is a unique tool for sound design. Here is a look at the software from a practical, everyday perspective. I will focus on how it can improve your workflow and also on how it can spice things up on the creative side when doing sound design. I encourage you to grab the demo version and follow along.


Basically, Reformer takes an input (another recording or a live microphone signal) and uses its frequency and dynamics content to trigger samples from a certain library, creating a new hybrid output.

(Image: reformer diag.png)

In other words, it allows you to "perform" audio libraries in real time like a foley artist performs props.
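To make the idea concrete, here is a minimal Python sketch of the concept: an envelope follower measures the input's level frame by frame and uses it to trigger and scale grains from a small "library". This is purely illustrative and all the names are mine; Krotos's real analysis also looks at spectral content, not just amplitude.

```python
import numpy as np

def envelope_follower(signal, frame_size=512):
    """Measure the RMS level of each successive frame of the input."""
    n_frames = len(signal) // frame_size
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    return np.sqrt((frames ** 2).mean(axis=1))

def naive_reform(input_signal, grains, frame_size=512, seed=0):
    """For each input frame, play a random grain from the 'library',
    scaled by that frame's level. A toy sketch, not Krotos's algorithm."""
    rng = np.random.default_rng(seed)
    env = envelope_follower(input_signal, frame_size)
    out = [grains[rng.integers(len(grains))] * level for level in env]
    return np.concatenate(out)
```

Quiet input frames produce silence, loud frames trigger grains, so the output follows the dynamics of whatever you feed in.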


Reformer uses a freemium pricing model and comes in two flavours, vanilla and Pro. The vanilla version is completely free but only works with certain official paid libraries. These can be purchased from the Krotos Audio Store, where you'll find a huge selection of different libraries.

The Pro version uses a paid subscription model and offers more advanced features, and it is the one I'll be covering in this post. It allows you to load up to four libraries at the same time and mix between them in real time (the free version only lets you load one library per plugin instance). More importantly, it also gives you the power to create your own Reformer libraries using your own sounds.


As you can see on the right hand side, Reformer Pro's controls are quite simple and self-explanatory. Nevertheless, here are some features worth mentioning:

Since you can load four libraries at a time, the X/Y Pad on the left hand side of the plugin will allow you to mix and mute them independently.

The Response value (bottom left corner) changes how quickly Reformer reacts to incoming audio. In general, faster responses work better with sudden transients and impacts, while slower values work better with longer sounds. If you notice undesired clicks or pops, this is the first thing you should tweak.

The Playback Speed functions as a sort of pitch control, allowing you to change the character and apparent size of the resulting signal.

Reformer Workflows

As you can see, Reformer offers an imaginative way of manipulating sounds, but how can it help in the context of everyday sound design and mix work? Here are some ideas:

  • Sync: Quickly lay down effects in sync with picture using your voice or some foley props. Covering a creature's vocalizations by hand, for example, is always very time consuming, and tasks like these are probably where Reformer shines the most.
  • Substitute: Imagine you have all the FX laid down for a certain object or character and now you have to change them all to a different material or style. You can keep the original audio, since it has the correct timing, and use it to drive a Reformer library containing the proper sounds.
  • Layer: Once you've established a first layer for a sound, you can use Reformer to add more layers that will be perfectly in sync with no extra effort.
  • Make the most of a limited set of sounds: Sometimes you find the perfect sound for something but you don't have enough iterations to cover everything. You can create a Reformer library from these few sounds and, by playing with the playback speed, response time and wet/dry controls, get the most out of them in terms of different articulations and variations.

Creating your own libraries

Reformer Pro includes an Analysis Tool that lets you create custom libraries from your own audio content. I won't go into much detail about how to do this, since the manual and this video cover the topic perfectly and the whole process is surprisingly fast and easy. I encourage you to try creating your very own library too.

Ideally, you should use several sounds that follow a sonic theme so the library stays cohesive. At the same time, these sounds need to be varied enough in terms of frequency and dynamics so you can cover as many articulations as possible.

From a technical standpoint, make sure your files are high resolution, clean, close-mic'd and normalized.

As an example, I created a ghostly, sci-fi monster voice library using sounds I made with Paulstretch (see my Paulstretch tutorial here).

Below are some of the original samples that I used to build the library. As you can hear, I tried to mix different vocalizations and frequencies:

And here is how the finished library behaves and sounds when you throw different things at it. The first sounds are the result of monster-like vocalizations, and you can hear how the library responds with different combinations of timbres. The last sound on the clip is interesting because it is the library responding to a ratchet-like clicking sound. It is always worth throwing weird material at Reformer just to see how it responds.

You can find this library, ready to use with Reformer, in the link below if you want to give it a go:
Ghostly Monster Reformer Library.

Reformer as a creative tool for sound design

In my view, Reformer is not specifically designed for creative sound design, as it lacks depth in terms of how much you can manipulate and control the final result. I miss having some control over how the algorithm creates the output signal, in the way Zynaptiq's Morph plugin offers it. But then again, I understand sonic exploration is not Reformer's main aim. Having said that, you can still achieve interesting designs by mixing together elements from different kinds of sounds.

For example, we can use a recording with some interesting transients, like a rattling noise, to drive different libraries. Here is the result with a bell:

As you can hear, Reformer takes the volume information and applies it to the bell timbre. And here is a hum plus the same rattle creating some sort of fluttering engine or mechanical insect sound. Just for fun, I also added a Doppler effect for movement:

Being able to control any sound with your own source of transients opens a huge window of possibilities. For example, you could use a bicycle wheel as an instrument to perform different movements and articulations. Pretty cool.

I'm just scratching the surface here; there are many more creative ideas I would like to try. The demo version only runs for 10 days, so make sure you have time to really dig in during that period.


Reformer is a very innovative tool that for sure makes you think in a different way about sound design. Being able to sync and swap sounds on the fly is probably where Reformer shines the most, allowing you to perform recorded libraries live as a foley artist would do. Definitely worth a try.

Shotgun Microphones Usage Indoors

Note: This is an entry I recovered from the old version of this blog and, although it is around 5 years old (!), I still think the information is relevant and interesting. So here is the original post with some grammar and punctuation fixes. Enter 2012 me:

So I have been looking into an idea that I have been hearing for a while:

"It’s not a good idea to use a shotgun microphone indoors."

Shotgun microphones

The main goal of these devices is to enhance on-axis signals and attenuate sound coming from the sides. In other words, to make the microphone as directional as possible in order to avoid unwanted noise and ambience.

To achieve this, the system cancels unwanted side audio by delaying it; the operating principle is based on phase cancellation. The earliest designs used a series of tubes of different lengths that allowed on-axis signals to arrive early but forced off-axis signals to arrive delayed. This design, created by the prolific Harry Olson, eventually evolved into the modern shotgun microphone.




Indirect signals arrive delayed. Sketch by http://randycoppinger.com/

In Olson's original design, improving directivity meant adding more and more tubes, making the microphone too big and heavy to be practical. To solve this, the design evolved into a single tube with several slots that behaved equivalently to the old additional tubes. These slots made off-axis sound waves hit the diaphragm later, so when they combined with the direct signal, cancellation occurred, effectively boosting the on-axis signal.

This system has its limitations. The tube needs to be long if we want to cancel low frequencies. For example, a typical 30 cm (12″) microphone starts behaving like a cardioid (with a rear lobe) below 1,413 Hz. If we wanted to go lower, the microphone would become too big and heavy. Like this little fellow:

Electro-Voice 643, a 2-meter beast that kept its directionality as low as 700 Hz. Call for a free home demonstration!
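Some quick back-of-the-envelope numbers help explain why. Using the usual approximation of 343 m/s for the speed of sound in air, the wavelengths at these cutoff frequencies are already substantial (the function name here is mine, just for illustration):

```python
def wavelength_m(freq_hz, speed_of_sound=343.0):
    """Wavelength in metres, assuming sound travels ~343 m/s in air."""
    return speed_of_sound / freq_hz

# The lower the frequency, the longer the wavelength the tube has to "see":
print(round(wavelength_m(1413), 2))  # 0.24
print(round(wavelength_m(700), 2))   # 0.49
print(round(wavelength_m(100), 2))   # 3.43
```

A tube working against metre-scale wavelengths quickly becomes impractical to carry, which is exactly the problem the 643 illustrates.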

On the other hand, making the microphone longer makes the on-axis angle narrower, so the more directional the microphone is, the more important correct axis alignment becomes. The phase cancellation principle also brings consequences like comb filtering and undesirable coloration when we go off axis. This can work against us when it is hard to keep the microphone in place, which is why these microphones are usually operated by hand, on cranes or on boom poles.

In this Sennheiser 416 simplified polar pattern, we can appreciate the directional high frequencies (in red) curling on the sides. The mid frequencies (in blue) show a behaviour somewhere between the highs and a typical cardioid pattern (pictured in green) with a rear lobe.


This other pattern shows an overall shotgun microphone polar pattern. The side irregularities and the rear lobe are a consequence of the interference system.

Indoor usage

The multiple reflections in a reverberant space, especially the early reflections, alter how the microphone interprets the signals that reach it. Ideally, the microphone, depending on the angle of incidence, determines whether a sound is relevant (wanted signal) or just unwanted noise. When both the signal and the noise get reflected by nearby surfaces, they enter the microphone at "unnatural" angles (if we consider the direct sound trajectory natural). The noise is then not properly cancelled, since it does not get correctly identified as noise, and, moreover, part of the useful signal will be cancelled because it is misidentified as noise.

For that reason, shotgun microphones will work best outdoors or at least in spaces with good acoustic treatment.

Another aspect to keep in mind is the rear lobe these microphones have. As we saw earlier, this lobe captures especially low frequencies, so, again, a bad-sounding room that reinforces certain low frequencies is something we want to avoid when using a shotgun microphone. With a low ceiling, we are sometimes forced to keep the microphone very close to it, so the rear lobe and the proximity effect combine and can make the microphone sound nasty. This is not a problem on a professional movie set, where you have high ceilings and good acoustics; in fact, shotgun microphones are a popular choice in these places.

Lastly, the shotgun's size can be problematic in small spaces, especially when we need precision to stay on axis.

The alternative

So, for indoors, a better option would be a pencil hypercardioid microphone. These are much smaller, easier to handle in tight spaces and more forgiving of axis placement. Moreover, they don't have an interference tube, so we won't get unwanted colorations from the room reflections.

It is worth noting that these microphones still have a rear lobe that affects even the mid-high frequencies, though it is not as pronounced.

So hypercardioid pencil microphones are a great choice for indoor recording. Compared to shotguns, we are basically trading directionality for a better frequency response and a smaller size.

Exploring Sound Design Tools: Paulstretch

Have you heard this?

That video was, years ago, my introduction to "Paul's Extreme Sound Stretch", or just Paulstretch for short: a tool created by Paul Nasca that allows you to stretch audio to ridiculously cosmic lengths.

Some years ago it was fashionable to grab almost anything, from pop music to audio snippets from The Simpsons, stretch it 800% and upload it to YouTube. When the dust settled, we were left with an amazing free tool that has been extensively used by musicians and sound designers. Let's see what it can do.

I encourage you to download Paulstretch and follow along:

Windows - (Source)
Mac - (Source)

The stretch engine

The user interface may seem a bit cryptic at first glance but it is actually fairly simple to use. Instead of going through every section one by one, I will show how different settings affect your sounds with actual examples. For a more exhaustive view, you can read the official documentation and this tutorial before diving in.

As you can see above, there are four main tabs on the main window: Parameters, Process, Binaural beats and Write to file. I'm just going to focus on the most useful and interesting settings from the first two tabs.

Under Parameters, you can find the most basic tools to stretch your sounds. The screenshot shows the default parameters when you open the software and import some audio. 8x is the default stretch value, which may explain why so many of those YouTube videos used an 800% stretch.

The stretch value lets you set how much you want to stretch your sound, with three modes: Stretch and HyperStretch make sounds longer (be careful with HyperStretch, because you can create crazily long files with it), while Shorten does the opposite. If you want to make a sound infinite, the "freeze" button just to the right of the play button will hold the sound in place, creating an endless soundscape.

Below the stretch slider, you can see the window size in samples. This parameter can have quite a profound impact on the final result: Paulstretch breaks the audio file into multiple slices, and this parameter changes the size of those slices, affecting the character of the resulting sound, as you will hear below.
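Since the window size is given in samples, its duration in seconds depends on your file's sample rate. A quick conversion, assuming a 44.1 kHz file (the function name is mine):

```python
def window_seconds(window_samples, sample_rate=44100):
    """How long one analysis slice lasts, for a given file sample rate."""
    return window_samples / sample_rate

print(round(window_seconds(7324), 2))   # 0.17 -> the default, short slices
print(round(window_seconds(66150), 2))  # 1.5  -> slices over a second long
```

Files at other sample rates will give different durations for the same window size, so keep your material's rate in mind when comparing settings.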

Let's explore how all these settings will affect different audio samples. First, here is a recording of my voice on the left and the stretched version with default values on the right hand side:

Cool. As you can see in the file name above, 8x is the stretch value and 7.324K is the window size in samples. Notice that the file Paulstretch created cuts off abruptly at the end; this can be fixed by using lower window size values to create a smoother fade out. This is the classic Paulstretch sound: kind of dreamy, clean and with no noticeable artefacts. You will also notice that, although the original is mono, the stretched version feels more open and stereo.

Just for fun, let's see how the Pro Tools and iZotope RX 6 stretch algorithms deal with an 8x time stretch:

This kind of "artefacty" sound is interesting, useful and even beautiful in its own way. But in terms of cleanly stretching a sound without massively changing its timbre, Paulstretch is clearly the way to go.

Let's now play with the window size value and see how it affects the result. Intermediate values seem to be the cleanest: we are just extending the sound in the most neutral way possible. Lower values (under 3K approx.) have poor frequency resolution, introducing all sorts of artefacts and a flanger-ish kind of character. Here are a couple of examples of applying low values to the same vocal sample:

Using a different recording, we get a whole new assortment of artefacts. Below, you can see the original recording on the left, the processed version with the default, dreamy settings in the centre and, lastly, on the right, a version with a low window value that seems to summon Beelzebub himself. Awesome.

On the other hand, higher values (over 15K approx.) are better at frequency resolution, but time resolution suffers. Since the chunks are bigger, the frequency response is more accurate and faithful to the original sound, but in terms of time everything is smeared into a uniform texture, with timbres and characters from different sections of the original sample blending together. So it doesn't really make sense to use high values with short, homogeneous sounds; longer and more heterogeneous sounds will yield more interesting results, since different frequencies will be mixed together.
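This trade-off follows directly from how FFT analysis works: the width of one frequency bin is the sample rate divided by the window size, so bigger windows mean finer pitch resolution but longer, more smeared slices. A quick illustration, assuming a 44.1 kHz file (the function name is my own):

```python
def bin_width_hz(window_samples, sample_rate=44100):
    """Width of one analysis frequency bin: bigger windows resolve pitch
    more finely, but each slice covers more time."""
    return sample_rate / window_samples

print(round(bin_width_hz(3000), 1))   # 14.7 -> coarse pitch resolution, artefacts
print(round(bin_width_hz(66000), 2))  # 0.67 -> fine pitch, heavy time smearing
```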

You can hear an example with speech below. Again: original on the left, dreamy default values in the centre and high values on the right. You can still make out syllables and words with a lower window value (centre sample), but with a 66K value the slices are, in this case, 2 seconds long, so the different vocal sounds blend into an unintelligible texture.

Basically, high window values are great for creating smearing textures from heterogeneous audio. Here is another example to help you visualize what the window size does.

On the left, you have a little piece of music with two very different sections: a music box and a drum and bass loop. Each of them is around 3-4 seconds long. If we use a moderate window size (centre sample below) we will hear a music box texture and then a drum texture. The different music notes are blended together but we can still have a sense of the overall harmony. On the third sample (right) we use a window size that yields a slice bigger than 4 seconds, resulting in a blended texture of both the music box and the drums.

You can choose not only the window size but also the window type: essentially, the shape of the slices. Rectangular and Hamming windows deal better with frequency but introduce more noise and distortion; Blackman types produce much less noise but go nuts with the frequency response. See some examples below:

Adding flavour

Jumping now to the Process tab, here we have several very powerful settings to do sound design with.

Harmonics removes all frequencies from the sample except a fundamental frequency and a number of harmonics that you can set; you can also change the bandwidth of these harmonics. A lower number of harmonics and a lower bandwidth will yield more tonal results, since the fundamental frequency will dominate the sound, while higher values will stay closer to the original source, retaining more frequency and noise content.
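Conceptually, this is like masking a spectrum so that only narrow bands around the fundamental and its harmonics survive. Here is a rough Python sketch of that idea; this is my own simplification, not Paulstretch's actual code:

```python
import numpy as np

def keep_harmonics(signal, sample_rate, f0, n_harmonics, bandwidth_hz):
    """Zero every FFT bin except narrow bands around f0 and its harmonics."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    mask = np.zeros(len(freqs), dtype=bool)
    for k in range(1, n_harmonics + 1):
        # keep a band of width bandwidth_hz centred on each harmonic
        mask |= np.abs(freqs - k * f0) <= bandwidth_hz / 2
    return np.fft.irfft(spectrum * mask, n=len(signal))
```

Narrow bandwidths leave almost pure tones; widening them lets more of the original material back through, which matches what you hear in the examples below.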

In the samples below, the first two are the original recording on the left and the stretched version with no harmonic processing on the right. I kept the window size fairly low so we get some interesting frequency warping there. Further below, you can hear several versions with harmonic processing applied and increasingly higher bandwidths. Hear how the first one is almost pure tone and then more and more harmonic and noise content creeps in. It's surprising how different they are from each other.

Definitely very interesting for creating drones and soundscapes. Paulstretch behaves here almost like a synthesizer; it seems to create frequencies that were not there before. For example:

Also worth mentioning are the pitch controls. Pitch shift simply tunes the pitch like any other pitch shift plugin. Frequency shift creates a dissonant effect by shifting all frequencies by a fixed amount, which is very cool for scary and horror SFX.

The octave mixer creates copies of your sound and shifts them to certain octaves that you can blend in. Great for calming vibes. See examples below:


Lastly, the spread value increases the bandwidth of each harmonic, which results in a progressive blend of white noise into the signal as you push the setting further. The cool thing is that the white noise follows the envelope of your sound, which could be used to create ghostly or alien speech. Here are some examples with no spread on the left and spread applied on the right:

And that's it from me! I hope you now have a good idea of what Paulstretch can do. I see a lot of potential for creating drones, ghostly horror soundscapes, sci-fi sounds and cool effects for the human voice. Oh, and just stretching things up to 31 billion years is nice too.

Mini Library

Here is a mini library I've put together with some of the example sounds, some extended versions and a bunch of new ones. It includes creatures, drones, voices and alien winds. Feel free to use them in your projects.

Soundly: An unofficial user's guide

The current version of this guide covers Soundly 2.0. I will do my best to keep it updated.

Disclaimer: I decided to make this guide because there are almost no resources online for Soundly and I thought some people may find it useful. It is not intended to be a strictly objective guide; I've included some of my own opinions and suggestions. I have no working or commercial relationship with the creators of Soundly.

Soundly is an audio library management tool that lets you organise and tag your sound effects and add them to your projects in a fast and convenient way. It also includes a proprietary online library that you can access alongside your own local files. I'll be using the Mac version and shortcuts; any Mac shortcut using CMD can be used on Windows with CTRL.

Soundly Overview

Pricing and Accounts
Soundly uses a freemium model. Using the app is free with some limitations: you can only import up to 2,500 sounds from your local drive, and you only have access to a selection of sounds from the cloud-based Soundly library.

The Pro version allows you to import unlimited local files and gives you access to the Soundly online library in its entirety. Additionally, this version lets you access other third-party online libraries within the app. Some of these are free, like the whole catalogue from freesound.org, while others are paid libraries that you can buy via asoundeffect.com.

As you can see below, there is also a 24-hour pass option that gives you all the Pro features in case you just need them for a short period of time.

There is also a multi-user option for companies that provides the Pro features for multiple people and includes shared cloud storage, so everyone on the team can access the same libraries.

Audio libraries sources

Your sounds can come from three fundamental sources:

  • Local: Files that live on your own computer.
  • Cloud: Files that are always online and can be accessed from anywhere.
  • Network: Files on your local network (though not necessarily on your computer).

Let's see how you can manage files from these three sources.

Importing your local libraries

To import your libraries, just drag and drop any audio files or folders, making sure you drop them on the "Local" blue box. The best way to keep things organized is to have each of your libraries in its own folder and drop them all into Soundly, rather than importing the parent folder that contains them.

This way, Soundly will list all your libraries one by one and you will be able to select which ones you want to include in your searches. You will also be able to browse through subfolders within each library.

If you bring in loose sound files instead of folders, they will go into a "Loose sounds" folder that will be automatically created.

Managing your own sounds on the cloud

For now, this option is only available for multi-user accounts. When you drag and drop files or folders into Soundly, you will see that one of the blue boxes says "Cloud Storage". Once these files are uploaded, you will be able to access them from anywhere.

Managing third party cloud libraries

Soundly allows you to access some online cloud libraries without having the files locally. If you are a Pro user, you will have access to all the libraries listed below. Free users only get a selection of sounds from The Soundly Library.

  • The Soundly Library: A general-purpose library built by sound designer Christian Schaanning. Pro users can access the whole set of sounds (currently around 10,400 files), while free users get a selection of around 300. This is a quite complete and well-tagged library that will probably be enough for an editor or a student, and a nice additional bonus resource for sound designers.
  • Freesound.org library: Soundly lets you access the huge catalogue hosted on freesound.org (around 300K sounds!). It is very handy to have this vast amount of material available without jumping to your browser to search and download sounds by hand. Having said that, freesound.org doesn't always offer the highest quality, and the content is under various Creative Commons licenses. Soundly will show all these licenses, and you can even set things up so that it only shows Creative Commons 0 sounds, which is a great way to make sure you are only using public domain material. To do this, just right-click on the freesound folder (under "Cloud") and select Creative Commons 0 Only.
  • The Free Firearm/Medieval Weapon Library: These libraries were produced by Still North Media and financed via Kickstarter. You need Soundly Pro to access them via the cloud, but free users can always download them and access them locally.
  • Paid asoundeffect.com Libraries: Pro users can browse, purchase and access third-party libraries from the vast asoundeffect.com catalogue and reach them anywhere via the cloud section.

So, as you can see, Soundly's cloud capabilities are really about convenience: they let you search all these cloud-based sources plus your local files in a single place, saving time and improving your workflow.

Using shared network databases

The 2.0 version introduced this feature, aimed at companies or just groups of people using separate local Soundly clients. To use it, the computers need to be connected via a local network (LAN). Keep in mind that the database and audio files can live on one of the users' computers, or on a server or machine-room computer that all the other users access.

So, why use this instead of local copies for each user? This option gives you a unified, centralised database that any user can access, edit and improve upon, instead of a fragmented database that is different for every member of the team. Additionally, this database updates its metadata in real time, without users needing to restart Soundly or re-import the audio libraries when someone else makes a metadata change.

To set up a database, you will need to create it with Database > New shared network database. You will then see the following screen:

As the text above says, the database is stored in a .sdb file, and this file should live on the same network disk as the audio files that form the library. Click on the folder icon to name the database and choose where you want to save the database file. As mentioned, it should be saved in your library's root folder, on the same disk as the audio files. You will then find these three options:

  • Duplicate local database: Adds your local library folders to the otherwise empty network database.
  • Password protect: Lets you add a password that any user trying to connect to the database will need to know.
  • Restrict editing: If selected, unauthorised users won't be able to edit the database content or metadata.

Once your database is created, any user can access it by going to Database > Connect to shared network database. As mentioned before, the user needs access, via the local network, to the computer or hard drive where the files are. Once connected, the user can search within the library and see any changes, like new folders, files or metadata, made by any other user with editing permission.

Using Metadata

Having solid metadata is very important if you want to find the sound you need easily. There is no point in having a great-sounding library if the metadata is vague or incorrect.

Other audio library managers offer far more options to manage metadata but, in my opinion, Soundly provides enough features to keep your files well labeled and easy to find. I think most users won't need much more.

When browsing local files you will see the following fields: 

  • Name: The name of the audio file.
  • Time: The duration of the file.
  • Format: Sampling frequency + Bit depth when browsing wav files. In case of other formats, it will show the name of the format.
  • Channels: The number of channels. Very handy when you just want to see surround files, for example.
  • Library: The name of the parent folder containing the audio file. Ideally, it should be the name of the library. 
  • Description: Additional information about the file to make it easier to find.

If you want to change which fields are visible, right-click on any of them to add or remove fields. The only fields editable within Soundly are "Name" and "Description". For my own libraries, I use the name field to state what the sound is in terms of materials, moods and/or actions. I then use the description field for any additional information that may help locate the sound in future searches, even if that information is very different from the sound's original purpose.


  • CMD + E: Edit the name of the file.
  • CMD + T: Edit the description of the file.

If you select more than one file and press CMD + T, you will access the following dialogue window (see picture on the right hand side).

This window will allow you to modify the names, descriptions or originators of all the selected files at the same time.

The originator is an additional field that appears when editing a file's description and simply indicates where the original metadata comes from.

These are the commands available in this window. Keep in mind that any of them can be applied to the name, description or originator field:

  • Add to start: Adds text to the start of the field. You may need to end with a space to keep things clear.
  • Add to end: Adds text to the end of the field. You may need to start with a space to keep things clear.
  • Replace with: Replaces occurrences of a given string with a new one of your choice. Super useful for removing things like underscores and replacing them with spaces, as you will see below.
  • Replace whole: Replaces the whole field with new text.
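In plain Python terms, the four commands behave like simple string operations (the function names are mine, purely for illustration):

```python
def add_to_start(field, text):
    return text + field

def add_to_end(field, text):
    return field + text

def replace_with(field, old, new):
    return field.replace(old, new)

def replace_whole(field, text):
    return text

# A typical cleanup: swap underscores for spaces across a batch of names.
name = "metal_impact_heavy_01"
print(replace_with(name, "_", " "))        # metal impact heavy 01
print(add_to_end("door slam", ", wooden")) # door slam, wooden
```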

Here are some general tips and tricks to keep in mind when editing metadata. Not all of them are necessarily related to Soundly and some are definitely a matter of personal preference but they may be useful to you:

  • When possible, use wav and interleaved files. Metadata on some other file types will be saved in Soundly but not in the file itself, so it will be lost if the files are moved or renamed. In my experience, sticking to wav and interleaved is the safest option.
  • Searches are not case-sensitive, but I like to keep everything lower case for the sake of simplicity.
  • You can use dashes "-", underscores "_" and commas "," to separate words (like sci-fi). Soundly will treat them as spaces.
  • Self-contained words: Searching for "cars" won't match anything labeled "car", but searching for "car" will match both "car" and "cars". So, in general, it is better to search for the shortest form of a word (usually the singular). Another example is "plane": it will match "plane", "planes" and "airplane", but also "planer" and "planet".
  • Establish a set of words to describe certain sounds and be consistent with them, especially when two options are possible, like "impact" and "hit". Choose one and stick with it.


Searching is very straightforward but I wanted to share some tips:

  • You can use the minus symbol "-" to remove results from your search. For example, if you search for "rock" you may also end up with results referring to rock music. You can then use "rock -music" and hopefully you will filter those unwanted results out.
  • You can see your search history (and go back to any previous search) by clicking on the magnifying glass icon on the left of the search box or by using the arrows on the right of the search box.
  • When you do a search, Soundly will give you suggestions based on global sound effect searches.
  • If you want to do a search in just a group of libraries or folders, you can uncheck the rest of the folders and leave only those you want to search in.
  • If you want to search in just one library, you can use the "Search in this library" command that you will see by right-clicking on any of your libraries.
  • You can save a particular selection of folders to be used while searching. To set this up, first select the folders you would like to include and then click on the three dots on the right hand side of the SOUNDS tab (see picture below). In that dialogue, you can name and save your selection or delete previous selections. You can then access these selections using the shortcut CMD + Number.
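
The minus-exclusion and substring behaviours described above can be sketched in a few lines of Python. This is my own toy model of how such a search could work, not Soundly's actual matching code, but it captures both the "-music" exclusion trick and the fact that "rock" also matches "rocks".

```python
# Toy model of substring-based, case-insensitive search with "-word" exclusion.
# Not Soundly's real implementation; just an illustration of the behaviour.

def matches(description: str, query: str) -> bool:
    desc = description.lower()
    terms = query.lower().split()
    include = [t for t in terms if not t.startswith("-")]
    exclude = [t[1:] for t in terms if t.startswith("-")]
    # Every include term must appear as a substring; no exclude term may.
    return all(t in desc for t in include) and not any(t in desc for t in exclude)

files = ["rock slide debris", "rock music loop", "rocks falling"]
hits = [f for f in files if matches(f, "rock -music")]
# hits keeps "rock slide debris" and "rocks falling"; the music loop is filtered out
```

Note how "rocks falling" still matches a search for "rock", which is why searching for the shortest form of a word pays off.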

Audio Operations

There are several audio operations that you can do to your sounds within Soundly. Keep in mind that you won't be affecting the original audio file when using these operations. Let's have a look at them:

  • You can change the sound's pitch, and this will affect its duration too, so this is an old school pitch change where pitch and length are linked. You can do this using the big slider on the bottom left hand side. There is also an amount setting (2X, 4X or 8X) that you can change to make the range of the change bigger. Note that if you change the pitch and then move the sound to another app, you will be moving the pitch shifted version of the file.
  • There is a volume slider next to the pitch control. This will just change your auditioning level and will not affect the sound's level when importing it into other software.
  • You can change the waveform size or zoom with a slider that is on the right hand side of the waveform display. Again, this will just change how you view the waveform, not the actual level of the sound.
  • Use CMD + R to reverse the audio file. This change will be carried forward to other software.
  • Use CMD + I to invert the channels on a stereo audio file. This change will be carried forward to other software.
  • Use CMD + N to normalize the audio file. This change will be carried forward to other software.
  • Use SHIFT + CMD + M to sum a stereo file to mono. This change will be carried forward to other software.
  • Use SHIFT + CMD + LEFT ARROW to use the left channel only on a stereo file. This change will be carried forward to other software.
  • Use SHIFT + CMD + RIGHT ARROW to use the right channel only on a stereo file. This change will be carried forward to other software.

Exporting sounds to other apps

You can export sounds to any audio or video editing software. To do this just select a section of a sound (or the entire file) and drag and drop.

You can also work in Dock Mode, a compact mode that allows you to see Soundly and your editing software at the same time. To use this mode, just click on "Dock Mode" on the top right hand side of Soundly's window.

Using Dock Mode

Using Soundly with Pro Tools

To spot sounds directly to Pro Tools you can either use "S" to spot to the cursor or "B" to spot to the Pro Tools bin. This is the best way to ensure that the sounds end up in the session's audio files folder. When spotting to the cursor, make sure you have selected the proper track beforehand, since Soundly will spot the file to the selected track in Pro Tools.

When spotting a stereo file to a mono track, Soundly will first bring in the left channel and then the right channel on top of it, so you will end up with just the right channel, which is fine if you just want a mono effect. If you want to use the other channel, you can either undo in Pro Tools (this will undo bringing in the right channel, so you will be left with the left channel) or use the commands explained above to bring in just the left channel or a mono sum of both channels.

If your selection doesn't span the whole file and you spot to Pro Tools, you will just bring in that particular selection, but it will include handles so you will be able to extend the clip if needed.

Using Playlists

Playlists are a great way of saving your favourite sounds per category (Whooshes, Grabs, Night) or per client or show you are working on. In the picture on the right, you can see some of the playlists I'm currently using to give you an idea of categories you could use.

To create a new playlist, just click on "New Playlist" just below your libraries. Name your playlist, and you can start adding sounds by right-clicking on any sound and selecting "Add to playlist" followed by your desired playlist.

The Starred playlist is a special playlist that comes by default and that you can use to tag your favourite sounds.

You can also share any playlist with other Soundly users. This is really handy for teams working on the same set of projects. Right-click on any playlist and select share. You can then add users using their email addresses. You can also choose to let them add files to the playlist and even give them permission to manage it.

Settings Page

Access the settings page by clicking on "SETTINGS" on the top right hand side, going to Soundly > Settings or using the shortcut CMD + [COMMA].

  • Output Device: Your Audio output device.
  • Auto play: When browsing, selecting a new audio file will play it automatically.
  • Loop play: Files will be played on loop. Shortcut: SHIFT + CMD + L
  • Hide on drag out: Soundly window will hide when dragging and dropping a sound.
  • Auto resize search result column: When searching, columns will automatically resize to fit the content.
  • Window always on top: Soundly window will always stay on top, useful when using Dock Mode.
  • File name on export: You can choose whether the file name or the description will be carried over when exporting to other software.
  • Output format: The format that exported files will have. It can be the same as the original file ("Same as input") or a custom one.
  • Reset local database: Deletes the local database.
  • Offline License: This option allows you to export and import an offline license if you need to work on a computer not connected to the internet.
  • Audio storage location: If you work with video editing software, Soundly will not automatically save the audio to the project audio folder when exporting. With this feature, you can manually set that path to the project's audio folder, and Soundly will save the files you export there.
  • File transfer quality: If you have a slow internet connection you can use the low quality settings to load the files faster.
  • Update: Checks if you are using the latest version of Soundly.
  • Networking: Use this option if you are accessing the internet through a proxy server.
  • ReWire: ReWire lets you audition Soundly through a track in your DAW. Especially useful if you use Pro Tools HD.

Soundly shortcuts

  • CMD + E: Edit the name of the file.
  • CMD + T: Edit the description of the file.
  • CMD + Number: Activates a saved folder selection preset.
  • CMD + [COMMA]: Preferences page.
  • CMD + R: Reverse an audio file.
  • CMD + I: Invert the channels on a stereo audio file.
  • CMD + N: Normalize the audio file.
  • SHIFT + CMD + M: Sum a stereo file to mono.
  • SHIFT + CMD + LEFT ARROW: Use the left channel only on a stereo file.
  • SHIFT + CMD + RIGHT ARROW: Use the right channel only on a stereo file.
  • SHIFT + CMD + L: Toggle loop play on and off.

Features for the future

This is just a compilation of ideas from myself and other colleagues that we would love to see in Soundly in the future. If there is anything that you think is missing here, leave a comment and I'll add it to the list. As you can see, some features have already been added!

  • Being able to use "" to search for concrete words but not words containing them. So, if you search for "car", you won't find cartoon or cardboard.
  • File descriptions get updated in real time between different users accessing the same files. Added in version 2.0
  • Being able to edit more than one rule at once for a certain field. For example, we might want to add "speed" to the end of the description and also replace all instances of "vehicle" with "car" within the description. This would be very handy when renaming big libraries.
  • Search statistics. Being able to see what you search the most would be awesome.
  • Multi-edit is great because of the replacing tools and sometimes I'd love to be able to do it on just one file.
  • Sort playlists in folders and share those folders.
  • More boolean logic for searches, like looking for anything with the words "dog" or "wolf".
  • More metadata fields or even custom metadata fields.
  • Ability to spot to Pro Tools with handles. Added in version 2.0
  • A button allowing you to "Go back" to previous page. Added in version 2.0
  • Being able to do conditional formatting on metadata. For example, being able to tell Soundly: within this selection, if you see the term "wind", add "ambience".
  • Add local tags tied to your account to cloud libraries.
  • Spell correction in search bar. Added in version 1.2
  • Being able to add tags in a particular time along a file's waveform.
  • An option when right-clicking on a Freesound track to open that sound's page in a browser.

An Introduction to Game Audio

Talking to a fellow sound designer about game audio, I realised that he wasn't aware of some of the differences between working on audio for linear media (film, animation, TV, etc) and for interactive media (video games).

So this post is kind of my answer to that: a brief introduction to the creative depth of video game sound design. It is aimed at audio people who are maybe not very familiar with the possibilities this world offers or who just want to see how it differs from, say, working on film sound design.

Of course there are many differences between linear and interactive sound design, but perhaps the most fundamental, and the most important for somebody new to interactive sound design, is the concept of middleware. In this post, I’ll aim to give beginners a first look at this unfamiliar tool.

I'll use screenshots from Unearned Bounty, a project I've been working on for around a year now. Click on them to enlarge. This game runs with Unity as the engine and Fmod as the audio middleware.

Linear Sound vs Interactive Sound

Video games are an interactive medium and this is going to influence how you approach the sound design work. In traditional linear media, you absolutely control what happens in your timeline. You can be sure that, once you finish your job, every time anyone presses play that person is going to have the same audio experience you originally intended, provided that their monitoring system is faithful.

Think about that. You can spend hours and hours perfecting the mix to match a scene since you know it is always going to look the same and it will always be played in the same context. Far away explosion? Let's drop a distant explosion there or maybe make a closer FX sound further away. No problem.

In the case of interactive media, this won't always be the case. Any given sound effect could be affected by game world variables, states and context. Let me elaborate on those three factors using the example of the explosion again. In the linear case, you can design the perfect explosion for the shot, because it is always going to be the same. Let's see in the case of a game:

  • The player could be just next to the explosion or miles away. In this case, the distance would be a variable that is going to affect how the explosion is heard. Maybe the EQ, reverb or compression should be different depending on this.
  • At the same time, you probably don't want the sound effect to be exactly the same if it comes from an ally instead of the player. In that case, you'd prefer to use a simpler, less detailed SFX. One reason for this could be that you want to enhance the sound of what the player does so her actions feel more clear and powerful. In this case, who the effect belongs to would be a state.
  • Lastly, it is easier to make something sound good when you always know the context. In video games, you may not always know or control which sounds will play together. This forces you to play-test to make sure that sounds work not only in isolation but also together, and in the proportions the player is usually going to hear them. Different play styles will also alter these proportions. So, following our example, your explosion may sound awesome, but maybe dialogue is usually playing at the same time and getting lost in the mix, and you'd need to account for that.

After seeing this, linear sound design may feel more straightforward, almost easy in comparison. Well, not really. I'll explain with an analogy. Working on linear projects, particularly movies, is like writing a book. You can really focus on developing the characters, plot and style. You can keep improving the text and making rewrites until you are completely satisfied. Once it is done, your work is always going to deliver the same experience to anyone who reads the book.

Interactive media, on the other hand, is closer to being a game master preparing a D&D adventure for your friends. You may go into a lot of detail with the plot, characters and setting but, as any experienced GM knows, players will be somewhat unpredictable. They will spend an annoying amount of time exploring some place that you didn't give enough attention to and then they will circumnavigate the epic boss fight with some creative rule bending or a clever outside-the-box idea.

So, as you can see, being a book writer or working in linear sound design gives you the luxury of really focusing on the details you want, since the consumer experience and interaction with your creation is going to be closed and predictable. In both D&D and interactive media, you are not really giving the final experience to the players, you are just providing the ingredients and the rules that will create a unique experience every time.

Creating those ingredients and rules is our job. Let's explore the tools that will help us with this epic task.

Audio Middleware and the Audio Event

Here you can see code being scary.

Games, or any software for that matter, are built from a series of instructions that we call code. This code manages and keeps track of everything that makes a game run: graphics, internal logic, connecting to other computers through the internet and, of course, audio.

The simplest way of connecting a game with some audio files is just calling them from the code whenever we need them. Let's think about an FPS game. We would need a sound every time the player shoots her shotgun. So, in layman's terms, the code would say something like: "every time the player clicks her mouse to shoot, please play this shotgun.wav file that you will find in this particular folder". And we may not even need to say please, since computers don't usually care about such things.
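
In code, that rudimentary approach looks something like the sketch below. None of these names belong to a real engine API; this is a hypothetical illustration of the game code pointing straight at a file on disk.

```python
# Hypothetical sketch of the most rudimentary implementation: the game
# code points straight at an audio file. Names are illustrative only.

class AudioPlayer:
    def play(self, path: str) -> str:
        # A real engine would decode and play the file here;
        # we just report what would happen.
        return f"playing {path}"

def on_mouse_click(audio: AudioPlayer) -> str:
    # "every time the player clicks her mouse to shoot,
    #  please play this shotgun.wav file from this folder"
    return audio.play("sfx/shotgun.wav")

print(on_mouse_click(AudioPlayer()))
```

The hard-coded path is exactly the limitation discussed next: changing anything about the sound means going back into the game code.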

This is how all games were made in the past and this method is still pretty much in use. It is very straightforward but also very limited. Incorporating the audio files into the game is a process that is usually called implementation, and this is its most rudimentary form. The thing is, code can be a little scary at first, especially for us audio people who are not very familiar with it. Of course, we can learn it, and it is an awesome tool if you plan to work in the video game industry, but at the end of the day we want to be focusing on our craft.

Middleware was created to help us with this and fill the gap between the game code and the audio. It serves as a middle man, hence the name, allowing sound designers to just focus on the sound design itself. In our previous example, the code was pointing to specific audio files that were needed at any given moment. Middleware does essentially the same thing but puts an intermediary in the middle of the process. This intermediary is what we call an audio event.

An example of audio events managing the behaviour of the pirate ships.

An audio event is the main functional unit that the code will call whenever it needs a sound. It could be a gunshot, a forest ambience or a line of dialogue. It could contain a single sound file or dozens of them. Anytime something makes a sound, it is triggering an event. The key thing is that, once the code is pointing to an event, we have control. We can make it sound the way we want; we are in our territory.
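
A minimal sketch of that indirection, again with made-up names rather than FMOD's actual API: the game code now triggers an event by name, and what that event actually plays belongs to the sound designer.

```python
# Hedged sketch of the event indirection. The game code only knows the
# event name; the designer owns the list of clips behind it.

import random

class AudioEvent:
    def __init__(self, name: str, clips: list[str]):
        self.name = name
        self.clips = clips  # owned by the sound designer, not the game code

    def trigger(self) -> str:
        # The event decides which file actually plays
        return random.choice(self.clips)

events = {
    "shotgun_fire": AudioEvent("shotgun_fire",
                               ["shotgun_01.wav", "shotgun_02.wav"]),
}

# Game code side: trigger by name, no file paths in sight
clip = events["shotgun_fire"].trigger()
```

Swapping, adding or removing clips now never touches the game code, which is the whole point of the abstraction.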

And this is because middleware uses tools that we audio people are familiar with. We'll find tracks, faders, EQs and compressors. Keep in mind that these tools are still essentially code; middleware is just offering us the convenience of having them in a comfortable and familiar environment. It brings the DAW experience to the game development realm.

Audio middleware can be complex and powerful, and I'd need a whole series of posts to tell you what it can do and how. So, for now, I'm just going to go through three main features that should give you an idea of what it can offer.

I - Conventional Audio Tools within Middleware

Middleware offers a familiar environment with tracks, timelines and tools similar to the ones found in your DAW. Things like EQ, dynamics, pitch shifters or flangers are common.

This gives you the ability to tweak your audio assets without needing to go back and forth between different programs. You are probably still going to start from your DAW and build the base sounds there using conventional plugins, but being able to also do processing within the middleware gives you flexibility and, more importantly, a great amount of power, as you'll see later.

II - Dealing with Repetition and Variability

The player may perform some actions over and over again. For example, think about footsteps. You generally don't want to just play the same footstep sound every single time. Even having a set of, say, 4 different footsteps is going to feel repetitive eventually. This repetitiveness is something that older games suffer from and that modern games generally try to avoid. The original 1998 Half-Life, for example, uses a set of 4 footstep sounds per surface. Having said that, it may still be used when looking for a nostalgic or retro flavour, the same way pixel art is still used.

Middleware offers us tools to make several instances of the same audio event sound cohesive but never exactly identical. The most important of these tools are variations, layering and parameter randomization.

The simplest approach to avoid repetition is just recording or designing several variations of the same effect and letting the middleware choose randomly between them every time the event is triggered. If you think about it, this imitates how reality behaves. A sword impact or a footstep is not going to sound exactly the same every single time, even if you really try to use the same amount of force and hit in the same place.

You could also break up a sound into different components or layers. For example, a gunshot could be divided into a shot impact, its tail and the bullet shell hitting the ground. Each of these layers could also have its own variations. So now, every time the player shoots, the middleware is going to randomly choose an impact, a tail and a bullet shell sound, creating a unique combination.

Another cool thing to do is to have an event with a special layer that is triggered very rarely. By default, every layer on an event has a 100% probability of being heard, but you can tweak this value to make it more infrequent. Imagine, for example, a power-up sound that has an exciting extra sound effect that is only played 5% of the times the event is called. This is a way to spice things up and also reward players who spend more time playing.

An additional way of adding variability would be to randomize not only which sound clip will be played, but also their parameters. For example, you could randomize volume, pitch or panorama within a certain range of your choice. So, every time an audio clip is called, a different pitch and volume value are going to be randomly picked.
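
The three techniques above can be combined in one sketch: per-layer variations, a rare layer with its own trigger probability, and randomized pitch and volume per instance. All names and values here are illustrative, not any middleware's real API.

```python
# Illustrative sketch of variations + layering + parameter randomization.
# Each call to trigger_event produces a unique but cohesive combination.

import random

def trigger_event(layers):
    instance = []
    for layer in layers:
        # Rare layers are skipped most of the time (default probability 1.0)
        if random.random() > layer.get("probability", 1.0):
            continue
        clip = random.choice(layer["variations"])            # pick one variation
        pitch = random.uniform(*layer.get("pitch_range", (1.0, 1.0)))
        volume = random.uniform(*layer.get("volume_range", (1.0, 1.0)))
        instance.append((clip, round(pitch, 2), round(volume, 2)))
    return instance

powerup = [
    {"variations": ["powerup_01.wav", "powerup_02.wav", "powerup_03.wav"],
     "pitch_range": (0.95, 1.05), "volume_range": (0.8, 1.0)},
    {"variations": ["powerup_sparkle.wav"], "probability": 0.05},  # rare extra
]

print(trigger_event(powerup))
```

Three variations, a 5% sparkle layer and modest pitch/volume ranges already yield a practically endless pool of distinct-sounding instances from four audio files.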

Do you see the possibilities? If you combine these three techniques, you can achieve an amazing degree of variability, detail and realism while using a relatively small amount of audio files.

See above the collision_ashore event that is triggered whenever a ship collides with an island. It contains 4 different layers: 

  • A wood impact.  (3 variations)
  • Sand & dirt impacts with debris. (3 variations)
  • Wooden creaks (5 variations)
  • A low frequency impact.

As I said, each time the event is triggered, one of the variations within each layer will be chosen. If we then combine this with some pitch, volume and EQ randomization, we ensure that every instance of the event will be unique but cohesive with the rest.

III - Connecting audio tools to in-game variables and states.

This is where the real power resides.

Remember the audio tools built into middleware that I mentioned before? In the first section I showed you how we can use these audio tools the same way we use them in any DAW. Additionally, we can also randomize their values, as I showed you in the second section. So here comes the big one.

We can also automate any parameter like volume, pitch, EQ or delay in relation to anything going on inside the game. In other words, we will have a direct connection between the language of audio and the language the game speaks, the code. Think about the power that this gives you. Here are some examples:

  • Apply an increasing high pass filter to the music and FX as the protagonist health gets lower.
  • Apply a delay to cannon shots that gets longer the further away the shot is, creating a realistic depiction of how light travels faster than sound.
  • Make the tempo of a song get faster and its EQ brighter as you approach the end of the level.
  • As your sci-fi gun wears down, its sounds get more distorted and muffled. You feel so relieved when you can repair it and get all its power back.
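
Two of the mappings above can be worked through with simple formulas. The function names and the 800 Hz cutoff ceiling are my own illustrative choices; the speed of sound figure is the standard ~343 m/s in air at room temperature.

```python
# Worked sketch of tying audio parameters to game variables.
# Illustrative formulas, not middleware API.

SPEED_OF_SOUND = 343.0  # metres per second in air, at roughly 20 degrees C

def shot_delay_seconds(distance_m: float) -> float:
    # A cannon 686 m away is heard about 2 s after the muzzle flash
    return distance_m / SPEED_OF_SOUND

def highpass_cutoff_hz(health: float, max_cutoff: float = 800.0) -> float:
    # As health drops from 1.0 (full) to 0.0, sweep the high-pass filter
    # from 0 Hz up to max_cutoff, thinning out music and FX
    return (1.0 - health) * max_cutoff
```

In real middleware you would draw these as automation curves driven by a game parameter, but the underlying idea is exactly this kind of mapping from a game variable to an audio value.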

Do you see the possibilities this opens? You can express ideas in the game's plot and mechanics with dynamic and interactive sound design! Isn't that exciting? The takeaway concept that I want you to grasp from this post is that you would never be able to do something this powerful with just linear audio. Working on games makes you think much harder about how sound coming from objects and creatures behaves, evolves and changes. 

As I said before,  you are just providing the ingredients and the rules, the sound design itself only materializes when the player starts the game. 

You can see on the above screenshot how an in-game parameter, distance in this case, affects an event layers' volume, reverb send and EQs.

How to get started

If I have piqued your interest, here are some resources and information to start with.

Fmod and Wwise are currently the two main middleware options used by the industry. Both are free to use and not very hard to get into. You will need to re-wire your brain a bit to get used to the way they work, though. Hopefully, reading this post gave you a solid introduction to some of the concepts and tools they use.

If I had to choose one of them, Fmod may look less intimidating at first and maybe more "DAW user friendly". Of course, there are other options, but if you just want to have a first contact, Fmod does the job.

There are loads of resources and tutorials online to learn both Fmod and Wwise, but since I think that the best way to really learn is to jump in and make something yourself, I'll leave you with something concrete to start from for each of them:

Fmod has these very nice tutorials with example projects that you can download and play with.

Wwise has official courses and certifications that you can do for free, and they also include example projects.

And of course, don't hesitate to contact me if you have further questions. Thanks for reading!