Pro Tools Batch Rename & Regular Expressions

Batch renaming was introduced into Pro Tools at the end of 2017, with version 12.8.2. Since then, I haven't had much of a chance to use this feature, since most of my work has been mixing and sound design. Nevertheless, after some recent days of voice acting recording and all the associated editing, I have been looking a bit into it.

So this is a quick summary of what you can do with it with some tips and examples.


There are two batch rename windows in Pro Tools, one for clips and another for tracks. They are, for the most part, identical. You can open each of them with the following shortcuts:

  • Clips: CTRL + SHIFT + R

  • Tracks: OPTION + SHIFT + R

Both windows also have a preset manager which is great to have.

As you can see, there are four different operations you can do: Replace, Trim, Add and Numbering. As far as I can tell, the different operations are always executed from top to bottom, so keep that in mind when designing a preset. Let’s see each of them in more detail:

Replace (CMD + R) allows you to search for any combination of letters and/or numbers and replace it with a different one. The "Clear Existing Name" checkbox allows you to completely remove any previous name the track or clip had. This option makes sense when you want to start from scratch and use any of the other operations (Add and Numbering) afterwards.

For example, let’s say you don’t like when Pro Tools adds that ugly “dup1” to your track name when duplicating them. You could use a formula like this:

Original names    New names

FX 1.dup1         FX 1 Copy
FX 2.dup1         FX 2 Copy
FX 3.dup1         FX 3 Copy

You may realise that this would only work for the first copy of a track. Further copies of the same track will be named "…dup2", "…dup3" and so on, so the replace won't match. There is a way to fix that with the last checkbox, "Regular Expressions". This allows you to create complex and advanced searches and is where the true power of batch renaming resides. More about it later.
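To see why a regex fixes the dup problem, here is a minimal sketch in Python. Pro Tools has its own regex engine, so I'm using Python's re module purely as a stand-in; the pattern itself (".dup" followed by one or more digits) is the idea that matters:

```python
import re

# Match ".dup" followed by one or more digits at the end of the name,
# so it works for .dup1, .dup2, .dup12 and so on.
def rename(name):
    return re.sub(r"\.dup\d+$", " Copy", name)

for original in ["FX 1.dup1", "FX 2.dup2", "FX 3.dup12"]:
    print(rename(original))  # FX 1 Copy, FX 2 Copy, FX 3 Copy
```

The `\d+` quantifier is what makes this work for any copy number, where a literal search for ".dup1" would not.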

Trim (CMD + T) is useful when you want to shave off a known number of characters from the beginning or end of the name. You can even use the range option to remove characters right in the middle. This of course makes the most sense when you have a consistent name length, since any difference in size will screw up the process.

So, for example, if you have the following structure and you want to remove the date, you can use the following operation:

Original names                   New names

Show_EP001_Line001_280819_v01    Show_EP001_Line001_v01
Show_EP001_Line002_280819_v03    Show_EP001_Line002_v03
Show_EP001_Line003_280819_v02    Show_EP001_Line003_v02
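Trim works on fixed character positions, so it breaks if the fields around the date vary in length. A regex in the Replace operation is more robust; here is a sketch of the equivalent search in Python (standing in for Pro Tools' engine, and assuming the date is always 6 digits between underscores):

```python
import re

# Remove an underscore followed by exactly 6 digits, but only when
# another underscore follows (so "_280819" goes, "_EP001" stays).
def remove_date(name):
    return re.sub(r"_\d{6}(?=_)", "", name)

print(remove_date("Show_EP001_Line001_280819_v01"))  # Show_EP001_Line001_v01
```

The lookahead `(?=_)` checks for the trailing underscore without consuming it, so the rest of the name is untouched.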

Add (CMD + D) lets you insert prefixes and suffixes, pretty much doing the opposite of Trim. You can also insert any text at a certain index in the middle of the name.

We can add to the previous example a suffix to mark the takes that are approved. It would look like this:

Original names            New names

Show_EP001_Line001_v01    Show_EP001_Line001_v01_Approved
Show_EP001_Line002_v03    Show_EP001_Line002_v03_Approved
Show_EP001_Line003_v02    Show_EP001_Line003_v02_Approved

Finally, Numbering (CMD + N) is a very useful operation that allows you to add any sequence of numbers or even letters at any index. You can choose the starting number or letter and the increment value. As far as I can tell, this increment value can’t be negative. If you want to use a sequence of letters, you need to check the box “Use A..Z” and in that case the starting number 1 will correspond with the letter “A”.

If we are dealing with different layers for a sound, we could use this function to label them like so:

Original names    New names

Plasma_Blaster    Plasma_Blaster_A
Plasma_Blaster    Plasma_Blaster_B
Plasma_Blaster    Plasma_Blaster_C

As you can see, in this case, we are using letters instead of numbers and an underscore to separate them from the name. Also, in the case of clips, you can choose whether the order comes from the timeline itself or from the clip list.
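The lettered sequence behaves like counting where 1 maps to "A", 2 to "B" and so on. A quick Python sketch of that mapping (just an illustration, not how Pro Tools implements it):

```python
from string import ascii_uppercase

# Mimic the Numbering operation with "Use A..Z" checked: append
# _A, _B, _C... to a list of identically named layers.
names = ["Plasma_Blaster"] * 3
renamed = [f"{name}_{ascii_uppercase[i]}" for i, name in enumerate(names)]
print(renamed)  # ['Plasma_Blaster_A', 'Plasma_Blaster_B', 'Plasma_Blaster_C']
```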

Regular Expressions

Regular expressions (or regex) are a kind of unified language or syntax used in software to search, replace and validate data. As I was saying, this is where the true power of batch renaming lies. In fact, it may be a bit overkill for Pro Tools, but let's see some formulas and tips for using regular expressions in Pro Tools.

This stuff gets tricky fast, so you can follow along by trying out the examples in Pro Tools or in an online regex tester.

Defining searches

First off, you need to decide what you want to find in order to replace it or delete it (replace with nothing). For this, you can of course search for any literal term like "Take" or "001", but obviously you don't need regex for that. Regex shines when you need to find more general patterns, like any 4 digit number or the word "Mic" followed by optional numbers. Let's see how we can do all this with some commands and syntax:

[…] Anything between square brackets is a character set. You can use "-" to describe a range. For example, "[gjk]" would search for either g, j or k, while "[1-6]" means any number from 1 to 6. We could use "Take[0-9]" to search for the word "Take" followed by any 1 digit number.

{…} Curly brackets are used to specify how many times we want to match a certain character set. For example, "[0-9]{5}" would look for any combination of numbers that is exactly 5 digits long. This could be useful to remove or replace a set of numbers with a fixed length, like a date. You can also use "[0-9]{5,8}" to search for any number that is between 5 and 8 digits long. Additionally, "[0-9]{5,}" would look for any number that is 5 or more digits long.

There are also certain special instructions to search for specific sets of characters. "\d" matches any digit (number) character, while "\w" matches any letter, digit or underscore character. "\s" finds any whitespace character (normal spaces or tabs).
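These character sets and quantifiers behave the same in most regex engines. A few quick checks in Python (standing in for Pro Tools' engine; the sample names are made up):

```python
import re

# "Take" followed by any single digit:
assert re.search(r"Take[0-9]", "Take5_final")
# Exactly 6 digits in a row, e.g. a date:
assert re.search(r"\d{6}", "Show_280819_v01")
# \w matches letters, digits and underscores:
assert re.fullmatch(r"\w+", "Line_001")
# \s matches whitespace (the space in "FX 1"):
assert re.search(r"\s", "FX 1")
print("all patterns matched")
```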


When defining searches, you can use some modifiers to add extra meaning. Here are some of the most useful:

. (dot or full stop) Matches any single character. So, "Take_." would match "Take_" followed by any one character.
+ (plus sign) Matches one or more of the preceding token. We could use "Take_.+" to match "Take_" followed by any number of characters (at least one).
^ (caret) When used within a character set, means "everything but what follows this character". So "[^a-d]" would match any character that is not a, b, c or d.
? (question mark) Makes the preceding token optional. So, for example, "Mic\d?" would match the word "Mic" by itself and also with any 1 digit number after it.
* (asterisk) Matches zero or more of the preceding token; in a way, it is a combination of + and ?. So, for example, "Mic\d*" would match "Mic" by itself, "Mic6", but also "Mic456" and, in general, the word "Mic" followed by any number of digits.
| (vertical bar) Is used to express the boolean "or". So, for example, "Approved|Aproved" would search for either of these options and apply the same processing to whichever is found.
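The modifiers above can all be verified in Python's re module, which I'm again using as a stand-in for Pro Tools' engine:

```python
import re

# ? makes the previous token optional: "Mic" alone or with one digit
assert re.fullmatch(r"Mic\d?", "Mic")
assert re.fullmatch(r"Mic\d?", "Mic6")
# * allows zero or more digits after "Mic"
assert re.fullmatch(r"Mic\d*", "Mic456")
# + requires at least one character after the underscore
assert re.fullmatch(r"Take_.+", "Take_03_final")
# ^ inside a character set negates it: nothing outside a-d in "abcd"
assert not re.search(r"[^a-d]", "abcd")
# | matches either spelling
assert re.search(r"Approved|Aproved", "Line_01_Aproved")
print("all modifier examples hold")
```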

Managing multiple regex in the same preset

You sometimes want to process several sections of a name and replace them with different things, regardless of their position and the content around them. To achieve this, you could create a regex preset for each section, but it is also possible to have several regex formulas in just one. Let's see how we can do this.

In the "Find:" section, we need to use (…) (parentheses). Each section enclosed between parentheses is called a group. A group is just a set of instructions that is processed as a separate entity. So if we want to search for "Track" and also for a 3 digit number, we could use a search like "(Track)(\d{3})". Now, it is important to be careful with what we put between the two groups, depending on our goals. With nothing in between, Pro Tools would strictly search for the word "Track" immediately followed by a 3 digit number. We may want this, but typically what we want is to find those terms anywhere in the name and in whichever order. For this, we can use a vertical bar (|) between the two groups, like so: "(Track)|(\d{3})", which tells Pro Tools: hey, search for this or for that, wherever you find it.

But what if you want to replace each group with a specific, different thing? This is easily done by also using groups in the "Replace" section. You need to identify each of them with "?1", "?2" and so on. With our previous search, we could then replace the word "Track" anywhere in the name with "NewTrack" and any 3 digit number with "NewNumbers".
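Pro Tools' "?1"/"?2" replace-field syntax is not shared by standard regex engines, but the same per-group replacement can be sketched in Python with a replacement function; this is an illustration of the idea, not Pro Tools' actual mechanism:

```python
import re

# Replace whichever group matched with its own new text, mimicking
# Pro Tools' "?1" / "?2" replace groups.
def replace_groups(name):
    def pick(match):
        if match.group(1):       # the word "Track" matched
            return "NewTrack"
        return "NewNumbers"      # otherwise the 3 digit number matched
    return re.sub(r"(Track)|(\d{3})", pick, name)

print(replace_groups("Track_017"))  # NewTrack_NewNumbers
```

Because the groups are joined with "|", each match triggers exactly one of the two replacements, wherever it occurs in the name.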

Here is a more complex example, involving 4 different groups. If you have a look at the original names, you will see this structure: "Show_EpisodeNumber_Character_LineNumber". We want to change the show and character to their proper names. We are also using a "v" character after the line number to indicate that this is the take approved by the client, and it would be nice to transform this into the string "Approved". Finally, Pro Tools adds a dash (-) and some numbers after you edit any clip, and we want to get rid of all of that. With one regex we can solve all of this in one go. Notice that the group order is not important, since we are using vertical bars to separate the groups. In the third group, I'm searching for anything that comes after a dash and replacing it with nothing (i.e., deleting it), which can be very handy sometimes. So the clip names will change like so:

Original names New names

Show_045_Character_023-01     Treasure_Island_045_Hero_023
Show_045_Character_026v-03    Treasure_Island_045_Hero_026_Approved
Show_045_Character_045v-034   Treasure_Island_045_Hero_045_Approved
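The original preset lived in a screenshot, so here is a speculative Python reconstruction of that 4 group rename. The exact pattern is my assumption (Python's re standing in for Pro Tools' engine, with a replacement function instead of "?1"-style groups):

```python
import re

# Four alternated groups: the show name, the character name, the
# "-NN" edit suffix (deleted), and the "v" approved marker.
def rename(clip):
    def pick(m):
        if m.group(1):
            return "Treasure_Island"   # proper show name
        if m.group(2):
            return "Hero"              # proper character name
        if m.group(3):
            return ""                  # drop the dash and edit numbers
        return "_Approved"             # "v" marks the approved take
    return re.sub(r"(Show)|(Character)|(-\d+)|(v(?=-|$))", pick, clip)

for clip in ["Show_045_Character_023-01",
             "Show_045_Character_026v-03",
             "Show_045_Character_045v-034"]:
    print(rename(clip))
```

Group order really is irrelevant here: the alternation tries each branch at every position, so each piece of the name is handled independently.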

Other regex functions that I want to learn in the future

I didn't have time to learn or figure out everything that I have been thinking regular expressions could do, so here is a list of things I would like to research in the future. Maybe some of them are impossible for now. If you are also interested in achieving some of these things, leave a comment or send me an email and I could have a look in the future.

  • Command that adds the current date with a certain format.

  • Commands that add meta information like type of file, timecode stamp and such.

  • Syntax that allows you to search for a string of characters, process them in some way, and then use the result in the replace section.

  • Deal with case sensitivity.

  • Capitalize or uncapitalize characters.

  • Conditional syntax (if you find some string do A; if you don't, do B).


I hope you now have a better understanding of how powerful batch renaming can be. With regular expressions, I just wanted to give you some basic principles to build upon, so you can start creating more complex presets that can save you a lot of time.

Figuring out: Gain Staging

What is it?

Gain staging is all about managing the audio levels of different layers within an audio system. In other words, when you need to make something louder, good gain staging is knowing where in the signal chain would be best to do this. 

I will focus this article on mix and post-production work in Pro Tools, since this is what I do daily, but these concepts can be applied in any other audio related situation, like recording or live sound.

Pro Tools Signal Chain

To start with, let's have a look at the signal chain in Pro Tools:

[Diagram: the Pro Tools signal chain]

Knowing and understanding this chain is very important when setting up your session for mixing. Note that other DAWs vary in their signal chain. Cubase, for example, offers pre- and post-fader inserts, while in Pro Tools every insert is pre-fader except for the ones on the master channel.

Also, I've added a Sub Mix Bus (an auxiliary) at the end of the chain because this is how mixing templates are usually set up, and it is important to keep in mind when thinking about signal flow.

So, let's dive into each of the elements of the chain and see their use and how they interact with each other.

Clip gain & Inserts

As I was saying, on Pro Tools, inserts are pre-fader. It doesn't matter how much you lower your track's volume, the audio clip is always hitting the plugins with its "original" level. This renders clip gain very handy since we can use it to control the clip levels before they hit the insert chain.

You can use clip gain to make sure you don't saturate your first insert's input and to keep the level consistent between different clips on the same track. This last use is especially important when audio is going through a compressor, since you want roughly the same amount of signal being compressed across all the different clips on a given channel.

So what if you want a post-fader insert? As I said, you can't directly change an insert to post-fader, but there is a workaround. If you want to affect the signal after the track's volume, you can always route that track or tracks to an auxiliary and have the inserts on that aux. In this case, these inserts would be post-fader from the audio channel's perspective, but don't forget they are still pre-fader from the aux channel's own perspective.

Signal flow within the insert chain

Since the audio signal flows from the first to the last insert, when choosing the order of these plugins it is always important to think about the goal you want to achieve. Should you EQ first? Compress first? If you want a flanger, should it be at the end of the chain or maybe at the beginning?

I don't think there is a definitive answer and, as I was saying, the key is to think about the goal you have in mind and whichever way makes conceptual sense to you. EQ and compression order is a classic example of this.

The way I usually work is that I use EQ first to reduce any annoying or problematic frequencies, having also a high pass filter most of the time to remove unnecessary low end. Once this is done, I use the compressor to control the dynamic range as desired. The idea behind this approach is that the compressor is only going to work with the desired part of the signal.

I sometimes add a second EQ after the compressor for further enhancements, usually boosting frequencies if needed. Any other special effects, like a flanger or a vocoder would go last on the chain.

Please note that, if you use the new Pro Tools clip effects (which I do use), these are applied to the clip before the fader and before the inserts.

Channel Fader

After the insert chain, the signal goes through the channel fader or track volume. This is where you usually do most of the automation and levelling work. A good gain stage management job makes working with the fader much easier. You want to be working close to unity, that is, close to 0.

This means that, after clip gain, clip effects and all inserts, you want the signal to be at your target level when the fader is hovering around 0. Why? This is where you have the most control, headroom and comfort. If you look closely at the fader, you'll notice it has a logarithmic scale. A small movement near unity might mean a 1 or 2 dB change, but the same movement down below could be a 10 dB change. Mixing close to unity makes subtle and precise fader movements easy and comfortable.


Sends

Pro Tools sends are post-fader by default, and this is the behaviour you would usually want. Sending audio to a reverb or delay is probably the most common use for a send, since you want to keep 100% of the dry signal and just add some wet processed signal that changes in level as the dry signal does.

Pre-fader sends are mostly useful for recording and live mixing (sending a headphone mix is a common example) and I don't find myself using them much in post. Nevertheless, a possible use in a post-production context could be when you want to work with 100% of the wet signal regardless of how much of the dry signal is coming through. Examples of this could be special effects and/or very distant or echoey reverbs where you don't want to keep much of the original dry signal.

Channel Trim

Trim is pretty much like effectively having two volume lanes per track. Why would this be useful? I use trim when I already have an automation curve that I want to keep, but I just want to make the whole thing louder or quieter in a dynamic way. Once you finish a trim pass, both curves coalesce into one. This is the default behaviour, but you can change it in Preferences > Mixing > Automation.


VCAs

VCAs (Voltage Controlled Amplifiers) are a concept that comes from analogue consoles and allow you to control the level of several tracks with a single fader. Consoles do this by controlling the voltage reaching each channel, but in Pro Tools, VCAs are a special type of track that has no audio, inserts, inputs or outputs. VCA tracks just have a volume lane that can be used to control the volume of any group of tracks.

So, VCAs are something you usually use when you want to control the overall level of a section of the mix as a whole, like the dialogue or sound effects tracks. In terms of signal flow, VCAs just change a track's level via the track's fader, so you could say they act as a third fader (the second being trim).

Why is this better than just routing the same tracks to an auxiliary and changing the volume there? Auxiliaries are also useful, as you will see in the next section, but if the goal is just level control, VCAs have a few advantages:

  • Coalescing: After every pass, you are able to coalesce your automation, changing the target tracks levels and leaving your VCA track flat and ready for your next pass.

  • More information: When using an auxiliary instead of a VCA track, there is no way to know whether a child track is being affected by it. If you accidentally move that aux fader, you may go crazy trying to figure out why your dialogue tracks are all slightly lower (true story). On the other hand, VCAs show you a blue outline (see picture below) with the actual volume lane that would result after coalescing both lanes, so you can always see how a VCA is affecting a track.

  • Post-fader workflow: Another problem with using an auxiliary to control the volume of a group of tracks is that, if you have post-fader sends on those tracks, you will still send that audio away regardless of the parent auxiliary's level. This is because you are sending that audio away before it reaches the auxiliary. VCAs avoid this problem by directly affecting the child track's volume and thus also affecting how much is sent post-fader.

Sub Mix buses

This is the final step of the signal chain. After all inserts, faders, trim and VCAs, the resulting audio signals can be routed directly to your output, or you may consider using a sub mix bus instead. This is an auxiliary track that sums the signals from a specific group of channels (like dialogue tracks) and allows you to control and process each sub mix as a whole.

These are the type of auxiliary tracks I was talking about in the VCA section. They may not be ideal for controlling the levels of a sub mix, but they are useful when you want to process a group of tracks with the same plugins or when you need to print different stems.

An issue you may run into when using them is finding yourself "fighting" for a sound to be loud enough. You feel that pushing the fader more and more doesn't really help and you barely hear the difference. When this happens, you've probably run out of headroom. Pushing the volume doesn't seem to help because a compressor or limiter further down the signal chain (that is, acting as a post-fader insert) is squashing the signal.

When this happens, you need to go back and give yourself more headroom, by making sure you are not over-compressing or by lowering every track's volume until you are working at a manageable level. Ideally, you should be metering your mix from the start so you know where you are in terms of loudness. If you mix to a loudness standard like EBU R128, that should give you a nice and comfortable amount of headroom.

Final Thoughts

Essentially, mixing is about making things louder or quieter to serve the story that is being told. As you can see, it is important to know where in the audio chain the best place to do this is. If you keep your chain in order, from clip gain to the sub mix buses, making sure levels are optimal every step of the way, you'll be in control and have a better idea of where to act when issues arise. Happy mixing!

All you need to know about the decibel

Here is a bird's eye view of the decibel and how understanding it can be useful if you work as a sound designer, sound mixer or even just anywhere in the media industry.

I've included numbered notes with more information. So, enter the decibel:

The Decibel is an odd unit. There are three main reasons for this: 

1: A Logarithmic Unit

Firstly, a decibel is a logarithmic unit [1]. Our brains don't usually enjoy the concept of logarithmic units, since we are used to things like prices, distances or weights, which usually grow linearly in our everyday lives. Nevertheless, logarithmic units are very useful when we want to represent a vast array of different values.

Let's see an example: if we take a value of 10 and we make it 2, 3 or 5 times bigger, we'll see that the resulting value gets huge pretty fast on a logarithmic scale [2].

  1. Note that I will use "logarithmic units" and "logarithmic scales" interchangeably.

  2. I'm using logarithms to base 10. It is the easiest to understand, since we use the decimal system.

How much bigger?   Value on a linear scale   Value on a logarithmic scale

1 time             10                        10
2 times            20                        100
3 times            30                        1000
4 times            40                        10000
5 times            50                        100000
The reason behind this difference is that, while the linear scale is based on multiplication, the logarithmic scale uses exponentiation [3]. Here is the same table but with the math behind it, including the generic formula:

  3. Actually, the logarithm is just the inverse operation of exponentiation; that's why you will sometimes see exponential scales or units. They are basically the same as logarithmic ones.

How much bigger?   Value on a linear scale   Value on a logarithmic scale

1 time             10 (10*1)                 10 (10^1)
2 times            20 (10*2)                 100 (10^2)
3 times            30 (10*3)                 1000 (10^3)
4 times            40 (10*4)                 10000 (10^4)
5 times            50 (10*5)                 100000 (10^5)
X times            10*X                      10^X
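The table above can be checked in a couple of lines of Python; the logarithm simply undoes the exponentiation:

```python
import math

# Linear growth multiplies the base value; logarithmic (base 10)
# growth raises 10 to that power instead.
for x in range(1, 6):
    linear = 10 * x
    logarithmic = 10 ** x
    print(x, linear, logarithmic)

# log10 recovers the exponent: log10(100000) is 5
assert math.log10(10 ** 5) == 5
```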

As you can see, with just a 5 times increment we get to a value of a hundred thousand. That can be very convenient when we want to visualise and work with values on a set of data ranging from dozens to millions. 

Some units work fine on a linear scale because we usually move within a small range of values. For example, let's imagine we want to measure distances between cities. As you can see, most values are between 3000 and 18000 km, so they fit nicely on an old fashioned linear scale. It's easy to see how the distances compare.

Now, let's imagine we are still measuring distances between cities, but we are an advanced civilization that has founded some cities throughout the galaxy. Let's have a look:

As you can see, the result is not very easy to read. Orion is so far away that all the other distances are squashed on the chart. Of course, we could use light years instead of km, and that would be much better for the cities around other stars, but then we would have super low, hard to use numbers for the Earth cities. Another solution would be to measure Earth cities in kilometres and galaxy cities in light years, but then we wouldn't be able to easily compare the values between them.

The logarithmic scale offers a solution to this problem, since it easily covers several orders of magnitude. Here is the same distance chart, but on a logarithmic scale; I just took the distances in kilometres and calculated their logarithms.

This is much more comfortable to use; we can get a better idea of the relationships between all these distances.

Like the city examples above, some natural phenomena that span several orders of magnitude are more comfortably measured on a logarithmic scale. Some examples are pH, earthquakes and... you guessed it, sound loudness. This is the case because our ears can process both very quiet and very loud sounds [4].

  4. It seems we animals experience much of the world in a logarithmic way. This also includes sound frequency and light brightness. Here is a cool paper about it.

So the take away here is that we use a logarithmic scale for convenience and because it gives us a more accurate model of nature.

2: A Comparative Unit

Great, so we now have an easy to use scale to measure anything from a whisper to a jet engine; we just need to stick our sound level meter out of the window and check the number. Well, it's not that simple. When we say something is 65dB, we are not just making a direct measurement, we are always comparing two values. This is the second reason why decibels are odd. Let me elaborate:

Decibels really express the ratio between a measured value and a reference value. In other words, they are a comparative unit. Just saying 20dB is incomplete, in the same way that just saying 20% is incomplete. We need to specify the reference value we are using. 20% of what? 20dB with respect to what? So, what kind of reference value could we use? This brings me to the third reason:

3: A Versatile Unit

Although most people associate decibels with sound, they can be used to measure ratios of values of any physical property. These properties can be related to audio (like air pressure or voltage) or they may have little or nothing to do with audio (like light or reflectivity on a radar). Decibels are used in all sort of industries, not only audio. Some examples are electronics, video or optics.

OK, with those three properties in mind, let's sum up what a decibel is.

A decibel is the logarithmically expressed ratio between two physical values

Let that sink in and make sure you really get those three core concepts.
Now, let's see how we can use them to measure sound loudness, that's why we were here if I remember correctly.

In space, nobody can hear you scream


As much as Star Wars tries to convince us of the contrary, sound's energy needs a physical medium to travel through. When sound waves disturb such a medium, there is a measurable pressure change as the atoms move back and forth. The louder the sound, the more intense this disturbance is.

Since air is the medium through which we usually experience sound, this gives us the most direct and obvious way of measuring loudness: we just need to register how pressure changes on a particular volume of air. Pressure is measured in Pascals, so we are good to go. But wait, if this is the most direct way of measuring loudness couldn't we just say that a pair of speakers are capable of disturbing the air with a pressure of 6.32 Pascals and forget about decibels?

Well, we could, but again, it wouldn't be very convenient. While the mentioned speakers can reach 6.32 Pascals and this seems like a comfortable number to manage, here are some other examples, from quiet to loud:

Source                                Sound Pressure (Pa)   Sound Pressure (mPa)

Microsoft's Anechoic Chamber          0.0000019             0.0019
Human Threshold of Hearing @ 1 KHz    0.00002               0.02
Quiet Room                            0.0002                0.2
Normal Conversation                   0.02                  20
Speakers @ 1 meter                    6.32                  6320
Human Threshold of Pain               63.2                  63200
Jet Engine @ 1 meter                  650                   650000
Rifle Shot @ 1 meter                  7265                  7265000

Unless you love counting zeros, that doesn't look very convenient, does it? Note how Pascals are not very comfortable for quiet sounds, while mPa (a thousandth of a Pascal) doesn't work very well for loud ones. If our goal is to create a system that measures sound loudness, one of the key things we need is a unit that can comfortably cover a large range of values. Several orders of magnitude, actually. To me, that sounds like a job for a logarithmic unit.

Moreover, measuring naked Pascals doesn't seem like a very useful thing to do when our goal is just to get an idea of how loud stuff is. A better approach could be to compare our measured value to a reference value and get the ratio between the two. This is starting to sound an awful lot like our previous definition of a decibel! We are getting somewhere.

So, what could we use as a reference level to measure the loudness of sound waves in the air? If you have a look at the table above, you'll notice a very good candidate: the human threshold of hearing. If we do this, 0dB would be the minimum pressure our ears can detect, and from there the numbers go up on a comfortable scale as intensity increases. Even better, if we measure sounds that are below our ears' threshold, the resulting number will be negative, indicating not only that the sound is imperceptible to us but also by how much. That's an elegant system right there. I'm starting to dig decibels.

Now, let's look at the previous Pascals table, but adding now the corresponding decibel values:

Source                                Sound Pressure (Pa)   dBSPL

Microsoft's Anechoic Chamber          0.0000019             -20.53
Human Threshold of Hearing @ 1 KHz    0.00002               0
Quiet Room                            0.0002                20
Normal Conversation                   0.02                  60
Speakers @ 1 meter                    6.32                  110
Human Threshold of Pain               63.2                  130
Jet Engine @ 1 meter                  650                   150
Rifle Shot @ 1 meter                  7265                  171
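The dB column comes from comparing each pressure to the threshold of hearing. For pressure, which is an amplitude quantity, the standard formula is 20 times the base-10 logarithm of the ratio; a quick Python check against the table:

```python
import math

P0 = 2e-5  # Pa, human threshold of hearing at 1 kHz

def db_spl(pressure_pa):
    # Pressure is an amplitude quantity, so the log ratio is scaled by 20
    return 20 * math.log10(pressure_pa / P0)

print(round(db_spl(0.02)))   # normal conversation: 60
print(round(db_spl(6.32)))   # speakers at 1 meter: 110
print(round(db_spl(650)))    # jet engine at 1 meter: 150
```

Feeding in the Pascal values from the table reproduces the dBSPL column.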

That looks like a much easier scale to use. Remember that dBs are used to measure both very quiet things like anechoic chambers and very loud stuff like space rockets. This scale does a better job for the whole range of human audition, it is fine tuned to those microphones we carry around and call ears.

Here is a nice infographic with some more examples so you get an idea of how some daily sources of sound fit in the decibel scale.

Decibel Flavours

Did you notice that in the table above there is a cute subindex after dB that reads SPL? What's up with that? That subindex stands for Sound Pressure Level and denotes a particular flavour of decibel. Since decibels can be based on any physical property and can use any reference value, we can have many different flavours of decibel, depending on which measured property and reference value are more convenient in each case.

In the case of dBSPL, this type of decibel is telling us two things. Firstly, that the physical property we are using is pressure. Secondly, that our reference value is the threshold of human hearing. This is fine for measuring the loudness of sound waves travelling through the air but, is audio information capable of travelling through other mediums?

We have learned to transform the frequency and amplitude information contained in sound waves in the air into grooves in a record or streams of electrons in a cable. That's a pretty remarkable feat that deserves its own post but for now let's just consider that we are able to "code" audio information into flows of electrons that we can measure.

Since dBs can be used with any physical property, we can use units from the realm of electronics, like watts or volts, to measure loudness in an electrical audio signal. In this sense, both pascals and volts give us an idea of how intense a sound signal is, even though they refer to very different physical properties.

So, we need to establish which units and reference values are useful for building new decibel flavours. We also need to label each particular flavour of dB somehow. This is usually done using a subindex (dBSPL) or a suffix (dBu).

Let's have a look at some of the most used decibel flavours:

dB Unit       | Property Measured (Unit)   | Reference Value                        | Used on
dBSPL         | Pressure (pascals)         | 2×10⁻⁵ Pa (human threshold of hearing) | Acoustics.
dBA, dBB, dBC | Pressure (pascals)         | 2×10⁻⁵ Pa (human threshold of hearing) | Acoustics, when accounting for human sensitivity to different frequencies.
dBV           | Electric potential (volts) | 1 V                                    | Consumer audio equipment.
dBu           | Electric potential (volts) | 0.7746 V                               | Professional audio equipment.
dBm           | Electric power (watts)     | 1 mW                                   | Radio, microwave and fibre-optic communication networks.
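One detail hidden in the table: power quantities (watts, as in dBm) use a factor of 10 in the decibel formula, while amplitude quantities (volts or pascals) use 20, because power is proportional to the square of the amplitude. A quick sketch (function names are mine, for illustration):

```python
import math

def db_from_power(p_watts, p_ref_watts):
    # Power quantities use a factor of 10
    return 10 * math.log10(p_watts / p_ref_watts)

def db_from_voltage(volts, v_ref):
    # Amplitude quantities use 20: squaring the ratio
    # doubles the logarithm, turning the 10 into a 20
    return 20 * math.log10(volts / v_ref)

print(round(db_from_power(1.0, 1e-3)))        # 1 W -> 30 dBm
print(round(db_from_voltage(1.0, 0.7746), 1)) # 1 V -> ~2.2 dBu
```

So the same signal can map to different dB figures depending on which flavour you read it in; the reference value and the 10/20 factor are part of the unit's definition.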

As you can see, we can also use units from the electrical realm to measure how loud an audio signal is, and we choose the most convenient one depending on the context. Ideally, when using decibels, the type should be stated, although sometimes it has to be inferred from context.

If you read dB values on a mixing desk, for example, chances are they are dBu, since this is the unit usually used in professional audio. When shopping for a pair of speakers or headphones, SPL values are usually given. Finally, when measuring things like an office space or a computer fan you will see dBA, dBB or dBC. These units are virtually the same as dBSPL, but they apply different weighting filters that account for the fact that we are more sensitive to some frequencies than others, in order to get a more accurate result.
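Going the other way is just as handy: given a dBu reading on that mixing desk, you can recover the actual voltage by inverting the formula. A small sketch (the function name is mine):

```python
DBU_REF = 0.7746  # volts, the dBu reference value

def dbu_to_volts(dbu):
    # Invert dB = 20*log10(V / Vref)  ->  V = Vref * 10^(dB/20)
    return DBU_REF * 10 ** (dbu / 20)

print(round(dbu_to_volts(0), 4))  # 0 dBu is the reference: 0.7746 V
print(round(dbu_to_volts(4), 3))  # +4 dBu, pro line level -> ~1.228 V
```

The +4 dBu figure is the nominal line level of professional gear, which is why it makes a nice sanity check here.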

And that's all folks. I left several things out of this post because I wanted to keep it focused on the basics. The decibel has some more mysteries to unravel but I'll leave that for a future post. In the meantime, here are some bullet points to refresh you on what you've learned:


The decibel:

  • Uses the logarithmic scale which works very well when displaying a wide range of values.

  • Is a comparative unit that always uses the ratio between a measured value and a reference value.

  • Can be used with any physical property, not only sound pressure.

  • Uses handy reference values so the numbers we manage are more meaningful.

  • Comes in many different flavours depending on the property measured and the reference value.

Shotgun Microphones Usage Indoors

Note: This is an entry I recovered from the old version of this blog, and although it is around 5 years old (!), I still think the information can be relevant and interesting. So here is the original post with some grammar and punctuation fixes. Enter 2012 me:

So I have been researching an idea that I have been hearing for a while:

"It’s not a good idea to use a shotgun microphone indoors."

Shotgun microphones

The main goal of these devices is to enhance on-axis signals and attenuate the sound coming from the sides. In other words, make the microphone as directional as possible in order to avoid unwanted noise and ambience.

To achieve this, the system cancels unwanted side audio by delaying it; the operating principle is based on phase cancellation. Early designs used a series of tubes of different lengths that allowed on-axis signals to arrive first while forcing off-axis signals to arrive delayed. This design, created by the prolific Harry Olson, eventually evolved into the modern shotgun microphone.

Indirect signals arrive delayed. Sketch by

In Olson’s original design, improving directivity meant adding more and more tubes, making the microphone too big and heavy to be practical. To solve this, the design evolved into a single tube with several slots that behaved in an equivalent manner to the old additional tubes. These slots made the off-axis sound waves hit the diaphragm later, so when they combined with the direct sound signal they cancelled out, relatively boosting the on-axis signal.

This system has its limitations. The tube needs to be long if we want to cancel low enough frequencies. For example, a typical 30 cm (12″) microphone would start behaving like a cardioid (with a rear lobe) under 1,413 Hz. If we want to go lower, the microphone would need to become too big and heavy. Like this little fellow:
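A rough way to see why the tube needs to be long: a common rule of thumb says the interference tube stops being directional for wavelengths longer than the tube itself, so the lowest directional frequency is roughly the speed of sound divided by the tube length. This is only a ballpark estimate under that assumption, not a manufacturer's spec, so it will not match published figures exactly:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def lowest_directional_freq(tube_length_m):
    """Rough estimate (rule of thumb, not a spec): a tube loses
    directivity for wavelengths longer than the tube itself."""
    return SPEED_OF_SOUND / tube_length_m

print(round(lowest_directional_freq(0.30)))  # 30 cm tube -> ~1143 Hz
print(round(lowest_directional_freq(2.0)))   # 2 m tube -> ~172 Hz
```

Either way, the trend is clear: halving the cutoff frequency means doubling the tube length, which is exactly why very low-reaching shotguns end up absurdly long.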

Electro-Voice 643, a 2-metre beast that kept its directionality as low as 700 Hz. Call for a free home demonstration!

On the other hand, making the microphone longer makes the on-axis angle narrower, so the more directional the microphone is, the more important correct axis alignment becomes. The phase cancellation principle also brings consequences like comb filtering and undesirable coloration when we go off axis. This can work against us when it is hard to keep the microphone in place, which is why these microphones are usually operated by hand or mounted on cranes or boom poles.

In this Sennheiser 416 simplified polar pattern, we can appreciate the directional high frequencies (in red) curling on the sides. The mid frequencies (in blue) show a behaviour somewhere between the highs and a typical cardioid pattern (pictured in green) with a rear lobe.


This other pattern shows an overall shotgun microphone polar pattern. The side irregularities and the rear lobe are a consequence of the interference system.

Indoor usage

The multiple reflections in a reverberant space, especially the early reflections, alter how the microphone interprets the signals that reach it. Ideally, the microphone determines, depending on the angle of incidence, whether a sound is relevant (wanted signal) or just unwanted noise. When both signal and noise are reflected by nearby surfaces, they enter the microphone at “unnatural” angles (if we consider the direct sound trajectory natural). The noise is then not properly cancelled, since it is not correctly identified as noise; worse, part of the useful signal will be cancelled because it is mistaken for noise.

For that reason, shotgun microphones will work best outdoors or at least in spaces with good acoustic treatment.

Another aspect to keep in mind is the rear lobe these microphones have. As we saw earlier, this lobe captures mostly low frequencies, so, again, a bad-sounding room that reinforces certain low frequencies is something we want to avoid when using a shotgun microphone. When we have a low ceiling, we are sometimes forced to keep the microphone very close to it, so the rear lobe and the proximity effect combine and can make the microphone sound nasty. This is not a problem on a professional movie set, where you have high ceilings and good acoustics; in fact, shotgun microphones are a popular choice in those places.

Lastly, the shotgun's size can be problematic to handle in small places, especially when we need precision to stay on axis.

The alternative

So, for indoors, a better option would be a hypercardioid pencil microphone. They are considerably smaller, easier to handle in tight spaces and more forgiving in axis placement. Moreover, they don't have an interference tube, so we won't get unwanted coloration from room reflections.

It is worth noting that these microphones still have a rear lobe that affects even the mid-high frequencies, although it is not as pronounced.

So hypercardioid pencil microphones are a great choice for indoors recording. When compared to shotguns, we are basically trading off directionality for a better frequency response and a smaller size.