Obviously there are a number of audio editing software apps to choose from, but it is the sheer number of options within each app that can be intimidating to the new voice actor.
Today I thought I’d share with you my workflow when recording voiceovers, particularly the processing steps I take. For any of you who are considering voiceover, you may find this useful as a guide. It is not the be-all and end-all of VO processing technique, but this procedure is easy to follow, gives consistent results, and provides a baseline to use when experimenting with settings and techniques. The beauty of it is that you don’t have to understand the physics and properties of sound to do this, although it is beneficial to learn at some point.
Currently I’m using Adobe Audition 3.0 running on a PC with Windows 10. I use Audition for the simple reason that my instructor used it during my training. He taught me a simple 3-step procedure that generates consistent results for me, and very likely would for many others. I’ve made a few small adjustments to it that work in my particular environment.
Once you set up your editing app, you’ll want to choose default recording settings. Voice123.com gives instructions to set the sample rate to 44.1 kHz, the resolution to 16-bit, and the MP3 bitrate to 96 or 128 kbps, which are probably close to the industry standard. They also recommend normalizing the audio to -3 dB. My app’s default settings are the same, and I use the 96 kbps value.
So, what happens when you’ve recorded your audio, assuming you have proofed it, cleaned up the errors, mouth noises, and digital glitches? My instructor would tell me, “save the file!” (Ctrl-S on the PC), then follow these steps:
– Normalize the file to -3 dB
– Hard Limit to -3 dB
– Compander the file
Those three steps give a lot more depth and resolution to your voice, and also make it easier to hear on a “lo-fi” platform while still faithfully reproducing your resonance and range. But what did I add to this equation? I use the Automatic Click Remover (standard) and Hiss Reduction (normal). At first I was using the Click/Pop Eliminator, but that tended to create more “warbling” glitches than it was worth. The Automatic Click Remover doesn’t remove as many clicks, but it’s safer and doesn’t distort.
So now my workflow after initial clean-up is:
– Normalize
– Hard Limit
– Automatic Click Remover
– Hiss Reduction
– Compander
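If you’re curious what the first two steps actually do to the samples, here’s a minimal Python sketch (my own simplification for illustration, not Audition’s implementation): peak normalization scales the whole file so the loudest sample lands at -3 dBFS, and a hard limiter then clamps anything still above that ceiling.

```python
import math

def peak_db(samples):
    """Peak level of a float track (-1.0..1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def normalize(samples, target_db=-3.0):
    """Scale the whole track so its loudest sample sits at target_db."""
    gain = 10 ** ((target_db - peak_db(samples)) / 20)
    return [s * gain for s in samples]

def hard_limit(samples, ceiling_db=-3.0):
    """Clamp anything that still pokes above the ceiling.
    (Audition's Hard Limiter is gentler and uses look-ahead;
    this brick-wall clamp just illustrates the idea.)"""
    ceiling = 10 ** (ceiling_db / 20)
    return [max(-ceiling, min(ceiling, s)) for s in samples]

track = [0.02, 0.5, -0.9, 0.1]        # toy "recording"
track = hard_limit(normalize(track))
print(round(peak_db(track), 1))       # -3.0
```

The compander is the step that actually reshapes dynamics; more on that in the reply at the end of this post.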
Now from start to finish, it looks like this:
– Record
– Clean up breaths, gaps, fluffs
– If you leave a small header and footer of silence, say, at least 0.5 second at the beginning and end, highlight it and reduce the volume by 40 dB. This makes a clean transition before the voice playback starts and after it ends – I tend to breathe out noticeably when finishing a read, so this step is mandatory for me. If you are submitting an audition with two takes, do the same thing between the takes.
– Normalize – Hard Limit – Compander
– Use time compression if necessary to make audio the correct length
– Spectral Analyzer to check for any digital ghosts that the processing steps may have created
– SAVE THE FILE
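The 40 dB cut in the silence-trimming step above is just a fixed gain change; here is the arithmetic as a short sketch (my own illustration, not an Audition feature):

```python
# Cutting a span by 40 dB multiplies its amplitude by 10**(-40/20) = 0.01,
# quiet enough that a trailing exhale effectively disappears while
# avoiding the abrupt digital edge of deleting the samples outright.
def attenuate(samples, cut_db=40.0):
    gain = 10 ** (-cut_db / 20)        # 0.01 for a 40 dB cut
    return [s * gain for s in samples]

tail = [0.08, -0.05, 0.02]             # a faint breath at the end of a read
print([round(s, 6) for s in attenuate(tail)])   # [0.0008, -0.0005, 0.0002]
```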
Again, this doesn’t mean my workflow is the absolute best or the one everyone should use, but it’s a good starting point for someone new to the business. What IS important is finding the sequence of steps that works for you, and maximizing your efficiency in executing them.
I joined the RadioForecastNetwork.com team almost three months ago. Ron Allan advised me that it would be excellent training to streamline my reading, editing and processing skills, and he was right. My first attempt at doing a 30-second broadcast for ten stations (five of which needed music beds and mixing) took me almost 3.5 hours.
After two months I was down to 90 minutes, but three weeks ago I buckled down and found I could do them in one hour exactly, averaging six minutes per station. The key was to focus, focus, focus on that workflow, and not worry or stop to analyze all aspects of the finished VO. So I asked for 10 more stations, and have been doing them in 2 hours and 20 minutes, with lots of room to become even more efficient.
I timed myself today, and found at one point I was doing some stations in 3 minutes each (for an average 30-second spot) – 4 or 5 minutes if I had to mix a music bed. I did stop to take a mental break in the middle, but at that rate, I estimate I could possibly do 20 stations in one hour and 20 minutes. Of course I would have to block out everything and focus on the process, not how great I thought I sounded. But it can be done. That gave me a whole new respect for Rod Tanner, the RFN operations guy who fills in when a station isn’t covered and probably does at least 200 forecasts each day himself!
As an added bonus, I found my auditioning technique has improved in the last month. I’m able to review a file and process it for sending much quicker, and know it’s error-free.
Due to the nature of RFN’s current business model, I don’t see myself staying there a long time – if they change, then I could easily change my mind – but I will say it has been a great learning experience, and I’ve benefitted personally as a voice artist from that experience.
What about you readers out there? What tips, techniques, or workflows do you use when recording voiceovers that you’re comfortable sharing with others? I’d love to hear from you, just click on Leave A Comment at the upper right of this post. (Please, no spam or sales pitches, those WILL be deleted)
Okay, that’s all for now. My wife and I signed up at a community workout center last week and we’re scheduled to go on an exercise date when she gets home in the next 90 minutes. Wish me luck! Until next time, keep your best voice forward, and pass the sauce.
Hey, Mr. Joe! What is “compander,” and what does it do? Also, I’ve been told to “normalize” as the LAST step. I sometimes do it first to bump up the volume, do my edits, then “normalize” again. What do you think?
Hi, Tema-Talking. Thanks for your reply, much appreciated. Boy, those are great questions.
“Compander” is an audio processing function in your DAW (Digital Audio Workstation) such as Adobe Audition. It may be called something different in other DAW apps. The word is a portmanteau of “compressor” and “expander”. I’m not a techno-whiz, but my understanding is that it analyzes your recording, looks at peak levels over a certain dB threshold and compresses them by a ratio formula, and also applies expansion to your lowest levels. Briefly, it “evens out” a voiceover track so that the parts at a lower volume are not overshadowed by the higher-volume portions. The net result is a smoother, richer-sounding voiceover that is pleasing to the ear.
In Adobe Audition 3.0, you can find Compander under Effects-Amplitude & Compression-Dynamic Processing-Compander. When you click on it, you will see the different threshold and ratio settings:
compress 3.75:1 above -9 dB
compress 1.1:1 below -9 dB
expand 2:1 below -40 dB
So everything louder than -9 dB is compressed, with each dB above the threshold reduced to roughly a quarter of a dB (the 3.75:1 ratio), while everything quieter than -40 dB is expanded downward, pushing faint room noise even lower. The net effect is a smoother-sounding VO track with more noticeable depth and texture.
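You can check the compression figure with a little arithmetic. Here’s a hypothetical static compressor curve in Python for just the segment above the threshold (real dynamics processing also applies attack/release times and the other segments, so actual peaks land a bit differently):

```python
def compressed_db(level_db, threshold_db=-9.0, ratio=3.75):
    """Static compressor curve: above the threshold, each dB of
    input becomes 1/ratio dB of output (no attack/release, no
    makeup gain -- a simplification for illustration only)."""
    if level_db <= threshold_db:
        return level_db                   # untouched below threshold
    return threshold_db + (level_db - threshold_db) / ratio

# A peak normalized to -3 dBFS sits 6 dB above the -9 dB threshold,
# so it comes out at -9 + 6/3.75 = -7.4 dBFS.
print(round(compressed_db(-3.0), 1))      # -7.4
```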
“Normalize” just means to scale the level (volume) of any selected portion of your track, just like turning the volume knob up, except your app analyzes the selection so that the loudest portion lands exactly at -3 dB, a commonly accepted ceiling for VO levels, and all other portions are raised in the same proportion.
Since the compander function does bring peak levels down a bit – say anywhere from -4.5 to -6 dB – you certainly can normalize the track so it’s back up to -3 dB. However, you may get differing opinions on this. In my estimation, you are not required to do so. I’ve already normalized my track when I begin processing, and the peak levels after compandering are strong enough to be easily heard on playback. I do have a fairly strong voice. It’s possible someone with a softer voice might benefit from normalizing at the end. Experiment with it and see what gives you the best results.
In my case, during the time I recorded daily weather forecasts for RadioForecastNetwork.com, I never normalized my dry tracks after compandering, and I never heard complaints from their staff about my voice being too soft. However, when listening to a few of the other forecasters on there, I’m guessing they did normalize after compandering, because their forecasts were noticeably louder.
Again, I’m not an audio engineer. I learned to use the normalize, hard-limit, and compander functions when recording voiceovers before I actually understood what was happening, but now I have at least an idea of what is going on when I use those functions.
Here’s a YouTube video by Jason Huggins that does a great job explaining those basic VO processing functions:
https://www.youtube.com/watch?v=FHpfim5-Pcs
Thanks again for your reply, Tema-Talking. Great to hear from you!
-Z-