One of the most common questions we get at CTA is how to build volunteer teams. We dig deep into that topic tonight with a guy who is really doing a great job with it. Get out the pencil and paper, you're going to want to take notes...
Based on some of the consoles I’ve seen lately, audio console layout is something that doesn’t seem to get a lot of thought. However, a properly laid-out console not only makes mixing more fun, it can keep us from making big mistakes during a service.
Legend has it that in the early days of mixing, as analog consoles got larger, engineers noticed that channels farther from the master had more noise in them. So it made sense to put the money channels—the vocals—nearest to the master. As the master was typically on the right, that meant the left-most channels became home of the drums—who would notice noise in the drum channels?
Somewhere along the line, a somewhat common layout emerged: drums, bass, guitars, keys, vocals. As consoles and input count continued to grow, we started seeing the master section land in the middle of the console instead of on the right. In that case, usually the band fell to the left while vocals and effects fell to the right.
Back then, where channels showed up on the console was completely dependent on what inputs they were plugged into. Today, with digital consoles, it’s easy to put any channel on any fader. But before we do any patching—digital or analog—it’s important to spend some time thinking about why channels go where they do.
Why You Do It Is More Important Than What You Do
I’ve seen all sorts of…shall we say, interesting channel layouts on consoles. Drums spread all over the place, the lead guitar next to the pastor’s mic, vocal effects in the middle of the keyboards. It’s as if someone just patched inputs into the first open channel or floor pocket without any thought at all.
And while there are all sorts of ways you can lay out your console, the first consideration is to make sure you do it on purpose. Don’t just shove inputs into any old channel. Take some time to think about it and patch it in a way that makes sense. Keep all the drum channels together, and then keep the band together. Having all the vocals next to each other makes it a lot easier to find them. Put the channels you adjust all the time closest to you, so you’re not reaching all the way to the end every time. Think about how you mix, then organize the channels in a way that supports what you do.
There is no “right” way to organize a console. But here are some ideas of how to do it. Personally, I like to start with drums (kick, snare, hat, toms, overheads), then bass, guitars, keys, vocals and finally effects. Other channels like speaking mics, music playback, video and other utility channels are either to the right or left of effects depending on the console.
When I was at Coast Hills, I had my current console set up with my VCAs on the right, which put my vocals right in the middle in front of me. My preference is to mix more on channel faders than VCAs, but I know others who prefer the opposite.
I also know guys who put the bass right next to the kick because they like to work those two together. I keep my bass in my guitars VCA; others put it in with the drums or dedicate a VCA to just kick and bass.
When I mix on analog consoles, I still follow the same basic layout. The advantage of a consistent layout is that I can mix almost any band on any console and know, without looking, where the faders are. In contrast, I’ve watched other guys mix and spend half their time searching the board for the guitar fader, only to miss the solo.
Regardless of how you choose to lay out the console, once you come up with a plan, stick with it. Adapt and change as needed, but maintain as much consistency as you can.
Small Digital Consoles are Tricky
The current trend toward smaller mixers (i.e. fewer faders) with higher channel counts makes smart layout absolutely critical. If you only have 16 handles to deal with, you simply must be intentional about what you put where. In that case, I would most likely not use up the first 8 faders on the top layer for the drums.
In that case, it might be more prudent to put a drums VCA on channel one and treat it as one instrument (which, arguably, you should do anyway). As you fill up your fader bank and channels spill into another layer, it is often a smart idea to duplicate a few channels on every layer. For example, you might want to have the worship leader’s mic on the same fader on every layer so you can get to it quickly regardless of the layer you’re on.
The fewer faders you have, the more strategic you need to be with grouping channels into VCAs. How you group the channels will be dependent on your band and your workflow.
I once mixed a 28-input CD release party on 12 faders. I built multiple layers that were very similar, but expanded various sections. For example, layer one had the drums as a single VCA. But layer two gave me all 8 drum channels. Layer three split out all my effects, which were a single VCA on layer one. Things that didn’t get used often were down on layers four and five, but the lead singer’s vocal and guitar were always in the same place on every layer.
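The multi-layer idea above can be sketched as data. This is a toy model with my own channel names and fader numbers (not an actual console file format, and not the exact layout from that gig): each layer maps fader slots to channels, and the lead singer’s vocal and guitar are pinned to the same two faders on every layer.

```python
# A toy model (hypothetical names, not any console's format) of a
# 12-fader surface with multiple layers. Layer 1 has the drums as a
# single VCA; layer 2 expands the kit into all 8 drum channels; layer 3
# splits out the effects. Faders 11-12 hold the same channels everywhere.

LAYERS = {
    1: {1: "Drums VCA", 2: "Bass", 3: "E Gtr", 4: "Keys",
        5: "FX VCA", 11: "Lead Gtr", 12: "Lead Vox"},
    2: {1: "Kick", 2: "Snare Top", 3: "Snare Btm", 4: "Hat", 5: "Tom 1",
        6: "Tom 2", 7: "OH L", 8: "OH R", 11: "Lead Gtr", 12: "Lead Vox"},
    3: {1: "Verb 1", 2: "Verb 2", 3: "Delay",
        11: "Lead Gtr", 12: "Lead Vox"},
}

# Sanity check: the "always there" channels sit on the same faders
for layer in LAYERS.values():
    assert layer[11] == "Lead Gtr" and layer[12] == "Lead Vox"
print("Lead vocal and guitar are on faders 11-12 on every layer")
```

However you name things, checking your layer file this way before a service catches the “where did the vocal go on this layer?” surprise in advance.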
I spent about 30 minutes initially setting up the board, then tweaked my layout during rehearsal based on how the set unfolded. Of course, this is easier with a digital board than analog, but the principles remain. Think about your layout and adjust it until it makes sense and works for you.
Next week, we’re going to revisit the concept of input sheets. I’ve written about them before, but I think the topic bears repeating. Plus, I have some new stuff to share. Have a great weekend!
This post is brought to you by Shure Wireless. The new ULX-D Dual and Quad wireless systems feature RF Cascade ports, a high-density mode with significantly more simultaneous operating channels, and bodypack diversity for mission-critical applications. Visit their website at Shure.com.
I’ll be honest, I’ve been putting off writing this one. Mainly because this could go down the rabbit hole very quickly and I don’t want to do that. So I’ll say at the start that I’m going to keep this fairly simple and not delve into a deep treatise on phase. I would suggest you listen to our podcast with Bob Heil if you want to learn more about phase. He’s pretty smart in that area. With that out of the way, what’s the difference?
Phase has a time component; polarity does not.
That’s about as simple an explanation as I can give. Polarity is the reversal of the positive and negative terminals in a balanced circuit. As you may recall from our previous discussions of balanced and unbalanced circuits, a balanced circuit has a positive pole, a negative pole and a ground. It is said to be balanced because the voltage on the positive side is the same as the voltage on the negative side; it’s just that one is + voltage, the other is - voltage.
When you press the polarity button—which is often labeled with a ø or Ø symbol—what you are doing electrically is swapping the positive and negative poles of the input. That inverts the signal, which behaves like a 180° phase shift at every frequency. If you were to have two mics right next to each other pointed at the same source, flipping the polarity on one would cause a near-total cancellation if you brought both channels up.
Down the Rabbit Hole
Here is a pretty drastic simplification of what we’re talking about. Thanks to the Desmos graphing calculator for the visuals. Below are two signals that are out of polarity with each other; that is, we’ve swapped the + and -, which looks just like a 180° phase shift. For every + voltage on one signal, there is a corresponding - voltage, which would cause total cancellation of the signal.
Below is the same thing, only the polarity of both signals is the same. It’s hard to see, but both signals are overlaid on top of each other. In this case, the resulting signal would have twice the amplitude (+6 dB) of either one alone because they would add.
To further our discussion, below is one with one of the signals shifted over in time. This causes a phase shift.
Phase is Time
When I took the Rational Acoustics Smaart course last January, Jamie Anderson said, “Phase is the most demonized and BS term in the industry today.” He also said, “Filters don’t have phase shift; filters are phase shift.” Phase has a time component, whereas polarity does not. I said that again in case you missed it the first time.
So don’t say, “Flip that channel out of phase, would you?” Instead the correct phrase is, “Flip that channel out of polarity, please.”
Why Change Polarity?
You may want to change polarity for a few reasons. When I mic up a Leslie 122 rotary speaker cabinet that is being driven by a Hammond B3 (we’re getting really technically correct here at CTA today…), I like to put two mics on the top horn. I place them 90° to each other and flip one out of polarity with the other. That results in a really wide stereo image of the top horn.
I also find that polarity reversal can come in handy in an interview situation on stage. Let’s say you have a pastor on a headset mic interviewing someone with a handheld mic. The headset mic may also pick up the other person, but because they’re a foot or two away, it will be out of phase with the handheld mic (phase is time, remember?). Sometimes flipping one of them out of polarity will minimize the phase interaction. Basically, that shifts the phase offset by 180°, which may make it less destructive.
When mic’ing a snare drum on the top and bottom, you’d want to flip the polarity of the bottom mic. The wave front coming from the bottom of the snare is 180° away from the top of the snare, and you need to flip polarity so you don’t get cancellation. That’s a bit of a simplification, but try it sometime. The reason you don’t get complete cancellation is that the mics are rarely pointed straight up and straight down at the snare heads, so you’re not truly 180° out on either of them. But if you don’t flip the polarity on the bottom mic, there will be some cancellation, which will make the resulting sound very thin.
Well, we ended up down the rabbit hole after all. I made a video a while back that demonstrates some phase shift concepts that may help further explain the concept. Check it out if you want to learn more.
Today's post is brought to you by CCI Solutions. With a reputation for excellence, technical expertise and competitive pricing, CCI Solutions has served churches across the US in their media, equipment, design and installation needs for over 35 years.
So far in our What’s the Difference series, we’ve considered AFL/PFL and Pre-Fade/Post Fade. Today, we’re going to look at another pairing that I see confused all the time. That is the difference between Groups and VCAs.
A Group is a Mix Bus
A group is a place to send channels post-fader. To make this clearer, the LR main output on your console is technically a group. So is the Mono output, if your mixer has one. When you send a channel to a group, the signal is taken after all processing and the fader, so it is truly the final step on the way out of the console. You can assign a channel to as many groups as you want, up to the number of groups you have, and every group receives the exact same signal at the exact same level. This is where groups differ from auxes: each aux send goes out at its own independent level.
A VCA is a Remote Control
A VCA (short for Voltage Controlled Amplifier) is really a way to remotely control the level of a channel or group of channels from a single fader. When you assign a channel to a VCA, you can add and subtract gain from that channel using the VCA fader and/or the channel fader. Moving the VCA master up by 5 dB will have the same effect on the channel as moving the channel fader up 5 dB. Turning off the VCA master will effectively mute the channel(s), making it easy to turn entire groups of channels on and off with one fader move.
When to Use Them
I wrote a much longer series on this topic some time back, but here’s the shortened version. Groups are useful for applying the same processing to a group of inputs. Clever, huh? For example, if you want to do some parallel compression on the drums, you can assign all the drum inputs to a group and insert a compressor on the group. Mix that with the uncompressed version and you have parallel compression. Or perhaps you want to subtly compress all the BGVs. Same idea—only don’t assign them to the main LR bus; send them to the group, compress, then send the group to the LR mix.
VCAs are useful for mixing similar types of instruments. On digital consoles, you may not have the faders on the surface for all your inputs. Really large analog consoles may be a long reach. So, you can combine channels into one to make it easier to manage. For example, you may set up the mix for the drum kit, then assign all the drum channels to a VCA. Because the drums are one instrument, you can adjust the level of the drums with the VCA. Some engineers like to put the bass and kick on a VCA and move their level together. Others will assign all the keys to a VCA and all the guitars to another.
It’s important to note that a VCA is not better than a group, nor is a group better than a VCA. They are different. Not all mixers—especially small ones—have VCAs so you have to make do with groups. But when you have both, use them for what they are good at.
VCAs and DCAs
On some digital consoles, Yamaha for example, VCAs are called DCAs. DCA stands for Digitally Controlled Amplifier. The function is the same, but the underlying technology is different. For all practical purposes, they are the same.
This is a pretty simplified explanation. For a lot more detail, check out some of the posts below.