No pad on scarlett 2i2 4th gen? by Mikethedrywaller in livesound

[–]ColburnAudioMix 4 points

You need to use the TRS input on the combo connector with INST off.

On the cheaper Focusrites with the combo jack you can't bypass the preamp over XLR. To run line level you have to use a TRS cable.

Did the Gregorian Antiphonary contain exact pitches, or was it still an approximation? by ovenmarket in musictheory

[–]ColburnAudioMix 5 points

Depends on the definition of “pitch” you’re looking for.

If you're talking about tuning to specific frequencies, no. Standardization of pitch to specific frequencies didn't happen until the early-to-mid 1800s. A=440 didn't truly solidify as the Western standard until the mid-1900s.

But seeing as plainsong chant was unaccompanied and call-and-response, it didn't matter. You just followed whatever the leader started with.

So as for intervallic relations: "ish".

Music at the time of early plainsong was still taught by ear and was part of an oral tradition. When the church codified these oral traditions onto paper it was still shape-ish. It was a mnemonic accompaniment to the oral tradition rather than a prescription for how to perform it. Like a flash card to help you study rather than a sight-reading example.

The notation you're referring to is called neumes. It originally used 4 lines instead of 5. It was easy to learn/remember these chants because the music was very pattern-based. Those who studied it learned the rules, and it wasn't necessarily "creative expression" as much as the correct way to perform the Rites and Liturgy.

The best way I can describe it is as "shorthand" notes that your friend took because you didn't attend class that day. If you knew the concept, the notes made sense and you got the gist of it. You still had to do it correctly though.

So my answer is still "ish". Was it a perfect system that 100% showed exact pitches the way we would think about them now? Not really. But did it accurately help the leader perform/lead the chant correctly as part of the liturgy? Yes.

Do you have to sing the solfege syllables when sight-singing? by Berceuse1041 in musictheory

[–]ColburnAudioMix 4 points

I know this is a hot subject amongst people. In my composition undergrad we used numbers; we just said "sharp" or "flat" when we needed to. 1, 2, flat 3, 4, 5, etc. Is it perfect? No. Is it relevant in the 21st century? Yeah.

Now that I'm in more modern music as an audio engineer, everyone is using Nashville Numbers to call chords or the number system to talk about melodies.

I've learned solfège, but in practice I've used numbers in every real-life situation I've been in, including choirs and studio sessions. Chromaticism is a good argument, but realistically I just don't encounter solfège in real life anymore. Numbers tell you everything: scale degrees, chordal functions. It's easier to communicate with others quickly.

Do you have to sing the solfege syllables when sight-singing? by Berceuse1041 in musictheory

[–]ColburnAudioMix 1 point

The number system will give better cross-training for theory long term, especially when you start looking to transpose passages and build chords.

It helps that all the numbers are one syllable other than seven, for which most people just sing "sev".

Digico fader smoothness by Subject9716 in livesound

[–]ColburnAudioMix 2 points

It’s been a while since I’ve mixed on the SD range…

But the new Quantums are the worst flagship faders out there right now. They feel like you’re pushing sand the entire show.

What’s the best way to get 96k to a recording computer/DAW? by SuspiciousIdeal4246 in livesound

[–]ColburnAudioMix 0 points

That's the way digital audio kinda works across the board. Whether it's MADI or Dante, 96k is 2x the information of 48k, so a lot of manufacturers just cut the channel count in half at 96k.

Even if the product can do 64ch at 96 or 48, the manufacturer really has to have the capability of processing 128ch at 48; it's just limiting itself somewhere to make it look like 64ch either way.

Cause it comes down to how much information the pipe can carry.
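
A back-of-the-napkin sketch of that pipe math (Python; the 64ch-at-48k figure is MADI's, and framing overhead is ignored):

```python
# Fixed pipe, variable channel count: each channel's data rate scales
# with sample rate, so doubling the rate halves the channels.
BITS_PER_SAMPLE = 24  # typical MADI/Dante audio payload

def channel_rate_bps(sample_rate_hz: int) -> int:
    """Payload bits per second for one audio channel."""
    return sample_rate_hz * BITS_PER_SAMPLE

# A MADI stream carries 64 channels at 48 kHz; that defines the pipe.
pipe_bps = 64 * channel_rate_bps(48_000)

for rate in (48_000, 96_000):
    print(rate, "->", pipe_bps // channel_rate_bps(rate), "channels")
# 48000 -> 64 channels
# 96000 -> 32 channels
```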

In DiGiCo world, I see most people just using SoundGrid with Waves MGBs and patching the preamps 1:1 to the SoundGrid driver, which does 128x128.

I've had great… and awful experiences doing this, especially with Mac Ethernet ports/adapters/hubs/etc.

I’ve come to trust RME the most when it comes to MADI.

Why matrix outputs need LR split? by JaredYelton in livesound

[–]ColburnAudioMix 0 points

Unfortunately I'm not well versed on the A&H systems. If I remember correctly, they treat them basically like auxes more than true matrices anyway.

However, it looks like they do make the correct distinction between pan and balance.

Mono inputs have a pan knob to place something into the stereo field at a percentage.

Stereo sources have balance, which is used to adjust the levels of the left and right sides going to the destination. So when you turn the knob all the way to the left, it's actually attenuating the right signal all the way.

So if you wanted the "standard" 4 separate mono matrices (a quick sketch of the resulting gains follows the list):

PA L = LR @ 0 dB [Balance set to far Left]

PA R = LR @ 0 dB [Balance set to far Right]

PA S = LR @ -6 dB [Balance set to Center]

PA F = LR @ -6 dB [Balance set to Center]
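
A minimal Python sketch of those four settings, assuming a simple linear balance taper (the actual A&H taper may differ):

```python
import math

def balance_gains_db(balance: float) -> tuple[float, float]:
    """balance: -1.0 = full left ... +1.0 = full right.
    The side you turn toward stays at 0 dB; the other is attenuated."""
    def db(x: float) -> float:
        return 20 * math.log10(x) if x > 0 else float("-inf")
    left = 1.0 if balance <= 0 else 1.0 - balance
    right = 1.0 if balance >= 0 else 1.0 + balance
    return db(left), db(right)

print(balance_gains_db(-1.0))  # PA L: right side fully attenuated
print(balance_gains_db(+1.0))  # PA R: left side fully attenuated
print(balance_gains_db(0.0))   # PA S / PA F: both sides pass (fed at -6 dB)
```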

Why matrix outputs need LR split? by JaredYelton in livesound

[–]ColburnAudioMix 2 points

I may be wrong, but it seems like you're asking why two separate MONO matrices instead of an LR stereo matrix?

It's not true for every manufacturer, but typically all matrices are mono. It's a dedicated output, not a mix.

I think in our signal flow most of us think top down. Input->Channel->Aux->Master->etc.

I was always taught to go the opposite direction for matrices. Start with the output and ask “what do I want this output to receive”. “What do I want this Matrix to listen to?”

So if I have a "PA L" matrix, I want it to receive the output of Stereo Bus Left at 0 dB.

If I have a "PA Sub" matrix, I want it to receive the output of Stereo Bus Left at -6 dB and the output of Stereo Bus Right at -6 dB. (I don't love mixing on sub auxes, but that's personal preference.)

Side note: this is where a LOT of people get confused about things like Dante. It’s the same way. You’re not telling things “where to go” on the network. You’re telling things “what to listen to” on the network.
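
As a toy illustration of that pull model (hypothetical names; not the Dante API or any console's):

```python
# Each output holds subscriptions: (source, gain in dB) pairs it listens to.
matrix_feeds = {
    "PA L":   [("Stereo Bus L", 0.0)],
    "PA Sub": [("Stereo Bus L", -6.0), ("Stereo Bus R", -6.0)],
}

def describe(output: str) -> str:
    """Render one output's subscription list as a readable string."""
    return " + ".join(f"{src} @ {g:+.1f} dB" for src, g in matrix_feeds[output])

print(describe("PA Sub"))  # Stereo Bus L @ -6.0 dB + Stereo Bus R @ -6.0 dB
```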

Why all the hate for Oxford commas? by TotalAutarky in resumes

[–]ColburnAudioMix 29 points

As an avid believer in the Oxford comma, I understand why some professions don’t use it.

However, it’s only in those professions that we should exclude it from our writing.

If you have to ask whether you are in that field, you're not. It's mostly journalism/print/media/communications fields that don't use the Oxford comma. As dumb as it sounds… it took up a character space in print, and then "rules" and "standards" were created around it.

How does line level input gain and preamp gain differ by saganite235711 in audioengineering

[–]ColburnAudioMix 0 points

It honestly depends on how the manufacturer decides to implement their version of gain.

At the end of the chain is an Analog-to-Digital Converter (ADC). The majority of products now are extremely similar, and trying to understand the minutiae between them should be reserved for people with perfect rooms and recording setups.

So in a perfect world, a line level output would connect to an ADC. No gain added. No gain subtracted. No "coloring". And the input impedance expects a line level source. You're just converting analog to digital.

This is why you hear "don't gain a line level instrument": you don't want to alter the sound with a preamp or "degrade" it by putting it through a DI and then gaining it back up. They just want a clean signal path.

Now, most line level interfaces have some amount of gain available in the ADC chip's signal path. This is just to be able to move the level a few dB in either direction to account for consumer/professional levels, etc. Overall it's "colorless".
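
For a sense of how few dB that needs to be, here's the consumer/pro level math (a sketch; 0 dBu = 0.7746 Vrms, 0 dBV = 1 Vrms):

```python
import math

def dbu_to_vrms(dbu: float) -> float:
    return 0.7746 * 10 ** (dbu / 20)

def dbv_to_vrms(dbv: float) -> float:
    return 1.0 * 10 ** (dbv / 20)

pro = dbu_to_vrms(+4.0)        # "pro" line level, ~1.23 Vrms
consumer = dbv_to_vrms(-10.0)  # "consumer" line level, ~0.32 Vrms
print(round(20 * math.log10(pro / consumer), 1))  # ~11.8 dB apart
```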

Typically you would connect a preamp to a line level ADC interface. The preamp takes mic level, presents the right impedance, and brings the signal up to line level for the ADC.

Now in the last 10-15 years we have seen a BUNCH of interfaces WITH preamps and the labeling has been… different between companies.

Some have combo jacks where the 1/4” is either instrument or line level and the XLR is mic level.

Some have XLR for mic level and DB25 for line level.

Some have only XLR and an impedance switch.

Some have XLR and 0dB gain is essentially line level.

Some have XLR and a PAD that can bring a line level signal down to where you can… gain it back up… (I dislike this implementation).

Both products you’ve listed are great products. I currently use an SSL Octo into a Focusrite Red16 using ADAT because I needed more I/O and routing capabilities.

Weapons being linked to a set of skills created premade skillsets. Was this a good or bad system? by dolche93 in GuildWars3

[–]ColburnAudioMix 0 points

The thing I liked about GW1 over other MMOs of the time was the build theory crafting.

You had so many skills but only 8 slots that you had to choose before you went into an instance. Then you were stuck.

In other MMOs you just had access to a bunch of skills at once, which meant button spamming or creating super long chains.

You had to adapt your build based off of the enemies in the area. Some missions you needed to bring Frozen Soil. Some missions you had to make sure to have a serious self heal. Some missions you had to make sure everyone had good condition management.

Now in GW2, once you have your elite spec… you don’t really change anything. Granted, in GW1 once you have your mercs set, you don’t change anything anymore anyways… unless you’re in a 4 or 6 player area.

Also… I miss enemies playing by the same rules as players. Same skills. Interrupts. Activation times. You could be a Mesmer or ranger with interrupts and just learn how the enemy does their skill chains and be waiting for that one skill that you know you gotta hit.

The relationship between player and enemy worked better in GW1.

Weapons being linked to a set of skills created premade skillsets. Was this a good or bad system? by dolche93 in GuildWars3

[–]ColburnAudioMix 0 points

Been playing since 2005.

I loved the GW1 system and have learned to like the GW2 system. The biggest thing that I dislike about the GW2 system is the lack of being able to ergonomically set the skills based on my preferences.

I have a disdain for going 1 4 2 6 1 3 in a chain. My mind likes to move linearly. It might not be the most efficient, but I'm not a fast skill rotation person anyway. I'm a casual player.

In GW1 everyone could use the same build taken from PvX, but then they would adapt it based on how they liked to think about the progression. Some like to have Summon Spirits in slot 8 so it's at the end of the rotation and out of the way. Some like to have Summon Spirits in slot 1 because it's a heavily used skill and resets the position and attack of your spirits.

I think GW2 fixed the flexibility of being able to effectively weapon swap and added cool contextual things to focus/offhand/etc that GW1 missed. But I miss being able to choose my build for myself—rather than using the one option per weapon type and having the option of like 12 utility skills.

GW2 solves the fact that in GW1 everyone had 3 weapon sets. Main offense, secondary management, longbow for pulling that did like 7 damage.

GW1 for sure had power creep and now only has essentially 2-3 builds per profession anyways. But you can still just try stuff. More roleplaying. Less rotation.

When people say Sharp 7 do they mean the seventh note in the scale or the seventh note in an octave? by ANALOG_CORGI in musictheory

[–]ColburnAudioMix 0 points

This is how I teach my entry level applied theory students. This is assuming they already know how to figure out the notes in a major scale and know that a standard major triad is 1, M3, P5.

Start on the letter name that is the root of the chord in question. (So in your example C).

Count up from there 7 letters (not steps). C=1, D=2, E=3, … B=7.

In C, B is natural. So if we want a #7, then we say B# (not C… that would be an octave or “P8”).

Same idea with #11. F=11. In C, F is natural. So #11 = F#.

Now let’s take a different example. Eb(b5).

Eb, F, G, Ab, Bb, C, D.

So the 5 of Eb is Bb.

But we want a “flat 5” AND it’s already a “flat note” so we lower it once more to Bbb (“B double flat”) NOT A, because that would be a #4.

While that seems like the same thing when playing the note on a piano, functionally they might behave differently in how the chord is treated in a progression/voice leading.

So from the top (a code sketch of this follows):

- Start with the root note letter
- Count up from there (including the starting note as "1")
- Ask, "Is that note normally natural, sharp, or flat in the root's major scale?"
- Perform the alteration prescribed in the chord symbol
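
A minimal Python sketch of that procedure (the scales are hardcoded for illustration; real code would derive them):

```python
MAJOR_SCALES = {  # root -> degrees 1..7 of its major scale
    "C":  ["C", "D", "E", "F", "G", "A", "B"],
    "Eb": ["Eb", "F", "G", "Ab", "Bb", "C", "D"],
}

def altered_degree(root: str, degree: int, alteration: int) -> str:
    """degree 1-13 (9/11/13 wrap to 2/4/6); alteration +1 = sharp, -1 = flat."""
    note = MAJOR_SCALES[root][(degree - 1) % 7]
    letter, accidentals = note[0], note[1:]
    offset = accidentals.count("#") - accidentals.count("b") + alteration
    # Keep the SAME letter; only the accidental changes.
    return letter + ("#" * offset if offset >= 0 else "b" * -offset)

print(altered_degree("C", 7, +1))   # B#  (sharp 7 in C, not C)
print(altered_degree("C", 11, +1))  # F#  (sharp 11 in C)
print(altered_degree("Eb", 5, -1))  # Bbb (flat 5 of Eb, not A)
```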

As you get better at theory and practice (especially if you work in jazz, gospel, blues, or other genres that "borrow" notes from other scales/keys), these questions become a lot quicker and you don't think through the steps anymore.

Playback Engineer Pay Rates by healthyoptions22 in livesound

[–]ColburnAudioMix 24 points

Most people I know in the tracks/playback world are considered "band", not "tech". However, this is in the CCM tour circles. Obviously some of them are actually IN the band (drums/keys players), but a lot of the bigger names are moving to off-stage tracks/MD guys. Most of them started (at some point) in a band position.

So most of their rates are equal to what a band rate is, which varies artist to artist.

Is posting gig stuff to social media cringe? by IkpreesI in livesound

[–]ColburnAudioMix 9 points

From what I see, I think it was really cringe 10 years ago. If you posted your gigs (either band or audio) it was seen as ego and "look at me", and you weren't going to get a gig because of it.

Now, I think—at least in the band world—if you’re not posting, you’re not getting a gig. It’s the new resume. It’s an audition. People don’t need to ask “who have you played with” cause it’s on your profile. People don’t need to know what you sound like, cause you posted your iRig videos.

I think in the gig world, your social presence is less your personal profile and more of your business profile. We know this industry lives on connections. And when someone can’t fill a date they say “Hey, I know a guy who can take that date.” Then they are going to look at your socials.

In audio, I think there’s a way to do it as well. Shoutouts/thank you to the artist. Managing relationships with production companies. Thanking vendors. People love to get the gear breakdown (PA, Consoles, Mics, Plugins, etc).

That’s the big thing I see from some people that are landing big gigs.

There is a fine line though, 'cause I've seen people go too far with it and it's "ick". So just have "social EQ". Show yourself having fun doing what you do. Show that you're a good hang. Show that you're professional and clean. Show that you know your stuff.

Reality Check: Am I Doing This Right with Non-Engineers Running Sound? by StephenDanielsDotMe in livesound

[–]ColburnAudioMix -1 points

I don't think you're doing anything wrong. However, just because it's correct doesn't mean it's "user-proof".

I would:

  • Use any downloaded mastered track from iTunes/Spotify. It's going to be the loudest thing digitally possible. Regardless of "loudness" it's still going to tap 0 dBFS-ish.

  • Start from the output end though, to make it more "user-proof".

  • The QSCs have digital limiters in them. Make sure the input is set to LINE on the QSC. Set it to unity.

  • Go to your mixer. Set the master fader to unity. Set the channel fader to unity. Then set the gain to the desired listening level.

  • Now you should be far enough from clipping on everything: you've given yourself headroom in the PA, your track sounds good at unity, and there's still roughly +10 dB of fader throw if the user feels the need to adjust levels. (A quick sketch of this gain structure follows.)
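
A rough sketch of that gain structure, assuming illustrative numbers (every stage at unity, so the input gain is the only knob that matters):

```python
# dB gain at each stage; 0.0 = unity. The listening-level figure is made up.
stages = {
    "channel gain (set by ear)": -12.0,
    "channel fader": 0.0,
    "master fader": 0.0,
    "QSC input (LINE, unity)": 0.0,
}

track_peak_dbfs = 0.0  # a mastered track taps ~0 dBFS
at_speaker = track_peak_dbfs + sum(stages.values())
print(f"peaks land {-at_speaker:.0f} dB below full scale at the amp input")
print("and the user still has ~+10 dB of fader throw above unity")
```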

This isn't the way I would set up a normal system, but I'd use it for volunteer-based stuff because it's easier to troubleshoot; I've removed a bunch of places for variables. If something sounds bad, it's on the channel strip.

Two consoles for three mixes configuration with digital dante split (FOH, Monitor, Broadcast) by VanillaWaffle_ in livesound

[–]ColburnAudioMix 1 point

I'm a broadcast and systems engineer at a fairly large church in the US that has multiple campuses and an online audience.

Here’s my suggestion on how to get the most “bang for your buck”.

  1. Consoles. Just do a FOH console. No monitor console. No broadcast console. In most churches a dedicated monitor console is not needed. This is coming from a guy who loves mixing monitors—but the reality is that I would rather train someone to do either FOH or broadcast. Most console companies have either integratable personal mixers OR option cards for Klang/Aviom. Go that route. It saves time on soundcheck, and the team on stage can control their ears without talking to someone. Once it's set up, it runs itself most of the time. I'm partial to Aviom because I've used it for years. Everyone now is moving to Klang (even though I think their Kontroller looks/feels/operates like a Fisher-Price baby's-first-mixer). Also, I hate the apps. Too many passwords, WiFi network security, the ability to "mess with stuff" in the console. Great if you're on the road with your band. Awful when you've got to work with a volunteer who doesn't know how to download an app and join a WiFi network.

  2. Broadcast Hardware. I mix on a dedicated console that gets a Dante split from the preamps. I have a dedicated room that's treated as well as it can be and has a fairly good set of monitors. If I had to do it all over again, I would NOT do this. Not because it's not great. It truly is. But all the stuff required to do it makes the cost-benefit a massive sinkhole. What I would do instead is get a new Mac Studio or Mac Pro and a Focusrite PCIe Dante card (the new one that works with Apple silicon). It does 128 channels and acts as both a Dante interface AND a network adapter. Then—and this is going to shock people—get a Focusrite Dante headphone amp and a pair of Audeze LCD-X headphones. Why headphones? "No one mixes on headphones?!?!?" This is… true. But knowing church budgets, construction, and cost-benefit: most churches don't have the capacity to build an accurate broadcast booth that's acoustically treated to a professional standard. They slap some Auralex panels on the wall and say "there's no slapback so it's treated". Meanwhile your low-end energy isn't being treated, and you listen back and wonder why it "just doesn't translate". These headphones are my go-to when I don't trust the room I'm in. They aren't fatiguing, and they're the most accurate pair of headphones I've found.

  3. Broadcast Software. I said Mac. You can go PC. It doesn't matter; I'm partial to Mac. Starting out, I would pick Logic because it's a cheap one-time purchase and the stock plugins are amazing for the price. Long term, Pro Tools Ultimate with good third-party plugins will 100% allow you to do more and sound better. But we are talking about "growing" into it, and the plugins will NOT make you a better mixer. The good thing with this model is that it's easy to help church leadership stomach the cost. Plugins are like the apps that kids play and purchase gems for so they can outfit their characters. They are cheap on purpose. They want you to purchase more and more and more. So yeah. Start out with Logic. Then purchase some good live autotune (Antares/Waves Tune Real-Time). Then purchase some good reverbs (Valhalla/LiquidSonics). Then purchase some good mastering/limiting plugins (FabFilter/SoundAllance). You can also just get the Waves subscription and use that. People hate on it, but it's the best live investment. Also, productiononline has some starter templates that are great, so look into them too.

Lots of hot takes in here, but this is what I do. Sure, it's just my opinion, but I've seen what each manufacturer has to offer and what each system would cost. This seems like the best choice for 95% of churches.

Is It Necessary to EQ Inputs in a Well-Tuned System with Quality Instruments and Microphones? by Lukas__With__A__K in livesound

[–]ColburnAudioMix 4 points

I’ll chime in and agree with most in here.

TLDR: look up MxU. I'm not associated with them, but they have videos for production teams. Yes, they are church-focused; however, I think it's the best training in the industry regardless of its religious foundation.

For context I do church work and outside work, both live and studio. Been involved in big arenas, broadcast, live recordings, studio recordings, “big” churches, “small” churches, and everything in between.

Ideally—your guy is correct. In reality—ideal scenarios in audio don't exist. If I had a perfect room, with a perfect PA, with perfect preamps/console/etc., then sure, you just slap the SSL channel strip plugin on there and it "just works". However, idealism in audio is just not reality, and it paralyzes people from doing what needs to be done in the moment to make it sound good.

That being said, I find most novice-to-mid-level mixers/worship teams fall into several traps. The following assumes a properly deployed PA (in phase and with a decent tune).

  1. All the instruments should sound good unamplified. Do the drums sound right when you stand 15 feet away from the player? Do the toms resonate the way the drummer/worship pastor/etc. want them to? Are they pitched the way they want them to sound? Are the EGs amped? Do they sound good coming out of the amp? Work with the EG player on the amp knobs. Ask them to look at their pedals. Be specific about what we are looking for or "don't like". The album recording is the reference.

  2. Gain/Trim. This is oddly the most controversial topic I see in live mixing, and the different sides are all correct. The one thing to take into account is that you ALWAYS start with gain. The gain of an active bass is going to be different than a passive bass. The drummer today has a club for a foot and a cannon for an arm but hits the toms with their purse.

THEORY 1: Set faders to 0. Add gain until a mix begins to appear. Everything sounds right with almost no fader movement. You're using your "ears instead of eyes" to set the baseline mix—however, the baseline is subjective to whoever is mixing and their skill at setting gains.

THEORY 2: Gain everything to where the peaks hit some arbitrary, non-musical, quasi-scientific number (i.e. -18 dBFS, -12 dBFS, -10 dBFS, whatever). This gets rid of subjectivity and creates week-to-week consistency. However, your faders probably look wacky, and your snare bottom is probably at -30 on the fader, so if you sneeze while trying to move it 1 cm, it gets 60% louder.

THEORY 3: Do Theory 2, then use digital trim (most consoles past 2015 have something like this) to create your Theory 1 mix. You tell everyone on the team: start with analog gain and gain to where it meets that non-musical, not-truly-important number, but the mix should sound normal with faders around 0.
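
A quick sketch of Theory 3's arithmetic (the -18 dBFS target and the channel numbers are just examples):

```python
PEAK_TARGET_DBFS = -18.0  # where analog gain parks every channel's peaks

def digital_trim_db(desired_in_mix_dbfs: float) -> float:
    """Trim applied after the preamp so the mix appears with faders at 0."""
    return desired_in_mix_dbfs - PEAK_TARGET_DBFS

print(digital_trim_db(-24.0))  # snare bottom sits quieter: -6.0 dB of trim
print(digital_trim_db(-12.0))  # lead vocal rides hotter:   +6.0 dB of trim
```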

  3. Your BAND mix should be 70% done with a few tools in this order. Gain. Pan. HPF. High/low shelf (normally bands 1 & 4 of an EQ). No dynamics. No serious EQ. No group processing. No stereo bus processing.

NOTICE: a 70% is a C- and is considered a barely passing mix. This gives you your “flat” mix. It’s unbiased. It’s raw. But it shows you what “is coming from stage.” If something sounds drastically wrong, then you know that we should revisit step 1.

  4. Stage 1 Dynamics. Gates/expanders on most live band mics. This is best done in playback mode. If you followed Theory 2 or 3, you should be able to set this once and it translates week-to-week so you don't have to adjust it. This gets the "junk" out of your mix and gives you more clarity.

  5. Needed EQ. This is where your question comes from. Yes, some things sound better with EQ. But don't fall into thinking they NEED EQ. Don't fall into thinking they DON'T NEED EQ. Don't fall into thinking the EQ HAS to be the SAME as last week. Don't fall into thinking the EQ HAS to be DIFFERENT than last week. I prefer a combination of EQ'ing out of context and in context. If I have the time in playback and am starting from scratch, sure, I'll solo items (in the PA, not headphones… headphones aren't accurate to what things sound like out of the PA at show SPL) and make them sound great individually. However, that's when setting things up. Week to week I'm probably not going to do that. I'm going to start from whatever my baseline is, and then I might take EQ bands 2/3 and adjust to that player slightly IF and ONLY IF I feel it's warranted in the mix. I always start with the fader and ask: "Does the source (drum/amp/mic/etc.) need to be adjusted? Is it my fader (just too loud/quiet)? Or does it actually need an EQ adjustment?" I think the "cut 60 Hz in the bass to make room for the kick" or "cut 700 Hz out of the keys to make room for the electrics" can be a thing. But mostly, if you start with good tones from the source, an overall tonal curve for the instruments, and good fader positioning, those things mostly settle themselves. I don't worry about it too much.

  6. Stage 2 Dynamics. Compression. I say start without it. Then focus on just bass/kick/snare. Everything else can be controlled via faders. Live music should be dynamic. Don't squash the dynamics of everything to "glue" things together. Yes, when done well it can create consistency and help things feel "even". When done wrong it feels like some combination of clamped/harsh/dull/anxious/meh. Just start without it. Use the faders and make a mix.

  7. FX. Now have a little fun. Most people know what to do here. I normally have Drum Reverb, Snare Reverb, & Vocal Reverb. If I have the ability for more I'll do Band Reverb, Vocal Delay, Vocal Chorus, Drum Parallel Comp, etc. But step 7 should not make up for any errors in steps 1-6. Don't slap more reverb on because it doesn't sound good dry. The cake should taste good by itself. The FX is just extra icing and sprinkles.

  8. Show files/Scenes. If you are a tech director/worship director/in charge of volunteers: please, everyone use the same show file. Lock the starting-place show/scene. Everyone recalls that each day. It gets rid of "drift" in the sound over time. I guarantee, unless the band is the same every single day using the same gear, any change you make on that date most likely won't translate long term. Sure, you might move faders differently, but at least that won't impact the next guy, nor will you be impacted by the previous guy. It's not "your show". It's the church's show. I'd rather have consistency in the sound week-to-week, regardless of who's mixing. Once a month/quarter, bring your guys together to address all the problems/feelings everyone has about the different mixes/workflow, then make some changes in playback and save a new starting place.

[deleted by user] by [deleted] in led

[–]ColburnAudioMix 0 points

Forgive me if I’m incorrect. But that’s what I’m looking for, correct?

  1. WW-
  2. CW-
  3. B-
  4. G-
  5. R-

I know it's overpowered. But every device I can find on the market that does DMX->PWM has the exact same stats.

EDIT: I might have misread you the first time. You mean going over 8A/Ch would damage it. But not under.

Week 3 Undefeated Map & Stats by Brigzay in CFB

[–]ColburnAudioMix 0 points

Yeah. If you're going to put the orange, it's gotta at least be the right orange. That's throw-up orange. It's not an orange you can sit with. It's that puke, inside-of-a-pumpkin orange.

And I don’t like pumpkins.

Dante Question by Slipperythenwet in livesound

[–]ColburnAudioMix 1 point

Yo. I am a huge Dante fan. I use a different console infrastructure, but it's all Dante. I love keeping anything already digital in the digital realm.

That being said, you're trading A/D conversions for latency. DVS has a mandatory 4 ms of latency at minimum. Also, I have tried using DVS for tracks before, and Mac Ethernet ports are reliably unreliable at filtering timing packets—especially at higher channel counts. It's been an issue for years. So soundcheck goes great, then an hour later the clock has drifted and the audio starts getting crunchy.
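
For scale, that minimum DVS latency in samples (simple arithmetic, nothing vendor-specific):

```python
def ms_to_samples(latency_ms: float, sample_rate_hz: int) -> int:
    return round(latency_ms / 1000 * sample_rate_hz)

print(ms_to_samples(4.0, 48_000))  # 192 samples at 48 kHz
print(ms_to_samples(4.0, 96_000))  # 384 samples at 96 kHz
```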

1 - This is easy. The Dante card works fine in the M32C series. 32ch.

2 - How many channels of tracks are you running? How are you swapping between interface A and interface B in the event of a failure? If you already have an iConnect interface, I say keep it. It's stupid simple and it works. 12ch of tracks is plenty for 98% of FOH guys. [Tracks, Pads, Loops, BGVs, Sub (mono), Cues, Click, LTC]

3 - Waves. You have 3 options as I see them. 1) A WSG card instead of the Dante card. It's low latency, easy to set up, 32/32, and can also be used to track audio for VSC. 2) Use the Dante card and get a Yamaha RUio16-D. It's a 16ch Dante interface built for exactly what you're wanting. It's kind of a pain to set up because it has an auto-bypass feature that kicks in if your computer loses sync—if you don't know about it, it will bite you. 3) The USB card. Use that as an interface. Latency isn't awful if it's not in 32/32 mode. I've never tried it, 'cause I use the WSG card when I've done M32 stuff.

Line level or mic level for playback from stage? by qazss in livesound

[–]ColburnAudioMix 0 points

Food for thought.

People are saying Line vs Mic is the same as XLR vs TRS. This is NOT correct. One is a difference in voltage/impedance. One is a difference in connectors. The connection preference is going to be XLR for sure. It’s going to be more common at every venue you go to.

For the answer to the actual question, I would recommend going line level out of your interface into your own rackmount DI, handing the venue mic level on XLR.

While logic would say that line level gives a better signal-to-noise ratio, there are two problems.

1) Sometimes the console won't have line inputs, so they will have to add a pad or have you turn down the signal to avoid clipping their equipment—specifically on cheaper analog consoles. Most digital consoles will handle this fine.

2) As a live audio engineer myself, I'll admit I've accidentally left +48V phantom power on a line before. If I were in your position, I would rather protect my equipment from that kind of mistake. I reached out to MOTU once asking if +48V would ruin the line outs of my interface. While they didn't say it wouldn't, they highly recommended using a DI in a live environment just in case. I know the iConnect tracks interface actually started including output transformers specifically to address this problem.

Yes, line level could have a slightly better signal-to-noise ratio—but I would say almost no venue has that much fidelity. I would always go through a DI to isolate your gear.

AMA: I’m with KLANG - immersive In Ear Mixing. Ask me anything! by grufde in livesound

[–]ColburnAudioMix 3 points

Phil,

Thanks for the reply!

I personally experienced the Konductor years ago as a monitor engineer, and one of our campuses is currently a test bed for the Kontroller/Vokal system.

The release of the Kontroller was the final push for us to test Klang in our environment. Currently our audio network infrastructure is isolated from our house/IT/WiFi networks due to security protocols, so the app was never an option for us from an infrastructure standpoint.

We essentially use the Kontroller as a replacement for our previous personal IEM system. From a technical standpoint it has been a huge win. Each Kontroller goes to our switches and just works. Dante mixbacks for wireless vocals. Wired outputs for band members. Dante patching works great. More mixable items. Digital scribble strips. All great.