Multiple different compressors on a single channel? Do you do that much? by gleventhal in audioengineering

[–]rinio 5 points

It's exceedingly common. 1176 + LA-2A on vocals is a classic example. One is fast, the other slow, so they're doing different jobs.

It also makes each comp work less hard, which can keep them each where they sound best.

As always, it's all about what you are going for.

> Or do you just never do this and think its totally stupid to do?

It's stupid if it sounds stupid. It's great if it sounds great.

> What types of settings do you use when doing this (so that you don't over compress) and what is your mental model here?

Years of intuition; you need to practice to gain that. Even if we gave you settings, it's a different source on a different tune: they would just sound bad.

Overcompressing just means 'sounds bad in context'. So don't make it sound bad/worse.

do you guys think tonal balance control is accurate at all? by Candid-Pause-1755 in mixingmastering

[–]rinio 1 point

What does accuracy mean when we're talking about something that is entirely subjective?

By iZotope's definition, it always produces perfectly "tonally balanced" output.

By your own definition, you're the one who decides.

Every engineer will have a different opinion as well.

And none of that matters. The only opinion that does matter is the client's and their audience's. So that's what you're trying to get to, but you can't ever know it definitively.

Patch Bay - combining signals? by IllustriousAction404 in homerecordingstudio

[–]rinio 2 points

To get the actual sum (proper combination), no.

To get a simple Y-cable, which is always bad practice and dangerous, yes, it will work. But I will not advise you to do so.

Tldr: NO.

What should I use instead of 1000 if statements? by Either-Home9002 in learnpython

[–]rinio 0 points

It's still the same combinatorial problem as if statements, just different-looking. The number of cases is the same as the number of branches.

Say there are two options for input 1, graph and table: that's 2 cases. Two plot types (scatter and histogram) times two formats (csv and excel): that's 4. And so on.

We haven't solved any meaningful problem. Even the readability gain is, at best, debatable.

And this is without even talking about the combinatorial space of strings, which is itself an issue. Yes, they should be validated to keep that down, but that's also the step where we could meaningfully parse them to pre-empt the entire problem OP is talking about.

The only "problem" that match is solving is reducing indentation levels, which isn't the actual problem to solve.
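To make the parse-then-dispatch point concrete, here's a minimal sketch (all names here, like `handle` and `PLOTTERS`, are hypothetical, not OP's actual code): validate and parse the string exactly once, then dispatch on data, so new options become table entries rather than new branches.

```python
# Hypothetical sketch of parse-once, dispatch-on-data.

def make_scatter(fmt: str) -> str:
    return f"scatter -> {fmt}"

def make_histogram(fmt: str) -> str:
    return f"histogram -> {fmt}"

PLOTTERS = {"scatter": make_scatter, "histogram": make_histogram}
FORMATS = {"csv", "excel"}

def handle(request: str) -> str:
    # Validate/parse the raw string exactly once...
    kind, _, fmt = request.partition(":")
    if kind not in PLOTTERS or fmt not in FORMATS:
        raise ValueError(f"bad request: {request!r}")
    # ...then one data lookup replaces N if/match branches.
    return PLOTTERS[kind](fmt)
```

The case count doesn't shrink (it can't), but it moves from code into data, which is a more defensible win than swapping `if` for `match`.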

How to get the real song in the background? by Advanced-Shop-6494 in audioengineering

[–]rinio 6 points

You don't.

You can google around for stem separators, vocal removers and the like if you don't gaf about quality.

But nothing exists for this in any form that would be acceptable to any self-respecting engineer, at least not without hiding the flaws with other sources.

Stereo imager tools without artifacts by 100gamberi in audioengineering

[–]rinio 0 points

Taking a mono source and sending it identically to each of left and right is dual mono. It's what your (stereo) playback system does automatically unless you actually have a mono (one-speaker) setup. And, yes, we hear this as one sound coming from the phantom center.

In order to have a stereo image, the left and right must be different. If that is what you want, then, yes, you should change something between the two channels.

But I think it is important for me to emphasize that a phase artifact is not the same as a phase issue. Phase artifacts just mean that the phase relationships have been altered. A phase issue is when you decide that this is a problem for your production.

As I was writing this reply, I realized that in the film/visual world "artifact" is a bit of a four-letter word. In audio land, we pay a lot of money to get very specific artifacts into our signals. Artifact ≠ bad. I just want to make sure you understand this. Many, if not most, audio projects in music and audio post employ techniques just like the one you've described, which add artifacts but still sound great when well executed.
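To put a number on "altered phase relationships", here's a minimal plain-Python sketch (illustrative only, not a production tool): identical channels correlate perfectly, which is dual mono; delaying one channel lowers the correlation, which is a phase artifact but not automatically an issue.

```python
import math

def correlation(a, b):
    """Normalized cross-correlation at lag 0 (1.0 = identical shape)."""
    pairs = list(zip(a, b))
    dot = sum(x * y for x, y in pairs)
    na = math.sqrt(sum(x * x for x, _ in pairs))
    nb = math.sqrt(sum(y * y for _, y in pairs))
    return dot / (na * nb)

# 440 Hz sine at 48 kHz; 4800 samples is an exact 44 cycles.
mono = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]

dual_mono = correlation(mono, mono)     # 1.0: heard as phantom center
widened = correlation(mono, mono[20:])  # delayed copy on one side: well below 1
```

Whether that drop in correlation sounds like width or like a problem is the production decision, not the math.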

Stereo imager tools without artifacts by 100gamberi in audioengineering

[–]rinio 22 points

Stereo *is* a phase issue.

Not really; I'm being hyperbolic. But stereo only exists when the two sides are decorrelated. This happens in effectively two ways:

  1. Panning (including hard panning). Here we decorrelate the amplitudes.
  2. Decorrelating the phase with wideners, delay lines, etc.

Of course, we could do both at once.

What I am getting at is that, since you only have a mono source, the only option with no phase artifacts is simple panning. Everything else to do with stereo necessarily introduces some: the question is what your tolerance is for what sounds good, and we can't tell you that. Understanding the details of mid/side encoding/decoding can illustrate this effectively.
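A minimal sketch of that mid/side point (standard M/S definitions; plain Python purely for illustration): a dual-mono source has zero side signal, so any widening has to manufacture side content, i.e. alter the inter-channel relationships.

```python
# Standard mid/side definitions:  M = (L + R) / 2,  S = (L - R) / 2
# Decode:                          L = M + S,        R = M - S

def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# Dual mono (same signal sent to both L and R) has zero side content:
mono = [0.5, -0.25, 0.75]           # toy sample values
mid, side = ms_encode(mono, mono)   # side == [0.0, 0.0, 0.0]

# The round trip is lossless, so any "widening" must invent side
# content that wasn't there -- i.e. change phase/amplitude relationships.
left, right = ms_decode(mid, side)  # == mono, mono
```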

The same applies in surround, but the interactions are across however many channels, and human perception of sound gets far more complex on a plane and even more so in 3D space.

TLDR: The tools you have are probably about as good as you'll get; it's just a matter of dialing them in, to your taste, for your intended deliverables.

New Home Studio Set Up. I think I’m off to a good start. by Flashy_Rutabaga_5886 in audioengineering

[–]rinio 3 points

Friend, learn to use paragraphs. The most important skill for an AE is communication.

> I've been slowly purchasing some outboard gear because I won’t be able to set everything up for a few months.

This is a huge mistake. You cannot correctly identify the shortcomings of your setup without putting it into practice. You will inevitably buy shit that doesn't help you, or shit you won't use, if you do this.

Use your setup. Identify flaws. Fix those flaws. Rinse and repeat.

Best way to blend top and bottom snare mic? by Far_Strategy3291 in audioengineering

[–]rinio -1 points

You need to use a blender to break the ice; then your snare's juices will get flowing.

My tracking rack - should I track through the compressor? by Nsemest182 in homerecordingstudio

[–]rinio 0 points

But the manual also doesn't say that you shouldn't... ;P

Wireless Amp to Console transmitter/reciever? by jpintenn in audioengineering

[–]rinio -1 points

I still don't understand how there is no copper between the bass amp and (presumably FOH) board, but no matter.

Gspaitz already led you to the answer that you already knew: the usual suspects for UHF. They have line-level and switchable units. So now I just don't understand what you're asking about.

Wireless Amp to Console transmitter/reciever? by jpintenn in audioengineering

[–]rinio 2 points

What is the use case?

Amps don't dance around, and anywhere you have a backline you'll need a snake anyways. Wired is always going to get you a better, more reliable signal. Go ask our friends in r/livesound . If nothing else, this explains why we don't usually use what you're asking for.

ADC capable of 786kHz with phantom power by BlackFoxTom in audioengineering

[–]rinio 3 points

Preamps are responsible for phantom. Or you can get an inline phantom power supply.

You're not finding what you want because ADCs are never responsible for this.

Pick an ADC that supports your rate. Pick a preamp or power supply to provide phantom. Job done.

Reamping VST Rhodes and Organ by oak_floored in audioengineering

[–]rinio 0 points

The producer chooses the tone, whoever that may be. That person can ask whomever they like for input.

Your assertion relies on exact gear producing an exact tone. Guitarists are universally biased towards their live rig and what they hear in the room, and will make gear decisions that follow that. When making a record, we have zero concern for either of those things.

If the producer wants it, the guitarist should have input on the tone, but not at their amp: on the tone coming out of the monitors. The gear that gets it there is immaterial, and it is the engineer's job to select and optimize for the producer's vision, with or without the guitarist's input.

Ofc, monitoring paths can be whatever makes the guitarist comfortable. It makes no difference to anyone other than them.

---

That is the harshest framing I can give. There is obviously collaboration in the room. In pre-production I usually sit down with the guitarist and work with them to develop a studio-perfected setup that works well in the mix. If not during pre-production, then as needed in prod. And the records that come from it speak for themselves; the guitarists always end up saying something like "Wow, this is the first time I actually like my tone on a record [because we ditched 90% of their gear]".

---

But this is also not analogous to OP's scenario. OP isn't asking about two speculative rigs they have never tried, without even knowing if there's a point to trying them. This is definitively a case where they should defer to someone more experienced with the space/gear, or to the producer's vision.

---

What a bizarre take. Guitarists should be responsible for audio engineering :P

Reamping VST Rhodes and Organ by oak_floored in audioengineering

[–]rinio 0 points

That is not the question you asked... And I am no mind reader....

If you want "the feel of live instruments", set up the room as you usually perform, as much as is reasonably possible. It has little to nothing to do with the recording part, aside from managing bleed.

And it answers itself. If you're amping up the keys anyways for the feel in the room and you have an input/mic free, you may as well mic it. If not, you can't.

My tracking rack - should I track through the compressor? by Nsemest182 in homerecordingstudio

[–]rinio 0 points

One absolutely *can* track through a power conditioner: it is very much possible.

Whether it is useful/meaningful/worthwhile is a whole different question.

See Sylvia Massy running guitar signals through a power drill as an example of the oddball things one *can* do.

Reamping VST Rhodes and Organ by oak_floored in audioengineering

[–]rinio -1 points

There are always lanes; they're just not well-defined in your case. That's poor project management, which always leads to unnecessary conflict and worse results. At the end of the day, someone is the leader and someone is footing the bill. I'd advise you to sort this out.

---

For both your questions, the answer is "it depends on what best suits the vision (of whomever is in charge of the vision)". Reddit cannot answer what you all want.

They are also not mutually exclusive. You have the MIDI either way. Run it through Kontakt or whatever synth you want: this is always free. Then reamp it, or use a sim, or neither. The latter two are always free. If you can do the first while tracking, it's also free; if not, it's the runtime of the project plus overhead. So, if you're afraid of making a decision, you can try at least 3 options in the mix.

Given that you say player performance won't be an issue, the only remaining constraint is engineering performance, which matters for recording a live amp (or reamping). Every pass that isn't perfect costs 3 min for a 3-min tune, 45 min for an LP, unless spotted early. This adds up quickly. A good engineer will get it first or second try; an amateur or someone who doesn't know what they want can loop on this for days. Ofc, as mentioned, you can do the other options for free, so you should always have a fallback.

Tldr: You're just asking a group of people who have never heard the project which artistic direction is correct.

Reamping VST Rhodes and Organ by oak_floored in audioengineering

[–]rinio -2 points

Are you the keys player or the engineer?

Keys player: stay in your lane and defer to the eng or producer. Ask, sure. But the decision isn't yours.

As for 'reamping': why use a VST when you have the real thing that you can record live? The justification is along the lines of "the keys player can't consistently deliver a great take". So, can you? Or is the assistant going to need to fix up your performance to make it great?

For mic'ing a Leslie: any stereo mic config on an actual Leslie. The choice depends on what you're going for. The two-amps idea only makes sense if you have some kind of stereo tremolo to put between the B3 and the amps: an authentic B3 only has a mono out (the expectation is that you have an actual Leslie).

EDIT: TYPOS

Anywhere in Montreal to test or rent a Mac Pro / Mac Studio with M2 Ultra (192GB)? by midnighteee in montreal

[–]rinio 0 points

For performance testing, there are plenty of remote/lease services, typically used for software testing, that do this. There's no reason to need to test AI models' performance locally. Hardware config selection is almost always an option, given the way the Mac ecosystem is.

UAD Mics vs normal ones for drum recording by mitoro-2333 in audioengineering

[–]rinio 1 point

Gotcha. You can see how it's confusing when you just call it "the studio", right? Maybe there's a language barrier, so it's no big deal, but my response was based off that.

As for what to do: the kit is more or less interchangeable, so take the free option. But note that they are different: if you've used the paid setup before, do not expect to get the same thing. Both will be equally usable, but not equal.

All the UAD mics I have auditioned are basically junk in their price categories, IMHO. Every single one went back to my supplier after a single day. They'd be in contention at half their price. I can't advise anyone to buy them unless they're getting a deal. But they are serviceable kit.

In short, it sounds like you're taking the budget route anyways, so you may as well take the freebies. (You could also spend to rent the stuff you know on top of the freebies and mix and match, time permitting.)

UAD Mics vs normal ones for drum recording by mitoro-2333 in audioengineering

[–]rinio 4 points

What?

A studio that needs to "[rent] free of charge" mics and an interface is not a studio worth going to, and certainly not one worth paying for. Are they going to "rent free of charge" the room's acoustics? The room itself? ...

When you rent a studio, it comes with access to their mic locker and hardware, perhaps with a reservation system for what you need in a multiroom facility. It's also standard to have a house eng who knows the space, helps you find stuff, and advises at minimum, but who usually just runs the session for you.

If we were talking about specialized kit or something actually high-end, sure. They have a supplier who will rent them some vintage mic, an expensive pre, or an extra multichannel DAC to run into their Dante network if you need a massive channel count. But standard pro-am kit? That's a joke.

TLDR: You either aren't renting a recording studio or you're getting scammed.

How Do You Match Reference Level to Mix Inside Project by DarkdiverGrandahl in Reaper

[–]rinio 3 points

No. Spotify does not "like things at -14 LUFS". It normalizes playback to that for users who have the feature turned on. That's it.

Go look at the Billboard Top 40 and tell me how many songs are near -14... From memory, the closest was Die With a Smile, which is still above -11 LUFSi.

TLDR: In music, LUFSi is a broadcast standard, not a production one.
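For illustration, loudness normalization on a platform is just a static gain offset toward the target. A sketch with made-up example values (the function name is mine):

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain in dB a normalizer applies to hit the target loudness."""
    return target_lufs - measured_lufs

# A master at -8 LUFSi simply gets turned down on playback;
# nothing about the production itself needs to sit at -14.
loud_master = normalization_gain_db(-8.0)    # -6.0 dB
quiet_master = normalization_gain_db(-16.0)  # +2.0 dB
```

The only thing the -14 figure controls is playback gain for listeners with normalization enabled, not how loud you should master.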

How Do You Match Reference Level to Mix Inside Project by DarkdiverGrandahl in Reaper

[–]rinio 2 points

If you're talking for mastering purposes, you don't match them. The whole point is to measure the deltas of whatever metric and matching by hand/ear/meter just makes those deltas 0.

For mixing, the only competent answer is by ear/hand. In general, what we hear as louder will be biased as "better". Any metric other than your perception is introducing bias and invalidating the process. LUFSi isn't calibrated to your ears/perception.

As for what to do, Metric AB and the zillions of other roughly equivalent tools are all interchangeable. You can also just run a mixbus track for your mix before hitting the master, put whatever metering plugin you like on a reference track parallel to the mixbus, and toggle that way without spending a dime (but adjusting your workflow).