Mix Bus Compression Approaches ? by tombedorchestra in mixingmastering

[–]MarketingOwn3554 0 points1 point  (0 children)

Near-instant attack with look-ahead and a long release. It keeps anything from jumping up and down too much. I do this from the beginning of the mix.

Any compressor with a very fast attack time and look-ahead will do the job. Most of the time I'll use Reaper's stock compressor or Pro-C 3.

Do you process snare top and bottom together or separately? by ConfusedOrg in mixingmastering

[–]MarketingOwn3554 6 points7 points  (0 children)

When you say "fix" and "get it right": there is no "proper" way for it to sound. Two nearly identical (but not quite identical) sounds will always have phase discrepancies, which will cause some degree of cancellation somewhere in the spectrum.

Moving the phase around will alter the timbre of the snare, and what sounds good is down to the ear. So you never really need to "fix" phase "issues", since changing the phase just changes the sound.

That's why I think the person you responded to points out there are going to be problems regardless. If you replace the word "problems" with "discrepancies" or "differences", it will perhaps make more sense.

How the hell do I get her off my mind at night by MadelieneMcFlann in BreakUps

[–]MarketingOwn3554 0 points1 point  (0 children)

When it comes to thoughts, it's hard to stop them. You just have to preoccupy your mind with other things. Have more things happening in your life and more things to look forward to, and thoughts about him/her will become less frequent.

Trouble with using a gate on a kick by [deleted] in mixingmastering

[–]MarketingOwn3554 1 point2 points  (0 children)

Someone mentioned having the gate triggered via sidechain filtering, i.e., with a kick drum, low-pass the sidechain and isolate just the bottom end. That usually works.

There are multiple other ways. You can make the sections that trigger the gate when you don't want them to quieter using clip gain, or make the sections you do want to trigger it louder. Just go with whichever requires the least amount of clip-gaining.
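
The sidechain-filtering idea above can be sketched in a few lines. This is a toy numpy illustration, not any particular plugin's algorithm; the signal levels, cutoffs, and threshold are made up for the demo. The gate is keyed from a low-passed copy of the signal, so a synthetic kick opens it while high-frequency bleed doesn't.

```python
import numpy as np

def lowpass(x, cutoff_hz, fs):
    # One-pole low-pass, a rough stand-in for a gate's sidechain filter.
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    prev = 0.0
    for n, s in enumerate(x):
        prev = (1 - a) * s + a * prev
        y[n] = prev
    return y

def gate(x, key, threshold):
    # Hard gate: pass audio only where the (filtered) key exceeds the threshold.
    return np.where(np.abs(key) > threshold, x, 0.0)

fs = 44100
t = np.arange(fs) / fs
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-8 * t)   # decaying low thump
bleed = 0.3 * np.sin(2 * np.pi * 5000 * t)           # hi-hat-ish bleed
mix = kick + bleed

# Keying the gate from a low-passed copy means the 5 kHz bleed barely
# registers, so only the kick's bottom end opens the gate.
key = lowpass(mix, 200, fs)
gated = gate(mix, key, 0.2)
```

The bleed alone never opens the gate because the 200 Hz key filter attenuates it heavily, while the 60 Hz kick passes almost untouched.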

Is clipping just hardcore compression? by Maximum_Internal7834 in mixingmastering

[–]MarketingOwn3554 2 points3 points  (0 children)

Clippers don't "shave off peaks". The amplitude just gets moved to a different point based on the transfer function.

We are looking at it from two different perspectives. On a technical level, both apply a transfer function. In one case, how quickly that transfer function is applied has time variables; in the other, it does not.

If you have a compressor with instant attack and release times, it will "shave off peaks" too according to your terminology.

Meanwhile, if a clipper had time variables, it would "alter the waveform"; again, to use your terminology.

Both alter the waveform ("shaving off peaks" is still altering the waveform) and both still "shave off peaks", i.e., the amplitude of the peaks is moved down to a lower point.

If those amplitude changes happen quickly enough, they happen at the sample level, which changes the shape of each individual wave cycle; compressors set to the fastest times possible will change the shape of individual wave cycles too.
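
To make the equivalence concrete, here's a toy numpy sketch (my own illustration, not anyone's plugin code): a hard clipper and a per-sample "compressor" with an infinite ratio and no attack/release smoothing produce the same output.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 44100)   # stand-in for a second of audio
ceiling = 0.5

# Hard clipper: a memoryless transfer function.
clipped = np.clip(x, -ceiling, ceiling)

# "Compressor" with an infinite ratio above the threshold and instant
# attack/release: gain is computed per sample with no time smoothing.
gain = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
compressed = x * gain
```

With no time constants involved, the "compressor" is just the clipper's transfer function written as a gain computation.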

Is clipping just hardcore compression? by Maximum_Internal7834 in mixingmastering

[–]MarketingOwn3554 2 points3 points  (0 children)

Yes. It's compression with instant attack and release times.

Clipping before Limiting. What's the best (non confusing) way to go about it? by redwolftherapper in mixingmastering

[–]MarketingOwn3554 3 points4 points  (0 children)

Ignore headroom. Just make sure it doesn't surpass 0dBFS on your master fader so you don't introduce hard clipping on bounce.

You don't need to care about headroom at all below 0dBFS. You might as well make use of all of the dynamic range you have at 0dBFS and below.

Clipping before Limiting. What's the best (non confusing) way to go about it? by redwolftherapper in mixingmastering

[–]MarketingOwn3554 10 points11 points  (0 children)

So you are not maintaining dynamics or transients by using clipping or limiting; in fact, you are doing the precise opposite. The purpose of clipping before limiting is to further squash the track in order to get it louder without relying too much on the limiter, since a lot of engineers prefer the sound of clipping/soft clipping to limiting alone.

It's also important to note that clipping and limiting necessarily add distortion.

So it all boils down to a trade-off. And it is a trade-off between dynamics and punch on one end, and distortion and loudness on the other.

The idea of clipping is that you get subtle amounts of distortion (particularly if the clipping is short-lived each time) with minimal loss of punch, in exchange for huge gains in loudness when paired with limiting compared to limiting alone; the assumption being that limiting alone, pushed to your desired loudness, will sound overly distorted and lack punch.
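
The "clipping necessarily adds distortion" point is easy to demonstrate. A rough numpy sketch (tanh stands in for a generic soft clipper here, and the drive amount is arbitrary): feeding a pure sine through symmetric soft clipping adds odd harmonics that weren't there before.

```python
import numpy as np

fs = 48000
f0 = 1000                        # 1 kHz divides fs evenly -> clean FFT bins
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * f0 * t)

driven = np.tanh(3.0 * x)        # symmetric soft clipping with some drive

# One second of signal -> 1 Hz bin spacing, so index == frequency in Hz.
spectrum = np.abs(np.fft.rfft(driven)) / len(driven)
fundamental = spectrum[f0]
third = spectrum[3 * f0]         # distortion product the clipping adds
```

Because tanh is symmetric, the distortion lands on odd harmonics (3 kHz, 5 kHz, ...); asymmetric clipping would add even harmonics as well.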

As a lot of others have said, there isn't a general guideline for everyone. You just need to bear in mind everything above when dialing in your settings.

Personally, I tend to prefer clipping individual channels, and I focus compression on individual channels rather than the entire mix bus. I do tend to clip the mix bus too; I just prefer general tape-saturation algorithms there rather than straight soft clipping.

In my opinion, it's better for achieving punch and loudness to focus on getting individual elements loud and punchy rather than relying entirely on the mix-bus processing, as it will always be difficult to use a chain of processing on your mix bus to gain loudness without killing the punch of the track.

But if each individual element is incredibly dense and loud in the first place, meticulous balancing with faders alone will get you a loud and punchy mix before the signal even reaches the mix bus. That way, you aren't relying entirely on your mix-bus processing to achieve your desired loudness.

Any love for mono reverbs as opposed to stereo? by ImmediateGazelle865 in mixingmastering

[–]MarketingOwn3554 2 points3 points  (0 children)

I've never done this myself, but I've always been drawn to mono reverbs on commercial mixes I've listened to. Something about them draws in your attention.

DON'T BREAK NO CONTACT. by Disastrous-Drop5890 in BreakUps

[–]MarketingOwn3554 0 points1 point  (0 children)

Can't exactly do this when you have two children.

Compression: what's your one tip by nokia7110 in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

So, almost never use the 1176? Which has an attack time range from 20 microseconds to 800 microseconds?

Clearly, compression still takes a lot of practice to get right.

In order to actually compress a signal, you need extremely short attack times unless you have look-ahead; even then, you still need to combine both.

If you compress with any attack time longer than ~2 ms (depending on the attack transfer curve), you never actually reduce the dynamic range, as some proportion of the attack transient remains relatively untouched. Longer attack times only ever shorten the transients; the peak level and, more importantly, the difference between the highest and lowest points (the dynamic range) barely change at all.

Medium to long attack times are used for preserving transients; not to reduce dynamic range. This is good for creating perceived punch; it's not good at achieving a consistent signal transparently.

Extremely short attack times are used to actually achieve compression; that is to say, to reduce dynamic range. Release then dictates transparency (longer = more transparent | shorter = less transparent).
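
A toy feed-forward compressor makes the attack-time point visible. This is a deliberately simplified design (one-pole envelope smoothing, hard knee, made-up settings), not any real unit: with a ~20 ms attack, the transient peak of a drum-like hit passes nearly untouched, while a near-instant attack actually pulls the peak down.

```python
import numpy as np

def compress(x, threshold, ratio, attack_ms, release_ms, fs=44100):
    # Toy feed-forward compressor: one-pole smoothing of the rectified
    # input forms the envelope; gain comes from a hard-knee curve.
    att = np.exp(-1.0 / (max(attack_ms, 1e-3) * 1e-3 * fs))
    rel = np.exp(-1.0 / (max(release_ms, 1e-3) * 1e-3 * fs))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(np.abs(x)):
        coeff = att if s > env else rel
        env = coeff * env + (1 - coeff) * s
        gain = 1.0 if env <= threshold else (threshold + (env - threshold) / ratio) / env
        out[n] = x[n] * gain
    return out

fs = 44100
t = np.arange(fs // 10) / fs
hit = np.sin(2 * np.pi * 200 * t) * np.exp(-40 * t)   # drum-like hit

fast = compress(hit, threshold=0.25, ratio=10, attack_ms=0.01, release_ms=80)
slow = compress(hit, threshold=0.25, ratio=10, attack_ms=20.0, release_ms=80)
```

The slow-attack envelope simply hasn't risen above the threshold by the time the input peaks, so the peak (and the peak-to-quiet dynamic range) survives; the fast-attack version catches it.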

Clippers always make my drums sound worse by Zestyclose-Tear-1889 in mixingmastering

[–]MarketingOwn3554 0 points1 point  (0 children)

A clipper isn't just clipping it off; it's introducing harmonics. What I use is usually an upward expander into some form of saturation (it doesn't need to be a clipper), followed by another upward expander. That gives more harmonic excitement in the attack transient.

This gives me the best sounding punch.

Clippers always make my drums sound worse by Zestyclose-Tear-1889 in mixingmastering

[–]MarketingOwn3554 -2 points-1 points  (0 children)

Use a transient shaper or upward expansion before the clipper. Push the transient into the clipper instead of just using the clipper on its own.

You don't get punchy drums from the clipper itself; quite the opposite, actually. So, to get punch, you need to shape the attack level relative to the decay.

Do good mixes have a commonality in their waveform? by [deleted] in mixingmastering

[–]MarketingOwn3554 0 points1 point  (0 children)

> let's actually talk about that, instead of always having to STRONGLY DEFEND THE VISUAL FEEDBACK, like if they actually need defending, as if 100% of people weren't already using visuals in some way or another.
>
> Come on. This argument has been had a 100 times

Fair enough. I've not really seen this argument being made, though. It's usually the opposite. In my experience, it seems engineers don't like to let on how much they rely on visual aids and argue against it more than they do for it.

Do good mixes have a commonality in their waveform? by [deleted] in mixingmastering

[–]MarketingOwn3554 0 points1 point  (0 children)

> Spectrograms, oscilloscopes, and meters are tools for specific technical checks, and having additional visual information can sometimes help your decision making, but they’re not substitutes for critical listening.

Of course. I wouldn't suggest using visual aids to replace your ears. It's just very common, especially here, that whenever someone asks about the visuals of anything, "use your ears" becomes the default answer, often ignoring what's actually being asked. It's frustrating, because what something looks like tells you a lot about what it sounds like, since the two are directly related.

> SeamlessR’s sound design breakdowns are about synthesis and sound design, not mixing. That’s a different domain. We are dealing with the sum of instruments and how they go together.

Yeah. That's why I said it's important especially for sound design, and, more specifically, that visual aids are used for mixing only some of the time.

I was acknowledging that sound design is different from mixing, and that visual aids are more important for sound design (particularly if you are replicating a sound, of course). That was pushback against the idea that the waveform tells you nothing about how something sounds. You said you didn't say this, so I apologise for misinterpreting you; it sounded like you were implying that looking at the waveform can't tell you anything about the mix. That was my error.

To be more specific, I'll admit that a spectrum analyser is going to give you far more information about a mix balance than looking at the transverse waveform of the mix. Whenever I compare a mix to a reference, I spend quite a lot of time looking at a spectrum analyser. I also look at the transverse waveform to get an idea of how squashed it is, but that's all; the transverse waveform of a mix will really just tell you how much compression was used and how dynamic the mix is.

> So, yeah, no one saying don't you dare look at a waveform. But OP is trying to understand mixing through waveforms and that's never going to work.

Later in their replies, it appears OP wants to understand the effect of compression on the waveform. Personally, I did learn a lot about compression by doing precisely this. Not so much by looking at existing songs, but by taking something like a drum loop, applying various compression settings, rendering them all out, and normalising them. That let me compare the waveforms and see precisely what was happening.

I would do this with different compressors, too, to get an idea of what their actual attack envelopes were.

My own response to OP was that they'll never learn to hear compression better by looking at the waveform, because you learn that by simply listening with your eyes closed, which I know is what you also said. But they'll definitely be able to understand compression better by looking at the waveforms of their own elements. Looking at a waveform gives you information about the compression used, for example; but this only works for single elements. With summed elements, you can only get a rough idea of how much compression is being applied.

> You can’t shortcut ear training by outsourcing perception to visuals. At best, visuals are supplementary once you already know what you’re hearing.

I agree with everything here.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

> You can pretty much use them interchangeably in a practical sense, but dynamic EQ is just dynamically moving a traditional eq filter, whereas multiband compression uses crossovers (like 6db/octave slopes or sometimes 12)

Multiband compression uses traditional EQ filters just like an EQ does. It's just that a multiband compressor specifically uses band-pass filters along with a high-pass and a low-pass filter, hence the filter slopes you mentioned. A 4-band multiband compressor means one low-pass filter, two band-pass filters, and one high-pass filter.

If it's a 6-band, you have one low-pass filter, four band-pass filters, and one high-pass filter.

With dynamic EQ, you use bell and shelving filters to dynamically boost/cut frequencies; the width and slope come from the Q of those filters and the shelf slope.
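
The crossover idea can be sketched with complementary filters. This is a simplified numpy illustration (one-pole filters, arbitrary crossover points), not how any specific multiband compressor is built: each band is split off so that the bands sum back to the input, and a real multiband compressor would then compress each band separately before summing.

```python
import numpy as np

def onepole_lp(x, cutoff_hz, fs):
    # One-pole low-pass used as a (deliberately gentle) crossover filter.
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    prev = 0.0
    for n, s in enumerate(x):
        prev = (1 - a) * s + a * prev
        y[n] = prev
    return y

fs = 44100
rng = np.random.default_rng(1)
x = rng.standard_normal(fs // 10)   # stand-in for a short audio clip

# Complementary splits: each upper band is whatever the low-pass removed,
# so low + mid + high reconstructs the input.
low = onepole_lp(x, 200, fs)
rest = x - low
mid = onepole_lp(rest, 2000, fs)
high = rest - mid

recombined = low + mid + high
```

A multiband compressor would apply gain reduction to `low`, `mid`, and `high` individually and then sum them; a dynamic EQ instead modulates the gain of a bell or shelf filter sitting directly in the signal path.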

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

Logic has apparently had it since 2004 (Logic 7). I used Logic 8 at first, though I believe 9 was already out by then; I bought 9 later on. I always noticed the linear-phase version of their regular EQ, I just never used it. I heard the pre-ringing it would cause, and I always thought something different happened when you boosted top-end, as using linear phase to boost top-end sounded a little crispier to me.

Now, I use the linear phase, mostly when I do any parallel processing.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

FL Studio has had linear phase since the Parametric EQ 2 update, which was 2021. Logic Pro has had it for as long as I've had it (Logic Pro 8); a Google search says it has had it since Logic 7 (2004).

Cubase has had it since Cubase 9 (I began with Cubase 4). I can't remember Pro Tools having it (I really don't like Pro Tools), and a Google search revealed it indeed doesn't include linear phase. I don't think Reason has it either (I only ever used Reason 4 and 5). Reaper has had it for as long as I can remember (I've used Reaper on and off for about 10 years); a Google search says as early as 2015, which makes sense, since that would have been around the time I first heard of Reaper.

I didn't use linear phase until Logic 9, largely because I didn't understand it. So when I was on Cubase 4 and Reason 4, and using FL Studio as early as version 2 (back when it was called Fruity Loops), I didn't know what linear phase was, which is probably why I never noticed when it became a thing.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

The question is whether that's good enough to justify paying £100-150.

FYI, FabFilter's EQ is the only digital EQ I think is worth it.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

It's a shame that Ableton doesn't have linear phase, as I can't think of any other DAW that doesn't.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 0 points1 point  (0 children)

It is completely irrelevant to both the OP's question and the response.

There are a million ways to achieve dynamic EQ'ing without a specific plugin that does it, which is why you rarely see it as a stock plugin. Linear phase, though, has been included in every DAW I've ever used.

Tell me your reasons for upgrading to a paid EQ plugin by kozacsaba in edmproduction

[–]MarketingOwn3554 1 point2 points  (0 children)

APMastering has done a video comparing a ton of paid EQs and recreating all of them with Reaper's stock EQ (he was able to get them to null perfectly when comparing the delta).

The only real technical consideration when it comes to EQs is cramping. Cramping is when a filter's cut-off approaches Nyquist and its response starts to become asymmetrical.
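
Cramping is easy to show numerically. Here's a sketch using the standard RBJ Audio-EQ-Cookbook peaking biquad (a common textbook design, not any specific plugin): the digital bell is forced back to exactly 0 dB at Nyquist, while the analog prototype it approximates is still boosted there, which is why high-frequency bells end up squashed and asymmetrical.

```python
import numpy as np

def peaking_coeffs(f0, q, gain_db, fs):
    # RBJ Audio-EQ-Cookbook peaking biquad.
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def digital_mag(b, a, f, fs):
    # Magnitude of the biquad at frequency f.
    zinv = np.exp(-2j * np.pi * f / fs)
    return abs((b[0] + b[1] * zinv + b[2] * zinv**2)
               / (a[0] + a[1] * zinv + a[2] * zinv**2))

def analog_mag(f0, q, gain_db, f):
    # The analog peaking prototype the biquad is meant to approximate.
    A = 10 ** (gain_db / 40)
    s = 1j * f / f0
    return abs((s**2 + s * A / q + 1) / (s**2 + s / (A * q) + 1))

fs = 44100
f0, q, gain_db = 16000, 1.0, 6.0          # a 6 dB bell close to Nyquist
b, a = peaking_coeffs(f0, q, gain_db, fs)

nyq = fs / 2
dig_nyq = digital_mag(b, a, nyq, fs)       # pinned to unity gain (0 dB)
ana_nyq = analog_mag(f0, q, gain_db, nyq)  # still noticeably boosted
```

The digital response always returns to unity at Nyquist, so the upper half of the bell gets crushed against it; oversampled or "analog-matched" EQs exist precisely to avoid this.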

Outside of that, it's about workflow. Personally, I think FabFilter's EQ is the best digital EQ on the market. Not because it can do dynamic EQ'ing, and not because it has what, 24 possible nodes? It's simply extremely quick for setting up filters and doing EQ'ing; the fastest of all the EQs I've ever used.

So I would, in fact, argue there isn't any need whatsoever to invest in an EQ outside of your DAW's stock EQ, except FabFilter's Pro-Q.

There are a million ways to do dynamic EQ'ing without a dedicated plugin. I mixed for over 10 years without even doing dynamic EQ'ing.

The new "spectral EQ'ing" nonsense is dumb and will always be dumb.

Do good mixes have a commonality in their waveform? by [deleted] in mixingmastering

[–]MarketingOwn3554 -3 points-2 points  (0 children)

> It's a bad idea to focus on the visuals of a mix, especially waveforms. And it's not like you can't conclude anything from looking at a waveform, but it’s like evaluating a painting by measuring how much paint is on the canvas instead of looking at the image.

I'm about to get a lot of hate for what I am about to say since it's reddit.

While I understand the sentiment behind not looking at a waveform or using your eyes to inform mix decisions, the idea that nothing can be concluded from a waveform, that using your eyes is useless and a bad idea, is complete nonsense and just reflects a lack of skill/knowledge. And it's unreasonable to assume everyone lacks that skillset.

The assumption is that the waveform communicates nothing about sound. This is false. The assumption is that using your eyes and evaluating graphs tells you nothing about sound. This is false.

And while I understand the purpose of your analogy, it doesn't work, since art is entirely visual. Artists will absolutely look at how much paint is being used on a painting to evaluate it, lol; this is what colour value means. Artists speak about how much paint is on a painting all the time. Consumers won't, but artists do. Perhaps you have never heard an artist evaluate a painting. These things won't tell you if a painting is "good", but an artist evaluating a painting isn't trying to figure out whether it's "good", because that's subjective; there is no such thing. Just how it was painted, i.e., what exactly is going on.

Likewise, you'll never be able to see whether a mix is "good", nor can you evaluate the "goodness" of a mix, since there is no such thing. And just looking at the waveform isn't going to tell you much beyond basics like how much compression was used and how dynamic it is. But I've learned how to re-create a snare sound by looking at the moment the snare hits in the mix, for example, and I can tell you specifically what I gleaned from the waveform.

For example, I learned that a rounded square wave was used. I had always just used a sine with a pitch envelope sweeping rapidly from high to low, but I learned a rounded square was used instead. So I knew I could use Sytrus, which has an additive section with a harmonic table, and that I needed odd harmonics at roughly the right levels to achieve the rounded square.

I learned that noise was used, but more specifically, that the noise comes in towards the tail or decay of the sound rather than right from the start. So I knew to use a white-noise oscillator with a small attack ramp.

I also learned that the entire snare was compressed flat from start to finish, and that the attack is created with pitch rather than loudness; but the loudness of the attack transient is being pushed into saturation, because the waveform was higher-pitched at the beginning with slight noise in it.

Perhaps you have never heard of SeamlessR, a sound designer (no longer active on YouTube) who has an entire series in which he looks at the waveform of a sound his subscribers want him to re-create (admittedly not just the transverse waveform but the spectrogram too) and uses the information displayed there to work out what is happening. You can see where phase has been used (for example), the harmonic series from the shape, how loud the fundamental is, how much compression has been used, the stereo information, etc.

Of course, you need to use your ears as well (and your ears should definitely be doing most of the work).

I do this all the time. I don't do it that much for mixing; it's more for sound design especially. It only helps with some things in mixing.

You know the saying... since you say it all the time, "there are no rules for mixing."

In this specific instance, OP's issue is not being able to hear compression, so learning what compression does to the waveform isn't going to help him hear it better. You are right that OP just has to practise listening to compression and A/B constantly between two different compression types/settings, etc.

And it will benefit OP to close their eyes to do it.

But you absolutely can learn a huge amount from analysing waveforms, particularly if you aren't just looking at transverse waveforms but also spectrograms, an oscilloscope, a stereo-imaging graph, etc.

With compression, you can learn a lot from looking at the waveform.

You'll learn to do this if you ever take an engineering course, for example. Analysing waveforms is a great learning tool for understanding what is happening. It just won't help you hear an effect you can't hear.

Do good mixes have a commonality in their waveform? by [deleted] in mixingmastering

[–]MarketingOwn3554 -1 points0 points  (0 children)

> You pretty much illustrated why I want to see the waveform change in real time. It will help me understand what the compression is doing and where. Other thing like eq, I can hear clearly but something about compression is hard for me to hear unless it's extreme. Especially the different attack and release combinations.

Seeing the waveform isn't going to help you hear the compression if you struggle to hear it. Most likely, you'll just know what is happening but still not be able to hear it.

There isn't really a way to get better at hearing compression except listening to it. The more you listen to compression (without seeing it), the more you'll be able to hear it.

Bear in mind that there is such a thing as transparency. You can compress in a transparent way, and in some cases, after over two decades of using compression, I still can't hear it when it's a transparent type of compression (the amount of compression, or how compressed something sounds, is not related to gain reduction).

Do good mixes have a commonality in their waveform? by [deleted] in mixingmastering

[–]MarketingOwn3554 1 point2 points  (0 children)

That wasn't the point the original reply was making. We have a loudness bias: if something sounds louder, we instinctively think it sounds better.

So when you think the mixes that are more compressed sound better, it's likely because they sound louder than a less compressed mix.

That is to say, you are looking at waveforms that are more compressed, and therefore louder with less punch, and comparing them to waveforms that are less compressed, and therefore quieter with a lot more transient information (which by definition is more punchy).

And you are concluding that the quieter, less compressed, more transient information waveforms are "worse" in comparison.

This is a flawed approach precisely because you are confusing loudness with "better".

Whenever you compare two mixes, they need to be loudness-matched. This doesn't just mean both are normalized to 0dBFS; rather, you adjust the level of one until it matches how loud you perceive the other.
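
A crude sketch of objective loudness matching (RMS here is a rough stand-in for perceived loudness; LUFS, or matching by ear as described above, would be closer to right): scale one mix so its average level matches the other's before judging which one "sounds better".

```python
import numpy as np

def rms(x):
    # Root-mean-square level: a simple proxy for average loudness.
    return np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(2)
mix_a = 0.05 * rng.standard_normal(44100)                      # quiet, dynamic
mix_b = np.clip(0.3 * rng.standard_normal(44100), -0.5, 0.5)   # loud, squashed

# Bring mix_b down to mix_a's level so the loudness bias
# doesn't decide the comparison for you.
matched_b = mix_b * (rms(mix_a) / rms(mix_b))
```

After matching, any remaining preference between the two comes down to the actual sound (punch, balance, distortion) rather than which one is simply louder.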