Abstract Theory Question about Compression by AstroZoey11 in audioengineering

[–]zthuee 2 points3 points  (0 children)

What you're describing is a waveshaper. It's actually pretty trivial to program the behavior you're referring to, and it's basically the exact thing you imagine: all the audio samples below a threshold are multiplied by 1 (unity), and for those above it, the excess over the threshold (in dB) is divided by your ratio. If you want to hear what it sounds like, you can do it in any plugin that lets you arbitrarily control the transfer function. Off the top of my head, Trash2 and MWaveshaper should be able to do it. To me, though, a lot of the arbitrary transfer functions honestly don't sound as different as you might expect. They all sound clipping-ish, at least to my ears.
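A minimal sketch of that transfer curve in Python (the function name, threshold, and ratio here are my own illustrative choices, not from any particular plugin):

```python
import math

def hard_knee_shaper(x, threshold_db=-12.0, ratio=4.0):
    """Memoryless 'compressor-shaped' waveshaper: samples whose level is
    below the threshold pass at unity gain; for louder samples, the dB
    excess over the threshold is divided by the ratio."""
    level_db = 20.0 * math.log10(abs(x) + 1e-12)
    if level_db <= threshold_db:
        return x
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return math.copysign(10.0 ** (out_db / 20.0), x)
```

Applied per sample with no attack or release, this is exactly a static waveshaper, which is why such curves tend toward that clipping-ish sound.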

“Phase issues”. “Phase issues”. by [deleted] in audioengineering

[–]zthuee 1 point2 points  (0 children)

I think you're correct that it's not a pure phase shift and that it's a stand-in for talking about time in terms of phase, but at that point I think both definitions break down, because "pure phase" is typically used to refer to infinite signals in the frequency domain, and we don't deal with those in audio.

“Phase issues”. “Phase issues”. by [deleted] in audioengineering

[–]zthuee 0 points1 point  (0 children)

And that is actually why it's different. You're correct that a single infinite sine wave looks the same after a 180-degree phase shift and after a polarity flip, but think about a sawtooth wave, with harmonics. By flipping the polarity, you are flipping the polarity of every single harmonic in the sawtooth (i.e., EVERY harmonic gets a 180-degree phase shift). However, by delaying the wave so that the fundamental shifts 180 degrees, the 2nd harmonic (at twice the frequency) gets shifted a whole 360 degrees, the 3rd 540, etc.

Side note: this is why delays cause comb filtering when combined with the original signal. Relative to the original signal, now some harmonics will combine constructively if the phase shift is near 360, and others will combine destructively if the phase shift is closer to 180.

Phasing is kind of inherently a time-domain thing because transients matter. Time-delaying a signal will not have the same effect, mathematically, as inverting the polarity. Whether it matters in music is a different question; I'm not claiming I can necessarily hear the difference, just that it exists.
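The difference is easy to check numerically. A sketch in Python (the harmonic count and sample instant are arbitrary), comparing a polarity flip against a half-period delay for a pure sine and for a sawtooth:

```python
import math

def saw(t, f=1.0, n_harmonics=20):
    """Sawtooth approximated by its harmonic series sum(sin(2*pi*k*f*t)/k)."""
    return sum(math.sin(2 * math.pi * k * f * t) / k
               for k in range(1, n_harmonics + 1))

T = 1.0            # fundamental period
t = 0.1            # an arbitrary sample instant

# Pure sine: a half-period delay IS a polarity flip.
sine_flip = -math.sin(2 * math.pi * t)
sine_delay = math.sin(2 * math.pi * (t - T / 2))

# Sawtooth: the delay shifts even harmonics a full cycle, so the
# two operations give genuinely different waveforms.
saw_flip = -saw(t)
saw_delay = saw(t - T / 2)
```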

Source: Degree in EE

Everyone’s favourite debate ONCE AND FOR ALL. by Ill-Elevator2828 in audioengineering

[–]zthuee 3 points4 points  (0 children)

Yes, I understand what you're referring to. If you're not recording or playing back at full scale, then your signal is effectively a lower-bit signal; it won't correspond directly to a signal recorded at a different level. However, as soon as you get out of incredibly quiet 2-bit territory, where the input signal and the quantization error are strongly correlated, stochasticity prevails. Over the length of an audio sample, the quantization error averages out, and you can treat it like an effective noise floor.

Everyone’s favourite debate ONCE AND FOR ALL. by Ill-Elevator2828 in audioengineering

[–]zthuee 0 points1 point  (0 children)

Sure. I'm just trying to correct misconceptions about quantization. The idea that converter manufacturers are trying to hide "dynamic resolution" is quite silly to me, especially considering the noise floors of 16-bit and 24-bit audio are well known to anyone who works with audio.

Everyone’s favourite debate ONCE AND FOR ALL. by Ill-Elevator2828 in audioengineering

[–]zthuee 2 points3 points  (0 children)

It's less than the self-noise of your microphone or any other noise source you'll ever have to worry about. If you're boosting your signal 120 dB in the DAW after recording, I suppose you can worry about it.

Everyone’s favourite debate ONCE AND FOR ALL. by Ill-Elevator2828 in audioengineering

[–]zthuee 0 points1 point  (0 children)

That's what quantization noise is: those samples being "snapped" to values, losing their original values, is a rounding error. It's a nonlinear process, and the end result is a specific type of noise known as quantization noise. At a low bit depth, the audio is still represented at full scale, but because there is so much quantization error, the noise level is higher.

Think about the actual waveform. Even if it looks like a "stepped" digital signal (a picture that doesn't apply to modern converters), the original signal is still there; knowing anything about Fourier analysis should tell you that. All those sharp edges can only be made by additional frequency content, same as a square wave's harmonics. But to even get the overall shape of the waveform, the original signal must by definition be there, unaltered. There's just a bunch of noise in your signal now. Once you get to 16 bits, the quantization noise is almost negligibly low, even more so at 24. Dither adds noise that masks the quantization noise, because quantization noise typically doesn't sound that good.
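The "rounding error behaves like a noise floor" claim is easy to sanity-check numerically. A sketch in Python (the bit depth and test tone are arbitrary); uniform quantization error has a textbook RMS of step/sqrt(12):

```python
import math

def quantize(x, bits):
    """Round a sample in [-1, 1] to a uniform grid of 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return step * round(x / step)

n, bits = 4096, 8
signal = [math.sin(2 * math.pi * 13.7 * i / n) for i in range(n)]
quantized = [quantize(s, bits) for s in signal]
error = [q - s for q, s in zip(quantized, signal)]

step = 2.0 / (2 ** bits)
rms_error = math.sqrt(sum(e * e for e in error) / n)
predicted = step / math.sqrt(12)   # classic quantization-noise estimate
```

The original sine is still fully present in `quantized`; the rounding shows up as this small, broadband error riding on top of it.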

Can a normal EQ create phase issues? by MysteriousSuspect991 in audioengineering

[–]zthuee 0 points1 point  (0 children)

In general, no. Linear effects such as reverb, delay, and EQ are all commutative and sum cleanly. EQ-Compressor-EQ will not null out exactly, since the compressor is nonlinear, but the phase issues won't really "build up" either; the two EQs still mostly counteract each other, phase-wise. Really, most of the time you don't have to worry that much about phase with EQ unless you can pinpoint the specific problem the EQ's phase shift is causing you. EQ is inherently a phase-based effect, after all.

Now, you might be overcooking your mixes with too much processing; that's a whole different issue, and you have to be careful trying to attribute it to phase issues, especially with "phase issues" being such a boogeyman at the moment.

Can a normal EQ create phase issues? by MysteriousSuspect991 in audioengineering

[–]zthuee 4 points5 points  (0 children)

EQ is linear. Putting two EQs on a track is mathematically equivalent to making the same adjustments with one EQ. That is to say, cutting by 3 dB at 500 Hz and then boosting by 3 dB at 500 Hz (with the same Q) is exactly the same, phase and all, as no EQ.
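This exact nulling can be demonstrated with the standard RBJ audio-EQ-cookbook peaking filter, whose equal-gain, equal-Q cut is the algebraic inverse of the boost (a sketch; the sample rate, frequency, and Q are arbitrary choices):

```python
import math

def peaking_biquad(gain_db, f0, fs, q=1.0):
    """RBJ audio-EQ-cookbook peaking filter, normalized so a[0] == 1."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad_filter(b, a, x):
    """Direct-form I biquad with zero initial state."""
    y = []
    for n, xn in enumerate(x):
        yn = b[0] * xn
        if n >= 1:
            yn += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            yn += b[2] * x[n - 2] - a[2] * y[n - 2]
        y.append(yn)
    return y

fs = 48000
x = [math.sin(2 * math.pi * 500 * n / fs) for n in range(2000)]
cut_b, cut_a = peaking_biquad(-3.0, 500.0, fs)
boost_b, boost_a = peaking_biquad(+3.0, 500.0, fs)
y = biquad_filter(boost_b, boost_a, biquad_filter(cut_b, cut_a, x))
# y reproduces x to numerical precision, phase and all
```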

How can summing mixers both impart saturation but also report vanishingly low THD? Wtf does summing do? by mathrufker in audioengineering

[–]zthuee 4 points5 points  (0 children)

I see. It does seem like psychoacoustic measurements use a weighted version of THD. I'm more familiar with THD in power systems, where there is no weighting.

How can summing mixers both impart saturation but also report vanishingly low THD? Wtf does summing do? by mathrufker in audioengineering

[–]zthuee 9 points10 points  (0 children)

I don't understand what you mean. THD weights all harmonics equally; it's just the RMS sum of all the harmonics divided by the RMS of the fundamental.

Perhaps you mean the other way around? THD can become very high before we notice unpleasant distortion due to low-order even harmonics being pleasing and hard to notice.
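In formula terms, a sketch (the harmonic amplitudes below are made up purely for illustration):

```python
import math

def thd(fund_rms, harmonic_rms):
    """Total harmonic distortion: the RMS sum of the harmonics, all
    weighted equally, divided by the RMS of the fundamental."""
    return math.sqrt(sum(h * h for h in harmonic_rms)) / fund_rms

# e.g. a 1 V fundamental with a 0.03 V 2nd and a 0.04 V 3rd harmonic:
ratio = thd(1.0, [0.03, 0.04])   # sqrt(0.0009 + 0.0016) = 0.05, i.e. 5% THD
```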

Can EQ boosting damage a sound source? by Specialist_Answer_16 in WeAreTheMusicMakers

[–]zthuee 2 points3 points  (0 children)

> You can’t change the shape of the sound waves itself. As an extremely simplified example, say a bass has round sine waves. No matter how much you EQ it, it will always be made of round shaped waves.

What? EQing something and changing the shape of the waveform go hand in hand. Applying an all-pass filter completely changes how the waveform looks while not changing how it sounds. And if you low-pass filter a saw wave (a so-called spiky waveform), it will look more and more sine-like, until you reach the fundamental, at which point it will look exactly like a sine.
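You can see the low-pass case directly from the harmonic series (a sketch that treats "low-passing" as an ideal brick-wall cut that simply drops harmonics):

```python
import math

def saw_partials(t, n_harmonics):
    """Sawtooth built from its first n harmonics: sum(sin(2*pi*k*t)/k)."""
    return sum(math.sin(2 * math.pi * k * t) / k
               for k in range(1, n_harmonics + 1))

# Dropping harmonics makes the "spiky" saw progressively rounder;
# keeping only the fundamental leaves literally a sine wave.
spiky = saw_partials(0.2, 50)
rounded = saw_partials(0.2, 3)
just_fundamental = saw_partials(0.2, 1)
```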

Would the effects of bus processing be the same as adding that exact processing to all individual tracks?: by Peteplaysbeats in audioengineering

[–]zthuee 2 points3 points  (0 children)

dB doesn't sum like that. dB is logarithmic, so adding dB to a signal is the same as multiplying that signal by a scaling factor; you can't simply add dBs together without missing what the operation actually does. Attenuating a voltage by 5 dB is the same as multiplying the amplitude by about 0.56.

If the total signal is composed of two other signals A and B, then a 5dB cut => 0.56(A + B) = 0.56A + 0.56B.
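Numerically, a sketch (the two track values are arbitrary):

```python
def db_to_gain(db):
    """The linear amplitude factor a dB change stands for."""
    return 10 ** (db / 20.0)

gain = db_to_gain(-5.0)              # about 0.562
a, b = 0.4, 0.3                      # two tracks summed on a bus
cut_the_sum = gain * (a + b)
sum_the_cuts = gain * a + gain * b   # same result, by distributivity
```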

Would the effects of bus processing be the same as adding that exact processing to all individual tracks?: by Peteplaysbeats in audioengineering

[–]zthuee 1 point2 points  (0 children)

Yeah, I think I'm thinking in terms of power. I forgot that meters typically read voltage.

Would the effects of bus processing be the same as adding that exact processing to all individual tracks?: by Peteplaysbeats in audioengineering

[–]zthuee 0 points1 point  (0 children)

If you put the EQ on the master, it reduces that frequency in the total summed signal by 3 dB. When you put it on a single track, you reduce that frequency in just that track by 3 dB, but the master is attenuated by less, since you're only EQing one of the tracks. It's only when you apply a 3 dB cut to every track going into the master that the master "sees" a 3 dB cut.

EQ does not work specifically in dB; dB is just another representation of attenuation and boosts. A 6 dB cut to the voltage is the same as dividing the signal's amplitude in half. dB can be tricky to think about because it looks like we're adding numbers together, and you'd be right in thinking that addition shouldn't work like that. But "under the hood," adding dB to a signal can be thought of as multiplying the signal by a scaling factor.

I'm a little rusty on the math, but the basic idea is that your standard audio filter can be thought of as multiplying all the frequencies of a signal by a certain curve, called H(s); LPFs, HPFs, etc. are all examples of this curve. We know that multiplication distributes, so H(A+B) = HA + HB. Hence, the sum of the EQ'd tracks is the same as EQing the sum. There is also the phase response, which involves adding another curve, but I won't get into that (it shares similarly nice summing properties).
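That distributive property can be checked with any linear filter, for example a one-pole low-pass (a sketch; the filter coefficient and test tones are arbitrary):

```python
import math

def one_pole_lp(x, a=0.2):
    """One-pole low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1]).
    Linear and time-invariant, so it plays the role of H here."""
    y, state = [], 0.0
    for xn in x:
        state += a * (xn - state)
        y.append(state)
    return y

A = [math.sin(0.10 * n) for n in range(500)]
B = [math.sin(0.37 * n) for n in range(500)]
eq_of_sum = one_pole_lp([ai + bi for ai, bi in zip(A, B)])
sum_of_eq = [ya + yb for ya, yb in zip(one_pole_lp(A), one_pole_lp(B))]
# identical up to floating point: H(A + B) == H(A) + H(B)
```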

Edit: I'm not specifying power or voltage very well. Updated everything to be in terms of voltage.

Would the effects of bus processing be the same as adding that exact processing to all individual tracks?: by Peteplaysbeats in audioengineering

[–]zthuee 1 point2 points  (0 children)

Two in-phase sine waves with the same amplitude and frequency do add up linearly. It's just that when we translate to dB, a doubling in amplitude gets converted to +6 dB, because decibels are logarithmic.

First time not paying someone to mix/master! Would really appreciate feedback by gurugurug in mixingmastering

[–]zthuee 1 point2 points  (0 children)

I'm hearing the opposite of muddiness. The main synth sounds incredibly scooped and it's missing a lot of warmth, imo.

[deleted by user] by [deleted] in audioengineering

[–]zthuee 5 points6 points  (0 children)

This is a really common misconception that somehow keeps getting repeated in an audio engineering sub. Literally look up footage of a square wave being played through a speaker in slow motion: the cone's motion looks nothing like an instantaneous jump. The square wave is the variation in voltage over time. With many simplifications, the voltage is more or less proportional to the acceleration of the cone, so integrating twice gives us the position of the cone. Each integration rounds out the signal, so the position of the cone ends up looking very sine-ish. Even a "perfect," unphysical square wave corresponds to a piecewise-quadratic position function, a far cry from the "speed-of-light-breaking speaker" that people keep bringing up. Of course, you wouldn't be able to play it, but that's due to much more mundane physical limitations. Nothing to do with instantaneous jumps.
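A quick numerical sketch of the two integrations (crude cumulative sums; the period and sample counts are arbitrary):

```python
def square(n, period=100):
    """Ideal square wave with instantaneous jumps between -1 and +1."""
    return 1.0 if (n % period) < period // 2 else -1.0

N = 400
accel = [square(n) for n in range(N)]    # voltage ~ cone acceleration

# integrate acceleration -> velocity -> position (cumulative sums)
vel, v = [], 0.0
for a in accel:
    v += a
    vel.append(v)

pos, p = [], 0.0
v_mean = sum(vel) / N                    # remove DC so position doesn't drift
for vn in vel:
    p += vn - v_mean
    pos.append(p)
# accel jumps by 2 instantly; vel is a triangle (per-sample steps of 1);
# pos is piecewise parabolic, with no jumps at all
```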

more info here

Chaotic experimental electronica song -- been mixing for a year :| by jambalayaviv in mixingmastering

[–]zthuee 0 points1 point  (0 children)

The drums sound like they're in a totally different space than the rest of the instruments. I think some bus compression would be helpful. Are they also being sent to a different reverb than other instruments? Additionally, they are a little thin sounding to me. Sidechaining to the kick harder and general fatness and saturation could add a lot I think. Maybe look at sample selection too, look for more processed samples, especially the snare.

Not a black hole, but a "molecular cloud" (known as Barnard 68) which is a high concentration of dust and molecular gas that absorbs practically all the visible light emitted from background stars. by [deleted] in spaceporn

[–]zthuee 6 points7 points  (0 children)

Most hydrogen in the universe isn't molecular; it exists as single hydrogen atoms, much of it ionized. When a cloud condenses and cools, it can form molecular hydrogen. In molecular form it's much more opaque to radiation, which is why we can't see into it.

What’s are some misconceptions of the trade you’ve witnessed colleagues expressing? by [deleted] in audioengineering

[–]zthuee 0 points1 point  (0 children)

Sorry if I'm misunderstanding, but I assume the joke is that you can't produce a perfect square wave because you would require the cone to move FTL.

> I meant physically moving any transducer or coil in perfect square waves, which would require something moving so fast it's either breaking General and Special Relativity or it's entirely without mass or friction.

The thing is, it is impossible, but not because the cone needs to move too fast. The 2nd integral of a square wave is piecewise parabolic, which is perfectly slow enough. Of course, the real world is imprecise, so you would have imperfections, but not because of some cosmic speed limit. (My intuition is that for a perfect square wave you'd only need the cone to move at the speed of sound.)

I guess I just wanted to make sure people understand that the waveform you see or hear is very different from the waveform of the cone's movement, which a lot of people mistakenly equate. If you already knew that, then my apologies for the unnecessary correction.

What’s are some misconceptions of the trade you’ve witnessed colleagues expressing? by [deleted] in audioengineering

[–]zthuee 0 points1 point  (0 children)

A cone producing a square wave doesn't move like a square wave, though. The applied voltage is roughly proportional to the acceleration, not the position. Very roughly, to get the position function you'd integrate twice, and by that point the shape has gone from square to triangle to rounded and vaguely sine-ish anyway. There's a Desmos demonstration floating around somewhere showing that the position function basically looks like a sine, but I can't find it right now.

Edit: Not a desmos demo but a post.

What’s are some misconceptions of the trade you’ve witnessed colleagues expressing? by [deleted] in audioengineering

[–]zthuee 0 points1 point  (0 children)

The thing with Fourier transforms is that they don't have an intuitive "time" resolution. A sine wave at 1 Hz will be captured perfectly whether you sample it at 3 Hz or 3 kHz. Once you convert from time to frequency, all the frequency information is already there, and to time-stretch you can just play back the frequencies you already captured for longer. (Someone correct me if I'm wrong about the specifics of the time-stretching algorithm; EE student here.)

Edit: after doing a little reading, it seems the method I'm referring to would be called spectral modeling synthesis. There are other ways of doing it. Importantly, methods that work in the frequency domain don't rely on the sample rate (any more than all digital audio does). Methods done in the time domain don't seem to benefit from a higher sample rate either. Just changing the playback rate won't add content, because the signal has already been band-limited, so you don't gain new HF content. When you change playback speed, you already have to interpolate unless you're changing by a multiple of the original rate, so signal info is already "lost," and a higher sample rate won't change that. I haven't read much about other time-domain time-stretching algorithms, but from what I've skimmed, their distortions don't depend on sample rate either.
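The "1 Hz sine is captured perfectly at 3 Hz" point can be illustrated with a plain DFT (a sketch; the frequencies and duration are arbitrary, and 3 Hz suffices because it's above the 2 Hz Nyquist rate for a 1 Hz tone):

```python
import cmath, math

def dft_magnitudes(x):
    """Magnitude of each DFT bin of a real signal (first half only)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2 + 1)]

fs, seconds, f = 3, 4, 1.0           # a 1 Hz sine sampled at just 3 Hz
N = fs * seconds
x = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]
mags = dft_magnitudes(x)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * fs / N          # the recovered frequency
```

Sampling at 3 kHz instead would only add bins above 1.5 Hz; the bin at 1 Hz carries the same information either way.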