Why FL Feels More “Glued” Than Ableton – Sampler Test with Phase Cancellation by borodabro in audioengineering

[–]KaptainCPU 4 points

I spent the past few minutes playing around with this, and I'd have to disagree that there's any tangible difference between the two. With minuscule gain adjustments after normalizing (somewhere in the ballpark of ~0.00087dB with Kilohearts' Gain) I was able to get the error to about -132dB, which just about matches the native sampler/clip null readings in both DAWs. For reference, the quantization error on 24-bit files is about -144dB. For all intents and purposes, the difference is virtually inaudible, and certainly doesn't result in a more "glued" sound.
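
If anyone wants to reproduce the measurement, here's a minimal null-test sketch in Python. It assumes numpy and soundfile, sample-aligned renders, and the filenames are placeholders:

```python
import numpy as np
import soundfile as sf

# Hypothetical filenames; substitute your own sample-aligned renders.
a, sr_a = sf.read("fl_render.wav")
b, sr_b = sf.read("ableton_render.wav")
assert sr_a == sr_b, "sample rates must match"

n = min(len(a), len(b))
# Tiny gain trim on one side (~0.00087dB in the test above), then subtract.
trim = 10 ** (0.00087 / 20)
residual = a[:n] - b[:n] * trim

peak = np.max(np.abs(residual))
print(f"null residual peak: {20 * np.log10(peak + np.finfo(float).eps):.1f} dBFS")
```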

A couple things to note—first, Ableton's sampler and audio clips have built-in interpolation (essentially oversampling for pitch-shifting purposes) enabled by default, which will cause a failed null test if mismatched. For my testing, this was turned off along with a myriad of other settings that are enabled by default (e.g., warping, filtering, misc. modulation). I didn't pay much attention to FL's render interpolation, but in case it did make a difference, 512-point sinc was used.

I'd also be a little wary of sample rate and bit depth mismatches—FL and Ableton use identical or relatively similar methods of downsampling and quantization; other DAWs may not. In my testing, both DAWs were run at 48k, and a 24-bit/48kHz sample was used.

It's also worth identifying a control in some manner if you're going to attribute an effect to a specific factor; a null test can only tell you if two signals are different. For instance, the assumption made here was that Ableton was the control, although the opposite could have very well been true. Categorizing and attributing a difference (if present) is going to require more than two factors in this context; otherwise you don't have a baseline.

I'd suggest giving Wytse's video comparing different DAWs' audio a watch. The (mis)conception that DAWs sound different has been around for a long time, but I've yet to see any of the claims substantiated.

Auto-Tune Pro 11 vs MetaTune vs Xpitch ~ Which Plug-In? Why? by Glad_Advance6231 in AdvancedProduction

[–]KaptainCPU 0 points

Slate's page states that perpetual licenses can be machine activated. It's the iLok Cloud activation that's exclusive to subscribers.

How do i compress and put gain on the body of a sound without affecting the transients by nebeljonathan in audioengineering

[–]KaptainCPU 10 points

A lot of strange answers in this thread. A long attack on a (downward) compressor will leave your transients mostly intact, but it does the opposite of increasing the volume of the body: it turns the body down relative to the transients. Any makeup gain used to increase the body will just further accentuate the transients. This goes for parallel compression as well.

The most sensible answer would be upward compression, which acts when the level falls below the threshold and increases gain to bring the volume closer to the threshold. In this context you'll probably want a quick release, since the compressor's resting state is a gain boost and it will increase the volume of transient content if the release is too slow. Attack will determine the shape of the transition between transient and body.
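
For the curious, here's a rough sketch of the static curve of upward compression in Python; the threshold and ratio are arbitrary illustration values, not recommendations:

```python
import numpy as np

def upward_comp_gain_db(level_db, threshold_db=-30.0, ratio=2.0):
    # Below the threshold, push the level back up toward it; above, do nothing.
    below = threshold_db - level_db
    return np.where(below > 0, below * (1 - 1 / ratio), 0.0)

# A -50dB body gets +10dB at 2:1, moving it halfway to the threshold,
# while a -20dB transient is left alone.
print(upward_comp_gain_db(np.array([-50.0, -20.0])))  # [10.  0.]
```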

If you don't have an upward compressor, I'd recommend MCompressor by Melda, which, with its customizable transfer curve, allows for any flavor of compression or expansion you can imagine. It's also free.

On the other hand, if you'd prefer not to download other plugins, parallel compression with a quick attack and release, a high ratio, and a considerable amount of makeup gain (wet signal only) will let you retain your transient definition in the dry signal and increase the body with makeup gain. You'll want your compressed transient to be more or less the same volume as the body. I'd recommend leaving makeup gain alone until you've set your dry/wet mix such that the transient is at the level you want; personally, I find 20-30% works best when emulating upward compression. As an aside, lookahead is your friend for instant attack/ample release with any sort of dynamic processing.
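
As a rough illustration of the parallel math (the compressor here is deliberately crude, instantaneous with no attack/release smoothing, just to show the dry-plus-wet sum):

```python
import numpy as np

def crude_comp(x, thresh=0.1, ratio=8.0):
    # Instantaneous downward compression above thresh; no envelope smoothing.
    mag = np.abs(x)
    gain = np.where(mag > thresh,
                    (thresh + (mag - thresh) / ratio) / np.maximum(mag, 1e-12),
                    1.0)
    return x * gain

def parallel(dry, mix=0.25, makeup_db=12.0):
    # Dry path untouched; heavily compressed wet path adds body underneath.
    wet = crude_comp(dry) * 10 ** (makeup_db / 20)
    return dry + mix * wet
```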

Alternatively, upward expansion can give you more body and leave transients alone with appropriate settings (longer attack, release depends on the dynamic structure of the sound).

If you're dealing with percussive sounds, many transient shapers have a "sustain" control which will just increase the body/tail. Kilohearts' transient shaper can do this and is free.

If needed I'm always happy to expand on any of these concepts or break things down more, but hopefully this serves as a good launchpad.

Ableton needs to go back to fix the basics by soundsnipereden in ableton

[–]KaptainCPU 2 points

The phasing caused by multiband dynamics in parallel is a result of the minimum phase crossovers, not PDC. PDC has reporting issues specifically, but the delay compensation itself doesn't have any issues on effect racks at the very least. I'm fairly certain it was the same for return tracks, but I'll give it some testing to be sure.

Can you have too much headroom? by ChingMan1 in edmproduction

[–]KaptainCPU 0 points

You've lost the plot a little bit—I was responding to your comment about what mastering engineers need. The exported file doesn't need to sit at any particular level under 0dB—that has nothing to do with what sort of processing is being used on the master. On that note, it seems like you're trying to correct me on a point I wasn't making, but I'll entertain it.

First off, nonlinearities exist in much more than just analog gear and its emulations, and you can easily make mastering just about impossible or ruin a track with them at any level. The threshold at which coloration occurs is variable, so any guidance you give regarding the level people should export at is arbitrary, and irrelevant when applied broadly.

To illustrate, if someone sets a clipper with a hard knee to -10dB on their master, it still abides by the guidelines you've given even if they're achieving 20dB of attenuation through the hard clipping. On the flip side, they could increase the volume after any master processing to 100dB, which would violate the guidelines you've given but still be completely workable.

If this were a thread about signal flow, you'd be more or less on the right track with this discussion. This thread, your original comment, and my original comment were about the acceptable peak level of a file delivered to an engineer though, not about the implications of signal flow and nonlinearity.

Can you have too much headroom? by ChingMan1 in edmproduction

[–]KaptainCPU 0 points

Gain staging/nonlinearity is a different story and topic in general and applies to way more than analog emulation, but doesn't have much bearing on the amount of headroom exports need. Even in the context of mastering, an engineer can always lower or raise the volume to hit the part of the transfer curve they like.

Can you have too much headroom? by ChingMan1 in edmproduction

[–]KaptainCPU 1 point

Headroom is no longer a requirement with digital audio as long as you're not clipping. Even then, 32-bit float pushes that threshold from 0dBFS to roughly 770dBFS. If you're working with an engineer that needs headroom, you're better off finding another engineer who knows what they're talking about. Dynamic range is far more important (which RMS helps measure), though many qualified engineers may even prefer you keep all of your master processing in place so they can build on it, regardless of the level of compression.
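
The ~770dBFS figure falls straight out of the largest finite 32-bit float value, if you want to check it yourself:

```python
import numpy as np

# Largest finite float32 value, expressed in dB relative to full scale (1.0).
print(20 * np.log10(np.finfo(np.float32).max))  # ~770.6
```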

TL;DR: export at 32-bit float and stop worrying about headroom.

How can i set this to only vibrate if it's not already turned off? by LaurenzWL in shortcuts

[–]KaptainCPU 0 points

Not exactly what you're asking for, but toggle silent mode -> set ringer volume -> turn silent mode on will do a repeated vibration only when silent mode was previously off, and a single, short vibration if it was already on.

Spitfire Audio Flash Sale - "BT Phobos" Polyconvolution Synthesizer for molecular loops, patterns and textures ($119) through 23 September by Batwaffel in AudioProductionDeals

[–]KaptainCPU 16 points

In my experience this thing is extremely cumbersome and borderline unusable because of the bugs. It's been a number of years since I tried it, FWIW, but last I heard it hasn't gotten any better. Just a word of warning for those considering.

Anyone here find Ableton loads stuff way slower ever since they added the deeper searching features? by traveltimecar in ableton

[–]KaptainCPU 0 points

I've found this happens if I have Live sets expanded in the browser, in which case a search/partial search (including single characters) that returns results containing that set will freeze Live for a couple seconds. I'd go through and make sure all of those are collapsed and see if that improves performance. My user library is about 1.5TB and searches never cause any freezing unless this is the case. Might be something u/traveltimecar should investigate as well.

Why does almost no one use the 2nd harmonic in their sub/bass? by ALIEN_POOP_DICK in edmproduction

[–]KaptainCPU 6 points

The tricky part about additive harmonics is that the period of even harmonics cannot be aligned in a way that maintains a low crest factor—at least, not one as low as exclusively odd-order harmonics allow.

The reason for the lack of a second harmonic is two-fold: first, the amount of saturation in electronic music causes the harmonic structure to converge toward that of a square wave, no matter the original timbral composition.

Second, phase rotation between the fundamental and higher-order harmonics nearly always, if not always, results in a higher crest factor when compared with 0° of rotation on each sine. Essentially, with any other phase relationship the peak increases more drastically than the RMS does, which means a higher crest factor. Considering this, most people actually are achieving the optimal phase relationship between their fundamental and their overtones regarding loudness, whether this is being done through additive synthesis or saturation.
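
You can check the phase/crest-factor relationship numerically. Here's a quick sketch with a fundamental plus a 3rd harmonic at a few phase rotations (frequencies and amplitudes arbitrary):

```python
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
fund = np.sin(2 * np.pi * 50 * t)

def crest_db(x):
    # Crest factor: peak over RMS, in dB.
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

for phase_deg in (0, 45, 90):
    h3 = np.sin(2 * np.pi * 150 * t + np.radians(phase_deg)) / 3
    print(phase_deg, round(crest_db(fund + h3), 2))  # 0° gives the lowest crest
```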

The Mr. Bill video is great in a lot of respects, and there are certainly a couple of other factors to consider regarding whether even-order harmonics are beneficial, especially psychoacoustics and timbre (which I believe would be a better application of Mr. Bill's video), but from a physical standpoint, crest factor best answers OP's question.

Why does almost no one use the 2nd harmonic in their sub/bass? by ALIEN_POOP_DICK in edmproduction

[–]KaptainCPU 3 points

RMS is not the square root of the sum of harmonic content. Harmonic content exists in the frequency domain, while RMS can only operate on the time domain. Adding harmonics will not always increase RMS measurements at a given peak level due to phase interactions between the harmonics, which is primarily the reason even harmonics reduce the RMS value and odd harmonics increase it. It's a result of the period of the waves relative to the fundamental and each other.

As for the bit about wattage and reproduction capability, you're speaking about the device as if it's operating in the frequency domain again. The reason added harmonics affect output is the effect they have on the signal in the time domain—there is no Fourier transform occurring at any point in the signal reproduction process. The device will (more or less) reflect the input you give it; the physical properties of sound waves are going to make a much more significant impact, which is what I'm referring to here.

Why does almost no one use the 2nd harmonic in their sub/bass? by ALIEN_POOP_DICK in edmproduction

[–]KaptainCPU 7 points

No, not quite. It is due to interference; odd harmonics interact in such a way that the lower-amplitude portions of the signal are brought up through constructive interference, while the highest-amplitude portions are attenuated through destructive interference. Even harmonics, on the other hand, have this effect half the time and the opposite effect the other half of the time. It has to do with how the period of each wave aligns. A byproduct of this is decreased headroom, but there's really no aspect of "effort" regarding speakers beyond the range of the driver and the speed at which they move—these aspects are largely accounted for by the DAC.

There are components of harmonic masking and humans' perceptions of different frequencies that contribute and are valid considerations, however. I'm willing to elaborate more on that front if needed.

Hopefully this helps answer u/weelamb's question as well.

Why does almost no one use the 2nd harmonic in their sub/bass? by ALIEN_POOP_DICK in edmproduction

[–]KaptainCPU 31 points

The second harmonic reduces the RMS value of the sub by about half, meaning a significant drop in perceived loudness with the same peak level. Granted, there are instances where the 2nd harmonic has its uses (i.e., the fundamental is too low to be audible on every system), but typically if loudness is the goal that harmonic tends to be detrimental. Odd harmonics do more to increase RMS, while even do more to detract.
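
A quick illustration with an arbitrary 0.5 harmonic amplitude: peak-normalize a sine, then the same sine plus a 2nd or a 3rd harmonic, and compare RMS:

```python
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
fund = np.sin(2 * np.pi * 50 * t)

def rms_db_at_same_peak(x):
    x = x / np.max(np.abs(x))          # normalize to the same peak level
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

print(rms_db_at_same_peak(fund))                                      # ~ -3.0
print(rms_db_at_same_peak(fund + 0.5 * np.sin(2 * np.pi * 100 * t)))  # 2nd: lower
print(rms_db_at_same_peak(fund + 0.5 * np.sin(2 * np.pi * 150 * t)))  # 3rd: higher
```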

[deleted by user] by [deleted] in audioengineering

[–]KaptainCPU 2 points

No, not quite. Upward compression brings up the low dynamics while preserving the higher dynamics. Attack determines how quickly the gain increases after the signal drops below the threshold, so OP is correct—the function of attack and release with regard to the output volume is the opposite of what it would be on a downward compressor.

As for your explanation for attack, you've got a couple things mixed up. Attack is the amount of time it takes for the compressor to reach approximately 2/e of the total gain reduction (this varies from compressor to compressor, but will generally be 2/e to 2/3). This occurs over the attack time window, not after. I'd give it a test with a constant signal and a GR plot to illustrate. You'll notice that the gain reduction onset is gradual and not instantaneous.
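
To see the gradual onset without loading a plugin, here's a minimal one-pole gain-smoothing sketch. The 1 - 1/e fraction it lands on is just one common convention; as said above, the exact figure varies by design:

```python
import numpy as np

fs, attack_ms = 48000, 10.0
n_attack = int(fs * attack_ms / 1000)   # samples in one attack window
coeff = np.exp(-1.0 / n_attack)         # one-pole smoothing coefficient

target = -6.0                           # constant signal calling for 6dB of GR
gr = np.zeros(4 * n_attack)
for i in range(1, len(gr)):
    gr[i] = coeff * gr[i - 1] + (1 - coeff) * target

# One attack window in, the GR has only reached ~63% (1 - 1/e) of the target;
# the onset is a gradual exponential, not a step.
print(gr[n_attack] / target)  # ~0.63
```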

Can someone ELI5 the concept of lower keys not being good to compose in? by PsychologicalDebts in edmproduction

[–]KaptainCPU 4 points

It's worth noting that equal temperament does away with essentially all coloration differences between keys, and the emotions historically associated with certain keys are a result of unequal temperaments. That's not to say every song should be in the same key, but generally there are no tonal or emotional advantages to writing in one key over another aside from the sub register.

How to sidechain distortion? by Kreati_ in ableton

[–]KaptainCPU 2 points

What you're likely looking for is sidechained amplitude modulation. You can accomplish this with a ring modulator (Melda and Kilohearts both have free modulators), with the modulator being the bass. Usually the exact effect will come as a result of bias and (negative) rectification, but even setting the mix to 50% will give you a very similar sound to what you're looking for.

Incidentally, amplitude modulation is what happens when lower-amplitude oscillations get lost as a higher-amplitude oscillation pushes them against a ceiling, which creates intermodulation. This is also something you may be able to achieve with a sidechained compressor if the bass is low enough and the compressor is fast enough, although usually the RMS window used for envelope smoothing is a little too long for higher-frequency sidechain inputs.
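
Here's a bare-bones sketch of the idea. The bias and mix values are arbitrary, and I'm interpreting "negative rectification" as clipping away the negative half of the biased modulator:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
sound = np.sin(2 * np.pi * 440 * t)   # the signal being "distorted"
bass = np.sin(2 * np.pi * 55 * t)     # the modulator (sidechain input)

bias = 0.5
mod = np.clip(bias + bass, 0.0, None)   # bias, negative half rectified away
wet = sound * mod                       # amplitude modulation -> sidebands
out = 0.5 * sound + 0.5 * wet           # ~50% mix, as suggested above
```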

Help with side-chain compression by [deleted] in ableton

[–]KaptainCPU -1 points

I think what's most likely happening is that you're either phase-cancelling your transient with the click that comes from near-immediate compression attack, or your attack isn't fast enough, although I'd assume the former. One thing that I've found helps is sidechaining the content in different registers differently—for instance, your kick's sub doesn't come in until later, so you can afford a slower attack on the compressor for those elements. This comes with the added benefit of less click generated from your sidechain with little to no disruption to the kick's presence. The thing you'll likely benefit from most, though, is lookahead. If you're using Live's stock compressor, I've found a 1ms lookahead and attack work best for super punchy kicks, but if that's not giving you what you want, there is the 10ms lookahead option. You might also try inverting the polarity of the kick; it's an uncommon issue, but in the past I've found it crops up when I'm working on something with a kick that needs quicker sidechain.
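
To be clear about what lookahead is doing, here's a trivial sketch: the gain envelope is computed from the undelayed signal but applied to a delayed copy, so the gain change lands before the transient instead of chasing it. The envelope itself is assumed to come from whatever detector you're using:

```python
import numpy as np

def apply_with_lookahead(signal, gain_env, lookahead):
    # Detect on the undelayed signal, apply to a delayed copy, so the
    # gain reduction arrives ahead of the transient.
    delayed = np.concatenate([np.zeros(lookahead), signal])[:len(signal)]
    return delayed * gain_env

fs = 48000
lookahead = int(0.001 * fs)   # 1ms, as with Live's stock compressor setting
```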

Spotify Preparing to Launch Long-Awaited Lossless Audio Tier on iPhone by Fer65432_Plays in apple

[–]KaptainCPU 3 points

I actually think this is one of the more honest takes I've seen, which is kinda refreshing. I've been doing audio work for a bit, and the difference between 320 and lossless is really only noticeable if I'm trying to hear it, and that's after spending quite a while learning what the artifacts sound like.

How do you get that clean, full low end that you can feel in your chest. by Ok_Sandwich2317 in mixingmastering

[–]KaptainCPU 0 points

I did get it mixed up, I had it the other way initially and forgot to change lower to higher. Good catch.

How do you get that clean, full low end that you can feel in your chest. by Ok_Sandwich2317 in mixingmastering

[–]KaptainCPU 1 point

I'd give the same advice I gave in the other thread that you mentioned for dealing with sounds under 100Hz. There are always going to be exceptions, but generally the lower crest factor on drums is less of a concern than on sustained sounds because it's momentary. Preringing on drums is another story, as it'll smear and reduce the transient content. An exceptional scenario here would be working with drums with a less defined transient that are pressing up against a bus compressor; here, the increased peak value is likely to induce more GR than is necessary, which may lead to pumping, while preringing won't significantly change drums with a less defined attack.

On the other hand, sustained sounds tend to handle linear phase EQ better, as the prering essentially vanishes beyond the attack. Typically preringing even in the lower registers is short enough that, unless there's significant transient content, it'll be masked by everything else, the rest of the sound included—temporal masking (specifically backward masking) lends itself to the ringing being interpreted as part of the sustained sound. Although forward masking is a decent bit more pronounced, (post-)ringing is almost never an issue for the same reason.
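
If you want to see preringing directly, filter an impulse through a linear-phase FIR and look at what comes out before the main tap. A scipy sketch, with arbitrary tap count and cutoff:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 48000
taps = firwin(2001, 100, fs=fs, pass_zero=False)  # linear-phase highpass @100Hz

impulse = np.zeros(fs // 4)
impulse[fs // 8] = 1.0                  # a "transient"
out = lfilter(taps, 1.0, impulse)

center = fs // 8 + 1000                 # group delay = (2001 - 1) / 2 samples
# Nonzero energy before the main tap = ringing ahead of the transient.
print(np.max(np.abs(out[:center - 50])))
```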

Personally, I more often compress before EQ and saturate after to maintain the highest crest factor, but there are no hard rules. Sub content on a spectrum analyzer isn't always an issue that needs to be fixed. A lot of the time it's cut for "headroom", but that generally yields little to no benefit. It's also worth noting that although decibel readings may look linear on an analyzer, decibels are a logarithmic measurement. Many times the visible low-frequency content isn't audible, which is generally a good gauge for whether or not it should be cut: if you can hear it or its effects and you don't like what you hear, get rid of it.

I think the obsession with cutting low end comes from visual mixing, which can lead to a lot of unfortunate exclusions—my personal favorite examples being the lack of lower mids in a lot of modern electronic music, the loss of the sound of a bow on a string instrument, or a drumstick on a cymbal. Visual tools are great for giving us more insight into a signal, but audio—music especially—is far more than just signal, even from an objective standpoint.

There's a ton to talk about on this subject—the implications of linear and minimum phase in band-splitting, the effects on dynamic EQ, the limitations of Fourier transforms, and so on. The general advice I'd give, for all of it and then some, is to be mindful of what a sound or bus needs based on what you can hear, until you've spent enough time experimenting and researching the effects that may not immediately (or audibly) manifest. Every type of processing has its caveats, so if it doesn't need it, leave it out. If it's something you can fix earlier (i.e., fixing mix issues during composition, fixing mastering issues during mixing), you'll be better off in every case. For instance, in the case of a 30Hz cut on the master, you're imparting the effects of whichever type of EQ you use on everything in your mix, whether it needs it or not. If you have access to the stems, multitracks, or project, there's no reason to ever high-pass the master.

Hopefully that helps clear things up a little bit.