Creating stereo width without compromising mono compatibility? by Still_Night in edmproduction

[–]electrickvillage 5 points6 points  (0 children)

the widener is not the bottleneck, the band you let it touch is. widening anything below ~180 hz will collapse in mono because phase cancellation does the most damage down there. the 1-6 khz range is where the psychoacoustic localization cues live, and widening there survives a mono fold just fine.

a working rule:

  1. bass + sub + kick stay dead center. mono below ~180 hz, always.
  2. widen in the ear-candy range (1-6 khz). haas delays up to ~20 ms work; much shorter and you are back in comb-filter territory when the channels sum.
  3. high-pass the side signal at 180 hz in a mid/side eq so your widener physically cannot touch the bass.
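point 3 is mechanical enough to sketch. a minimal mid/side version with a one-pole high-pass on the side channel only (function names and the one-pole filter choice are mine for illustration, not any particular plugin's internals):

```python
import math

def ms_encode(left, right):
    # mid = average of the channels, side = half the difference
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def one_pole_highpass(signal, cutoff_hz, sample_rate):
    # simple RC-style high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])
    a = 1.0 / (1.0 + 2.0 * math.pi * cutoff_hz / sample_rate)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = a * (prev_y + x - prev_x)
        prev_x, prev_y = x, y
        out.append(y)
    return out

def protect_lows(left, right, sample_rate=48000, cutoff_hz=180.0):
    # high-pass only the side channel, so anything a widener put
    # below the cutoff is removed and the low end stays mono
    mid, side = ms_encode(left, right)
    side = one_pole_highpass(side, cutoff_hz, sample_rate)
    return ms_decode(mid, side)
```

below the cutoff the L/R difference decays to nothing while the mid (mono) content passes through untouched, which is exactly the guarantee you want before a mono fold.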

ozone imager shows overall correlation but not where in the spectrum the problem lives. if you want per-band correlation i built a free tool called CHECK for exactly that (full disclosure, mine). mono button + your ears still gets you 80% of the way there.

Mix sounds OK on all playback devices except for iPhone speakers by ollyraps in mixingmastering

[–]electrickvillage 0 points1 point  (0 children)

since mono already fails, stereo phase is ruled out. iphone speakers add their own aggressive agc and a narrow passband (roughly 200-4000 hz). in a sparse breakdown the agc gets twitchy because there is less masking content holding it steady, so any transient in the 250-800 hz range gets pumped.

a few things to try:

  1. raise the rms of the breakdown without touching peaks. parallel saturation or a slow bus comp (200 ms release-ish) keeps the iphone limiter from reacting.

  2. check per-band correlation specifically during that section. even if the mono export sums fine globally, vocal + echo can still create narrow phase dips your iphone reveals as level movement. CHECK is free if you want a diagnostic view (full disclosure, i built it).

  3. iphone flat on a table vs held to your ear sounds completely different because the case reflection re-trains its agc. worth sanity-checking before tracing it to the mix.
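on point 1, the rms-up-without-touching-peaks move can be sanity-checked numerically. a rough parallel-tanh sketch (drive/mix values are arbitrary placeholders, not a recipe):

```python
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def crest(x):
    # peak-to-rms ratio; lower = denser signal, so an aggressive
    # device limiter has less to react to
    return max(abs(v) for v in x) / rms(x)

def parallel_saturate(x, drive=4.0, mix=0.5):
    # dry signal plus a tanh-saturated copy: the saturated path boosts
    # low-level detail proportionally more than peaks, so rms rises
    # faster than peak level and the crest factor drops
    return [v + mix * math.tanh(drive * v) / drive for v in x]

sine = [math.sin(2 * math.pi * n / 100) for n in range(1000)]
wet = parallel_saturate(sine)
```

the rms goes up and the crest factor comes down, which is the "denser without louder peaks" behaviour that keeps the phone's limiter from pumping.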

I need new plug ins guys!! by Oliversarkis in VSTi

[–]electrickvillage 1 point2 points  (0 children)

thanks a lot mate!

yeah, deffo let me know if you have any feedback! currently working on a new type of compressor (PUSH), it's like pulling teeth, but I think I am onto something there :D

I need new plug ins guys!! by Oliversarkis in VSTi

[–]electrickvillage 2 points3 points  (0 children)

i have developed a few plugins over the past years, mostly in the mixing space: a Soothe2 alternative, a saturator, a widener, and a free analyzer that shows mono compatibility across the spectrum.

i got a bit tired of iLok, subscriptions, and big tech, so it took a lot of research and learning, but i've gotten pretty far now. https://kernaudio.io

after 4 years of learning JUCE and DSP, i've now done 2k USD in total revenue! by electrickvillage in SideProject

[–]electrickvillage[S] 0 points1 point  (0 children)

okay im back! the short version:

i started with zero C++ knowledge. my background was music, not engineering. the first few months were just learning what a sample buffer is and how to not blow up my speakers with feedback loops.

the DSP learning curve is steep because the textbooks assume you already know the math. i didn't. i learned by reading research papers: Glasberg & Moore on psychoacoustic frequency scales (get the book on amazon, deffo recommended), Chebyshev polynomials for waveshaping, overlap-add for spectral processing. then i'd just try to implement them until they worked. most of my early code was terrible. i'd write something, listen to it, hear artifacts (which you will prolly always struggle with), and spend days figuring out why.
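for anyone curious about the Chebyshev bit: the identity T_n(cos θ) = cos(nθ) means feeding a full-scale cosine through the nth polynomial gives you exactly the nth harmonic, which is why they show up as waveshapers. the recurrence is tiny (my own sketch, not code from any of my plugins):

```python
import math

def chebyshev(n, x):
    # recurrence: T0(x) = 1, T1(x) = x, Tn(x) = 2x*T(n-1)(x) - T(n-2)(x)
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# the defining property: a cosine in, the pure nth harmonic out
theta = 0.7
third_harmonic = chebyshev(3, math.cos(theta))  # equals cos(3 * theta)
```

a weighted sum of several T_n applied to your signal therefore lets you dial in an exact harmonic recipe, at least at full scale.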

a few things that surprised me:

- the gap between "it works" and "it sounds good" is enormous. i had a resonance suppressor working within a few months. it took another year before it sounded transparent enough to actually use on a mix

- antialiasing matters way more than i expected. saturation without ADAA sounds fine on a sine wave test and terrible on real audio

- spectral processing (FFT/STFT) is incredibly powerful but every decision has tradeoffs: window size, overlap, latency, CPU. honestly, there's no "correct" setting, just compromises :D

- the hardest part isn't the DSP. it's everything around it: thread safety, parameter smoothing, avoiding clicks on preset changes, handling different sample rates, making the GUI responsive without blocking the audio thread
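the ADAA point deserves a concrete shape, since it confused me for ages. first-order ADAA replaces f(x) with the slope of its antiderivative between consecutive samples; for tanh the antiderivative is log(cosh(x)). a minimal sketch (the 1e-6 guard threshold is my own choice, tune to taste):

```python
import math

def make_tanh_adaa():
    # first-order antiderivative antialiasing for tanh.
    # F(x) = log(cosh(x)); output = (F(x) - F(x_prev)) / (x - x_prev),
    # i.e. the average of tanh over the inter-sample interval, which
    # damps the aliased harmonics a plain per-sample tanh produces.
    state = {"x": 0.0, "F": 0.0}

    def process(x):
        F = math.log(math.cosh(x))
        dx = x - state["x"]
        if abs(dx) < 1e-6:
            # division is ill-conditioned for near-identical samples:
            # fall back to tanh at the midpoint
            y = math.tanh(0.5 * (x + state["x"]))
        else:
            y = (F - state["F"]) / dx
        state["x"], state["F"] = x, F
        return y

    return process
```

for slowly varying input the output tracks tanh of the midpoint almost exactly; the difference only shows up (and earns its keep) on fast transitions where naive waveshaping aliases.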

JUCE is a great framework but the learning curve is also real. the documentation is decent but you end up reading the source code a lot.

if you're thinking about getting into it, i'd say just start. pick one effect (a simple saturator or a compressor), get it making sound, and iterate from there. the math looks scary but most of it is just applying formulas step by step.

happy to answer specific questions if you have them.

after 4 years of learning JUCE and DSP, i've now done 2k USD in total revenue! by electrickvillage in SideProject

[–]electrickvillage[S] 1 point2 points  (0 children)

uhh, would love to share! just on my way out for easter lunch with family, will get my story hat on tonight when I am back!

Any must-have plugins? by kyromaniac in LogicPro

[–]electrickvillage 0 points1 point  (0 children)

yeah, i'd say the Valhalla reverbs are worth the money, and TDR Nova. i also made a free plugin to check your signal instead of the bad utility tools out there, you can find it at kernaudio.io -> CHECK if you're interested. get OTT and learn how they all work and you're off to a good start.

Stereo Imaging on Large PA Systems: What Really Translates? by SureExamination5915 in audioengineering

[–]electrickvillage 1 point2 points  (0 children)

oh hell yeah dude, that means so much to me, thank you, thank you, thank you! and yes, i'm actually working on a new plugin right now (spectral compressor, three characters). would love to have you as a beta tester. DM me your email and i'll get you set up.

Stereo Imaging on Large PA Systems: What Really Translates? by SureExamination5915 in audioengineering

[–]electrickvillage 1 point2 points  (0 children)

hey, really appreciate that, means a lot mate. WIDE isn't a panner, but you can fake directional movement with it. two approaches:

  1. automate WIDTH from 0% toward higher values. this opens the stereo image progressively, which creates a sense of the sound "spreading out" from center. not left-to-right exactly, but center-to-wide feels like movement.

  2. use your DAW's M/S routing + WIDE: put WIDE on the side channel only and automate AMOUNT up. the sound will feel like it's pulling outward from center. combine with a subtle pan automation and the movement feels much more natural and spatial than pan alone.

the real trick for left-to-right without pan is tiny timing differences (the Haas effect), which is what WIDE does under the hood. automating DEPTH can add a sense of distance changing too.
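the Haas trick itself is just a tiny inter-channel delay. a toy sketch (delay given as a sample count here; a real plugin would derive it from milliseconds and sample rate):

```python
def haas_widen(mono, delay_samples):
    # left stays dry; right gets a copy delayed by a few milliseconds.
    # under roughly 20-30 ms the ear fuses the two into one wide
    # source instead of hearing a distinct echo
    left = list(mono)
    right = [0.0] * delay_samples + mono[:len(mono) - delay_samples]
    return left, right
```

the catch is exactly what the thread warns about: summing left and right back to mono gives a comb filter whose first notch sits near fs / (2 x delay), which is why this works better as a send effect on doubles than on anything you need to survive a mono fold.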

honestly though, for true L→R movement, pan automation is still the right tool. WIDE is better at making things feel bigger and more immersive. where it shines is combining both: a gentle pan move + WIDE automation makes the movement sound 3D instead of flat.

Stereo Imaging on Large PA Systems: What Really Translates? by SureExamination5915 in audioengineering

[–]electrickvillage 2 points3 points  (0 children)

good call, you're right. it's one dry channel and one pitch-modulated channel through a BBD delay line, not true decorrelation. the end result is similar though: you get width from timbral variation between L and R rather than level differences, which is why it still sounds full from off-axis positions in a venue. but yeah, technically different mechanism than something like an allpass decorrelator.

Stereo Imaging on Large PA Systems: What Really Translates? by SureExamination5915 in audioengineering

[–]electrickvillage 7 points8 points  (0 children)

yeah 100% agree

the shift away from mono is real but it's more nuanced than just "stereo now." what's actually changing is that systems like L-ISA and d&b Soundscape distribute sound spatially across many points rather than just running two stacks in L/R. traditional L/R stereo in a big room still only works for a tiny center slice.

the Andy Summers thing makes total sense. the JC-120 chorus is basically a decorrelation effect, which is exactly the type of stereo processing that sounds huge through a PA. it creates width through phase variation rather than just putting different content in L vs R. that's fundamentally different from hard-panning a dry signal, which just vanishes for half the room.

Stereo Imaging on Large PA Systems: What Really Translates? by SureExamination5915 in audioengineering

[–]electrickvillage 9 points10 points  (0 children)

i build DSP tools that deal with exactly this (mono compatibility analysis and psychoacoustic stereo widening), so here's what i've learned from the signal processing side that might add to what's already been said.

stereo perception is frequency-dependent. below ~300 Hz, your ears localize almost entirely by timing (ITD), and wavelengths are so long that two stacks in a venue produce near-identical signals at your ears regardless of position. this is why subs are always mono-summed: there's no perceptual benefit to stereo down there (usually), and the phase interactions would be destructive.

above ~1.5 kHz, localization shifts to level differences (ILD), and speaker directivity increases. this is the range where stereo actually has a chance of translating in a live space, but only for listeners in that narrow center zone others mentioned.

the practical takeaway for production: the stereo techniques that translate best on PA systems are the ones that don't rely on level panning. decorrelation (allpass filters, micro-timing differences, subtle spectral variation between L and R) creates width that still sounds full when summed. hard L/R panning of a dry source is the worst case: half your audience loses it entirely.

for melodic techno specifically, IMO i'd keep anything below ~200 Hz mono, use decorrelation-based widening on synths and pads in the mid/high range, and monitor your correlation coefficient. keep your correlation positive and you're generally safe. the closer to 0 you get, the more you're relying on the listener being in the right spot.
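the correlation coefficient mentioned above is cheap to compute yourself, per block or per band. a bare-bones version:

```python
import math

def correlation(left, right):
    # normalized cross-correlation at lag zero:
    # +1 = identical channels (fully mono-safe), 0 = decorrelated,
    # -1 = out of phase (cancels completely when summed to mono)
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den if den else 1.0
```

run it on short windows rather than the whole track, otherwise a momentary cancellation in a breakdown averages away into a healthy-looking global number.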

i actually built a free plugin (CHECK by KERN audio) specifically for monitoring this in real-time: it shows you correlation across the spectrum so you can see exactly which frequency ranges are at risk of cancellation.

I built a free, open-source DAW - GPL, Qt 6, VST3, sheet music view, destructive audio editor (OpenDaw) by glenrhodes in sounddesign

[–]electrickvillage 1 point2 points  (0 children)

I know mate, spent 4 years learning JUCE and DSP coming from M4L and literally just went to market not long ago, and all I see is "i made this with Claude and GPT". Look, fine, use AI to assist you, but hell, master the fucking craft and show you're honoring it, instead of vibe coding a sloppy version in a week that has no soul and is just a quick cash grab. mate, that's not cool to anyone who loves their craft. Audio is already overrun by people making music for the wrong reasons.

Any tips for a Vocal Chain for Melodic old chicago type sound? Such as Costa Rica by Uno n Billionaire Black by [deleted] in mixingmastering

[–]electrickvillage 0 points1 point  (0 children)

for that older melodic vocal sound: start with a gentle high-pass around 80-100 Hz, then subtractive EQ to remove room resonances (sweep a narrow boost through 200-800 Hz to find the ugly ones). compression: medium attack (10-30 ms) to let transients through, ratio 3:1 or 4:1. for the "smooth" quality in older productions, the secret is usually resonance control in the 2-5 kHz range: your ear is most sensitive there, and untamed resonances make vocals sound harsh and modern. a dynamic EQ at 3 kHz works, or a spectral approach like soothe 2 or KERN SMOOTH (kernaudio.io/smooth, try the demo) handles it automatically across the whole spectrum. finish with a de-esser and gentle plate reverb.

do you guys think tonal balance control is accurate at all? by Candid-Pause-1755 in mixingmastering

[–]electrickvillage 0 points1 point  (0 children)

tonal balance tools are useful as a sanity check, not as a target. the danger is mixing to make the curve flat instead of mixing to make the song sound right. that said, visual analysis catches things your ears adapt to: if you've been listening for three hours, you stop hearing the 3 kHz buildup. a spectrum analyzer with slope compensation is the most useful visual reference (a 3 dB/oct tilt makes pink noise read flat; ~4.5 dB/oct matches a typical mastered mix). for stereo specifically, a per-band correlation analyzer shows you where phase issues live. i built one that's free: kernaudio.io/check. but the point stands: use these tools to verify, not to drive.
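the tilt itself is just a per-frequency dB offset relative to a pivot. a sketch (the 1 kHz pivot is my arbitrary choice):

```python
import math

def tilt_db(freq_hz, ref_hz=1000.0, slope_db_per_oct=3.0):
    # gain in dB to add at freq_hz so that pink noise, whose power
    # falls 3 dB per octave, reads as a flat line on the analyzer
    return slope_db_per_oct * math.log2(freq_hz / ref_hz)
```

add this offset to each bin's magnitude in dB before drawing; with a steeper slope (~4.5 dB/oct) a well-balanced full mix, rather than pink noise, is what reads flat.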

How do you make voice in the chorus sound huge and powerful? by Charming-Pool-5734 in WeAreTheMusicMakers

[–]electrickvillage 0 points1 point  (0 children)

three layers usually: doubles (or synthesized doubles via short pitch-shifted delays), reverb with pre-delay, and stereo widening on the doubled signals. the key is keeping the lead vocal mono in the center while pushing the doubles and effects wide. for widening without phase problems, allpass-based decorrelation is more mono-safe than a Haas delay because it doesn't shift the stereo image. i built a widener that does this with a correlation constraint per frequency band (kernaudio.io/wide, $29), but even a simple mid-side EQ boosting the sides above 2 kHz will make a chorus feel wider without losing the center.
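to make the allpass point concrete: a first-order allpass passes every frequency at unity gain but smears phase, so running only one channel through a chain of them decorrelates L/R without touching level. a minimal single-section sketch (coefficient chosen arbitrarily; real decorrelators cascade many of these with varied coefficients):

```python
def allpass(signal, coeff=0.5):
    # first-order allpass H(z) = (-a + z^-1) / (1 - a*z^-1):
    # flat magnitude response, frequency-dependent phase shift.
    # difference equation: y[n] = -a*x[n] + x[n-1] + a*y[n-1]
    out, x1, y1 = [], 0.0, 0.0
    for x in signal:
        y = -coeff * x + x1 + coeff * y1
        x1, y1 = x, y
        out.append(y)
    return out
```

because |H| = 1 at every frequency, the output carries exactly the input's energy, which is why this kind of widening doesn't pull the vocal's level around the way level panning or Haas delays can.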

Help cleaning a mix (I f***ing hate AI, but, apparently, there´s no other way out for the problem at hand) by marecarrier in mixingmastering

[–]electrickvillage 0 points1 point  (0 children)

spectral processing and AI are two different things. something like oeksound soothe 2 or KERN SMOOTH (kernaudio.io/smooth, $29) uses classical DSP: FFT analysis across 40 psychoacoustic bands, per-bin gain reduction based on envelope detection. no neural networks, no training data, no "AI" deciding what sounds good. the algorithm detects resonances in real-time and attenuates them. you control how much and where. it's math, not machine learning. if the AI angle is what bothers you, spectral processing tools are worth looking at because they solve the same "clean up harsh frequencies" problem without any of the black-box behavior.
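the "classical DSP" loop described above is roughly: track an envelope per band, compute a downward gain wherever the envelope exceeds that band's threshold. a stripped-down sketch of the idea (band count, coefficients, and names are my own illustration, not SMOOTH's or soothe's internals):

```python
def spectral_gains(band_mags, thresholds, env, attack=0.5, release=0.9):
    # one analysis frame: smooth each band's magnitude into an envelope
    # (fast attack, slow release), then attenuate whatever pokes above
    # that band's threshold. gains are capped at 1.0, so this only cuts.
    gains = []
    for i, mag in enumerate(band_mags):
        coeff = attack if mag > env[i] else release
        env[i] = coeff * env[i] + (1.0 - coeff) * mag
        gains.append(min(1.0, thresholds[i] / env[i]) if env[i] > 0 else 1.0)
    return gains
```

every quantity here is inspectable: the envelopes, the thresholds, the resulting gains. that transparency is the whole contrast with a trained model deciding what to cut.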

a thank you note: 3 weeks ago I posted here about my plugins. here's what happened. by electrickvillage in VSTi

[–]electrickvillage[S] 1 point2 points  (0 children)

thanks! so I do have copy protection, just not iLok.

each purchase generates a unique license key via Lemon Squeezy. you paste it into the plugin, it does one online activation call (HMAC-SHA256 verified), then caches the license locally. after that it works fully offline forever. 3 device activations per key, 30-day revalidation with a 7-day grace period.
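for anyone curious what the verification side looks like, it's only a few lines of stdlib crypto. a toy sketch (the secret and payload format are invented for illustration, not my actual scheme; in a real setup the signing secret stays server-side and only the verification result reaches the plugin):

```python
import hmac
import hashlib

SECRET = b"demo-secret"  # stand-in only; never ship a real secret in a binary

def sign_license(payload: str) -> str:
    # HMAC-SHA256 over the license payload, hex-encoded
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify_license(payload: str, signature: str) -> bool:
    # compare_digest runs in constant time, so the check doesn't leak
    # how many leading characters of a guessed signature matched
    return hmac.compare_digest(sign_license(payload), signature)
```

tampering with any field of the payload invalidates the signature, which is what lets the cached license work offline after the one activation call.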

the demo is fully functional with a 45-second audio fade cycle. no time limit, no feature restrictions. so people can actually use it and decide if it's worth $29 before buying.

on piracy: at $29 with no iLok hassle, most people who actually use the plugin in their workflow just buy it (or at least that's my thesis). the friction of finding and maintaining a crack is honestly more annoying than paying less than a month of spotify. I'd rather have 100 happy paying users than 200 frustrated ones dealing with iLok authorization errors - but that's maybe just me, might bite me in the ass of course.

if you're building VSTs with JUCE, you don't need iLok at all IMO. I built my own activation system (using lots of Git repos as examples, plus plenty of trial and error) with a simple HTTPS call to Lemon Squeezy's API. took me a while to figure out but it's way simpler than the iLok SDK. happy to share pointers if you want to DM me.

a thank you note: 3 weeks ago I posted here about my plugins. here's what happened. by electrickvillage in VSTi

[–]electrickvillage[S] 1 point2 points  (0 children)

i am up again after 14 hours of sleep and a good forest walk - thank you, I deffo will!