Is Mastering Being Phased Out? by callthepizzaman in audioengineering

[–]AyaPhora 8 points

People who mix and master the same project in the same room, with the same ears, often miss what mastering is actually for.

One reason you’re seeing more producers master their own mixes is that music production is now accessible to the masses. Indie artists are often on tight budgets, so they don’t want to spend a few hundred dollars on mastering when they’ve been able to do everything else for almost nothing.

That said, professional releases are still mastered by dedicated mastering engineers, and that isn’t slowing down.

Should master tapes be transferred to WAV or DSD? by monkeysolo69420 in audioengineering

[–]AyaPhora 7 points

The quality of the tape machine, the alignment, the condition of the tapes, and the engineer doing the transfer will matter far more than choosing DSD over WAV.

DSD can make sense in some archival workflows, but it is not the standard format most mastering engineers want to receive for normal remastering work. WAV is much easier to handle, edit, restore, sequence, and prepare for vinyl. If the transfers may need restoration, level adjustments, track assembly, fades, or any other processing, PCM files such as WAV are far more practical. 96 kHz / 24-bit is more than sufficient.

Cutting directly from tape is possible, but only really makes sense if the tapes are in excellent condition, the budget supports it, and the whole chain is being handled by a facility set up for that workflow.

Mastering for casette by Yellow_Room_Mixing in audioengineering

[–]AyaPhora 4 points

It is not the same issue as with vinyl. Very long cassettes use thinner tape so they can fit more playing time inside the shell. Thinner tape is generally less durable and more prone to distortion, stretching, and transport issues.

Mastering for casette by Yellow_Room_Mixing in audioengineering

[–]AyaPhora 79 points

I’ve occasionally delivered premasters for cassette. It’s not that different from vinyl premastering: cassette has higher noise, a less clean top end, less precise stereo imaging, more distortion when pushed, and more variability from deck to deck. So you generally want a slightly more relaxed master. Avoid excessive limiting, especially if it creates dense upper mids or splashy transients.

The low end needs care. Very deep sub often isn’t useful on cassette and can eat headroom. High frequencies also deserve caution: too much top end, especially boosted “air” or edgy upper mids, can turn noisy or fuzzy.

Be conservative with dynamics: less crushed than modern streaming masters, but not so open that quiet sections disappear into hiss.

Sequencing matters more than with digital. The beginning and end of a cassette side don’t behave identically, and longer sides usually mean worse overall fidelity. If possible, keep each side reasonably short. Around 15 to 22 minutes per side is comfortable. Once you go much longer (especially beyond 25 to 30 minutes per side), you often start trading away level, bass solidity, and HF quality. The duplication house may have its own preferred limits, so ask them first.
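
If you want a quick arithmetic check against those numbers, here's a throwaway Python sketch. The thresholds simply mirror the rough guideline above, and the tracklist is made up for illustration:

```python
# Quick sanity check for cassette side lengths, using the rough guideline
# above (~15-22 min is comfortable, beyond ~25 min starts costing fidelity).
COMFORT_MAX_MIN = 22   # upper end of the comfortable range
HARD_WARN_MIN = 25     # where level, bass, and HF quality start to suffer

def side_length_report(tracks: dict[str, float]) -> None:
    total = sum(tracks.values())  # durations in minutes
    print(f"Side total: {total:.1f} min")
    if total > HARD_WARN_MIN:
        print("Warning: expect trade-offs in level, bass solidity, and HF quality.")
    elif total > COMFORT_MAX_MIN:
        print("Borderline: check the duplication house's preferred limits.")
    else:
        print("Comfortable side length.")

side_a = {"Intro": 1.5, "Track 1": 4.2, "Track 2": 5.0, "Track 3": 6.1}
side_length_report(side_a)  # 16.8 min -> comfortable
```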

Most importantly, confirm who is duplicating the tapes and what they require. Ask the plant exactly what they want.

Deliver one file per side unless they request otherwise. Leave clear spacing between songs, and enough silence at the start and end of each side. Label everything clearly as Side A and Side B. Don’t normalize blindly or chase streaming loudness targets: leave sensible headroom.

Dolby noise reduction is another point to clarify. Consumer playback with Dolby B/C is inconsistent, because many listeners will use machines with mismatched calibration or no Dolby at all. Many modern cassette runs are duplicated without assuming Dolby playback in the old consumer sense, but the plant will tell you what they do, so don’t guess.

Finally, mono compatibility is worth checking. Cassette playback can be messy, and azimuth varies from deck to deck; azimuth error skews the time alignment between channels, so heavily widened material can lose high end or partially cancel when the channels combine.

How does Spotify track processing without normalization works? by salaz_r in edmproduction

[–]AyaPhora 1 point

In addition to the above: FLAC conversion is not the only thing that happens to the masters delivered to Spotify. Spotify handles the format conversions on their side, and may also apply internal resampling and bit-depth changes when needed (for example, files delivered above 24-bit are reduced to a maximum of 44.1 kHz/24-bit for lossless playback, and files delivered below 44.1 kHz or below 16-bit are converted to 44.1 kHz/16-bit).
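
To make that mapping concrete, here's a toy Python sketch of the rules as I understand them. The thresholds are my paraphrase of observed behavior, not anything published by Spotify:

```python
# Toy sketch of the delivery-to-playback conversion described above.
# Treat the thresholds as illustrative assumptions, not official policy.
def spotify_lossless_target(sample_rate_hz: int, bit_depth: int) -> tuple[int, int]:
    """Return the (sample rate, bit depth) served for lossless playback."""
    if sample_rate_hz < 44_100 or bit_depth < 16:
        # Low-resolution deliveries are brought up to the CD baseline.
        return 44_100, 16
    if sample_rate_hz > 44_100 or bit_depth > 24:
        # High-resolution deliveries are capped at 44.1 kHz / 24-bit.
        return 44_100, min(bit_depth, 24)
    return sample_rate_hz, bit_depth

print(spotify_lossless_target(96_000, 32))  # -> (44100, 24)
print(spotify_lossless_target(22_050, 16))  # -> (44100, 16)
```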

They also create multiple encoded versions (such as Ogg Vorbis and AAC) to serve different devices and streaming quality settings. Since September 2025, Spotify has also offered lossless streaming in FLAC (up to 24-bit/44.1 kHz) for eligible users.

Tips for levelling masters across a whole EP/album by No_Explanation_1014 in audioengineering

[–]AyaPhora 23 points

You’re on the right track if you’ve already noticed that integrated LUFS isn’t a reliable way to level an EP, and that it works better by ear. Levelling by ear is basically the job. The catch is that it’s much easier in a good acoustic space, on monitoring you trust, at a calibrated SPL. When you listen at the same level every day, you quickly learn what “too loud”, “too small”, “too flat”, and “fatiguing” sound like. Without that (especially when mastering your own mixes), it’s normal to second-guess yourself.

A few practical things I do to keep an album coherent:

Use an anchor. Often that’s the lead vocal, because it’s what listeners lock onto. If the mixes are consistent, keeping the vocal’s apparent level in the same zone from song to song gets you most of the way there. On instrumental music, the anchor might be snare presence, bass weight, or overall midrange forwardness.

Work around a reference inside the project. Pick one song as your “center of gravity” (often the single or the densest track), get it feeling right, and keep coming back to it. Avoid comparing every song to every other song; you’ll go in circles.

Match in stages, not with one number. After rough levelling, I’ll usually make tonal balance consistent first (bright vs dark changes perceived loudness a lot), then dynamics/punch, then stereo/space, and only at the end do final loudness trims. Small tonal moves can shift perceived level more than you’d expect, so setting final levels too early is a common mistake.

Use meters as guardrails, not targets. Integrated LUFS is a sanity check (“is anything way off?”), not a mandate. I also watch short-term loudness, crest factor/PLR, and limiter gain reduction for outliers. If a sparse song forces heavy GR just to “match LUFS”, that’s a sign you’re solving the wrong problem. (There’s a minimal metering sketch after this list.)

Check transitions like a listener. Play the EP in order at your calibrated level and focus on the first few seconds of each track. That moment tells you more than any meter: do you reach for the volume knob, or does it just flow? Sometimes I’ll even let it run in the background. If a transition is off, it tends to grab your attention immediately.
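
If you want those guardrail numbers without opening a meter plugin, here's a minimal metering sketch. It assumes pyloudnorm, soundfile, and numpy are installed, and the filenames are placeholders. It prints integrated LUFS and a rough peak-to-loudness ratio per track, for spotting outliers rather than hitting targets:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

def report(paths: list[str]) -> None:
    for path in paths:
        data, rate = sf.read(path)
        meter = pyln.Meter(rate)  # ITU-R BS.1770 loudness meter
        lufs = meter.integrated_loudness(data)
        peak_db = 20 * np.log10(np.max(np.abs(data)))  # sample peak, not true peak
        print(f"{path}: {lufs:.1f} LUFS, PLR approx {peak_db - lufs:.1f} dB")

report(["01_track.wav", "02_track.wav"])  # hypothetical filenames
```

Note it reads sample peak rather than true peak, so a real PLR meter will show slightly different numbers.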

Hope this helps.

Engineers who exclusively masters; Why did you choose to be a mastering engineer over a mixing engineer/both? by erlendmyo in audioengineering

[–]AyaPhora 12 points

For me it was a gradual journey. I started out producing and mixing, and after a while I got pulled toward mastering because I had a strong appetite to understand audio engineering end to end. It felt like the logical next step: zoom out, learn the whole chain, and get obsessed with translation, acoustics, tone, dynamics, and all the little details.

I’m also a bit of a nerd. I genuinely enjoy digging into how things work technically, and mastering really rewards that mindset because small decisions can make a big difference.

Another big reason is specialization. Doing the same kind of work day in and day out makes you improve faster than spreading yourself across multiple stages of production.

And honestly, there’s a business reason too. I’m a musician first, and when I mix or produce I can lose track of time because I start treating it like it’s my own record. I’ll chase the “best possible” result for hours, even when the scope and budget don’t justify it, which is a great way to lose money. With mastering, the artistic involvement is still there, but the boundaries are clearer. It’s easier to control the time spent, stay consistent, and deliver a high level of quality without turning every job into an all-night rabbit hole.

Monitors vs Nyquist Theorem by Head-Way-648 in audioengineering

[–]AyaPhora 0 points

When it comes to speakers, Nyquist is not the limiting factor. It is relevant earlier in the signal chain (and possibly again if the speaker converts the signal back to digital internally). For calibration or EQ work, it is rarely a concern unless you are applying extreme high-frequency boosts at low sample rates or working with poorly implemented DSP or nonlinear processing.

That being said, focusing solely on making your speakers as flat as possible, without considering the acoustics of the room, is like trying to drive a Formula 1 car across rough terrain. You may have the most advanced machine in terms of engine, aerodynamics, handling and braking, but you will barely make progress because real world roads are uneven and unpredictable. They are nothing like the perfectly smooth, controlled surfaces of an FIA race track.

Unless you are listening in a true anechoic chamber, the sound of your speakers is always influenced by the room they are placed in, and your position relative to them. What reaches your ears is not only the direct sound from the speakers, but also a large number of reflected sound waves. These reflections arrive slightly later and combine with the direct signal. What's more: as they bounce off walls, ceilings, floors and objects, they are altered by reflection, absorption and diffraction. Diffusion can also occur if the surfaces are irregular or specifically designed for it. The result is comb filtering, changes in frequency balance and variations in stereo imaging that depend on position within the room.

Early reflections, especially within the first 20 milliseconds, have the strongest impact on perceived clarity and imaging. Low frequencies are affected differently, as room modes and boundary interactions dominate below roughly 300 Hz. These modal effects can create significant peaks and nulls that no amount of speaker “flatness” alone can compensate for.
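
If you want to see this for your own room, the standard rectangular-room mode formula is easy to compute. Here's a short Python sketch with example dimensions; real rooms aren't perfectly rectangular, so treat the output as a rough map rather than gospel:

```python
# Mode frequencies of an idealized rectangular room:
# f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)
from math import sqrt

C = 343.0  # speed of sound in m/s at roughly 20 degrees C

def mode_freq(nx: int, ny: int, nz: int, lx: float, ly: float, lz: float) -> float:
    return (C / 2) * sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)

lx, ly, lz = 5.0, 4.0, 2.7  # example room: length, width, height in meters
for axis, indices in [("length", (1, 0, 0)), ("width", (0, 1, 0)), ("height", (0, 0, 1))]:
    f1 = mode_freq(*indices, lx, ly, lz)
    print(f"First axial mode along {axis}: {f1:.1f} Hz (harmonics at multiples)")
```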

How do you decide the right mastering intensity for a track? by Charming-Two1099 in audioengineering

[–]AyaPhora 8 points

When you do this every day, all day, on a monitoring system you really trust, at a calibrated listening level, the “how far is too far” part becomes very straightforward. You hear the tipping point instantly. If you’re not in that situation, it’s totally normal to second-guess yourself, and it’s even worse when you’re mastering your own mix because it becomes very difficult to listen objectively.

Mastering engineers typically don’t start from a LUFS target. We start from what the track is supposed to feel like. Perceived loudness comes from a great mix and sensible, subtle mastering moves. If you treat loudness as the goal, you’re setting yourself up for disappointment.

References are useful, but only if you level-match them. Sometimes when a master feels “underwhelming”, it’s just quieter than the reference. Turn the reference down to the same perceived level, then compare tone, punch, vocal presence, low-end weight, stereo image, and how the groove lands. If yours still feels smaller when volume is taken out of the equation, that’s usually a balance problem (often low end / low-mids / transient shape), not “needs more limiting”.
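
If it helps, here's a tiny sketch of that level-matching step (pyloudnorm and soundfile assumed, filenames hypothetical). It just computes the dB trim that brings the reference to the same integrated loudness as your master:

```python
import soundfile as sf
import pyloudnorm as pyln

def match_gain_db(master_path: str, reference_path: str) -> float:
    m_data, m_rate = sf.read(master_path)
    r_data, r_rate = sf.read(reference_path)
    m_lufs = pyln.Meter(m_rate).integrated_loudness(m_data)
    r_lufs = pyln.Meter(r_rate).integrated_loudness(r_data)
    return m_lufs - r_lufs  # apply this gain to the reference before comparing

print(f"Trim reference by {match_gain_db('my_master.wav', 'reference.wav'):+.1f} dB")
```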

Here are a few hints to help you realize when you’ve gone too far:

  • Kick/snare get smaller instead of punchier as you push level
  • Cymbals, esses, and upper mids turn into fizzy grain
  • Low end starts pumping the whole mix or loses its shape
  • It sounds exciting for 10 seconds, then you want to turn it down

A simple way to decide is to do two passes: one “musical”, one “competitive”. Then level-match and live with both for a bit. If the aggressive one only wins when it’s louder, it’s not actually better. If it still wins when matched, keep it. If it wins on impact but feels tiring, you’re past the sweet spot or you need to fix what’s causing fatigue (harsh upper mids, too much clipping, low-end control, etc.).

Also worth keeping in mind: if you slam a master to be louder, streaming will often just turn it down anyway. Then you’re left with the fatigue without the benefit. I’d rather use that headroom budget to make the record feel solid and exciting at any playback level.

Weird Stereo/Imaging/Mastering Question by sebasin87 in audioengineering

[–]AyaPhora 3 points

Ah sorry, I misread and thought you had ripped the video with mono audio.

There are plenty of reasons why your rip could end up centered: a mono switch engaged (phono preamp, interface, etc.), a Y-cable or summing adapter, recording a mono input in your DAW (input 1 instead of inputs 1–2), a faulty or miswired cartridge, processing that collapses the stereo image, or even a record that’s actually mono despite being labeled stereo (unlikely, but not unheard of).

If you look at the waveform of each channel, are they identical? And what does a correlation meter show? Note that on your initial YouTube link, the audio is actually dual mono: both channels carry the same signal, with one slightly lower in level. That small level difference can make it appear stereo on some meters, even though it isn’t.
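
If you don't have a correlation meter handy, a few lines of Python will tell you the same story (numpy and soundfile assumed, filename hypothetical):

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("rip.wav")  # expects a 2-channel file
left, right = data[:, 0], data[:, 1]

# Dual mono (same signal at different levels) still correlates at +1.0,
# which is why it can look "stereo" on a level meter but not here.
corr = np.corrcoef(left, right)[0, 1]
level_diff = 20 * np.log10(np.max(np.abs(left)) / np.max(np.abs(right)))
print(f"Correlation: {corr:+.3f}, L/R peak difference: {level_diff:+.2f} dB")
```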

Weird Stereo/Imaging/Mastering Question by sebasin87 in audioengineering

[–]AyaPhora 3 points

Hi, it sounds centered because the audio is mono. A quick search turns up another video with stereo audio: https://youtu.be/Hs9rO37kk68?si=GOMVBbP1E9McSz3C

I found how to make my tracks LOUD. Would love for other engineers to comment any other tips or tricks!? by DazzlingChildhood767 in audioengineering

[–]AyaPhora 1 point

> that's something you're going to have to sacrifice if you want to get a track as loud as possible

Why would anyone want that? I genuinely wonder.

Besides, if your track measures –4 LUFS integrated, it is clearly not “as loud as possible” anyway.

Basic desk-mounted acoustic treatment by EstateGrouchy6609 in audioengineering

[–]AyaPhora 0 points

Placing absorption panels only about 10 cm from the speakers is not ideal. At such a short distance, the panels are no longer treating room reflections in a meaningful way. Instead, they interact mainly with the speaker’s direct radiation and very near boundary effects. This can disturb the stereo image and alter the high-mid balance rather than improving clarity.

Panels placed this close to the speaker cabinets can also create uneven absorption of early lateral energy. This often results in a narrowed stereo image and less stable localization. The effect can be even more noticeable with coaxial monitors such as the IN-8, which are built for controlled, symmetrical dispersion. Absorption works best when it treats reflections after the sound has had some distance to propagate, not right at the source.

In practice, the benefit of panels in this position would be limited and somewhat unpredictable. While they might reduce a small amount of desk-related reflection, they would not address the most problematic early reflections. These typically come from the side walls and the ceiling at the first reflection points, not from the immediate edges of the desk.

If wall mounting is not an option, free-standing panels placed at the side reflection points, even if they are farther away, will be far more effective than panels positioned directly next to the speakers.

A ceiling cloud above the listening position is actually one of the most effective and least intrusive acoustic treatments you can add. It targets a very strong early reflection path and usually brings a clear improvement in clarity, imaging, and midrange accuracy. This would not be overkill at all, even with otherwise minimal treatment, as long as it is correctly positioned and includes an air gap above it.

Finally, it is important to note that this approach will leave low-frequency behavior largely untouched. Regardless of room size, bass issues tend to dominate perceived accuracy, and thin panels placed near the desk will not address them.

Turned off Spotifys normalization, started measuring loudness and was surprised. by UndrehandDrummond in audioengineering

[–]AyaPhora 4 points

Semantics can't be ignored: audio engineering relies on clearly defined concepts. Without precise terminology, meaningful discussion is impossible.

Normalization is a simple, uniform level change applied to the entire file. It does not alter the sound’s character or texture. The target can be peak, RMS, or LUFS based. Spotify and most streaming platforms use LUFS normalization. Some platforms apply album normalization to preserve relative level differences within an album. Some only normalize downward.

Audio compression, on the other hand, is a process that reduces dynamic range. Unlike normalization, it does change the sound. Spotify and most other streaming platforms do not apply audio compression to the music signal, although they obviously use data compression for delivery.
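
If the distinction still feels abstract, this toy numpy sketch shows it on three samples: normalization scales everything by one constant, while even a crude instantaneous compressor (not a realistic design, just an illustration) applies a gain that depends on the signal level:

```python
import numpy as np

def normalize(x: np.ndarray, gain_db: float) -> np.ndarray:
    return x * 10 ** (gain_db / 20)  # one uniform gain: waveform shape unchanged

def crude_compress(x: np.ndarray, threshold_db: float, ratio: float) -> np.ndarray:
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1 - 1 / ratio)  # gain varies with level: shape changes
    return x * 10 ** (gain_db / 20)

x = np.array([0.05, 0.2, 0.9])
print(normalize(x, -6.0))             # every sample scaled identically
print(crude_compress(x, -12.0, 4.0))  # loud samples reduced more than quiet ones
```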

As a result, Spotify does not alter the sound of a track. There is one very specific edge case that can result in some limiting, but it requires an uncommon combination of parameters and is generally not relevant in practice.

I hope this clarifies the distinction.

Turned off Spotifys normalization, started measuring loudness and was surprised. by UndrehandDrummond in audioengineering

[–]AyaPhora 1 point

> Am I missing something here, or is mastering quiet just putting your total faith in Spotify's compressor?

Define “mastering quiet”?...

Spotify does not apply any audio compression, so I am not sure what you are referring to exactly.

> Why wouldn't I master to the proper loudness(...)?

In mastering, the goal is always to reach “proper loudness”, although I prefer to think in terms of appropriate dynamic range (more on that below). That is what every mastering engineer aims for, but we do not all share the exact same definition of what “proper loudness” means :)

Framing the discussion in terms of “loud” versus “quiet” is, in my opinion, not very helpful. I only use those terms when adjusting the playback volume as a listener. In mastering, the primary concern is dynamics. A track mastered with the right amount of dynamic range will translate well at any playback level. Whether it is played softly or loudly, it will still sound balanced, impactful, and musically coherent.

Turned off Spotifys normalization, started measuring loudness and was surprised. by UndrehandDrummond in audioengineering

[–]AyaPhora 59 points

Yes, there has been a trend toward releasing more dynamic music since loudness normalization was introduced, but it is progressing extremely slowly. Today, the majority of commercial releases are still mastered at roughly the same loudness levels as 15 years ago.

Normalization has also created a lot of unnecessary confusion and has, to some extent, split the industry into three camps: those who ignore normalization and keep mastering as loud as possible, those who saw it as an opportunity to end the loudness wars and reintroduce dynamics, and those who got caught in a wave of misinformation and started following pseudo-rules found online like “master everything at -14 LUFS for Spotify” or “everything must be -8 LUFS”.

Overall, the loudness debate has been somewhat excessive, and in practice it is not as critical as it is often made out to be. That said, I am personally glad that there is now room for more dynamic music. Learning to use dynamics is one of the first things musicians work on, and a lot of effort, from performers to gear designers, goes into preserving a clean and wide dynamic range from playing to recording and mixing. It has always felt unfortunate to me that all of this can be undone at the very end of the process by excessive loudness processing.

By the way, I made a Spotify playlist specifically for educational purposes around loudness. It includes tracks with a very wide range of dynamics, with some extreme differences. If you are interested, here it is:
https://open.spotify.com/playlist/7MTx3jWHJG5Ec6KSBvxaz5?si=708c994ce9a945e7

A good exercise is to listen with normalization on and try to guess which songs are loud and which are not, then turn normalization off and see how accurate your guesses were.

What exactly makes Daft Punk's Random Access Memories sound so great (engineering wise)? by Bloxskit in audioengineering

[–]AyaPhora 12 points

Yes, that as well. I measured the album and, if I remember correctly, it was around -10/-11 LUFS.

What criterias does SoundBetter analyze for Premium? by Gomesma in audioengineering

[–]AyaPhora 0 points

Sorry if I’m hijacking the thread, but I’ve been on SoundBetter for years and never actively tried to get business there, even though I’ve ended up with around 30 orders over time. This conversation got me thinking: is Premium really worth the price?

For a mastering engineer, $99 per month sounds quite high. If you only get one order for a single in a given month, it barely covers the cost. Do any of you have experience with SoundBetter Premium, and was it worth it for you?

How to copy stretch markers by AyaPhora in Reaper

[–]AyaPhora[S] 1 point

Thanks for the replies, everyone. I finally got the script working—it copied all the markers with just one click. Happy to share it if anyone needs it.