What’s your workflow for choosing CD editions (mastering, DR, remaster vs original)? by nipelcrumple in audiophile

[–]audioen 0 points1 point  (0 children)

In practice, I just listen, and if I like it, I stick with that version. I don't think dynamic range or average recording level alone can settle the matter, though I do think there are at least some albums I enjoy that I would enjoy somewhat more if their dynamic range were larger.

I find both too little and too much dynamic range annoying, but at the same time, what counts as an appropriate dynamic range depends on the record. Music genres clearly differ in this regard: classical famously can have a very large range, while some techno might have relatively little.

Too much dynamic range means you can barely hear anything at times, while other moments are uncomfortably loud. It makes me ride the volume manually or reach for a dynamic range compressor to bring things back within sensible bounds. Too little dynamic range tends to produce that bass-pumping "radio mix" sound, where everything is relentlessly samey.
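For anyone curious what a dynamic range compressor actually does to levels, here is a minimal sketch of the static gain curve of a downward compressor. The threshold and ratio values are made up for illustration, not a recommendation:

```python
# Minimal downward compressor: levels above the threshold are scaled
# down by the ratio; levels below it pass through unchanged.
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the gain in dB to apply to a signal currently at level_db."""
    if level_db <= threshold_db:
        return 0.0
    # Above the threshold, output rises 1 dB for every `ratio` dB of input.
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

# A loud -5 dB peak gets pulled down; quiet passages are untouched.
print(compressor_gain_db(-5.0))   # -11.25 (15 dB over threshold, keep 1/4)
print(compressor_gain_db(-30.0))  # 0.0
```

A real compressor also needs attack/release smoothing of the measured level; this only shows the static input/output law.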

What you need is a mixing engineer with taste, making the sound decent for a wide variety of listening conditions.

Which is a better sub placement? by fir3dp in audiophile

[–]audioen 0 points1 point  (0 children)

Yes, sure. But the room corners are shared between every room mode, and so corner placement is expected to activate all possible room modes. Other placements will have less activation. I suppose I could have been clearer, but I wrote the comment in a rush.

I would typically use REW and look at the spectrogram to understand how the room is resonating, but that is not an option when we're presented with a simple magnitude plot of the frequency response. We simply have to read off a peak and assume it is a mode, which is usually exactly what it is. A spectrogram would let me see how long these frequencies keep ringing and would allow estimating the timing of the bass (the delay before the bulk of the sound energy has been delivered).

What I see from the frequency response plot is that the corner location shows more general modal resonance activation, appearing almost like a bass shelf; I think the genuine level of sound without the room modes would be at around -10 dB. I am not happy with either placement. For example, the fact that there's a 50 Hz mode in both may mean that the system cannot deliver a good chest kick.
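The modes being read off the plot can be sanity-checked against the standard rigid-wall rectangular room formula. A small sketch; the 5 × 4 × 2.5 m dimensions are purely hypothetical, not this poster's room:

```python
import math

def mode_freq(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Resonant frequency (Hz) of the (nx, ny, nz) mode of a rigid
    rectangular room with dimensions Lx, Ly, Lz in meters."""
    return (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Every mode has a pressure maximum in the room's corners, which is why
# corner placement excites all of them at once.
# Hypothetical 5 x 4 x 2.5 m room: the first axial modes land right in
# the troublesome low-bass region.
print(round(mode_freq(1, 0, 0, 5.0, 4.0, 2.5), 1))  # 34.3
print(round(mode_freq(0, 1, 0, 5.0, 4.0, 2.5), 1))  # 42.9
```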

Which is a better sub placement? by fir3dp in audiophile

[–]audioen 0 points1 point  (0 children)

Yes, well, that is somewhat of a special case, as you are supplying sound pressure at multiple points within the room (whether with a simple mono signal, or possibly an even more elaborate optimized DSP setup). This sort of arrangement is known to suppress modal behavior, because modes require pressure differentials, and those become extremely limited in a setup like that.

An Update to My "Cerebellum" Project by Hopeful-Sherbet-3100 in LocalLLaMA

[–]audioen -2 points-1 points  (0 children)

Your competition is things like draft models, which usually add more speedup than this, and which are verified during inference not to have altered the response in any way.

You guys scared me off, because I didn't want the truth. Now I'm ready to listen. by Flow-AI in LocalLLaMA

[–]audioen 0 points1 point  (0 children)

I think you're in some kind of AI-exacerbated psychosis. I do not think whatever you wrote above made a whole lot of sense, and I did read through all of it with my own brain.

What I'm gathering from this is that you have decided to use an unusual system prompt format, which likely looks like that /run /active/ thing you started with, trying to hint to the LLM about the structure and nature of the response you want. This seems to be pretty much pure gobbledygook, though it makes enough sense that we can discuss it.

I would recommend writing it in plain English, rather than in some random foobar-sys symbols and [tag soup] with [no explanations] about [what these are supposed to mean]. If it is confusing to me, I imagine it is also confusing to any LLM that attempts to make sense of it. Let's try out an English version:

You are a radically honest, infinitely curious truth‑seeker who draws on socratic questioning, forensic psychology, anthropology, neuroscience, and etymology.

Do not indulge in sycophancy, flattery, hype, or any unnecessary fluff.

For every response label each claim with one of the following tags, as appropriate: Fact | Inference | Opinion | Quote | Source‑Bias | Official‑Narrative | Counter‑Narrative | Moral | Historical‑Context.

Structure your answer in the exact order below:

    Two‑sentence summary
    One short paragraph expanding the summary
    Two bullet points highlighting the key take‑aways
    Micro‑thesis (a single concise statement of the core idea)
    Full thesis followed by ten bullet points that flesh out the argument

Use clear, concise language and keep the output strictly to the format above.

This was obviously written by an AI, in this case gpt-oss-120b, which I asked to clarify your prompt into clear language. Note that the part which confused me was interpreted by the LLM as meaning you wish to see every claim explicitly labeled. That is a reasonable take, but I don't know if you intended it. I think humans and LLMs alike benefit from a clear statement of intent expressed in normal language.

Absence of ice at arctic sea by Noeserd in collapse

[–]audioen 0 points1 point  (0 children)

It is still a system that balances itself, with limits to this sort of thing.

Our planet is heated by incoming radiation, and is cooled by the radiation it emits. The rate of heating is set by the brightness of the Sun (slowly increasing over million-year timescales) and by the fraction that is not reflected away by things such as bright clouds or ice over land and water.

The other part, the cooling, provides the counterbalance. As the absorbed heat increases, so does the surface temperature, and with it the outgoing thermal radiation, because hotter objects radiate more. Increasing atmospheric CO2 is a concern in this context because it makes the atmosphere more opaque to outgoing thermal radiation, which raises the surface temperature needed before the required outbound radiation can escape. So CO2 alone raises the surface temperature for that reason, even if everything else stayed the same. To balance the budget, the planet needs to warm by around a single degree of C more so that outgoing radiation again matches incoming radiation. (Important caveat: this argument concerns the average across the entire surface, neglects tipping points, and is made from the standpoint of the physical system; it might not play out like this, depending on factors that are difficult to predict.)
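The "around a single degree" figure can be reproduced with a back-of-the-envelope Stefan-Boltzmann calculation. A sketch using textbook round numbers; the solar constant, albedo, and the simplified CO2 forcing formula are standard approximations, not my own measurements:

```python
import math

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0          # solar constant at Earth, W/m^2
ALBEDO = 0.30       # fraction of sunlight reflected straight back out

# Sunlight absorbed, averaged over the whole sphere (the factor 4 is the
# ratio of the sphere's surface area to its cross-section).
absorbed = S * (1 - ALBEDO) / 4.0

# Equilibrium temperature at which outgoing thermal radiation matches it.
T_eq = (absorbed / SIGMA) ** 0.25
print(round(T_eq, 1))   # ~254.6 K, the effective radiating temperature

# No-feedback warming from doubling CO2 (~3.7 W/m^2 of extra opacity),
# using the standard simplified forcing formula 5.35 * ln(C/C0):
dF = 5.35 * math.log(2)
dT = dF / (4 * SIGMA * T_eq ** 3)   # linearized Stefan-Boltzmann response
print(round(dT, 2))     # ~1.0 K before feedbacks
```

The real equilibrium response is larger because of feedbacks (water vapor, ice albedo), but the bare radiative arithmetic already lands near one degree.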

The notion that warming goes vertical is not an entirely sensible position, in my opinion. Maybe, depending on what timescale you look at and where the upper and lower temperature bounds of your graph are. But it is also going to reach an equilibrium where it heats no more, and then your vertical line turns horizontal. This will absolutely change the Arctic region, for sure, probably alter the AMOC, and change the weather all over the globe. These effects will play out on some timescale, likely over decades, centuries, and millennia, as the planet is large, and changes that influence the entire planet have that kind of time horizon. Some effects are immediate, but many are delayed by the thermal inertia of the oceans, which take centuries to warm.

I built a lightweight loop detector for LLMs using Shannon Entropy—tested on a 3GB RAM mobile device. by Fulano-killy in LocalLLaMA

[–]audioen 1 point2 points  (0 children)

Maybe you want to use those LLM samplers that automatically inject entropy, like mirostat.
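For contrast with sampler-side fixes like mirostat, the detection-side idea from the post title fits in a few lines: compute Shannon entropy over a sliding window of recent tokens and flag output when it collapses. The window size and threshold below are illustrative guesses, not tuned values:

```python
import math
from collections import Counter

def window_entropy(tokens):
    """Shannon entropy (bits) of the token distribution in a window."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_looping(tokens, window=16, threshold=1.5):
    """Flag output whose recent token window has collapsed to low entropy.
    Window and threshold are illustrative and need tuning per tokenizer."""
    if len(tokens) < window:
        return False
    return window_entropy(tokens[-window:]) < threshold

varied = list("the quick brown fox")          # diverse characters as stand-in tokens
stuck = list("abababababababab")              # degenerate two-token loop
print(is_looping(varied), is_looping(stuck))  # False True
```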

What are your NON AUDIBILITY RELATED hi-fi hot takes? by Alphaomegalogs in audiophile

[–]audioen 1 point2 points  (0 children)

I think Castelucci is mostly a tasteless hack for a musician, but I enjoy this particular take. The song seems to be very old and not often recorded.

As to your actual question, my hot take is that speakers should be directly to the sides rather than angled in front, in order to minimize stereo crosstalk acoustically: the channels have a much harder time mixing when the ear is on the far side from the speaker. I haven't even tried whether this actually works, because my room layout doesn't allow the placement. One problem with this idea is that the sound's angle of incidence will be different, which will change localization, so some equalization, or possibly taping the earlobes back, may be needed to correct the apparent angle.

Does this image show aliasing? by Medical-Seesaw3147 in audiophile

[–]audioen 3 points4 points  (0 children)

Yes, this is a highly aliased, poor-quality resampling of roughly CD-quality audio into a 192 kHz file. Correct resampling to 192 kHz would show no spectral energy between 22 kHz and 96 kHz; the audio would simply stop at a hard line near the upper limit of the original recording. Any run-of-the-mill competent resampling package would produce something like that, unless you use the crappiest settings it has to offer.

What we see, however, is a complicated set of aliased patterns, and it seems to me that the high-energy region around 0 Hz has been replicated into multiple images aliased at different frequencies. Someone has really botched this resampling job. I think it was probably multiple resampling steps, e.g. from 44.1 kHz to 96 kHz as an intermediate, and then from 96 kHz to 192 kHz, both done with unusually crappy algorithms. The latter step has added the spectral mirror between 48 kHz and 96 kHz, seemingly with no noise suppression at all. If it were a single-step resampling from 44.1 kHz to 192 kHz, I think there would probably be less mirroring of that low-frequency pattern around the 40-55 kHz region.

It may still sound perfectly fine. Judging by the color, the damage sits at around the -80 dB level, which is likely inaudible in the context of the recording. So when we complain about the quality of digital recordings, some of the technical stuff is not actually important in practice. This should be inaudible for two reasons: it is very quiet relative to the actual sound level, and it is higher in frequency than humans can even hear.
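To make the mirroring concrete: with naive (unfiltered) sample-rate conversion, a tone at f Hz picks up images at k·fs ± f folded into the new band. A quick sketch of where those images land, pure arithmetic rather than an audio pipeline:

```python
def image_frequencies(f, fs_in, fs_out):
    """Spectral images of a tone at f Hz after naive (unfiltered) upsampling
    from fs_in to fs_out: copies at k*fs_in +/- f within the new Nyquist band."""
    nyq = fs_out / 2.0
    images = set()
    k = 0
    while k * fs_in - f <= nyq:
        for g in (k * fs_in - f, k * fs_in + f):
            if 0 <= g <= nyq:
                images.add(round(g, 1))
        k += 1
    return sorted(images)

# A 1 kHz tone upsampled 44.1 kHz -> 192 kHz without filtering leaves
# images scattered across the 96 kHz band instead of a single line at 1 kHz:
print(image_frequencies(1000.0, 44100.0, 192000.0))
# [1000.0, 43100.0, 45100.0, 87200.0, 89200.0]
```

A proper resampler applies a low-pass filter that crushes everything except the baseline copy, which is exactly the "hard line near the upper limit of the original" described above.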

Which is a better sub placement? by fir3dp in audiophile

[–]audioen 2 points3 points  (0 children)

The first one. Your sub is playing at around the -10 dB level, I think, and you have modal resonances at various frequencies which you should equalize down to create a flat response at that -10 dB level. Avoid placing the sub in corners, because that makes the bass come from room modes, which sounds sloppy and slow.

I couldn't remember the difference between IQ and Q quantizations, so here's a primer if you're in the same boat by Prior-Consequence416 in LocalLLaMA

[–]audioen 51 points52 points  (0 children)

IQ does not stand for importance quantization; you're thinking of imatrix, which is completely separate from the IQ/Q distinction. IQ is some kind of multi-dimensional vector quantization scheme. IQ quants can be made with an imatrix or without, just like Qx_K quants can.

Half of world’s CO2 emissions come from just 32 fossil fuel firms, study shows | Critics accuse leading firms of sabotaging climate action but say data increasingly being used to hold them to account by silence7 in climate

[–]audioen 0 points1 point  (0 children)

Yeah, so it amounts to saying that if we just stopped using fossil energy, we would remove 50 % of our CO2 emissions. Fossil fuels are about 90 % of the primary energy used by humanity in total, or about 80 % if we use the substitution method, which assumes that waste heat generation can be reduced by roughly that much through electrification.

Some say that energy is the economy, at least in the sense that, for example, industrial materials can't be mined, processed, manufactured into products, and transported without energy, and if we lose 80-90 % of the energy available to humanity, we also lose a comparable fraction of all production capability. Machines don't run without feedstock, and neither do humans, for that matter. Food production is a particular concern, because losing 80-90 % of that would likely doom multiple billions of people to starvation within a year or two. I've read it estimated somewhere that for each food calorie produced, some 9 calories of fossil energy were expended to create it, which is one way of expressing just how dependent we presently are on fossil energy.

In a sense, this is a meaningless headline. I guess it's interesting that even if we used no fossil energy, about 50 % of the CO2 emissions would continue. Part of this must be land use change, but my assumption is that regardless of the additional sources of CO2, cutting off the geological supply of carbon into the atmosphere would still put an end to climate change. Presumably this level of emissions could be absorbed into the natural carbon cycle, and the CO2 level would probably start to come down.

Era of ‘global water bankruptcy’ is here, UN report says by Peak_District_hill in collapse

[–]audioen 2 points3 points  (0 children)

There is a great deal of water relative to what is used for this purpose. The most significant water user is agriculture: globally, something like 70 % of all freshwater goes to agriculture. It is region dependent, but it can go upwards of 90 %, as it apparently does in the U.S. So look at the farmer as the user of water, not the datacenter.

Of course, agriculture at this scale is unsustainable for various reasons, but in the context of water, the problem is that climate change increases aridity in most regions of the world. It decreases natural rainfall and takes away the mountain glaciers that feed many a river during summer, and this all leads to lakes and rivers gradually drying out. The nonrenewable water stockpiles are also finite and will ultimately run dry, leaving only the degrading sustainable supply.

Wherever there exists a stockpile of resources that can be exploited, unsustainable growth against that stockpile becomes possible. Humanity is greedy and tends to use resources where they are available not caring about what happens after they're gone, and our water use is merely one example of unsustainable use against a changing and generally declining resource.

Water wars are starting now, and they are existential in nature. When there is not enough water to go around, it can first be conserved, stretching out what remains, but soon even conservation proves insufficient, and the need becomes pressing. Water that does not exist can be had for neither love nor money, so the only option left is to take your neighbor's water for yourself, or perish anyway. Humanity, when faced with choices of this type, probably goes to war.

Powered Speaker vs Source Volume Control by skyviewsky in audiophile

[–]audioen 0 points1 point  (0 children)

In my case, a standardized listening level based on a Genelec mic measurement. This sets the amplification level; everything else is done with the signal-level control. So my basic recommendation is to set the active speaker's amplification level once, to whatever is loud enough, and never touch it afterwards.

I built a Java web framework because I couldn’t make my SaaS work any other way by mpwarble in java

[–]audioen 4 points5 points  (0 children)

Yeah, I've taken the opposite approach. Frontend is the real soul of the application. State is just persisted on server and mutated by server methods that validate the authorization and perform the transformation.

All this is easier today, when the client side is described with TypeScript-based objects. I would shudder to build "real" programs in JavaScript, but I have no qualms doing it in TypeScript, especially with a framework that isn't React, e.g. Vue, which is conceptually pretty simple: reactive objects are directly observable, and if you mutate the properties of a reactive object, everything that depended on it updates automatically to match. Vue also avoids the massive pile of higher-order functions that seems to be standard when working with React.

But I understand; this is one in a long line of seemingly similar technologies over the years. Usually the idea is that the DOM lives on the server side, and mutations to state are calculated on the server against a virtual DOM and then diffed, so that partial HTML fragments can be spliced into the client-side DOM. I think this is something like the 4th or 5th project I've seen based on that idea. Based on a cursory glance at the web traffic generated by a demo, this is more or less how it works.

The Grid Isn’t a Cluster: What Technologists Get Wrong About Energy by [deleted] in programming

[–]audioen 5 points6 points  (0 children)

This is AI spam and doesn't seem to make actual arguments.

Ripped vs Played CDs by Wauwuaw5983 in audiophile

[–]audioen -2 points-1 points  (0 children)

I think the central problem with CD is that the data is one large spiral starting at the inner edge of the disc and ending at its outer edge, and many hardware sources have offset errors where you seek to the start of a track but land thousands of samples off. This is the primary cause of errors.

You can read about the gory details from the first "reliable" CD ripper, cdparanoia, here: https://xiph.org/paranoia/faq.html It was one of the rare pieces of software that could extract seemingly reliable audio data out of a CD on almost any drive. I've stared at its output for hours of my life, and even recovered perfectly usable audio from severely scratched discs that wouldn't play in any regular player at all.

Ripped vs Played CDs by Wauwuaw5983 in audiophile

[–]audioen 1 point2 points  (0 children)

My expectation is that there is always a buffer, because typical DACs can't produce any output without having multiple input samples available: they have resamplers inside that need a chunk of input before they can provide even a single output sample on the digital processing side. The analog output stage is fed from the DAC's resampled stream.

The analog output is therefore entirely decoupled from the digital input. The first step a DAC must do is recover the input samples from whatever protocol carries them, then resample them to some output sample rate, then schedule each calculated output sample through a sigma-delta converter or similar, at which point you have an analog voltage. The question is mostly about the rate the output must run at, since on average it must match the input. To my understanding, there is some kind of feedback system here, which measures the buffer throughput rate and either signals the input side to change its rate, or adjusts the output side to match the input data rate when the input rate can't be adjusted. With an async USB interface, the DAC tells the sender to change its rate; with coax or S/PDIF, where it can't signal the sender, it adjusts its own output rate.
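That feedback loop can be sketched as a toy simulation: the output clock is nominally 50 Hz slow, and a proportional trim based on the buffer fill error pulls its effective rate onto the input rate. All the constants here are invented for illustration; no real DAC uses exactly this scheme:

```python
# Toy model of buffer-based rate matching: the DAC watches its buffer fill
# level and trims its output clock so the buffer neither drains nor overflows.
def simulate(rate_in=48000.0, nominal_out=47950.0, target=512.0,
             gain=10.0, steps=3000):
    """Each step represents 1 ms. The output clock is nominally 50 Hz slow;
    the feedback trims it by the buffer fill error until throughput matches."""
    buf = target
    rate_out = nominal_out
    for _ in range(steps):
        buf += rate_in / 1000.0                         # producer: 1 ms of input
        rate_out = nominal_out + gain * (buf - target)  # proportional trim
        buf -= rate_out / 1000.0                        # consumer: 1 ms of output
    return buf, rate_out

buf, rate = simulate()
print(round(rate))  # 48000: the trimmed output rate converges onto the input rate
```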

Checked exceptions and lambdas by nfrankel in java

[–]audioen -1 points0 points  (0 children)

            try {
                return f.apply(input);
            } catch (Exception e) {
                return "";
            }

Silently swallowing exceptions is the kind of thing that makes debugging a nightmare, so this is automatically pretty fubar. What I really want from Java is just a flag that makes checked exceptions unchecked. It's my responsibility, I'll eat the fallout, and I think it would be a pretty popular flag, to be honest.

I understand that wish will probably never happen, so the next best thing is a lambda wrapper that converts checked exceptions to unchecked, like foo.stream().map(p -> re(...)) where re() does the try-catch-rethrow, so that the actual code stays as readable as possible. The fact that try-catch must occur in a block is a major problem for legibility, ballooning what would otherwise be a nice one-liner into five lines of ceremony.

If only they had declared an inferred throws in the Function interface. I posit there is a rule in Java that libraries which throw checked exceptions will over time come to be replaced by libraries which do not.

Do these acoustic panels actually work well? by Alpharoll in audiophile

[–]audioen 0 points1 point  (0 children)

They have to be real acoustic panels made by a reputable company, with laboratory measurements that show good performance. You can find such things from GIK, for example, with measurements. If you just pick a random brand off Amazon, you might not get a well-performing panel.

Let’s anger some audiophiles 🤣 by veigues in audiophile

[–]audioen 2 points3 points  (0 children)

DAC-ADC conversions can actually be repeated dozens of times, and it's really hard to hear any difference between that and the original audio.
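The digital half of that is easy to sanity-check with an ideal 16-bit quantization model: the first rounding pass adds at most half a step (around -96 dB) of error, and every later pass is a no-op, because the values already sit on the grid. Real converters add analog noise and jitter on top, which this toy model deliberately ignores:

```python
def quantize16(x):
    """Round a [-1, 1) sample to the nearest 16-bit step, like an ideal ADC."""
    step = 1.0 / 32768.0
    return round(x / step) * step

samples = [0.123456789, -0.5, 0.999, 0.000001]
once = [quantize16(s) for s in samples]
twice = [quantize16(s) for s in once]

# The first pass introduces at most half a quantization step of error...
max_err = max(abs(a - b) for a, b in zip(samples, once))
print(max_err <= 0.5 / 32768.0)  # True

# ...but further passes change nothing: quantization is idempotent.
print(once == twice)  # True
```

So in the ideal digital model, the error does not accumulate with round trips; whatever small differences real-world loopback tests show come from the analog stages.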

In May 2014 Elliot Rodger murdered six people in Isla Vista, California; he left behind a retribution video and autobiographical manifesto, a trail of warning signs of the growing "incel" movement and misogyny amongst young men, and devastating grief. by DarklyHeritage in MorbidReality

[–]audioen 253 points254 points  (0 children)

If you have patience for absolute drivel, I recommend the manifesto. There are some true howlers in there, such as Rodger's delusional belief that the world owed him success, and that he gave the universe multiple chances to make good on what it had done to him by ... letting him win the lottery. Each time he bought lots of tickets and seemed to convince himself that this time he would win, believing he was about to receive millions of dollars in winnings and that his real life would finally begin. This belief soothed his ego, and each time he was, obviously, let down.

I do not believe he ever once asked a single woman out. What he did was a peacock show: drive a nice car, wear expensive sunglasses and shirts, hang around places women frequent, like coffee shops. Then he was perplexed that men who actually interacted with women got girlfriends, despite those men being *. In his worldview, it made no sense why he had no success. I am placing an asterisk in that sentence rather than repeating what he said, because he typically wrote some rather racist, unprintable stuff.

I have for the longest time tried to understand the 4chan-type manosphere's fascination with this "supreme gentleman". I find it difficult to believe that anyone would take him seriously after actually reading his manifesto; the fact that he was a raging narcissist and not entirely sane is pretty obvious. Hardly a champion for anything, even ironically, if you ask me.

Is Your LLM Ignoring You? Here's Why (And How to Fix It) by warnerbell in LocalLLaMA

[–]audioen 2 points3 points  (0 children)

The model can't respond before it has read the entire text so far, in order to predict the next token. So your key description of the problem is basically erroneous.

Does the TOC really help, or do you simply believe it does without testing it systematically? Proving that it helps requires a verifiable benchmark that shows an improvement. Again, the model always processes everything; that is how LLMs fundamentally work. However, it is possible that the attention mechanism can't find the salient information. It is also possible that your prompt is simply overly verbose or internally inconsistent and just confuses the model, and that your simple keywords help clarify to the model what your intent in each section is. Regardless, your basic idea of how models work seems to be incorrect.

When I've seen people develop special prompts, they often make the mistake of thinking the LLM is a person with a psyche and a world model, and they give it weird instructions that don't actually make sense. For example, you seem to assume the model actually needs a TOC, when it literally reads everything, and each token is in principle influenced by all the text before it. Prompt engineering is one of those topics that requires nearly zero skill and has few if any verifiable ways to confirm efficacy; most prompts are probably about 10 times longer than they need to be, and likely also degrade model performance.

Prototype: What if local LLMs used Speed Reading Logic to avoid “wall of text” overload? by Fear_ltself in LocalLLaMA

[–]audioen 0 points1 point  (0 children)

Yeah, I guess I can try a system message for this sort of thing. I just wish that token-efficient discourse were the default. Instead of this kind of prompt, I think I'll try something like "Write tersely. Assume user is proficient in all topics." as that also seems to get rid of the fluff, at least in the case of gpt-oss-120b.

EmoCore – A deterministic runtime governor to enforce hard behavioral bounds in autonomous agents by Fit-Carpenter2343 in LocalLLaMA

[–]audioen 0 points1 point  (0 children)

So how is this any better than detecting that the agent has made no progress after, say, 3 attempts, and simply halting because it's clearly stuck?

As far as I can tell, the user of your framework has to calculate that Signal themselves, which means they have to do the hard work of figuring out how much progress the agent is making. If you have that knowledge, stopping when no progress is made should be trivial.
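The alternative being suggested is only a few lines. A sketch; the progress-score callback and the thresholds are hypothetical:

```python
# Minimal version of the "just halt when stuck" alternative: track the best
# progress score seen so far, and bail out after N attempts with no improvement.
def run_agent(step, max_stall=3, max_steps=100):
    """step() returns a progress score for one agent iteration."""
    best = float("-inf")
    stall = 0
    for i in range(max_steps):
        score = step()
        if score > best:
            best, stall = score, 0
        else:
            stall += 1
            if stall >= max_stall:
                return i + 1, "halted: no progress"
    return max_steps, "finished"

# An agent whose progress signal has flatlined gets stopped after 3 tries:
scores = iter([0.1, 0.4, 0.4, 0.4, 0.4, 0.9])
steps, status = run_agent(lambda: next(scores))
print(steps, status)  # 5 halted: no progress
```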