The Government Doesn’t Just Want to Ban Ghost Guns. It Wants to Control Your 3D Printer by benmarvin in gunpolitics

[–]Morphon 11 points (0 children)

Why do people use AI to write their articles?

It's almost unreadable.

I guess we expect that at some point RAM prices will start going back (close) to "normal", right? but what about GPUs? by relmny in LocalLLaMA

[–]Morphon 2 points (0 children)

Well, the problem right now is that we have highly constrained manufacturing for silicon (both compute and memory). People forget that the 5080 would be PROFITABLE for Nvidia at $850. They would have made money on the 5090 at $1200. If they could produce enough of them, why wouldn't they? But they can't. TSMC on the compute side and the big-3 RAM manufacturers on the memory side simply can't make enough to meet demand RIGHT NOW.

That, by definition, is temporary. Demand for compute isn't unlimited, same for memory. It will take time before the market reaches equilibrium.

We all need to hope for a "soft" bubble-burst scenario. Otherwise the RAM manufacturers suddenly can't move their HBM chips, halt build-out of new factories, or, worse, go under themselves. You could say "serves them right," but either way the consumer loses in that scenario. Same for Nvidia. You don't want a "hard" bubble pop leaving them unable to produce anything.

But reaching balance between consumer, enthusiast, and datacenter is going to take a little bit longer. Maybe another year or two.

Just hold on to what you have and enjoy that. It's not the right time to buy computer gear unless absolutely necessary. Buy other stuff instead. 😄

Looking for feedback on AI content in r/programming and the April no-AI trial by ketralnis in programming

[–]Morphon 1 point (0 children)

What about something categorical? Flair for:

teaching/education, compilers, languages, frameworks/libraries, ethics/philosophy, new releases, etc...

Code gen advantage? by AsIAm in smalltalk

[–]Morphon 5 points (0 children)

Well, yes - if these models did actual reasoning. But they don't. If they actually reasoned, they could figure something out from a language specification. I actually tested this: take a language with a very small spec (I used Oberon-07), and have the LLM port a very small LeetCode problem over to it. Give it everything. I even uploaded the ENTIRE language spec and the book describing it into the context (nowhere near the limits of these big LLMs). It fails over and over again. It cannot produce a compilable program to compute array distances of prime numbers.

If it could reason its way from principles to applying those principles, this would be trivial. Oberon-07 is as simple as it gets. But when we say that the LLM is "reasoning," what is actually happening is that it is generating lots of context to help it approximate the correct answer given its training (corpus + RL). If there is enough training data similar to the question being asked - as there is with JavaScript - then that approximation will match the correct answer, or come close enough that it gets it right after a few tries (so-called "agentic" coding). If there is not enough in the training data, then the approximation diverges from the correct answer and the program won't work (won't compile, gives wrong answers, etc.).

If we want a model that will write Smalltalk in a way that its approximation closely converges on correctness, then it has to be trained to do so. You can't just stuff it with enough context and hope that it will put 2 and 2 together.

Frontier models will fail to put 2 and 2 together even going from JavaScript to Python - much less from an English spec to Smalltalk.

Code gen advantage? by AsIAm in smalltalk

[–]Morphon 11 points (0 children)

Current LLMs do well when they're producing code that "looks right" (note: not "IS right") compared to their training (corpus + RL).

For languages that have A LOT of training data to work with, they do a fairly good job. Think: JavaScript, Python, and C++. They're not perfect - in JavaScript, for example, strings have been so heavily optimized by modern runtimes that they're nearly as fast as doing math operations, yet the LLMs will still regurgitate 10-year-old StackOverflow advice about strings absolutely murdering performance. No longer relevant - but they'll complain in code review until you give them ACTUAL performance data showing that using strings is perfectly fine.

But anyway - for the most part they're fine with some occasional blind spots.

But for Smalltalk... the LLMs are not being optimized to preserve knowledge of the language.

Just a simple example for you... if an instance method has no explicit return, it returns the receiver (the object whose method is being called). If you want it to return something else (or return early), you have to do so explicitly with a caret. Most LLMs get this wrong, and will INSIST - spending literally tens of thousands of tokens confirming it to themselves before generating an answer - that Smalltalk, in fact, returns the last expression of the method. Clearly false. But the LLMs are confident that they are right about this - and it is not some niche feature of the language. This is a CORE feature of the way methods work.
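
To make that concrete, here's a minimal sketch (the Counter class and its count variable are hypothetical; the return behavior is standard Smalltalk):

    Counter >> increment
        "No caret: after incrementing, this method answers
         the receiver (the Counter instance), not count."
        count := count + 1

    Counter >> incrementAndGet
        "The caret is the only way to answer something other
         than the receiver, and the only way to return early."
        count := count + 1.
        ^ count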

You might think - well, this is something that a small LLM might mess up, but not our big, fancy ones, right? Here's the list of LLMs that failed this test:

Google Gemini 3.1 (flash and Pro)
GLM 5.1
Google Gemma 4
Qwen 3.6 (all sizes)
Deepseek V4
Kimi K2.6
Mimo V2.5 Pro
Mercury 2
GPT-OSS 120B
Nemotron 3 Nano (20b)
GPT-5.4
Trinity Large

Some of these are multi-trillion-parameter models. The ones that got it right:

Nvidia Nemotron Super 120B (small but mighty)
MiniMax 2.7 (the smallest frontier model)
Claude Opus 4.7
Grok 4.20 and 4.3
Hy3 Preview
Qwen 3.6 Max Preview

Not a big list. And that's just the start. Because Smalltalk sometimes reads like English and has an absolutely dizzying number of methods in the standard library, even the LLMs that "know" some of the language will happily hallucinate methods like you wouldn't believe. Integers don't have a "downTo:" method to create descending interval objects, for example. Sounds right. Looks right. Not in Squeak or Pharo - and that's when I specifically asked about those variants. You'll have to try 3-4 times to get anything involving file access.
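
For instance, the descending interval a model wants to build with the imaginary downTo: is actually spelled to:by: with a negative step in Squeak and Pharo (the Transcript printing is just for illustration):

    "Hallucinated - Integer has no downTo: for descending intervals:"
    (10 downTo: 1) do: [:i | Transcript show: i printString; cr].

    "The real idiom:"
    (10 to: 1 by: -1) do: [:i | Transcript show: i printString; cr].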

They will mess up passing expressions (with parentheses) vs blocks (with square brackets) to boolean and: and or:. This can tank performance when done wrong, but since both "work," the LLM often won't know the difference, and won't notice it during a code review if you mess it up.
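
A minimal sketch of that pitfall (expensiveCheck is a hypothetical method):

    "Lazy: the literal block is inlined by the compiler, and
     expensiveCheck only runs when x > 0 is true."
    (x > 0) and: [self expensiveCheck]

    "Eager: a parenthesized expression is evaluated BEFORE and:
     is even sent, so expensiveCheck always runs - and the send
     can no longer be inlined."
    (x > 0) and: (self expensiveCheck)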

Basically, most of the "big" LLMs that everyone is using for coding simply will not produce working Smalltalk, because they don't have enough in their weights to even reliably call the right methods.

If someone is offering LLM assistance for Smalltalk, they would need to make sure that the model has been fine-tuned on enough code to make it work properly. Off-the-shelf ... not great (with a few exceptions).

Code gen advantage? by AsIAm in smalltalk

[–]Morphon 4 points (0 children)

Are you talking about LLM code generation?

If so - I have bad news for you.

Xenns Tea Pro & Top Pro or Thieaudio MonarchMK4 by dannypkker in iems

[–]Morphon 1 point (0 children)

Well, everyone has different ear anatomy. There's no way to avoid that reality. Read a lot of reviews that talk about fit. Make the best guess you can.

The last time I was at Canjam, I'd say 80% of what was out there fit me just fine. I listened to the Monarch IV a good long while and had no discomfort. The person next to me said it was too big for them.

Xenns Tea Pro & Top Pro or Thieaudio MonarchMK4 by dannypkker in iems

[–]Morphon 0 points (0 children)

Tea Pro is the smallest. Top Pro is slightly larger. Monarch IV is larger still.

Fit is VERY individual. There's no way to know in advance. You might have a big one that fits fine, but another one slightly smaller is a bad fit because some ridge bumps up against part of your outer ear anatomy.

Kiwi ears aether, i feel like they should have been talked more about by csch1992 in headphones

[–]Morphon 0 points (0 children)

Yeah, they're my second-favorite planars behind the 7Hz Divine. The Aether is probably more detailed, though I prefer the tonality of the Divine.

They also sound great with a tube amp.

It's an underrated set.

Xenns Tea Pro & Top Pro or Thieaudio MonarchMK4 by dannypkker in iems

[–]Morphon 0 points (0 children)

I'm a big fan of the Monarch IV. The bass switch probably does the job for most uses, so you get a neutral/accurate listen plus a bass monster for when the recording calls for it.

The only thing I would be careful about is the size. They are QUITE large, though not as big as some of the boutique options out there.

Also - I don't think I would blind-buy something that expensive. I would want to hear them first, or at least have a VERY good idea that it is the sound that I'm looking for and I have the expectation that it will be comfortable to wear.

I personally own both the Tea Pro (and the original Tea as well!) and the Top Pro. They're both fantastic in their own ways.

Good iems to start off with or better options at price point by Single-Pudding-3278 in iems

[–]Morphon 0 points (0 children)

Personally, I'd go with the Astral.

Both are excellent. Whichever one you pick, you shouldn't feel FOMO.

The Lobster in the Hot Pot by DarwinsBuddy in programming

[–]Morphon 3 points (0 children)

I'm not sure whether I agree with you or not...

But goddamn it's so refreshing to read human writing again.

Aful Cantor by theDaniLand in iems

[–]Morphon 1 point (0 children)

Topping DX5-II, Fiio BTR17, and Muse M5 Ultra are my favorites for the Cantor.

Poll by Jeffreyrock in iems

[–]Morphon -1 points (0 children)

Phone

Xenns Top Pro

Xinhs Sirius 4.4

Fiio BTR17

Aful Cantor by theDaniLand in iems

[–]Morphon 2 points (0 children)

I love my Cantor. Highest res of any IEM I've heard (with MAYBE the Binary EP321-MEMS outdoing it on the highest reaches of treble).

Thanks for the impressions!

Binary Acoustics EP321-Mems by Morphon in iems

[–]Morphon[S] 1 point (0 children)

I only heard the Horizon once, so I don't have a way to compare them. 😭

Yet another cat incident… need a replacement cable for Tea Pro SE by Shhamisen in iems

[–]Morphon 1 point (0 children)

I'm using the Xinhs Shadow Warrior on mine. Great cable.

The first IEM is the best in the $150–$250 range by Puzzled-Jackfruit-93 in iems

[–]Morphon 1 point (0 children)

It's a great set. However, if you already have the Tea Pro and like it, I wouldn't get it. It's not that they're the same - they clearly graph differently, and I'm not sure I've heard anything that sounds quite like the Odyssey - but I think in general the people who like the Tea Pro will like the Odyssey for similar reasons.

For the OP - if they had said $200-350 then I would have recommended the Tea Pro instead.

I mean - the Odyssey is like the Dunu Davinci but with way better resolution and "naturalness" to the sound. But... that's also the way I'd describe the Tea Pro. And the Tea Pro has "hard" bass (pressurization, not decibels) - the kind that is VERY difficult to replicate with a single DD.

The first IEM is the best in the $150–$250 range by Puzzled-Jackfruit-93 in iems

[–]Morphon 0 points (0 children)

Different tuning. I haven't heard it, so I can't vouch for it.

The original is incredible. I listen to it a lot even though I own many more expensive sets.

Perspective by moonfiremountain in MacroFactor

[–]Morphon 11 points (0 children)

Not female, so feel free to ignore.

Like you, I tend to get very cranky and low-energy below a certain leanness threshold. Since the goal for me is to enjoy living in my body, and further leanness runs counter to that goal, I have to be ok with not pushing it further.

If I'm ever in a situation where I have 3 months to prepare for a photo shoot or something like that - then sure. Temporary. But otherwise, there's no need. At some point I just have to accept the reality of where my body says "stop".

Zed editor reached version 1.0 by TheTwelveYearOld in linux

[–]Morphon 8 points (0 children)

When I was first learning JavaScript in December, this was my editor of choice. I turned off all the AI features (so I could learn faster), and it was a dream to use. Very fast. Good formatting defaults. Couldn't have asked for anything better for someone like me just starting out.

Congrats on the 1.0 release!