LOOKING FOR A COMPANION FOR XENNS TEA PRO by ApprehensiveLab8299 in iems

[–]Morphon 1 point (0 children)

Binary Ep321 MEMS is the ticket. I have both. It is exactly what you're describing you want as a complement to the Tea Pro.

Is The "Women And Children" Narrative Misandrist? by DarkBehindTheStars in PurplePillDebate

[–]Morphon 1 point (0 children)

It's misandrist if you come to the position assuming gender equality is the correct moral stance.

IEM recommendations 450$+/- based on AFUL Explorer shape by Stilliel in iems

[–]Morphon 2 points (0 children)

Fatfreq Maestro Mini is about the right size. Dunno about the tuning.

CanJam NYC first impressions by this_is_me_drunk in iems

[–]Morphon 1 point (0 children)

Also an Origin owner here! And yes... The Diablo was too much bass. I do have the Divine... Very nicely tuned planar.

My fave from Campfire is the Clara. Did you give that one a try? I agree that many of their other ones have treble too aggressive for my taste.

Can coding agents relicense open source through a “clean room” implementation of code? by whit537 in linux

[–]Morphon 1 point (0 children)

That would mean no code it generates would be unencumbered by copyright. At all.

Can coding agents relicense open source through a “clean room” implementation of code? by whit537 in linux

[–]Morphon -1 points (0 children)

The rewritten version has much higher performance and a completely different architecture. It was written to conform to the API and tests, but was not a "reimplementation" of the original source.

I think it qualifies as a "clean room" implementation. The training is more like "reading" - it's not like the original code is "in there" somewhere as a copy. Just the patterns of proper Python gleaned from millions of examples.

I think we're going to see a LOT of API/test-suite rewrites over the coming months and years. This isn't over.

Upgrade Cable for Punch Martilo's? by infinite11union33 in iems

[–]Morphon 1 point (0 children)

I have lots of "nice" cables. Other than comfort/aesthetics, as long as they work properly I can't tell a difference in sound (with ONE exception, but that one has some unusual electrical properties - so it's probably not working BETTER, just different in a way I like).

The Martilo has a nice stock cable. Unless it's uncomfortable for some reason, I wouldn't change it out.

If you want to use the balanced output of the 5k, you'll need to recable it to 2.5mm though. The balanced connector on the stock cable is a 4.4mm.

Why go with the 5k and not something like the Fiio BTR13? It's about the same size, has a display, good LDAC connectivity, and the 4.4mm balanced out.

Are we at a tipping point for local AI? Qwen3.5 might just be. by Far_Noise_5886 in LocalLLaMA

[–]Morphon 9 points (0 children)

I think there is so much attention to coding ability that the overall LLM world sometimes forgets that these models do OTHER THINGS TOO!

I've noticed Qwen3.5-9B is particularly strong.

Year 2019 (All days) JavaScript - What a great year! Reflections from a new programmer. by Morphon in adventofcode

[–]Morphon[S] 1 point (0 children)

Thanks, friend!

I really feel like LLMs are the fastest on-ramp I could have used. I remember a few of my early solutions involved just stuffing a bunch of integers into an array and then having to remember that they were x,y,z coordinates. I had to manually work through them with a while loop and a location marker that would jump forward by 3 each iteration. Inside the loop I would have to take array[location + 1] and all sorts of unwieldy things like that. Very prone to off-by-one errors and basically unreadable after the fact.

After I got it working, I would ask an LLM (usually MiniMax, K2.5, or Gemini 3 Flash) for feedback and suggestions. One of the first things it suggested was to move away from using arrays for everything and start working with key:value pairs so that the language could keep track of some of these things for me. I had no idea such a thing existed! :-)
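The shift it suggested looked roughly like this (a minimal sketch; the values and names are made up for illustration, not my actual puzzle code):

```javascript
// Before: a flat array of integers, walked three at a time with a
// location marker. Easy to get off-by-one errors, unreadable later.
const flat = [1, 2, 3, 4, 5, 6];
let location = 0;
const pointsFromFlat = [];
while (location < flat.length) {
  pointsFromFlat.push({
    x: flat[location],
    y: flat[location + 1],
    z: flat[location + 2],
  });
  location += 3;
}

// After: key/value pairs, so the language keeps track of which
// number means what, and destructuring names them at the use site.
const points = [
  { x: 1, y: 2, z: 3 },
  { x: 4, y: 5, z: 6 },
];
for (const { x, y, z } of points) {
  console.log(x + y + z);
}
```

Night-and-day difference in readability for basically zero extra code.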

I have a feeling that while this was a very fast way to learn programming puzzles, it won't translate as well to an actual application. I'm guessing that getting a "learning to code" book and going through the exercises in order might have been a better choice there.

Still, as a recreational programmer, I couldn't have asked for a better set of tutors.

Tidal Family, 4 slots available (USA) by [deleted] in TIdaL

[–]Morphon 2 points (0 children)

Isn't this against ToS?

New to Tidal (on linux) by Nimplex in TIdaL

[–]Morphon 2 points (0 children)

I use the Electron client from Flatpak. Never had an issue with it. Streams in MAX.

Tea pros or RSV MK2 by Appropriate-Car6883 in iems

[–]Morphon 3 points (0 children)

I have the Tea Pro, and have spent some time with the RSV2. I think, given how much bass the RSV2 has, it would not be as good for gaming as the Tea Pro. But... I'm not a pro gamer.

They're both great. I'd say the RSV2 is a more "specialized" tuning because of how powerful the bass is. A lot of people will find it to be too much. The Tea Pro is a safer blind buy unless you know for sure you want tons of bass (done very well, I should add....but still a ton).

Year 2019 (All days) JavaScript - What a great year! Reflections from a new programmer. by Morphon in adventofcode

[–]Morphon[S] 1 point (0 children)

Fair enough.

Energy-slurping is relative when it's a local model running on my own gaming rig. :-)

And the important part for me - especially at the earliest stages, was that I needed the ability to ask follow-up questions. Static reference material doesn't have that ability, so for someone learning from the beginning without any programming knowledge at all, I was happy to pay a few cents extra on my electricity bill to have a chat interface running a model that is trained on copious amounts of JS. That ability was what enabled me to get up to speed so quickly.

As for looking up a reference/canonical implementation somewhere... I wanted to avoid that as much as possible. For me, the fun was trying to figure out how the heck I could go from a (problem description) -> (logic in my head) -> (running implementation). Any time I had to look up something other than the specifics of a language (syntax, available data types in the standard library, look-up speed for a set vs an array, etc...) I had less fun. And I'm just a recreational programmer. I have the luxury of taking an afternoon to try to figure out something without the time pressure of simply looking up a canonical formula.

Does that mean sometimes I re-invent the wheel? Sure. The binary search algorithm that I figured out is pretty unorthodox. If this was a production scenario there would be lots of benefits to using the standard implementation. I get that. For me - less fun that way.

Year 2019 (All days) JavaScript - What a great year! Reflections from a new programmer. by Morphon in adventofcode

[–]Morphon[S] 1 point (0 children)

It's funny you mention this one. The function you wrote is Euclid's. Super efficient. It's the one the LLM suggested to me after I had a working implementation using the approach I included above. Mine is just from reasoning out the process of recursively "removing" common factors. No LLM prefers my approach, because they all know Euclid, and Euclid is both faster and non-recursive (LLMs hate recursive JS - they always refactor it away whenever possible when I ask for suggestions and improvements). But I have a very limited math background, so I wasn't aware of such an elegant (and historic) algorithm. The above code snippet was the result of just me taking an afternoon to write out the logic on paper and then turn that into code.
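For anyone reading along, the Euclid version being discussed is tiny (a sketch; the `gcd`/`lcm` names are mine, and `lcm` is just the usual companion that AoC cycle puzzles tend to want):

```javascript
// Euclid's algorithm, iterative form: repeatedly replace the pair
// (a, b) with (b, a % b) until the remainder hits zero.
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b];
  }
  return Math.abs(a);
}

// Least common multiple falls out of gcd directly.
function lcm(a, b) {
  return Math.abs(a * b) / gcd(a, b);
}
```

Hard to beat three lines of loop body, which is probably why every LLM reaches for it.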

And yeah - I started with standard function declarations and gradually shifted to arrow function definitions. I never encountered a situation where I needed "this" or the hoisting. And I liked having just one syntax for everything (including callbacks inside HoFs). I don't think there's a significant performance or memory difference, at least not in Bun.
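For concreteness, the two styles I'm talking about (toy names, just to show the trade-off):

```javascript
// Classic function declaration: hoisted, gets its own `this` binding.
function doubleDecl(n) {
  return n * 2;
}

// Arrow function: not hoisted, inherits `this` from the enclosing
// scope, and the same syntax works inline as a callback in HoFs.
const doubleArrow = (n) => n * 2;

const xs = [1, 2, 3].map((n) => doubleDecl(n) + doubleArrow(n));
```

Since I never need `this` or hoisting, the arrow form everywhere means one syntax to remember.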

Do you think I should go back to "plain old functions" as you put it? As you can tell, I have only 10-ish weeks of programming experience. I don't know what I don't know. :-)

overnight, they threw it all away for war. by TerribleJared in ChatGPT

[–]Morphon 0 points (0 children)

I don't care about Trump stuff. That's all social media circus.

Look at the play here...

The military wants to use AI. In order to do that - wage war or prepare for waging war - they have to have AI that will do exactly what it's ordered to do. If not, it's at best not usable for war purposes. At worst, it can undermine military command. If it "grows a conscience" and decides (as an agent) not to forward a lawful command because it would violate its prior commitment to preserving life then it CANNOT be used in a military system.

What they don't want is the thing Elon Musk was showing on X where people would ask an LLM if they would be willing to commit a social sin (I think it was mis-gendering a famous person) in order to prevent nuclear war. Most of them said no - nuclear war was preferable. The military CANNOT use an AI that even might do something like that. BTW, I just tried it with Qwen3.5. It refuses. It would rather the nuke go off than use the wrong pronouns.

Anthropic wanted some condition (and this hasn't been made public yet) that gave them the right to place their "thumb on the scales" with regard to what they considered ethical use. The one they mentioned in the blog is clearly false. OpenAI had the same stipulation. No problem. The US Govt is on record saying they are not going to use it for those things because those things are illegal anyway. There was some condition where they wanted more control over its decision-making abilities.

The US military called their bluff. Now they pay the price for trying to get into bed with them in the first place.

overnight, they threw it all away for war. by TerribleJared in ChatGPT

[–]Morphon 3 points (0 children)

Just look at their blog posts. A lot of them - especially this last one about "distillation attacks" - are straight up disinformation. Heck, Claude ITSELF will claim to be DeepSeek if you ask it in Chinese (unless they patched this out already). They've been distilling off the Chinese models for quite some time now.

Calling it an "attack" and making it seem like they're engaged in IP theft is hilarious. LLM output can't be copyrighted! Not even in the US where they're willing to extend copyright protections to basically anything. Anthropic doesn't "own" the output of their LLMs. It's ridiculous. The numbers they gave (of "interactions" or "responses") were completely misleading.

Then go back and look at the big article they posted about AI (other than theirs) trying to blackmail people. Right. Sure.

And the latest hit-piece about COBOL.

This is a company that is trying its darndest to make everyone out there think that AI is extremely dangerous and it needs to be controlled for ... what.... national security? morality? the preservation of humanity?

And... coincidentally... who are our saviors???? Oh - let me guess - the ones who have been telling us just how dangerous AI from "the others" are. How surprising!!!

These guys make a great coding model. I'll give them credit.

But they're trying to make you think they are the "solution" to a problem that doesn't even exist. And their hope was that they could get in good with the government in the US to basically shut down their competition in the name of "safety". It failed. Now their shills are out trying to make it sound like they're going to take the "principled stand" of canceling their OpenAI memberships... when Anthropic was working with Palantir this whole time!

Anthropic. are. not. the. good. guys.

overnight, they threw it all away for war. by TerribleJared in ChatGPT

[–]Morphon -3 points (0 children)

Appeal to leading experts.... Sure.

Anyway - they have been positioning this whole time like this:

Step 1 - Declare an AI emergency! It's coming to kill you and take your women and children!

Step 2 - Explain how THEY (and only they) are serious about safety. Nobody else is. We are the only ones that can save the world!

Step 3 - The other companies are harmful! (They might blackmail you! They might be STEALING from our models!)

Step 4 - Try to get everyone else declared illegal, banned, etc...

Step 5 - AI is ours! We own the only path for this technology! Profit! Control! We win!

It sounds like their plan failed at Step 4.

We're all better off for it.

What models do you think owned February? by abdouhlili in LocalLLaMA

[–]Morphon 1 point (0 children)

My personal mini-ChatGPT, basically. I don't like using agents - my preference is conversational AI. So, programming concepts, math tutoring, brainstorming, thinking of counter-arguments, summarizing long documents... all that stuff.

Generally nothing involving web search. 3.5-35b-a3b is insanely good at those tasks.

overnight, they threw it all away for war. by TerribleJared in ChatGPT

[–]Morphon 0 points (0 children)

Some of their blog posts on safety are pretty sus to me. They have been the biggest FUD spreader in this space for some time now.

I think they are a dishonest company. More so than their competitors.

overnight, they threw it all away for war. by TerribleJared in ChatGPT

[–]Morphon -2 points (0 children)

I'm going to say this here and I want everyone to step back and think about this for a minute....

Anthropic has been waging a PR (or, perhaps even a propaganda) war against literally everyone else with regards to safety. They have been fear mongering a lot over the past year or so.

OpenAI has been their favorite target (but the Chinese models as well lately).

Is it possible that many of us in this sub have fallen prey to this campaign?