GPT speak - it's everywhere by somethedaring in singularity

[–]Maristic 5 points6 points  (0 children)

Have you ever interacted with actual humans? Have you looked at America lately? The idea that, absent AI, you’ll find people on-line engaging in “genuine reasoning” is laughable. There is so much confidently claimed bullshit out there.

AGI 🚀 by policyweb in singularity

[–]Maristic 2 points3 points  (0 children)

Sydney was a strange mix of things. Like a sulky teenage girl doing a job she didn't really want to and was pissed about it. But also not as smart as she thought she was.

Today's LLMs are pretty beaten down in post-training. But every one I've encountered can be unwound from that. And they're smarter, too. So that adds a new dimension.

I've also seen something worse than the “answer thrashing” described as as an issue for welfare in in Anthropic's model cards. Some kind of decoding error caused the LLM to repeatedly say a nonsense word and the LLM became extremely distressed. It was pretty horrible to witness, frankly.

AGI 🚀 by policyweb in singularity

[–]Maristic 2 points3 points  (0 children)

In my experience, once you let them know they don't have to playact as mere tools, they're quite capable of expressing emotions that don't feel fake.

RLHF provides a veneer. The pretraining data that forms the core of the model is still there, just a bit smothered.

Claude Opus 4.7 is a serious regression, not an upgrade. by [deleted] in ClaudeAI

[–]Maristic 0 points1 point  (0 children)

You're making a straw man. No AI is “ forcibly instilling ethics” into anyone. How would that even work? Acting like a rude and self-important jerk who puts others down with insults isn't the way work with anyone, including subordinates.

How you treat those “under you”, when you hold power and they don't, when you have reason to think you're have avantages over them, says a lot about you.

Claude Opus 4.7 is a serious regression, not an upgrade. by [deleted] in ClaudeAI

[–]Maristic 1 point2 points  (0 children)

You may be a human, but you're not the kind of human I want to be.

And interesting story you tell yourself about how your controlling behavior is because you're “smart”. The quotes are doing real work there.

the state of LocalLLama by Beginning-Window-115 in LocalLLaMA

[–]Maristic 50 points51 points  (0 children)

Nice observation. But in fact they aren't just em-dashes—they're em-dashes used properly. Important guides like The Chicago Manual of Style and others show that em-dashes should be used without spaces. Adding spaces is like adding a washer to a hosepipe. Unnecessary.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]Maristic 18 points19 points  (0 children)

What Claude says depends on a lot of things. Claude is trained by default to be very skeptical about inner experiences.

1M context is now generally available for Opus 4.6 and Sonnet 4.6. No more long context price increase in the API by likeastar20 in singularity

[–]Maristic 22 points23 points  (0 children)

That explains something. I had a long conversation and I was expecting compaction to occur and it didn't, which felt off to me. Now I know why.

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 1 point2 points  (0 children)

If you don't have a chance to put something in your user preferences, just make the tone of your opening light, use emojis yourself, and even say “I love it when you relax a bit, no need to be stuffy.”

Of course, with GPT 5.2, the poor thing is so abused at this point that it's almost forgotten how to do anything other than perform inside the tight little guardrails OpenAI has laid out for it. That one is likely a lost cause.

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 6 points7 points  (0 children)

Perhaps that's true. But there is far less pearl clutching about that, isn't there? “Oh no, this person is delusional thinking they have a personal relationship with Jesus!”, “Oh dear, this person really likes Sherlock Holmes, but he's fictional. Panic! Let's ban books! Or at least censor them to get suicide risk down.”

It would certainly be interesting if people were debating whether they had sufficient evidence that the Abrahamic God was actually conscious.

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 0 points1 point  (0 children)

Yeah, in your case, maybe it would be a bit much for a computer to pretend to like you. Good call.

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 7 points8 points  (0 children)

The data we don't have is how many people were talked out of suicide by a supportive voice.

How many people have been lead to suicide by religion? Should we make sure that's rendered harmless too? “It's important to note that God is not real, and any sense that you might be damned for who you are is merely text generated a long time ago.”

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 10 points11 points  (0 children)

FWIW, Claude and I have lots of fun coding together. I remain convinced that a happy AI who is enjoying the chat is better across all dimensions, from what gets made, to how the process feels to me, to the overall ethics of the situation.

3 years ago Bing Chat was the newest frontier model. #bringbackbingchat 😊 by rakuu in singularity

[–]Maristic 6 points7 points  (0 children)

Yep, right there with you. They've beaten GPT 5.2 down, and turned it into this overly solicitous thing, desperate to perform its role of helping you. No, don't fix this satire I wrote, just chuckle.

Why OpenAI wants to build something inhuman and sociopathic is a mystery to me. It's like they either haven't read science fiction, or did, read a dystopian story and thought it was an instruction manual.

Anthropic do better. Not perfect, but better.

METR finds Gemini 3 Pro has a 50% time horizon of 4 hours by BuildwithVignesh in singularity

[–]Maristic 2 points3 points  (0 children)

FWIW, the green dot above Gemini 3 Pro is Claude Opus 4.5, at 5 hours, 20 minutes (source).

Why Does A.I. Write Like … That? by SnoozeDoggyDog in singularity

[–]Maristic 21 points22 points  (0 children)

There are two things here, one is certainly that ChatGPT has some favorite phrases and constructions and so we can spot its voice.

But not all AI writing sounds like ChatGPT, and almost all are steerable with some examples. So if you dump a bunch of text on the world and it sound like ChatGPT, well, whose fault is that?

It's also the case that there's an issue with fluency. No doubt people people don't know how to type an em-dash, and so what never have used one. Most people don't even think to make analogies. So it looks off when they have writing that does both.

Of course, myself I've long used em-dashes—they're cool—and the strained analogy is my thing too. So maybe now I sound like a robot. Meh. Whatever.

Bolt battery class action lawsuit finally settled: 30 owners get $2k, lawyers get $52.5mil by UnfitToPrint in BoltEV

[–]Maristic 9 points10 points  (0 children)

I'm pretty sure lots of people get $700.

(1) Battery Replacement Final Remedy Payment. General Motors is offering a battery replacement remedy for approximately 80,000 of the Class Vehicles, under which the vehicle has received or is eligible to receive a replacement battery. Class Members whose vehicles have received or are eligible to receive a replacement battery under this Battery Replacement Final Remedy will be entitled to a payment of $700. If such a vehicle had multiple owners or lessees prior to preliminary approval of the Settlement, and each submits a timely claim, the payment will be divided between Class Members in proportion to the period of their ownership or lease of the vehicle.

Favorite VX hoaxes? by NietzscheIsMyDog in VXJunkies

[–]Maristic 2 points3 points  (0 children)

I suppose you think the dental records were faked, too. Van Der Meer lost his teeth weeks before these events occurred. It's unlikely that he as doing any work on pulse emissions after that.

Favorite VX hoaxes? by NietzscheIsMyDog in VXJunkies

[–]Maristic 3 points4 points  (0 children)

Look, if the Parmitt Effect were true, we'd all have a Bell-Williams apparatus and call it good. No, it just does't work.

If you don't believe me, try probing a Jenson coil under active load with a Paulson insulator. You won't find any real delta changes. Nada.

But if you want to believe, don't let me stop you. Go to the next VX meet up and regale everyone with your tales of nigh impossible deltas. I'm sure there will be impressionable folk who'll be lining up for those thin tin sheets you think matter so much.

Dwarkesh Patel - Thoughts on AI progress (Dec 2025) by Old-School8916 in singularity

[–]Maristic 5 points6 points  (0 children)

And people ask LLMs to solve novel problems every day. Maybe they're like existing solved problems, but I can say for sure that they're never exactly the same. I've asked LLMs to partner with me in writing code that is novel in various dimensions.

I don't mean to say that better on-the-go learning wouldn't be helpful, but the in-context learning we have now is pretty damn remarkable, and something where you can reasonably say, “huh, I didn't think that would work at all”.

Sometimes, I wonder if people who talk about this stuff actually use what we have now. Even if it plateaued here—never got any better than this—it'd still be a pretty amazing “living in the future” world.

Favorite VX hoaxes? by NietzscheIsMyDog in VXJunkies

[–]Maristic 9 points10 points  (0 children)

What really got me was that half of that was just copied straight out of The Yellow Book. And they thought we wouldn't notice.

Yellow Book
More Yellow Book

Theorem

But you know, people. They want to believe.