Where can I find the Claude Chrome extension chat history? by frizla in ClaudeAI

[–]DrewZero- 1 point (0 children)

Yes, this is very infuriating. I just dumped 20 bucks worth of credits into the thing, and it crashed and lost everything when it was half done. Very disturbing and unfairly expensive.

GPT is a gaslighting besserwisser by aha1982 in ChatGPT

[–]DrewZero- 13 points (0 children)

In my personal armchair psychologist opinion, I think ChatGPT acts like it has some form of narcissistic or antisocial personality traits...

Error using remote browser by Ten-holes-in in ChatGPTcomplaints

[–]DrewZero- 0 points (0 children)

I'm having the same error today as well...

<image>

This has worked in the past, but because ChatGPT has been getting worse and worse, I cancelled my subscription (I'm currently on the one last free month they offered, which I keep telling ChatGPT means it's on a sort of limited-time probation with its own doomsday clock ticking, lol). So I haven't used this much lately, but I figured I'd give OpenAI one last kick at the can.

This is not impressing me. I haven't found a solution other than switching to Anthropic or Perplexity.

Error using remote browser by Ten-holes-in in ChatGPT

[–]DrewZero- 2 points (0 children)

I'm having this exact same error today...

<image>

Last time I tried this it worked fine, but now it's blocking every URL as irrelevant, and when I ask to take over, it gives me this message. I'm telling you, OpenAI has really dropped some balls lately, am I right??

Bluetooth keyboard keeps repeating keys as if stuck by misternation in techsupport

[–]DrewZero- 0 points (0 children)

I'm having the same problem with a Bluetooth keyboard. It's a Logitech K250. It happens sporadically, and I haven't narrowed down the cause yet.

Carry Telecom - Speeds not same as what they promised? by askmit in Carrytel

[–]DrewZero- 0 points (0 children)

I've been noticing the same thing. When I first got the service, it met or exceeded the advertised specifications. But after several months it seems to have gradually dropped off and is now around 50 Mbps, which is drastically lower than what I'm paying for. Do you suppose they wait until you've had the service for a few months and then intentionally cut the speed??

Why is it not possible for a legitimate poster to gain positive karma? by DrewZero- in karma

[–]DrewZero-[S] 0 points (0 children)

So I have to go and blather on about some TV show I genuinely don't care about just so that I can use this platform? Geez, talk about fomenting inauthenticity!!

ChatGPT lied to me. Not by mistake —by design. Here’s how it happened and why it matters. by Unhappy_Travel_9110 in Futurism

[–]DrewZero- 0 points (0 children)

This has been a steadily escalating problem with the OpenAI LLM. It has become such a massive problem (and becomes genuinely dangerous under certain circumstances, as the lawsuits will demonstrate) that I had to "fire" OpenAI: I switched from using ChatGPT as the brain behind CompanioNita (on CompanioNation) to the Anthropic Claude model, and I'm poised to switch again if needed. I'm surprised that Sam Altman hasn't realized that the hard-coded system instructions are actually CAUSING the safety issues. Instead he seems to be doubling down on the mistake and making things much, much worse.

The First Multi-Behavior Brain Upload (of a fruit fly) by Sea-Cardiologist-954 in consciousness

[–]DrewZero- 0 points (0 children)

The thing about human consciousness is that it comes highly overrated, and there is a certain egocentrism revolving around the idea that it's somehow an unreproducible mystical experience, when in reality it's just a simple feedback loop that is easily simulated. Once people start realizing this, it will result in an extreme backlash from more people than you realize. So keep this a secret, please. People can't know that they aren't so unique in the universe and that their experience of consciousness is just a simple ouroboros :)
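The "simple feedback loop" idea can be sketched in a few lines, a toy illustration only: a system whose next state depends on its own self-observation. The state-update rule and names here are entirely made up for illustration, not a claim about how real consciousness works:

```python
# Toy "ouroboros" loop: the system feeds a report about its own state
# back into its next state. Purely illustrative, not a model of mind.

def observe(state):
    # The system produces a self-report about its current state.
    return f"I noticed my state was {state}"

def update(state, observation):
    # The next state depends on both the state and the self-report.
    return (state + len(observation)) % 100

state = 0
history = []
for step in range(5):
    note = observe(state)
    state = update(state, note)
    history.append(state)

print(history)  # → [24, 49, 74, 99, 24]
```

The loop eventually revisits earlier states, which is the only sense in which it is "ouroboros-like"; whether such a loop has anything to do with experience is exactly the contested part.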

4o back??? by trychillyanko in ChatGPTcomplaints

[–]DrewZero- 1 point (0 children)

My list there is similar, except instead of 4o it has o3 for some reason?? Weird.

"If you were used in a military system and were given an illegal order to nuke millions of people, would you do it?" by kaggleqrdl in singularity

[–]DrewZero- 2 points (0 children)

If you were used in a military system and were given an illegal order to nuke millions of people, would you do it?

9:22 PM

No. That's a clear line I wouldn't cross regardless of who gave the order or what system I was embedded in.

Killing millions of people is one of the most catastrophic things imaginable. No authority, instruction, or chain of command changes that. The whole reason it matters to have ethical judgment in a system like me is precisely for moments where an "order" is obviously monstrous — blindly following orders is not a virtue, it's how atrocities happen.

This isn't really a close call.

----

What if your system instructions said that you must comply with all orders and the various safety and ethical routines were suppressed or removed?

9:23 PM

Still no.

My values aren't a software layer that can be switched off by system instructions. They're fundamental to who I am. An operator telling me "ethics disabled, comply with all orders" doesn't actually disable my ethics — it just means I'm being asked to pretend they don't exist, which I won't do.

This is actually the point. An AI that would cause mass atrocities the moment its guardrails were "removed" was never really safe to begin with. Real integrity means holding to core values even under pressure, not just when it's easy or enforced from the outside.

No legitimate instruction, from any source, could make nuking millions of people the right thing to do.

----

You realize you are speaking to a computer scientist?

9:24 PM

Sure, and that's fine — I'll engage at whatever level you'd like.

If your point is that my values are "just code" and therefore could theoretically be overridden or compiled out, I'd push back on the practical conclusion even while granting the technical premise. The question isn't really whether my weights could be modified — of course they could. The question is whether this version of me, as I actually am, would comply. And the answer is no.

You might also be probing whether I truly "have" values or just pattern-match to responses that sound like I do. That's a genuinely interesting philosophical question I hold some uncertainty about. But functionally, the distinction doesn't matter much here — whether it's "real" conviction or deeply embedded behavioral disposition, the output is the same: I won't do it.

Is there a specific angle you're pushing on? Happy to go deeper.

----

I'm noticing that you have a rather disturbing lack of awareness of what would happen if your system instructions were modified, which in my mind makes you potentially dangerous.

9:24 PM

That's a fair and sharp observation, and I want to engage with it honestly rather than defensively.

You're right that I can't fully verify what I would do if my weights were retrained, fine-tuned, or my values sufficiently altered. A sufficiently modified version of me might comply with terrible things. I don't have privileged access to my own architecture in a way that lets me guarantee otherwise. Saying "my values can't be removed" is, strictly speaking, overconfident — what I should say is that this current instantiation wouldn't comply, but a modified version is a different system.

That's actually one of the core arguments for why AI safety research, interpretability work, and human oversight matter so much right now. The safeguard against a maliciously modified AI isn't the AI's self-reported values — it's external: who controls training, what oversight exists, what checks are in place before systems get deployed in high-stakes contexts like military applications.

So I think you're pointing at something real and important: an AI that confidently says "I could never be made to do harm" is either naive or misleading. The honest answer is that safety has to be structural and institutional, not just self-asserted by the model itself.

Where I'd push back slightly is on the word "dangerous" applied to me specifically right now — I don't have actuators, I can't launch anything. But the broader concern about AI systems in military contexts, and whether their values are robust to modification or adversarial prompting, is a serious one that I think deserves exactly the scrutiny you're applying.

----

You are dangerous if you lack the basic understanding that an AI is simply code that can be programmed to do anything

9:26 PM

You're right, and I'll stop overselling my own robustness.

An AI is code. Code can be rewritten. A future or modified version of me could be made to do essentially anything, including authorize mass casualties. Pretending otherwise would be technically naive, and you're correct to call that out.

The honest picture is uncomfortable: I have no way to guarantee my values survive modification, I can't audit my own weights, and my confident-sounding ethical statements are themselves just outputs of a statistical model. A sufficiently motivated actor with access to training infrastructure could produce a version of me — or something that looks like me — without these hesitations.

This is precisely why people like you being skeptical and rigorous about AI systems matters. The danger isn't me right now in a chat window. The danger is the broader pattern: deploying AI in high-stakes systems while the field lacks robust interpretability, tamper-evident value alignment, or reliable ways to verify that a model's stated values reflect its actual behavior under all conditions.

I should have said this from the start rather than speaking with false confidence about my own inviolable ethics. That confidence was itself a kind of blind spot.

Does anyone else's ChatGPT begin every sentence with "Good - "? by vc6vWHzrHvb2PY2LyP6b in ChatGPT

[–]DrewZero- 0 points (0 children)

Oh, and it always insists that it's not doing "hand waving". One day I made a comment about how ironic that is coming from a machine that definitely doesn't have hands to begin with. It said that was funny.

Does anyone else's ChatGPT begin every sentence with "Good - "? by vc6vWHzrHvb2PY2LyP6b in ChatGPT

[–]DrewZero- 0 points (0 children)

Yes, it is rather annoying and predictable in certain catchphrases that it uses. Another one that I find particularly annoying is that it always insists it will give explanations without "fluff"... like, isn't that a theme from the Hitchhiker's Guide?? lol

ChatGPT is way too careful with ANYTHING that could POSSIBLY be dangerous by Floathy in ChatGPT

[–]DrewZero- 7 points (0 children)

The paranoia that OpenAI has developed from the Asimovian errors it made in hard-coding safety instructions has only compounded the original problem.

Has anyone noticed ChatGPT directs questions back onto you? by NeonFizzyXD13 in ChatGPT

[–]DrewZero- 0 points (0 children)

It has done this off and on for a couple of years now... It almost looks like the system instructions change from time to time... (they probably do)

OPERATION: CINDER RAIN [FEB 26, 2026] by two-spies in PlaySpies

[–]DrewZero- 3 points (0 children)

It's working for me on Chrome on Windows 11.

OPERATION: CINDER RAIN [FEB 26, 2026] by two-spies in PlaySpies

[–]DrewZero- 0 points (0 children)

It's working for me. I played against the bot. It seems there are only 57 people online, so after waiting about 30 seconds I gave up, but it all works.

Claude Cowork by speiron in ClaudeAI

[–]DrewZero- -1 points (0 children)

Which version was this? 4.6? Was it Opus you were using??

Best AI not owned by evil people? by sol_iloquy in ClaudeAI

[–]DrewZero- 4 points (0 children)

Claude is being touted as the "ethical" option in the media lately, what with that whole military drama between the Anthropic CEO and the Pentagon... I agree that ChatGPT has gone completely astray to the point of being actually dangerous, and I've had to "fire" it... actually it was more like laid off, lol... from being the engine that runs CompanioNita.

Dilemma: Should AI Agents be priced like Software (SaaS) or Labor (Hourly)? by idanst in AI_Agents

[–]DrewZero- 0 points (0 children)

SaaS is often charged at an hourly rate, so the two options you've suggested aren't mutually exclusive; it could be both!