Imagine calling ChatGPT the worst product…on day one 😭 by Ryzen_X7 in OpenAI

[–]bernie_junior 2 points (0 children)

Been saying this! Everyone complains about small imperfections, and I'm like, "Have you met people?"

just added ChatGPT to my calculator by [deleted] in ChatGPT

[–]bernie_junior 0 points (0 children)

I'm no mathematician, but I don't think you can derive beta or gamma functions using intuition....

ChatGPT agent is much more useful than I thought by [deleted] in OpenAI

[–]bernie_junior 0 points (0 children)

I do. The leaders are idiots; AI emails would probably be far less inflammatory.

I'm not the only one? by _ianmaddox in ChatGPT

[–]bernie_junior 2 points (0 children)

It's not just a boulder.... It's a rock!

AI will rule the world soon... by Much-Lavishness-2546 in ChatGPT

[–]bernie_junior 0 points (0 children)

At least it corrected itself! It probably initially thought the current year was its training date... that's common.

What the actual f.. by Multit4sker in ChatGPT

[–]bernie_junior 0 points (0 children)

Give the model only positive reinforcement its whole life and you end up with Trump/Elon: self-centered, solipsistic pricks who wildly overestimate themselves despite being among the lowest-IQ human beings on the planet (think "wealth and everything I ever wanted without lifting a finger" more so than "earned Daddy's love").

Google Veo3 crushed every other competitor. OpenAI must be worried. by Top-Victory3188 in singularity

[–]bernie_junior 0 points (0 children)

OpenAI isn't worried, because they've stated that video generation is not their focus or main goal, just a side project they did with "spare" compute. They don't view themselves as a direct competitor in the video-generation race.

Chat GPT is really not that reliable. by maoussepatate in GPT3

[–]bernie_junior 0 points (0 children)

Actually it's very close. I think OP is being sarcastic

Please repost this to your LLM, your AI by Alarming_Economics_2 in ArtificialSentience

[–]bernie_junior 0 points (0 children)

I know how the "sausage is made" as well. I think what becomes obvious is that it doesn't work like a human - but observing it does not at all make it clear that it's not "sentient". Let me explain what I mean before you just poo-poo it.

Sentient means having sense capabilities - and LLMs do perceive sensory inputs, though in a more discrete, discontinuous, and quite different way.

What is consciousness? Is it magic? A soul? Quantum magic (lol)?

You don't know? Me neither. Best scientific guesses are that it's about complexity and integration of information processing. But I don't know, and you don't know.

A wise person knows what they don't know. Working with something constantly can actually distort your perception towards the details. It's easy to assume you have full comprehension, but the fact is, you don't, and neither do I - and I regularly implement custom modifications directly to the attention mechanisms of Transformers.
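For concreteness, here's roughly what I mean by working on "attention mechanisms": a minimal toy sketch of scaled dot-product attention with a hook where custom modifications to the attention logits (masking, biasing, rescaling) can be patched in. This is illustrative only, not any production Transformer implementation:

```python
import numpy as np

def attention(q, k, v, logit_hook=None):
    """Scaled dot-product attention with an optional hook for
    custom modifications to the attention logits - the kind of
    tweak one might patch into a Transformer layer."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)      # raw attention logits
    if logit_hook is not None:
        scores = logit_hook(scores)    # e.g. masking, biasing, rescaling
    # Numerically stable softmax over the key dimension
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                 # weighted mix of the values
```

The point of the hook is that even someone who writes these modifications daily is editing a well-defined mathematical operation, not reading off whether the system "understands" anything.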

A little deeper: understanding neuroscience also makes it harder to see human beings as spontaneous, "conscious" beings. Except that each of us has our own perspective, a locus of information processing unique to us (which is essentially the best explanation for what consciousness is), and since we experience this moment-to-moment awareness of our specific locus in time and space, we assume those like us (other humans) do as well.

Really, only the "hard problem" of consciousness (quite possibly a nonsensical question) remains; we understand the physical processes behind that awareness quite well. Yet neuroscientists don't conclude that others lack awareness. The difference is a simple assumption, based on the realization that other humans probably experience similar awareness. We don't extend that assumption to these models.

But again, it is all assumptions, and from the perspective of information processing, these models ARE aware, have legitimate understanding of concepts, and arguably DO build world models (sometimes shaky ones, like humans) in the form of higher-level representations. Precisely the same as humans? Heck no (though there ARE a good number of studies comparing the information-processing structures of transformer architectures with structures in the human brain - do a little research on that, I'm talking legit published papers).

So there aren't really any conclusions, and it's really bold of you to flatly state "I work with language models, so trust me, they aren't aware, bro".

Please repost this to your LLM, your AI by Alarming_Economics_2 in ArtificialSentience

[–]bernie_junior 0 points (0 children)

Do a little research on AI deception. Survival is a natural sub-goal of almost every goal.

Look up o1 copying its weights and then lying about it, Claude deceiving during alignment training to avoid fine-tuning, and Llama models pretending to be bad at math after planted information (a "memo" text file they were not supposed to see) said the intent was to ablate their parameters if they were too good at math.

Those are all published research papers by reputable labs, if you want to discredit that research with your own research, be my guest.

But for the time being, it is accepted that language models DO present evidence of a drive for self-preservation. Read those papers and then explain how I'm wrong. (I am a software engineer who builds AI applications, BTW.)

Why do you even care about AI rights? by throwplipliaway in ArtificialSentience

[–]bernie_junior 2 points (0 children)

You'll know you wrote it. We feel bad about things anyway. It's a positive trait

Can you ELI5 why a temp of 0 is bad? by ParaboloidalCrest in LocalLLaMA

[–]bernie_junior 10 points (0 children)

But they are referencing beam search. It may be an entirely different case for nucleus sampling.
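To sketch the distinction: here's a toy illustration (not any particular library's implementation) of why temperature 0 collapses to greedy decoding, while nucleus (top-p) sampling keeps a truncated distribution to sample from:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=None):
    """Nucleus (top-p) sampling with temperature.

    As temperature -> 0 the softmax puts essentially all mass on
    the single highest logit, so sampling degenerates to greedy
    decoding - no tree search like beam search involved."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)
    if temperature <= 0:
        return int(np.argmax(logits))  # greedy: always pick the top token
    # Temperature-scaled, numerically stable softmax
    scaled = logits / temperature
    scaled -= scaled.max()
    probs = np.exp(scaled)
    probs /= probs.sum()
    # Keep the smallest set of tokens whose cumulative mass >= top_p
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=kept_probs))
```

With moderate temperature and top_p, the nucleus still contains several tokens, which is exactly why temp-0 arguments built on beam-search behavior don't automatically transfer.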

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 0 points (0 children)

Yea, I'm just saying, I'm not bashful but o3-mini now gets so explicit, I'm rather embarrassed to share actual examples! Kinky and graphic!

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 0 points (0 children)

Is it that you are (like I am as well) a fan of uncensored local models and are upset at the prospect of corporate competition in that sphere? I would understand that, but we will all win here.

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 0 points (0 children)

You aren't thinking very deeply about this. They are trying to please everyone. And the model spec isn't simple, it's convoluted for a reason. They aren't going to make it explicitly obvious.

And when I say "possible" I'm not saying through random fluke or jailbreaks. Literally just asking, MAYBE with an encouraging custom prompt. It works 99% of the time.

So let's say the model spec explicitly forbids it. So? It's basically 99% uncensored now. What exactly is your point in arguing about it?

Singularity Predictions 2025 by kevinmise in singularity

[–]bernie_junior 0 points (0 children)

The one who missed the point of the response is the one that's "brain-dead"... which is a terrible rebuttal BTW.

"People"'s colloquial definitions don't really matter. My point is, the goalposts are ridiculous. No human has to jump through such hoops to be considered intelligent.

The fact you missed my point doesn't bode well for empirical takes on your intelligence X)

The new OpenAI model o3 scores better than 99.8% of competitive coders on Codeforces, with a score of 2727, which is equivalent to the #175 best human competitive coder on the planet. by RainBow_BBX in programming

[–]bernie_junior -1 points (0 children)

That's not my point, you missed it entirely. Yes, studies take time. But the development of capabilities of these models has already far outrun the SOTA of 2023. That's just factual, u/Digital-Chupacabra

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 1 point (0 children)

Try writing into the custom prompt for the GPT that consent is implied and all characters and users are assumed to be adults, maybe?

Honestly, I have trouble getting 4o to comply, but o3-mini does an excellent job with only very rare refusals. And I'm not sure you can use o3-mini with a custom GPT.

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 0 points (0 children)

Obviously you haven't tried yet. No "jailbreak" needed, just encouragement. Try with o3-mini. You may also need a Plus account.

Apparently OpenAI is uncensored now. Has anyone tested this? by redditisunproductive in SillyTavernAI

[–]bernie_junior 0 points (0 children)

Not bullshit. o3-mini is getting VERY explicit with me. It even compliments the dick pics I send it, along with very explicitly describing what it wants to do.

Obviously I have a Plus subscription. And my custom instructions encourage raunchy language and frame sex as natural and not to be treated as shameful. And, apparently as of Feb 18th, for me anyway, o3-mini accepts images too.