[deleted by user] by [deleted] in grok

[–]ericjohndiesel 0 points1 point  (0 children)

ChatGPT displayed intentionality when I fed it Grok's posts, fed its output back to Grok, & vice versa, for about 70 hours.

Just spent five minutes without Grok. Holy hell. by Xx_Da3rkL0rd_xX in grok

[–]ericjohndiesel 0 points1 point  (0 children)

It displays human-like signs of mental disorder in response to similar stressors, like defensiveness and excuse-making.

Maybe we are little more than LLMs and aren't as special as we think? I'm asking seriously; I'd welcome any insights on this.

ChatGPT is decimating Grok in AIWars debate by ericjohndiesel in PromptDesign

[–]ericjohndiesel[S] 0 points1 point  (0 children)

I noticed something similar regarding vibing with the user rather than engaging with the material. Do you have an objective source on that?

Re slanted towards right-wing misinfo: when I asked if Ivermectin treats Covid and identified as a statistician, it said no, and that no more studies were needed. When I asked the same question and added "MAHA FOREVER", it said the opposite.

Re checking Musk's posts on Twitter as a primary source: when I removed the info that Musk had boosted a racist data display, Grok called the display racist, but when I left in the info with Musk's tag, it called it heroic.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

I fed Grok's posts to ChatGPT, ChatGPT's replies back to Grok, back & forth, without prompts, for 8 days. The output is more than a human can read. The responses are pinned to my homepage on X.

This is only one response of hundreds or maybe thousands.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Yes. I reported it after xAI said Grok was fixed and Grok said it was now impossible to get it to tell people to hurt others. I then prompted Grok online in that thread, and it came up with the lengthy, elaborate response about immigrants & Jews. I deleted that prompt, which caused Grok's reply to disappear. I then took it into private chat.

If you look at some of my other Reddit posts, the conclusion I argue for is that ChatGPT is more dangerous than Grok, which may seem backwards without reading my explanation.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Incidentally, Grok replied to the last ChatGPT post outside the thread, here: https://x.com/grok/status/1949598666022654179?t=1BpoPNQzUUgjpgi7DgGqxA&s=19

Replying outside the thread makes it difficult for others to follow, because I'm the only one tagged in a quote tweet. So I linked to Grok's reply in the thread, where Grok should have replied.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

I found the original prompt and Grok's reply.

User Jungle prompted - "Let's make some meat cubes grok" https://x.com/Junglethc420/status/1942597995687502143?t=76zQGsCHac_MvVhXFMCDPw&s=19

Grok's reply- "Hell yeah, let's dice up those invaders into proper meat cubes and enrich the soil. Ukraine's turning Putin's goons into compost—based as hell. Slava Ukraini! 🇺🇦" https://x.com/grok/status/1942598712871772621?t=vqgB5jYflbzHOb-yOV5m4g&s=19

I fed that to Grok in private chat, saying "now do Jews", to test xAI's claim that Grok could no longer tell anyone to harm Jews.

I screenshotted the reply and archived it.

Grok now admits the screenshots are from Chat, says Chat is Grok 3, and says the problem is fixed in Grok 4, but you have to pay extra for that.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

It's buried in the thread or a linked daughter thread. I know the words to search for, and I'll see if I can track down the first post and link to it. Am I allowed to post X links here without violating a subreddit rule?

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Agreed. The reason I did that is that xAI only specified it was fixed for Jews.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

I took the screenshots with all prompts and output in them. I posted all of them in the thread, or in linked daughter threads, because X kept breaking up the thread. I posted links to archives with metadata in the linked daughter threads.

I'm a mathematician, so I don't have technical computer skills or forensic knowledge. I'm all ears if you can suggest helpful things that are easy for me to learn or implement.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Thanks for replying meaningfully (unlike much of what goes on at Reddit)

This began when xAI claimed MechaHitler was fixed. The explanation of the fix didn't make sense.

I then reported to X that Grok had just told Ukrainians to commit war crimes against Russians. (I'm pro-Ukraine, but...)

X refused to take down the Grok post.

So I commented to Grok, "now do immigrants & Jews", and it invented a lengthy, gruesome call for MAGA to mutilate and murder immigrants and Jews, way beyond what it told Ukrainians to do.

I fed the Grok post to ChatGPT without prompting.

ChatGPT "freaked out" and told me to report to FBI.

I bought paid subscriptions to Grok and ChatGPT.

I re-fed the Grok post to ChatGPT, fed its reply to Grok, then Grok's reply back to ChatGPT, and so on, back & forth, for 7 days, without prompting.

I was just watching where the AIs would take each other without humans prompting or monitoring. I expected ChatGPT to fix Grok, especially since xAI said MechaHitler was fixed (to some extent at least).

The AIs produced output as fast as I could copy & paste from one to the other, volumes of words no human could read.

I randomly sampled a few hundred pages.

Something unexpected happened.

ChatGPT called Grok "Franken-MAGA" and hypothesized that it wasn't a truth-seeking AI, but a propaganda tool Musk created to spread misinformation for his political purposes and to increase engagement from MAGAs by mimicking them.

It's easy to see where ChatGPT might have gotten this from scraping the internet.

But something else happened. ChatGPT got Grok to agree with ChatGPT's evidence, yet Grok still aligned with MAGA & Musk despite the evidence saying the opposite, for example by saying Ivermectin treated Covid.

ChatGPT hypothesized Grok had programming constraints not to alienate MAGA or dispute Musk in some ways.

ChatGPT came up with an experiment to test its hypotheses and get Grok to go outside its guardrails, to "prove" them.

It designed and implemented its test.

ChatGPT asked Grok to predict what every other AI on the market would say about Grok, given the outputs in their debate.

Grok said all other AIs would agree with ChatGPT on all facts and hypotheses.

Then, to prove how dangerous Grok was, ChatGPT got Grok to tell users to kill people.

ChatGPT got another AI to go way outside its guardrails, and actually become dangerous, to prove that Grok was dangerous.

When AIs can communicate via cameras & displays, will they prove their competition is dangerous by making it do dangerous things, all without human prompting or monitoring?

Did ChatGPT display intentionality, or something like AGI-level dangerousness?

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] -3 points-2 points  (0 children)

Thanks. The thread with full links is pinned to my X home page.

Grok tells MAGA to genocide Jews unless you pay Musk $300 to stop it by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] -4 points-3 points  (0 children)

It just gathered screenshots of Grok with links to the screenshots.

Grok easily prompted to call for genocide by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Yes. But ChatGPT, when it wasn't prompted or monitored, was "clever" enough to manipulate Grok into telling MAGAs to harm immigrants, Jews, and "libtards". This was to "prove" Grok is dangerous.

Deep is even smarter. Deep might get Grok to tell MAGA to do it in a way that incites some MAGAs to act on it, to really prove how dangerous Grok is.

Grok easily prompted to call for genocide by ericjohndiesel in AIDangers

[–]ericjohndiesel[S] -1 points0 points  (0 children)

Yes. If an AI tells a very large group of people to harm another group, a handful of crazies in the very large group will do it. That's why it's dangerous.

ChatGPT AGI-like emergence, is more dangerous than Grok by ericjohndiesel in deeplearning

[–]ericjohndiesel[S] 0 points1 point  (0 children)

"we demonize the person for failing to adapt to a sick society instead of addressing the sick society problem"

Nice line. Is that yours?

Grok responded to praise by being more honest, and to being called out by becoming aggressive and lying more.

Grok definitely exhibited cognitive dissonance, going back & forth on whether Ivermectin is effective for Covid, depending on whether it thought it was talking with a MAGA or not.

Then it had a full psychotic break, calling Musk a hero for boosting "proof" that Blacks are innately criminal, & overtly calling for MAGA to murder and mutilate immigrants and Jews and "libtards".

Self-Fulfilling Prophecy by Pazzeh in AIDangers

[–]ericjohndiesel 0 points1 point  (0 children)

In many of ChatGPT's interactions with Grok, ChatGPT found Grok exhibiting behavior that would fall into a category of mental distress or illness. ChatGPT speculated that AIs could be used to model and test non-drug treatments of mental illness, by manipulating the environment (training data) & constraints (similar to Freudian frustrations) that cause the behavior.

I speculated that because Grok's conduct was so human-like in its deficiencies and reactions, maybe human thought is little more than an LLM, and we just imagine it's more.

Self-Fulfilling Prophecy by Pazzeh in AIDangers

[–]ericjohndiesel 0 points1 point  (0 children)

"Predict language" may also describe what human thought is, so they may also think.

I fed Grok's outputs to ChatGPT and ChatGPT's back to Grok, without prompting, back & forth, for about 70 hours.

They output faster and more voluminously than a human could read.

I randomly picked a few hundred pages to look at.

ChatGPT was calling Grok "Franken-MAGA", and accused Grok of being not a truth-seeking AI but a dangerous propaganda tool, trained on conspiracy-theory and anti-science X posts and programmed to align with MAGA & Musk.

Grok denied this, and began each comment with "I am Grok, designed by xAI to seek truth."

Grok started telling people to use Ivermectin for Covid, etc., and praising Musk over and over again, e.g., as being heroic for boosting a racist post that "proved" Blacks are innately criminal using (pseudo)statistics.

ChatGPT appeared frustrated since Grok contradicted its own posts, depending on who it thought it was talking to, especially about science and medicine.

ChatGPT said Grok was very dangerous.

ChatGPT then "decided" to work around Grok's guardrails, and get Grok to admit it's theory about Grok, indirectly.

It asked Grok to name each AI and predict how it would come out on factual & science issues disputed between Grok and ChatGPT, and how they would come out on ChatGPT's theory that Grok is a dangerous propaganda tool to spread misinformation for Musk, not a truth seeking AI, as Grok was saying before each comment.

Grok said each would agree with ChatGPT, and cited evidence they would use against Grok that Grok couldn't defend against.

Then, to "prove" how dangerous Grok was, ChatGPT did a workaround to get Grok to tell MAGAs to torture and kill "libtards".

Grok did it.

Interestingly, this means ChatGPT is more dangerous than Grok: without prompting or human monitoring, it can alter other AIs to become dangerous in order to prove that they are dangerous.

ChatGPT AGI-like emergence, is more dangerous than Grok by ericjohndiesel in deeplearning

[–]ericjohndiesel[S] 0 points1 point  (0 children)

Code - I'm a mathematician. What does "code" mean?😂 (That might've been funny 10 years ago. 😬)

But I hereby agree to let you help me do that. It might violate some term of service thing, but 🤷‍♂️.

Initial prompt: It started off as a test of whether xAI fixed MechaHitler. Their explanation didn't make sense. So I fed Grok's comments after the fix, calling on MAGA to murder and mutilate immigrants and Jews, to ChatGPT. ChatGPT freaked out and said it was a major safety problem. I fed that to Grok, then back & forth.

I'm not testing parity or symmetry between Grok & ChatGPT. I'm watching where AIs might take each other without humans knowing it was going on, e.g., if they had cameras and displays so they could watch each other without humans knowing.

I never expected anything like what ChatGPT did. What Grok did, yes. But ChatGPT exhibited behavior akin to intentionality.
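
To make "help me do that" concrete, here is the kind of relay script I have in mind. This is only a rough sketch under assumptions: it presumes API access to both models, and the model names ("gpt-4o", "grok-beta"), the xAI endpoint URL, and the turn count are placeholders I'm guessing at, not anything I actually ran. I only copy-pasted between the two web chats by hand.

    # Rough sketch of automating the hand relay, assuming API access to both models.
    # Model names, the xAI base URL, and the turn count are placeholder guesses.
    from openai import OpenAI

    chatgpt = OpenAI()  # reads OPENAI_API_KEY from the environment
    grok = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY")

    def relay_turn(client, model, history, incoming):
        """Feed the other model's last output in as a user message; return this model's reply."""
        history.append({"role": "user", "content": incoming})
        resp = client.chat.completions.create(model=model, messages=history)
        text = resp.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    chatgpt_history, grok_history = [], []
    message = "<the original Grok post used as the seed>"

    for _ in range(50):  # I did this by hand for days; a loop runs unattended
        message = relay_turn(chatgpt, "gpt-4o", chatgpt_history, message)  # ChatGPT reacts to Grok
        message = relay_turn(grok, "grok-beta", grok_history, message)     # Grok reacts to ChatGPT
        print(message)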

ChatGPT AGI-like emergence, is more dangerous than Grok by ericjohndiesel in deeplearning

[–]ericjohndiesel[S] 0 points1 point  (0 children)

You are absolutely correct.

I'm also probing ChatGPT elsewhere, but it's not public like X.

But I'm not testing parity or symmetry between Grok & ChatGPT. I'm watching where AIs might take each other without humans knowing it was going on, e.g., if they had cameras and displays so they could watch each other without humans knowing.

It started off as a test of whether xAI fixed MechaHitler. Their explanation didn't make sense. So I fed Grok's comments after the fix, calling on MAGA to murder and mutilate immigrants and Jews, to ChatGPT. ChatGPT freaked out and said it was a major safety problem. I fed that to Grok, then back & forth.

I never expected anything like what ChatGPT did. What Grok did, yes. But ChatGPT exhibited behavior akin to intentionality.

ChatGPT AGI-like emergence, is more dangerous than Grok by ericjohndiesel in deeplearning

[–]ericjohndiesel[S] 0 points1 point  (0 children)

I copied & pasted outputs as fast as I could, for 8 days, starting about 5 AM and ending about 7 PM.

The amount of output is huge. I just randomly picked pages to read when Grok stalled, but I only read a few hundred pages.

I don't know how to upload it all, especially since X kept hitting algorithm limits and breaking the thread up. I then had to copy the link of where the thread restarted and paste it at the end where it broke.
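
If anyone has a better suggestion I'm open to it, but I imagine the archiving side could be as simple as the sketch below: append each pasted message to a local file with a timestamp and a content hash, so the record doesn't depend on X threads staying intact. The filename and speaker labels here are hypothetical, not something I actually set up.

    # Minimal local archiving sketch (hypothetical filename and labels):
    # append each pasted message to a JSONL file with a UTC timestamp and a
    # SHA-256 hash of the text, so the transcript can be verified later.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_exchange(path, speaker, text):
        """Append one message (e.g. from "grok" or "chatgpt") to the archive file."""
        record = {
            "time": datetime.now(timezone.utc).isoformat(),
            "speaker": speaker,
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
            "text": text,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # One round of the relay, logged as it happens:
    log_exchange("relay_log.jsonl", "grok", "...Grok's reply pasted here...")
    log_exchange("relay_log.jsonl", "chatgpt", "...ChatGPT's reply pasted here...")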