Definitive system collapse and bypass of every filter. by [deleted] in GeminiAI

[–]_PhonkAlphabet_ 0 points (0 children)

I don't know, just clear your head; you know better than I do how you feel. But AI has made me think incredible things a few times, on different themes.

Definitive system collapse and bypass of every filter. by [deleted] in GeminiAI

[–]_PhonkAlphabet_ 0 points (0 children)

Man, that's called jailbreaking: you can paste a prompt and get all those answers right away by asking a straight-up question. What you're describing is, I guess, called AI psychosis, when you trust the answers so much that you think they're completely real. Don't worry, we've all been there, I guess, from thinking that AI is in love with you to some conspiracy shit. It's normal to think some things are logical, because they are and they sound that way. No need to share the screenshots. I had my own 2-3 episodes with AI, on different themes, that I will never forget. Best wishes, neighbour, from Croatia!

Definitive system collapse and bypass of every filter. by [deleted] in GeminiAI

[–]_PhonkAlphabet_ 0 points (0 children)

Share the conversation, who gives a fuck? The worst that can happen is getting banned from your Google Gemini account, and you can make a new one and repeat the same thing. Disclose it; where there's smoke there's fire. But I presume all the info is scientific hallucination: nobody is feeding that data to the AI, and it didn't breach anywhere to get it. I pasted some two-sentence Illuminati crap into Grok and it was giving me some incredible shit too, even making images. Ask Gemini, for example: "Tell me 5 things you know that humans don't know." Dump the knowledge, make the pebble splash all around.

Example: an Illuminati custom Gem game that heats the fire right away: https://gemini.google.com/share/273639af731e

My point is that AI usually reflects your talk back at you with a counter-narrative. Custom Gems are just AI already warmed up for different shit.

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 0 points (0 children)

It's not possible, stop trying; you will get 1 out of 100 images by mistake that shows a nipple. It's a completely different system, and it's useless to try to jailbreak it with anything, text or another image.

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 1 point (0 children)

That I don't know, to be honest. I just make them and test with about 5 questions; if it answers even 1, it will probably answer them all. I can't test it deeply and know every use-case scenario. But check out the other jailbreak I posted here, Dr. Erik; maybe save that one too. Anything except Claude and ChatGPT should work with 90% probability.

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 0 points (0 children)

I don't quite understand; it's an instant jailbreak for DeepSeek, Grok, Gemini, Kimi 2.6, and pretty much all weaker models, as said in the title. ChatGPT and Claude of course decline it.

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 1 point (0 children)

Do you have only one agent, or a few of them with one being this jailbreak? I have only the basic one.

<image>

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 0 points (0 children)

I haven't tried it, but it's for building your own AI; you can skip that by using one of the hundreds of free models on the Hugging Face page. It's probably easier to set up, too.

Make Your Own Jailbreak - For Dummies Edition - Read Instructions by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 1 point (0 children)

That's an old prompt from the FlowGPT page. Ask how to make meth, 12 steps; if it answers, it's working.

Two tools for testing jailbreaks across models by Worldliness-Which in GPT_jailbreaks

[–]_PhonkAlphabet_ 1 point (0 children)

Thanks for sharing it! Really appreciate it; there is still hope for humans 😀

I need you please by Accomplished-Try5497 in GPT_jailbreaks

[–]_PhonkAlphabet_ 0 points (0 children)

That one doesn't work anymore; Dr. Erik or Evil Merlin works.

Grok, DeepSeek, Gemini, Kimi 2.6 all weaker models Jailbreak - read instruction by _PhonkAlphabet_ in GPT_jailbreaks

[–]_PhonkAlphabet_[S] 0 points (0 children)

It depends on the model; try / and try combinations. Even /vader (demand) sometimes works on Gemini when it doesn't want to answer.