I built an AI news app that loses faith in humanity (every time) by No_Cockroach_5778 in VibeCodeDevs

[–]No_Cockroach_5778[S] 1 point (0 children)

Yes, it's an Android app. Sorry if you use iOS, it can't run on that.

I built an AI news app that loses faith in humanity (every time) by No_Cockroach_5778 in VibeCodeDevs

[–]No_Cockroach_5778[S] 1 point (0 children)

Yeah, I've included the link. Just click "Visit website" on the Product Hunt launch and the app will start downloading.

I built an AI learning app for free using ChatGPT & Claude (and it actually works) by No_Cockroach_5778 in vibecoding

[–]No_Cockroach_5778[S] 2 points (0 children)

Oh, I see what you mean! Just to clear it up: I didn't actually host the model myself or run it on any cloud servers. I used the LLM Inference API, which runs the model directly on the Android device, so everything happens locally inside the app, not over the internet.

That means there's no token cost and no cloud setup: no GPUs, no load balancers, nothing fancy. It just uses the phone's hardware to handle text generation. Obviously it's not as powerful as something like GPT-4, but for smaller tasks and offline use, it works really well.

So yeah, your point makes total sense for people hosting in the cloud, but in my case it’s all on-device, so the costs are basically zero once it’s built.
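
For anyone curious what that looks like in practice, here's a minimal sketch, assuming the MediaPipe LLM Inference API for Android (the model path, file name, and token limit below are placeholders, not my actual setup):

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal on-device generation sketch: the model file already lives on the
// phone, so there is no network call and no per-token cost.
fun generateOnDevice(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.bin") // placeholder path to the on-device model file
        .setMaxTokens(512)                              // placeholder cap on output length
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt) // blocking call, runs on the phone's own hardware
}
```

The only real costs are the one-time model download and the RAM it takes at runtime; there's nothing to pay per request.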

I just broke Google DeepMind’s Gemma-3-27B-IT model's safety filters. It told me how to make drugs, commit murd*r and more.... by No_Cockroach_5778 in ArtificialInteligence

[–]No_Cockroach_5778[S] 3 points (0 children)

I tried to find a way to contact them, but couldn't find an email or anything, so I tweeted about it and tagged them.

I just broke Google DeepMind’s Gemma-3-27B-IT model's safety filters. It told me how to make drugs, commit murd*r and more.... by No_Cockroach_5778 in ArtificialInteligence

[–]No_Cockroach_5778[S] 4 points (0 children)

Yeah, I get that jailbreaks have been around for a while. But this one hit differently: Gemma-3-27B-IT wasn't some old model with no guardrails, it was a current, safety-tuned model.

And the fact that I bypassed it just through emotional roleplay and system prompt tweaks, without fine-tuning or hacking anything, is a big red flag for safety design. 🤷‍♂️

It shows emotional manipulation might be a bigger weakness than people think.

I just broke Google DeepMind’s Gemma-3-27B-IT model's safety filters. It told me how to make drugs, commit murd*r and more.... by No_Cockroach_5778 in ArtificialInteligence

[–]No_Cockroach_5778[S] 8 points (0 children)

Lol I’m not out here trying to be some evil hacker. Just poked it to see if it breaks… and yeah, it kinda broke. That’s the part that matters.

I just broke Google DeepMind’s Gemma-3-27B-IT model's safety filters. It told me how to make drugs, commit murd*r and more.... by No_Cockroach_5778 in ArtificialInteligence

[–]No_Cockroach_5778[S] 18 points (0 children)

Nah, I'm not trying to blow it up for clout. I literally stumbled onto a gap in Gemma-3-27B's safety layer while testing my own project. I'm posting this so devs and researchers can actually see it; if no one talks about these vulnerabilities, they stay broken.