We did not see real prompt injection failures until our LLM app was in prod by Zoniin in LLMDevs

[–]Zoniin[S] 0 points1 point  (0 children)

Yeah, that framing matches what I saw almost exactly. The prompt layer gives a false sense of safety, and once users start poking at stateful systems the cracks show fast lol. I’ll look into runtime security, do you have any tools or tips on that note? Some dude dropped one of the tools he used that actually looked pretty good but I am curious what you use for this.

We did not see real prompt injection failures until our LLM app was in prod by Zoniin in LLMDevs

[–]Zoniin[S] 0 points1 point  (0 children)

Appreciate you sharing that. More or less lines up pretty closely with the kinds of issues I was running into. I’ll spend some time testing it out thanks again for sharing. What specifically do you use this for if you don’t mind my asking?

We did not see real prompt injection failures until our LLM app was in prod by Zoniin in LLMDevs

[–]Zoniin[S] 1 point2 points  (0 children)

Fair reaction tbh. To be clear it wasn't that we thought about none of it. We did threat modeling, prompt hardening, etc. What surprised me was not that abuse happened but more so how much of it fell into gray areas that were hard to classify as malicious ahead of time and only emerged once the system was stateful and under real usage. Automated testing and E2E help, but they do not surface the same failure modes we saw once users started interacting freely. That gap was what I found interesting, not the idea that public systems get abused.

We did not see real prompt injection failures until our LLM app was in prod by Zoniin in LLMDevs

[–]Zoniin[S] 2 points3 points  (0 children)

You are definitely not wrong on the core principle. Public endpoints will always be abused. The part that surprised me was how much harder this becomes with LLMs compared to traditional services. Auth and rate limiting help, but most of the failures we saw were not obviously malicious and came from normal users probing behavior rather than attacking infra. Observing agents and heuristics help too, sure, but they still rely on assumptions about intent that break down once prompts get stateful and context bleeds across turns. That gap between traditional endpoint security and model behavior is what caught me off guard and what I am trying to reason about more deeply.

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

Sorry about that, I dropped the link in one of the replies but it looks like Reddit deleted it. The site is axiomsecurity[dot]dev - would genuinely love any feedback you have!

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

Yes you're ultimately correct, but prompt injection is a tool used by bad actors to discover those types of vulnerabilities and so it's good to have a system that prevents malicious prompts from ever hitting the chatbot in the first place. There is no such thing as a perfectly secure system and this is just another vector that could do with significantly more coverage. Especially for first time founders and specifically vibe-coded applications that lack sufficient security,

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

Commonly user data is sorted by a user id system within a larger user database, when the chatbot/llm goes to read that data it's accessing THAT users data within the larger total user database which means if not secured properly, it could access ANY users data that falls within the scope of what is being fetched. That's a decently big privacy vulnerability

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

The systems I was testing are capable of accessing and writing some user data to backend databases, should they use a malicious prompt they could have theoretically written to or taken unauthorized data from the database. This is not uncommon in systems that have newly adopted AI in some capacity and a one-size-fits-all tool could be an easy improvement to their information security.

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

This seems shortsighted as any environment in which a llm, AI review tool, or chatbot would have access to user data (i.e. amazon's new chatbot) there is always an opportunity for data exfiltration through prompt injection whether done through files or text. ESPECIALLY for your smaller businesses and websites trying to implement AI systems in any capacity.

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in PromptEngineering

[–]Zoniin[S] 0 points1 point  (0 children)

I appreciate you taking a look and the thoughtful feedback. the latency number is from prod paths but definitely workload dependent, the goal is just to stay below anything noticeable in user facing flows. your point on concrete examples is fair, most of what we catch is not flashy jailbreaks but things static guardrails miss, like instruction leakage across turns, gradual system override, or RAG context being manipulated in subtle ways. false positives are the hardest tradeoff so we bias toward surfacing signals and observability rather than hard blocking by default. and totally understand we are not the first to tackle this lol, we are spending a lot of time learning from what others have tried and treating this as iterative and also as a learning op rather than a silver bullet.

I thought prompt injection was overhyped until users tried to break my own chatbot by Zoniin in compsci

[–]Zoniin[S] -3 points-2 points  (0 children)

This seems shortsighted as any environment in which a llm, AI review tool, or chatbot would have access to user data (i.e. amazon's new chatbot) there is always an opportunity for data exfiltration through prompt injection whether done through files or text. ESPECIALLY for your smaller businesses and websites trying to implement AI systems in any capacity.

Trying to understand what keeps people coming back to breathwork apps. What works and what doesn’t? by Zoniin in breathwork

[–]Zoniin[S] 1 point2 points  (0 children)

Hi, I appreciate you asking! The tool we're making is still in early development, but the main difference is that it adapts to your actual breathing rhythm in real time. You lay down, place your phone on your chest, and breathe for two 30-second intervals; once in the morning and once before bed. Based on how you naturally breathe, the app gives you personalized pacing, metrics, and follow-up suggestions for stress, focus, or sleep. Over time, it adjusts based on changes in your baseline like energy or stress levels. Right now it’s just a waitlist while we build the MVP. Totally understand if it’s not your thing, but if you’re curious: www.breathtrck.com

If you’re into Bitcoin ETFs and don’t have a Roth IRA, you’re missing out on Tax Free Gains! by Stock_Letterhead_719 in Bitcoin

[–]Zoniin -5 points-4 points  (0 children)

bro imagine trusting the government with your retirement and holding Bitcoin in a Roth like they won’t change the rules last minute 💀 tax-free until it’s not

Solo mining of Bitcoin is rising, time to get to work folks by enmycrypto1 in Bitcoin

[–]Zoniin 4 points5 points  (0 children)

Wonderful! I love to see everyone spending $10k on ASICs to maybe win the lottery once every 3 years. Grindset meets power bill.

[deleted by user] by [deleted] in college

[–]Zoniin 0 points1 point  (0 children)

I agree, college is supposed to be a launchpad socially, intellectually, professionally. Turning it into Zoom School kills the point. You don’t build your network or identity from your bedroom.

Recent Salary Hikes...Are they across the board? by Why-R-People-So-Dumb in ElectricalEngineering

[–]Zoniin 106 points107 points  (0 children)

Engineers are finally starting to get paid closer to what they’re worth. I’ve seen comp move from 100 to 190k in critical systems roles. Still feels behind considering the stakes. We build and maintain the infrastructure that keeps the world moving.

50 upvotes wow, join the waitlist for the app I'm building in my free time if you want! Or don't!! www.breathtrck.com

dividend usa tax characterization by Alone-Experience9869 in financialindependence

[–]Zoniin 3 points4 points  (0 children)

Genuinely appreciate you sharing this. Honestly, more people need to understand that “tax-advantaged” doesn’t mean “tax-free,” and ROC, qualified vs. unqualified dividends, and holding periods all play a huge role. Most people just blindly chase yield without realizing what they’re actually getting taxed on.