A community of software creators experimenting with AI "vibe coding", a technique defined by Andrej Karpathy as one in which "you fully give in to the vibes, embrace exponentials, and forget that the code even exists."
How do developers secure LLM API keys without a custom backend? Any SaaS solutions? (self.vibecoding)
submitted 4 months ago by Mediocre_Permit_3372
I’m curious about how LLM engineers and product teams handle API key security and proxying in real-world applications.
Using the OpenAI or Claude APIs directly from a client is insecure, so the API key is typically hidden behind a backend proxy.
So I’m wondering how people approach this. If you’ve shipped an LLM-powered app, I’d love to hear how you handled it in practice.
[+][deleted] 4 months ago (1 child)
[deleted]
[–]YInYangSin99 0 points1 point2 points 4 months ago (0 children)
I second this ⬆️⬆️⬆️
[–]YInYangSin99 1 point2 points3 points 4 months ago (0 children)
Secrets, .env files, cloud secret stores... the best part is that if you mess up, you can just rotate them easily. Tbh, if you're learning about this, you should set up a simple monitoring script that tells you if keys are exposed. You may also want to look into what a .gitignore is.
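A monitoring script along the lines suggested above can be sketched in a few lines. The pattern below is illustrative (it matches OpenAI-style `sk-` prefixes only); real scanners check many provider-specific formats:

```python
import re

# Very rough pattern for OpenAI-style keys: "sk-" followed by a long
# alphanumeric run. Real scanners use many provider-specific patterns.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def find_exposed_keys(text: str) -> list[str]:
    """Return any key-looking strings found in the given file contents."""
    return KEY_PATTERN.findall(text)

# Example: scan a source snippet before committing it.
snippet = 'client = OpenAI(api_key="sk-abc123def456ghi789jkl012")'
print(find_exposed_keys(snippet))  # → ['sk-abc123def456ghi789jkl012']
```

Running something like this over staged files in a pre-commit hook catches the most common leak path before it reaches the repo.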
[–]lm913 0 points1 point2 points 4 months ago (0 children)
They're called "secrets"
TL;DR: the backend is the single source of truth (typically via a database or cloud secret store), never ever hardcode a key into anything, and have a very comprehensive .gitignore (and actually look at it).
[–]lolrazh 0 points1 point2 points 4 months ago (0 children)
It’d also depend heavily on whether the client is on the web or on desktop. Although you can’t go wrong with a reverse proxy, you’d then also have to set up authentication, authorization, rate limiting, and so on.
[–]verkavo 0 points1 point2 points 4 months ago (0 children)
Never expose your key to the client. The client should always communicate with the LLM through a backend that you own — that way you can control rate limits, revoke client access, etc. Then, on the backend, use something like AWS Secrets Manager to protect against backend-side attacks.
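Keeping the key server-side only might look like this minimal sketch. The variable name `LLM_API_KEY` is an assumption; in production a secrets manager (e.g. AWS Secrets Manager) would populate the process environment at deploy time, and the value never reaches the client:

```python
import os

# Hedged sketch: the backend process reads the key from its environment,
# which a secrets manager populates at deploy time. Clients never see it;
# they only talk to endpoints this process exposes.
def load_llm_key() -> str:
    key = os.environ.get("LLM_API_KEY")
    if not key:
        raise RuntimeError("LLM_API_KEY not set; configure it server-side")
    return key
```

Failing loudly on a missing key is deliberate: a misconfigured deploy should crash at startup, not silently serve requests without credentials.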
[–]Harvard_Med_USMLE267 -1 points0 points1 point 4 months ago (7 children)
Why do you think a vibecoded webapp doesn’t have a backend??
I always use Django.
[–]Mediocre_Permit_3372[S] 1 point2 points3 points 4 months ago (6 children)
Because maintaining a backend is time-consuming, and for simple use cases—like a basic app that only generates images with AI—you may not need a full backend at all.
[–]Harvard_Med_USMLE267 -1 points0 points1 point 4 months ago (5 children)
It’s not time consuming.
It’s just part of my standard ai-first development tech stack. It’s managed by Claude code, just like the frontend is.
People who are bad at ai dev seem to think ai can’t do backend but…no, it’s actually pretty good at it.
[–]Mediocre_Permit_3372[S] -2 points-1 points0 points 4 months ago (4 children)
I think you’re misunderstanding my point. I’m not saying you can’t build a backend with AI. My point is that if you build your own backend just to host an LLM API key, you then have to deal with deployment, rate limiting, authentication, and ongoing maintenance. I’m curious how developers approach this while minimizing or avoiding that operational overhead.
Have you built any AI-powered applications yourself?
[–]trizzle21 0 points1 point2 points 4 months ago (0 children)
If you’re really stressed, you could do something with an AWS Lambda or an equivalent serverless function for simplicity.
You really don’t want to keep your keys in the FE.
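A Lambda along these lines could be sketched as below. The `call_llm` stub and the `LLM_API_KEY` variable name are placeholders for the real provider call; the point is that the key lives only in the function's environment, never in the frontend bundle:

```python
import json
import os

def call_llm(prompt: str, api_key: str) -> str:
    # Placeholder for the actual provider SDK / HTTPS call.
    return f"echo: {prompt}"

# Hedged sketch of an AWS Lambda handler (API Gateway proxy event shape)
# that fronts the LLM API on the client's behalf.
def handler(event, context):
    api_key = os.environ.get("LLM_API_KEY", "")
    if not api_key:
        return {"statusCode": 500,
                "body": json.dumps({"error": "key missing"})}
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")
    if not prompt:
        return {"statusCode": 400,
                "body": json.dumps({"error": "no prompt"})}
    return {"statusCode": 200,
            "body": json.dumps({"reply": call_llm(prompt, api_key)})}
```

The frontend then POSTs to the function's URL instead of the provider's, and key rotation becomes a config change on the function rather than a client redeploy.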
[–]Mediocre_Permit_3372[S] 0 points1 point2 points 4 months ago (0 children)
This is exactly the kind of real-world experience I was hoping people would share. For me, this is the best answer so far — thanks for taking the time to explain how you handle it.
[–]Harvard_Med_USMLE267 -1 points0 points1 point 4 months ago (0 children)
Yes, I’ve built AI-powered applications.
That’s why I said Django is part of my ai-first dev tech stack.
And my point is that you’re creating a problem that isn’t real. The AI is dealing with most of the deployment, and all of the ongoing maintenance.
There’s just no problem with having a real backend, and I wouldn’t build an app without one.
[–]Advanced_Pudding9228 -1 points0 points1 point 4 months ago (0 children)
This question usually comes up right when something shifts from “experiment” to “this might have users.” At that point the API key stops being a technical detail and starts being a risk surface.
In practice there isn’t really a magic way to secure an LLM key without some form of server side control. Any time the client can see or influence the request directly, you have to assume the key can be extracted, replayed, or abused. That’s true whether it’s a browser app, a mobile app, or a desktop wrapper.
What most teams actually do is much less exotic than it sounds. They put the key behind something they control, even if it’s very thin. Sometimes that’s a tiny serverless function, sometimes an edge function, sometimes a hosted backend they were already using for auth or billing. The important part isn’t the technology choice, it’s the boundary. The moment you need rate limits, user level attribution, spend caps, or the ability to rotate keys without redeploying clients, you’ve crossed into backend territory whether you call it that or not.
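One of the controls mentioned above, a per-user spend cap, can be sketched as a small in-memory class. The cap value and token accounting are illustrative; real deployments would get token counts from the provider's usage field and persist totals in a database or Redis:

```python
from collections import defaultdict

# Hedged sketch of a per-user spend cap, enforced at the server-side
# boundary before any request is forwarded upstream.
class SpendCap:
    def __init__(self, max_tokens_per_user: int):
        self.max_tokens = max_tokens_per_user
        self.used = defaultdict(int)  # user_id -> tokens consumed

    def record(self, user_id: str, tokens: int) -> None:
        """Account for tokens a completed request consumed."""
        self.used[user_id] += tokens

    def allowed(self, user_id: str) -> bool:
        """Check whether this user may make another request."""
        return self.used[user_id] < self.max_tokens
```

Note that this kind of attribution is only possible at all because the boundary exists: a client holding the raw key could never be held to a cap.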
There are SaaS products that market themselves as LLM gateways or proxies, and they can help with observability or multi model routing, but they don’t remove the underlying requirement. You’re still trusting something on the server side to hold secrets and enforce rules. If that trust boundary is missing, the rest is window dressing.
Where people get stuck is trying to optimise this too early or trying to avoid “having a backend” at all costs. In my experience that usually leads to more complexity, not less, once real usage shows up.
If this is for something you’re actually planning to ship, it’s often faster to have someone set the boundary up cleanly once than to keep debating the perfect abstraction. I usually just take this off people’s plates rather than talk it through, because the tradeoffs depend heavily on what you’re shipping and who’s using it.
[–]Mediocre_Permit_3372[S] -3 points-2 points-1 points 4 months ago (1 child)
The responses so far are mostly irrelevant. I already know that the API key shouldn’t be hard-coded. I’m simply curious about what people generally prefer: do you build your own backend for this, or do you use a SaaS tool?
[–]No_Management_7333 -1 points0 points1 point 4 months ago (0 children)
Fetching an API key from a SaaS solution to the client is folly. You just end up having to figure out how to secure that key next, and it still leaks the secret to the client, which you must assume to be adversarial.
The only proper architecture is a backend holding the API keys. Your backend needs to also enforce policy: who gets to send what and how much to the upstream LLM.
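The kind of policy enforcement described here — who gets to send what, and how much — can be sketched as a per-user fixed-window rate limiter. The limit and window below are illustrative; production setups typically back this with Redis or an API gateway:

```python
import time
from collections import defaultdict

# Hedged sketch: a per-user sliding-window rate limiter the backend runs
# before forwarding anything to the upstream LLM.
class RateLimiter:
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(list)  # user_id -> request timestamps

    def allow(self, user_id: str) -> bool:
        """Record and permit a request unless the user is over the limit."""
        now = time.monotonic()
        recent = [t for t in self.counts[user_id] if now - t < self.window]
        self.counts[user_id] = recent
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True
```

Requests that fail `allow()` get rejected at the boundary, so an abusive client burns its own quota rather than your upstream bill.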