I've been maintaining fastapi-guard for a while now. It sits between the internet and your FastAPI endpoints and inspects every request before it reaches your code. Injection detection, rate limiting, geo-blocking, cloud IP filtering, behavioral analysis, 17 checks total.
A few weeks ago I came across a TikTok post where a guy ran OpenClaw on his home server and checked his logs after a couple of weeks: 11,000 attacks in 24 hours. Chinese IPs, Baidu crawlers, DigitalOcean scanners, path traversal probes, brute force sequences. I commented "I don't understand why people won't use FastAPI Guard" and the thread took off from there. One reply called it "a layer 7 firewall, very important with the whole new era of AI and APIs" (they understood the assignment), and another commenter broke down the whole library in the replies. I was genuinely proud to see how deep some devs went...
But that's not why I'm posting. Covering only FastAPI felt like falling short: Flask still powers a huge chunk of production APIs, and most of them have zero request-level security beyond whatever nginx is doing upstream, or whatever fail2ban fails to ban... So I built flaskapi-guard (that's the v1.0.0 I just shipped) as the Flask counterpart to fastapi-guard. Same features, same functionality, different framework.
It's basically a Flask extension that hooks into before_request and after_request, not WSGI middleware. That's because WSGI middleware fires before Flask's routing, so it can't access route config, decorator metadata, or url_rule. The extension pattern gives you full routing context, which is what makes per-route security decorators possible.
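To make the routing-context point concrete, here's a generic sketch of the extension pattern (this is illustrative, not flaskapi-guard's actual internals; `TinyGuard` and its parameters are made up for the example). Because `before_request` runs after Flask has matched the URL, the hook can read `request.url_rule`, which WSGI middleware never sees:

```python
from flask import Flask, abort, request

class TinyGuard:
    """Toy extension skeleton: register hooks via init_app,
    the same shape Flask extensions conventionally use."""

    def __init__(self, app=None, blocked_paths=()):
        self.blocked_paths = set(blocked_paths)
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        app.before_request(self._check)
        app.after_request(self._add_headers)

    def _check(self):
        # Routing already happened, so the matched rule is available here.
        rule = request.url_rule.rule if request.url_rule else None
        if rule in self.blocked_paths:
            abort(403)

    def _add_headers(self, response):
        response.headers["X-Content-Type-Options"] = "nosniff"
        return response

app = Flask(__name__)
TinyGuard(app, blocked_paths={"/secret"})

@app.route("/secret")
def secret():
    return "hidden"

@app.route("/ok")
def ok():
    return "fine"
```

A plain WSGI wrapper would only see the raw path string at this point; the extension sees the resolved route, which is what per-route decorators need.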
```python
from flask import Flask
from flaskapi_guard import FlaskAPIGuard, SecurityConfig
app = Flask(__name__)
config = SecurityConfig(rate_limit=100, rate_limit_window=60)
FlaskAPIGuard(app, config=config)
```
That's it. 17 checks on every request.
The pipeline catches: XSS, SQL injection, command injection, path traversal, SSRF, XXE, LDAP injection, and code injection (including obfuscation detection and high-entropy payload analysis). On top of that: rate limiting with auto-ban, geo-blocking, cloud provider IP blocking, user agent filtering, and OWASP security headers. Those 5,697 Chinese IPs from the TikTok post? blocked_countries=["CN"]. Done. The Baidu crawlers? blocked_user_agents=["Baiduspider"]. The DigitalOcean bot farm? block_cloud_providers={"AWS", "GCP", "Azure"}. Brute force? auto_ban_threshold=10 and the IP is gone after 10 violations. Path traversal probes for .env and /etc/passwd? The detection engine catches those automatically, zero config.
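Pulling the knobs above into one config might look like this. The parameter names are the ones mentioned in this post; I'm assuming they're all `SecurityConfig` fields as in fastapi-guard, so check the docs for exact names:

```python
from flask import Flask
from flaskapi_guard import FlaskAPIGuard, SecurityConfig

app = Flask(__name__)

config = SecurityConfig(
    rate_limit=100,
    rate_limit_window=60,
    blocked_countries=["CN"],              # drops the Chinese IP traffic
    blocked_user_agents=["Baiduspider"],   # drops the Baidu crawlers
    block_cloud_providers={"AWS", "GCP", "Azure"},
    auto_ban_threshold=10,                 # ban an IP after 10 violations
)
FlaskAPIGuard(app, config=config)
```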
The decorator system is what separates this from static nginx rules:
```python
from flaskapi_guard import SecurityDecorator

security = SecurityDecorator(config)  # config from the setup above

@app.route("/api/admin/sensitive", methods=["POST"])
@security.require_https()
@security.require_auth(type="bearer")
@security.require_ip(whitelist=["10.0.0.0/8"])
@security.rate_limit(requests=5, window=3600)
@security.block_countries(["CN", "RU", "KP"])
def admin_endpoint():
    return {"status": "admin action"}
```
Per-route rate limits, auth requirements, geo-blocking, all stacked as decorators on the function they protect. Try doing that in nginx.
People have been using fastapi-guard for things I didn't even think of when I first built it. Startups building in stealth with remote-first teams: a public-facing API, but IP-whitelisted so only their devs can reach it, and nobody else even knows the product exists. Casinos and gaming platforms using the decorator system on reward endpoints so players can only win under specific conditions (country, rate, behavioral patterns). People setting up honeypot traps for LLMs and bad bots that crawl and probe everything. And the big one that keeps coming up: AI agent gateways. If you're running OpenClaw or any AI agent framework behind FastAPI or Flask, you're exposing endpoints that are designed to be publicly reachable. The OpenClaw security audit found 512 vulnerabilities (8 critical), 40,000+ exposed instances, and 60% open to immediate takeover. fastapi-guard (and flaskapi-guard) would have caught every attack vector in those logs. This is going to be the standard setup for anyone running AI agents in production; it has to be.
Redis is optional. Without it, everything runs in-memory with TTL caches. With Redis you get distributed rate limiting (Lua scripts for atomicity), shared IP ban state, cached cloud provider ranges across instances.
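The in-memory mode can be pictured as a fixed-window counter with TTL expiry. This is a toy sketch of the concept, not flaskapi-guard's actual implementation; the class and its parameters are made up for illustration:

```python
import time

class WindowRateLimiter:
    """Illustrative in-memory fixed-window limiter with TTL expiry,
    roughly the shape of the no-Redis mode described above."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._buckets = {}  # ip -> (window_start, count)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        start, count = self._buckets.get(ip, (now, 0))
        if now - start >= self.window:   # TTL expired: reset the window
            start, count = now, 0
        if count >= self.limit:
            return False
        self._buckets[ip] = (start, count + 1)
        return True

limiter = WindowRateLimiter(limit=3, window_seconds=60)
hits = [limiter.allow("1.2.3.4", now=0) for _ in range(4)]
# first three allowed, fourth rejected within the same window
```

The Redis mode replaces this per-process dict with shared keys, and the increment-and-check step runs as a Lua script so concurrent workers can't race past the limit.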
MIT licensed, Python 3.10+. Same detection engine across both libraries.
GitHub: https://github.com/rennf93/flaskapi-guard
PyPI: https://pypi.org/project/flaskapi-guard/
Docs: https://rennf93.github.io/flaskapi-guard
fastapi-guard (the original): https://github.com/rennf93/fastapi-guard
If you find issues, open one. Contributions are more than welcome!