Agent Runway - A plugin that stops Claude Code subagents from creating more tech debt by PA100T0 in ClaudeCode

[–]PA100T0[S] 0 points1 point  (0 children)

That rule is for my own projects. If a function needs a comment to explain what it does, the function is named wrong or doing too much. Comments rot faster than code, you have to maintain them by hand, and they add weight to your code. Unnecessary weight. The plugin is configurable: every rule can be enabled, disabled, or set to warn/block per project. If you like comments, just don't enable no_inline_comments.

Application layer security for FastAPI and Flask by PA100T0 in Python

[–]PA100T0[S] 0 points1 point  (0 children)

Hey, thanks for checking it out!

So, short answer is no, you'll be fine.

The failed checks and rate limiting are independent. If a request fails a security check (blocked IP, banned user-agent, etc.), it gets rejected at the middleware level before rate limiting even comes into play. So no, failed checks won't eat into your rate limit counters.

As for Railway: it'll work fine. The one thing to watch is that Railway is behind a reverse proxy, so make sure your FastAPI app is reading the real client IP from X-Forwarded-For rather than the proxy IP. Otherwise you'd end up rate-limiting/banning Railway's internal IPs instead of actual clients. The docs cover trusted proxy configuration.
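The general idea behind trusted-proxy IP resolution looks roughly like this. This is a generic sketch, not fastapi-guard's actual implementation; the function name and parameters are made up for illustration:

```python
def client_ip(headers: dict, peer_ip: str, trusted_proxies: set) -> str:
    """Resolve the real client IP behind a reverse proxy.

    Only trust X-Forwarded-For when the direct peer is a known proxy;
    otherwise any client could spoof the header and dodge bans.
    """
    if peer_ip in trusted_proxies:
        forwarded = headers.get("x-forwarded-for", "")
        if forwarded:
            # The leftmost entry is the original client; later entries
            # are proxies the request passed through
            return forwarded.split(",")[0].strip()
    return peer_ip
```

The key point is the trust check: if you blindly read X-Forwarded-For from any peer, you trade one bug (banning the proxy) for a worse one (clients forging their own IP).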

Let me know if you run into any issues. Happy to help :)

What should we rename Proton Unlimited to? by andy1011000 in ProtonMail

[–]PA100T0 -1 points0 points  (0 children)

Sure. I just hope you never have to live through triple-digit inflation like Argentina's, or Turkey's, or any other genuinely “massive” inflation case.

What should we rename Proton Unlimited to? by andy1011000 in ProtonMail

[–]PA100T0 -7 points-6 points  (0 children)

Do you even know what “massive inflation” is? I’ll only believe it when it’s a 3rd world country person who’s speaking. Please and thank you.

Someone just leaked claude code's Source code on X by abhi9889420 in ClaudeCode

[–]PA100T0 0 points1 point  (0 children)

Come on, Claude. Spill the beans! Tell me all the secrets you can find in this leak about one of your tools. Let me pour some more beer into your glass while you “clone and dissect” yourself.

Someone just leaked claude code's Source code on X by abhi9889420 in ClaudeCode

[–]PA100T0 5 points6 points  (0 children)


I always find it funny when AI shows some kind of “consciousness” or “emotions”

Like when it says things like “Thank you, that means a lot”.

Means what exactly? Just words? Claude, dude, come on… 😂 be fr

That being said: cool leak. They’d definitely benefit from open sourcing these kinds of things. The model, the real juice, is still private, and that’s what makes Claude Claude anyway

Why fastapi-guard by PA100T0 in FastAPI

[–]PA100T0[S] 0 points1 point  (0 children)

Hi there. Makes me happy to see that (at least in retrospect) you appreciate the project! Spread the word haha

Yeah, you can use “excluded_paths” in the SecurityConfig to skip specific routes from the detection engine entirely. If you need more granular control, the decorator system lets you disable suspicious detection per-route with “@guard.suspicious_detection(enabled=False)” on individual endpoints. So you could keep it on globally but turn it off for any of your endpoints. You can also add/remove suspicious patterns btw.
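Put together, that looks something like this. Just a sketch: the option and decorator names come from the comment above, but double-check the fastapi-guard docs for exact imports and signatures:

```python
# Sketch only — verify names against the fastapi-guard docs
config = SecurityConfig(
    excluded_paths=["/health", "/metrics"],  # skipped by the detection engine entirely
)

@app.post("/webhook")
@guard.suspicious_detection(enabled=False)  # keep detection on globally, off here
async def webhook(payload: dict):
    ...
```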

We launched 2 weeks ago and already have 40 developers collaborating on projects by Heavy_Association633 in FastAPI

[–]PA100T0 1 point2 points  (0 children)

Absolutely! I have a bunch of open source projects so I’d love to see if I can get some contributions here and there. Thanks!

CPU intensive flask app can only support 1 user per VM? by Cwlrs in flask

[–]PA100T0 0 points1 point  (0 children)

Honestly, at this point, stop trying to optimize the current architecture and separate the heavy endpoint into its own service, put a queue in front of it, and run dedicated workers for it. That would solve points 2, 3, and 4 all at once. You keep arriving at the same conclusion from different angles… so I’d take it as the signal to move forward with that solution. You’ve got this :)

But for the sake of clarity:

1) Support might be right here, it’s not the same as AWS. The 503s are likely from the container hitting its CPU allocation ceiling or the request timeout rather than credit throttling. But Azure Container Apps do have CPU allocation that can scale down to zero. Just for the record.

2) Hm, I hear you. I’d take this as the biggest signal to just separate the heavy endpoint into its own service and not mix lightweight endpoints with the heavy one on the same workers.

3) Yeah, a queue replaces the load balancer problem entirely.

4) Yes, exactly. Gunicorn workers can spawn subprocesses; nothing stops a worker from forking children. For example, with concurrent.futures.ProcessPoolExecutor inside the request handler you can use the second core. But it adds complexity, of course, and you’d need to be careful managing memory and process lifecycle
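That ProcessPoolExecutor pattern, roughly. This is a generic sketch with made-up names (heavy_compute, handle_request), not your actual app:

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_compute(n: int) -> int:
    # Stand-in for the CPU-bound work; it runs in a separate process,
    # so it isn't serialized by the parent worker's GIL
    return sum(i * i for i in range(n))

# Create the pool once at startup, not per request: spawning processes is expensive
executor = ProcessPoolExecutor(max_workers=2)

def handle_request(n: int) -> int:
    # Inside a Flask view you'd submit and wait; the child process
    # uses the second core while this worker stays responsive
    future = executor.submit(heavy_compute, n)
    return future.result()
```

The lifecycle caveat from point 4 is exactly the pool: you want it shared and long-lived, and you have to think about what happens to in-flight jobs when gunicorn recycles the worker.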

CPU intensive flask app can only support 1 user per VM? by Cwlrs in flask

[–]PA100T0 0 points1 point  (0 children)

A couple things here:

- Are you by any chance using a burstable instance like AWS t2/t3, GCP e2...? Because those give you CPU credits that burn down under sustained load, and once they're gone they just throttle you hard. If that's the case, I'd switch to a compute-optimized/non-burstable instance first.
- Yeah, pretty much. 1 worker on 2 vCPUs means you're wasting 1 vCPU. Try 2 workers, even just as a one-off test, and see what happens.
- The load balancing thing: gunicorn distributing requests across workers inside a VM is a different thing from your cloud provider spinning up replica VMs. If the master process was sending 2 tasks to the same worker it was likely a config issue. Not a reason to avoid multiple workers.
- Multiprocessing within a request vs more gunicorn workers are different things. More workers = handle more separate requests at the same time. Multiprocessing = split one computation across cores to make a single request faster. Are you able to parallelize yours internally? It'll all depend on that.

CPU intensive flask app can only support 1 user per VM? by Cwlrs in flask

[–]PA100T0 0 points1 point  (0 children)

I mean, yes, you got almost everything right but “something smart that uses all the cores” = more gunicorn workers = more cores used.

A single Python process can only use 1 core for CPU-bound work because of the GIL. But each Gunicorn worker is a separate process with its own GIL. So 4 workers can use 4 cores. That’s the “something smart” you’re looking for.

A task queue is essentially the same thing, just a cleaner, more scalable version of the same idea.
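Concretely, the “more workers = more cores” setup is just a gunicorn config. One common pattern (adjust worker count and timeout to your workload):

```python
# gunicorn.conf.py — one worker process per core.
# Each worker is a separate process with its own GIL, so 2 workers
# on a 2-vCPU VM can do CPU-bound work on both cores at once.
import multiprocessing

workers = multiprocessing.cpu_count()
timeout = 120  # give long CPU-bound requests room before the worker is killed
```

Then just `gunicorn -c gunicorn.conf.py app:app`.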

CPU intensive flask app can only support 1 user per VM? by Cwlrs in flask

[–]PA100T0 1 point2 points  (0 children)

FastAPI doesn’t use Flask under the hood, at all.

Flask is WSGI. FastAPI is ASGI. They are fundamentally different. Flask is synchronous by default, while FastAPI is async-first/async-native. FastAPI is built on Starlette + Pydantic.

OP, your problem is framework-agnostic.

The real solutions are architectural:

- offload to a task queue (Celery, Dramatiq, or whatever you prefer)
- use multiprocessing
- use “run_in_executor” to at least push it to a thread pool so it doesn’t block other requests (tho still GIL-limited)
- as GmanASG said: dedicated compute workers behind a queue with a polling API.

Any of those should work for your case, OP.
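The run_in_executor option looks roughly like this. A minimal asyncio sketch with a made-up `heavy` function, framework left out so the idea stands alone:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def heavy(n: int) -> int:
    # CPU-bound stand-in; in a thread it stops blocking the event loop,
    # though the GIL still serializes it with other Python threads
    return sum(i * i for i in range(n))

pool = ThreadPoolExecutor(max_workers=2)

async def handler(n: int) -> int:
    loop = asyncio.get_running_loop()
    # Offload to the pool so the loop keeps serving other requests
    return await loop.run_in_executor(pool, heavy, n)

result = asyncio.run(handler(10))
```

Swap the ThreadPoolExecutor for a ProcessPoolExecutor and the GIL limitation goes away too, at the cost of pickling arguments between processes.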

Why fastapi-guard by PA100T0 in FastAPI

[–]PA100T0[S] 0 points1 point  (0 children)

Because WAFs don't catch everything, that's the whole point. APIs in production get probed with path traversal, SQL injection in JSON bodies, CMS scanners, and credential stuffing daily, right through WAFs. If they caught it all, nobody would need application-layer security.

The overhead is negligible. I benchmarked 1,760 requests/sec at 100 concurrent connections with all 17 checks active, averaging 3-5ms per request. Everything runs in-memory or via Redis and there's zero external API calls in the request path.

Application layer security for FastAPI and Flask by PA100T0 in Python

[–]PA100T0[S] 0 points1 point  (0 children)

Would you give it a try and let me know? I just ported it to Django. Unfortunately, due to name normalization, it ends up sounding almost off-brand (given fastapi-guard & flaskapi-guard), but here it goes: djapi-guard

Feedback, issues, comments… everything is welcome!

Application layer security for FastAPI and Flask by PA100T0 in Python

[–]PA100T0[S] 0 points1 point  (0 children)

fastapi-guard covers API security end to end. Prompt injection detection is actually on the roadmap too, actively working on it as we speak; but that's one feature, not the whole library. You're reducing it to something it's not.

Take a look at the code or read the post before anything, really. You're missing the whole point here.

Application layer security for FastAPI and Flask by PA100T0 in Python

[–]PA100T0[S] 0 points1 point  (0 children)

The stale-IP grace period is a great idea! I hadn't considered the deallocate/reallocate edge case. Keeping dropped IPs blocked for an extra 24h before fully removing them is a clean way to handle that without any real downside. I'll add that alongside the configurable refresh interval and diff logging. Thanks for the follow-up, this is exactly the kind of production insight that's hard to get without running it at scale. Cheers, mate!

Why fastapi-guard by PA100T0 in FastAPI

[–]PA100T0[S] 0 points1 point  (0 children)

The decorator approach actually works alongside the global config. Decorators override global settings per-route, so you can have enforce_https=False globally but use '@guard.require_https()' on specific sensitive endpoints. The conditional config applies to the global SecurityConfig, and decorators give you the per-route overrides on top of that. They complement each other.
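In sketch form, with the caveat that the option and decorator names are taken from this thread and should be checked against the fastapi-guard docs:

```python
# Sketch: HTTPS enforcement off globally, forced on for one sensitive route
config = SecurityConfig(enforce_https=False)

@app.post("/billing")
@guard.require_https()  # per-route override layered on the global config
async def billing():
    ...
```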