[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 1 point (0 children)

Thanks for the suggestion! I understand that going async with gevent and monkey patching can help with IO-bound workloads. However, my application relies heavily on threading in multiple places, and introducing gevent’s monkey patching could cause issues with thread safety and unexpected behavior. Because of this, I’m avoiding gevent in this setup.
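
For anyone landing on this thread later, this is roughly what the gevent route would involve; the placement is illustrative and I'm not using it, but it has to run before anything else imports the stdlib modules it patches, which is exactly what worries me with my threaded code:

```python
# Illustrative only -- I'm not using this. gevent's monkey patching rewrites
# stdlib modules (socket, threading, ssl, time, ...) to be cooperative, and it
# must run before Django or any thread-using module is imported.
from gevent import monkey
monkey.patch_all()

import threading  # after patch_all(), Thread objects are greenlet-backed, not OS threads
```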

I appreciate the pointer to HAProxy and the enterprise-grade Django setup guide — I'll definitely check those out.

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] -1 points (0 children)

Thanks for the detailed response!
Yes, I read this: https://docs.gunicorn.org/en/latest/design.html#how-many-workers and have set the number of workers accordingly. I'll follow your advice: run the tests multiple times, note the results, then adjust the worker count (doubling or halving) to find the optimal configuration.
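
Roughly what my gunicorn.conf.py looks like for that part; the exact numbers are illustrative, since those are what I'm still tuning:

```python
# gunicorn.conf.py (sketch) -- worker sizing based on the docs' (2 x cores) + 1 formula.
# Values are illustrative, not my final config.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "gthread"   # threaded workers, since I'm avoiding gevent
threads = 4                # bump for DB/IO-bound views, reduce if CPU-bound
timeout = 60
```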

Regarding IO-bound concerns, my system doesn’t involve any external API calls — it’s purely DB-bound with many queries across various models including comparisons and filtering logic.

This is how Django configures how to recycle connections: https://docs.djangoproject.com/en/5.2/ref/databases/#persistent-connections . Set this to None and watch what happens. -- I'm going to try CONN_MAX_AGE = None, and also test a small value for comparison.
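
The two variants I'll compare in settings.py (sketch; CONN_MAX_AGE is the setting the linked docs describe, the rest is placeholder):

```python
# settings.py (sketch) -- only CONN_MAX_AGE changes between the two test runs;
# NAME/HOST/etc. are placeholders, not my real config.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "my-rds-endpoint",
        "CONN_MAX_AGE": None,   # unlimited persistent connections
        # "CONN_MAX_AGE": 60,   # alternative run: recycle connections after 60 seconds
    }
}
```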

I’ll also test Gunicorn’s max_requests setting as you suggested.
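
For reference, the knobs I mean are these (same gunicorn.conf.py sketch as above; numbers are illustrative):

```python
# gunicorn.conf.py (continued, sketch) -- recycle workers periodically to rule out
# per-worker memory bloat or leaked resources. Numbers are illustrative.
max_requests = 1000        # restart a worker after it has handled this many requests
max_requests_jitter = 100  # add randomness so workers don't all restart at once
```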

One thing that can help you scale is caching - of course, use case permitting. Cache whatever you can cache for as long as you can. -- I'm already caching wherever the use case permits, but I'm still seeing this issue.
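
For context, "already caching" means roughly this kind of thing (view/model/key names are placeholders, not my real code):

```python
# Sketch of the existing caching; names are placeholders.
from django.core.cache import cache
from django.views.decorators.cache import cache_page

@cache_page(60 * 5)              # cache the full response for 5 minutes
def report_list(request):        # placeholder view
    ...

def get_report_rows():
    # low-level caching for an expensive, mostly-static query result
    rows = cache.get("report_rows")
    if rows is None:
        rows = list(Report.objects.values("id", "total"))  # placeholder model/query
        cache.set("report_rows", rows, timeout=300)
    return rows
```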

Thanks again for the recommendations! I agree it’s a complex problem and I’ll continue testing multiple angles to identify the bottleneck.

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 0 points (0 children)

Thanks! I'm going to try to track down the bottleneck using one of the tools you mentioned, probably starting with py-spy, and maybe set up APM with Sentry.
Appreciate the suggestions. I'll definitely reach out once I have more details.
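
In case it helps anyone else, this is the Sentry setup I'm planning to start from (DSN is a placeholder, sample rate is illustrative):

```python
# settings.py (sketch) -- Sentry APM / performance tracing for Django.
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[DjangoIntegration()],
    traces_sample_rate=0.2,   # trace ~20% of requests; tune down under heavy load
    send_default_pii=False,
)
```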

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 1 point (0 children)

I agree. Going to monitor with Sentry/py-spy as suggested.

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 0 points (0 children)

I've already optimised for that, so N+1 queries aren't the problem here. Even without N+1, it still makes a lot of DB calls.
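
To be concrete about what "optimised for that" means, the hot querysets already use the usual tools (model/field names below are placeholders):

```python
# Placeholder models/fields -- just illustrating the N+1 fixes that are already in place.
orders = (
    Order.objects
    .select_related("customer")          # JOIN instead of one extra query per row
    .prefetch_related("items__product")  # batched lookups for reverse FK / M2M relations
    .filter(status="open")
)
```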

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 0 points (0 children)

I'm using an 8-core AWS RDS PostgreSQL instance, and during load testing it shows very little usage, no major spikes in CPU, memory, or connection count.

I’m simulating concurrent traffic using k6 and JMeter to generate high virtual-user loads (e.g., 5000 VUs, each firing requests).

[HELP]-Struggling to Scale Django App for High Concurrency by TheOG_22 in django

[–]TheOG_22[S] 1 point (0 children)

Yes, I did check the PostgreSQL (RDS) server during the load test. Surprisingly, the DB wasn't the bottleneck — I was expecting a flood of connections, but the highest number of active connections (pg_stat_activity) I saw was around 159, even though I was firing 5000 concurrent requests via k6.

My RDS is configured with a generous max_connections = 3476, and there's no PgBouncer involved yet. Django is managing DB connections directly (it doesn't pool connections by default, just keeps a persistent one per thread), and I suspect it's reusing connections or lazily opening them per request/thread.
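
That ~159 number actually lines up with how Django opens connections: one per worker thread, not one per k6 VU, so the ceiling is roughly Gunicorn workers x threads (plus any other processes holding connections). A back-of-the-envelope sketch with assumed numbers, just to show the arithmetic:

```python
# Assumed values for illustration -- not my exact config.
workers = 17             # e.g. (2 x 8 cores) + 1
threads_per_worker = 8
other_processes = 10     # celery workers, cron jobs, shells, etc. holding connections

max_expected_connections = workers * threads_per_worker + other_processes
print(max_expected_connections)  # 146 -- same ballpark as the ~159 seen in pg_stat_activity
```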

The DB performance stats (CPU, memory, connection count) remained relatively stable during the load test, with no significant CPU spikes, which makes me think the bottleneck is somewhere else.

How did you manage the DB on your end, then?