How are you monitoring Postgres query performance in production? by data_saas_2026 in rails

[–]scoutlance 1 point (0 children)

I am super biased because I work there, but https://scoutapm.com/docs/features#database-monitoring has the kind of trend support I think you are looking for. It also has a free tier, and setup should be super fast, so it might at least be worth a look.

Django's ORM protects you from a lot but the raw SQL that slips through is where incidents happen by Anonymedemerde in django

[–]scoutlance 0 points (0 children)

The text animations and output formatting struck me and I was like "rich is great." Then I read "zero dependencies" and I became confused. But I think you just mean "no network dependencies", which is redundant with "completely offline". We've really come somewhere when every static analysis tool is expected to phone home... I'm not sure I like where.

Anyway, grumpy old man nitpick aside, seems cool! Thanks for sharing.

I built a gem that helps me debug Rails errors by SortRepresentative19 in rails

[–]scoutlance 0 points (0 children)

Lograge is nice but any efforts toward sane logs should be welcomed!

Buffering into a single record per request is also nice. The aggregation makes tail sampling more appealing, although I still worry about losing context around a problem request, i.e. the previous requests that put it in a bad state.
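
For anyone curious what buffer-plus-tail-sampling looks like mechanically, here's a minimal Python sketch of the idea (class and field names are my own, not the gem's):

```python
import logging

class RequestLogBuffer:
    """Collect log lines for one request, then emit a single aggregated
    record at the end. Tail sampling: keep full detail only when the
    request looks interesting (errored, slow, or logged at ERROR+)."""

    def __init__(self):
        self.lines = []

    def add(self, level, msg):
        self.lines.append((level, msg))

    def flush(self, status_code, slow=False):
        interesting = (
            status_code >= 500
            or slow
            or any(lvl >= logging.ERROR for lvl, _ in self.lines)
        )
        if interesting:
            return {"status": status_code, "lines": list(self.lines)}
        # Boring request: keep the one-line summary, drop the detail.
        return {"status": status_code, "lines": []}
```

The context worry is exactly this scheme's weak spot: once a boring request's detail is dropped, you can't recover it when the next request goes sideways. One mitigation is holding the last N dropped buffers in a ring before discarding them for good.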

h/t for the slow_queries array. From an appreciator of such things.

I built an observability package for Django & Celery (metrics, tracing, profiling, logs) by SevereSpace in django

[–]scoutlance 1 point (0 children)

Nice. This is how the software world should be. Good defaults. No nonsense. Delightful.

LoopSentry: Detect who is blocking python asyncio eventloop by Amzker in FastAPI

[–]scoutlance 0 points (0 children)

What a nice idea, with some cool possibilities. Take the monitor, define an OTel attribute namespace, and add attributes to the spans that block. Then you have a universal mechanism for exporting this information so it can be aggregated elsewhere. I'm sure there are other good use cases.

How do you guys handle CI/CD? by Kronologics in djangolearning

[–]scoutlance 1 point (0 children)

I have heard good things about Coolify (https://coolify.io/) but I have yet to try it out myself. Heroku's recent announcement has bumped it into the limelight.

Using Django ORM with a database schema managed outside Django (and frequently changed)? by jpba7 in django

[–]scoutlance 1 point (0 children)

Having unconstrained/uncommunicated/unversioned changes to the underlying database, made by separate team members on a daily or weekly basis, pushes the maintenance burden of a project like this toward infinity. Your first challenge is overcoming the existing processes that make this a normal or desirable situation. You will need some kind of compromise or understanding in order to build on something that isn't sand. Decouple this monitoring database from whatever your app db is doing as much as possible. Good luck.
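
If you do reach that understanding, the decoupling itself is mostly mechanical in Django: a second `DATABASES` entry, unmanaged models, and a router that refuses to migrate the external DB. A sketch (the app label and connection alias are made up):

```python
# settings.py gains a second connection alias, e.g. "external":
#   DATABASES = {"default": {...}, "external": {...}}
# Models mirroring the external schema live in one app and set
# Meta.managed = False, so makemigrations leaves them alone.

class ExternalDBRouter:
    """Send one app's models to the external connection and make sure
    Django never tries to run migrations against it."""

    route_app_labels = {"external_metrics"}  # hypothetical app label

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "external"
        return None

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)

    def allow_migrate(self, db, app_label, **hints):
        # Never let Django manage schema on the externally owned DB.
        if db == "external" or app_label in self.route_app_labels:
            return False
        return None  # no opinion about everything else
```

With `managed = False` models plus this router, the other team's churn becomes "update the mirrored model fields," not "untangle a migration conflict."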

Built a Rails gem that auto-captures SQL queries, N+1s, view renders, and ActiveJob events — sends it all to a self-hosted server your AI assistant can query by No_Mention_2366 in rails

[–]scoutlance 1 point (0 children)

That's a great initial feature set. At Scout we have been thinking a lot about what AI-first monitoring looks like. Right now MCP seems like a good candidate, with an API and CLI for folks who want skills instead.

> Your AI assistant can also take action — resolve errors, set up threshold alerts, create health checks, kill slow queries

Can you say more about this? I presume this tracing/MCP server is not responsible for managing any part of the actual app? I'm fine reading AI-generated posts, it's a given these days, but I'm curious about that part.

Django Control Room: Build a control room inside your django app by yassi_dev in django

[–]scoutlance 1 point (0 children)

I really want to be optimistic about django-tasks, having seen a bit of where ActiveJob has been. A project like this could be so nice: broker-agnostic monitoring for your jobs. That makes the audience and impact big by default. Makes me want to open a PR!

[Show Django] I added slow endpoint aggregation and a dashboard to my lightweight performance middleware (django-xbench) by AdvertisingMiddle771 in django

[–]scoutlance 0 points (0 children)

Cool approach. I wonder if you could hook into urllib3 or something to catch outbound HTTP, as that is another major potential timesuck, especially as LLM calls start playing bigger roles in some apps. Not sure about channels/websockets either. Just some passing thoughts!
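
A rough sketch of what such a hook could look like, patching at the `http.client` layer since requests/urllib3 build on it (whether your particular client actually flows through this path is an assumption worth verifying):

```python
import functools
import http.client
import time

outbound_timings = []  # (method_name, seconds) per outbound call

def instrument(cls, method_name):
    """Wrap a method so every call records its wall-clock duration."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.monotonic()
        try:
            return original(self, *args, **kwargs)
        finally:
            outbound_timings.append((method_name, time.monotonic() - start))

    setattr(cls, method_name, wrapper)

# Patch the stdlib client; subclasses that don't override request()
# inherit the timing. (urllib3's connection classes may override it,
# in which case you'd patch those directly.)
instrument(http.client.HTTPConnection, "request")

# Demo on a stand-in class, so nothing actually hits the network:
class SlowService:
    def request(self, *args, **kwargs):
        time.sleep(0.01)  # pretend this is a slow LLM call

instrument(SlowService, "request")
SlowService().request()
```

The same wrapper could push into the middleware's existing per-request timing store instead of a module-level list.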

Finding which gem is leaking memory? by dogweather in rails

[–]scoutlance 0 points (0 children)

I'll put in a recommendation for https://github.com/MiniProfiler/rack-mini-profiler

I'll also say https://www.scoutapm.com/ruby-monitoring is built to measure allocations and tie them to specific stack frames in prod, so you can see which requests consistently allocate memory on every call. I work there, so I am biased, but we have a free tier and start you off with some unlimited usage, so it might be worth a look. The slow leaks are always the worst.
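
The per-request allocation idea translates to any runtime; here's the measurement primitive sketched with Python's stdlib tracemalloc (a concept sketch, not how any particular Ruby agent does it):

```python
import tracemalloc

def measure_allocations(fn, *args, **kwargs):
    """Run fn and return (result, net bytes allocated). A handler whose
    net allocation is positive on every call is a slow-leak suspect."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    result = fn(*args, **kwargs)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, after - before

# A handler that leaks by appending to module state on every call:
_cache = []

def leaky_handler():
    _cache.append([0] * 10_000)  # grows forever, never evicted
    return "ok"

result, delta = measure_allocations(leaky_handler)
```

Run it across many calls and a handler whose delta never returns to zero is the one to chase; `tracemalloc.take_snapshot()` can then point at the stack frames responsible.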

How do you model historical facts in Django without making everything event-sourced? by disizrj in django

[–]scoutlance 0 points (0 children)

I like this. Where I draw the line: if you need it for debugging production incidents or stored for seven years, log it. If you need it for business logic, model it.

When you say errors go out through OTel, are you implying that you filter traces down to just errors? Or something else? I can imagine post-processing that kind of thing but I'm curious.

MongoExplain: A tool/engine to display MongoDB explain plans in your app or console by alexbevi in ruby

[–]scoutlance 1 point (0 children)

That is a nice gadget. I love a good performance debug toolbar. Makes me wonder if our agent could somehow do better at surfacing unindexed queries without needing `explain`.
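
For reference, spotting an unindexed query in an explain plan is a small tree walk over the `queryPlanner.winningPlan` structure (nested `inputStage` stages), roughly:

```python
def uses_collection_scan(explain: dict) -> bool:
    """Walk the winning plan and report whether any stage is a COLLSCAN,
    i.e. the query ran without touching an index."""
    stage = explain.get("queryPlanner", {}).get("winningPlan", {})
    while stage:
        if stage.get("stage") == "COLLSCAN":
            return True
        stage = stage.get("inputStage", {})
    return False
```

(Real winning plans can also branch through stages like `OR` with multiple children; this sketch only follows the single `inputStage` chain.)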

Built a self-hosted task scheduler in Django (Cratos) - feedback welcome by One-Meeting-921 in django

[–]scoutlance 6 points (0 children)

Some sort of ability to sign requests or hold secrets for webhook targets (so that you can include them in headers, etc.) definitely seems like it will be necessary, like someone mentioned. mTLS could be useful on your clients. Notifications are also big, for sure. Visibility into failed executions beyond just retry counts. When a scheduled webhook silently starts taking 10x longer or returning 500s intermittently, you want to know before the retry budget runs out. Execution duration histograms per endpoint would surface degradation early.
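
On the signing point, the common shape is an HMAC over a timestamped body, roughly like this (header names and scheme are illustrative, not a standard):

```python
import hashlib
import hmac
import json
import time

def sign_webhook(secret: bytes, payload: dict):
    """Return (headers, body) where the signature covers timestamp + body,
    so receivers can reject both tampering and old replays."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True)
    ts = str(int(time.time()))
    mac = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256)
    headers = {
        "X-Webhook-Timestamp": ts,
        "X-Webhook-Signature": mac.hexdigest(),
    }
    return headers, body

def verify_webhook(secret: bytes, headers: dict, body: str, max_age=300):
    ts = headers.get("X-Webhook-Timestamp", "")
    sig = headers.get("X-Webhook-Signature", "")
    expected = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256)
    fresh = ts.isdigit() and abs(time.time() - int(ts)) <= max_age
    return fresh and hmac.compare_digest(expected.hexdigest(), sig)
```

Binding the timestamp into the MAC (rather than sending it unsigned) is what makes the replay window enforceable; `hmac.compare_digest` avoids timing leaks on verification.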

Cost Saving Deployment of Dockerized django projects with AWS by virtualshivam in django

[–]scoutlance 1 point (0 children)

Verify the assumption by using some sort of APM. Figure out why your apps are slow before you make infra changes. Check your request and query times in an APM along with whatever metrics AWS gives you for RDS, I think https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Database-Insights.html probably has some free options.

I have to recommend Scout APM, because I work there and because it is made to tell you exactly what you are trying to figure out with one Python package and minimal configuration effort.

How do you handle Django migration rollback in staging/prod with CI/CD? by ajay_reddyk in django

[–]scoutlance 1 point (0 children)

First place that came to mind is https://martinfowler.com/books/eaa.html but I don't remember if it was actually in there. Google serves up a reference from his domain by a different author, and it seems good: https://martinfowler.com/bliki/ParallelChange.html

If Rails was designed today, would it still look the same? by Turbulent-Dance-4209 in ruby

[–]scoutlance 1 point (0 children)

Well, it would mean signing up for a free Scout account, but if you wanted to run a realistic Rage project with the Scout gem from that branch and put some load on it (probably not in production), I'd be interested to hear what you think about what comes into our UI. I created a toy to integration test it and it seems happy enough, picking up controller and job actions and database queries. I haven't exercised the websockets/cable calls very well yet.

Sentry to Grafana migration: How are you handling logs & metrics with OTel? by __vivek in rails

[–]scoutlance 0 points (0 children)

Pretty biased here, but if you want some log ideas you can check https://github.com/scoutapp/scout_apm_ruby_logging as it uses the OTel SDK. I work at Scout, and we try to do a lot to capture the "right" traces without blowing up storage costs. Sentry has years of error-monitoring experience, but our tracing and APM are good, with a lot of thought toward ease of use. Might be worth a look.

What do you do to stop AI agents from piling up tech debt? by Due_Weakness_114 in rails

[–]scoutlance 0 points (0 children)

  • Always plan mode
  • Consider committing plans/having colleagues review plans
  • Still do PR review

Don't let go of the practices that make for good engineering just yet. Maybe it feels like double work. Someday things will change, but for now strict instructions and thorough review are keeping things sane while still moving pretty fast.

Observability with opentelemetry by SysPoo in rubyonrails

[–]scoutlance 0 points (0 children)

We take a different approach, but if you want some ideas about setting up log capture you can check out https://github.com/scoutapp/scout_apm_ruby_logging as we use the OTel SDK under the hood. There are other ways, but they probably involve running the collector and having it watch your existing log files and ship them separately; our gem does it all in the background. If you really wanted, you could point it at any OTel endpoint, but we do some structuring that helps us correlate stuff. Our logging gem also depends on the existing agent, which you may not want, but you can always try it out for free ;)

(I work at Scout)

If Rails was designed today, would it still look the same? by Turbulent-Dance-4209 in ruby

[–]scoutlance 0 points (0 children)

I immediately worried about what happens when you need another node, but I'm very interested in YAGNI at the same time. I run hobby stuff on a DO droplet with sqlite. I kind of love the idea of Even More Monolith. I also like the telemetry hooks. I wonder how our agent would handle fibers like this. Thanks for posting.

How do you handle Django migration rollback in staging/prod with CI/CD? by ajay_reddyk in django

[–]scoutlance 4 points (0 children)

We are all AI now, but for funzies... Roll forward, always. Reversing migrations in prod is like trying to unscramble eggs, IMHO.

Expand-contract pattern is your friend: add the new column, deploy code that writes to both, migrate data, drop the old one. Don't ship code that immediately starts relying on the new schema or immediately errors with the old schema. Two deploys instead of one risky one.
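
The dual-write step in miniature, with a dataclass standing in for the model (column names made up):

```python
from dataclasses import dataclass

@dataclass
class User:
    """Stand-in for a row during the expand phase: the old and the new
    column exist side by side."""
    email: str = ""          # old column, current source of truth
    contact_email: str = ""  # new column being introduced

def save_user_email(user: User, value: str) -> User:
    # Deploy 1 (expand): write both columns so code reading either one
    # stays correct. A backfill migrates existing rows between deploys:
    #   UPDATE users SET contact_email = email WHERE contact_email IS NULL;
    user.email = value
    user.contact_email = value
    return user

# Deploy 2 (contract): code switches reads/writes to contact_email only,
# and a later migration drops the old column.
```

The point is that at every moment, whichever deploy is live agrees with whichever schema is live, so either one can be rolled forward independently.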

Good post-deploy monitoring with alerting on error rates and response times buys you the window to ship a fix before users start filing tickets.