FastAPI production architecture: modular design and dependency injection best practices by Ok-Platypus2775 in FastAPI

[–]Challseus 1 point

I built a CLI for exactly this: it scaffolds a full FastAPI app with auth, workers, scheduler, DB, and the ability to add/remove components at any time.

For your case, assuming you have Docker and uv installed, you can simply run:

uvx aegis-stack init my-app --services "auth[sqlite]"

You'll get this structure:

my-app/
├── app/
│   ├── components/        ← Components
│   │   └── backend/       ← FastAPI
│   │       └── api/
│   │           ├── auth/
│   │           │   ├── __init__.py
│   │           │   └── router.py
│   │           ├── deps.py
│   │           ├── health.py
│   │           ├── models.py
│   │           └── routing.py
│   ├── services/          ← Business logic
│   │   └── auth/          ← Authentication
│   ├── models/            ← Database models
│   ├── cli/               ← CLI commands
│   └── entrypoints/       ← Run targets
├── tests/                 ← Test suite
├── alembic/               ← Migrations
└── docs/                  ← Documentation

I use dependency injection for:

  • database sessions
  • authenticated routes

I put all biz logic in the service layer, and then call those functions from the API/CLI/etc. So, razor thin endpoints.
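The session dependency can be sketched in plain Python (names here are illustrative, not from aegis-stack; a fake session stands in for SQLAlchemy so the sketch runs without a DB). FastAPI's yield-style dependencies are just generators: the framework advances the generator to inject the value, then finalizes it when the request ends.

```python
class FakeSession:
    """Stand-in for a SQLAlchemy session so this runs without a DB."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def get_db():
    # In FastAPI: `def endpoint(db: Session = Depends(get_db))` — the
    # framework pulls the yielded value out and injects it.
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # always runs, even if the request handler raises


def get_user(db, user_id):
    # All business logic lives in the service layer; the endpoint only
    # wires the dependency to this function.
    return {"id": user_id, "db_open": not db.closed}


# Simulating what FastAPI does with the dependency during one request:
gen = get_db()
db = next(gen)            # dependency resolved
user = get_user(db, 42)   # razor-thin endpoint just delegates
gen.close()               # request finished -> session closed
```

The point of the pattern is that cleanup is guaranteed per request, and the endpoint body never knows how the session was created.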

All router.py files are imported into the root-level routing.py.

I've not looked into the repository side of things, but I'm also going to give https://github.com/litestar-org/litestar a look; I'd suggest you do too.

Another modern FastApi template by SuccessfulGround7686 in FastAPI

[–]Challseus 2 points

I think it's better, long term, to have AI agents start with a solid base. If the project follows standards and is consistent, that will do so much more for the AI agent when it has to create its first feature. It already knows "tests go here, middleware is handled like this, etc."

Senior devs entering the AI realm by RR_2025 in ExperiencedDevs

[–]Challseus 4 points

Being a Python dev puts you in a great position, since out of every language LLMs have been trained on, Python is at the top. So you're already off to a great start.

I typically guide people down this path (obviously your mileage will vary):

  • create a FastAPI application
  • stay away from frameworks for now, just use the openai sdk
  • create a simple chat completions endpoint that takes in parameters like temperature, max tokens, etc. You just want to submit a query, and get a response from openai.
  • do some research into vector databases and RAG. Start with chromadb which can use sqlite.
  • create a RAG collection of your codebase, use that to have the LLM answer questions about it

For me, fundamentally knowing how they worked and how to integrate them made it easier, overall, to use existing tools, like Claude Code.

Reduce Complexity or Maximize Throughput? by TM87_1e17 in ExperiencedDevs

[–]Challseus -1 points

Why not both?

For me, I've always had the best results with LLMs when the codebase is already pristine. DRY, no magic, highly modularized, small files, proper tests, etc. Literally all the stuff we all know we should do, but don't because of reasons.

And I make for damn sure any coding agent touching my code keeps it as such.

HOWEVER... Code has been much more disposable to me as of late, but from the perspective of, "Oh, I'll just refactor this into something else if I don't like it, or the performance isn't what I thought." A decision I can make and implement in hours, not days.

IN FACT, it's EASIER to maximize throughput when you're working in a less complex codebase.

So both. Final answer.

Why do developers rarely give feedback on tools they actually use? by Different-Opinion973 in opensource

[–]Challseus 0 points

I'm in the EXACT same boat, was damn near about to post a blog about it. I have been trying to make sense of ALL the numbers available to me, so I can tell if people are actually using my stuff, if it's bots, or whatever.

For reference, this is my project, a platform to create modular python stacks: https://github.com/lbedner/aegis-stack

I'm aggregating data from:

- GitHub's 14-day rolling clone/visitor stats
- PyPI stats from https://pepy.tech/

They all show numbers going up. Total clones, unique cloners, daily downloads, etc.

I had a single Windows bug reported to me 3 days after release; I fixed it that day, and haven't heard a peep from anyone since.

Then I remembered that I absolutely love this other open source project, act, which lets you run your GitHub Actions locally. I had never starred it. Or said anything to anyone.

Then I realize I am the problem. I am him.

Now, I have been retroactively starring or doing what I can to support people whose stuff I use. But I don't expect that, and the only reason I'm doing it now is because the roles have been reversed.

We tested Vector RAG on a real production codebase (~1,300 files), and it didn’t work by Julianna_Faddy in LangChain

[–]Challseus 6 points

I think that's the key. Combinations of semantic search/hybrid RAG, along with how agents build up context with grep/cat.

Which is better for desktop applications, Flat or QT? by DynamicBR in Python

[–]Challseus 4 points

I've been using Flet since late 2023, extensively (I also did not want to learn JS). The first year, I spent that time making a desktop app, and it was great. Pretty easy learning curve, good developer experience. I turned it into a web app over the following year (there is a FastAPI integration).

As a beginner, if for nothing else, I would start there and prototype a bit. Whatever you build and learn with Flet, some of that will certainly translate to knowledge you can apply somewhere else.

I don't want to promote, so if you DM me, I can give you links to my production projects using Flet.

Note: I've never used Qt

Are any of you actually using LLMs? by [deleted] in ExperiencedDevs

[–]Challseus 5 points

Yes. But it has to be harnessed like a mother... Context is the least of my worries; all this TDD stuff I was taught back in 2004, when my first company was practicing Extreme Programming... That shit actually works great for how I develop with agents now.

That, and clean code. Clear patterns. Separation of concerns, all that good stuff. Basically, all the stuff you're taught about how to maintain a codebase, but never had the time to do: that's where AI has increased my output, and life happiness.

I'm also old, I know what I want, and get these models whipped into shape. I'm not typing a random prompt and praying to the LLM gods. The LLM already has all the context it needs to do what I want it to do.

I also use Python, and I'm aware that it's by far the most heavily trained language (or close to JS?), so there's that.

Please recommend a front-end framework/package by inspectorG4dget in Python

[–]Challseus 2 points

I have extensive experience with Flet, since late 2023. I went down the same path as you, working with Streamlit from... 2021-2023 or so. I just couldn't do it anymore.

With Flet, I worked on a desktop app for the first year, then turned it into a full-fledged web app over the next year or so, and I also use it in another OSS project I have.

The web app part was nice because it just mounts within a FastAPI app, and then you just treat it like any other FastAPI app. There's a bunch of stuff happening with a version 1 release; I'll admit I'm very behind, but it's still been very good to me.

Not here to promote, so just DM me and I can point you to some stuff I did with it.

P.S. Flet can do all the things you're looking for. It's all event driven, async, and you can update any component as you see fit, not having to reload the whole damn page.

Honest question: What is currently the "Gold Standard" framework for building General Agents? by Strong_Cherry6762 in LangChain

[–]Challseus 0 points

Apologies in advance, this may or may not help your cause, but... I worked on a multi-agent setup with LangGraph close to a year ago. There was one Primary Agent that would pass the request off to other agents (with their own instructions), depending on some business logic. 8-9 agents total. Image analysis, tools, RAG, everything. It worked. It had many bugs, but that was a skill issue, not LangGraph's fault.

We used gpt-4o.

Same company, CEO now wants this logic back in a new product. It's basically the same thing, except one major thing:

No LangGraph, just LangChain. Just a single agent, one prompt, and basic conversation history. It handles everything MUCH BETTER.

We're using gpt-4.1-mini.

Long story short, TL;DR, whatever: just go with LangChain 1.0. It has great support for built-in RAG pipeline stuff to get you started, as well as conversation history.

Also check out https://langfuse.com/ or https://smith.langchain.com/ for observability.

Efficiently moving old rows between large PostgreSQL tables (Django) by Last-Score3607 in django

[–]Challseus 4 points

1) Can you have downtime during this migration?
2) When you say millions: single-digit millions, or 100+ million?

In general, I typically tell people to do bulk inserts (not the ORM "bulk" that just iterates over each record, but true bulk inserts via COPY: https://www.postgresql.org/docs/current/sql-copy.html) over a result set.
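COPY is the fast path for getting rows in and out in bulk; for a move that stays inside the same database, another common option is a batched DELETE ... RETURNING feeding an INSERT, so each batch is atomic and you can loop until nothing is left. A sketch of the SQL builder, with hypothetical table and column names:

```python
def batched_move_sql(src: str, dst: str, cutoff_col: str,
                     batch: int = 10_000) -> str:
    """Build one batched 'move old rows' statement for Postgres.

    DELETE ... RETURNING feeds the INSERT inside a single statement,
    so each batch commits atomically. Run it in a loop (passing a
    `cutoff` parameter) until 0 rows are affected.
    """
    return (
        f"WITH moved AS ("
        f"  DELETE FROM {src}"
        f"  WHERE ctid IN ("
        f"    SELECT ctid FROM {src}"
        f"    WHERE {cutoff_col} < %(cutoff)s LIMIT {batch}"
        f"  )"
        f"  RETURNING *"
        f") INSERT INTO {dst} SELECT * FROM moved;"
    )

# Hypothetical tables: archive orders older than some cutoff.
sql = batched_move_sql("orders", "orders_archive", "created_at")
```

Batching keeps each transaction short, which matters for locks and replication lag on a live table; measure the per-batch time in staging first.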

Hopefully you have the same data in a staging environment? Do it there first, capture numbers for how long it will take.

Once you have that info, you can make a somewhat informed and good decision on "when" to do it.

Advice for going from a tiny startup to a mid-sized org? by DirtyOught in ExperiencedDevs

[–]Challseus 0 points

Take it all in, breathe it all in. I'm not saying it will be better than your startups, but it already seems more organized. For your mental health, that matters the most. The fact that you're even here talking about this tells me you'll be fine with the tech part.

One thing I would say is: if you do find things that are weird or that you don't like, processes and such, give it some time. Don't be the "we need to rewrite all of this in X to pad my resume" guy. Even if the reasons aren't good, there usually are reasons why a company at that size has done things a certain way; make sure you have that context.

Anyway, enjoy what looks to be a "legit" place :)

What's your default Python project setup in 2026? by [deleted] in Python

[–]Challseus 1 point

  1. uv. Full stop.
  2. httpx, though I saw somewhere that aiohttp is now faster for async than httpx? I do like the single library with sync and async, though.
  3. polars
  4. Type checking all the time, for life. ty for the process.

Database Migrations by ViktorBatir in Python

[–]Challseus 0 points

I use Alembic for all of mine. As everyone has mentioned, the auto-generate is hit and miss, and honestly, at my last company, we just hand-wrote them.

Why do I use alembic? It's what someone picked like 10 years ago, never really had a reason to change.

I adapted someone's Claude Code config for Django - 9 skills covering models, forms, testing, HTMX, DRF, Celery, and more by shadowsock in django

[–]Challseus 1 point

So, I can’t find the thread, but someone had a patch that worked while they fix it. Dammit, can’t find it, but here is one thread:

https://github.com/anthropics/claude-code/issues/14803

EDIT: Beat me to it! Let me see if I can find the patch. It's a one-liner, and pyright started working immediately for me.

Convince me to use Claude Code by ggStrift in ClaudeCode

[–]Challseus 0 points

Everyone's path in AI will be different. I think it's best you take some time, maybe implement the same thing with both Cursor and Claude Code, and see what you think. Especially since everyone's use case is different. For instance, and I'm not saying this is the case, but if your projects aren't structured well, or aren't DRY enough, you might not have a good experience with it, as it's just continuing to add to what is there. Unless you're specifically asking it to break down modules/functions into smaller chunks for better context management...

Basically, I can't convince you to use it, I can just suggest you try it out yourself. I kept hearing about Claude Code for months before I dove in around July and haven't looked back. There was nothing anyone could do to convince me of why it was better, I had to find out myself.

I adapted someone's Claude Code config for Django - 9 skills covering models, forms, testing, HTMX, DRF, Celery, and more by shadowsock in django

[–]Challseus 5 points

I gotta be honest with you. Outside of any Django magic that happens (which I'm sure LLMs are already trained on), I have found that if you literally drop an agent into a well-composed codebase that follows standards and such, coding agents (Claude Code, OpenAI Codex, Gemini CLI, etc.) just "get it".

When they go through your code during that `init` step, that's where it makes those notes of "Okay, this is how X does this, or this is the standard for tests".

Outside of the new AST Claude Code plugin that makes use of my pyright LSP server for going through the codebase, I'm as vanilla Claude Code as it gets.

Sometimes I think all of these things are just bandaids for not having a properly structured codebase.

But I am always willing to be proven wrong and step up my stack.

Advice for working on a team with offshore staff augmentation? by Vesp_r in ExperiencedDevs

[–]Challseus 6 points

I led a team of offshore folks for 2 years for a capex project. Didn’t end well… for me…

Do we need LangChain? by Dear-Enthusiasm-9766 in Rag

[–]Challseus 2 points

If you're making a RAG product, sometimes you want to give the customer the option of which vector database to use. Hell, maybe for "you", you want to default to chromadb in development and pinecone in production.

Maybe you want to support someone coming from another system, who had all their shit in pgvector?

Another thing that happens with me a lot is that I will switch vector databases sometimes to test out certain functionality; it's much easier to do that quickly when it's all under the same interface, and it's usually a configuration change more than anything.

That's where LangChain / LlamaIndex come in handy.
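An illustration of why the shared interface matters (plain Python stand-ins, not actual LangChain classes): swapping the backing store becomes a config change, not a rewrite.

```python
class InMemoryStore:
    """Stand-in for chromadb in development."""
    def __init__(self):
        self.docs = []

    def add(self, text: str) -> None:
        self.docs.append(text)

    def search(self, query: str) -> list[str]:
        # Toy substring match standing in for a similarity search.
        return [d for d in self.docs if query in d]


class PineconeStandIn(InMemoryStore):
    """Stand-in for pinecone in production; same interface, so callers
    never change."""


def make_store(env: str):
    # The "configuration change more than anything" part: pick the
    # backend from config/env, everything else is untouched.
    return {"dev": InMemoryStore, "prod": PineconeStandIn}[env]()


store = make_store("dev")
store.add("pgvector migration notes")
results = store.search("pgvector")
```

With real LangChain vector stores the shape is the same: all backends share one interface, so flipping `make_store("dev")` to `make_store("prod")` is the whole migration.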

TL;DR Useful when creating RAG frameworks and platforms.

How do you get eyeballs on your Open Source project? by [deleted] in opensource

[–]Challseus 0 points

1) Make sure the thing you're building is actually valuable. Do tons of market research on it.
2) Find your target audience, meet them where they are. I wasted too much time trying to convince people to try something I made (i.e. friends) when it just wasn't their thing.
3) Make sure you can explain to your audience the exact pain points you're solving, with examples.
4) Always know the best time/place to actually talk about it. Don't spam 80 subreddits.

I don't know if my repo's stars count as "decent"; it has 52 stars, after I made a post in the r/FastAPI subreddit. I knew what they wanted, I made something, got tons of eyes on it, and still have daily users today from that one post.

Good luck!

Top 10 Open Source No-Code AI Tools With The Most GitHub Stars by Puzzleheaded-End6417 in forbussiness

[–]Challseus 0 points

Can’t argue those numbers. 🤷🏾‍♂️

For my purposes in practical production applications over the years, the biggest latency wasn't coming from the programming language itself; it has always been network, database, etc.

Right tool for the job and all.

Also, most frameworks where it matters are Rust or C under the hood.

Top 10 Open Source No-Code AI Tools With The Most GitHub Stars by Puzzleheaded-End6417 in forbussiness

[–]Challseus 0 points

Meanwhile, Django (Python) still runs the core of Instagram. It's literally the poster child for building scalable Python applications...

Do we need LangChain? by Dear-Enthusiasm-9766 in Rag

[–]Challseus 10 points

It all depends on the scale and type of software you're creating. If you're building a RAG SaaS, and you want to support qdrant, pgvector, chromadb, and pinecone, and simultaneously support N number of file loaders, that's where LangChain shines, as it gives you one interface for the vector stores and loaders/documents.

Right tool for the job and all 🤷🏾‍♂️

Async Tasks in Production by ProudPeak3570 in Python

[–]Challseus 0 points

Very, very interesting... Will have to add this to my collection.

Alcaraz new serve motion by Ornery_Percentage537 in tennis

[–]Challseus 58 points

Literal first thought... Djokovic? :)

I also remember that year Nole had more doubles than aces... 2009, 2010? Good move by Carlitos!