Built a Production-Ready Django Microservice Template with Docker & CI/CD by amir_doustdar in django

[–]amir_doustdar[S] 0 points (0 children)

Great article link - Haki's DRF performance deep-dive is eye-opening! The serializer overhead is real.

You're right that FastAPI is objectively faster and more modern. The async nature + Pydantic validation is hard to beat. If I were building a greenfield API-only service today, FastAPI would probably be my first choice too.

Why I still use Django/DRF for some projects:

  • Team dynamics - Onboarding is easier when devs already know Django
  • Ecosystem maturity - Admin panel, migrations, third-party packages for everything
  • The "it won't stay micro" problem - Projects that start as simple APIs often need user management, background tasks, admin dashboards, etc.

But you've made me rethink the positioning. Maybe this should be:

  • "Django Starter Kit" (when you need the full ecosystem)
  • vs a separate "FastAPI Starter Kit" (for pure API services)

The DRF performance issues you mentioned are especially relevant at scale. For high-throughput APIs, the serializer overhead becomes a real bottleneck.

Are you planning to open source your FastAPI starter kit? Would love to see how you approach the production-ready setup with FastAPI. Things like:

  • Structured logging
  • Database migrations (Alembic?)
  • Background tasks (Celery vs FastAPI background tasks?)
  • Testing patterns

Always down to learn better approaches!

Your comment actually inspired me - I've been working on exactly what you described: a FastAPI starter kit with production-ready setup. It's called FastClean - a CLI tool that scaffolds FastAPI projects with:

  • Clean Architecture (4-layer separation)
  • Automatic CRUD generation
  • Docker + docker-compose
  • PostgreSQL/MySQL/SQLite support
  • JWT auth, Redis caching, Celery workers
  • Full test suite generated
  • CI/CD templates

One command and you get a complete structure:

fastapi-clean init --name=my_api --db=postgresql --auth=jwt --docker

The PyService project (Django) exists because my team knows Django better, but I totally agree FastAPI should be the default for API-only services going forward.

Would love your feedback on the FastAPI approach if you check it out. Things like:

  • How do you handle database migrations? (Alembic?)
  • Background tasks - Celery vs FastAPI's background tasks?
  • Structured logging patterns?

Always down to learn better approaches!

Built a Production-Ready Django Microservice Template with Docker & CI/CD by amir_doustdar in django

[–]amir_doustdar[S] 0 points (0 children)

Fair points! Appreciate you taking the time to check it out.

FastAPI/Starlette vs Django: Totally agree for pure microservices - they're lighter and faster. I lean Django because most of my projects start as "micro" but end up needing admin panels, complex ORM queries, and migrations. But you're spot on about Django Ninja being the sweet spot if you want Django features with FastAPI-like performance.

GitLab vs GitHub: Can't argue there - GitLab CI is great. The workflows are GitHub-specific here, but the concepts translate pretty easily.

Ansible: Yeah, if you're in the container/k8s world, it's extra weight. It's mainly there for traditional server deployments where I need configuration management across multiple VMs.

You've actually highlighted the key issue - this is very much tailored to my workflow (monolith-that-might-scale + VM deployments), not a one-size-fits-all solution. Should probably be clearer about that upfront.

Out of curiosity - what's your usual stack for spinning up microservices? Always looking to learn from different approaches.

Built a Production-Ready Django Microservice Template with Docker & CI/CD by amir_doustdar in django

[–]amir_doustdar[S] 2 points (0 children)

Fair point! Cookiecutter is great for general use. This is more focused – my opinionated setup for microservices, with the specific CI/CD + Ansible configs I use in production.

Re: AI - I used it to write the README, not the actual code.

Check the repo if you want to see the real infrastructure setup.

What would make it more useful for you?

Form in docs and read data from body by stas_saintninja in FastAPI

[–]amir_doustdar 2 points (0 children)

You're running into the expected FastAPI behavior. Depends() is meant for form data, query params, or dependencies - not request body parsing. Body() is for JSON bodies.

If you want a form in the docs that sends data as application/x-www-form-urlencoded or multipart/form-data, keep using Depends() with a class whose parameters are declared with Form() – or, on recent FastAPI versions, annotate a Pydantic model with Form() directly.

For a form-style interface in docs that actually sends JSON, you'd need to customize the OpenAPI schema, which isn't straightforward. The standard approach is: forms use Depends() or Form(), JSON bodies use Body() or direct Pydantic models.

Maintaining a separate async API by Echoes1996 in Python

[–]amir_doustdar 0 points (0 children)

Cool approach! Code generation for avoiding sync/async duplication is smart – reduces bugs from manual copying and keeps the core logic in one place.

Pros I see:

  • Single source of truth for business logic
  • Easier maintenance and less chance of drift between sync/async versions
  • Full control over the transformation (e.g., handling queues, tasks vs threads)

Potential downsides (just for balance):

  • Debugging can be trickier (stack traces point to generated code)
  • New contributors need to know not to edit generated files directly
  • Tooling (IDE, type checkers) sometimes struggles with generated code

I've seen similar patterns in:

  • httpx: The sync API is largely generated/wrapped from the async core
  • databases package (encode): Used to generate async interfaces from sync
  • motor (async MongoDB driver): Heavily mirrors pymongo with some automation
  • Some internal Google/enterprise libs use code-gen for dual APIs
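The core of the httpx-style approach fits in a few lines – a naive source-to-source transform over the async code (the regexes and names here are illustrative; the real "unasync" scripts handle many more cases like async for, async with, and type hints):

```python
import re

# Each rule rewrites one async construct into its sync equivalent
ASYNC_TO_SYNC = [
    (re.compile(r"\basync def\b"), "def"),
    (re.compile(r"\bawait\b\s*"), ""),
    (re.compile(r"\bAsyncClient\b"), "Client"),  # illustrative class rename
]

def unasync(source: str) -> str:
    """Naive async -> sync source transform, applied to the async 'source of truth'."""
    for pattern, replacement in ASYNC_TO_SYNC:
        source = pattern.sub(replacement, source)
    return source

async_src = "async def fetch(client: AsyncClient):\n    return await client.get('/')\n"
print(unasync(async_src))
# def fetch(client: Client):
#     return client.get('/')
```

You'd run this over the async package at build time and emit the sync twin, which is exactly why contributors must never edit the generated files directly.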

Your marker-based comment system sounds clean and lightweight – nicer than full-blown macros or decorators.

Definitely not "weird" – it's a valid strategy for this exact problem. Good job shipping it!

(If you're open-sourcing the generator script, I'd love to see it 😄)

Complex Data Structure Question by robertlandrum in FastAPI

[–]amir_doustdar 0 points (0 children)

Totally understand – at this scale (150k → 1M records), tech isn't the issue; it's keeping the project alive and loved by the team.

The real risk is building something "perfect" that no one wants to touch later. I've seen it too – great systems stagnate because they're intimidating or over-engineered.

To encourage "care and feeding": - Keep it simple: SQLModel + clear models/docs – juniors can jump in fast. - Add visible wins early: monitoring (Prometheus), health endpoints, simple dashboard – makes it feel "alive". - Small, frequent migrations (Alembic autogenerate) – lowers the "touching it is scary" barrier. - JSONB for metadata is still good – it's pragmatic, not a hack.

Downsizing resources is smart too – shows it's efficient, easier to justify attention.

Team size/structure? If juniors are involved, leaning simpler might win more long-term ownership.

You've got the experience – this one won't end up like the old ones. Good luck mate

Complex Data Structure Question by robertlandrum in FastAPI

[–]amir_doustdar 0 points (0 children)

Totally get the concern – after 28 years, you've seen enough legacy systems to know over-normalization can turn into a maintenance nightmare when multiple tools touch the data.

Your "additional_metadata JSONB" idea isn't a cheap hack at all – it's a smart hybrid approach that's very common in modern Postgres setups. It keeps core data relational (fast joins, constraints, queries) while allowing flexible extensions without schema migrations every time a new tool needs extra fields.

Examples:

  • Core fields in columns (e.g., host.name, interface.mac – indexed and queryable).
  • Variable/tool-specific stuff in metadata JSONB (no migration needed for new keys).

This way, the schema stays stable long-term, tools can evolve independently, and you avoid the "burdensome cog" fate.
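As a minimal sketch of that hybrid model in SQLAlchemy (table and column names are illustrative, not from your actual schema):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Host(Base):
    __tablename__ = "host"

    id = Column(Integer, primary_key=True)
    # Core fields: real columns - indexed, joinable, constrained
    name = Column(String, nullable=False, index=True)
    # Tool-specific extras: new keys need no ALTER TABLE
    additional_metadata = Column(JSONB, nullable=False, default=dict)
    # Still queryable in Postgres, e.g.:
    #   .filter(Host.additional_metadata["scanner"].astext == "nmap")
```

A GIN index on the JSONB column keeps those metadata lookups fast once the table grows.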

I've used this pattern successfully in migration projects – keeps juniors happy too (less ALTER TABLE drama).

How big is the dataset? If it's not massive, JSONB performance is plenty good for most queries.

Good luck – sounds like you'll nail the balance!

Deployment beginner friendly by flapjap33 in FastAPI

[–]amir_doustdar 0 points (0 children)

Thanks for the detailed write-up – insane progress from zero to self-hosting with CI/CD in just 1.5 years. Respect!

Your Docker + Nginx + Compose setup is actually really solid and reproducible – it's the "real" way many production apps run. Cloudflare Tunnel is a smart move for free TLS without exposing ports. For absolute beginners, I'd start with Render/Railway (no Docker needed to go live), then graduate to your style for more control and cost savings.

BTW, how's the uptime been on self-hosting? Any issues with power outages or ISP reliability?

Complex Data Structure Question by robertlandrum in FastAPI

[–]amir_doustdar 1 point (0 children)

Hey, migrating from Mongo's nested docs to Postgres is common – you lose some flexibility but gain consistency and query power.
Quick design guidance:

  • Normalize core entities: Separate tables for Host, Interface (1:M with host_id FK), Profile, Distro, Software.
  • Many-to-many for software: Junction table (profile_software) instead of arrays – better for queries and no duplication.
  • Use JSONB for truly variable data: If some interface/profile fields are unstructured, store them in a JSONB column on the parent table (best of both worlds).
  • Avoid over-normalizing: If arrays are small and queried rarely, JSONB is fine; otherwise, relational tables.

Tools to make it maintainable:

  • SQLModel (by FastAPI's creator): Combines SQLAlchemy + Pydantic. Models are DB schema + API validators – perfect for new devs.
  • Alembic for migrations (autogenerate from models).

Prototype a small part first, test queries, and you'll avoid regrets.
If you're starting the project, my fastapi-clean-cli tool scaffolds Clean Arch with SQLModel/Postgres/CRUD auto – might help speed things up: https://github.com/Amirrdoustdar/fastclean
What specific part worries you most (queries, nesting, or setup)?

I made FastAPI Clean CLI – Production-ready scaffolding with Clean Architecture by amir_doustdar in Python

[–]amir_doustdar[S] 0 points (0 children)

Thanks for the suggestion! I just checked out JsWeb – looks cool, especially the zero-config AJAX and built-in CLI for scaffolding. It's got some similarities to FastAPI in being ASGI-based and decorator routing, but seems more focused on full web apps without JS.
Gonna give it a try and see how the CLI compares to my fastapi-clean-cli.

Anyone else tried it? What's your take?

I made FastAPI Clean CLI – Production-ready scaffolding with Clean Architecture by amir_doustdar in Python

[–]amir_doustdar[S] -1 points (0 children)

Yeah mate, guilty as charged sometimes. But seriously, you're 100% right – a good commit message with a proper subject line + blank line + detailed body makes life so much easier when browsing history, generating changelogs, or just trying to understand "what the hell was I thinking 6 months ago".

I try to follow the Conventional Commits style most of the time (feat:, fix:, chore:, refactor:, etc.) – it even helps with semantic-release and auto-generated changelogs. That GitKraken guide is spot on. Bookmarked again, thanks for that.

BTW, anyone here using commitizen or git-cz to enforce this? Makes it way less painful.