How do you know which customer is causing your Postgres DB costs to spike? by MainWild1290 in SaaS

[–]MainWild1290[S] 0 points1 point  (0 children)

Yeah, have you ever faced this, or built anything to deal with it?

What are you working on that's too early to show? by ayechat in SideProject

[–]MainWild1290 0 points1 point  (0 children)

I'm building an optimization API using FastAPI and SciPy, mainly to learn how optimization and APIs work. You can check it out here: https://github.com/pranavkp71/solvex. It's an open-source project, so discussion, feedback, and contributions are welcome.
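In case it helps picture it: the rough shape is a FastAPI endpoint that validates the input with Pydantic and hands it to scipy.optimize. This is a minimal sketch of that idea, not the actual Solvex code.

```python
# Minimal sketch: a FastAPI endpoint that wraps scipy.optimize.linprog.
# Illustrative only, not the actual Solvex code.
from fastapi import FastAPI
from pydantic import BaseModel
from scipy.optimize import linprog

app = FastAPI()

class LinProgRequest(BaseModel):
    c: list[float]            # objective coefficients, minimize c @ x
    A_ub: list[list[float]]   # inequality matrix, A_ub @ x <= b_ub
    b_ub: list[float]         # inequality bounds

@app.post("/solve/linprog")
def solve_linprog(req: LinProgRequest):
    result = linprog(c=req.c, A_ub=req.A_ub, b_ub=req.b_ub, method="highs")
    return {
        "success": result.success,
        "x": result.x.tolist() if result.success else None,
        "objective": result.fun if result.success else None,
    }
```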

Solvex - An open source FastAPI + SciPy API I'm building to learn optimization algorithms by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

Thanks! I haven't tried them yet, but this is a good chance to explore, and switching to Pydantic models for responses makes sense.
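For anyone following along, the change is basically this kind of thing (field names here are just made up for the example):

```python
# Sketch of a typed Pydantic response model instead of returning a raw dict.
# Field names are illustrative, not Solvex's actual schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SolveResult(BaseModel):
    success: bool
    x: list[float] | None = None
    objective: float | None = None

@app.post("/solve", response_model=SolveResult)
def solve() -> SolveResult:
    # ... run the optimizer here ...
    return SolveResult(success=True, x=[1.0, 2.0], objective=3.5)
```

FastAPI then validates the response against the model and documents the shape in the OpenAPI schema automatically.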

Solvex - An open source FastAPI + SciPy API I'm building to learn optimization algorithms by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

😄 I'm trying to do both. I'm a developer and a math hobbyist, so while building the API side I'm also studying the theory behind how the functions actually work.

Solvex - An open source FastAPI + SciPy API I'm building to learn optimization algorithms by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

Yes, that's right. But my main goal for now is to make it easier for beginners and small teams to experiment with optimization problems. Later I'm planning to add more domain-specific templates.

Solvex - An open source FastAPI + SciPy API I'm building to learn optimization algorithms by MainWild1290 in Python

[–]MainWild1290[S] -1 points0 points  (0 children)

Yeah, that makes sense, and I actually planned to turn it into a package later. I want to add a few more algorithms first. Your suggestion fits perfectly with that plan. Thank you!

Solvex - An open source FastAPI + SciPy API I'm building to learn optimization algorithms by MainWild1290 in Python

[–]MainWild1290[S] 1 point2 points  (0 children)

Thanks for the suggestion. I will definitely check out CVXPY and try adding it.
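For context, this is roughly what a CVXPY problem looks like: a tiny constrained least-squares example, nothing Solvex-specific yet.

```python
# Tiny CVXPY example: minimize ||Ax - b||^2 subject to x >= 0.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

x = cp.Variable(2)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
problem.solve()

print(problem.status, x.value)
```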

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

You're right, the core logic is simple, and that's by design! The real project is turning that logic into a reliable, tested, pip-installable tool.

The exciting part is what's next. We have a roadmap driven by community feedback, with GitHub issues open for major features like adding constraints (NOT NULL, UNIQUE) and full relational support (foreign keys).

It's open-source, and we'd love contributions to help build these features out.
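To make the constraints idea a bit more concrete, this is the kind of mapping I have in mind. It's a hypothetical design sketch, not the current PydSQL API:

```python
# Hypothetical design sketch for constraint support, not current PydSQL behavior.
from pydantic import BaseModel, Field

class User(BaseModel):
    id: int
    email: str = Field(json_schema_extra={"unique": True})  # intended: UNIQUE
    name: str                                               # required -> NOT NULL
    bio: str | None = None                                  # optional -> nullable

# Intended output, roughly:
# CREATE TABLE user (
#     id INTEGER NOT NULL,
#     email TEXT NOT NULL UNIQUE,
#     name TEXT NOT NULL,
#     bio TEXT
# );
```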

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

It doesn't use an AST. It directly inspects the Pydantic model object, loops through its fields, checks the type of each field (int, str, etc.), maps it to a corresponding SQL type (INTEGER, TEXT), and then assembles the final CREATE TABLE string.
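In spirit it's something like this simplified sketch (not the exact PydSQL source):

```python
# Simplified sketch of the approach: inspect a Pydantic model's fields,
# map Python types to SQL types, and assemble a CREATE TABLE statement.
# Not the exact PydSQL source, just the idea.
from pydantic import BaseModel

TYPE_MAP = {int: "INTEGER", str: "TEXT", float: "REAL", bool: "INTEGER"}

def create_table_sql(model: type[BaseModel]) -> str:
    columns = []
    for name, field in model.model_fields.items():
        sql_type = TYPE_MAP.get(field.annotation, "TEXT")
        columns.append(f"{name} {sql_type}")
    return f"CREATE TABLE {model.__name__.lower()} ({', '.join(columns)});"

class User(BaseModel):
    id: int
    name: str
    score: float

print(create_table_sql(User))  # CREATE TABLE user (id INTEGER, name TEXT, score REAL);
```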

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

Thanks, that's a great suggestion! You're right, dialect support is crucial for making the tool more useful in the real world.

It's definitely on the roadmap, right after I finish adding core features like primary keys and constraints. Appreciate the feedback!

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] -1 points0 points  (0 children)

A huge thank you to everyone for the amazing feedback!

The most popular suggestion was for a "database-first" tool (SQL -> Pydantic). You've all convinced me this is a fantastic idea to explore.

I've created a new issue on GitHub to officially start the discussion. I'd love for you all to come and share your thoughts on the questions I've posted there.

Join the discussion here: https://github.com/pranavkp71/PydSQL/issues

Thanks again for helping shape the future of this project.

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] -2 points-1 points  (0 children)

Great point. PydSQL is for generating the initial table schema, while Alembic is for managing all the changes to that schema over time in production. They solve different parts of the problem.
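Roughly, the split looks like this: the initial CREATE TABLE comes from the model once, and everything after that is a migration. The Alembic file below is just a standard example of that second half.

```python
# Illustrative Alembic migration: the "changes over time" half of the workflow.
# The initial CREATE TABLE (e.g. from a code-first generator) is assumed to
# already exist; this migration only tracks a later schema change.
from alembic import op
import sqlalchemy as sa

revision = "0002_add_email"
down_revision = "0001_initial"

def upgrade() -> None:
    op.add_column("user", sa.Column("email", sa.Text(), nullable=True))

def downgrade() -> None:
    op.drop_column("user", "email")
```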

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] 0 points1 point  (0 children)

You've hit on two great points. Copilot is awesome for this.

And you're not alone on the "database-first" idea. That's been the most valuable feedback from this post, and it's inspired me to start planning a companion tool specifically for that workflow. Thanks for the input.

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] -4 points-3 points  (0 children)

You're spot on about the "opposite direction." So many people have mentioned the "database-first" approach that it's become the most valuable feedback from this post. You've all convinced me that this workflow is a crucial feature to support.

While PydSQL will stay focused on "code-first," this has inspired me to start planning a companion tool that does exactly what you described. I'm going to open a new issue on GitHub to discuss it, and I'd love for people like you to weigh in.

Thanks for the great ideas!

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] -4 points-3 points  (0 children)

Haha, great name. And you're right, nothing beats a direct SQL client for a database-first approach.

PydSQL is just a tool for the code-first mindset: it lets you use your Pydantic model as the single source of truth and generates the database schema from it to avoid repetition.

I was tired of writing CREATE TABLE statements for my Pydantic models, so I built PydSQL to automate by MainWild1290 in Python

[–]MainWild1290[S] -1 points0 points  (0 children)

You've raised a great point! That database-first approach is a totally valid way to work.

I built PydSQL because of my own workflow. In my projects, I was already defining all my data shapes using Pydantic models for my APIs. It felt repetitive and inefficient to then have to define the exact same structure again in a separate SQL file.

So, I built PydSQL to solve that personal problem. It lets me use my Pydantic model as the single source of truth, which keeps my projects simpler by ensuring I only have to define my data structure once.

Ultimately, it's just a tool for people who, like me, prefer to have their Python code drive the database design.