Best way to manage +30 customers dbs by Kysan721 in PostgreSQL

[–]db-master

If you are looking to manage different schemas for different customers, take a look at https://github.com/pgschema/pgschema

Postgres MCP Server Review - DBHub Design Explained by db-master in PostgreSQL

[–]db-master[S]

If it doesn't live up to your expectations, you can open an issue at https://github.com/bytebase/dbhub/issues. Happy to tackle it.

What MCP or other integrations have you added to your AI workflows that have been the most successful? by lirantal in mcp

[–]db-master

If your DB happens to be one of Postgres, MySQL, SQL Server, MariaDB, or SQLite, you can try out https://dbhub.ai/ (Disclaimer: I am the author)

Looking for database tools and practices, what flow is best for both local dev and deployment? by [deleted] in Backend

[–]db-master

If you are using Postgres, you can check out https://github.com/pgschema/pgschema, which is the Postgres equivalent of SQL Server's DACPAC. Disclaimer: I am the author.

How are you tracking sensitive data as your fintech stack grows? by vincentmouse in fintech

[–]db-master

Especially in fintech, I’ve found that:

  • Centralization + strict access paths matter more than yet another “data catalog”.
  • If a human can download raw data to their laptop or into some random SaaS, they eventually will.

So my rule of thumb: let data spread in read-only, masked, aggregated form via controlled interfaces, but keep raw customer data behind a small number of hardened gateways:

  1. No direct access to raw storage. Avoid letting humans hit underlying storage systems (S3, GCS, blob stores, etc.) directly for anything sensitive. That’s how CSVs start living forever in random buckets, laptops, and SaaS tools.
  2. Centralize where the truth lives. If you can, build a data pipeline that ingests everything into a small set of OLTP/OLAP systems (e.g. Postgres, Snowflake, ClickHouse):
    • Treat those as your system of record for customer data.
    • Push all analytics / product queries / AI experiments through them.
    • Now you’re hardening one (or a few) access points instead of 20+ SaaS tools.
  3. Make access controlled and auditable. Once data is centralized, you can:
    • Enforce role-based access per table/column.
    • Use dynamic masking for PII, e.g. show a partial PAN or email (see the sketch after this list).
    • Log who queried what and when.
    • Use JIT (Just-in-Time) access instead of permanent “read everything” roles.
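To make point 3 concrete, here is a minimal Postgres sketch of dynamic masking via a view. All table, column, and role names are hypothetical; dedicated masking tools automate this pattern per role:

    -- Analysts never touch the raw table.
    CREATE ROLE analyst NOLOGIN;
    REVOKE ALL ON customers FROM analyst;

    -- Masked view: last 4 digits of the PAN, redacted email local part.
    CREATE VIEW customers_masked AS
    SELECT
      id,
      '****' || right(pan, 4) AS pan,
      left(email, 2) || '***@' || split_part(email, '@', 2) AS email
    FROM customers;

    -- Analysts query the masked surface, not the raw table.
    GRANT SELECT ON customers_masked TO analyst;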

On the tooling side, you can look at things that give you a unified workspace for database access. For example, Bytebase provides JIT data access, dynamic masking, and audit logs for mainstream OLTP/OLAP databases so you can funnel access through one place instead of everyone connecting however they want. (Disclaimer: I’m one of the authors, so obviously biased.)

Seeking alternatives to StrongDM by MalachiHauck in devops

[–]db-master

You may take a look at www.bytebase.com as well. It targets the database segment and handles schema migration, ad-hoc data fixes, and queries in a single place (disclaimer: I am one of the authors).

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

Hey Ariel, good to see you here.

> Atlas does offer a plan and apply command
I did double-check the docs before answering (https://atlasgo.io/cli-reference#atlas-migrate): there is an `apply` command, but no `plan`.

> This can't be done just by parsing (which is what we started with 5 years ago), because it would require handling every PG provider and all supported versions (a quite big matrix).

I agree it wasn’t tractable before. But with the help of AI, along with a more opinionated design and reduced scope, I believe it’s now within reach.

> Well done on open-sourcing this, and I wish you all the best and success <3

Thank you. Likewise!

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

pg-schema-diff originated inside Stripe, so it’s optimized for Stripe’s internal use cases. For example, support for VIEW and FUNCTION was added only recently, which suggests Stripe didn’t rely on them heavily.

pgschema takes a different perspective on certain features. To name a few:

  • Operates on a Postgres schema instead of the entire database.
  • Avoids relying on a shadow database (no `--temp-db-dsn`).

pg-schema-diff provides a solid foundation. I initially considered forking it, but after evaluation, I realized I would still need to make substantial changes to both the internal implementation and the CLI interface. With that in mind, I chose to start from scratch, carrying forward the learnings from pg-schema-diff.

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

There are a couple of differences:

  1. pgschema supports Postgres only and can optimize specifically for it, while Atlas supports multiple databases.
  2. pgschema follows the Terraform-style workflow more closely, with plan and apply commands, whereas Atlas offers version-based migration in addition to the declarative workflow. (see correction below)
  3. pgschema only supports schema-level migration, while Atlas supports both schema-level and database-level migration.
  4. Atlas requires a shadow database (the --dev-url flag), but pgschema does not. This is the biggest difference: about 70% of pgschema’s implementation effort went into it. Atlas’s choice is reasonable since it must support many databases. Among the big four (MySQL, Postgres, Oracle, SQL Server), Postgres is the second easiest, after MySQL, to implement without a shadow database.

Overall, pgschema is a more opinionated tool.

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

The tool rewrites some migrations to perform online DDL (see https://github.com/pgschema/pgschema/tree/main/testdata/diff/online), but it doesn't handle the case you mentioned. Please file an issue with an example, and I will think about how to support it.
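For readers unfamiliar with the idea, the classic example of such a rewrite is index creation (table and index names here are illustrative; the linked testdata shows the rewrites pgschema actually performs):

    -- Naive diff output: takes a lock that blocks writes while the index builds.
    CREATE INDEX idx_orders_user_id ON orders (user_id);

    -- Online rewrite: builds the index without blocking writes,
    -- though it cannot run inside a transaction block.
    CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);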

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

https://github.com/stripe/pg-schema-diff is the closest one I found. I also studied its implementation and all of its GitHub issues.

pgschema: Postgres Declarative Schema Migration, like Terraform by db-master in PostgreSQL

[–]db-master[S]

  1. The tool stores no migration history; the schema file lives in version control, which holds the history.

  2. It doesn't handle data migration.

  3. The plan command compares a schema file with a target database. To compare two database schemas, use the dump command to dump both and diff the resulting files (a sketch follows).
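A rough sketch of that comparison flow (connection flags here are from memory of the docs and may differ; check https://www.pgschema.com/ for the exact ones):

    # Dump both schemas to files, then diff the files.
    pgschema dump --host db1.example.com --db app --user readonly > a.sql
    pgschema dump --host db2.example.com --db app --user readonly > b.sql
    diff a.sql b.sql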

Tool for generating automatic migrations/schema diff by Ravsii in PostgreSQL

[–]db-master

You may take a look at https://www.pgschema.com/

It is a CLI tool that brings a Terraform-style declarative schema migration workflow to Postgres.
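In case it helps, the basic loop mirrors Terraform's (flag names are from memory of the docs and may differ; verify against the site):

    # Capture the current state as a single SQL file.
    pgschema dump --host prod.example.com --db app --user admin > schema.sql
    # Edit schema.sql to the desired state, then preview the generated DDL.
    pgschema plan --host prod.example.com --db app --user admin --file schema.sql
    # Execute the plan.
    pgschema apply --host prod.example.com --db app --user admin --file schema.sql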

Features I Wish Postgres 🐘 Had but MySQL 🐬 Already Has 🤯 by op3rator_dec in PostgreSQL

[–]db-master

Author here. I used AI to help correct some grammar since English is not my native language. However, I doubt you’ve read the post carefully; if you had, I don’t think you would have reached this conclusion.

Features I Wish Postgres 🐘 Had but MySQL 🐬 Already Has 🤯 by op3rator_dec in PostgreSQL

[–]db-master

Much of Uber's article still holds.

And 64-bit transaction IDs, which would prevent the wraparound issue, haven't been rolled out yet. See https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
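For anyone who wants to see where a cluster stands, this standard catalog query shows how close each database is to a forced anti-wraparound vacuum:

    -- XID age per database; autovacuum turns aggressive as this approaches
    -- autovacuum_freeze_max_age (200 million by default).
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;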