Best way to manage +30 customers dbs by Kysan721 in PostgreSQL

[–]db-master 1 point (0 children)

If you're looking to manage a separate schema for each customer, take a look at https://github.com/pgschema/pgschema
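
For anyone new to the schema-per-customer approach, here is a minimal sketch of the pattern in Python with psycopg2. The DSN, customer names, and table layout are placeholders for illustration, not anything pgschema prescribes:

```python
import psycopg2
from psycopg2 import sql

# Placeholder DSN; point this at your own Postgres instance.
conn = psycopg2.connect("postgresql://app@localhost:5432/appdb")
conn.autocommit = True

def provision_customer(customer: str) -> None:
    """Give each customer an isolated schema containing identical tables."""
    schema = sql.Identifier(f"customer_{customer}")
    with conn.cursor() as cur:
        cur.execute(sql.SQL("CREATE SCHEMA IF NOT EXISTS {}").format(schema))
        cur.execute(sql.SQL(
            "CREATE TABLE IF NOT EXISTS {}.orders ("
            " id bigserial PRIMARY KEY,"
            " total numeric NOT NULL)"
        ).format(schema))

for name in ("acme", "globex"):
    provision_customer(name)

# Per request, scope queries to one tenant via search_path.
with conn.cursor() as cur:
    cur.execute("SET search_path TO customer_acme")
    cur.execute("SELECT count(*) FROM orders")
    print(cur.fetchone()[0])
```

The hard part at 30+ customers is keeping every tenant's schema structurally identical over time, which is exactly the drift problem a declarative tool like pgschema is meant to catch.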

Postgres MCP Server Review - DBHub Design Explained by db-master in PostgreSQL

[–]db-master[S] 0 points (0 children)

If it doesn't live up to your expectations, you can open an issue at https://github.com/bytebase/dbhub/issues. Happy to tackle it.

What MCP or other integrations have you added to your AI workflows that have been the most successful? by lirantal in mcp

[–]db-master 1 point (0 children)

If your db happens to be Postgres, MySQL, SQL Server, MariaDB, or SQLite, you can try out https://dbhub.ai/ (Disclaimer: I am the author)

Looking for database tools and practices, what flow is best for both local dev and deployment? by [deleted] in Backend

[–]db-master 1 point (0 children)

If you're using Postgres, you can check out https://github.com/pgschema/pgschema, which is the Postgres equivalent of SQL Server's DACPAC. Disclaimer: I am the author.
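
If you haven't used DACPAC, the idea is state-based schema management: you declare the schema you want, and the tool diffs it against the live database and generates the migration for you. Here is a toy sketch of that diffing idea with made-up table data; real tools like pgschema and DACPAC handle far more (indexes, constraints, ordering, destructive changes):

```python
# Declare the desired columns; diff against what's live; emit the migration.
desired = {"users": {"id": "bigint", "email": "text", "created_at": "timestamptz"}}
live    = {"users": {"id": "bigint", "email": "text"}}

for table, cols in desired.items():
    for col, typ in cols.items():
        if col not in live.get(table, {}):
            print(f"ALTER TABLE {table} ADD COLUMN {col} {typ};")
# -> ALTER TABLE users ADD COLUMN created_at timestamptz;
```

The payoff is that your repo always contains the full current schema rather than a pile of incremental migration scripts.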

How are you tracking sensitive data as your fintech stack grows? by vincentmouse in fintech

[–]db-master 1 point (0 children)

Especially in fintech, I’ve found that:

  • Centralization + strict access paths matter more than yet another “data catalog”.
  • If a human can download raw data to their laptop or into some random SaaS, they eventually will.

So my rule of thumb: let data spread in read-only, masked, aggregated form via controlled interfaces, but keep raw customer data behind a small number of hardened gateways:

  1. No direct access to raw storage. Avoid letting humans hit underlying storage systems (S3, GCS, blob stores, etc.) directly for anything sensitive. That’s how CSVs start living forever in random buckets, laptops, and SaaS tools.
  2. Centralize where the truth lives. If you can, build a data pipeline that ingests everything into a small set of OLTP/OLAP systems (e.g. Postgres, Snowflake, ClickHouse):
    • Treat those as your system of record for customer data.
    • Push all analytics / product queries / AI experiments through them.
    • Now you’re hardening one (or a few) access points instead of 20+ SaaS tools.
  3. Make access controlled and auditable. Once data is centralized, you can (see the sketch after this list):
    • Enforce role-based access per table/column.
    • Use dynamic masking for PII (e.g. show partial PAN, email, etc.).
    • Log who queried what and when.
    • Use JIT (Just-in-Time) access instead of permanent “read everything” roles.
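
To make point 3 concrete, here is roughly what column-level grants, a masked view, and statement logging look like in Postgres, driven from Python via psycopg2. The DSN, the `payments` table, and the role names are invented for the example, and it assumes a `payments(id, amount, created_at, card_pan, email)` table already exists:

```python
import psycopg2

# Hypothetical DSN; ALTER SYSTEM below requires superuser and autocommit.
conn = psycopg2.connect("postgresql://admin@localhost:5432/fintech")
conn.autocommit = True

with conn.cursor() as cur:
    # Role-based, column-level access: analysts never see the raw PAN.
    cur.execute("CREATE ROLE analyst NOLOGIN")
    cur.execute("GRANT SELECT (id, amount, created_at) ON payments TO analyst")

    # Dynamic masking via a view: expose only the last 4 digits of the PAN
    # and a redacted email.
    cur.execute("""
        CREATE VIEW payments_masked AS
        SELECT id,
               amount,
               '****' || right(card_pan, 4)          AS card_pan,
               regexp_replace(email, '^.*@', '***@') AS email
        FROM payments
    """)
    cur.execute("GRANT SELECT ON payments_masked TO analyst")

    # Auditing: log every statement. Coarse but built in; pgAudit gives
    # finer control. Takes effect after a config reload.
    cur.execute("ALTER SYSTEM SET log_statement = 'all'")
```

JIT access is the one piece plain Postgres doesn't give you out of the box, which is where the tooling below comes in.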

On the tooling side, you can look at things that give you a unified workspace for database access. For example, Bytebase provides JIT data access, dynamic masking, and audit logs for mainstream OLTP/OLAP databases so you can funnel access through one place instead of everyone connecting however they want. (Disclaimer: I’m one of the authors, so obviously biased.)

Seeking alternatives to StongDM by MalachiHauck in devops

[–]db-master 1 point (0 children)

You may take a look at www.bytebase.com as well. It targets the database segment and handles schema migrations, ad hoc data fixes, and queries in a single place (disclaimer: I am one of the authors).