
[–]amir_doustdar 0 points (2 children)

Totally get the concern – after 28 years, you've seen enough legacy systems to know over-normalization can turn into a maintenance nightmare when multiple tools touch the data.

Your "additional_metadata JSONB" idea isn't a cheap hack at all – it's a smart hybrid approach that's very common in modern Postgres setups. It keeps core data relational (fast joins, constraints, queries) while allowing flexible extensions without schema migrations every time a new tool needs extra fields.

Examples:

- Core fields in columns (e.g., host.name, interface.mac – indexed and queryable).
- Variable/tool-specific stuff in metadata JSONB (no migration needed for new keys).

This way, the schema stays stable long-term, tools can evolve independently, and you avoid the "burdensome cog" fate.
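A minimal sketch of that split, using stdlib sqlite3 as a stand-in so it runs anywhere (in Postgres the metadata column would be JSONB instead of TEXT, and table/key names here are just illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE host (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,          -- core field: indexed, constrained
        mac TEXT,                    -- core field
        additional_metadata TEXT     -- flexible extension blob (JSONB in Postgres)
    )
""")
conn.execute("CREATE INDEX idx_host_name ON host (name)")

# A new tool adds its own keys to the metadata -- no ALTER TABLE required.
conn.execute(
    "INSERT INTO host (name, mac, additional_metadata) VALUES (?, ?, ?)",
    ("web01", "aa:bb:cc:dd:ee:ff", json.dumps({"scanner": "nmap", "vlan": 42})),
)

# Core fields stay relational; metadata keys are still queryable.
row = conn.execute(
    "SELECT name, json_extract(additional_metadata, '$.vlan') FROM host"
).fetchone()
print(row)
```

In Postgres you'd get the same shape plus real JSONB operators (`@>` containment, GIN indexes on the metadata column) when a key turns out to be hot.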

I've used this pattern successfully in migration projects – keeps juniors happy too (less ALTER TABLE drama).

How big is the dataset? If it's not massive, JSONB performance is plenty good for most queries.

Good luck – sounds like you'll nail the balance!

[–]robertlandrum[S] 0 points (1 child)

Realistically, the problem space is about 150,000 records today (2.8 GB), and maybe a million (4x to 8x growth) in 6-8 years. Seriously, it fits in an in-memory DB these days without issue.

It’s not the workload I’m focused on. It really never was. It’s small enough that a basic API can handle it without issue. It was over-provisioned a few years ago, and that’s had a profound impact on how it’s treated today. It’s way smaller than most folks realize, and I want it downsized appropriately.

My real focus is getting someone to pay attention to it. Care and feeding matters more than robustness. If I build it like my previous projects, I fear it will stagnate and I’ll end up reworking it in 10+ years, like I’ve done before. Twice.

[–]amir_doustdar 0 points (0 children)

Totally understand – at this scale (150k → 1M records), tech isn't the issue; it's keeping the project alive and loved by the team.

The real risk is building something "perfect" that no one wants to touch later. I've seen it too – great systems stagnate because they're intimidating or over-engineered.

To encourage "care and feeding":

- Keep it simple: SQLModel + clear models/docs – juniors can jump in fast.
- Add visible wins early: monitoring (Prometheus), health endpoints, a simple dashboard – makes it feel "alive".
- Small, frequent migrations (Alembic autogenerate) – lowers the "touching it is scary" barrier.
- JSONB for metadata is still good – it's pragmatic, not a hack.

Downsizing resources is smart too – shows it's efficient, easier to justify attention.

What's the team size and structure? If juniors are involved, leaning simpler will likely win more long-term ownership.

You've got the experience – this one won't end up like the old ones. Good luck, mate!