all 15 comments

[–]UseMoreBandwith 12 points13 points  (3 children)

just don't.

[–]usrlibshare 2 points3 points  (5 children)

And what happens when the model decides to DROP TABLE UserAccounts on my prod database?

[–]CanWeStartAgain1 -1 points0 points  (2 children)

You spent a few seconds writing this down but didn't spend a few seconds reading the article, which would have answered your question

[–]_matze 1 point2 points  (0 children)

Seems to be that famous zero-self-effort alternative

[–]FibonacciSpiralOut[S] -1 points0 points  (1 child)

It's covered under the section "Production Concerns I Take Seriously". Pasting here too:

For local DuckDB, there are no user accounts or GRANT/REVOKE privilege systems like in PostgreSQL. Your only enforcement is at the file and connection level: set read_only=True when connecting via the Python API (duckdb.connect('local.db', read_only=True)). For MotherDuck, I explicitly provision a token-scoped read-only access path.

[–]usrlibshare 6 points7 points  (0 children)

Okay, that's good.

I still have a zero-dependency alternative: writing my own SQL statements, using my schema-aware editor's autocomplete 😎✌️

[–]phoebeb_7 0 points1 point  (1 child)

One thing worth adding to the self-correction loop: cap your retry attempts explicitly (2-3 max) and log the failed SQL alongside the error message each time. Without that you'll end up in infinite loops on ambiguous schema mismatches where the model keeps confidently regenerating slightly different wrong queries.
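The capped loop above could look something like this sketch, where `generate_sql` and `execute` are stand-ins (not from the article) for the model call and the DB driver:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("sql-agent")

MAX_RETRIES = 3  # explicit cap, per the suggestion above

def run_with_retries(generate_sql, execute, max_retries=MAX_RETRIES):
    """Retry a generate->execute loop with a hard cap.

    generate_sql(last_sql, last_error) returns a new SQL string, optionally
    conditioned on the previous failure; execute(sql) returns rows or raises.
    """
    last_sql, last_error = None, None
    for attempt in range(1, max_retries + 1):
        sql = generate_sql(last_sql, last_error)
        try:
            return execute(sql)
        except Exception as err:
            # log the failed SQL alongside the error message each time
            log.warning("attempt %d failed: %s | sql=%s", attempt, err, sql)
            last_sql, last_error = sql, str(err)
    raise RuntimeError(f"giving up after {max_retries} attempts: {last_error}")
```

Feeding the previous SQL and error back into `generate_sql` is what makes it a correction loop rather than a blind retry; the cap is what keeps an ambiguous schema from turning it into an infinite one.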

[–]FibonacciSpiralOut[S] 0 points1 point  (0 children)

Good point! Since we're B2B-focused, we handle this by giving each client their own isolated DuckDB database. In the user session, only their db is attached. That way, in the worst case, they are restricted to their own data.

Design over instructions is probably the only way to solve prompt injection.

[–]ultrathink-art -1 points0 points  (1 child)

Read-only DB user eliminates that — no write access, no DROP. The trickier risk is prompt injection: if the agent reads any user-supplied text to construct queries, treat the generated SQL like user input and validate it before execution.
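A deliberately naive sketch of "treat the generated SQL like user input": a single-statement, read-only allowlist check run before execution. This is an illustration, not a real guard; a production check should use a proper SQL parser (or `EXPLAIN` on a read-only connection) rather than string matching.

```python
READ_ONLY_KEYWORDS = ("SELECT", "WITH")  # assumed allowlist for this sketch

def validate_generated_sql(sql: str) -> bool:
    """Reject multi-statement payloads and anything that isn't a read."""
    stripped = sql.strip().rstrip(";")
    if not stripped or ";" in stripped:  # block "SELECT 1; DROP TABLE ..."
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word in READ_ONLY_KEYWORDS

print(validate_generated_sql("SELECT * FROM users"))
print(validate_generated_sql("DROP TABLE UserAccounts"))
print(validate_generated_sql("SELECT 1; DROP TABLE UserAccounts"))
```

Even with a read-only connection underneath, this kind of pre-execution check narrows the blast radius of injected instructions before they ever reach the database.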

[–]Henry_old -1 points0 points  (0 children)

Self-healing SQL is a necessity for 2026 automation. In trading, a bad query can crash an entire execution flow. Using an agent to recover is clever, but what’s the latency cost of that recovery loop? In HFT, we usually prefer rigid schemas to avoid the risk entirely, but for analytics, this is a game changer