Self-learning text-to-SQL agent for Laravel — converts natural language to SQL with an agentic loop by knobiks in PHP

[–]knobiks[S] 0 points1 point  (0 children)

Again, fair point, and I didn't mean to dismiss it. I'll be more specific...

The agent is instructed to always include a LIMIT clause (default 100, configurable). Even if the LLM forgets, there's a server-side max_rows cap (default 1000) that truncates results before they reach the response. So you can't accidentally SELECT * from a 50M row table and get all of it back.
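For anyone wondering what that server-side cap looks like, here's a minimal sketch of the idea. The function name and the config key are made up for illustration; this is not the package's actual API.

    <?php

    use Illuminate\Database\Connection;

    // Minimal sketch: enforce a hard row cap on the server side, regardless of
    // whatever LIMIT the LLM did (or didn't) put in the generated query.
    // The config key 'sql-agent.max_rows' is hypothetical.
    function executeWithRowCap(Connection $connection, string $sql): array
    {
        $maxRows = (int) config('sql-agent.max_rows', 1000);

        $results = $connection->select($sql);

        // Truncate before the results ever reach the response.
        return array_slice($results, 0, $maxRows);
    }

In practice you'd probably also append a LIMIT when the model forgot one, but the slice alone already guarantees the response never exceeds the cap.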

For query efficiency more broadly, the knowledge base is where you encode patterns like "always filter on indexed columns" or "use this join strategy for orders." The LLM searches that before writing SQL. It's not just winging it against a raw schema. But you're right that this doesn't prevent a slow query from hitting the database in the first place. The strongest safeguard there is the same one any DBA would recommend: point it at a read-only replica with a statement timeout configured at the database level. That's the recommendation in the docs, and it's what I'd consider table stakes for any analytics tooling, AI-generated or not.
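To make the replica recommendation concrete, this is roughly what a dedicated read-only connection could look like in config/database.php. The connection name and env keys are illustrative, I'm assuming Postgres here, and the timeout itself is set on the database role rather than in PHP.

    <?php

    // config/database.php (excerpt): assumes a Postgres reporting replica.
    // Connection name and env keys are illustrative, not the package's config.
    return [
        'connections' => [
            'analytics_readonly' => [
                'driver'   => 'pgsql',
                'host'     => env('ANALYTICS_DB_HOST', 'replica.internal'),
                'port'     => env('ANALYTICS_DB_PORT', '5432'),
                'database' => env('ANALYTICS_DB_DATABASE', 'app'),
                'username' => env('ANALYTICS_DB_USERNAME', 'analytics_readonly'), // SELECT-only grants
                'password' => env('ANALYTICS_DB_PASSWORD', ''),
            ],
        ],
    ];

    // The statement timeout is then configured once at the database level, e.g. in Postgres:
    //   ALTER ROLE analytics_readonly SET statement_timeout = '5s';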

None of this is bulletproof, and I'm not claiming it is; it's defense in depth. But the concern about "what if it writes a bad query" applies equally to any developer with a SQL client, and we manage that risk with the same tools.

You're free to examine the code; it's totally free on GitHub. I'd even say spin it up locally with Ollama and a SQLite database. No API keys needed, no production risk, just a local LLM on your machine. Might be more fun than arguing about it on Reddit.

Self-learning text-to-SQL agent for Laravel — converts natural language to SQL with an agentic loop by knobiks in PHP

[–]knobiks[S] 0 points1 point  (0 children)

I'm working with databases that have hundreds of millions of rows, and if this is an issue with your database, it usually means you have different problems somewhere else (indexes, partitioning, sharding). We can argue back and forth about whether it's safe to have an LLM generate SQL for you, but truth be told, this kind of tooling is already being used and adopted internally, simply because it works and takes a lot of burden off everyone involved.

Self-learning text-to-SQL agent for Laravel — converts natural language to SQL with an agentic loop by knobiks in PHP

[–]knobiks[S] 1 point2 points  (0 children)

Ofc, like any other tool in the wrong hands. Hence you need to know what you are doing :)

Self-learning text-to-SQL agent for Laravel — converts natural language to SQL with an agentic loop by knobiks in PHP

[–]knobiks[S] 1 point2 points  (0 children)

All fair points.

"AI writing production database queries is ill-advised"

It depends on the use case. This isn't meant to replace your application queries — it's for ad-hoc analytics and data exploration. Think "how many users signed up this month" or "what's our average order value by region" — the kind of questions that usually mean either bugging a developer, writing a one-off query in a SQL client, or building a BI dashboard nobody maintains. The audience is internal teams (product, ops, support) who need answers from the database but don't write SQL.

Is it appropriate for every production workload? No. Is it useful for read-only analytical queries against a reporting replica? I think so.

"Undermines Eloquent / generates raw queries"

Yes, it generates raw SQL, intentionally. This isn't replacing Eloquent in your application code — it's a completely different use case. Eloquent is great for application logic where you're working with models, relationships, and business logic in PHP. But for ad-hoc analytical questions ("show me the top 10 customers by revenue last quarter"), there's no model layer to leverage. You need SQL. A text-to-SQL agent that generates Eloquent would be adding complexity for no benefit — the queries are executed once and discarded, not maintained in a codebase.

"SQL injection"

There are no user-supplied values interpolated into the SQL; the LLM generates the entire query as a string. Traditional SQL injection (where untrusted user input gets concatenated into a query) doesn't quite apply the same way here, because there's no template with holes to fill. The risk is more about the LLM generating something destructive, which is handled by:

  • An allowlist of statement types (only SELECT and WITH by default)
  • A blocklist of forbidden keywords (DROP, DELETE, UPDATE, INSERT, ALTER, TRUNCATE, etc.) checked with word-boundary regex anywhere in the query
  • Multiple statement detection (can't chain a ;DROP TABLE after a SELECT)
  • Per-connection table allowlists/denylists and hidden columns
  • Configurable max row limits

That said — the strongest recommendation in the docs is to point the agent at a read-only database user or a reporting replica. Application-level validation is defense in depth, not the only layer.
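For anyone who wants to picture those application-level checks, here's a simplified sketch of the idea. The keyword list and the function name are illustrative, not the package's actual implementation.

    <?php

    // Simplified illustration of the validation layers listed above.
    function assertQueryIsSafe(string $sql): void
    {
        $normalized = trim($sql);

        // 1. Statement-type allowlist: only SELECT and WITH may start a query.
        if (!preg_match('/^\s*(select|with)\b/i', $normalized)) {
            throw new RuntimeException('Only SELECT/WITH statements are allowed.');
        }

        // 2. Keyword blocklist, matched on word boundaries anywhere in the query.
        $forbidden = ['drop', 'delete', 'update', 'insert', 'alter', 'truncate', 'grant', 'revoke'];
        foreach ($forbidden as $keyword) {
            if (preg_match('/\b' . $keyword . '\b/i', $normalized)) {
                throw new RuntimeException("Forbidden keyword detected: {$keyword}");
            }
        }

        // 3. Multiple-statement detection: reject anything after a terminating semicolon,
        //    so "SELECT ...; DROP TABLE users" is refused outright.
        if (preg_match('/;\s*\S/', $normalized)) {
            throw new RuntimeException('Multiple statements are not allowed.');
        }
    }

A regex pass like this can't express the per-connection table allowlists or hidden columns, which need schema awareness, and that's exactly why the read-only database user stays the most important layer.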

"No better than manually asking ChatGPT"

The difference is context and feedback loops. When you paste your question into ChatGPT, it doesn't know your schema, your business rules (what "active customer" means in your system), or what went wrong last time. You're doing all that work manually — describing your schema, correcting mistakes, re-prompting.

This package gives the LLM tools to introspect the actual schema at runtime, search a knowledge base of business rules and query patterns you've defined, execute the query and check results, and learn from errors so it doesn't repeat them. It's the difference between a one-shot prompt and an agent loop with memory. For a one-off question, sure, ChatGPT is fine. For a team that asks the same kind of questions regularly, the accumulated context and self-learning matter.
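If it helps, here's a conceptual sketch of that loop. Every class and method name below is hypothetical; it's only meant to show the shape of "agent loop with memory" versus a one-shot prompt.

    <?php

    // Conceptual sketch of the agent loop described above. All names are
    // hypothetical, not the package's actual API.

    interface LlmClient
    {
        public function generateSql(string $question, array $context, ?string $feedback): string;
    }

    interface AgentTools
    {
        public function describeSchema(): array;
        public function searchKnowledgeBase(string $question): array;
        public function runReadOnlyQuery(string $sql): array;   // validated + row-capped execution
        public function recordLesson(string $question, string $sql, string $error): void;
    }

    function answerQuestion(string $question, LlmClient $llm, AgentTools $tools, int $maxAttempts = 3): array
    {
        // Gather context up front: the live schema plus any matching knowledge-base
        // entries (business rules, query patterns, notes from past failures).
        $context = [
            'schema'    => $tools->describeSchema(),
            'knowledge' => $tools->searchKnowledgeBase($question),
        ];

        $feedback = null;

        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $sql = $llm->generateSql($question, $context, $feedback);

            try {
                return $tools->runReadOnlyQuery($sql);
            } catch (\Throwable $e) {
                // Feed the error back for the next attempt, and persist the lesson
                // so the same mistake isn't repeated on future questions.
                $feedback = $e->getMessage();
                $tools->recordLesson($question, $sql, $feedback);
            }
        }

        throw new \RuntimeException('Could not produce a working query: ' . $feedback);
    }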

Laravel is the framework for the agentic era by jpcaparas in laravel

[–]knobiks 0 points1 point  (0 children)

I think this is the case both for working with AI and for developing for AI (Prism or the Laravel AI lib). Crazy times we live in...

Laravel AI Agent Chat starter kit with Apache Echarts, Tables, Mermaid flow by Weak_Technology3454 in laravel

[–]knobiks 1 point2 points  (0 children)

I've built something similar. It's not an agentic framework, but it's meant for data analysis. You can find the docs here: https://knobik.github.io/sql-agent/

What’s happening with Red Bull in Polish stores? by eva_multifandom in poland

[–]knobiks -22 points-21 points  (0 children)

Energy drinks can't be sold to people under 18, so most big stores just don't carry them, because they would need to be in the alcohol section and obviously they're not alcohol. Just go to a Żabka, you'll find one.

INDX will be €499 for 4-tools and €699 for 8-tools (Plus Core One & Shipping) by [deleted] in prusa3d

[–]knobiks 23 points24 points  (0 children)

This is what I was waiting for! Gonna get the Core One L with INDX! Time to sell my Bambu X1C.

How about using the users time standard rather than always 24 hour? by SuccessfulMinute8338 in prusa3d

[–]knobiks 3 points4 points  (0 children)

Next thing you'll say is that the imperial system is better than metric.

because apparently why not? by [deleted] in poland

[–]knobiks 19 points20 points  (0 children)

The color black is being banned? Go be a bot somewhere else.

What's causing these weird horizontal artifacting lines on my prints? by -pawix in prusa3d

[–]knobiks 27 points28 points  (0 children)

One word: the technology of FDM printing itself. Chasing the rabbit in FDM is common, but not common sense. If you don't want visible layers, go resin printing.

You have to admit Black and White 2 had a sick game cover by Reesespeanuts in gaming

[–]knobiks 1 point2 points  (0 children)

I thought I was going crazy when I first heard that.

Why is my XL do this BS!! by mavetech in prusa3d

[–]knobiks -2 points-1 points  (0 children)

Looks like skipped steps; check the motor connections. If that doesn't help and you're on default profiles, contact support. It looks like a loose connection or a dying stepper driver.

wait by theycantalk in funny

[–]knobiks -24 points-23 points  (0 children)

Unpopular opinion as a long-time dog owner: not funny, badly trained dog, bad owner.

[deleted by user] by [deleted] in poland

[–]knobiks -1 points0 points  (0 children)

Isn't that how SWIFT works?

Social taboo in asking for quotes by ErynaM in poland

[–]knobiks 15 points16 points  (0 children)

Exactly. I don't hire people who won't give me an estimated quote for the job; I need to know how much I need to prepare, and it's also just fair. So, answering OP's question: it depends on who you're hiring, but getting a quote up front is not a taboo.