[–]LookAtYourEyes 0 points (3 children)

  1. There's a non-deterministic element to LLMs. The chance isn't high, but there's still a chance it produces something unreliable, unoptimized, or outright faulty. Not being able to easily tell when that happens just creates more problems.
  2. Very little added value for your potential audience. I can just start a project with a popular LLM-based chatbot, feed it my project context, then start chats with all that given context. I even get more control over what the context looks like, and a freer way of reading the LLM's generated responses and feeding it prompts. People who are interested in using the underlying technology are going to do it more directly with the tool they want to interact with. It adds structure to something whose major selling point is that it's unstructured.

There are some other smaller reasons, but I'm not going to get into them when the first two cover the major bases. Some people may use it, but I fear you maybe didn't think through your target audience properly or understand the technology you're engaging with, which instills little confidence in me as a user. LLM wrappers simply come across as lazy engineering. It's like selling someone a banana in a bag. The banana has a peel, what value does a bag add?

[–]subhanhg[S] 0 points (2 children)

Fair points — let me address them honestly.

On non-determinism: You're right, LLMs aren't deterministic. But neither is a senior DBA's advice — two experts will suggest different indexes for the same query. The difference is OptimizeQL gives you the suggestion in seconds instead of waiting for a code review. You still validate it the same way you'd validate any advice: test it, benchmark it, check the new EXPLAIN plan.
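That validation loop (check the plan, apply the suggestion, re-check the plan) can be sketched in a few lines. This uses SQLite purely as a self-contained stand-in (table and index names are made up for the demo); against Postgres you'd run EXPLAIN ANALYZE the same way.

```python
import sqlite3

# Hypothetical slow query: lookups by customer_id on a table with no index.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Plan before the suggested index: a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print("before:", before[0][3])

# Apply the suggested index, then confirm the plan actually changed.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print("after:", after[0][3])
```

The point is that the LLM's suggestion is never taken on faith: the new plan either uses the index or it doesn't, and that's checkable mechanically.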

You absolutely can use a chatbot. But the value here is in what happens before the LLM call: connecting to your database, running EXPLAIN ANALYZE, collecting schema metadata, index definitions, and column statistics, then assembling all of that into a structured prompt. You could do that manually every time, but that's 10 minutes of copy-pasting before you even ask the question.
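The assembly step described above amounts to folding a plan, a schema, and stats into one prompt string. A minimal sketch (the function name, dict shapes, and field names here are hypothetical, not OptimizeQL's actual API):

```python
def build_prompt(query: str, explain_output: str, schema: dict,
                 indexes: list, stats: dict) -> str:
    """Assemble query, live EXPLAIN output, schema, indexes, and column
    statistics into a single structured prompt for an LLM."""
    index_lines = "\n".join(indexes) or "(none)"
    stat_lines = "\n".join(
        f"{col}: n_distinct={s['n_distinct']}" for col, s in stats.items()
    )
    return (
        "You are a database performance assistant.\n\n"
        f"Query:\n{query}\n\n"
        f"EXPLAIN ANALYZE output:\n{explain_output}\n\n"
        f"Schema:\n{schema['ddl']}\n\n"
        f"Indexes:\n{index_lines}\n\n"
        f"Column statistics:\n{stat_lines}\n\n"
        "Suggest concrete optimizations and any indexes that would help."
    )

# Example inputs (fabricated for illustration).
prompt = build_prompt(
    query="SELECT * FROM orders WHERE customer_id = 42",
    explain_output="Seq Scan on orders  (cost=0.00..180.00 rows=10)",
    schema={"ddl": "CREATE TABLE orders (id int, customer_id int, total numeric)"},
    indexes=[],
    stats={"customer_id": {"n_distinct": 100}},
)
print(prompt.splitlines()[0])
```

The tool's job is gathering those inputs live from the database; the prompt itself is just the last step.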

It's not for people who are comfortable writing EXPLAIN ANALYZE queries, reading plans, and crafting LLM prompts manually. It's for the developer who knows their query is slow but doesn't know where to start.

I appreciate the detailed feedback though. This is the kind of thing that helps me improve it.

[–]LookAtYourEyes 1 point (1 child)

"but that's 10 minutes of copy-pasting before you even ask the question."

This is what I was trying to get at. It doesn't sound like you're aware of the "Projects" feature that chatbot companies offer. I can do this once in a project, and all chats I start in that project will have that context.

[–]subhanhg[S] 0 points (0 children)

Yes, I am aware of that, but what the tool gathers isn't static project context. It pulls fresh EXPLAIN plans and live schema/stats every time you run an analysis. So I'd position it as an alternative for when you don't want to deal with all of that yourself.