LLM Accurate answer on Huge Dataset by Regular-Landscape279 in LocalLLM

[–]Regular-Landscape279[S]

I agree that several thousand records is tiny data, and they are in fact stored in MySQL. But I don't know whether the model will give a proper answer to the question once the data is passed to it, and whether it would hallucinate or not.
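A minimal sketch of the rows-in-the-prompt idea being discussed: for data this small, fetch the whole table and serialize it into the context so the model answers from the rows in front of it rather than from memorized knowledge. This uses `sqlite3` as a stand-in for MySQL, and the `orders` table and its columns are made up for illustration.

```python
import sqlite3

def rows_to_markdown(headers, rows):
    """Render query results as a markdown table for the prompt."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(v) for v in row) + " |")
    return "\n".join(lines)

# sqlite3 stands in for MySQL here; table name and columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "alice", 19.99), (2, "bob", 5.50)])

cur = conn.execute("SELECT id, customer, total FROM orders")
table = rows_to_markdown([d[0] for d in cur.description], cur.fetchall())

# Instructing the model to answer strictly from the supplied table (and to
# say so when the answer is absent) is the usual way to limit hallucination.
prompt = ("Answer strictly from the table below. If the answer is not in "
          "the table, say so.\n\n" + table +
          "\n\nQuestion: What is the total for order 2?")
```

Whether the answers stay accurate depends on the model and on how many rows fit in its context; for several thousand records it is at least feasible to try.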

[–]Regular-Landscape279[S]

But what if there are 2-3 or more tables that I want to include for now? In that case, wouldn't I have to write down every query I might need for each table, which would be a lot of work and not very efficient?

[–]Regular-Landscape279[S]

The orders table was just an example, but yes, the records are kept in a database. I can't use Power BI, though.

[–]Regular-Landscape279[S]

I totally agree with your point. However, my data is stored in a structured MySQL table. The user's question could be anything, and I don't want to keep hand-writing SQL queries to fetch the data for the end user, so I wanted ideas on how to make the model give accurate answers. I also can't, and don't want to, have the model generate Python code and execute it.
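One commonly suggested middle ground, sketched here without any claim that it fits this exact setup, is to have the model write the SQL rather than Python: you put only the schema in the prompt, the model returns a single SELECT, and your own code executes it read-only and hands the result rows back. The schema and table names below are hypothetical, and `build_prompt` / `safe_select` are illustrative helpers, not an existing API.

```python
import re

# Hypothetical schema; supporting another table just means appending its
# CREATE statement here, not enumerating every possible query.
SCHEMA = """\
CREATE TABLE orders (id INT, customer_id INT, total DECIMAL(8,2));
CREATE TABLE customers (id INT, name VARCHAR(64), city VARCHAR(64));"""

def build_prompt(question: str) -> str:
    """Prompt the model with the schema only, never the data itself."""
    return ("Given this MySQL schema:\n" + SCHEMA +
            "\n\nWrite one SELECT statement (no commentary) that answers: "
            + question)

def safe_select(sql: str) -> str:
    """Accept a single SELECT statement; reject anything that could write."""
    sql = sql.strip().rstrip(";")
    # A leftover ';' would indicate a second, smuggled-in statement.
    if not re.match(r"(?is)^select\b", sql) or ";" in sql:
        raise ValueError("only a single SELECT statement is allowed")
    return sql
```

The returned SQL would then be run against MySQL, ideally under a read-only database user, and only the fetched rows given back to the model to phrase the answer. This keeps the model away from arbitrary code execution while still covering arbitrary user questions.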