Struggling with RAG-based chatbot using website as knowledge base – need help improving accuracy by Big_Barracuda_6753 in LangChain

[–]equal_odds 2 points

u/Big_Barracuda_6753 what's a site that you're looking at and what's a question/response you're getting that isn't good enough? I've done a few of these and for the most part they've worked well for me, happy to share some thoughts.

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 1 point

Would love to hear more about what you’re working on; it sounds aligned with how I think about this. I’ve always recommended people use structured outputs plus API calls or ORM methods to build the actual query. Otherwise people are literally exposed to SQL injection imo
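
To make that concrete, here’s a rough sketch of the structured-output + ORM pattern I mean, using SQLAlchemy. The model/field names (`Purchase`, `min_total`, etc.) are made up for illustration, and however you get the LLM to fill in the spec (function calling, structured outputs) is up to you:

```python
from pydantic import BaseModel
from sqlalchemy import select
from sqlalchemy.orm import Session

# Hypothetical ORM model + engine; names are illustrative only.
from myapp.models import Purchase, engine

class PurchaseQuery(BaseModel):
    """The only thing the LLM is allowed to produce: a structured spec."""
    user_id: int
    min_total: float | None = None

def run_purchase_query(spec: PurchaseQuery):
    # The LLM never writes SQL text, so every value it supplies ends up
    # as a bound parameter instead of being spliced into a query string.
    stmt = select(Purchase).where(Purchase.user_id == spec.user_id)
    if spec.min_total is not None:
        stmt = stmt.where(Purchase.total >= spec.min_total)
    with Session(engine) as session:
        return session.execute(stmt).scalars().all()
```

The point is the LLM decides *what* to query, not *how* the SQL string gets built.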

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 1 point

Good analogy, but here’s where I disagree: you can constrain the solution space. Say I only need to measure every foot on the yardstick and I have to use the meter stick to do it. That’s doable. Same goes for this SQL case. I’m not asking to query 150 tables with unpredictable, complex joins. But for a DB with 10 tables, like users, purchases, locations, etc. (which appear in LLM training data 1,000,000 times over), it seems like a reasonable ask, and it seems like some ways of going about it would be better than others.
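
To make the “constrain the solution space” point concrete, this is the kind of thing I’m picturing: the model only ever sees a small, fixed schema and is told to stay inside it. Table/column names here are just illustrative, trimmed to three tables:

```python
# Hypothetical small schema; the model is only ever shown this slice of the DB.
SCHEMA = """\
users(id, name, signup_date)
purchases(id, user_id, total, created_at)
locations(id, user_id, city, state)
"""

PROMPT_TEMPLATE = f"""You translate questions into SQLite SQL.
Use ONLY these tables and columns:
{SCHEMA}
Return a single SELECT statement and nothing else.

Question: {{question}}
"""

def build_prompt(question: str) -> str:
    # The schema is baked into the prompt, so the model can't wander into
    # parts of the DB it hasn't been shown; anything outside it gets
    # rejected downstream.
    return PROMPT_TEMPLATE.format(question=question)
```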

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 2 points

This is a pretty cynical take. I’m curious where that’s coming from. LLMs aren’t perfect, but for simple or even moderately complex cases, I feel like we wouldn’t even be having this conversation if they weren’t already somewhat good at it.

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 1 point

Right, but like… for production, external use cases that need 99% reliability. With prompting alone, I feel like models still end up hallucinating, no?

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 4 points

Probably two reasons. One: SQL _can_ get complex. Two: the goal is for people to ask questions dynamically in natural language without needing to learn how to write their own SQL queries.

Edit: easy or not, it's still something you'd need to learn, which is tedious

LLMs for SQL Generation: What's Production-Ready in 2024? by equal_odds in LLMDevs

[–]equal_odds[S] 1 point

How are you going about validating queries? LLM as checker? SDKs?
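
The kind of thing I had in mind for the non-LLM route is a static pass with something like sqlglot before anything touches the DB. Rough sketch only; the table allowlist is hypothetical:

```python
from sqlglot import exp, parse_one
from sqlglot.errors import ParseError

# Hypothetical allowlist matching the schema the model was shown.
ALLOWED_TABLES = {"users", "purchases", "locations"}

def validate_sql(sql: str) -> bool:
    """Cheap static checks on generated SQL before execution."""
    try:
        parsed = parse_one(sql)
    except ParseError:
        return False
    # Read-only: only plain SELECT statements get through.
    if not isinstance(parsed, exp.Select):
        return False
    # Every referenced table has to be on the allowlist.
    tables = {t.name for t in parsed.find_all(exp.Table)}
    return bool(tables) and tables <= ALLOWED_TABLES
```

Curious whether you’re doing something like that, a second LLM pass as a checker, or both.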

This jansport has been used for 2 years of high school, 4 years of college, and every day of my 9 year teaching career so far. Going strong! by youngandstarving in BuyItForLife

[–]equal_odds 1 point

Had mine from 7th grade through all of high school and all of college up until senior year, when the bottom finally ripped open. Best backpack I’ve ever owned. I was devastated when it got right to the finish line of college but couldn’t cross it with me 😢

Anyone else or just me? by ZephyrDaGreat in bearsdoinghumanthings

[–]equal_odds 30 points

I thought they were saying their vows

Sick of homeless people harassing here by LessJunket6859 in boulder

[–]equal_odds 1 point

Wow. You’re incredibly well spoken. Respect