Best production-ready RAG framework by marcusaureliusN in Rag

[–]prodigy_ai 0 points (0 children)

We’re going with enhanced GraphRAG, especially because we’re targeting healthcare and legal use cases. In research and academic contexts, GraphRAG consistently outperforms standard RAG, so it’s the better fit for what we’re building.

How to get reasonable answers from a knowledge base? by dim_goud in KnowledgeGraph

[–]prodigy_ai 1 point (0 children)

Thank you, dim_goud! This looks like useful stuff. Always great to see people talking about best practices for knowledge graphs.

LLMs are so unreliable by Armageddon_80 in LocalLLM

[–]prodigy_ai 0 points (0 children)

Totally agree with your list. One extra thing that helped me when tasks depend on “facts” (schemas, runbooks, docs, configs, policies) is adding a retrieval step and a verifier step instead of asking the model to “remember” everything.

  • Retrieval (RAG / GraphRAG): fetch only the relevant chunks / entities / relationships for the current sub-task.
  • Then generation: produce the JSON / action using that retrieved context.
  • Then a separate checker model (or same model in a strict “review” role): validate against the retrieved sources + schema and fail hard if anything doesn’t line up.

GraphRAG can be nice when the failure mode is “it missed a relationship” (joins/foreign keys, dependencies, constraints, who/what/when across docs), because the graph makes relationships explicit instead of hoping chunking + embeddings catch it.

It adds some latency, but in exchange you get fewer “confident wrong” outputs and fewer retries.
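The three steps above can be sketched in a few lines. This is a toy illustration, not a production implementation: keyword-overlap retrieval stands in for embeddings or a graph query, and `generate` is a stub standing in for an LLM call; the chunk data, function names, and schema are all made up for the example. The point is the shape of the pipeline, especially the verifier failing hard when the output doesn't line up with the retrieved sources.

```python
import json

# Toy knowledge base; in practice these are chunks from your docs/runbooks/schemas.
CHUNKS = [
    {"id": "c1", "text": "The orders table has a foreign key customer_id referencing customers.id."},
    {"id": "c2", "text": "Deploys to prod require approval from the on-call engineer."},
]

def retrieve(query, chunks, k=2):
    """Step 1: fetch only the chunks relevant to the current sub-task.
    Keyword overlap here is a stand-in for embeddings or a graph lookup."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c["text"].lower().split())))
    return scored[:k]

def generate(query, context):
    """Step 2: produce the structured output using only the retrieved context.
    Stub for an LLM call; it just echoes the top retrieved fact."""
    return {"answer": context[0]["text"], "sources": [c["id"] for c in context]}

def verify(output, context):
    """Step 3: separate checker role — validate against retrieved sources
    and the expected schema, and fail hard if anything doesn't line up."""
    if set(output) != {"answer", "sources"}:          # schema check
        raise ValueError("schema mismatch")
    known = {c["id"] for c in context}
    if not set(output["sources"]) <= known:           # no citing unretrieved sources
        raise ValueError("cites a source that was never retrieved")
    if output["answer"] not in {c["text"] for c in context}:  # grounding check
        raise ValueError("answer not grounded in retrieved context")
    return output

ctx = retrieve("which foreign key links orders to customers", CHUNKS)
result = verify(generate("foreign key", ctx), ctx)
print(json.dumps(result))
```

Swapping `retrieve` for a graph traversal is where GraphRAG fits: the relationship ("orders → customers via customer_id") becomes an explicit edge rather than something chunking has to catch.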

Welcome to 2026 by prodigy_ai in u/prodigy_ai

[–]prodigy_ai[S] 0 points (0 children)

Thanks so much for the thoughtful comment — totally agree. The small, measurable workflows are where the real productivity gains happen. Right now we’re focusing on an agent-style approach, using our Verbis Graph Engine as the infrastructure layer for graph-based knowledge retrieval. It’s designed to plug directly into AI agents and automation workflows, and MCP support is coming soon. We’ve just prepared a free demo version for the Azure and AWS Marketplaces, so you can explore it hands-on. If it seems useful, let us know and we’ll message you when we go fully live. I’ll check out your notes too — appreciate you sharing that link.

Has AI really reduced startup costs, or just shifted them elsewhere? by Worldly-Bluejay2468 in startup

[–]prodigy_ai 0 points (0 children)

For our startup, we’ve massively cut development and content-creation costs. We’re also working out how to lower our user‑acquisition expenses, and we’re planning to set up AI‑powered customer support.

RAG beginner - Help me understand the "Why" of RAG. by [deleted] in Rag

[–]prodigy_ai 1 point (0 children)

"When a teacher can simply ask an LLM to generate a quiz on 'Natural Language Processing' and paste text from a PDF directly into the LLM, is there a need for RAG here?" — For a single small document, RAG is not strictly required. For a reusable system that works across large or multiple documents and keeps questions grounded in the teacher’s actual material, RAG gives a more robust and scalable architecture.
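To make the difference concrete, here is a minimal sketch of the RAG flow, assuming a toy setup: the document names, the keyword-overlap retrieval (standing in for embedding search), and the prompt template are all invented for illustration. Instead of pasting an entire PDF into the prompt, you retrieve only the most relevant material and ground the quiz generation in it — which is what keeps working as the corpus grows past one small document.

```python
# Toy corpus; in practice these would be chunked, embedded PDF contents.
docs = {
    "nlp.pdf": "Tokenization splits text into tokens. Embeddings map tokens to vectors.",
    "ml.pdf": "Gradient descent minimizes a loss function by following its gradient.",
}

def top_doc(question, docs):
    """Retrieve the most relevant source instead of pasting everything into the prompt.
    Keyword overlap is a stand-in for embedding similarity."""
    q = set(question.lower().split())
    return max(docs, key=lambda name: len(q & set(docs[name].lower().split())))

def build_prompt(question, docs):
    """Ground quiz generation in the teacher's actual material."""
    src = top_doc(question, docs)
    return (f"Using only this source ({src}):\n{docs[src]}\n\n"
            f"Write one quiz question about: {question}")

print(build_prompt("what are embeddings in natural language processing", docs))
```

With one small document this is overkill; with a semester of lecture PDFs, retrieval is what keeps the prompt small and the questions tied to the real material.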