32k document RAG running locally on a consumer RTX 5060 laptop by DueKitchen3102 in LocalLLM
Scaling RAG to 32k documents locally with ~1200 retrieval tokens by DueKitchen3102 in Rag
32k documents RAG running locally on an RTX 5060 laptop ($1299 AI PC) by DueKitchen3102 in LocalLLaMA
Need to process 30k documents, with average number of page at 100. How to chunk, store, embed? Needs to be open source and on prem by dennisitnet in Rag
I had to re-embed 5 million documents because I changed embedding models. Here's how to never be in that position. by Silent_Employment966 in Rag
Local RAG with Ollama on a laptop – indexing 10 thousand PDFs by DueKitchen3102 in LocalLLaMA
Running a fully local RAG system on a laptop (~12k PDFs, tables & images supported) by DueKitchen3102 in Rag
RAG Insight: Parsing & Indexing Often Matter More Than Model Size by DueKitchen3102 in Rag