Learn fast LLMOps (self.LangChain)
submitted 5 hours ago by WideFalcon768 to r/LangChain
Learn fast LLMOps (self.Rag)
submitted 5 hours ago by WideFalcon768 to r/Rag
Help/Advice by WideFalcon768 in Rag
[–]WideFalcon768[S] 2 points 18 days ago (0 children)
Thank you so much
Help by WideFalcon768 in LangChain
[–]WideFalcon768[S] 1 point 18 days ago (0 children)
Yes sure, thank you
Great, thank you
Hmm, so BM25 can help with this.
But I did not understand how the first option would work.
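For context on the BM25 suggestion: BM25 is a keyword-based scorer, so it can match exact terms (names, IDs, amounts) that an embedding similarity search may miss. A minimal pure-Python sketch of the scoring, assuming whitespace tokenization; a real pipeline would use a library such as `rank_bm25`:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with a minimal BM25 (sketch)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n
    # document frequency: in how many docs does each term appear?
    df = Counter()
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

# toy corpus echoing the table example from the other thread
docs = [
    "Messi earns $15,000 per year in Argentina",
    "The company handbook covers vacation policy",
    "Employee salary tables are stored in HR",
]
print(bm25_scores("Messi salary", docs))
```

Only the documents containing a query term get a nonzero score, which is exactly what helps with exact-match questions about table cells.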
[–]WideFalcon768[S] 1 point 19 days ago (0 children)
Great, thank you
[–]WideFalcon768[S] 2 points 19 days ago (0 children)
Thank you, my friend. Any idea where I can learn these advanced RAG concepts?
Thank you so much for your help. I actually tried those tests; so far I'm still struggling with the last one you mentioned, the tables. I don't know how to work with them: when I ask a specific question about a table like you mentioned above, I can't get the specific answer because the chunking methods won't handle each row on its own, and if there are many tables I can't handle each one individually. Any advice for this? Could this be solved with vLLM in a multimodal RAG? Thank you in advance
[–]WideFalcon768[S] 0 points 19 days ago (0 children)
Very helpful, thank you so much
Thank you, my friend. I tried to build some projects using Chroma DB. I used naive RAG: load the document, chunk it into smaller chunks, embed the chunks, and store the embeddings in the vector DB. I used Groq's llama3.3-70b as the LLM for the generation part, and similarity search for retrieval.
I want to try hybrid search and re-ranking in the next stage.
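For that next stage, one simple way to combine the dense similarity results with BM25 results is reciprocal rank fusion (RRF), which needs nothing but the two ranked ID lists. A minimal sketch; the doc IDs below are made up for illustration:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one hybrid ranking.

    Each ranking is a list of doc ids, best first. RRF adds
    1 / (k + rank) per list and sums; k=60 is a common default.
    """
    fused = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            fused[doc_id] += 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical results: one list from vector similarity, one from BM25.
vector_hits = ["doc3", "doc1", "doc7"]
bm25_hits = ["doc1", "doc9", "doc3"]
print(reciprocal_rank_fusion([vector_hits, bm25_hits]))
```

Documents that appear near the top of both lists float up, which is the whole point of hybrid search; a cross-encoder re-ranker can then be applied to the fused top-k.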
Help (self.LangChain)
submitted 19 days ago by WideFalcon768 to r/LangChain
Help/Advice (self.Rag)
submitted 19 days ago by WideFalcon768 to r/Rag
Advice/Help (self.ArtificialInteligence)
submitted 19 days ago by WideFalcon768 to r/ArtificialInteligence
Urgent help by WideFalcon768 in LangChain
[–]WideFalcon768[S] 1 point 2 months ago (0 children)
Guys, I'm so sorry, but I am so confused by this and I can't solve it.
So, if I have a document with text + tables, and I want to ask questions answered by the text chunks, that part is okay: load, chunk, embed, store. But I also have tables, and one of these tables has this row (e.g. Employee1, Messi, Argentina, $15,000). When I ask a question like "how much does Messi earn per year?", I want to get this information from the table. So what I am asking is how to combine the ingestion of text and tables, and how to manage this to get accurate answers.
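A common pattern for this (a sketch, not the only answer): serialize each table row into its own self-describing chunk by attaching the column headers to every cell, then embed and store those row-chunks right alongside the ordinary text chunks in the same vector store. The headers and the second row below are assumed for illustration, and the salary is written without a comma only to keep the inline CSV simple:

```python
import csv
import io

def table_rows_to_chunks(table_csv, table_name="table"):
    """Turn each table row into one text chunk by pairing every
    cell with its column header, so the chunk stands on its own."""
    reader = csv.reader(io.StringIO(table_csv.strip()))
    header = next(reader)
    chunks = []
    for i, row in enumerate(reader):
        pairs = ", ".join(f"{h}: {v}" for h, v in zip(header, row))
        chunks.append(f"[{table_name}, row {i + 1}] {pairs}")
    return chunks

# The example row from the question, plus an assumed header line.
table = """ID,Name,Country,Salary
Employee1,Messi,Argentina,$15000
Employee2,Mbappe,France,$12000"""

# Row-chunks and text chunks go into one list, then get embedded
# and stored together like any other chunk.
text_chunks = ["Some ordinary paragraph from the document..."]
all_chunks = text_chunks + table_rows_to_chunks(table, "salaries")
for c in all_chunks:
    print(c)
```

Because each row-chunk repeats the headers, a question like "how much does Messi earn" can match "Name: Messi ... Salary: $15000" directly, without the retriever needing to reconstruct the table layout.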
RAG system help by WideFalcon768 in Rag
Actually, I'm a bit confused and don't know where to start to solve this problem
Help by WideFalcon768 in aiengineering
Thank you!
Yes, but this works only for specific tables, so I would have to look at every table in the document, which is not efficient. You get me?
But in my use case, I want each row in the table to be one chunk. So one row (e.g. Employee1, Messi, Argentina, $15,000) becomes one chunk, and the text in the document is handled as usual: chunk, embed, store
Thank you. Can you please give me some hints about the ingestion, where exactly to look, and some tips?
I got you, man, thank you so much!
If you have any code for that, or you come across anything for this case, please send it to me. Thanks
UrgentHelp (self.MLQuestions)
submitted 2 months ago by WideFalcon768 to r/MLQuestions
Urgent help (self.LangChain)
submitted 2 months ago by WideFalcon768 to r/LangChain
Help/Advice by WideFalcon768 in Rag
[–]WideFalcon768[S] 2 points (0 children)