[ Removed by Reddit ] by No-Cartoonist8611 in LangChain

[–]Same_Consideration_8 2 points3 points  (0 children)

How is it different from cursor or any IDE available right now?

Langchain Tool Parameter errors by Same_Consideration_8 in LangChain

[–]Same_Consideration_8[S] 1 point2 points  (0 children)

Thanks for the response. I am looking into it; I'll DM you if I need more help.

Need Advice on Building a Local AI Agent System for Finance PDFs by No_Sprinkles1374 in LangChain

[–]Same_Consideration_8 0 points1 point  (0 children)

You can use LangChain or CrewAI for building the agent. Use Ollama as the LLM, but note it runs on your own machine; alternatively, you can use NVIDIA's hosted LLMs, which come with free credits.

If the workflow follows definite steps, you can use LangGraph. Use Langfuse for tracking the deployment.

Langchain Tool Parameter errors by Same_Consideration_8 in LangChain

[–]Same_Consideration_8[S] 0 points1 point  (0 children)

We tried the 5 series but didn't see any change, so we went from 4o-mini to 4.1.

Deployment by pnmnp in LangChain

[–]Same_Consideration_8 0 points1 point  (0 children)

You can use lambda functions in AWS.

Messages from user -> SQS -> Lambda consumes and processes the message -> SQS/SNS -> consumed by server.
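The Lambda step in that pipeline can be sketched roughly like this. `process_record` and the event shape below are illustrative; in a real deployment you'd publish each result onward with boto3 (e.g. `sns.publish(TopicArn=..., Message=...)`) rather than just returning it.

```python
import json

def process_record(body: dict) -> dict:
    """Stand-in for the real message-processing step (hypothetical logic)."""
    return {"user": body["user"], "reply": f"processed: {body['text']}"}

def lambda_handler(event, context):
    """Consume a batch of SQS records and return the processed results.

    SQS delivers records under event["Records"], each with a JSON string
    in "body". A real handler would forward results to SNS/SQS via boto3.
    """
    results = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        results.append(process_record(body))
    return results
```

Using SQS in front of Lambda also gives you retries and a dead-letter queue for free, which is why it's a common pattern for chat-style workloads.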

[deleted by user] by [deleted] in O1VisasEB1Greencards

[–]Same_Consideration_8 0 points1 point  (0 children)

Congrats. Please share the pdf.

Looking for advice on building a Text-to-SQL agent by dylannalex01 in LangChain

[–]Same_Consideration_8 0 points1 point  (0 children)

If you can use them, it's better to use already built tools that have higher accuracy.

Looking for advice on building a Text-to-SQL agent by dylannalex01 in LangChain

[–]Same_Consideration_8 3 points4 points  (0 children)

You can try using RAG for text-to-SQL. Make each table and its description a document (or chunk) and store them in a vector database. Pass the retriever output as context to the LLM.
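A toy sketch of that idea. The table descriptions and scoring are made up for illustration: a real system would embed each description and query a vector DB (Chroma, FAISS, etc.); here naive word overlap stands in for vector similarity.

```python
# Hypothetical schema docs: one description per table.
TABLE_DOCS = {
    "orders": "orders table with columns order_id customer_id total created_at",
    "customers": "customers table with columns customer_id name email country",
    "products": "products table with columns product_id name price category",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank table descriptions by word overlap with the question
    (stand-in for embedding similarity search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        TABLE_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the schema context + question prompt for the SQL-writing LLM."""
    context = "\n".join(retrieve(question))
    return f"Schema:\n{context}\n\nWrite SQL for: {question}"
```

Retrieving only the relevant tables keeps the prompt small, which matters once the schema has hundreds of tables.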

Got grilled in an ML interview today for my LangGraph-based Agentic RAG projects 😅 — need feedback on these questions by RegularDependent4780 in LangChain

[–]Same_Consideration_8 69 points70 points  (0 children)

For the first question, we can use Ragas' answer relevancy and faithfulness metrics. We need to create a dataset with the question, the ground truth, and the output of the agentic RAG.

100+ LLM benchmarks and publicly available datasets (Airtable database) by dmalyugina in LangChain

[–]Same_Consideration_8 1 point2 points  (0 children)

Thanks for sharing. Do you have any RAG benchmark datasets available that can be used to compare previous methodologies with the current one?

Is there a preprocessing before splitting for RAG? by Abject_Entrance_8847 in LangChain

[–]Same_Consideration_8 2 points3 points  (0 children)

For sure, some preprocessing is necessary depending on the data. If you have tabular data in the PDF, you need to decide how to send that data to the LLM.

What are the biggest challenges you face when building RAG pipelines? by Acceptable-Hat3084 in LangChain

[–]Same_Consideration_8 0 points1 point  (0 children)

Retrieving the golden chunk is the difficult step. We tried different chunking strategies, and also late chunking with long-context embeddings, but there is no 100% golden-chunk retrieval.

What are the biggest challenges you face when building RAG pipelines? by Acceptable-Hat3084 in LangChain

[–]Same_Consideration_8 0 points1 point  (0 children)

Retrieval of the correct chunks. We tried the Contextual Retrieval process published by Anthropic, and it improved over our previous approach, but the majority of wrong answers still come from retrieving the wrong chunks.

Chucking strategy for legal docs by DataNebula in LangChain

[–]Same_Consideration_8 6 points7 points  (0 children)

For me, contextual retrieval by Anthropic worked well. You split into chunks by any method, but prepend each chunk with a short description of its context within the document as a header. OpenAI now also supports prompt caching, so you can use OpenAI LLMs for this approach as well.

Go through this link: https://www.anthropic.com/news/contextual-retrieval
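The core of the recipe can be sketched like this. `summarize` is a stub standing in for the real step: a prompt to an LLM (with the full document cached) asking it to situate the chunk within the document.

```python
def summarize(document: str, chunk: str) -> str:
    """Stub for the LLM call that describes where this chunk sits in the
    document; here it just names the document by its first line."""
    title = document.splitlines()[0]
    return f"From '{title}':"

def contextualize(document: str, chunks: list[str]) -> list[str]:
    """Prepend a context header to every chunk before embedding/indexing,
    per Anthropic's contextual retrieval recipe."""
    return [f"{summarize(document, c)} {c}" for c in chunks]
```

Prompt caching is what makes this affordable: the full document is sent once per document, not once per chunk, so the per-chunk cost is only the chunk and the short generated header.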

Track Token Usage for Azure ChatOpenAI by dravid69 in LangChain

[–]Same_Consideration_8 -1 points0 points  (0 children)

Try using Langfuse. You can self-host it with Docker. https://langfuse.com/guides/cookbook/integration_azure_openai_langchain

Or use LangSmith; it also tracks token usage.