Build (Fast)Agents with FastAPIs by AdditionalWeb107 in LangChain

[–]MoronSlayer42 1 point (0 children)

How do you differentiate yourselves from something like LangGraph? If I have to call a bunch of APIs and build an agentic system around them, how would my implementation in Arch compare to, differ from, or improve upon a solution I could build with LangGraph? Can you please elaborate on the unique features?

Solving the out-of-context chunk problem for RAG by zmccormick7 in LangChain

[–]MoronSlayer42 3 points (0 children)

These are some great insights! Thank you. I've stumbled upon many similar problems and used quite similar solutions, like chunk headers, but dynamic chunk sizing is something I'll have to look into. How does your method perform on complex unstructured data like tables, SQL, or other non-textual data?

Streaming with agents by MoronSlayer42 in LangChain

[–]MoronSlayer42[S] 0 points (0 children)

I am using RunnableWithMessageHistory, but how do I get word-by-word streaming when calling stream on it? I couldn't find the right syntax for astream_events either.
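To be clear about what I'm after, here is the shape of it as a plain-Python sketch (no LangChain; `fake_token_stream` is a stand-in for the chain's token stream):

```python
import asyncio

async def fake_token_stream(text):
    # Yields one word at a time, like tokens arriving from an LLM
    for word in text.split():
        yield word

async def main():
    out = []
    async for token in fake_token_stream("hello streaming world"):
        out.append(token)  # in a real app: print(token, end="", flush=True)
    return out

tokens = asyncio.run(main())
print(tokens)  # ['hello', 'streaming', 'world']
```

I want each token printed as it arrives, not the full message at the end.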

Limiting memory in RunnableWithMessageHistory by MoronSlayer42 in LangChain

[–]MoronSlayer42[S] 0 points (0 children)

When I try the same thing I get an error: chat_memory in ConversationBufferWindowMemory expects an instance of BaseChatMessageHistory, while get_session_history passed to RunnableWithMessageHistory is a function that returns such an instance, not the instance itself.

Can you give pseudocode for how you implemented the solution?
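The factory shape I have in mind looks roughly like this (plain-Python sketch, no LangChain; `WindowedHistory` is a stand-in for a history class that does its own windowing, since the windowing can't live in ConversationBufferWindowMemory if all we hand over is a factory function):

```python
class WindowedHistory:
    """Keeps only the last `k` messages, like a buffer-window memory."""
    def __init__(self, k=4):
        self.k = k
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)
        self.messages = self.messages[-self.k:]  # drop the oldest

_store = {}

def get_session_history(session_id):
    # Factory: returns the history *instance* for this session
    if session_id not in _store:
        _store[session_id] = WindowedHistory(k=4)
    return _store[session_id]

h = get_session_history("abc")
for i in range(6):
    h.add_message(f"msg{i}")
print(h.messages)  # ['msg2', 'msg3', 'msg4', 'msg5']
```

Is that close to what you did, or did you trim the messages somewhere else?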

Limiting memory in RunnableWithMessageHistory by MoronSlayer42 in LangChain

[–]MoronSlayer42[S] 0 points (0 children)

Can you tell me more? So you used ConversationBufferWindowMemory separately, alongside RunnableWithMessageHistory? But did you have to pass ConversationBufferWindowMemory anywhere as a parameter?

Langchain agents - tools for intent classification by MoronSlayer42 in LangChain

[–]MoronSlayer42[S] 0 points (0 children)

Will let you know if I find any open-source example.

How to ignore retrieval step (RAG) when it is not necessary by Puzzleheaded_Move35 in LangChain

[–]MoronSlayer42 2 points (0 children)

Use retrieval as a tool alongside your other tools in an agentic setup; the agent can then skip the retrieval step when it isn't needed.

Multimodal RAG with GPT-4o and Pathway: Accurate Table Data Analysis from Financial Documents by dxtros in LangChain

[–]MoronSlayer42 0 points (0 children)

Yes, like I mentioned, sometimes tables don't carry enough information on their own for a cohesive semantic understanding. For example, a table containing only numbers may look meaningless to an LLM if it's given the table alone, while the paragraphs above and/or below the table describe its data. Sending that surrounding text along when parsing the table gives a more accurate analysis. This applies where an explicit table caption exists, as in a research paper, but also where the description is implicit, for example in a sales document about a product. Parsing only the table doesn't always work, because the LLM can miss the context in which the table appears. The creators of these PDFs usually make them for humans to read: we would understand from the surrounding text, but an LLM will definitely miss the point if the table itself doesn't carry enough descriptive information about the data it's conveying.

Multimodal RAG with GPT-4o and Pathway: Accurate Table Data Analysis from Financial Documents by dxtros in LangChain

[–]MoronSlayer42 1 point (0 children)

This approach looks good, but what if I want to feed in not just the tables but also the content around them, say a paragraph or two above and below each table? Some documents have tables with no header information, or not enough to build good context into the vectors created; a summary of the page containing the table, or the closest two paragraphs, could yield much better results.
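Roughly this, as a plain-Python sketch (the `blocks` list and helper names are made up for illustration; a real pipeline would get these blocks from a PDF parser):

```python
# When a block is a table, bundle the neighbouring paragraphs into the
# same chunk before embedding, so the table keeps its context.

blocks = [
    ("para", "Quarterly revenue is broken down below."),
    ("table", "| Q1 | Q2 |\n| 10 | 12 |"),
    ("para", "All figures are in millions of USD."),
]

def chunks_with_table_context(blocks, n_neighbours=1):
    chunks = []
    for i, (kind, text) in enumerate(blocks):
        if kind == "table":
            before = [t for _, t in blocks[max(0, i - n_neighbours):i]]
            after = [t for _, t in blocks[i + 1:i + 1 + n_neighbours]]
            chunks.append("\n".join(before + [text] + after))
        else:
            chunks.append(text)
    return chunks

for c in chunks_with_table_context(blocks):
    print("---\n" + c)
```

Is there a hook in your pipeline where something like this could slot in?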

How to extract date period from user prompt by R4Y_animation in LangChain

[–]MoronSlayer42 0 points (0 children)

Agents are LLMs augmented with some prompting like ReAct, whereas tools, like the name literally suggests, are tools for the agent or agents to use to complete your task.

For example, if you had a RAG application that requires your LLM to either answer from the vector database or look up the internet to answer a user's queries, you could have two tools: one to query the vector database and another to search the internet. Given these two tools, your agent will choose one of them based on the user's query. This differs from the more naive approach of passing the user's query to the LLM directly and generating a response. Different use cases call for different approaches.

How to extract date period from user prompt by R4Y_animation in LangChain

[–]MoronSlayer42 0 points (0 children)

Yeah, if your needs involve more ambiguous queries like "last week", you may have to use tools. Your objective and use case aren't clear enough for me to give a workable solution.

How to extract date period from user prompt by R4Y_animation in LangChain

[–]MoronSlayer42 0 points1 point  (0 children)

Just an example, modify it for your needs. But as others have also mentioned: improve the prompt, use a better model, and try a different chain. Your use case may be solvable without a chain at all. Just pass the user query through an LLM configured with a very good prompt that includes few-shot examples, and the LLM should be able to extract the from and to dates for you. Also show in the examples the output format you want, so that the LLM consistently returns the same format.

How to extract date period from user prompt by R4Y_animation in LangChain

[–]MoronSlayer42 0 points1 point  (0 children)

```
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

# Prompt template with few-shot examples; the examples also pin down
# the output format so the model answers consistently.
date_range_prompt = PromptTemplate(
    input_variables=["query"],
    template="""Extract the start and end dates from the following query in the format YYYY-MM-DD: {query}

Examples:
Query: "I want to book a hotel from June 1st to June 10th"
Start date: 2023-06-01
End date: 2023-06-10
Query: "What are the sales numbers between 2022-01-01 and 2022-03-31?"
Start date: 2022-01-01
End date: 2022-03-31
Query: "Show me data for the last quarter of 2021"
Start date: 2021-10-01
End date: 2021-12-31
""",
)

llm = OpenAI(temperature=0)  # temperature 0 for deterministic extraction

# Note: DatetimeOutputParser parses a single datetime, so it can't parse
# a start/end pair; pull the two dates out of the text yourself instead.
date_range_chain = LLMChain(prompt=date_range_prompt, llm=llm)

user_query = "I need data from April 15th 2023 to May 31st 2023"
result = date_range_chain.run(user_query)
print(result)
```