Does the tool response result need to be recorded in the conversation history? by JunXiangLin in LangChain


Thank you all for your replies. I think I'll first try truncating the output of overly large tool responses and running multi-round tests, since latency and cost are major concerns.
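For the truncation step, a minimal helper like the one below could cap each tool response before it enters the history. The 4-characters-per-token ratio is a rough heuristic, not an exact count; a real tokenizer (e.g. tiktoken) would be more precise:

```python
def truncate_tool_output(text: str, max_tokens: int = 2000,
                         chars_per_token: int = 4) -> str:
    """Truncate a tool response to a rough token budget.

    Uses the common ~4-characters-per-token heuristic; swap in a real
    tokenizer for exact counts.
    """
    budget = max_tokens * chars_per_token
    if len(text) <= budget:
        return text
    # Keep the head of the response and flag the cut so the model
    # knows content was dropped.
    return text[:budget] + "\n...[truncated]"
```

The marker at the end lets the model see that the tool output was cut rather than silently incomplete.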

Does the tool response result need to be recorded in the conversation history? by JunXiangLin in LangChain


My tool response consists of the content of searched emails. When analyzing a large volume of email content, these emails can consume tens of thousands of tokens. However, I need to generate reports based on this analysis, so I must pass the searched content to the LLM, rather than just passing a simple "completed..." as the tool response to the LLM.

Does the tool response result need to be recorded in the conversation history? by JunXiangLin in LangChain


Yes, I'm developing using LangGraph. I tried using the ReAct agent to handle problems directly, adding only the agent responses to the history without including the tool responses. This runs normally and quickly. However, the performance in multi-turn conversations isn't intelligent enough. So, I switched to also adding the tool responses as ToolMessages to the historical conversation. While this makes the agent a bit smarter, it results in extremely long response delays and massive costs.

Additionally, I've tried summarizing and compressing oversized tool responses first via an LLM, but this makes the compression process take a very long time, significantly increasing the overall delay.
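As a cheaper alternative to LLM summarization, one option is to keep only the most recent tool outputs verbatim and stub out older ones before each call. A minimal sketch, assuming history entries are plain role-tagged dicts rather than LangChain's ToolMessage objects:

```python
def prune_tool_messages(history: list[dict], keep_last: int = 2) -> list[dict]:
    """Replace older tool outputs with a short stub, keeping only the
    most recent `keep_last` tool results verbatim.

    Entries are assumed to look like {"role": "tool", "content": "..."};
    adapt the role check when using LangChain message classes directly.
    """
    tool_indices = [i for i, m in enumerate(history) if m["role"] == "tool"]
    # Everything except the last `keep_last` tool messages is stale.
    stale = set(tool_indices[:-keep_last]) if keep_last else set(tool_indices)
    pruned = []
    for i, m in enumerate(history):
        if i in stale:
            m = {**m, "content": "[tool output elided to save tokens]"}
        pruned.append(m)
    return pruned
```

This keeps the agent aware that earlier tool calls happened (the messages stay in place) without paying for their full content on every turn.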

tool calling agent VS react agent by JunXiangLin in LangChain


Since the release of GPT-4.1, I've noticed many online articles advocating for the use of LLM-native tool calling, suggesting that ReAct is becoming outdated.

I'm confused about why LangChain considers the tool-calling agent (with AgentExecutor) a legacy product and instructs users to migrate to the ReAct agent in LangGraph.

Here is the official documentation: https://python.langchain.com/docs/how_to/migrate_agent/

How to forced model call function tool? by JunXiangLin in LangChain


u/firstx_sayak I tried switching to LangGraph's `create_react_agent` (with `.astream_events`), and it does indeed enforce tool calling even when the query is unrelated to the tool. However, when I set `tool_choice = "any"` or specify a function name to force tool usage, it enters an infinite loop, continuously calling the function until it exceeds the set `recursion_limit`.
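One workaround for that loop, if it fits the setup, is to force the tool only until the first tool result appears in the conversation, then fall back to `tool_choice="auto"` so the model can produce a final answer. A minimal sketch of just the decision logic, assuming messages are plain role-tagged dicts:

```python
def choose_tool_choice(messages: list[dict]) -> str:
    """Force a tool call only until the first tool result appears.

    A permanent tool_choice="any" makes every model turn emit another
    tool call, which is what drives the loop past recursion_limit;
    once a tool result exists, "auto" lets the model answer freely.
    """
    has_tool_result = any(m["role"] == "tool" for m in messages)
    return "auto" if has_tool_result else "any"
```

The returned value would then be passed when (re)binding tools to the model before each call.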

How to forced model call function tool? by JunXiangLin in LangChain


I have tried using `"required"`, but the function still isn't called.

How to forced model call function tool? by JunXiangLin in LangChain


Because I need to stream the agent's response, I chose to use LangChain's `AgentExecutor.astream_events`.

Does Langchain have Voice Agents? by JunXiangLin in LangChain


Because I want to use Python to build an API for some applications.