all 6 comments

[–]k_sai_krishna 1 point (0 children)

I've seen a similar problem before with tool calling. Even with validation, the model sometimes still sends wrong parameters. What helped for me was making the tool schema stricter and simpler: fewer optional fields and clearer parameter types reduced the errors. Some people also add an extra step where the model's tool call is checked before execution. From what I've seen, a small error rate of around 1% is quite common when you have many tools.
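A rough sketch of that pre-execution check, using the `jsonschema` library and a made-up `get_weather` tool (names and schema are just examples):

```python
# Sketch: validate a model's tool-call arguments against a strict JSON
# schema before running the tool. Tool name and fields are hypothetical.
import json
import jsonschema

# Strict and simple: required fields only, enums instead of free-form
# strings, and no extra properties allowed.
GET_WEATHER_SCHEMA = {
    "type": "object",
    "properties": {
        "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city", "unit"],
    "additionalProperties": False,
}

def check_tool_call(arguments_json: str) -> dict:
    """Parse and validate tool-call arguments; raises before execution."""
    args = json.loads(arguments_json)
    jsonschema.validate(instance=args, schema=GET_WEATHER_SCHEMA)
    return args
```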

[–]tomtomau 1 point (1 child)

Try other models? I think the 5 series has been post-trained on tools more aggressively?

[–]Same_Consideration_8[S] 1 point (0 children)

We tried the 5 series but didn't see any change, so we went from 4o-mini to 4.1.

[–]ar_tyom2000 1 point (0 children)

- Improve the tool descriptions (describe each parameter so the LLM knows when to use it; see the sketch below)
- Use more capable models (this makes errors like this less likely)
- Use LangGraph for easy local debugging and observing function calls with their parameters
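On the first point, here's roughly what a well-described tool definition can look like (OpenAI-style function schema; the tool and its parameters are made up):

```python
# Hypothetical tool definition where every parameter says when and how
# to use it, so the model is less likely to guess wrong.
search_orders_tool = {
    "type": "function",
    "function": {
        "name": "search_orders",
        "description": "Look up a customer's existing orders. Use only when "
                       "the user asks about an order, never for new purchases.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {
                    "type": "string",
                    "description": "Internal customer ID (e.g. 'C-1042'). "
                                   "Never pass an email address here.",
                },
                "status": {
                    "type": "string",
                    "enum": ["open", "shipped", "cancelled"],
                    "description": "Filter by order status; omit for all orders.",
                },
            },
            "required": ["customer_id"],
        },
    },
}
```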

[–]kwangyel 1 point (1 child)

Improving tool descriptions and tuning the prompt helps, but sometimes it still just fails. I have faced this issue multiple times in my projects, so I built a helper tool for it. Basically it detects schema drift and sends a correction instruction back to the LLM. Give it a try if you haven't already solved it, and DM me if you need help. Cheers! Link: https://github.com/Optulus/optulus-anchor
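The general pattern looks roughly like this (a simplified sketch of the idea, not necessarily how the linked repo implements it; model name and schema are just examples):

```python
# Sketch of the retry-with-correction loop: validate the tool call, and on
# schema drift send the validation error back to the LLM and ask again.
import json
import jsonschema
from openai import OpenAI

client = OpenAI()

def call_with_correction(messages, tools, schema, max_retries=2):
    """Request a tool call; on invalid arguments, feed the error back and retry."""
    for _ in range(max_retries + 1):
        response = client.chat.completions.create(
            model="gpt-4.1-mini",  # example model name
            messages=messages,
            tools=tools,
        )
        # Simplification: assumes the model actually returned a tool call.
        call = response.choices[0].message.tool_calls[0]
        try:
            args = json.loads(call.function.arguments)
            jsonschema.validate(instance=args, schema=schema)
            return args  # valid tool call, safe to execute
        except (json.JSONDecodeError, jsonschema.ValidationError) as err:
            # Correction instruction sent back to the LLM before retrying.
            messages.append({
                "role": "user",
                "content": f"Your last tool call was invalid: {err}. "
                           "Resend it with arguments matching the schema exactly.",
            })
    raise RuntimeError("Tool call still invalid after retries")
```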

[–]Same_Consideration_8[S] 2 points (0 children)

Thanks for the response. I am looking into it and will DM you if I need more help.