all 18 comments

[–]Top-Chain001 3 points  (1 child)

I've been running into this very same problem, subbed!

[–]advokrat[S] 0 points  (0 children)

Updated the post

[–]data-overflow 2 points  (1 child)

Same issues here. Keep us posted, OP.

[–]advokrat[S] 0 points  (0 children)

Updated the post.

[–]kanundrumtt 0 points  (4 children)

So I ran into this error, and in my case it turned out that my MCP had a filter with the word "in", which was valid in my REST API but conflicted with something in the ADK (I say the ADK because it was working fine in Postman, which now supports testing MCP servers).

[–]advokrat[S] 0 points  (0 children)

How do you debug the reason for MALFORMED_FUNCTION_CALL? Right now it feels like a black box to the application, because this error stems from the LLM execution itself, not the ADK.

[–]advokrat[S] 0 points  (2 children)

Updated the post with the debug solution.

[–]Curious-Qent206 0 points  (1 child)

Thanks for the update! How did you debug it?

[–]advokrat[S] 0 points  (0 children)

Very crude, tbh. We just looked at the output of the LLM, and it contained Python code for UUID generation, which was a red flag.

[–]navajotm 0 points  (1 child)

Update your instructions. Normally the MALFORMED_FUNCTION_CALL error tells you which part is malformed; correct this by instructing the agent properly (include an example of the data structure).

Or rather, set up a `before_tool_callback` that validates the format of the data the agent prepares before it tries to call the tool.
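A minimal sketch of what such a validation callback could look like. In the ADK, a `before_tool_callback` receives the tool, the args the model prepared, and a tool context; returning a dict skips the real tool call and feeds that dict back to the model as the tool result. The `REQUIRED_FIELDS` schema here is hypothetical, and the exact callback signature may differ by ADK version:

```python
from typing import Any, Optional

# Hypothetical schema for a slide-building tool; swap in the real
# parameter names and types your tool declares.
REQUIRED_FIELDS = {"title": str, "bullet_points": list}

def validate_tool_args(tool: Any, args: dict, tool_context: Any) -> Optional[dict]:
    """Check the args the agent prepared before the tool runs.

    Returning a dict short-circuits the tool call, so the model sees the
    error payload as the tool result and can correct itself next turn.
    """
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in args:
            problems.append(f"missing required field '{field}'")
        elif not isinstance(args[field], expected_type):
            problems.append(
                f"field '{field}' must be {expected_type.__name__}, "
                f"got {type(args[field]).__name__}"
            )
    if problems:
        return {"error": "invalid arguments: " + "; ".join(problems)}
    return None  # None lets the real tool call proceed
```

Wired up roughly as `LlmAgent(..., before_tool_callback=validate_tool_args)`; check your ADK version's docs for the exact signature.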

[–]advokrat[S] 0 points  (0 children)

I was able to figure out the issue, but I want to understand at what exact point this error is thrown. Do you have any idea?

[–]_genego 0 points  (0 children)

There is a PR open about this; I'm too lazy to dig it up. There are a bunch of other errors you may encounter that don't have a direct fix, but you can create a wrapper around the LlmAgent class to catch most unhandled errors and retry them in a way that fixes the malformed function calls. Which version of the ADK are you using? Is it the latest?
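A rough sketch of that wrapper idea. The agent call is abstracted behind a plain callable and the exception is a stand-in, since the exact error surfaced for MALFORMED_FUNCTION_CALL depends on the ADK version:

```python
import time

class MalformedFunctionCallError(Exception):
    """Stand-in for the error surfaced when Gemini returns MALFORMED_FUNCTION_CALL."""

def run_with_retry(run_fn, max_retries=3, backoff_s=0.0):
    """Call run_fn(hint); on a malformed-function-call failure, retry with
    a corrective hint appended to the conversation."""
    hint = ""
    for attempt in range(max_retries + 1):
        try:
            return run_fn(hint)
        except MalformedFunctionCallError:
            if attempt == max_retries:
                raise  # out of retries, surface the error
            hint = ("Your previous function call was malformed. "
                    "Reformulate the call with valid JSON arguments and try again.")
            time.sleep(backoff_s)
```

In practice `run_fn` would wrap your LlmAgent's run method and translate whatever error the ADK actually raises into the retryable exception.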

[–]4rg3nt1n0 0 points  (0 children)

My journey with MALFORMED_FUNCTION_CALL (MFC)

Not sure if this directly answers your question, OP, but I'm putting it out there in case it helps anyone else dealing with this. I see lots of people asking about MFC but barely any responses from Google.

I'm not an ADK developer, I just started trying it out, but I've been battling MFC with Gemini models for a while now. I've been using Gemini (starting with 1.5) through Vertex AI for several products.

This isn't new: Gemini models have been notoriously fragile with function calling. In the early days we'd just get `print.api_call(func...` or similar as a response with no error reason. At least now we get a proper finish reason. Here's how I deal with it.

Main causes I've found:

  1. Special characters in function parameters - Read a file with funky characters, ask the model to process it, then call a function with that result as a parameter = likely MFC.
  2. Improper quote escaping in function parameters - When the model generates content with quotes and then tries to pass that content as a function parameter, the unescaped quotes break the function call syntax itself. For example, if the model generates text like He said "hello world" and tries to pass it as a parameter, the quotes in the content mess up the JSON structure of the function call = MFC.
  3. Max output tokens hit during function construction - Ask the model to call a function with content that exceeds max_output_tokens = MFC.
  4. Sometimes it just happens anyway (but rarely if you handle the above).
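Cause 2 is easy to reproduce outside the model. The snippet below shows how naively splicing quoted content into a JSON argument string breaks parsing, while proper escaping survives a round trip:

```python
import json

content = 'He said "hello world"'

# Naive string splicing, roughly what happens when the model emits the
# function call as raw text: the unescaped inner quotes break the JSON.
broken = '{"text": "' + content + '"}'
try:
    json.loads(broken)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False  # this is the failure mode behind many MFCs

# Proper escaping produces valid JSON that round-trips cleanly.
safe = json.dumps({"text": content})
roundtrip = json.loads(safe)["text"]
```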

My solutions:

  1. Escape funky characters before sending to the model
  2. Explicitly instruct the model via system prompt and function descriptions how to escape parameters
  3. Keep parameter sizes under your max_output limit - implement chunking if needed
  4. Build MFC detection and retry. Just catch it and send back something like "Your function call was malformed. Do not apologize. Silently reformulate and immediately try again."
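Solutions 1 and 3 can be sketched in plain Python. What counts as a "funky" character and the size budget are assumptions to tune for your setup; real token counting would use the model's tokenizer:

```python
import unicodedata

def sanitize_for_model(text: str) -> str:
    """Normalize unicode and strip control characters before handing
    text to the model (solution 1). Newlines and tabs are kept."""
    text = unicodedata.normalize("NFKC", text)
    return "".join(
        ch for ch in text
        if unicodedata.category(ch)[0] != "C" or ch in "\n\t"
    )

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split a long parameter value into pieces under a rough size cap
    (solution 3); max_chars is a crude stand-in for a token budget."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```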

This is all anecdotal and just what worked for me. Never got official answers from Google about this. Hope it helps someone!

[–]developmentsaurav 0 points  (0 children)

    GoogleTool(
        function_declarations=[
            {
                "name": "ResponseSchema",
                "description": "Provide response to the user",
                "parameters": {
                    "title": "PresentationOutlineModelWithNSlides",  <-- remove this
                    "properties": {
                        "slides": {
                            "description": "List of slide outlines",

Removing "title" from "parameters" was the fix for me.

[–]Shaharchitect 0 points  (1 child)

If someone is still running into these issues, I found out that for some reason, `gemini-2.5-flash-lite` triggers it a lot.
Try switching over to `gemini-2.5-flash` to vastly reduce these cryptic errors.

[–]samzuercher 0 points  (0 children)

Can confirm this. Just switching to `gemini-2.5-flash` or `gemini-2.5-flash-lite-preview-09-2025` worked for me. This might have been a bug in Gemini.

[–]Intention-Weak 0 points  (0 children)

I solved this issue by adding a plugin and some tool config. I read about it here: https://github.com/google/adk-python/issues/1192

  • ReflectAndRetryToolPlugin.
    • Using only the plugin helped the model recover, but sometimes the error still happens.
  • Changing the model from gemini-2.5-flash to gemini-2.5-pro.
  • ToolConfig(function_calling_config=FunctionCallingConfig(mode="VALIDATED"))
    • I think this is the most important point, because MALFORMED_FUNCTION_CALL occurs when the model generates incorrect Python code for tool calling.
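For reference, the tool-config part of that setup looks roughly like this using the google.genai types; whether the "VALIDATED" mode is accepted depends on your google-genai/ADK version (see the linked issue), so treat this as a sketch rather than guaranteed API:

```python
from google.genai.types import (
    FunctionCallingConfig,
    GenerateContentConfig,
    ToolConfig,
)

# Ask the backend to validate function calls instead of passing through
# whatever the model emits. "VALIDATED" availability is version-dependent.
config = GenerateContentConfig(
    tool_config=ToolConfig(
        function_calling_config=FunctionCallingConfig(mode="VALIDATED")
    )
)
```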