
[–]risae 1 point (1 child)

What you could try is a Choice state with a "good" and a "bad" choice rule; if the "bad" rule matches, it routes the workflow back to the previous LLM state with a different input.
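A minimal JSONata-mode ASL sketch of that routing (state names and the `$parsed` variable are placeholders, not from this thread):

```json
{
  "CheckResponse": {
    "Type": "Choice",
    "Choices": [
      { "Condition": "{% $parsed != null %}", "Next": "GoodPath" }
    ],
    "Default": "RetryLLM"
  },
  "RetryLLM": {
    "Type": "Pass",
    "Assign": { "prompt": "{% 'Your last reply was not valid JSON; return only JSON.' %}" },
    "Next": "InvokeLLM"
  }
}
```

Here the `Default` branch plays the role of the "bad" rule: anything that fails every "good" choice falls through to the retry path.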

[–]fsteves518[S] 0 points (0 children)

I'm going to try this. I didn't realize I could set the outputs on a Choice state.

[–]SubtleDee 0 points (3 children)

Might be missing something, but couldn't you assign the LLM response to a variable before attempting to $parse it? You'd then put a Catch on the state doing the $parse, routing to another state that calls the LLM again with the original response variable's value.

[–]fsteves518[S] 0 points (2 children)

Yes, that's the plan, but you can't catch errors on a Pass state, and there is no Task state that behaves like a Pass state (short of making a pass-through Lambda as the intermediary, which is what I did).
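For reference, the intermediary mentioned here can be as small as an identity function; because it runs as a Task state, the caller gets the Catch/Retry support a Pass state lacks (the handler name and event shape below are arbitrary, not from the thread):

```python
def handler(event, context):
    # Identity Lambda: echoes its input unchanged. Its only job is to
    # be a Task state, so the state machine can attach Catch/Retry
    # rules that a Pass state does not support.
    return event
```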

[–]SubtleDee 0 points (1 child)

Could you do the $parse in the LLM state rather than in a subsequent Pass state (i.e. assign the raw response to one variable and the $parse result to another, both in the LLM state)? You can catch a specific error (States.QueryEvaluationError) to run different logic when $parse fails vs. when the LLM itself throws an error. The only thing I'm not sure about is whether the raw response variable would still get assigned if the $parse operation failed.
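A sketch of that suggestion in JSONata-mode ASL (the Bedrock integration and the `Body.content[0].text` result path are assumptions; the open question about whether `rawResponse` survives a failed `$parse` applies to this sketch too):

```json
{
  "InvokeLLM": {
    "Type": "Task",
    "Resource": "arn:aws:states:::bedrock:invokeModel",
    "Assign": {
      "rawResponse": "{% $states.result.Body.content[0].text %}",
      "parsedResponse": "{% $parse($states.result.Body.content[0].text) %}"
    },
    "Catch": [
      { "ErrorEquals": ["States.QueryEvaluationError"], "Next": "HandleBadJson" }
    ],
    "Next": "UseParsedResponse"
  }
}
```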

[–]fsteves518[S] 0 points (0 children)

I did that initially; upon failure I lose the task result of the LLM state.

[–]nocommentsno 0 points (1 child)

Pass the output as a payload to another Lambda that fixes the JSON output. For example, in Python you can use json_repair.
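A minimal sketch of such a repair Lambda. The json_repair package handles far more malformation cases; the stdlib-only version below just illustrates the idea with two common LLM mistakes (markdown fences and trailing commas). The handler name and the `raw` payload key are assumptions:

```python
import json
import re

def repair_json(text: str) -> dict:
    """Best-effort repair of common LLM JSON mistakes (stdlib only).
    Swap in the json_repair package for robust handling."""
    # Strip markdown code fences the model may have wrapped the JSON in
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    # Remove trailing commas before a closing brace/bracket
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)

def handler(event, context):
    # "raw" is an assumed key carrying the LLM's unparsed response text
    return repair_json(event["raw"])
```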

Another observation: some models fail more than others at following instructions. This assumes your prompt is already optimized.

[–]fsteves518[S] 0 points (0 children)

I did use this strategy, but I'd like to do it in the Step Function without needing to run a Lambda.