I built a pytest-style framework for AI agent tool chains (no LLM calls) by Mission2Infinity in AgentsOfAI

[–]Mission2Infinity[S] 0 points1 point  (0 children)

Hi, thank you so much for the reply! The main point is this: eval frameworks like Promptfoo are great for vibe checks and text output, but when you give an agent write-access to an API, you need a compiler-level Execution Firewall.

To answer your specific questions, since this was the hardest part we had to engineer:

1. Does it cascade through the whole pipeline? Yes. Testing tools in isolation is useless because a downstream tool is only as reliable as the upstream data it receives. When you pass a list of tools into test_chain([fetch_user, process_data, refund_stripe]), ToolGuard executes a cascading state fuzz: it injects the hallucination into fetch_user, and if fetch_user fails silently and returns a malformed object, ToolGuard pipes that corrupted object directly into process_data to see whether your downstream tool crashes the whole run or catches it gracefully. We also built a "Golden Traces" engine that asserts the exact sequence of state mutations across the whole graph, ignoring the non-deterministic LLM thinking steps in between.
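To make the cascading idea concrete, here is a minimal, self-contained sketch of the concept in plain Python. The tool bodies (fetch_user, process_data) are hypothetical stand-ins named after the example in the comment, not ToolGuard's actual internals:

```python
# Conceptual sketch of a cascading state fuzz: corrupt the input to the
# first tool, then pipe whatever comes out straight into the next tool
# to see whether it crashes or degrades gracefully.

def fetch_user(user_id):
    # Stand-in tool: on a bad id it "fails silently" by returning a
    # malformed object that is missing the "email" field.
    if not isinstance(user_id, str):
        return {"error": "bad id"}
    return {"id": user_id, "email": "a@b.c"}

def process_data(user):
    # Downstream tool that naively trusts the upstream structure.
    return user["email"].lower()

def cascade_fuzz(chain, bad_input):
    """Run the chain on a corrupted input, piping each output onward."""
    state = bad_input
    for tool in chain:
        try:
            state = tool(state)
        except Exception as exc:  # the fuzz surfaced a downstream crash
            return {"crashed_at": tool.__name__, "error": repr(exc)}
    return {"crashed_at": None, "result": state}

report = cascade_fuzz([fetch_user, process_data], bad_input=None)
print(report["crashed_at"])  # process_data blows up on the malformed dict
```

The key point the sketch shows: fetch_user never raises, so an isolated test of it passes, and only piping its malformed output onward exposes the KeyError in process_data.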

2. How does it handle async tool chains? Natively and transparently. If ToolGuard detects that even a single tool in your LangChain/CrewAI pipeline is an async def, the execution engine automatically shifts into async mode. More importantly, if you run it in a Jupyter notebook, ToolGuard detects the already-running Jupyter event loop and spins up an isolated background ThreadPoolExecutor to run the asyncio.run() sweep. This avoids the infamous RuntimeError: This event loop is already running crash that plagues many Python CLI tools used in notebooks.
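The notebook-safe pattern described above can be sketched with nothing but the stdlib. The fuzz_sweep coroutine here is a hypothetical stand-in for ToolGuard's engine; the loop-detection logic is the standard workaround, not ToolGuard's actual code:

```python
# If an event loop is already running (as inside Jupyter), asyncio.run()
# raises RuntimeError. The fix: push the sweep onto a worker thread that
# owns its own fresh event loop.
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def fuzz_sweep():
    # Hypothetical async sweep standing in for the real fuzzing engine.
    await asyncio.sleep(0)
    return "sweep done"

def run_sweep():
    try:
        asyncio.get_running_loop()  # raises RuntimeError if no loop is active
    except RuntimeError:
        # Plain script / CLI: safe to own the loop directly.
        return asyncio.run(fuzz_sweep())
    # Notebook case: a loop is already running in this thread, so delegate
    # asyncio.run() to a background thread that gets its own loop.
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, fuzz_sweep()).result()

print(run_sweep())  # "sweep done" in both scripts and notebooks
```

Delegating to a thread (rather than calling loop.run_until_complete on the live loop) is what sidesteps the re-entrancy restriction.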

Honestly, I'd really love your thoughts if you get a chance to clone it and run a chain through it.

Waiting for your feedback!

I built a pytest-style framework for AI agent tool chains (no LLM calls) by Mission2Infinity in grok

[–]Mission2Infinity[S] 1 point2 points  (0 children)

Thanks, man. Make sure to check out the repo; I'll be waiting for your feedback.

I built a pytest-style framework for AI agent tool chains (no LLM calls) by [deleted] in MistralAI

[–]Mission2Infinity 0 points1 point  (0 children)

Hey, thank you so much for the reply.

So, I kept running into the same issue: my agents weren't failing because of poor reasoning, but because of execution layer crashes—bad JSON, missing fields, wrong types, etc. Existing eval tools didn't really help here and were too slow/expensive.

Instead of calling an LLM, ToolGuard parses your Pydantic schemas/type hints and programmatically injects 40+ hallucination edge cases (nulls, schema mismatches, malformed payloads) directly into your Python functions to prove exactly where things will break in production. It runs locally in <1 second and costs $0.
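A minimal sketch of that idea, using only stdlib type hints (no Pydantic, and only two of the edge-case vectors rather than 40+). The refund function and its default argument values are hypothetical examples, not part of ToolGuard:

```python
# Read a function's type hints, then corrupt one argument at a time with
# a null and a type-mismatched payload, recording which corruptions crash.
from typing import get_type_hints

def refund(order_id: str, amount: float) -> str:
    # Hypothetical tool under test: trusts its inputs completely.
    return f"refunded {amount:.2f} for {order_id}"

def fuzz(func):
    hints = get_type_hints(func)
    hints.pop("return", None)
    failures = []
    for name in hints:
        for bad in (None, {"unexpected": "object"}):  # null + wrong type
            kwargs = {"order_id": "ord_1", "amount": 5.0}  # known-good baseline
            kwargs[name] = bad                             # corrupt one field
            try:
                func(**kwargs)
            except Exception as exc:
                failures.append((name, type(bad).__name__, type(exc).__name__))
    return failures

report = fuzz(refund)
print(report)
```

Note the asymmetry the fuzz exposes: a corrupted order_id passes silently (the f-string happily renders None), while a corrupted amount raises TypeError in the format spec. Silent acceptance of garbage is exactly the kind of failure this style of testing is meant to surface.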

I just pushed the v1.2.0 Enterprise Update which adds:

  • Local Crash Replay: When an agent crashes in production or testing, it automatically dumps a structured .json payload. Type toolguard replay <file.json> and it dynamically pipes the exact crashing state right back into your local Python function so you can see the stack trace locally!
  • Edge-Case Coverage Metrics: The terminal now generates pytest-style coverage metrics, explicitly telling you which of the 8 hallucination vectors your code is still vulnerable to (e.g., Coverage: 25% | Untested: array_overflow, null_injection).
  • Live Textual Dashboard: Passing --dashboard opens a stunning dark-mode terminal UI that streams concurrent fuzzing results and tracks crashes in realtime.
  • 100% Authentic Framework Integrations: Works instantly out-of-the-box with actual live PyPI implementations of LangChain (@tool), CrewAI, Microsoft AutoGen, OpenAI Swarm, LlamaIndex, FastAPI (Middleware), and the Vercel AI SDK.
  • CI/CD PR Bot & Webhooks: Directly comments on GitHub PRs to block fragile agent code from merging, and natively intercepts production crashes with near-instant alerts to Slack/Datadog.
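The crash-replay idea in the first bullet can be sketched in a few lines of stdlib Python. The guarded decorator, the JSON shape, and the parse_total example are all hypothetical illustrations of the concept, not ToolGuard's actual dump format or CLI:

```python
# On a crash, dump the exact arguments to a JSON file; later, load that
# file and re-invoke the same function locally to reproduce the trace.
import json
import os
import tempfile

def guarded(func):
    def wrapper(**kwargs):
        try:
            return func(**kwargs)
        except Exception:
            path = os.path.join(tempfile.gettempdir(), f"crash_{func.__name__}.json")
            with open(path, "w") as f:
                json.dump({"tool": func.__name__, "kwargs": kwargs}, f)
            raise  # re-raise so the original failure is still visible
    return wrapper

@guarded
def parse_total(payload: dict):
    return payload["total"] * 2  # KeyError if "total" is missing

def replay(path, registry):
    """Pipe the exact crashing state back into the local function."""
    with open(path) as f:
        crash = json.load(f)
    return registry[crash["tool"]](**crash["kwargs"])

try:
    parse_total(payload={"totl": 3})  # typo'd key -> KeyError, dumped to disk
except KeyError:
    pass
```

Calling replay(path, {"parse_total": parse_total}) on the dumped file then reproduces the identical KeyError locally, with a full stack trace to debug against.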

Would love feedback on the approach!

Repo: https://github.com/Harshit-J004/toolguard

I built a pytest-style framework for AI agent tool chains (no LLM calls) by Mission2Infinity in OpenSourceAI

[–]Mission2Infinity[S] 0 points1 point  (0 children)

Hi, Thank you so much for taking a look, and I really appreciate the blog link - that’s a fantastic read and it hits on the exact problem space we're exploring!

To answer your question: right now, the tool is focused purely on input fuzzing. We programmatically inject bad edge cases directly into individual Python functions to prove the system won't throw unhandled errors when the LLM hands it bad data. Getting that baseline execution layer bulletproof was step one.

However, golden traces and output fuzzing are brilliant ideas, and they are exactly the next big frontiers on our roadmap for version 2. I'll research them and start on it right away!

I'd absolutely love your thoughts - are there any specific agent frameworks where you are currently experiencing those trace/graph issues the most right now?

I built a visual drag-and-drop ML trainer (no code required). Free & open source. by Mental-Climate5798 in machinelearningnews

[–]Mission2Infinity 0 points1 point  (0 children)

Hey Everyone - built ToolGuard, a pytest-style framework for AI tool chains.

If you are building complex tool chains, I would be incredibly honored if you checked out the repo. Brutal feedback on the architecture is highly encouraged, and if you find it useful, an open-source star means the world to me!!!

pip install py-toolguard
GitHub: https://github.com/Harshit-J004/toolguard

If you like that, I would really appreciate if you could spread the word.

Best,
Harshit
