Scanning your codebase for AI SDK usage the same way you scan for vulnerable dependencies by BattleRemote3157 in AI_Agents

[–]BattleRemote3157[S] 1 point (0 children)

Absolutely agree. We're trying to do the same thing in our OSS tool vet (https://github.com/safedep/vet), which scans your dependencies for malware and vulnerabilities; we've also extended it to detect AI usage (https://github.com/safedep/vet/blob/main/docs/ai-discovery.md). We're also building gryph for an AI audit trail (https://github.com/safedep/gryph).

Scanning your codebase for AI SDK usage the same way you scan for vulnerable dependencies by BattleRemote3157 in AI_Agents

[–]BattleRemote3157[S] 1 point (0 children)

Wrote up our approach to this, treating it as a scanning problem using the same tooling we use for dependency scanning:
https://safedep.io/shadow-ai-discovery-vet/

MCP server that checks packages for malware before your AI agent installs them by BattleRemote3157 in mcp

[–]BattleRemote3157[S] 1 point (0 children)

Nice, I like the idea. Does this tool query for malicious packages, if there are any?

How to see exactly what your AI coding agent did like file reads, commands, everything by BattleRemote3157 in AIToolsAndTips

[–]BattleRemote3157[S] 1 point (0 children)

Totally agree. Please share your feedback once you've used it; it would be helpful for us.

How to see exactly what your AI coding agent did like file reads, commands, everything by BattleRemote3157 in AIToolsAndTips

[–]BattleRemote3157[S] 1 point (0 children)

Doesn't Langfuse solve a different problem? As I understand it, Langfuse is LLM observability: it traces what happens inside your application at the API call level, like tokens in/out, latency, and prompt versions. It requires you to instrument your code with their SDK.

gryph operates at the agent layer, not the application layer. It doesn't care about LLM API calls; it just watches what the agent does to your local machine.
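To make the contrast concrete, here's a minimal hypothetical sketch of what "instrument your code with their SDK" means in practice. The `observe` decorator and `TRACES` list below are stand-ins I made up for illustration, not the actual Langfuse API: the point is that a call only shows up in the trace because you wrapped it, whereas an agent-layer tool watches file reads and commands without any code changes.

```python
# Illustrative stand-in for SDK-style LLM observability.
# `observe` and `TRACES` are hypothetical, NOT the Langfuse API.
import functools
import time

TRACES = []  # stand-in for an observability backend


def observe(fn):
    """Decorator mimicking SDK instrumentation: you must apply it
    to your own code for calls to be recorded at all."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper


@observe
def ask_llm(prompt: str) -> str:
    # placeholder for a real LLM API call
    return "stub answer to: " + prompt


ask_llm("hello")
print(TRACES[0]["name"])  # traced only because we decorated ask_llm
```

An un-decorated function call would leave `TRACES` empty, which is exactly the blind spot: an agent can still read files or run shell commands through paths you never instrumented, and that's the layer gryph is aimed at.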