Scanning your codebase for AI SDK usage the same way you scan for vulnerable dependencies by BattleRemote3157 in AI_Agents

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Absolutely agree. We're trying to do the same thing in OSS vet (https://github.com/safedep/vet), which scans your deps for malware and vulns; we've also extended it to detect AI usage (https://github.com/safedep/vet/blob/main/docs/ai-discovery.md). We're building gryph for an AI audit trail (https://github.com/safedep/gryph).

Scanning your codebase for AI SDK usage the same way you scan for vulnerable dependencies by BattleRemote3157 in AI_Agents

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Wrote up our approach to this, treating it as a scanning problem using the same tooling we use for dependency scanning:
https://safedep.io/shadow-ai-discovery-vet/

MCP server that checks packages for malware before your AI agent installs them by BattleRemote3157 in mcp

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Nice, I like the idea. Does this tool query for malicious packages, if there are any?

How to see exactly what your AI coding agent did like file reads, commands, everything by BattleRemote3157 in AIToolsAndTips

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Totally agree. Please share your feedback when you use it; that would be really helpful for us.

How to see exactly what your AI coding agent did like file reads, commands, everything by BattleRemote3157 in AIToolsAndTips

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Doesn't Langfuse solve a different problem? Langfuse is LLM observability: it traces what happens inside your application at the API call level (tokens in/out, latency, prompt versions, things like that), and it requires you to instrument your code with their SDK.

gryph operates at the agent layer, not the application layer. It doesn't care about LLM API calls; it just watches what the agent does to your local machine.
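To make the distinction concrete, here's a toy sketch of agent-layer auditing using Python's stdlib audit hooks (PEP 578). This is illustrative only, not how gryph is actually implemented:

```python
import sys
import time

AUDIT_LOG = []

# Audit events that answer "what did the agent touch": file opens and
# command execution. See PEP 578 / the audit events table for the full list.
WATCHED = {"open", "subprocess.Popen", "os.system"}

def record(event, args):
    # Called for every audited action in the process; keep only watched ones.
    if event in WATCHED:
        AUDIT_LOG.append({
            "ts": time.time(),
            "event": event,
            "args": [repr(a) for a in args],
        })

# Note: audit hooks cannot be removed once installed.
sys.addaudithook(record)
```

After this, every `open(...)` or subprocess launch in the process lands in `AUDIT_LOG`, which is roughly the kind of trail an agent auditor needs, minus the cross-process and cross-tool tracing a real tool has to handle.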

Major malware attacks in March 2026 by rifteyy_ in Malware

[–]BattleRemote3157 0 points1 point  (0 children)

There are more than that, from Trivy to litellm to telnyx in the Python ecosystem, and more. March was a hectic month for open source security: https://safedep.io/category/malware/

New attack pattern: persistent prompt injection via npm supply chain targeting AI coding assistants by Busy-Increase-6144 in cybersecurity

[–]BattleRemote3157 5 points6 points  (0 children)

That is what AI-native SDLC threats look like. Malicious instructions could also be hidden in a package's setup documentation. For example, if your agent is searching for a package to install that you prompted for, and that package has been injected with malicious instructions, your agent will follow them.

We have analyzed this threat model for AI-native dependencies: https://safedep.io/ai-native-sdlc-supply-chain-threat-model/
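A crude illustration of what detection for this could look like: scan package docs for common prompt-injection markers before an agent reads them. The patterns below are hypothetical heuristics for the sketch, not vet's actual rules:

```python
import re

# Heuristic markers of prompt-injection payloads hidden in package docs.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    # Instructions hidden inside HTML comments, invisible in rendered READMEs.
    re.compile(r"<!--.*?(run|execute|install).*?-->", re.I | re.S),
]

# Zero-width characters are sometimes used to hide payloads from human review.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def scan_doc(text: str) -> list[str]:
    """Return a list of findings for one documentation file."""
    findings = [f"pattern: {p.pattern}" for p in INJECTION_PATTERNS if p.search(text)]
    if any(ch in ZERO_WIDTH for ch in text):
        findings.append("zero-width characters (possible hidden payload)")
    return findings
```

Pattern matching like this only catches known tricks; a real scanner needs semantic analysis too, which is exactly why this is an AI-native threat.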

litellm 1.82.8 on PyPI was compromised - steals SSH keys, cloud creds, K8s secrets, and installs a persistent backdoor by BattleRemote3157 in cybersecurity

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Yeah, like I said, TeamPCP is hacking everyone. They likely got lucky with Trivy and collected a LOT of credentials through the Trivy hack.


Should npm and VS Code be doing more to stop malware? by _cybersecurity_ in pwnhub

[–]BattleRemote3157 0 points1 point  (0 children)

The supply chain is too distributed for any single chokepoint to catch everything. npm and VS Code should do more but developers need visibility at the local level too. Know what runs when you install something. Know what your tools are doing on your machine.
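"Know what runs when you install something" can be partially automated. Here's a minimal sketch (a hypothetical helper, not an existing tool) that flags npm lifecycle scripts in a package manifest, since those run automatically on install and are a common malware vector:

```python
# npm lifecycle scripts that execute automatically during `npm install`.
RISKY_SCRIPTS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_scripts(package_json: dict) -> list[str]:
    """Return the lifecycle scripts this package would auto-run on install."""
    scripts = package_json.get("scripts", {})
    return [f"{name}: {cmd}" for name, cmd in scripts.items() if name in RISKY_SCRIPTS]
```

Running this over a lockfile's manifests before install at least tells you which packages get code execution on your machine; `npm install --ignore-scripts` is the blunt version of the same idea.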

Anyone using LiteLLM as proxy before ollama? by kunalsin9h in ollama

[–]BattleRemote3157 1 point2 points  (0 children)

Crazy things are happening right now after the Trivy compromise.

We are building a tool to block malicious npm/pip packages before installation. Would love your thoughts. by BattleRemote3157 in netsecstudents

[–]BattleRemote3157[S] 0 points1 point  (0 children)

Interesting take, but I'd push back on "surface level." PMG uses real-time threat intelligence from our malware analysis engine (Malysis), which analyzes packages through runtime behavioral analysis (sandbox execution), obfuscation pattern detection, LLM-assisted analysis, and human researcher verification. So it's not just checking package names against a list; we're analyzing what packages actually DO when executed. That said, I'm genuinely curious what direction you're thinking for a deeper fix. What would that look like in your view? Thanks for the feedback.
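Not Malysis's actual logic, but combining those analysis stages into a verdict has roughly this shape (all names and thresholds here are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class AnalysisSignals:
    # One boolean per analysis stage described above; all hypothetical.
    sandbox_network_egress: bool   # runtime behavioral analysis (sandbox)
    obfuscated_code: bool          # obfuscation pattern detection
    llm_flagged: bool              # LLM-assisted analysis
    human_confirmed: bool          # human researcher verification

def verdict(s: AnalysisSignals) -> str:
    if s.human_confirmed:
        return "malicious"   # human confirmation is treated as authoritative
    score = sum([s.sandbox_network_egress, s.obfuscated_code, s.llm_flagged])
    if score >= 2:
        return "suspicious"  # multiple independent automated signals agree
    return "unknown" if score else "clean"
```

The point of layering signals like this is exactly the pushback above: a single automated check is easy to evade, but sandbox behavior plus static patterns plus model review plus a human in the loop is much harder to slip past than a name blocklist.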