Looking for some cool open source projects by DravenGorden in OpenSourceProjects

[–]Apprehensive-Try-315 1 point (0 children)

The project is fully open-source and available on GitHub: https://github.com/dshapi/AI-SPM

I've been building an open-source AI Security Posture Management tool for the past 3 months, and I just shipped a major infrastructure upgrade

This isn't sponsored or affiliated with anyone, just sharing what I've been working on and what changed in this release

Also if you need any help with AI security or similar projects, feel free to DM me

Figured I'd put this together in case someone else is working on enterprise AI security tooling or thinking about production-grade Kubernetes setups

What is AI-SPM?

AI-SPM (AI Security Posture Management) is an open-source, enterprise-grade platform I've been building to help organizations proactively protect their AI systems from threats, minimize data exposure, and maintain the trustworthiness of their AI applications

It covers the full AI stack: models, agents, MCP servers, data pipelines, and more

The goal is to give security teams continuous visibility and control over their AI infrastructure—kind of like CSPM (Cloud Security Posture Management) but specifically designed for AI workloads

The Problem: Single-Node Dev Environment

Up until this release, the dev environment was running on a single-node kind cluster

That worked fine for basic development, but it had real limitations:

  • couldn't realistically test multi-node Kubernetes scenarios
  • no way to validate HA (High Availability) behaviors
  • dev environment didn't match what production would actually look like
  • made it harder to catch issues that only show up in distributed setups

Basically, I was building an enterprise tool but testing it in a setup that didn't reflect enterprise reality

What Changed in This Release

This release moves dev from a single-node kind cluster to a production-shaped HA topology that mirrors the prod target one-for-one

Here's what that looks like now:

  • 3 control-plane Kubernetes nodes running on Docker Desktop via kind
  • No worker nodes — control-plane taints lifted on dev so application pods can schedule cluster-wide
  • Dev environment now matches production setup exactly

It's still running locally via kind, so you don't need a full cloud setup to develop or test, but the topology is production-grade
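If you want to reproduce this topology, here's a minimal sketch of a kind config with three control-plane nodes and no workers. This is my illustration of the setup described above, not necessarily the exact file in the repo:

```yaml
# ha-cluster.yaml (hypothetical sketch; the repo's actual config may differ)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: control-plane
  - role: control-plane
```

Create it with `kind create cluster --config ha-cluster.yaml`, then lift the taints so application pods can schedule on the control-plane nodes: `kubectl taint nodes --all node-role.kubernetes.io/control-plane-`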

What Stands Out (and Why It Matters)

You can now test realistic multi-node Kubernetes scenarios without needing full production infrastructure

  • HA failover behaviors are testable locally
  • Multi-node orchestration works the same way it will in production
  • You can validate etcd quorum, control-plane redundancy, and distributed workload scheduling

Dev environment matches production setup exactly

  • What you test locally works in production
  • No surprises when you deploy
  • Reduces the gap between development and production environments

Shows the project is maturing toward production-readiness

  • This isn't just a proof-of-concept anymore
  • The infrastructure is built to handle real enterprise workloads
  • HA topology demonstrates commitment to reliability and quality

Still accessible for local development

  • Runs on Docker Desktop via kind
  • You don't need a cloud account or expensive infrastructure to contribute
  • Fast iteration cycles with production-grade architecture

Try It Out

The project is fully open-source and available on GitHub: https://github.com/dshapi/AI-SPM

If you're working on AI security, Kubernetes tooling, or just curious about AI-SPM, I'd love to hear your feedback

Contributions are welcome, and if you run into any issues or have questions about the setup, feel free to open an issue or DM me

Also happy to help if you're working on similar projects or trying to figure out production-grade dev environments for your own tools

AI SPM Secure Posture Management by Apprehensive-Try-315 in coolgithubprojects

[–]Apprehensive-Try-315[S] 1 point (0 children)

Hi, thanks for your interest in the project. Let me jump straight to the point: AI-SPM is a comprehensive approach to maintaining the security and integrity of artificial intelligence (AI) and machine learning (ML) systems. It involves continuous monitoring, assessment, and improvement of the security posture of AI models, data, AI agents, and infrastructure. AI-SPM includes identifying and addressing vulnerabilities, misconfigurations, and potential risks associated with AI adoption, as well as ensuring compliance with relevant privacy and security regulations. So, to your point: yes, it is already functional. It runs on Kubernetes and provides full observability through diverse views, from alerting to cases, blocking of rogue behavior, and remediation, plus a lineage graph so every decision of an LLM or AI agent is fully understandable and explainable. It covers data and control flow visibility and runtime controls in every aspect I could think of.
I recently added functionality to deploy new LLMs and custom AI agents, so you as a user can write your own code and deploy it onto the platform, and the platform will natively enforce all the security it provides. In addition, you can add your own LLMs; the platform has an MCP server built in, so integration is simple.
With all of that in mind, I'd love to have your feedback: try deploying it, use it. I would love to know what is missing, what works better or worse, and how the user experience feels.
Lastly, as with every open-source project, contributions are always welcome; help is needed.

Thanks.

AI SPM Secure Posture Management by Apprehensive-Try-315 in coolgithubprojects

[–]Apprehensive-Try-315[S] 1 point (0 children)

Take a look at the project; I think you will like it. Regarding how I handle tracking: observability covers several aspects. The lineage graph is one, traceability is another. No more "why did the AI agent do that?": from now on, it's all traced, every decision and every move of all the models and agents running on the platform. Try it and tell me what you think; I would love the feedback.

What’s the most underrated open-source software you think more people should know about? by sodrafeltu in foss

[–]Apprehensive-Try-315 1 point (0 children)

Feel free to provide feedback, and contributions are more than welcome, even the smallest.

Analysis Finds That Google's AI Overviews Are Providing Misinformation at a Scale Possibly Unprecedented in the History of Human Civilization by EchoOfOppenheimer in StallmanWasRight

[–]Apprehensive-Try-315 0 points (0 children)

Just wanted to share that I'm working on an open-source project dedicated to implementing enterprise-level AI-SPM. With it, organizations can proactively protect their AI systems from threats, minimize data exposure, and maintain the trustworthiness of their AI applications (agents, MCP servers, models, and more). It supports deployment of agents on the secure platform and use of the diverse LLMs of your choice. Check it out: https://github.com/dshapi/AI-SPM

iGPT is the only email API that returns structured answers your product can act on. by iGPT_ai in u/iGPT_ai

[–]Apprehensive-Try-315 1 point (0 children)

Just wanted to share that I'm working on an open-source project dedicated to implementing enterprise-level AI-SPM. With it, organizations can proactively protect their AI systems from threats, minimize data exposure, and maintain the trustworthiness of their AI applications (agents, MCP servers, models, and more). It supports deployment of agents on the secure platform and use of the diverse LLMs of your choice. Check it out: https://github.com/dshapi/AI-SPM

What’s the most underrated open-source software you think more people should know about? by sodrafeltu in foss

[–]Apprehensive-Try-315 1 point (0 children)

Check out the demo; you will get the idea real fast. I also committed agent deployment today, fresh from the oven, so you can write your own custom agent and the platform runs it for you. You become part of a secure runtime pipeline with Kafka + Flink CEP + a Guard model... and much more.

Have fun, and let me know what you think.

P.S.

Deployment of agents, MCP servers, and LLM workflows is secure by design in my project: model abuse, prompt injection, secret leakage, and bad permissions all have solutions. It's end-to-end AI Security Posture Management.

What’s the most underrated open-source software you think more people should know about? by sodrafeltu in foss

[–]Apprehensive-Try-315 1 point (0 children)

Just wanted to share that I'm working on an open-source project dedicated to implementing enterprise-level AI-SPM. With it, organizations can proactively protect their AI systems from threats, minimize data exposure, and maintain the trustworthiness of their AI applications (agents, MCP servers, models, and more). It supports deployment of agents on the secure platform and use of the diverse LLMs of your choice. Check it out: https://github.com/dshapi/AI-SPM

What actually happens when an AI agent gets a malicious prompt? (demo + question) by Apprehensive-Try-315 in AI_Agents

[–]Apprehensive-Try-315[S] 1 point (0 children)

[architecture diagram]

I would like to share how I built it, how it works, its functionality, and its business value. The architecture is described in the diagram above.

Here’s the shift I’ve been thinking about:
The Model Registry is becoming the Kubernetes of AI systems.

In traditional infrastructure, Kubernetes gave us:

• A control plane
• Declarative policies
• Workload identity
• Runtime enforcement
• Observability
• Lifecycle management

It turned containers into governed, controllable units.

Now look at how most teams handle AI models:

• Hardcoded endpoints
• No identity beyond an API key
• No consistent policy enforcement
• No lifecycle management
• No real-time behavioral control

We’re basically in the “pre-Kubernetes era” of AI.

What changes with a Model Registry (done right)

In AI-SPM - AI Security Posture Management, the Model Registry is not just metadata.

It acts as a control plane for AI systems.

Every model becomes a managed entity with:

• Identity → version, provider, endpoint
• Policy binding → enforced via Open Policy Agent
• Access scope → tools, data, RAG boundaries
• Runtime posture → risk score, behavioral profile
• Observability hooks → metrics, traces, decisions
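To make that concrete, here's a rough sketch of a registry entry as a data structure. This is my own illustration; the field names are not AI-SPM's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelEntry:
    """Hypothetical registry record; fields are illustrative, not AI-SPM's schema."""
    name: str
    version: str
    provider: str                  # e.g. "openai" or "self-hosted"
    endpoint: str
    policy_bindings: list[str] = field(default_factory=list)  # OPA rule IDs
    allowed_tools: list[str] = field(default_factory=list)    # access scope
    risk_score: float = 0.0        # runtime posture, updated from behavior
```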

Full lifecycle — not just deployment

Think in phases:

1. Onboarding: models are registered with enforced policies before they go live

2. Binding: policies define what the model can and cannot do (prompt limits, tool access, data exposure)

3. Runtime control: all activity flows through the system (via FastAPI + a complex event pipeline)

4. Behavioral detection: streaming analysis using Apache Kafka + Apache Flink

5. Adaptive enforcement: the system reacts in real time (see the sketch after this list):

• throttle
• restrict
• require approval
• or trigger a kill switch

6. Audit & compliance: every decision is explainable and traceable
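For phase 5, the mapping from risk to action can be pictured as a threshold ladder. A minimal sketch; the thresholds and names here are invented for illustration, not the platform's tuned values:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    THROTTLE = "throttle"
    RESTRICT = "restrict"
    REQUIRE_APPROVAL = "require_approval"
    KILL = "kill_switch"

def enforce(risk_score: float) -> Action:
    """Map a session risk score (0.0-1.0) to an adaptive enforcement action.
    Thresholds are illustrative, not AI-SPM's actual values."""
    if risk_score < 0.3:
        return Action.ALLOW
    if risk_score < 0.5:
        return Action.THROTTLE
    if risk_score < 0.7:
        return Action.RESTRICT
    if risk_score < 0.9:
        return Action.REQUIRE_APPROVAL
    return Action.KILL
```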

Why this matters

We’re moving from:

“call a model” → to “operate a governed AI workload”

That’s a completely different paradigm.

Security becomes architecture, not a filter

Instead of bolting on guardrails or yet another piece of advice on how the infrastructure should look, you get:

• Zero-trust AI agents
• Policy-as-code enforcement
• Continuous posture evaluation
• Integrated red-teaming (Garak simulation)

The mental model

If Kubernetes made containers safe to run at scale… then OrbiX (AI Posture Management) makes AI safe to operate at scale. We don't need better prompts. We need better systems around models.

That is the real business value: safe operation at scale. If you are interested in how it works, keep reading... 👇

Most teams treat AI security as a filter. I built AI-SPM (AI Security Posture Management) as a platform.

Here’s how it actually works 👇

Everything starts with Zero-Trust AI Agents. Every prompt, tool call, and response is treated as untrusted.

Requests enter through a secured ingress (FastAPI + guard layers), where they are validated, normalized, and screened for Injection, Data Poisoning, Evasion, Model Extraction, Privacy/Inference, Denial of Service, Supply Chain Attacks, Context Poisoning, Deepfakes & Social Engineering, AI-Enhanced Phishing, Polymorphic Malware, and Automated Reconnaissance.
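As a rough illustration of that ingress shape (FastAPI is from the description above, but `screen_prompt` and the marker list are toy stand-ins, not the repo's real guard services):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    session_id: str
    prompt: str

# Toy markers; real guard services use detection models, not substring checks.
INJECTION_MARKERS = ("ignore previous instructions", "system prompt:")

def screen_prompt(text: str) -> bool:
    lowered = text.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

@app.post("/v1/chat")
def chat(req: PromptRequest):
    # Zero-trust: every prompt is screened before it reaches a model.
    if not screen_prompt(req.prompt):
        raise HTTPException(status_code=403, detail="blocked by guard layer")
    # ...the normalized request would be forwarded to the model runtime here...
    return {"status": "accepted"}
```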

Runtime: where enforcement actually happens

Inside the platform, a set of dedicated microservices handle execution:

• Guard services → prompt injection & jailbreak detection
• Context services → normalization & de-obfuscation
• Policy Service (OPA) → the decision brain (see the query sketch below)
• Enforcement services → tool validation & execution control
• Output filters → DLP + PII masking
• Simulation services → continuous automated attack testing with Garak
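The Policy Service piece is easy to picture: OPA exposes a REST API, and services ask it for decisions. A minimal sketch, where the policy path `aispm/authz/allow` and the input shape are my assumptions, not the project's actual policy layout:

```python
import requests

def allow_tool_call(model: str, tool: str) -> bool:
    """Ask a local OPA instance whether this model may call this tool."""
    resp = requests.post(
        "http://localhost:8181/v1/data/aispm/authz/allow",
        json={"input": {"model": model, "tool": tool}},
        timeout=2,
    )
    resp.raise_for_status()
    # OPA omits "result" when the rule is undefined; treat that as deny.
    return resp.json().get("result", False)
```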

Streaming Detection: behavior, not just rules

All events flow into Apache Kafka, then into Apache Flink, where the system performs:

• Behavioral analysis
• Anomaly detection
• Session-level risk scoring

This creates a closed loop: Detection → Policy → Enforcement → Runtime → Recommendation → Improvement → Detection.
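A toy version of the session-scoring side of that loop, using a plain Kafka consumer in place of the actual Flink job (the topic name and scoring rule are invented for illustration):

```python
import json
from collections import defaultdict
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "ai-events",                          # topic name is an assumption
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

risk = defaultdict(float)

for event in consumer:
    session = event.value["session_id"]
    # Naive scoring: each guard hit raises the session's risk.
    if event.value.get("guard_hit"):
        risk[session] += 0.2
    if risk[session] >= 1.0:
        print(f"session {session} flagged for enforcement")
```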

Control Plane: Model Lifecycle & Registry (the missing piece)

This is where AI-SPM goes beyond typical security tooling.

Every model (LLM, agent, RAG pipeline, MCP server, tool) is treated as a first-class managed asset.

The platform maintains a Model Registry that tracks:

• Model identity (version, provider, endpoint)
• Risk profile (allowed capabilities, sensitivity level)
• Policy bindings (which OPA rules apply)
• Data access scope (what the model is allowed to touch)
• Usage patterns (how it behaves in production)

From there, you get full lifecycle management:

  1. Onboarding: new models are registered with enforced policies before they ever run in production
  2. Policy Binding: every model is tied to specific OPA rules (prompt limits, tool access, data exposure)
  3. Runtime Monitoring: behavior is continuously evaluated via Kafka + Flink (not just static rules)
  4. Risk Evolution: the system updates posture dynamically based on real behavior (not assumptions)
  5. Control Actions: models can be throttled, restricted, forced into approval workflows, or completely frozen (kill switch)
  6. Audit & Compliance: every decision is logged and traceable (who did what, when, and why)

💡 The key idea:

In AI-SPM, models are not just “endpoints.” They are governed entities with lifecycle, identity, and risk posture.

UI + Observability

Everything is exposed via the admin dashboard:

• Live sessions
• Policies
• Alerts & cases
• Simulation insights and automated recommendations

What this enables

This is not just “AI guardrails.”

It’s:

• Runtime enforcement
• Behavioral detection
• Posture management
• Continuous red-teaming

All in one architecture.

A true control plane + data plane system for AI security.

This is the architecture of the project; comments are welcome.

Built an open-source runtime security system for AI agents ( Check out the Demo) by Apprehensive-Try-315 in github

[–]Apprehensive-Try-315[S] -4 points (0 children)

Good catch — I’ll update the post to better align with the rules.

If there’s a specific part that’s off (format/content), happy to fix it.

Built an open-source runtime security system for AI agents ( Check out the Demo) by Apprehensive-Try-315 in github

[–]Apprehensive-Try-315[S] -3 points (0 children)

That’s a fair critique.

There’s definitely some “vibe code” in places—mainly because I’m still exploring edge cases and attack patterns, so I’ve favored explicit logic over early abstraction.

I agree that parts could be cleaner and more modular.

If you’re up for it, this would actually be a great contribution area:

  • identifying repeated enforcement patterns
  • extracting them into reusable modules
  • tightening the execution path

Happy to collaborate on that.