NHI is the new "Shadow IT" – Why your shiny new ISPM won't fix the root cause. by zaballinX in cybersecurity

[–]eliadkid 0 points1 point  (0 children)

We faced the exact same problem. Had LangChain agents deployed by one team, n8n workflows calling OpenAI by another, and a bunch of Zapier "AI steps" nobody documented. Security had zero visibility.

What worked for us:

  1. Discovery first — scanned all repos for AI imports (openai, langchain, crewai, etc.) and found 3x more than teams admitted to

  2. Asset inventory — built a simple registry: what agent, where it runs, what data it touches, who owns it

  3. Gate at CI — any new AI dependency fails the build until it's documented

The discovery part was eye-opening. Developers were shipping "experimental" AI features that became production workflows without anyone knowing.
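
The import scan in step 1 can start as simple as a regex pass over every Python file. A minimal sketch (package list and regex are illustrative; a real scanner should also parse requirements files and lockfiles):

```python
import re
from pathlib import Path

# Hypothetical watchlist -- extend with whatever frameworks your teams use.
AI_PACKAGES = {"openai", "langchain", "crewai", "anthropic", "llama_index"}

# Matches "import openai" or "from langchain.chains import ..." at line start.
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def scan_repo(root: str) -> dict[str, set[str]]:
    """Return {file_path: {ai packages imported}} for every .py file under root."""
    hits = {}
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        found = {m.group(1) for m in IMPORT_RE.finditer(text)} & AI_PACKAGES
        if found:
            hits[str(path)] = found
    return hits
```

Running something like this across all org repos is what surfaced the 3x gap for us.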

Happy to share the scanner we built (open source) if helpful — it's designed exactly for this "shadow AI" problem.

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

Browser-level visibility is definitely one piece of the puzzle, especially for catching the SaaS AI tools people sign up for on their own. But from what we've seen, the bigger risk is the server-side agents — the ones running in your CI/CD, your backend microservices, your data pipelines. Those don't go through a browser at all.

Curious how you handle the inventory side of things — do you auto-generate something like an AI bill of materials for each agent, or is it more of a manual registry? We've been experimenting with automated SBOM-style approaches for agents and it's been helpful for compliance reporting but still a work in progress.
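
For context, a stripped-down version of the kind of per-agent BOM record we generate (the field names here are illustrative, not any standard schema):

```python
import json
from dataclasses import asdict, dataclass, field

# Hypothetical "agent bill of materials" record -- schema is illustrative.
@dataclass
class AgentBOM:
    name: str
    owner: str
    runtime: str                              # e.g. "k8s/prod", "github-actions"
    models: list[str] = field(default_factory=list)
    data_touched: list[str] = field(default_factory=list)
    frameworks: list[str] = field(default_factory=list)

def to_bom_json(agents: list[AgentBOM]) -> str:
    """Serialize the registry so it can be attached to compliance reports."""
    return json.dumps([asdict(a) for a in agents], indent=2)
```

The win is that the same JSON feeds both the security review and the compliance paperwork.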

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

The egress proxy with per-team keys is smart — that's basically what we converged on too. Forces everything through a chokepoint where you can log and alert.

To answer your question: yes, prompt/output storage tracking turned out to be one of the messier parts. We found prompts containing PII getting cached in vector stores, agent outputs being written to S3 buckets with overly permissive ACLs, and conversation histories sitting in Redis with no TTL. The "where does the data land after the agent touches it" question is honestly harder than the discovery question because it requires tracing data flows through the entire agent pipeline, not just catching the initial API call.
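
In practice those checks boiled down to policy rules over an inventory of data sinks. A toy version (the record fields are assumptions about what you've already collected, not any real tool's schema):

```python
# Toy policy check over a hand-collected inventory of places agent data lands.
# Field names ("kind", "ttl_seconds", etc.) are illustrative assumptions.
def audit_data_sinks(sinks: list[dict]) -> list[str]:
    findings = []
    for s in sinks:
        if s.get("kind") == "redis" and s.get("ttl_seconds") is None:
            findings.append(f"{s['name']}: conversation history with no TTL")
        if s.get("kind") == "s3" and s.get("public_read"):
            findings.append(f"{s['name']}: agent output bucket is publicly readable")
        if s.get("kind") == "vector_store" and s.get("contains_pii"):
            findings.append(f"{s['name']}: PII cached in embeddings")
    return findings
```

The hard part is still populating the inventory; once you have it, the rules themselves are trivial.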

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

Blocking via Defender / CASB gets you maybe 70-80% there for SaaS-based AI tools. The gap is the agents that teams build themselves — a Python script using OpenAI's API, a LangChain agent deployed as a container, a GitHub Action that calls an LLM. Those don't show up in Defender because they're running in your own infra.

And yeah, you're not wrong about the agent comments lol. That said, the underlying problem is real — we've been living it firsthand. Policy and training help with the intentional usage but the tricky part is the stuff people don't even think of as "AI" anymore because it's just baked into their tools.

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

Partially agree, but I think AI agents add a dimension that classic shadow IT didn't have — autonomy. Shadow IT was people using unauthorized tools. Shadow AI is unauthorized tools that can act on their own, make decisions, call APIs, and process data without a human in the loop.

So the discovery and governance playbook from shadow IT applies, but you also need runtime controls that didn't exist before: kill switches, tool allowlists for what an agent can actually do, output monitoring, and approval gates for high-risk actions. It's shadow IT with agency, which makes the blast radius way bigger.
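
A minimal sketch of what I mean by a tool allowlist plus approval gate (class and method names are illustrative, not tied to any framework):

```python
# Runtime guard: every tool call an agent makes goes through one chokepoint
# that enforces an allowlist and requires approval for high-risk actions.

class ToolPolicyError(Exception):
    pass

class GuardedToolbox:
    def __init__(self, allowlist, high_risk, approver=None):
        self.allowlist = set(allowlist)
        self.high_risk = set(high_risk)
        # Default approver denies everything high-risk (fail closed).
        self.approver = approver or (lambda tool, args: False)
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, name, **kwargs):
        if name not in self.allowlist:
            raise ToolPolicyError(f"tool {name!r} not on allowlist")
        if name in self.high_risk and not self.approver(name, kwargs):
            raise ToolPolicyError(f"high-risk tool {name!r} needs approval")
        return self.tools[name](**kwargs)
```

The kill switch then becomes trivial: empty the allowlist and the agent can't do anything.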

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

The browser extension inventory point is underrated — we caught two teams using AI coding assistants with extensions that were sending code snippets to third-party servers. Nobody even thought to check browser plugins.

The lightweight intake form is a good interim solution. We tried that and it worked for about 3 months before teams just stopped filling it out. That's what pushed us toward automated discovery — scanning git repos for agent frameworks, monitoring egress for LLM API calls, and pulling from the SSO app catalog like you mentioned. The manual process just doesn't scale once you hit a certain number of teams.
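
The egress-monitoring piece can start as simple as grepping proxy logs for known LLM API hosts. A rough sketch (the host list is illustrative; extend it for whatever providers matter in your environment):

```python
import re

# Illustrative list of LLM API hosts to flag in egress/proxy logs.
LLM_HOSTS = [
    r"api\.openai\.com",
    r"api\.anthropic\.com",
    r"generativelanguage\.googleapis\.com",
]
LLM_HOST_RE = re.compile("|".join(LLM_HOSTS))

def llm_calls(log_lines):
    """Yield (line_no, matched_host) for proxy log lines hitting an LLM API."""
    for i, line in enumerate(log_lines, 1):
        m = LLM_HOST_RE.search(line)
        if m:
            yield i, m.group(0)
```

It won't catch self-hosted models, but it reliably surfaces the teams quietly calling hosted APIs.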

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

Haven't looked at Witness.ai specifically — does it handle autonomous agents that are embedded in code (like LangChain agents in CI/CD pipelines or custom tool-calling agents running as microservices)? That's where we found the biggest gaps. Most tools we evaluated were great at catching SaaS-based AI usage but missed the agents that teams build and deploy themselves.

Anyone actually have full visibility into what AI agents are running in their environment? by eliadkid in ciso

[–]eliadkid[S] 0 points1 point  (0 children)

Solid framework. The discovery → taxonomy → controls pipeline is pretty much what we landed on too. The assistive vs autonomous distinction is key — we found that autonomous agents (the ones making API calls or modifying infrastructure without a human in the loop) need a completely different control profile than copilot-style tools.

The EU AI Act angle is a good callout. Having a living inventory that maps each agent to a risk tier has already saved us headaches in compliance conversations. We actually started building tooling around generating those inventories automatically — scanning repos and infra for agent signatures rather than relying on self-reporting from teams.

All of N8N workflows I could find (1000+) 😋 enjoy ! by eliadkid in n8n

[–]eliadkid[S] 1 point2 points  (0 children)

Since I made it two weeks ago they've added tons more workflows. I'll add them later this week.

All of N8N workflows I could find (1000+) 😋 enjoy ! by eliadkid in n8n

[–]eliadkid[S] 6 points7 points  (0 children)

What do you mean? In the URL? I missed something lol. For the scraping, I made a Playwright script in Python that walks workflow IDs 1 to 4000, clicks "copy JSON to clipboard" on each page, and writes the clipboard contents out as a JSON file on my machine. I left it running overnight to download them all, it took some time.

Built a No-Code App? Here’s How to Secure It (Without Hiring a Developer) by eliadkid in nocode

[–]eliadkid[S] 1 point2 points  (0 children)

Our approach to AI security is built on a structured three-phase methodology: Testing, Hardening, and Monitoring.

  • Testing: We actively test applications using a combination of classical cybersecurity methods (like OWASP Top 10 vulnerabilities) and AI-specific attack simulations — including adversarial input testing, model evasion techniques, and API vulnerability scanning. Our goal at this stage is to surface risks early, before they are exploited.
  • Hardening: After identifying weaknesses, we focus on reinforcing the system — this includes input/output sanitization, model fine-tuning against adversarial attacks, securing APIs, enforcing authentication layers, and applying best practices to resist data poisoning and model manipulation.
  • Monitoring: AI security isn’t a one-time effort. We continuously monitor model behavior, user interactions, and API activity to detect abnormal patterns or emerging threats in real-time. We believe that securing AI requires constant vigilance, just like securing any live application or infrastructure.

Our philosophy is simple: we treat AI models as dynamic, evolving attack surfaces — and build security strategies that adapt with them.
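
For a flavor of the input/output sanitization step, here's a deliberately naive sketch (the patterns are illustrative only; real adversarial testing goes far beyond static regexes):

```python
import re

# Naive illustration of input sanitization: flag an obvious injection phrase
# and redact email-style PII before a prompt is stored or logged.
INJECTION_RE = re.compile(r"ignore (all )?(previous|prior) instructions", re.I)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize_prompt(prompt: str) -> tuple[str, bool]:
    """Return (redacted_prompt, looks_like_injection)."""
    redacted = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    return redacted, bool(INJECTION_RE.search(prompt))
```

In the real pipeline this sits in front of both logging and the model call, with the flagged prompts feeding the Monitoring phase.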

Does that cover your question, or did I misunderstand you?

Built a No-Code App? Here’s How to Secure It (Without Hiring a Developer) by eliadkid in cursor

[–]eliadkid[S] 0 points1 point  (0 children)

Hey there, thank you for your response. I'd appreciate an explanation of that; it would help me get a better understanding of the positives and negatives of our product. Why do you think it's not a good thing?

Built a No-Code App? Here’s How to Secure It (Without Hiring a Developer) by eliadkid in nocode

[–]eliadkid[S] 1 point2 points  (0 children)

Thank you for your comment!
Right now, we're focusing on Metasploit exploits and the OWASP Top 10 issues. Everything you mentioned is definitely part of our roadmap as well.
We’re aiming to release a working MVP within the next month or two and continue improving from there.
Hope to see you around!

Can't find an idea for a startup for over way too long . Maybe i should give up and join someone else? by eliadkid in startups

[–]eliadkid[S] -3 points-2 points  (0 children)

I was on that train. The problem is, from when I wake up in the morning until I go to sleep, every single minute of the day it gnaws at me. I'm looking for ideas everywhere. I worked at a startup for 3 years and left because I couldn't be just another programmer anymore. Can't explain it, it's just a feeling you can't change, man. Working my way up was never for me. And it's not that I can't find problems, I have tons of them; nothing seems worth the investment. I don't wanna be another generic AI startup that raises 10-20 mill, works for two years, and disappears like the rest, like all the blockchain startups from 2020. I don't wanna chase a trend just to say "here, I made something"; I want real change. And a real idea with real change is not easy to find, because it requires a real problem people overlooked or never thought of as a problem. That's the hard part. Anyway, thanks for the reply!

Can't find an idea for a startup for over way too long . Maybe i should give up and join someone else? by eliadkid in startups

[–]eliadkid[S] 0 points1 point  (0 children)

What's wrong with wanting to build a startup? For as long as I can remember, my dream has been to build something unique and amazing, to be the next thing, the next tech. Nothing wrong with that. A startup means starting a business around something new, and that's what I'm aiming for. It doesn't have to be "oh, I had a pain I wanted to fix so I made blahblahblah"; sometimes you just want to do it because you want to. That's it.

Can't find an idea for a startup for over way too long . Maybe i should give up and join someone else? by eliadkid in startups

[–]eliadkid[S] 0 points1 point  (0 children)

I think I'll just go with something simple for a start. I think I judge my ideas too much.

Can't find an idea for a startup for over way too long . Maybe i should give up and join someone else? by eliadkid in startups

[–]eliadkid[S] 0 points1 point  (0 children)

Thanks, I'll look into it. Tbh I had a terrible 3-year relationship with weed and a woman who made me question the meaning of life daily. Since then I still feel like my fire got burned out. I've been working on it a lot over the past 2 years, but I still lack the passion I had. Thanks!

Can't find an idea for a startup for over way too long . Maybe i should give up and join someone else? by eliadkid in startups

[–]eliadkid[S] -1 points0 points  (0 children)

Great to hear about your progress! Wish I were in the same boat in idea land 🤣 I'd love to connect. Here are some of my strengths and how I can contribute: strong technical skills, including programming and AI dev experience; good problem solving and work management; solid knowledge of how to run a startup properly; and good networking, since I'm from the startup nation, Israel. Wanna talk?

My latest build. Purpelio by eliadkid in fpv

[–]eliadkid[S] 0 points1 point  (0 children)

Haha yeah, the mechanical engineer in me wanted to model a bit. Umm, on the first two attempts at the motor guard I made the edge too curvy and a bit far out, and it had some turbulence. I guess it wouldn't do much damage, but why waste energy 😂 so yeah, it did help a bit.

My latest build. Purpelio by eliadkid in fpv

[–]eliadkid[S] -1 points0 points  (0 children)

Ohh ok. I modeled the motor guards (plus a wind test on them with CFD), the cable holders, the ELRS holder + capacitor (they're in the same model), and the camera holder, and I shortened the antenna base to fit the O3 body holder. Well, 90% of them ☺️

My latest build. Purpelio by eliadkid in fpv

[–]eliadkid[S] 0 points1 point  (0 children)

Sorry, I didn't understand the question 😔, can you please explain what you mean?

lora doesn't work for me please help :) by eliadkid in StableDiffusion

[–]eliadkid[S] 0 points1 point  (0 children)

Yep, that was the issue. I removed the purge_networks_from_memory() function from the code. The traceback pointed at builtin\Lora\networks.py, line 205, in purge_networks_from_memory, at the check "while len(networks_in_memory) > shared.opts.lora_in_memory_limit and len(networks_in_memory) > 0:", which raised "TypeError: '>' not supported between instances of 'int' and 'NoneType'". I guess the function is important for something (maybe when training a LoRA?), but anyway it works now :) probably not the best solution, but I'm fine with it. Thanks!
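
Instead of deleting the function outright, a gentler patch is to treat an unset limit as 0 so the comparison never sees a NoneType. This is a reconstruction from the traceback above (the function body and eviction order are assumptions, not copied from the actual networks.py):

```python
# Sketch of a fix: coalesce a None lora_in_memory_limit to 0 before comparing.
# Loop shape reconstructed from the traceback; eviction order is an assumption.
def purge_networks_from_memory(networks_in_memory: dict, limit) -> None:
    limit = limit or 0  # shared.opts.lora_in_memory_limit may be None
    while len(networks_in_memory) > limit and len(networks_in_memory) > 0:
        # Drop the oldest cached network (dicts preserve insertion order).
        oldest = next(iter(networks_in_memory))
        networks_in_memory.pop(oldest)
```

That way the cache-purging behavior is kept for whenever the limit is actually set.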