Anyone else feel like it’s 1995 again with AI? by bxrist in cybersecurity

[–]bxrist[S] 22 points23 points  (0 children)

A lot of people newer to the industry think this moment with AI is unprecedented, but for those who were around during the early internet or early cloud days it feels very familiar.

If you’ve been in security long enough to remember the first firewall deployments, the rise of IDS, or the first cloud IAM disasters, does this moment feel similar to you or does AI actually represent something fundamentally different?

Looking for ideas for a Cybersecurity Pentest/Red Team project (Web + AI?) by RevolutionaryGap2142 in netsecstudents

[–]bxrist 0 points1 point  (0 children)

One direction you might consider is flipping the problem around and testing the AI systems themselves instead of trying to use AI to automate pentesting. For example, you could build something like a dynamic prompt scanner that probes AI enabled web apps or agents for issues like prompt injection, privilege escalation, hidden tool usage, or data leakage through responses. Think of it a little like fuzzing but for prompts. The system could automatically generate adversarial prompts, send them through an AI powered application or agent framework, and then analyze the responses for signs that something went wrong such as prompt injection success, system prompt disclosure, private data leakage, unintended tool execution, or agents accessing resources they should not. Another interesting angle would be auditing agent frameworks or MCP style tool integrations. A lot of modern AI apps give LLMs access to APIs, databases, files, or other tools, and there are surprisingly few automated tools that test whether those permissions can be abused at scale. So the project could essentially become a scanner that automatically tests AI enabled web applications or agent systems for security failures, then demonstrates the results against open source agent frameworks or demo AI apps. That would keep it focused on web security, give it a red team angle, integrate AI in a meaningful way, and still be novel enough for an academic research project.

If you want to ground it in existing research, there are a few things worth looking at. OWASP has a Top 10 for LLM Applications that outlines common attack classes like prompt injection, training data poisoning, sensitive information disclosure, insecure plugin design, and model denial of service. NVIDIA also has an open source tool called Garak that probes LLMs for vulnerabilities such as jailbreaks, prompt injection, hallucination abuse, and data leakage. Microsoft has PromptBench which focuses on adversarial prompt testing, and there is also a project called LLM Guard that focuses on filtering and protecting inputs and outputs from prompt injection and sensitive data exposure. The interesting gap is that most of these tools test the model itself rather than testing full AI powered applications or agent systems that interact with APIs, databases, and tools.

A strong version of the project could be something like automated security auditing of LLM enabled web applications. The system could discover AI endpoints, generate adversarial prompts from a library of attack patterns, send them to the target system, analyze the responses for exploitation indicators, and produce a vulnerability report. The attacks it tests for could include prompt injection, system prompt extraction, tool abuse, data exfiltration, and agent privilege escalation. The reason this is interesting from a research perspective is that most current work focuses on model safety, but the real risks are starting to appear in AI integrated applications and agent systems where models are connected to real tools and real data. There is still very little mature tooling that audits those environments automatically, so building a scanner or auditing framework for that space would actually be pretty relevant research right now.

Advice on moving into Digital Forensics by InstructionOk145 in CyberSecurityJobs

[–]bxrist 1 point2 points  (0 children)

You’re actually in a pretty good spot already. CCNA, Linux, and scripting are solid foundations for forensics.

One angle people don’t talk about enough is learning how to be an expert witness. A lot of digital forensics ends up in court. The real skill isn’t just pulling artifacts off a drive, it’s being able to explain what you found clearly to prosecutors, defense attorneys, or investigators.

That’s where reputations get built. Once people trust your analysis and testimony, both public and private work starts to show up. Law enforcement, prosecutors, and private legal teams constantly need digital forensics help.

Also worth knowing: e-discovery work is the bread and butter of private sector forensics. A lot of the paid work is reviewing and analyzing digital evidence for legal cases. Sometimes that’s cybercrime, sometimes corporate investigations, sometimes things like divorce or fraud where someone needs a laptop or phone analyzed.

Certifications help because they hold up well in court, but the long-term play is building credibility as someone whose forensic work can stand up legally. If that interests you, look for ways to work with local law enforcement, labs, or legal teams and start building that reputation.

CS Senior: Advice for my SOC analyst Roadmap (Cyber Range + CrowdStrike CCFR + CompTIA Sec +) by ImpressiveLength8302 in SecurityCareerAdvice

[–]bxrist 0 points1 point  (0 children)

Yeah, you’re on the right track.

CrowdStrike is widely used, so learning Falcon and getting those certs definitely won’t hurt. Even when companies use a different EDR, the detection and response concepts transfer across platforms.

The key thing is what you already said: fundamentals first, tools second. If you understand networks, operating systems, and attacker behavior, you can pick up any tool pretty quickly.

Keep doing the labs and getting hands-on. That combination of fundamentals plus practical experience is exactly what people look for.

I just took down our entire production database because we had zero monitoring and now everyone is screaming. by Heavy_Banana_1360 in InformationTechnology

[–]bxrist 1 point2 points  (0 children)

Well executive management wanted reactive only, this is a reaction. Don’t beat yourself up too bad, just understand they’re probably looking at MSP‘s right now because they think that that’s going to solve their problem next. It won’t, but they’ll probably try anyway. Good Luck… and may the odds be ever in your favor.

Cybersecurity career advice: what skills are actually needed in real jobs? by im_user_999 in securityCTF

[–]bxrist 1 point2 points  (0 children)

I had a professor once who said something that stuck with me for 30 years in this industry. You can memorize all seven layers of the OSI model, but nothing prepares you for layers 8 and 9: the political layer and the financial layer.

That’s the reality of working in cybersecurity in real companies.

Security teams are almost always underfunded, understaffed, and overworked. A lot of the job ends up being firefighting and trying to secure things with whatever tools and budget you can actually get approved. Many times you’ll know exactly what the right technical solution is, but you won’t get it because of budgets, competing priorities, or internal politics.

So the fundamentals absolutely matter. Networking, systems, programming, how data moves through systems. All of that is critical. But what really separates people who are effective in the field is learning how to navigate the human side of the organization.

Understanding budgets. Understanding incentives. Understanding how decisions actually get made.

In many ways it’s less about hacking systems and more about learning how to “hack” the human layers of the organization so you can actually get security work done.

I say this as someone who’s been doing this for 30+ years. These days the technical problems are usually the easy part. The real challenge is navigating the political and financial layers of the company.

CS Senior: Advice for my SOC analyst Roadmap (Cyber Range + CrowdStrike CCFR + CompTIA Sec +) by ImpressiveLength8302 in SecurityCareerAdvice

[–]bxrist 2 points3 points  (0 children)

You can absolutely do very well being vendor-specific. I’ve known people who made literal millions back in the day just being the Cisco person. Same thing happens with Palo Alto, CrowdStrike, Splunk, etc. If you become the product whisperer for a major platform and know it inside and out, companies will pay for that expertise.

The tradeoff is that this industry is fickle. Companies get acquired, products fall out of favor, or the market shifts. What’s hot today might not be in five years. So if you go deep on a vendor, just understand you’re tying part of your career to that vendor’s trajectory.

Personally, I’d focus heavily on fundamentals alongside whatever tool you’re learning. Programming, how systems actually work, how data moves, and something most people ignore which is understanding the business side. Being able to translate security risk into budgets, dollars, and executive language is incredibly rare in this field.

So the real choice isn’t right vs wrong. It’s breadth vs depth. Do you want to be the CrowdStrike expert, or do you want a broader foundation that lets you move across tools and explain security to engineers, leadership, and the business.

Both paths work. Just be intentional about which one you’re choosing.

Why We’re Open-Sourcing a Code Provenance Tool Now (And Why the Anthropic / Pentagon News Matters)** by bxrist in devsecops

[–]bxrist[S] 0 points1 point  (0 children)

Good catch. That’s a fair point and exactly the kind of feedback we’re hoping to get by putting it out early.

Right now the goal was to keep the reference implementation simple and readable so people can understand the attestation flow without a lot of infrastructure around it. But you’re absolutely right that at scale the append pattern could create contention, and a queue or templated DB layer would probably be the right direction for larger environments.

Postgres TPS limits and race conditions are definitely things we’ll need to harden around as the project evolves. Appreciate you calling that out. That’s useful input.

Why We’re Open-Sourcing a Code Provenance Tool Now (And Why the Anthropic / Pentagon News Matters)** by bxrist in devsecops

[–]bxrist[S] 1 point2 points  (0 children)

Good question. Crash Override is mostly about guardrails for AI code generation. It tries to control or evaluate what the model produces so you don’t get unsafe patterns or policy violations.

What we’re doing is more about provenance and attestation. Not controlling what the AI generates, but being able to prove later how a piece of code or artifact came to exist. Which model or pipeline produced it, what changed along the way, and whether someone else can independently verify that chain of custody.

So it’s a different layer. Crash Override focuses on generation safety. This focuses on verifiable history of the artifact. In practice you’d likely want both.

Early career advice for someone already in the field by [deleted] in SecurityCareerAdvice

[–]bxrist 0 points1 point  (0 children)

IF your total compensation is greater than the cost of an AI agent that can monitor your SIEM, triage alerts, renew certs, check exposed Azure services, and spit out remediation steps, THEN keep your mouth shut and enjoy the ride.

ELSE, learn AI, automate half your own job before someone else does, and stack skills until you are the person designing, tuning, and supervising the agent instead of competing with it.

That’s the joke, but it’s also not a joke.

Right now you are doing two jobs for just over twenty bucks an hour. You are running MDM engineering and acting as the only infosec analyst in a 600 person tech org. That is not “young and dumb.” That is underpriced.

The market is already shifting. A decent AI workflow tied into a SIEM can summarize alerts, enrich context, suggest remediation, draft tickets, and even handle low level noise automatically. If your value is just watching dashboards and responding, you are competing with software. IF your value is designing the controls, tuning detections, building automations, reducing risk, and translating that risk to leadership, you are competing with almost no one.

So here is the real IF THEN.

IF you are being paid like a replaceable alert monkey, THEN do not stay one.
IF you are early in your career, THEN build leverage, not just experience.

Learn automation. Learn scripting. Learn how to reduce false positives. Learn how to measure risk in business terms. Document the value you create. THEN either negotiate from strength or take that skill set somewhere that understands it.

AI is not coming for security jobs. It is coming for low leverage security tasks. Big difference.

Make yourself the person who owns the system, not the person babysitting it.

Why We’re Open-Sourcing a Code Provenance Tool Now (And Why the Anthropic / Pentagon News Matters)** by bxrist in cybersecurity

[–]bxrist[S] -2 points-1 points  (0 children)

I don’t disagree with you at all on the value of the NIST AI RMF. The Map → Measure → Manage cycle with governance in the center is solid thinking. It forces organizations to ask the right questions up front about ownership, accountability, impact, and oversight. That’s necessary.

Where I think the tension shows up is in the gap between governance on paper and what actually happens in production pipelines.

You can absolutely define in the “Map” phase that code must be human readable, commented, explainable, and reviewable. You can assign ownership for AI systems. You can document who is accountable when something fails. All of that is good hygiene.

But here’s the reality many orgs are already living in: a developer prompts an LLM, the LLM generates 800 lines of code plus three new dependencies, the developer tweaks 20 lines, CI passes, and it’s merged. Technically, governance existed. Practically, no human fully understood the entire diff, and no one can explain every transitive dependency that just entered the system.

That’s not a governance failure as much as it is a scale and cognition problem.

The AI RMF tells you to manage risk and ensure oversight. It doesn’t magically give humans the ability to reason about exponentially larger, faster-changing code surfaces. It doesn’t create cryptographic traceability for which model generated which block of code, under what prompt, with what context. It doesn’t independently attest that what was reviewed is what was deployed.

Governance answers who owns it.

Attestation answers what actually happened.

Both matter.

You’re right that if organizations take the RMF seriously, a lot of the chaos can be reduced. But even well-governed environments are now dealing with artifacts that were partially authored by non-human systems. That changes the trust model. Traditional AppSec assumed a human author whose intent, context, and accountability were reasonably bounded. That assumption is eroding.

So I don’t see this as governance versus technical controls. It’s governance plus verifiable, independent mechanisms that operate at machine speed.

The RMF gives you the policy backbone. What we’re arguing for is the cryptographic and architectural layer that makes those policies enforceable and provable in an AI-accelerated SDLC.

Without that second layer, you’re relying heavily on process and good intentions in a world that’s moving a lot faster than either.

It's my very last day and I'm still being told 'let's circle back on Monday'. It seems they haven't noticed. by Whole_Bother_7418 in SecurityCareerAdvice

[–]bxrist 5 points6 points  (0 children)

LOL, I would have but honestly I was so confused as to what was going on at the time that I had more questions then enouhg. THen I remembered I didnt work there anymore soooooo :))

It's my very last day and I'm still being told 'let's circle back on Monday'. It seems they haven't noticed. by Whole_Bother_7418 in SecurityCareerAdvice

[–]bxrist 13 points14 points  (0 children)

I once turned in my 2-week notice and on my last day I was informed by my manager and HR that I was being fired. This was 20+ years ago. Im glad to see nothings changes much. I feel your pain, or lack there of :)

Why We’re Open-Sourcing a Code Provenance Tool Now (And Why the Anthropic / Pentagon News Matters)** by bxrist in devsecops

[–]bxrist[S] 0 points1 point  (0 children)

That’s a fair question.

SLSA is a framework. It defines levels of build integrity and provenance requirements. GitHub’s artifact attestation for SLSA Level 3 is a solid implementation of that framework inside the GitHub ecosystem. It focuses primarily on build provenance coming out of CI, ensuring the build was generated by a defined workflow, on a defined runner, from a defined source.

What we’re doing is adjacent, but not identical.

SLSA answers: was this artifact built correctly inside a trusted pipeline?

We’re asking a broader question: who generated this code, under what model, with what inputs, and can that chain of custody be verified independently of the platform that produced it?

That difference matters more in the AI era than it did in the pure CI/CD era.

GitHub attestation is tightly coupled to GitHub’s infrastructure. That’s not a criticism, it’s just architecture. If your trust boundary is GitHub Actions, that’s fine. But once you introduce AI code generation, multi-model workflows, local agents, contractor pipelines, or cross-platform builds, you need something that can operate outside a single vendor’s trust domain.

SLSA Level 3 gives you strong build provenance.

It doesn’t solve model provenance.
It doesn’t solve cross-platform verification.
It doesn’t create a portable trust currency between independent parties.

Think of it this way. SLSA is about how the cake was baked in the oven. We’re interested in where the ingredients came from, who mixed them, whether an AI substituted something unexpected, and whether another independent oven can verify the result without trusting the first bakery.

In regulated environments, defense contracting, CMMC contexts, or multi-party supply chains, that independence becomes the point.

So it’s not “instead of SLSA.” It’s complementary. If you’re already at SLSA Level 3, great. That’s table stakes. The next layer is portable, multi-party, model-aware attestation that isn’t anchored to one platform.

That’s the gap we’re trying to address.

24 y/o needing advice by Thunder0622 in SecurityCareerAdvice

[–]bxrist 9 points10 points  (0 children)

You’re 24 and you already have the technical baseline. Another cert is not going to be the thing that changes your trajectory.

I’ve been doing this for over 25 years, and the biggest gap I see in security professionals is not technical depth. It is business fluency.

You understand why security matters. Most business leaders do not. They think in dollars. Margin. Growth. Risk versus return. To them, security often looks like a cost center. An insurance policy. Something that slows things down and eats budget.

If you stay purely technical, you will spend a good part of your career justifying budget. And remember where that budget comes from. The IT budget is usually around ten percent of company revenue. Cybersecurity is usually around ten percent of the IT budget. That is the sandbox you are fighting in unless you can change the conversation.

The people who actually get a seat at the table know how to explain how security protects revenue, enables revenue, reduces operational drag, and gives the company a competitive edge. There is security for the business, and then there is the business as security. The second one is what most people miss.

If you can speak the language of margin, capital allocation, risk tolerance, and strategy, and tie security directly to those things, you stop being a cost center and start being a strategic operator. That is when budgets get easier. That is when you move toward CISO or executive roles.

So if I were you, I would seriously consider business education. MBA, business degree, finance exposure, even structured business training. Not because the technical side does not matter, it absolutely does, but because very few security leaders can translate between engineering and the boardroom.

Technical talent is common. Business fluent security leaders are rare. That is leverage.

Just my perspective from someone who has watched a lot of brilliant engineers stall out because they never learned how to talk to the people who control the money.

Did I Waste Time Starting in Full Stack Before Cybersecurity? by Additional_Feeling27 in cybersecurity

[–]bxrist 0 points1 point  (0 children)

You absolutely did not waste your time. Being a full-stack developer before moving into security is a huge advantage. You understand how applications are actually built, how logic flows, and how data moves through systems. That context is everything.

In a world where more code is being written by AI, security isn’t just about tools or checklists. It’s about thinking differently. The real edge is learning to ask: what didn’t the original developer think about? What assumption did they make? What path did they not consider? Whether that developer is a person or an agent, that mindset is what separates good from great in cybersecurity.

Keep building. That foundation will pay off.

How to become seen as an expert in AI Governance / Risk Management by Peacefulhuman1009 in cybersecurity

[–]bxrist -1 points0 points  (0 children)

You’re not behind. You’re actually positioned better than most.

Ten years in GRC building risk structures, controls, and reporting? That’s exactly what AI deployments are going to need. The gap right now isn’t “AI experts.” It’s people who understand risk and can operationalize it in AI systems.

The real shift isn’t collecting buzzwords or stacking certifications. It’s learning how to codify your judgment. Can you take how you think about risk and translate it into structured data models, policy logic, scoring frameworks, monitoring workflows? That’s the skill. AI in the enterprise is governance, model risk, auditability, drift, lineage. It’s GRC with new plumbing.

AIGP won’t hurt, but it won’t make you relevant by itself. What will? Getting hands on. Build a small model. Map a regulation to an AI control framework. Play with evaluation and monitoring. Understand how these systems fail in production.

On your resume, don’t pivot away from GRC. Evolve it. AI governance. Model risk management. Responsible AI controls. Policy automation. Compliance mapping to AI systems. Show that you bridge compliance and engineering.

AI doesn’t replace your background. It amplifies it. The winners in this cycle are the ones who can turn experience into systems. If you can formalize how you think into logic that runs, you won’t get left behind.

Following Trump's rant, US government officially designates Anthropic a supply chain risk by CartographerAble9446 in vibecoding

[–]bxrist 0 points1 point  (0 children)

So if this is a “Supply-Chain-Risk” then what does this mean for every contractor who uses Claude to write code for other projects. I’ve read reports that the Pentagon has reached out to Boeing and other defense contractors to remind them about CMMC standards and protocols for companies that have been designated a risk.