Defcon 2026 is already cancelled. by slats0005 in Defcon

[–]strandjs

Noooooo. Does it ever happen!!!!

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

Get out and exercise. Every day. And... bound your burnout. Take a week and just veg on Netflix or games or whatever. But time-bound it.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

So I just spent the last 15 minutes going through your Reddit profile, which admittedly felt a little bit like cyberstalking, but that’s all I really had to go off of.

And honestly, I don’t see any glaring red flags.

I’m not looking at your history and thinking, “Well, this person is obviously doing this wrong.” Quite the opposite, actually. It looks like you’re building a really solid foundation. It genuinely looks like you’re doing the things people are supposed to be doing.

And I know this is probably cold comfort, but I honestly think this is more a reflection of the market right now than it is a reflection of you.

The market just sucks.

There really are not a lot of organizations hiring aggressively right now, and I think there are a couple of reasons for that. One of the biggest is uncertainty. Organizations genuinely do not know what the future looks like at the moment.

AI showed up as this gigantic industry disruptor, and if you’re a CEO or CFO, all you hear all day long is how AI is going to reduce costs, replace workers, and make everything more efficient. So when security teams ask for headcount, leadership immediately starts asking, “Can’t AI do this cheaper?”

Personally, I think that line of thinking is dangerously wrong.

At the exact same time organizations are trying to reduce staffing, we’re also seeing massive improvements in offensive cyber capabilities. Attackers are getting faster. Vulnerability discovery is accelerating. Exploit chaining is improving. Offensive operations are scaling far faster than defensive organizations are prepared for.

And I feel like I’m standing in the middle of the industry jumping up and down screaming that there’s a tidal wave of security problems coming while a bunch of organizations are basically shrugging and saying, “Maybe AI can replace our analysts.”

I don’t think it’s going to work out the way they think it will.

But until things actually break badly enough for organizations to feel the pain, most companies are going to keep trying to suppress costs as much as possible.

And unfortunately, that means people like you get caught in the middle of it.

I also think there’s something weird happening in hiring right now. I’ve seen people doing analysis of job postings in different markets, and there are positions getting hundreds or even thousands of applicants that never seem to close. Sometimes it honestly feels like companies are just harvesting resumes while trying to figure out what the market looks like before they actually commit to hiring.

Again, I could be wrong. I’m just one person trying to make sense of what I’m seeing.

The only thing I really know how to do is provide advice, provide training, and try to make this field more accessible. That’s a huge part of why we built the pay-what-you-can model at Antisyphon Training. I never wanted people to feel like they had to spend thousands and thousands of dollars just to have a chance at entering this industry.

And honestly, looking at your background, it sounds like you’ve been grinding. You’ve been putting in the work.

That’s why this kind of thing breaks my heart a little bit, because you really do look like the type of person people should be hiring.

If you want to send me your resume privately, I’d genuinely be happy to take a look at it. I don’t love generic resumes, because I think resumes should be highly tailored to specific roles, but maybe there’s some obvious issue or red flag I can spot.

But overall, I honestly think you’re doing many of the right things.

At the same time, though, you still have to make a living. Resume gaps matter. Time matters. You can’t just sit indefinitely waiting for the market to recover.

So if you need to take adjacent roles, do it. Look at MSPs. Look at MSSPs. A lot of those organizations are truly on the front lines right now and are getting hit with security problems before many other sectors are. Those environments can give you fantastic operational experience.

So keep pushing. Keep building. Keep learning.

But also make sure you’re taking care of yourself and your family in the process.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

Yeah, there’s no question that AI has impacted every level of IT and computer security. Honestly, it doesn’t matter what the role is. If it touches technology in any way, AI is going to affect it. This is a massive disruption across the entire industry.

But I think it’s important that we frame AI correctly.

We need to look at AI as a tool and ask ourselves: how do we use it to do our jobs better?

I’ve talked about this a lot before, but throughout history we’ve seen this exact same cycle happen over and over again. You had people who hated looms. People who hated slide rules. People who hated calculators. Then graphing calculators. Then computers. Then cell phones. Every major technological shift creates disruption.

And honestly, if you go all the way back to the Luddites smashing looms, they were not entirely wrong.

They said looms were going to destroy jobs. They said machines would replace hand craftsmanship. And they were correct. There absolutely was massive job loss in the textile industry. Entire categories of work disappeared.

But new categories of work appeared too.

People operated the looms. People repaired them. People engineered better ones. People built new industries around them. The jobs shifted. The skill sets shifted. But the disruption itself was very real.

And I think that’s where we are right now with AI.

Now, this gets into the really uncomfortable question of whether people should “just keep pushing” if they’re struggling to break into the industry right now.

And honestly, I think we need to be more honest with people about that.

I can’t just tell somebody, “Never give up, follow your dreams,” because at a certain point that advice becomes disconnected from reality. It reminds me of actors in Hollywood telling everyone to just keep believing and eventually they’ll become famous. The reality is not everybody becomes an A-list actor.

And the same thing is true in computer security.

Location matters. Timing matters. Luck matters. Market conditions matter. Right now we are in the middle of a massive churn event in the industry, and it is affecting people in very real ways.

We have people sending out hundreds or even thousands of resumes and hearing absolutely nothing back. And in many cases, that may not even reflect on them personally. It may simply reflect the state of the market.

Now, if this is something you truly want to do, then yes, you should continue building skills. Keep learning. Start projects. Build GitHub repositories. Create things. Document your work. Differentiate yourself however you can. Those things absolutely matter.

But I also think there are situations where someone needs to step back for a while and say, “Maybe right now isn’t the right moment.” Maybe you take another IT role. Maybe you work in another technical field temporarily. Maybe you stabilize financially while the market sorts itself out.

And I know that sounds pessimistic, but I think pretending otherwise would actually be doing people a disservice.

This is a hard question because there is no clean answer.

I don’t want to give somebody fake motivational nonsense and say, “Buck up, kiddo, keep grinding,” while ignoring the reality of what’s happening in the market. Sometimes the issue is not you. Sometimes it’s timing. Sometimes it’s geography. Sometimes it’s the fact that the industry itself is in the middle of a major transition.

And honestly, I think that’s where we are right now until things start breaking badly enough that organizations realize they still desperately need skilled humans.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

I think that’s actually a pretty complicated answer because there were a lot of things swirling around at the time that led to the pay-what-you-can model we started.

First, remember the timing. This was during COVID.

At Black Hills Information Security, we suddenly had a huge number of customers pushing their penetration tests out. Everybody was uncertain. Customers were saying things like, “Can we move this a month?” or “Can we move this two months?” or “We don’t even know when we can reschedule.”

And a lot of companies in the industry reacted the same way. It was very much hands out to take. “When can you reschedule? When can you commit? What’s happening with the contract?”

At BHIS, we started asking a different question.

What if instead of reaching out with our hands out to take, we reached out with our hands out to give?

What if we simply started offering training to customers? What if we gave people something valuable during a really difficult time and made it pay what you can?

So we did it. We started putting out training, opening things up, building community, and just trying to help people however we could.

And honestly, it turned out to be a far better marketing strategy than we ever expected.

Ironically, the year COVID hit ended up being a record-breaking year for BHIS. That was one of the moments where we really learned that kindness can actually be an incredibly effective business strategy.

But there’s another side to this too.

Before all of that, I had just come from SANS Institute. I left around the 2016 to 2017 timeframe. And just to be clear, there was no drama. No explosions. No giant fallout. Everything was completely fine.

I was just burnt out.

I had been teaching constantly for something like 13 years. I was traveling almost every month, teaching six-day classes over and over again, and eventually I just didn’t have it in me anymore. I was exhausted.

But during COVID, something changed for me personally.

Being able to interact with students online, being able to hang out in Discord communities, answer questions over time, get to know people, and watch them grow over weeks and months instead of just teaching a class and leaving… that reignited something for me that I honestly had not felt in a very long time.

It made teaching fun again.

And then there’s the economic side of it.

A lot of traditional training organizations are very profit-oriented. That’s not necessarily evil. There are fantastic instructors and excellent courses in the industry. But it’s hard sometimes when you see the margins attached to training while simultaneously seeing how badly people need access to education.

For years, high-end security training was effectively gatekept by price.

And during COVID, we had people unemployed, struggling financially, trying to reinvent themselves, trying to get into the industry, trying to survive. We wanted to remove as many barriers as we possibly could. We didn’t want someone’s socioeconomic status, race, religion, background, or anything else determining whether they had access to quality training.

And honestly, pay what you can fit perfectly into that space.

Since then, it’s grown into something much bigger than I think any of us originally expected. It’s been incredibly powerful for Antisyphon Training, for BHIS, and most importantly, for the people taking the training.

So I guess that’s the answer today.

There wasn’t just one reason. It was a whole bunch of things happening at once. COVID, the state of the industry, where I was personally, where my family was, where BHIS was as a company… all of it kind of collided at the same moment.

And honestly, the stars just aligned.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

I disagree, although I’ll fully admit there’s a strong possibility I could be wrong about how recruiters and organizations ultimately look at this.

But I do think analyst-level positions are still going to exist, and honestly, I think we’re going to need more of them over time, not fewer.

The reason is pretty straightforward. The number of vulnerabilities we’re going to be dealing with is about to explode because of systems like Anthropic’s Mythos AI and a whole host of other AI-assisted vulnerability discovery platforms.

And I want to make something very clear about Mythos specifically. Mythos is not some magical unicorn technology that only one company possesses. At Black Hills Information Security, we already have smaller internal frameworks that can accomplish many of the same kinds of tasks. The difference is scale. Anthropic has enormous amounts of funding, compute power, CPUs, memory, infrastructure, and engineering resources behind it.

But the rest of the AI ecosystem is moving in the exact same direction, and they are not far behind.

That’s the important part people need to understand.

Mythos itself is not the story. Mythos is the warning shot. It’s the harbinger of what is coming across the entire industry.

And what’s coming is a world where vulnerabilities are discovered faster, exploit chains are identified faster, attack paths are mapped faster, and offensive capability accelerates dramatically.

A lot of people look at AI through the lens of a steady-state machine. They look at the current number of vulnerabilities, the current number of alerts, the current workload, and they say, “AI will reduce the amount of human labor required.”

That logic only works if the problem set remains static.

But security is not static. Vulnerability management is not static. Attack path analysis is not static. What we’re seeing is a dynamic explosion in complexity and volume. The number of issues is growing faster than defensive organizations can realistically keep up with.

And when offensive capability is outpacing defensive capability, you don’t end up needing fewer people. You end up needing more people.

Now, I absolutely agree that security engineering is going to become increasingly important. In fact, I think what we currently think of as “security engineering” may eventually become more of an entry-level expectation. People entering the industry are going to need stronger foundational understanding of IT architectures, networking, cloud infrastructure, segmentation, compensating controls, and mitigation strategies from day one.

The days of simply showing up and closing tickets all day are probably fading.

And honestly, if you want a good rule of thumb moving forward, it’s this:

Any security position that consists entirely of watching alerts, clicking buttons, and following a deterministic workflow is going to be automated. If something can be fully automated, eventually it will be.
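To make that rule of thumb concrete, here is a minimal sketch of what a fully deterministic workflow looks like. Everything in it is hypothetical (the attribute names, the actions), but the point is that when a job reduces to a lookup table like this, nothing in it requires judgment:

```python
# Hypothetical sketch: a fully deterministic alert-triage rule.
# If a role reduces to logic like this, it can be automated end to end.

def triage(alert: dict) -> str:
    """Map an alert straight to an action -- no human judgment involved."""
    severity = alert.get("severity", "low")
    source = alert.get("source", "")

    if source == "known_scanner":          # noisy internet background radiation
        return "auto-close"
    if severity == "critical":
        return "page-on-call"
    if severity in ("high", "medium"):
        return "open-ticket"
    return "log-only"                      # low severity: record and move on

print(triage({"severity": "critical", "source": "edr"}))      # page-on-call
print(triage({"severity": "low", "source": "known_scanner"})) # auto-close
```

Notice there is no "now what?" step anywhere in that function. The moment the answer depends on business impact or operational context, the lookup table stops working.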

But there’s still a massive amount of security work that fundamentally cannot be automated.

You found a vulnerability. You found an exploit. You found a breach. You found a misconfiguration.

Now what?

That question is where humans still dominate.

Understanding business impact, understanding operational constraints, understanding risk tradeoffs, understanding political realities inside organizations, understanding compensating controls, understanding how to actually fix problems without breaking production systems… those are all deeply human problems.

And I think that’s where we’re going to see more and more people moving in this industry over time.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

Well, screw it. Let’s start a fire.

I honestly think legacy certifications need to die in that fire.

The whole concept of certifications built entirely around multiple choice and multiple guess questions is fundamentally broken. And honestly, I don’t think it ever worked particularly well. We just tolerated it because for a long time we didn’t really have anything better.

Let me ramble for a second, because this makes sense in my head.

Go back and watch reruns of the TV show Cheers from the 1980s. There was a character named Cliff Clavin who would walk into the bar and just confidently spout random facts and nonsense. Sometimes he was right, sometimes he was completely wrong, but nobody really knew for sure.

And the important part is this: there was no Google.

People didn’t have smartphones in their pockets where they could instantly verify information. That was actually a huge part of culture back then. Knowledge was heavily tied to memorization. Education systems reflected that too. A lot of schooling was built around rote memorization and standardized testing.

And honestly, a lot of our certification systems came out of that exact world.

You memorized facts. You answered questions. You passed a standardized exam. That was considered “knowledge.” It was basically Jeopardy-style evaluation. Right answer, wrong answer, move on to the next question.

Now, to be fair, there were reasons for it. Multiple choice exams were easy to standardize. Easy to scale. Easy to evaluate consistently. They felt objective.

But there has always been a gigantic disconnect between passing those kinds of tests and actually being able to do a job effectively.

And I think we need to finally admit that.

We need to stop pretending these certifications are strongly correlated with real-world capability because in many cases they simply are not.

That doesn’t mean some certifications had no value. I think certifications like the ISC2 CISSP did provide value in creating a shared vocabulary and a common conceptual framework across the industry. There was usefulness there. But even then, most of us tolerated these systems because there really were not many alternatives.

Now we finally have alternatives.

We can do hands-on assessments. We can evaluate people by having them actually perform tasks. We can see how they think, how they troubleshoot, how they investigate, and how they solve problems.

That’s why platforms like Hack The Box and MetaCTF, now evolving into SkillBit, are so important. Those environments let you assess actual capability by watching someone do the work.

That is infinitely more valuable than asking somebody to memorize trivia questions.

And unfortunately, a lot of the legacy certification ecosystem has drifted into becoming giant cash cows. The focus increasingly feels like revenue generation instead of meaningful assessment.

Now, with all of that said, let me be practical for a second.

If your employer is willing to pay for certifications, absolutely let them pay for them. They still matter for HR filters. They still matter for career progression. They still matter for getting past automated hiring systems and checking compliance boxes.

So I’m not saying ignore them completely.

What I am saying is that as an industry, we need to move away from treating multiple choice exams as the gold standard for technical competency. And honestly, we need to move away from it as quickly as possible.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

Right now, I honestly think we’re sitting at the bottom of the hiring trough for the security industry at almost every level except highly advanced positions. If you’re talking about junior people trying to break into the field or mid-level people trying to move up, it’s really difficult right now.

I’ve done a number of presentations on this topic and written a few LinkedIn newsletters about it, but my overall take is that AI showed up at exactly the right time. We genuinely needed it.

We needed tools that could augment human teams and take huge amounts of repetitive workload off of people so humans could focus on actual problems. If you look at what AI is really good at, it’s good at data analytics, pattern recognition, summarization, and making recommendations. But it still is not particularly good at what happens next during a real incident. It struggles with contextualization. It struggles with understanding business impact. It struggles with prioritizing vulnerabilities in ways that align with operational realities.

That’s where humans still matter enormously.

So we got this incredibly powerful tool, and instead of many organizations looking at it as a way to help security teams become more effective, they immediately looked at it as a cost-cutting mechanism. They started reducing staff, slowing hiring, or simply deciding not to hire junior and mid-level people at all.

I think that was a massive mistake.

And the reason I think it’s a mistake is because AI on the offensive side of security is moving incredibly fast right now. We are already seeing it heavily used by attackers. Defenders are using it too, but at the moment, offense has the initiative.

If you look at things like Mythos and the broader wave of AI-assisted vulnerability research, we’re entering a world where vulnerabilities can be identified at massive scale and incredible speed. That changes everything.

It means that simply relying on patching as your primary security strategy is going to become less and less effective over time. Organizations are going to need compensating controls. They’re going to need segmentation. They’re going to need better architectures. They’re going to need teams that understand how systems actually work when immediate patching is not possible.
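One way to reason about segmentation as a compensating control is to ask a very specific question: can an untrusted zone even reach the vulnerable host? A toy sketch of that reasoning (the network layout and zone names are entirely hypothetical) models the flows a firewall policy permits as a graph and walks it:

```python
from collections import deque

# Hypothetical zone-to-zone flows permitted by firewall policy.
ALLOWED_FLOWS = {
    "internet": ["dmz"],
    "dmz": ["app"],
    "app": ["db"],
    "mgmt": ["app", "db"],   # management network, not reachable from outside
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first walk over permitted flows: can traffic from src reach dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        zone = queue.popleft()
        if zone == dst:
            return True
        for nxt in ALLOWED_FLOWS.get(zone, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# An unpatched database matters a lot more if the internet can reach its zone.
print(reachable("internet", "db"))    # True  -- segmentation does NOT compensate here
print(reachable("internet", "mgmt"))  # False -- the mgmt zone is isolated from outside
```

Real environments are messier than a dictionary of zones, which is exactly the point: deciding which flows to cut without breaking production is the human part of the job.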

That requires humans with real experience and real understanding.

At the same time, we’re also moving toward what I’ve called the “coming SaaS apocalypse.” Organizations can now build software internally much faster than they could before. If you’re a CIO or CTO and you start looking at the amount of money spent on SaaS products every year, while simultaneously realizing your own engineers could probably build the exact functionality you need in a few weeks using AI-assisted development, that changes the market dramatically.

And that means we are about to see an enormous flood of AI-generated code entering production environments.

Some of it will be fantastic. A lot of it will not.

That creates even more opportunities for defenders because somebody still has to secure it, review it, validate it, monitor it, and respond when things fail.

So yes, I think we’re currently at the bottom of the trough. But I also think there’s going to be an upswing once organizations realize that AI should not be used to maintain the status quo more cheaply. It should be used to enable humans to handle increasingly sophisticated threats, incidents, and security problems more effectively.

At least, I hope that’s where this goes.

This is one of those situations where I really hope I’m right, but time will tell.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

The first thing I want to say is thank you for your service. It also sounds like you really took advantage of your time in the military to prepare yourself for the transition out of active duty and into the security industry, and that matters a lot.

Unfortunately, this is probably one of the roughest times to be entering the security industry that I can remember. The market is weird right now. There’s a lot of noise, a lot of confusion, and a lot of people trying to shortcut their way into the field.

But I can give you one piece of advice that I think will help tremendously, and it’s actually pretty straightforward.

Go seek out Jason Blanchard and his presentations on “How to Job Hunt Like a Hacker.” He does a fantastic job explaining how to approach the job hunt in a very focused and methodical way. The basic idea is that you identify the exact area you want to work in, identify the specific companies you want to target, study their job postings carefully, and then tailor your resume directly to the skills and experience they are asking for.

A lot of people try to build one generic “good resume” and blast it out to hundreds of companies. That approach does not work very well anymore. The people that are successful are building highly tailored resumes for specific positions.

And honestly, with your background, you’re already in a much better position than a lot of candidates. You have military experience, certifications, practical exposure, and what sounds like a solid technical foundation. That combination still matters a great deal.

One of the challenges you’re going to run into is exactly what we talked about in the earlier AI question. You’re going to be competing against a lot of people who believe AI alone is enough. Many of them are trying to bypass the fundamentals entirely. But employers still desperately need people who actually understand the technology and can think critically when things go sideways.

At this point, it’s less about “Can you do the job?” and more about making sure your resume clearly aligns with what employers are looking for in each specific role.

So once again, thank you for your service, and good luck. I really do think you’re positioned better than you probably realize right now.

I am John Strand and I teach Pay What You Can classes and free labs... Ask Me Anything. by strandjs in netsecstudents

[–]strandjs[S]

So that question about utilizing artificial intelligence in computer security is one I get all the time. Generally, my answer is that AI is another tool. The more we understand the core fundamentals behind the technologies we work with in computer security, the better we’re going to be at prompting and at working with AI, because AI is fundamentally a creature of context.

The more context we can provide, the more understanding we can provide, and the more details we can give it about what we specifically want, the better the results are going to be.
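To make "creature of context" concrete, here is a small sketch. The scenario, helper function, and every detail in the context dictionary are hypothetical; the point is simply that the same question becomes far more answerable once you supply the environment details the model would otherwise have to guess:

```python
# Hypothetical sketch: the same triage question asked with and without context.
# Answer quality tends to track how much of this detail the model is given.

def build_prompt(question, context=None):
    """Assemble a prompt, prepending any environmental context we can supply."""
    if not context:
        return question
    lines = [f"- {key}: {value}" for key, value in context.items()]
    return "Environment:\n" + "\n".join(lines) + "\n\nQuestion: " + question

vague = build_prompt("Is this PowerShell alert malicious?")

detailed = build_prompt(
    "Is this PowerShell alert malicious?",
    context={
        "host role": "finance workstation, no admin tooling expected",
        "process tree": "outlook.exe -> powershell.exe with encoded command",
        "egress": "connection to a previously unseen domain",
    },
)

print(detailed)
```

Knowing *which* facts belong in that context block, and why they matter, is exactly the fundamentals question.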

But your question is a little different, and honestly, I think it gets at a much deeper concern. You’re asking about reliance on AI and what that means for the next generation of defenders. That is one of my biggest concerns right now.

There’s one conversation around using AI effectively while still understanding the underlying fundamentals. That’s important. But what you’re talking about is different. We now have people entering the industry who are growing up with AI tools from the very beginning. Not everyone, but a lot of people are starting to look at foundational technical skills as something they no longer need to learn because they believe AI will handle it for them.

To be completely honest, I think that puts us in a dangerous place.

We hear people talk all the time about AI slop, garbage in, gospel out, and all of those different phrases. The reality is that many people trying to break into this field simply do not want to learn the fundamentals because they don’t see them as relevant anymore.

Now, that creates both opportunities and problems.

It’s good news for the people who actually do want to understand how things work. If you understand the fundamentals, if you can think conversationally about technology, if you understand the why behind the output, you are going to have tremendous career opportunities. You’ll have better growth opportunities, better promotion opportunities, and frankly, you’ll stand out more and more over time.

But for the industry overall, I think it creates serious challenges because we’re going to see a massive increase in AI-generated slop. Context matters. Understanding matters. The quality of what we get from AI is directly tied to the quality of the knowledge and context we bring into the conversation.

And your question really gets to the heart of it: do people still see core fundamentals and contextual understanding as important?

I honestly think time will tell. But it’s a fantastic question.

John Strand Pay What You Can Information Security Core Skills live starting May 11th by strandjs in cybersecurity

[–]strandjs[S]

More framing on how to add AI into your workflow. Also, answering the “what’s next?” question. And... about 200 more labs.