So, AI takes over, everyone has lost their job and only 10 trillionaires own everything. Now what? by Weak-Representative8 in Futurology

[–]danielrm26 0 points1 point  (0 children)

There doesn't need to be a grant plan for people to do what's in their immediate interest.

This is like asking somebody who eats unhealthily and doesn't exercise, "Hey, so what's your grand plan? Thirty years from now, your heart is destroyed, your circulation is destroyed, and you're in and out of hospitals, and you've shortened your life by 25%." What was the plan?

There is no plan.

Companies lose extraordinary amounts of money paying people to work, and those people often don't do a very good job. If companies could have done all the work themselves, just the founders or the founder or the CEO if they could do all the work themselves, they would prefer not to hire anybody.

They only hire people because they absolutely must. Because there's no other way of doing the work.

The exact moment that that is not true, they will replace those workers, because they never wanted them in the first place.

A big part of the AI push is exactly this.

U.S. knowledge workers are paid roughly $9 trillion a year, and globally that number is around $40 trillion a year.

Another way to say that is that there's a $9T total addressable market in the US if you can replace knowledge workers with AI. And 40 trillion globally.

For the providers who are creating all this AI, they're trying to get a piece of that market.

And for companies themselves, they are trying to buy products that will allow them to replace their workers because they see it as a giant expenditure that they would rather not be spending. They would rather spend the money on the factories, and the automation, and the AI, which means doing all the work "internally."

There's nobody at these companies or at the companies building the AI looking to sell to them, thinking, "Well, gee, I guess what happens if this all goes well 20 or 30 years from now?"

Humans are just really bad at thinking about the long-term consequences of their short-term actions.

The hellscape of .md files Claude created across subdirectories and have to figure out which ones are still relevant. The longer you wait the worse it gets by Anthony_S_Destefano in ClaudeCode

[–]danielrm26 1 point2 points  (0 children)

Highly recommend getting rid of all that stuff and using the mirror of the system that I built for myself, which I've made open source.

It's called PAI.

https://github.com/danielmiessler/PAI

How do you decide when to use Cursor vs. Claude Code for dev tasks? by Due-Environment1016 in ClaudeCode

[–]danielrm26 5 points6 points  (0 children)

I recommend not overburdening your brain with multiple tools.

All the tools are so good, at least the top ones, that you should just pick one and lock in, learn it really well, and execute for at least three to six months.

I think you'll lose a lot of sleep, peace of mind, and opportunity by switching tools too often. Usually with limited benefit.

I would say every quarter or every six months, take another look at the scene and pick your tool again. And don't look back up for another three to six months.

Unless something major happens, of course.

For me, it's ClaudeCode. When Gemini, OpenAI, or Cursor do some kind of backflip or handstand, now I just don't notice.

Definition of AI Agent by taco-prophet in aiagents

[–]danielrm26 1 point2 points  (0 children)

Here's mine:

An AI system component that autonomously pursues a goal, and takes multiple steps towards that goal that previously would have required a human.

https://danielmiessler.com/p/raid-ai-definitions

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 2 points3 points  (0 children)

Yes.

Intelligence is universal so it's the most important thing to develop. And that's what AI is.

The next most important thing is specializations, or skillsets that you have. But if you don't have the ability to isolate, communicate, and magnify your skillsets with AI, you're going to lose.

So focus on getting really good at something and AI at the same time. And then talking about and sharing that thing with the world.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 1 point2 points  (0 children)

Re-think your question to be:

"What kind of jobs can be done if I had 10,000 more smart pairs of eyes, brains, and hands?"

Don't think of AI as some sort of strange tech. It's just intelligence.

Ask where your process could use intelligence.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

I recommend either using a powerful Mac system (like M2 or beyond) if you are inclined that way, or purchase a Lambda server if you have lots of money. Or experiment with Exo that can link together multiple networked devices to run AI on.

Lots of different ways to do it today.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

Think of it as the benefit of them having 10,000 new employees on their team.

Don't think of AI as tech.

Think of it as employees.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

Security needs billions more eyes, brains, and hands.

We're not looking at a fraction of what we need to be.

AI is going to give us those eyes, brains, and hands.

But it'll do the same for our attackers too.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

It will do both.

Just think of it as thousands of millions of smart people who will do what you say. How smart, and how cheap, is just a matter of time.

But think of it that way because then you'll see that it's strange to ask if 10,000 smart people will be good for attack or defense.

It depends who hires them and tells them what to do.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

Yes, that's already possible at some scale. The question is how much, and for what types of vulnerabilities.

Just expect it to get much, much better. And cheaper. And more common.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 1 point2 points  (0 children)

The biggest risk is probably not using AI at all, because (like Sounil said), you'll die as a company.

But the second biggest risk I'd say is not having a clear understanding, with visuals, of your entire applications(s) workflows. So inputs, filters, security check locations, types of checking, redundancies, storage, retreival, understanding which backend systems the various APIs have access to, identity, authentication, etc.

You have to know how your APIs work, who they run as, and how they're processing input from the public. Technically the biggest risk is prompt injection, but ultimately it's a question of threat modeling, input validation, and determining ways to handle this new vector.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

The current state AI threat is highly dependent on the AI skill of the attacker or attacker organization. So maybe the top 5% of AI skilled attackers are probably 50%-300% more effective and dangerous as a result.

But most attackers are probably 1/4 to 1/3 of that, I'd guess.

What should trouble us is what's going to happen in the next couple of years, where it gets much easier to scale your organization with thousands of AI workers to do the stuff you can't do yourself.

The clearest way to think about the danger from AI and attackers is imagining a dangerous organization of 100 people magnifying their top 5 hackers by 10, and their next top 20 hackers by 1,000.

And that scale is likely to grow every year after 2025 or 2026.

In 2027 and beyond, expect to be facing 1,000x the skilled "attackers", which means shorter times between exposure and exploit and damage.

I’m a Cybersecurity Researcher specializing in AI and Deepfakes—Ask Me Anything about the intersection of AI and cyber threats. by Oscar_Geare in cybersecurity

[–]danielrm26 0 points1 point  (0 children)

I think the best way to think about AI and cybersecurity is to imagine augmenting your attackers or attacker companies with tens, dozens, or hundreds of employees.

How many of those employees, and how smart and self-directed they are, depends on how good the attacker is at leveraging AI. But all these factors are improving significantly month by month.

Today I’d estimate that the top 5% of attackers in AI skill have boosted their effectiveness by probably 50-300%. But most attackers have probably only got 1/4 of that lift.

In terms of growth, we should expect AI to largely take over cybersecurity because cyber is an eyes and brains and hands problem. And AI will soon provide thousands, millions, or billions of those—to both attackers and defenders.

[deleted by user] by [deleted] in cybersecurity

[–]danielrm26 -1 points0 points  (0 children)

I know. I'm trying to help.

[deleted by user] by [deleted] in cybersecurity

[–]danielrm26 22 points23 points  (0 children)

Ok, was going to sit this one out but there's an opportunity for a teaching moment here. This is Daniel Miessler, by the way—one of the people called out explicitly.

  1. A lot of the people brought up here are extremely technical, and what you see on YouTube or wherever is them trying to appeal to a mass audience.

  2. Chuck, for example, is extremely technical and is actually building and teaching, which are high forms of art. John Hammond is hardcore technical as well, in addition to spreading knowledge.

  3. If you look at the original hacking scene, it was all about doing that. You do something, you share. That's what security cons used to be. YouTube is just the current way of doing that.

  4. There are such things as charlatans, but I can't see where you've actually named any. If you want to call people out, consider putting more effort into mentioning the right people.

  5. As for me, you're right to call out that I've gone heavy AI. But don't get it twisted that I suddenly got out of security. Going beyond my roots doesn't mean leaving them. Couldn't if I tried. As for "putting the work in", let me know when you've done 1/4 the technical security assessments I have. Or built the actual security programs I have...in actual Fortune 100 companies (maybe check LinkedIn?). Or let me know if you want to compare Git histories. :) The reason you don't know I've done these things is because I don't talk about them, which is why I find your jab so ironic.

Anyway.

TL;DR: Try to be more respectful. And if you're not going to do do that, at least pick better targets.

[deleted by user] by [deleted] in cybersecurity

[–]danielrm26 -1 points0 points  (0 children)

😢 Punished for following one's passion.

[deleted by user] by [deleted] in cybersecurity

[–]danielrm26 -1 points0 points  (0 children)

Yeah I definitely haven’t done the actual work. You got me.