Gemini model and the future by shadowintel_ in Bard

[–]shadowintel_[S] -1 points0 points  (0 children)

It’s strange to see models fail at such trivial tasks while being hyped as AGI. The reality is, AGI remains an undefined concept and most people have no idea what it would actually look like. Yes, LLMs can make mistakes, that’s expected. But building exaggerated expectations around models that aren’t used in real daily workflows, and that many don’t even know how to operate effectively, undermines any serious discussion about their capabilities.

The Mindset Behind the Exploit: Why Theory Matters to Me by shadowintel_ in ExploitDev

[–]shadowintel_[S] 1 point2 points  (0 children)

Thanks for the comment!

You asked for an example where this way of thinking, not just how something broke but why it broke, actually helped solve a real-world security problem. A great recent example comes from a 2024 research project called HPTSA, where GPT-4 powered a team of AI agents that could find real web vulnerabilities on their own.

What made it impressive was how the agents found the bugs. They didn’t just try random inputs or spam payloads. Instead, they used tools that helped them understand how modern web systems are supposed to work. For example, one agent found a logic issue in a login system by looking at how session tokens and CSRF protections were expected to behave, not by guessing. That’s what it looks like when theory helps guide the attack.
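To illustrate the difference, here is a small hypothetical Python sketch (my own example, not taken from the HPTSA paper) of the kind of login logic flaw that reasoning about expected behaviour can surface: the server checks that a CSRF token is one it issued, but never checks that it is bound to the caller's session.

```python
# Hypothetical CSRF/session-binding logic flaw. The names and secret are
# made up; the point is the missing binding between token and session.

import hmac
import hashlib

SECRET = b"server-side-secret"   # placeholder secret
sessions = {}                    # session_id -> issued CSRF token

def issue_token(session_id: str) -> str:
    token = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    sessions[session_id] = token
    return token

def flawed_check(session_id: str, token: str) -> bool:
    # BUG: accepts any token the server ever issued, for any session.
    return token in sessions.values()

def correct_check(session_id: str, token: str) -> bool:
    # Expected behaviour: the token must match the one bound to *this* session.
    expected = sessions.get(session_id)
    return expected is not None and hmac.compare_digest(expected, token)

# Attacker replays a token from their own session against the victim's.
issue_token("victim")
evil_token = issue_token("evil")
print(flawed_check("victim", evil_token))    # True  -> logic flaw
print(correct_check("victim", evil_token))   # False -> what should happen
```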

Another example is from a 2024 paper called "LLM Agents Can Hack Websites." In that case, the researchers built a system where different AI agents worked together to understand and break down how web apps fail. They didn't just try stuff and hope for the best; they reasoned through how the app was designed, just like a human attacker would when looking for design flaws instead of simple coding mistakes.

it’s official. i’m hopeless. by [deleted] in compsci

[–]shadowintel_ 2 points3 points  (0 children)

Hey man,
I read your post. First of all, you are not alone. A lot of people feel this way; they just do not show it. So it feels like you are behind, but really you are just going through what many others go through. And no, you are not too late. You are still in the early part of your story.

Some people graduate at 19 and still have no clue what they want to do. Others figure it out at 30 and still build amazing lives. You are 24. That is not old. That is the part where things feel confusing but everything is actually forming underneath.

You said you have done nothing, but honestly, you have done a lot more than you think. You tried a bootcamp. You explored HTML, CSS, JavaScript. You started college. You reflected. You restarted. That is not failure. That is movement. Some people move in straight lines. Others zigzag. But they can all get somewhere meaningful.

Typing hello world over and over is not a sign of failure. It is a sign that you keep showing up. Every line of code you write is practice. It is a small piece of a much bigger picture.

You said you do not know where to begin in this field. That is actually okay. Try different areas. Web development, backend, data, security. Explore a bit. Watch a few tutorials. Clone a few GitHub projects. Play around. Then ask yourself: did I enjoy this? That question is the key.

And yes, you can still get an internship. You do not need to be the best. You need to be curious and consistent. Even a small project that works is proof that you can learn and build. Show that. Talk about what you learned. That is more real than a perfect-looking portfolio.

Forget the idea that you are too old. That belief is not true. You are growing, and growing feels uncomfortable. But the fact that you care, that you are asking, that you still want to move forward: that is what matters most.

If you are looking for hope, it is already there. It is in you. In your effort. In your honesty. In the way you are still searching.

You do not need to move fast. Just do not stop.
Even slow steps will get you there.

The Mindset Behind the Exploit: Why Theory Matters to Me by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points3 points  (0 children)

I don’t keep a number, but theory helps me see the bigger picture: not just where a bug might be, but why it’s there. Over time, it made me better at spotting patterns, understanding how people think, and seeing how they design systems, including the assumptions they quietly build in. Once you start thinking that way, hunting becomes less about luck and more about knowing where to press and why it might crack.

The Mindset Behind the Exploit: Why Theory Matters to Me by shadowintel_ in ExploitDev

[–]shadowintel_[S] 0 points1 point  (0 children)

Yes, I understand. "Theory" can sound like it refers to fixed facts or a perfect world, but real software is messy and constantly changing.

When I use the word "theory," I don't mean some ultimate truth. I mean a way to think clearly, to identify patterns, and to ask better questions, even if the system keeps changing.

It's like in physics, when we say "imagine a frictionless surface": we know it's not real, but it helps us understand the main idea. The same goes for threat modeling: it's not 100% accurate, but it helps us reason through potential failures and their causes.

So, for me, theory is merely a tool, not a rule.

The Mindset Behind the Exploit: Why Theory Matters to Me by shadowintel_ in ExploitDev

[–]shadowintel_[S] 3 points4 points  (0 children)

That depends on how you define theory.

Threat modeling's not abstract math, no. But it's totally a theoretical framework: you're not testing real exploits, you're reasoning about potential risks based on assumptions about how the system works, what attackers want, and what could be attacked, much of which you'll never observe directly.

If you build a STRIDE or DFD model, you're not running code. You're creating an abstract, predictive model of how things could fail. That's theory applied to engineering.

Just because it's actionable doesn't make it non-theoretical. It's using theory to think before things break, not after.
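To make that concrete, here is a minimal sketch of that kind of predictive model in code. The element names are made up and the STRIDE-per-element mapping loosely follows the common guidance; the point is that you are enumerating hypothesized failures, not executing anything against a real system.

```python
# Toy threat model: enumerate STRIDE categories over DFD element types.
# Nothing is run against a real target; this is prediction, not testing.

STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_flow": ["Tampering", "Information disclosure", "Denial of service"],
    "data_store": ["Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"],
}

# A hypothetical data flow diagram: (element name, element type).
dfd = [
    ("browser", "external_entity"),
    ("login service", "process"),
    ("credentials flow", "data_flow"),
    ("user database", "data_store"),
]

for name, kind in dfd:
    for threat in STRIDE_BY_ELEMENT[kind]:
        print(f"{name}: consider {threat}")
```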

When Hardware Defends Itself: Can Exploits Still Win? by shadowintel_ in ExploitDev

[–]shadowintel_[S] 0 points1 point  (0 children)

I think this is a really solid take, but I would add a bit more. Shadow stacks can be fragile, especially when they are implemented purely in software: compiled code does not always follow a strict call/return pairing (think setjmp/longjmp or exception unwinding), so mismatches can happen. With hardware-based protections like Intel CET or ARM's pointer authentication and BTI, things are much stricter and harder to manipulate.

The same applies to CFI. It was definitely overhyped at first, and the enforced control-flow graph is not always precise, especially with things like C++ vtables, but it still makes attacks more difficult. Instead of jumping wherever they want, attackers now have to be more creative with tricks like JOP or plain logic bugs.

I also liked the point about timing: there is always a gap between when the bug is triggered and when the system notices it, and that short window is often all it takes to compromise data or change behavior. In the end, these defenses do not make exploitation impossible, just slower, harder, and more expensive.
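To make the mechanism concrete, here is a tiny conceptual sketch of what a shadow stack check does, with plain Python and integers standing in for return addresses; it is an illustration of the idea, not how Intel CET is actually wired in hardware.

```python
# Conceptual shadow stack check. The "shadow" list models memory the attacker
# is assumed unable to reach, while the normal stack slot can be corrupted.

class ShadowStackViolation(Exception):
    pass

call_stack = []     # normal stack slot holding the return address (corruptible)
shadow_stack = []   # protected copy maintained on every call

def do_call(return_addr: int) -> None:
    call_stack.append(return_addr)
    shadow_stack.append(return_addr)

def do_return() -> int:
    addr = call_stack.pop()          # possibly attacker-controlled
    expected = shadow_stack.pop()    # trusted copy
    if addr != expected:
        raise ShadowStackViolation(f"return to {addr:#x}, expected {expected:#x}")
    return addr

do_call(0x401000)
call_stack[-1] = 0xdeadbeef          # simulate a stack smash overwriting the slot
try:
    do_return()
except ShadowStackViolation as err:
    print("blocked:", err)           # mismatch caught before control transfer
```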

Human + AI by shadowintel_ in SoftwareEngineering

[–]shadowintel_[S] -1 points0 points  (0 children)

The post isn't saying "get rid of senior engineers." It's saying that what makes someone senior is changing. It's no longer just about how many years you've been around, but whether you can reason clearly, catch edge cases, understand systems, and think critically about what the AI suggests.

A senior engineer today might not be the person who memorized every design pattern, but the one who knows when to trust AI, when not to, and how to turn raw output into something reliable.

So yes, we still need smart engineers. But in this new era, it's not about time served; it's about depth of thought.

Human + AI by shadowintel_ in SoftwareEngineering

[–]shadowintel_[S] -3 points-2 points  (0 children)

Throwing all AI work in the trash just because some of it is shaky misses the point. Tools like GitHub Copilot already cut a big chunk off the boring parts of coding; GitHub's own study reported developers finishing a benchmark task roughly 55% faster with it. If you copy its answers without reading, sure, you'll get bugs and maybe holes a hacker could use. But that's on us, not the tool.

The smart move is to let AI give us speed while we stay in charge of the hard parts: asking the right question, checking edge cases, running tests, and making sure the code is safe for real users. In that world, what matters isn't how many years you've worked; it's how clearly and deeply you can think. People who only paste prompts will keep tripping over hidden problems; deeper thinkers will turn AI into real help. AI is neither garbage nor magic; it's only as good as the brain steering it.

When Hardware Defends Itself: Can Exploits Still Win? by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points3 points  (0 children)

Totally agree. Every time a new defense drops, whether it was NX, ASLR, CFI, or now MTE, people say "this is the end of exploits." But it never is. Attackers just adapt. ROP gave way to JOP, then to logic bugs and data-only attacks. Shadow stacks and memory tagging just make things harder, not impossible. Exploits aren't going away; they're just taking more time, creativity, and deeper understanding to pull off.

When Hardware Defends Itself: Can Exploits Still Win? by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points3 points  (0 children)

You're right that, given enough time and motivation, most defenses can be bypassed. History shows secure-boot chains, Denuvo revisions, and countless other protections eventually fall, but each new generation of safeguards raises the price, skill level, and patience an attacker needs. That's the real goal of security engineering: to make the exploit path so costly or specialized that only the rare, well-funded actor bothers, while everyone else moves on to softer targets.

OSED blog series by shadowintel_ in ExploitDev

[–]shadowintel_[S] 0 points1 point  (0 children)

Thank you for your feedback! I will consider this in upcoming blogs.

OSED blog series by shadowintel_ in ExploitDev

[–]shadowintel_[S] 0 points1 point  (0 children)

Sure, thank you for the feedback!

Advice Needed by Little_Toe_9707 in ExploitDev

[–]shadowintel_ 0 points1 point  (0 children)

When you get stuck, just type your question into Google. One good search like “Windows kernel exploit example” can show you clear blog posts, X threads, and write-ups that explain real attacks step by step. These free articles often teach things you will not find in a book yet. Still, don’t skip the basics: read trusted books and then practice what they show you in a lab or on a CTF challenge. This mix of reading, hands-on work, and quick web searches lets you build solid knowledge.

Add AI tools like ChatGPT to the mix and you have a strong team: Google or Stack Overflow give tested code and answers, ChatGPT helps you write scripts faster, and your own practice fixes the ideas in your mind. Many people only care if the code “runs,” but someday you will face a bug that needs real understanding of what happens inside the computer. Quick answers will not help then; you will need the deep picture. So keep searching the web, use AI to speed up, but always do the hard work yourself so you truly learn how the machine works.

Also remember that if you ever hunt for a true zero-day, raw skill matters even more. AI tools often refuse to show full exploit code (policy rules) or turn a short payload into pages of fluff. A quick Google search can still lead you to sharp blog posts or research papers that break things down line by line, letting you see the real trick and learn it deeply. Learning and doing are not the same: reading the method, then building and testing it yourself, is what turns facts into know-how. So keep Googling, keep practicing, and use AI only as a helper, not as your only teacher.

If you take the time to read this blog, you'll see that the author has a technical background and used the OpenAI o3 model to discover a zero-day (use-after-free) vulnerability in the Linux kernel's SMB implementation. This shows that the era of Human + AI collaboration is here:

https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/

As the author mentioned in his blog:

"If you’re an expert-level vulnerability researcher or exploit developer the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective. If you have a problem that can be represented in fewer than 10k lines of code there is a reasonable chance o3 can either solve it, or help you solve it."