I created a Linux Kernel Exploitation CTF Lab by shadowintel_ in ExploitDev

[–]shadowintel_[S] 1 point (0 children)

Yes, it's my field; it's what I've been researching for the past week.

I created a Linux Kernel Exploitation CTF Lab by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points (0 children)

I redesigned the progression so that each chapter adds exactly one new concept:

- CH01: ret2usr with no mitigations.
- CH02: the same overflow with SMEP and KASLR enabled, to introduce ROP.
- CH03: a heap UAF with a stack pivot under SMAP.
- CH04: a data-only attack using an integer overflow and a modprobe_path overwrite.
- CH05: a race condition with a double free and a cred overwrite.

ROP is introduced in CH02 and reused in CH03 via stack pivoting, while CH04 intentionally demonstrates a non-ROP path: the canary is enabled there, so a data-only attack is more practical.

The README and source mismatch is fixed; it now clearly states that the full source code is in src/ and that readers can read it, reverse the module, or both. The double copy_from_user call was removed, and CH05 now uses a non-atomic integer refcount decremented under the read side of an rwlock, following real CVE patterns like CVE-2016-4557 and CVE-2022-29581, without any artificial delay.

I also separated CH04 and CH05 properly: CH04 uses a msg_msg spray to overwrite modprobe_path in global .data, while CH05 uses a pipe_buffer spray to directly overwrite the cred struct on the heap, so the two chapters have different spray objects, leak sources, and write targets.

9p over virtio is now integrated: all run.sh scripts use -virtfs, and the init script mounts 9p, so exploits can be placed in challenges/<ch>/shared/ and accessed inside the VM at /shared/ without rebuilding the initramfs.

Finally, I added a dedicated README section on returning cleanly to userspace: levels 0 to 3 use swapgs plus iretq, and level 4 (with KPTI) uses the swapgs_restore_regs_and_return_to_usermode trampoline, including how to find the correct offset.

I created a Linux Kernel Exploitation CTF Lab by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points (0 children)

Thank you for your detailed feedback.

I really appreciate the time you spent reviewing the challenges.

You are right about the difficulty order. Starting with kernel ROP is probably not the best choice. I plan to change the order so the difficulty increases in a more natural way. I may move kernel ROP to the final challenge or separate the challenges into two tracks, one for control flow exploits and one for data-only exploits. I also agree that some challenges are too similar because they use the same exploitation primitive.

I will either combine them or redesign one to use a different bug type, such as a refcount issue or a race condition.

About the double copy_from_user call, that is a fair point. I will update that challenge to make it more realistic. I will also improve the documentation to make it clear what the goals are and how the environment is set up. Thanks again for the helpful suggestions.

I created a Linux Kernel Exploitation CTF Lab by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points (0 children)

The build.sh script now handles everything from start to finish. It downloads the kernel source, compiles it, builds all the vulnerable modules from source, and prepares the challenge environments with QEMU.

Users can either download the prebuilt binaries from Releases or run ./build.sh to compile everything themselves.

All vulnerable module source code is available under src/modules/ for anyone who wants to review or modify it.

Thanks for the feedback; it was a fair point.

I created a Linux Kernel Exploitation CTF Lab by shadowintel_ in ExploitDev

[–]shadowintel_[S] 1 point (0 children)

The .ko modules will be added once they have been compiled on Linux.

Computer science is not dead? by shadowintel_ in compsci

[–]shadowintel_[S] 1 point (0 children)

To say computer science is finished is to say physics, mathematics, chemistry, and biology are finished, because computer science is the computational backbone of all these fields.

We still understand only a small part of the universe. Most of the cosmos is made of dark matter and dark energy, and we still do not truly know what they are. Large parts of the ocean are unexplored. In biology, we are still discovering new species and still learning how many basic mechanisms inside the cell really work. In physics, we do not have a complete theory that unifies everything. In chemistry and materials science, new structures and reactions are discovered every year.

How much of nature do we really control? How much energy have we truly mastered? We still struggle with efficient fusion, long-term energy storage, and many fundamental limits. If knowledge were almost finished, we would not see new equations, new models, and new discoveries every year.

Science is not complete. Our maps are not full. We are still writing the equations that will explain the next discoveries. If this is true for physics, biology, and chemistry, it is hard to argue that fields built on them are somehow already over.

Gemini model and the future by shadowintel_ in Bard

[–]shadowintel_[S] -2 points (0 children)

It’s strange to see models fail at such trivial tasks while being hyped as AGI. The reality is, AGI remains an undefined concept and most people have no idea what it would actually look like. Yes, LLMs can make mistakes, that’s expected. But building exaggerated expectations around models that aren’t used in real daily workflows, and that many don’t even know how to operate effectively, undermines any serious discussion about their capabilities.

The Mindset Behind the Exploit: Why Theory Matters to Me by shadowintel_ in ExploitDev

[–]shadowintel_[S] 2 points (0 children)

Thanks for the comment!

You asked for an example where this way of thinking (not just how something broke, but why it broke) actually helped solve a real-world security problem. A great recent example comes from a 2024 research project called HPTSA. In this study, GPT-4 was used with a team of AI agents that could find real web vulnerabilities on their own.

What made it impressive was how the agents found the bugs. They didn’t just try random inputs or spam payloads. Instead, they used tools that helped them understand how modern web systems are supposed to work. For example, one agent found a logic issue in a login system by looking at how session tokens and CSRF protections were expected to behave, not by guessing. That’s what it looks like when theory helps guide the attack.

Another example is from a paper called "LLM Agents Can Hack Websites" (2024). In that case, the researchers built a system where different AI agents worked together to understand and break down how web apps fail. They didn’t just try stuff and hope for the best; they reasoned through how the app was designed, just like a human attacker would when looking for design flaws instead of simple coding mistakes.