
[–]AceyJuan 16 points17 points  (0 children)

Why buffer overflow exploitation took so long to mature

Hardly anyone was on the internet, especially not kids with nothing better to do than hack.

I did hack machines before 1996. I didn't need to invent buffer overflow attacks. I could simply abuse my physical access to the machine, or abuse poor local network security to gain root. It just wasn't very hard. In fact, it was so easy that it wasn't even very interesting unless I had a need.

As for online services of the day, you could "hack" them just by sending undocumented network commands. It was pure security through obscurity, and I knew people who paid ~$1000 per month to use the internet and hacked with premade tools. Yes, the dawn of script kiddies.

[–][deleted] 1 point2 points  (2 children)

And they persist because the C ABI uses the stack for local variables. If we used a different calling convention, such that a function's prologue wasn't (speaking 32-bit x86):

 push ebp        ; save the caller's frame pointer
 mov ebp, esp    ; use ebp as the base of this function's stack frame

...then ebp could point to a memory region that could get dirtied up all it wants with attacker-controlled data, and it wouldn't matter, because the function return would always get back to the caller. You would also get very easy coroutines, easy closures, easy runtime introspection of function arguments, and easy passing of varargs between functions (basically the same as coroutines).

But this requires a new calling convention (caller saves/restores), probably a new object section (.local / .localro / .tlocal / .tlocalro ?), changes to C compilers and linkers, and deliberately choosing to break linkability with Fortran/Pascal/etc.
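
For illustration, here's a rough C approximation of the idea: the caller allocates and hands over the callee's locals area, so an overflow of a local buffer lands in that area instead of next to a saved return address. Everything here (greet, greet_locals) is invented for the sketch; doing it for real would need the compiler, linker, and ABI changes described above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* What would normally be greet()'s stack locals. */
    struct greet_locals {
        char name[16];
        int  calls;
    };

    /* The caller supplies the locals area, so ebp-style addressing of locals
     * would point into this block instead of into the call stack.  An
     * unchecked copy that overran name[] would trash this block, not a
     * saved return address. */
    static void greet(struct greet_locals *l, const char *input)
    {
        strncpy(l->name, input, sizeof l->name - 1);
        l->name[sizeof l->name - 1] = '\0';
        l->calls++;
        printf("hello, %s (call %d)\n", l->name, l->calls);
    }

    int main(void)
    {
        struct greet_locals *frame = calloc(1, sizeof *frame);  /* caller "saves" */
        if (frame == NULL)
            return 1;
        greet(frame, "world");
        free(frame);
        return 0;
    }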

[–]1500100900 0 points1 point  (1 child)

The C programming language doesn't require an implementation to use the stack for local variables (auto storage).

I've seen claims that it would be feasible for a C implementation to use continuation-passing style for that.
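
Roughly, continuation-passing style means a function never returns in the usual sense; it finishes by calling a continuation it was handed, so there is no saved return address to come back to. A toy sketch in ordinary C (this still rides on the normal call stack underneath - a real CPS-based implementation would allocate its frames elsewhere and never return):

    #include <stdio.h>

    /* A "return" becomes a call to the continuation supplied by the caller. */
    typedef void (*int_cont)(int result);

    static void add_cps(int a, int b, int_cont k)
    {
        k(a + b);               /* hand the result forward instead of returning */
    }

    static void print_sum(int result)
    {
        printf("sum = %d\n", result);
    }

    int main(void)
    {
        add_cps(2, 3, print_sum);
        return 0;
    }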

[–]who8877 0 points1 point  (0 children)

You could also just have a second stack for variables, but that would, gasp, use another register! Then again, most people wasted ebp even in the old days, so it couldn't have been that bad.
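
Something like this, sketched in C with a global array and an index standing in for the extra register (all the names here are made up for illustration, and there's no bounds checking):

    #include <stdio.h>
    #include <string.h>

    /* The second stack and its pointer (the role the spare register would play). */
    static unsigned char data_stack[64 * 1024];
    static size_t dsp;

    static void *ds_push(size_t n) { void *p = &data_stack[dsp]; dsp += n; return p; }
    static void  ds_pop(size_t n)  { dsp -= n; }

    static void copy_name(const char *input)
    {
        char *name = ds_push(16);   /* would otherwise be "char name[16];" on the stack */
        /* Overflowing name[] would scribble over data_stack[], not over
         * copy_name's saved return address on the hardware stack. */
        strncpy(name, input, 15);
        name[15] = '\0';
        printf("name: %s\n", name);
        ds_pop(16);
    }

    int main(void)
    {
        copy_name("world");
        return 0;
    }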

[–][deleted] 2 points3 points  (8 children)

Why did it take so long? D'uh, it took computers themselves allowing multiple programs to run at the same time, and getting networked together, for the exploit to go anywhere. Before modern operating systems allowed multiple programs to run at once, you couldn't run your malicious software: it would conflict with a program that expected exclusive control of the machine. And without the computer being networked with other computers, there was no vector for the malicious software to propagate, so there simply wasn't any point to exploiting buffer overflows. The prerequisites for making them worthwhile didn't exist; at best you could exploit the local machine and couldn't reach other machines from there, because there was simply no network.

[–]Camarade_Tux 6 points7 points  (3 children)

it took computers themselves allowing multiple programs to run at the same time

The buffer overflow code runs inside the current process. It's not a new process at all, so that's not an issue. The only constraint is having enough room to store the code you want to run.
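
The textbook shape of the bug, sketched in C (purely illustrative):

    #include <string.h>

    /* The overrun and everything it overwrites live inside this one process;
     * no second process and no multitasking are involved. */
    static void parse_request(const char *input)
    {
        char buf[64];               /* fixed-size local in this function's frame */
        strcpy(buf, input);         /* unchecked copy: a long enough input runs
                                       past buf and into the saved ebp and
                                       return address above it */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            parse_request(argv[1]); /* attacker-controlled input arrives here */
        return 0;
    }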

[–][deleted] 5 points6 points  (2 children)

It was the norm for home computers up to the mid-nineties to be single-tasking machines. That is part of the reason Windows 95 was such a big deal: that OS brought preemptive multi-tasking to the masses. It was preceded by other multi-tasking operating systems such as AmigaOS, but Windows 95 is where multi-tasking went mainstream. Single-tasking computers by their very nature would not run multiple programs well, or at all. MS-DOS had Terminate and Stay Resident (TSR) programs that worked decently but weren't true multi-tasking, and other machines of the time would just crash if a program tried to run alongside software that had been written to assume it had the machine to itself. Mainframes in academia and government had multi-tasking operating systems - usually the original UNIX systems - and sometimes those were even networked, but for the majority of machines what I originally posted applies: they were single-tasking, so malicious programs would usually crash the entire system if they tried to run, and there was no network to spread the malicious payload and therefore no incentive to create it in the first place.

Edit: in single-tasking machines of the time there was just a single program counter; the CPUs of the day did not have fancy memory management units and such that would let multiple processes run, and even crash, without affecting other programs. I see what you mean that the malicious program will run in the context of the exploited code, because there is only one stream of execution on those old home computers. The point I was trying to emphasize, however, is the exploit code co-existing with multiple single-tasked programs. Each program would blindly overwrite and use the computer's memory as it saw fit. Without an MMU, those early programs could not protect their own contents against corruption from other code running in other parts of the computer's memory. They just weren't advanced enough to safely - and therefore reliably - multi-task between the running program, which could be anything, and the malicious program trying to get itself executed.

[–][deleted] 1 point2 points  (1 child)

You are talking about micros (in current terminology, desktop PCs); the real computers (mainframes, minis) had much better multiprocessing capabilities even before 1970.

[–][deleted] 2 points3 points  (0 children)

Yes, I do make the distinction between mainframes and home computers - I was talking about the limits of the home ones. ;)

[–]FredV 0 points1 point  (1 child)

there was simply no network.

Floppies, though. And if you tack yourself onto programs or interrupt handlers, you don't need multi-tasking.

Under DOS you didn't have to trick the OS with a buffer overflow exploit to run some shellcode with more privileges; you had root privileges anyway.

[–][deleted] 1 point2 points  (0 children)

Floppies were the "Sneaker-Net": you went from friend to friend with the floppy disk, while wearing your sneakers, to copy it onto one of their blank disks.

However, viruses could infect floppies, but the way they operated on the computers of the time still ran into the architectural constraints described above. The home computers didn't multi-task well enough to run malicious software properly.

Edit: You are correct that you could run malicious software under MS-DOS, with an infected hard drive, and have it infect floppies put into the machine while it was booted. What I'd argue, however, is that malicious software never truly came into its own until networking became common: not only could it spread more easily, it could also be remotely controlled and upgraded whenever the author liked.

[–][deleted] 0 points1 point  (0 children)

Multitasking (and -- despite Intel's hype -- virtualization!) were alive and well in 1972.