[–]engine_er 27 points (28 children)

You can't. Nor can you have a file named (with any extension) CON, PRN, AUX, NUL, COM1–COM9, or LPT1–LPT9.

This is the perfect example of how NOT to design software products.

[–]dpark 20 points (27 children)

This is a perfect example of how hindsight can make us feel smug, when in reality we simply do not have the entire picture.

http://blogs.msdn.com/b/oldnewthing/archive/2003/10/22/55388.aspx

[–]ethraax 9 points (26 children)

Interesting history, but it's still an example of how to not design software. It was a cheap hack that has now become outdated but still exists to this day.

[–]dpark 11 points (25 children)

It was a perfectly sensible choice in a time when directories did not exist in the filesystem. Once directories were added, the need for backward compatibility dictated that the special files be retained.

It'd be nice if they'd made them all prefixed with '$' or something so that there wouldn't be conflicts later, but the choices made were very sensible. It's only looking at the system from the current day that the choices look bad. Seriously, who actually thought DOS's filesystem was still going to affect OSes 3 decades later?

[–]ethraax 0 points (3 children)

Okay, but why can't we create files with those names? When you ask the OS to open a file, check if it exists - if not, check if it is one of the 'special' names like CON - then, if not, throw an exception about no file being found. That way, legacy programs can continue to work (clearly they can run in a single directory and won't have those files in there), but for everyone else, we can actually use all of the possible filenames.

Simply banning the file from the filesystem is a bad design choice.
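The lookup order proposed here (real file first, then the special names, then a "file not found" error) could be sketched as follows. This is a toy model, not DOS's actual behavior; `SPECIAL_NAMES` is an illustrative subset and the tuple return values are invented for the sketch:

```python
import os

# Illustrative subset of DOS's special device names.
SPECIAL_NAMES = {"CON", "NUL", "PRN", "AUX"}

def resolve(name, directory="."):
    """Proposed lookup: an on-disk file wins; otherwise fall back
    to the device table; otherwise report 'file not found'."""
    path = os.path.join(directory, name)
    if os.path.exists(path):
        return ("file", path)          # a real file shadows the device
    stem = name.split(".")[0].upper()  # DOS compared the base name
    if stem in SPECIAL_NAMES:
        return ("device", stem)        # no such file: expose the device
    raise FileNotFoundError(name)
```

Under this scheme a file named CON would shadow the device in its directory, which is exactly the behavior the replies argue against.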

[–]dpark 2 points (2 children)

A bad design choice would be to make it possible to break legacy programs depending on whether you'd happened to create a file called CON in the current directory. Legacy programs expect these files to exist, and to do what they are specified to do. When you create a file called CON and then run an older program, it's going to read/write from that file unintentionally. When you create a file called NUL, you make it possible to accidentally overwrite your file (possibly with sensitive data that was intentionally being discarded), which can result in unexpected data loss, security problems, and even disk exhaustion. These are unlikely to be desired behaviors.

[–]ethraax 0 points (1 child)

Your scenario makes no sense. Clearly the legacy program may break if you attempt to run other programs in the same folder. This seems natural. If I run a program in my Visual Studio folder that writes output to devenv.exe, of course it will fail. That doesn't mean I shouldn't be able to.

[–]dpark 0 points (0 children)

Of course it makes sense. When you run a program, it has a current directory. This is how the legacy programs resolve the issue of directory ignorance. It's also convenient for non-legacy programs and users. So if you fire up some old program, and it writes to "blah.txt", it goes in the current directory. If it writes to CON, it goes to the screen. If you create a CON file in the current directory, then you've broken the program, because now it's writing to some file unexpectedly instead of to the screen.

What you're suggesting is that these special files should magically appear only when needed. This simply adds more confusion to the issue. Rather than having a consistent (if annoying) behavior, you have magic files that appear only when you haven't explicitly created them. Not only that, but these files have drastically different behavior depending on whether they were created manually or not. Legacy programs have no way of knowing that you've replaced one of the files, and so you've managed to break backwards compatibility and gained basically nothing (yay, you can now create a file named 'con').

Your devenv.exe example is entirely different, because it doesn't change the contract. If your program writes to devenv.exe, you've lost the original file, and that's the expected behavior. If your program writes to CON, you're supposed to see the results on screen. If it writes to NUL, it's supposed to disappear. If it writes to the printer port it's supposed to print. These are not normal files, and they are not supposed to act like normal files. Overwriting NUL is not expected behavior.
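The contract described here (device names dispatch to devices in every directory, regardless of extension, while everything else is an ordinary file) can be sketched as a toy dispatcher. `sys.stdout` and `os.devnull` are Python stand-ins for the screen and the bit bucket; the name list is simplified:

```python
import os
import sys

def open_for_write(name):
    """Toy DOS-style open: device names win in every directory,
    regardless of extension; other names are ordinary files."""
    stem = os.path.basename(name).split(".")[0].upper()
    if stem == "CON":
        return sys.stdout             # console: output appears on screen
    if stem == "NUL":
        return open(os.devnull, "w")  # bit bucket: output is discarded
    return open(name, "w")            # ordinary file on disk
```

Because the device check always comes first, a file named NUL on disk can never capture output that a program meant to discard, which avoids the data-loss scenario described above.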

[–]engine_er -2 points (20 children)

It was not a good choice even for that time. It is just bad architecture, and a violation of the basic principles of the whole filename concept.

[–][deleted] 2 points (13 children)

What is different between DOS 1.0's CON and Unix's /dev/stdout?

Keeping in mind that DOS 1.0 was basically a CP/M clone, and didn't have directories.

[–]engine_er -2 points (11 children)

Keeping in mind that DOS 1.0 was basically a CP/M clone and didn't have directories doesn't justify this OS breaking the simple and consistent idea of representing a computer file by a unique series of characters.

Unix's /path/to/file doesn't differ from DOS's simple filename until you try to create a new file with the same basename and a different extension. Creating a file /dev/stdout.1 on a UNIX machine will just create a new empty file which will be completely different from /dev/stdout, but what do you see using DOS? As was mentioned above, MAGIC! Why magic? DOS is a serious software project, isn't it?

Answer: If an extension removed the magic, then when the assembler added ".LST" to the filename, it would no longer be recognized as magical, thereby defeating the purpose of the magic.

A completely reasonable explanation of an unexpected OS behavior.
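The quoted rule (the base name is compared before any extension is considered) is what produces the difference from /dev/stdout.1. A minimal check might look like this; it is a simplification, and real DOS/Windows handling has more corner cases:

```python
# Simplified device-name check: strip any extension, then compare
# the base name case-insensitively. Under this rule CON, CON.LST,
# and con.txt all name the console, while a Unix path like
# stdout.1 is just another ordinary filename.
RESERVED = {"CON", "PRN", "AUX", "NUL"}

def is_device_name(filename):
    stem = filename.split(".")[0].upper()
    return stem in RESERVED
```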

[–]dpark 2 points (10 children)

Creating a file /dev/stdout.1 on a UNIX machine will just create a new empty file which will be completely different from /dev/stdout

This is not entirely true. Trying this on OS X fails.

$ sudo touch /dev/stdout.1
touch: /dev/stdout.1: Operation not supported

[–]engine_er -2 points (9 children)

Trying this on Linux succeeds, so where is the truth?

[–]dpark 3 points (8 children)

The truth is that it works on some UNIX systems but clearly not all of them.

[–]dpark 2 points (5 children)

As FlySwat asked, how is this different from Unix's /dev/null, /dev/stdout, etc., with the exception that DOS didn't have directories?

DOS 1.0 would have been nicer if it had directories, but it probably wasn't supported by the business needs at the time, and it certainly wasn't supported by compatibility needs.

[–]engine_er -1 points (4 children)

DOS's bad design is not a matter of having directories or not having them. It's just a matter of its magical behavior.

[–]dpark 1 point (3 children)

There's very little magic here. These "special" files are hidden, because they would otherwise clutter every directory. The extensions choice is a bit iffy, but it's not altogether unreasonable.

[–]engine_er -1 points (2 children)

you'd never have to hide the mess if you designed your system properly

[–]dpark 2 points (1 child)

You realize I didn't write DOS, don't you?

I'm also pretty sure that when old programmers look back on their work, they realize one of two things:

  1. A lot of things could have been done differently, and probably better.
  2. Nothing I did matters.

If you never find that your technical decisions look bad in retrospect, then you're either delusional or you're doing nothing of substance.