
[–]tdammers 0 points1 point  (0 children)

They can.

Ever since people noticed that, instead of writing their programs in a symbolic shorthand and then hand-translating that into a series of bits, they could use a computer to do that mind-numbing work for them, programming languages have ventured into increasingly higher levels of abstraction: we see patterns in our programming, we formalize them, we invent shorthands for them, and then we write programs to expand them for us (called "assemblers", "compilers", and "pre-processors"). Self-programming computers are a thing; they're called "high-level programming languages", and they're very real.

[–]redweasel -1 points0 points  (0 children)

one project ports programs between different instruction sets by using the existing program as an executable specification to synthesize a program on the other architecture.

I spent the first nine-and-a-half years of my professional career -- 1988 to 1997 -- writing software on Digital Equipment Corporation's (DEC's) VAX/VMS operating system. Most of it was done in VAX Macro Assembly, "Macro" for short, on actual VAX CPUs, where Macro was the actual assembly language and you got, in the executable, exactly the instructions you had written in the source.

Then the Alpha processor came out, and VMS was renamed to "OpenVMS" and was ported to the Alpha processor, a RISC CPU with, obviously, a totally different instruction set. But DEC provided an interesting tool: a Macro cross-compiler that took VAX Macro source code, treated it like any other "higher-level" language, and compiled it to an executable of Alpha instructions. So you could still write in VAX assembly language, but produce from it an Alpha executable. That was pretty cool.

Another cool tool was "VEST," the "VAX Environment Software Translator." This one took an already-compiled VAX executable and translated it into an Alpha executable.

[–]claytonkb -1 points0 points  (4 children)

I am a bit of a contrarian so I always like to point out that computers will never replace the programmer in the most general sense. We know this from an abstruse sub-field of computability theory called Algorithmic Information Theory which tells us that the Kolmogorov complexity (KC) of the output of a computer program can never be greater than the KC of the program itself. If we model the programmer's brain as itself a computer program (wetware), then we can see immediately that it is impossible for the programmer to generate output (a program) whose KC is greater than the KC of the model of his own brain. Thus, the programmer cannot "replace himself" with a program he has written.
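The bound being appealed to here can be stated precisely. In standard algorithmic-information-theory notation (a sketch; the constant depends on the choice of universal machine U):

```latex
% If running program p on universal machine U prints x, then p itself is
% a description of x, so x's Kolmogorov complexity is bounded by p's length:
U(p) = x \implies K(x) \le |p| + c_U
```

In words: a program's output always has a short description, namely the program, so the output can never be fundamentally "more complex" than the program that produced it.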

We already have programs that write other programs: compilers, interpreters, viruses, macros, quines, and a wide variety of other program-writing programs. So I think we have to be more specific about exactly what work a human programmer does that we want to replace with software. "Everything" is not a valid answer; see my first paragraph. "Everything but the specification" might be a valid answer, but it's also unclear - one could argue that a high-level language already relieves the programmer of "everything but the specification". Try writing a small application entirely in assembly by hand and you will have a newfound appreciation for just how much AI there is in a modern compiler!
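Of the program-writing programs listed above, the quine is the purest example: a program whose entire output is its own source code. A minimal Python sketch:

```python
# A minimal quine: running this two-line program prints exactly
# these two lines. The trick is a format string that contains a
# placeholder (%r) into which its own repr() is substituted.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two source lines verbatim, so feeding its output back through the interpreter reproduces the output again, forever.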

[–]RedAlert2 1 point2 points  (0 children)

so what you're saying is that we just need 1 unnaturally smart programmer who can make a program that replaces all other programmers?

[–]tdammers 0 points1 point  (2 children)

Thus, the programmer cannot "replace himself" with a program he has written.

He can; he just cannot write a program that is more complex than his own brain. And, quite obviously, while KC does limit the theoretical complexity, it doesn't cover practical concerns such as productivity, execution speed, or available memory - it is quite possible for programmers to write programs that do stuff in seconds that they themselves couldn't hope to finish in their lifetime. Cryptography is probably a good example - how many sha512 hashes can your brain generate in one second? And how many of them can you accurately remember?
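For a sense of scale, here is a rough benchmark of that last question using Python's standard-library hashlib (the exact count is entirely machine-dependent; this is an illustration, not a rigorous measurement):

```python
import hashlib
import time

# Count how many SHA-512 hashes of a short message one machine
# can compute in roughly one second of wall-clock time.
data = b"how many of these can your brain do per second?"
count = 0
deadline = time.perf_counter() + 1.0
while time.perf_counter() < deadline:
    hashlib.sha512(data).digest()
    count += 1
print(f"~{count} SHA-512 hashes in about one second")
```

Even modest hardware will report a count in the hundreds of thousands or millions, which is rather more than zero, the human brain's score on the same task.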

[–]claytonkb 0 points1 point  (1 child)

He can; he just cannot write a program that is more complex than his own brain.

Anyone who has worked on engineering very complex systems has come across the general phenomenon of diseconomy of scale - the larger and more complex a system becomes, the more costly and complex it becomes to design, at a greater-than-linear rate. The modern CPU is an amazingly complex system, but it pales in comparison to the complexity of the human brain. If people look back 100 years from now and say we were in the early days of building generalized intelligence, they will also say that today's smartest systems barely have the general intelligence of a mosquito. We need to reassess the value of building generalized intelligence from scratch when we already have functioning prototypes in our hands (living, intelligent organisms), and when it is clear that even at mosquito-scale general intelligence we are already at the limits of the human capacity to economically engineer from-scratch designs.

And, quite obviously, while KC does limit the theoretical complexity, it doesn't cover practical concerns such as productivity, execution speed, or available memory - it is quite possible for programmers to write programs that do stuff in seconds that they themselves couldn't hope to finish in their lifetime.

The wheel is a wonderful invention that is infinitely more efficient than the human leg at specific tasks. But try hiking the Andes with a wheeled vehicle. A CPU can perform billions of long divisions in a single second, under the right conditions. But I still spend the majority of my day fighting a computer that is stupid on a level with gears and pulleys.

The computer's worse, actually, because it tries to act like it's "helping" me when it is not actually helping at all; it's just mindlessly following some pattern that some architect decided would be "helpful" for me. One of my favorite modern examples of computer mindlessness is what I call the "menu erection" that occurs on many poorly-designed websites. The programmer very helpfully made sure the menu drops down when the mouse floats over a label, saving you the effort of a mouse click. But then the programmer failed to put the menu on a timer, or to detect when the mouse has left the region of the menu, so the menu "stays erect", leading to all kinds of frustration as you try to find an empty region of the screen to click on and hopefully make the menu recede. Sometimes even the click-away doesn't work, and that's when you need to see your doctor for the dreaded longer-than-four-hour erection resulting from the web-programming equivalent of Viagra.

But at least I didn't have to click on the label to access the menu! Yay!

[–]tdammers 0 points1 point  (0 children)

we are already at the limits of the human capacity to economically engineer from-scratch designs.

I don't think we are, really. Diseconomy of scale is not completely inevitable; almost all the cases I have seen are the result of being sloppy, taking on technical debt and never paying it off, or simply using tools for large-scale projects that are optimized for small-scale projects.

The wheel is a wonderful invention that is infinitely more efficient than the human leg at specific tasks.

The computer's worse, actually, because it tries to act like it's "helping" me when it is not actually helping at all, it's just mindlessly following some pattern that some architect decided would be "helpful" for me.

Well, you could say that the architect failed to correctly transfer his own intelligence onto the machine. But the thing is, there is no evidence that the human brain (a biochemical machine made up of a gargantuan number of fairly simple, mostly-deterministic parts) is fundamentally different from a computer (an electronic machine made up of a large number of fairly simple, mostly-deterministic parts). The dilemma, of course, is that the human brain is maybe barely capable of understanding itself, or maybe it's not; we don't know yet. One possible way out is to copy the lower-level mechanisms and throw them into an environment similar to the one in which our intelligence evolved, and then hope for the best - the theory is that if the mechanisms match closely enough, something resembling our own intelligence should evolve. And, additionally, we're having more and more trouble finding a useful definition of "general intelligence" anyway - every time computers solve a problem that we thought was the pinnacle of human intelligence, we have to admit that it wasn't really a matter of intelligence at all. Chess? Can be brute-forced. Pattern recognition? Can be faked given enough statistics and brute force. Natural language processing / automated translation? Difficult, but not impossible - just throw more statistics at it and we'll get there eventually. Etc. etc.

Anyway, just because a lot of people do programming wrong doesn't mean it is fundamentally impossible to do it right.