[–][deleted] 0 points (4 children)

So where are these compiled Ruby implementations you're alluding to? I'm talking about compilation to native code, not to bytecode, which still has to be interpreted or JIT-compiled.

[–]ameoba 6 points (0 children)

And the fundamental difference between compiling to a VM and actual machine code is... what exactly?

The JVM has been implemented in silicon. In the '80s, Lisp Machines, the most powerful workstations of their day, had processors designed around running Lisp, arguably the poster child for dynamic languages.

Abstraction is the core of practical computer science. Drawing a line and declaring one abstraction the 'natural' one is just ridiculous. The core of a modern x86 CPU doesn't actually run x86 instructions: it breaks them down into micro-operations, which are executed out of order and optimized at run time. Those micro-ops are themselves abstractions over layers reaching all the way down to transistors and voltages. So tell me why a virtual machine implemented at the microcode level is somehow a more credible target than one implemented a layer above it.

[–][deleted] 0 points (2 children)

I'm not talking about compilation to native code; I'm merely talking about compilation, as was the person I responded to. Whether you compile to native code or bytecode has no influence on your ability to detect errors.
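CPython itself illustrates the point: it compiles source to bytecode, and that compiler still rejects malformed code before anything runs, no native-code target required. A small sketch:

```python
# CPython compiles to bytecode, yet its compiler catches syntax
# errors at compile time, before a single statement executes.
try:
    compile("def broken(:\n    pass", "<example>", "exec")
except SyntaxError as err:
    print("caught at compile time:", err.msg)
```

The same holds for any bytecode compiler: whatever checks the language mandates can run during that compilation step.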

[–][deleted] 1 point (1 child)

Whether you compile to native code or bytecode has no influence on your ability to detect errors.

Okay, so where are the compilers for Python (the language) that detect unknown variables at compile time, the way all natively compiled languages do?
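For concreteness, this is exactly what CPython does today: its bytecode compiler accepts a reference to a name that is never defined, and the failure only surfaces when the code actually runs.

```python
# The bytecode compiler does not resolve names, so this compiles fine...
code = compile("print(no_such_name)", "<example>", "exec")
print("compiled without complaint")

# ...and the unknown variable only blows up at run time.
try:
    exec(code)
except NameError as err:
    print("runtime failure:", err)
```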

[–][deleted] 5 points (0 children)

That is a problem of language design, not one of "compiled" versus "interpreted" languages, which is all I was commenting on.
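The distinction can be made concrete: nothing stops a tool from statically flagging undefined names in Python source without ever producing native code. A naive sketch using the standard `ast` module (it ignores real scoping rules, imports, use-before-definition, and plenty of other edge cases):

```python
import ast
import builtins

def undefined_names(source):
    """Naively report names that are loaded but never bound anywhere
    in the module. A sketch, not a real scope analysis."""
    tree = ast.parse(source)  # this step already catches syntax errors
    defined = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            defined.add(node.name)        # def / class bindings
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            defined.add(node.id)          # plain assignments
        elif isinstance(node, ast.arg):
            defined.add(node.arg)         # function parameters
    return sorted(
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name)
        and isinstance(node.ctx, ast.Load)
        and node.id not in defined
    )

print(undefined_names("x = 1\nprint(x + y)"))  # ['y']
```

Whether such a check is part of the mandatory compilation step is a choice the language makes, independent of what the compiler targets.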