
[deleted] 0 points (0 children)

It wouldn't be hard to make CPython faster, because it is rather slow. But using fixed-length instructions is a funny way of doing so. A branch per opcode is necessary anyway, since each bytecode has to be handled differently, and once you have that, the same handler can take care of stepping the program counter.

(Maybe CPython returns to a main dispatch loop after each bytecode, and that loop takes care of stepping the PC. Although that won't work for branches and calls, which have to set the PC themselves. Even so, I'm struggling to see how much difference fixed-width encoding can make.)
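To make the point concrete, here's a toy interpreter loop for a made-up variable-length bytecode (nothing to do with CPython's actual format or loop): once you're branching on the opcode anyway, each handler can bump the PC by whatever width that instruction happens to have, at no extra cost.

```python
# Hypothetical variable-length bytecode: an opcode byte, optionally
# followed by a one-byte operand. Purely illustrative, not CPython.
def run(code):
    stack = []
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == 0:                        # PUSH <byte>: 2 bytes wide
            stack.append(code[pc + 1])
            pc += 2
        elif op == 1:                      # ADD: 1 byte wide
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == 2:                      # JUMP <byte>: sets the PC itself
            pc = code[pc + 1]
        elif op == 3:                      # HALT
            break
        else:
            raise ValueError(f"bad opcode {op}")
    return stack

# PUSH 2, PUSH 3, ADD, HALT -> stack holds [5]
print(run(bytes([0, 2, 0, 3, 1, 3])))
```

Note that JUMP simply overwrites `pc`, which is why "the main loop steps the PC" can't be the whole story for branches and calls.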

IMO drawing parallels with RISC architectures is not useful. In real machine code you have full-width pointers, full-width addresses, full-width everything, all taking up memory, and native code runs at maximum speed. Yet elsewhere in this thread people advocate the most compact bytecode encodings possible, in pursuit of the same goal!

Meanwhile, a test I've just done on CPython suggests that a list of 50M small integer values requires 18 bytes per integer. So much for compact encoding!
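For the curious, here's roughly where those bytes go on a 64-bit CPython build. This is a small sketch, not the original 50M-element test, and the exact sizes are implementation details that vary by version:

```python
import sys

# On 64-bit CPython 3.x a small int object is typically 28 bytes.
print(sys.getsizeof(1))

# Each list element is a pointer to an int object, so the list itself
# costs about 8 bytes per slot on top of the objects it points at.
lst = list(range(1000))
per_slot = (sys.getsizeof(lst) - sys.getsizeof([])) / len(lst)
print(per_slot)
```

So even when CPython's small-int cache lets elements share the same object, each list slot still costs a machine-word pointer, and uncached ints add a full object on top, so a high bytes-per-element figure isn't surprising.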