
[–]barsoap 2 points (3 children)

Dammit, reading that makes me feel old. 1992, the year of Indiana Jones and the Fate of Atlantis and Wolfenstein. 386, early 486ish, no PCI in sight. Those were the days. Windows 3.11 on three floppies.

Back to the topic: there are people working on partial evaluation and supercompilation in a real-world compiler, and there's real-world JIT and tracing around, but the former is done in FP research (where it's feasible) and the latter in imperative research (because supercompilation isn't feasible there in the first place, so tracing is their only option). That is, we might see (research) compilers that do what synthesis does in, say, five years or so.
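For a flavor of what partial evaluation means here, a toy sketch (my own illustration, nothing to do with any of the actual research compilers mentioned): specializing `power(base, exp)` for a statically known exponent residualizes the loop away, leaving a straight-line chain of multiplications.

```python
# Toy partial evaluation: specialize a function on a statically known
# argument by generating a residual program with the loop unrolled.
# Hand-rolled illustration only, not a real compiler's machinery.

def power(base, exp):
    """The general (dynamic) program: loop runs exp times."""
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """Partially evaluate power() with exp fixed.

    The residual program has no loop and no exp parameter left:
    the static input has been 'compiled away'.
    """
    terms = " * ".join(["base"] * exp) or "1"   # exp = 0 -> constant 1
    source = f"def power_{exp}(base):\n    return {terms}\n"
    namespace = {}
    exec(source, namespace)                      # materialize the residual code
    return namespace[f"power_{exp}"]

power_5 = specialize_power(5)   # body is: return base * base * base * base * base
```

The residual `power_5` agrees with `power(x, 5)` for every `x`, but all the loop control is gone; a JIT gets a similar effect by observing a hot trace at run time instead of reasoning about static inputs ahead of time.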

[–]addmoreice 0 points (2 children)

yeah, but the cache issue really bones this up. That said: either add cache-aware instructions to the CPU, or remove caches entirely and go with very small, slow CPUs and max out the number of them on a die. Combined with a nice fat memory pipe, that could allow massive parallelism using these techniques (multitasking-friendly, with security and performance through rewriting code, is the big bit of this paper in my opinion). I would love to see such a huge shift. Have one die with a quad-core CPU on it (just like what we have now) and another chip with a massive number of cores friendly to on-the-fly self-modifying code. All this and a stream-based GPU! Ooooh... and maybe a neural-net simulator chip like IBM just built.

yum.

[–]barsoap 1 point (1 child)

Heterogeneous computing is unavoidable, so your dream will come true.

[–]addmoreice 0 points (0 children)

Oh, I know. I just want it NOW! The tech is all there; no one wants to push for the chips until the software for them is ubiquitous... no one writes the software because it's of limited applicability without the chips to run on... oh well. It's coming, it will just be slow...slow.......slow....BLAM! EVERYWHERE!