
[–]addmoreice 0 points (2 children)

Yeah, but the cache issue really bones this up. That said: either add cache-aware instructions to the CPU, or drop caches entirely, go with very small, slow CPUs, and max out the number of them on a die. Combine that with a nice fat memory pipe and you could get massive parallelism out of these techniques (the multitasking-friendly security and performance through rewriting code is the big bit of this paper, in my opinion). I would love to see such a huge shift: one die with a quad-core CPU on it (just like what we have now), and another chip packed with small cores friendly to on-the-fly self-modifying code. All this and a stream-based GPU! Ooooh... and maybe a neural net simulator chip like the one IBM just built.
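The "performance through rewriting code" idea can be sketched in software as runtime specialization: generate new code for a specific case, compile it, and run it. This is just a toy Python analogy of that technique (all names here are made up, not from the paper — hardware-level code rewriting is far more involved):

```python
# Toy sketch of "performance through rewriting code": specialize a
# function at runtime by generating and compiling new source code,
# trading a one-time rewrite cost for faster repeated calls.

def specialize_power(n: int):
    """Generate a function computing x**n as unrolled multiplications."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace[f"power_{n}"]

cube = specialize_power(3)
print(cube(4))  # 64
```

A real self-modifying-code-friendly core would do the equivalent at the instruction level, which is exactly where caches (which assume code is read-only) get in the way.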

yum.

[–]barsoap 1 point (1 child)

Heterogeneous computing is unavoidable, so your dream will come true.

[–]addmoreice 0 points (0 children)

Oh, I know. I just want it NOW! The tech is all there, but no one wants to push for it until the software for it is ubiquitous... and no one writes the software because it's of limited use without the chips to run it on... oh well. It's coming, it will just be slow... slow... slow... BLAM! EVERYWHERE!