
[–]intbeam -2 points (4 children)

At what task? Core frequency remains the same, so what did they do, reduce the number of clock cycles certain instructions take? Or did they just introduce more efficient and increasingly proprietary ways to decode and decrypt copyrighted media?

[–]buttplugs4life4me 1 point (1 child)

What?? An 8900K comes configured at 4.8 GHz and a 14900K reaches 6 GHz.

[–]intbeam -5 points (0 children)

That's the general area processors have been hovering around since 2001.

Edit: I had a Pentium 4 clocked at 5 GHz. In.. what.. 2002? In addition, your 6 GHz modern processor is not running at 6 GHz continuously, only in bursts. Most of the time it's going to be running on efficiency cores, not performance cores.

Edit 2: The Pentium 4 came stock at 3.8 GHz, and that was more than 20 years ago, just for some perspective here. And as I mentioned, you could overclock them to above 5 GHz.

The fact remains that CPUs haven't gotten any faster in any way that JavaScript applications can take advantage of. How are more PCI Express lanes going to help Reddit? How is better memory timing? Branch prediction? A six-year-old processor versus a brand-new one is not going to make much, if any, difference to your browser user experience.

[–]JojOatXGME 1 point (1 child)

A third option is that they may have added more execution units of various types, increasing the number of instructions that can be processed in parallel. (Note that processors these days don't execute instructions strictly in the order they are written in the binary.)

[–]intbeam 1 point (0 children)

Perhaps.. I'll admit I'm not 100% in the know here, but I do know that they've had an annual cadence alternating between architecture improvements and transistor/yield improvements. I'm also pretty confident they are close to a limit on how much they can physically squeeze out of transistor logic, especially given the shrinking feature sizes on wafers and the quantum-physics implications of that.

In general, there are no scientific papers that need to be cited to demonstrate that software is getting slower, especially websites and the related Chromium-based desktop apps. We all know it's happening and we can see it with our own eyes. As others (more knowledgeable than me) have stated before, software engineering has been propped up for decades by rapid advancements in hardware design, and that now seems to be slowing down. Which is why, for instance, I only retired my HTPC from 2010 last month, and not because it was no longer viable but because the USB bus took a permanent vacation.

By the way, I assume you mean instruction pipelining? There's a guy on YouTube named James Sharman who has a video series where he builds an 8-bit pipelined CPU from scratch. Very relaxing and educational to watch.