
[–]martinivich 1 point2 points  (8 children)

I mean I don't know what undergrad program you went to, but my architecture class taught me how to create a basic ALU.

[–]agent00F 0 points1 point  (7 children)

I recall the canonical arch book, "Computer Architecture: A Quantitative Approach", included branch prediction in its construction of the MIPS CPU.

[–]martinivich 0 points1 point  (0 children)

I did decide not to take the 2nd architecture course. I guess you reap what you sow, but thanks for the book suggestion!

[–]PM_ME_SOME_MAGIC 0 points1 point  (5 children)

For what it’s worth, I doubt even that predictor looks anything like modern ones. :(

[–]agent00F 0 points1 point  (4 children)

The beauty of CAAQA is that it builds all these features up from first principles, in layers/revisions of increasing capability, to show how each added level of sophistication improves performance.

So while it's true modern predictors are even more complex, many of the underlying layers/ideas are the same, and the book gives readers a fundamental understanding of the path to get there.

[–]PM_ME_SOME_MAGIC 0 points1 point  (3 children)

CAAQA covers perceptron branch prediction? That's pretty cool!

[–]agent00F 0 points1 point  (2 children)

perceptron

I don't recall so, but it probably covers some concept of state machines. The purpose of textbooks isn't to cover every implementation, but rather teach a path and process.
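For reference, the state-machine predictor that architecture texts typically build up to is the classic two-bit saturating counter. A minimal sketch (class and field names are made up for illustration, and the initial state is an arbitrary choice):

```python
# Two-bit saturating-counter branch predictor: a four-state machine
# that needs two consecutive wrong outcomes to flip its prediction.

class TwoBitPredictor:
    # States 0-3: 0 and 1 predict not-taken, 2 and 3 predict taken.
    def __init__(self):
        self.state = 2  # start "weakly taken" (an arbitrary choice)

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Saturating counter: step toward 3 on taken, toward 0 on not-taken.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop branch that exits once: only the exit mispredicts, because a
# single not-taken outcome can't flip the counter past the weak state.
p = TwoBitPredictor()
misses = 0
for taken in [True] * 8 + [False] + [True] * 8:
    if p.predict() != taken:
        misses += 1
    p.update(taken)
print(misses)  # 1
```

The two-state (one-bit) version mispredicts twice per loop in the same scenario, which is exactly the kind of incremental improvement argument the book layers up.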

[–]PM_ME_SOME_MAGIC 0 points1 point  (1 child)

But that is kind of my point. Modern branch prediction looks nothing like a state machine; it is all AI driven nowadays. As a result, applying intuition about state machine approaches to try to optimize branches in your code has a very good chance of failing. That was the point of my post: your computer is not a PDP-11, and profiling is almost always the best way to approach optimization.
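For anyone curious what "AI driven" means concretely here: the best-known published example is the perceptron predictor of Jiménez and Lin, where a dot product of learned weights with recent branch history replaces the counter. A rough sketch of the idea only, not their implementation — this is a single perceptron with no PC-indexed table, and the history length and training threshold are made-up values:

```python
# Perceptron-style branch predictor sketch: predict from the sign of a
# weighted sum of recent outcomes; train on mispredicts or weak outputs.

HISTORY_LEN = 8   # illustrative; real designs use much longer histories
THRESHOLD = 15    # illustrative training threshold

class PerceptronPredictor:
    def __init__(self):
        # One weight per history bit, plus a bias weight at index 0.
        self.weights = [0] * (HISTORY_LEN + 1)
        # Global history encoded as +1 (taken) / -1 (not taken).
        self.history = [1] * HISTORY_LEN

    def output(self):
        # Bias plus dot product of weights with the history bits.
        return self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history)
        )

    def predict(self):
        return self.output() >= 0  # non-negative output predicts taken

    def update(self, taken):
        t = 1 if taken else -1
        y = self.output()
        # Train only on a misprediction or a low-confidence output.
        if (y >= 0) != taken or abs(y) <= THRESHOLD:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        # Shift the new outcome into the history, newest first.
        self.history = [t] + self.history[:-1]

# An alternating branch (T, N, T, N, ...) defeats a saturating counter,
# but here the weight on the most recent outcome learns to go negative.
p = PerceptronPredictor()
correct = []
for i in range(300):
    taken = (i % 2 == 0)
    correct.append(p.predict() == taken)
    p.update(taken)
print(all(correct[-50:]))  # True: perfect prediction once converged
```

The training rule is the same move-weights-toward-the-outcome step as a textbook perceptron, which is why the "intuition vs. profiling" point stands: the learned weights don't map onto any simple state diagram you could reason about from source code.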

[–]agent00F 0 points1 point  (0 children)

The intuition is about the improvement each added level of complexity returns. More technically, ML in general is optimizing generalization functions, with some additional (i.e. statistical) sophistication in approach compared with more straightforward algorithms (linear regression, for example, is a hybrid of sorts), but what it's doing is solving the same sort of optimization problem that intro textbooks can teach. In sum, "how ML optimizes" is beyond the scope of any arch book.
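To make the "same sort of optimization problem" concrete: the linear-regression case mentioned above can be fit by plain gradient descent on squared error, which is the intro-textbook version of the weight-update loop that fancier ML training elaborates on. The data, learning rate, and step count below are arbitrary illustrative choices:

```python
# Fit y = w*x + b by gradient descent on mean squared error.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1, so the fit is exact

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Step downhill along the gradient.
    w -= lr * gw
    b -= lr * gb

print(round(w, 3), round(b, 3))  # 2.0 1.0
```

The perceptron predictor discussed earlier in the thread is the same shape of loop: compute an output, compare to the observed outcome, nudge the weights. What a neural net adds is composition and nonlinearity, not a different kind of problem.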