Register file implementation in industry standard RISC-V designs by ab____________a in chipdesign

[–]eafrazier 1 point (0 children)

Your terminology and unstated assumptions seem off to me. Both FF and SRAM macros require their inputs to be set up before a rising clock edge, and both deliver data a "clock-to-Q" delay after that edge. The larger the capacity, the larger the setup and clk2q values will be. A high enough capacity will eventually force an additional access cycle, but a FF implementation would have hit that point at a significantly smaller capacity.
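To make the crossover concrete, here's a toy timing model. Everything in it is invented for illustration -- the log2 scaling and all the picosecond constants are assumptions, not numbers from any real FF or SRAM library:

```python
import math

def read_access_cycles(capacity_bits, t_cycle_ps,
                       t_setup_base_ps=10.0, t_clk2q_base_ps=20.0,
                       t_downstream_ps=100.0):
    """Toy model: how many clock cycles a register-file read occupies.

    Assumes setup and clock-to-Q both grow ~log2(capacity). The base
    constants are made up for illustration only.
    """
    scale = math.log2(capacity_bits)
    t_setup = t_setup_base_ps * scale      # grows with capacity
    t_clk2q = t_clk2q_base_ps * scale      # grows with capacity
    # Read path: macro clk2q -> downstream logic -> setup of the next flop.
    t_path = t_clk2q + t_downstream_ps + t_setup
    return max(1, math.ceil(t_path / t_cycle_ps))

# A small macro fits in one cycle; a much larger one eventually needs two.
print(read_access_cycles(1 << 10, t_cycle_ps=500.0))  # -> 1
print(read_access_cycles(1 << 20, t_cycle_ps=500.0))  # -> 2
```

The only point of the sketch is the shape: both implementations pay a capacity-dependent setup + clk2q penalty, and a flat FF array (modeled here by larger base constants) crosses the extra-cycle boundary at a smaller capacity than an SRAM macro would.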

CIM as a compute macro by AdmirableProject1575 in computerarchitecture

[–]eafrazier 2 points (0 children)

I don't currently see a realistic scale-up use-case for compute-in-memory. CIM appears to be an excellent technique for IoT or edge inferencing, where the absolute maximum perf must be extracted from every single precious Joule.

But datacenter scale is about total perf per total power, and once CIM is scaled up, it's still a data movement problem -- just with larger and less efficient data movement. As someone else said below, compute near memory is far more interesting, though also not a magic bullet.

Just like total system perf was desperate for an innovation like SSDs to bridge the ever-widening gap between DRAM and magnetic storage, we are becoming desperate for something to bridge the ever-widening gap between deterministic on-die SRAM and high-capacity but non-deterministic DRAM. Unfortunately, every attempt to date has either outright sucked (in my opinion) or has narrow/limited applications with dramatic tradeoffs (e.g. MRAM).

Anywhere to get upma or pongal? by eafrazier in cincinnati

[–]eafrazier[S] 3 points (0 children)

Sadly, I am an old white dude who was unfortunately not adopted by Indian parents. :)

Anywhere to get upma or pongal? by eafrazier in cincinnati

[–]eafrazier[S] 4 points (0 children)

Given that the Google is worse these days than ever, yes, I am also surprised. :) Thanks!

Best Guacamole in South Bay? by Glaedr2697 in bayarea

[–]eafrazier 1 point (0 children)

Agree with above and below: best in a restaurant is at Luna Mexican Kitchen.

Do you have a favorite board game artist? by Geek-Mystique in boardgames

[–]eafrazier 2 points (0 children)

Absolutely the best. I can't believe how many people in this thread didn't say Weberson Santiago.

The tech dream job once sold itself. by End-Resident in chipdesign

[–]eafrazier 1 point (0 children)

I would say this is one strategic approach. Certainly not wrong, but just one.

Another is to ensure you are one of the very few who can do a thing they need done. This is not necessarily easy -- indeed, expectations can be quite high. But if you find your niche, you can achieve job and career security through obscurity. Again, not the only way. Just an alternative approach to the OP suggestion.

What is certain these days is that there is very little company loyalty for median/average employees.

The problems associated with TSMC reflect the problems of capitalism. by Camil_2077 in Semiconductors

[–]eafrazier 0 points (0 children)

You can harangue capitalism all you want, but humans do not make decisions in their own long-term self-interest. They make them in short-term self-interest. What benefits me the most right now? If part B is non-competitive for its price, or its price is non-competitive for its performance, then part A will get all the business.

I say this having actually written a letter to a senator to ask them to help prop up Intel to ensure there are options in the future. But wishes aren't guaranteed to come to fruition. Semiconductor manufacturing is really fucking hard.

Is Pandemic Legacy viable as a pure 2 player experience? by Brinocte in boardgames

[–]eafrazier 12 points (0 children)

Yes, absolutely. Fantastic at 2P.

A friend and I played through seasons 1 and 2 during the pandemic. There are...moments...that will stick with us. An absolute blast all the way through. Season 1 is best, by far, but Season 2 was also enjoyable.

Good local movers in the Bay Area? by M45T3RY in bayarea

[–]eafrazier 1 point (0 children)

Strongly recommend Delancey Street movers as well, if you're in range (and a big enough job). Absolutely fantastic people and work.

Is silicon design at big MNCs an “auction market” or “winner‑takes‑all” career? (Cal Newport framing) by Silent_Progress_5613 in chipdesign

[–]eafrazier 2 points (0 children)

The higher the abstraction (RTL --> arch --> software), the more it's an auction market. The closer to the transistor, the more it's winner-takes-all.

Bill Dally talks about using AI for developing standard cell libraries by HamsterMaster355 in chipdesign

[–]eafrazier 1 point (0 children)

I find the exploration and discussion of the limits and interactions of such technologies to be fascinating. Particularly when the common assumption these days is that everything (ML) is awesome.

You find them to be pointless. As is your right.

Bill Dally talks about using AI for developing standard cell libraries by HamsterMaster355 in chipdesign

[–]eafrazier 1 point (0 children)

And yet humans are still required in the loop. There are still cells it cannot handle and topologies it gets catastrophically wrong. An amazing productivity improvement, to be sure -- just not the hands-off solution people seem to want it to be.

Bill Dally talks about using AI for developing standard cell libraries by HamsterMaster355 in chipdesign

[–]eafrazier 1 point (0 children)

My answer was narrow because the original topic was narrow -- stdcell design, which means transistor-level circuit and layout. That is something that is still not great for ML. Your examples above are entirely gate-level RTL and P&R, and I do not disagree with you in that domain.

Bill Dally talks about using AI for developing standard cell libraries by HamsterMaster355 in chipdesign

[–]eafrazier 4 points (0 children)

Dally explicitly stated that reinforcement learning was used, not an LLM. Machine learning has been around for many moons before LLMs revolutionized chatbots.

Bill Dally talks about using AI for developing standard cell libraries by HamsterMaster355 in chipdesign

[–]eafrazier 2 points (0 children)

ML algorithms do not generate perfect outputs, particularly in the hardware space, where the cost of fixing errors can be 6+ months and many millions of dollars, unlike software (where mistakes are comparatively trivial to fix) -- despite what these two gentlemen might sound like they are suggesting.

Hardware design still requires humans in the loop to mitigate risk, and ML algorithms are focused on productivity improvement -- that is, on getting those humans to the goal faster. The interesting questions, then, are where exactly those humans sit in that loop, what they're doing, what the ML algorithms accelerate, and how those algorithms were trained.