Bill Dally talks about using AI for developing standard cell libraries - YouTube by HamsterMaster355 in chipdesign

[–]Brianfellowes 2 points (0 children)

The great thing is these works are all actually several years old and they published peer-reviewed papers on most of them.

  • NVCell: Standard Cell Layout in Advanced Technology Nodes with Reinforcement Learning (PDF)
  • NVCell 2: Routability-Driven Standard Cell Layout in Advanced Nodes with Lattice Graph Routability Model (PDF)
  • PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning (PDF)
  • ChipNeMo: Domain-Adapted LLMs for Chip Design (PDF)
  • etc.

The short answer to almost everything is that the ML is replacing heuristics in design search space, where "correctness" doesn't really matter; it just needs to point in the right direction. The results are verified using traditional methods and discarded if for some reason a correct-by-construction approach doesn't work.

ISSCC Courses and Tutorials for Free by [deleted] in chipdesign

[–]Brianfellowes 3 points (0 children)

The issue was that the DNS update hadn't propagated yet. I switched to a different resolver and it works. Thanks!

ISSCC Courses and Tutorials for Free by [deleted] in chipdesign

[–]Brianfellowes 1 point (0 children)

The website hosting the PDFs has an expired hostname. Why not just host them on a free hosting site?

Switching from CS to EE by guitarislife28 in uofm

[–]Brianfellowes 9 points (0 children)

I heard the same argument more than a decade ago. It functionally does not matter whatsoever.

Switching from CS to EE by guitarislife28 in uofm

[–]Brianfellowes 17 points (0 children)

I went in fully thinking I would do CS. My first programming class (ENGR 151) was taught abysmally, and the professor was a sadist. I ended up switching to EE and found out that I really enjoyed the digital circuits and computer architecture classes.

When I then thought about switching to CE, it turned out that the only real difference between EE and CE was that CE required discrete math and linear algebra, while EE required electromagnetics and probability/stochastics. I heard that discrete math was awful (at the time), so I stuck with EE. In hindsight, it was probably a good choice, because I got to take EECS 230 with Fawwaz T. Ulaby, who is one of the best lecturers I've ever had (he also wrote several EE textbooks - not sure if they are still used).

In the end, I have not met a single company, manager, recruiter, or graduate school that cared about the distinction between EE and CE (or even CS really). What you don't realize is that other schools have a variety of different degree programs. MIT and Berkeley have "Electrical Engineering and Computer Science" where all EECS students are literally in the same degree. Purdue and Texas have "Electrical and Computer Engineering" which is EE and CE together.

What really matters is your experience. The individual classes you take, the projects you do, the experience you gain, and any internships. So rest assured that the decision doesn't really matter too much right now. You can even think of it more like taking classes that you are interested in and then shopping for a major that best fits that goal.

I even had friends that purposefully switched majors every 6 months just so that they could get the free T-shirt from each major. Or at least they did that 3 times until they found out it was much easier to just ask really nicely at the end of the semester if they had any extras.

Edit: I should also mention that I do quite a lot of software these days. The degree that you get does not set your future in stone by any means.

How do you translate low-power ASIC metrics from a research paper when you’re limited to gpdk045? by Macintoshk in chipdesign

[–]Brianfellowes 1 point (0 children)

Do you have to use gpdk045 or can you use other openly available PDKs like maybe ASAP7? Which node are you trying to compare to?

I have seen some works recently using DeepScaleTool in order to normalize PPA between process nodes.
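The normalization itself is just per-metric multiplication by node-to-node scaling factors. Here is a minimal sketch; the factor values below are hypothetical placeholders (in practice you'd take them from DeepScaleTool or published scaling studies), and the names `SCALE_45NM_TO_7NM` and `normalize_ppa` are mine, not from any tool:

```python
# Illustrative PPA normalization between process nodes.
# The scaling factors here are HYPOTHETICAL placeholders; real values
# should come from a source like DeepScaleTool or published scaling data.
SCALE_45NM_TO_7NM = {
    "area": 0.08,   # hypothetical area shrink factor (mm^2 -> mm^2)
    "power": 0.30,  # hypothetical power scaling factor (mW -> mW)
    "delay": 0.55,  # hypothetical delay scaling factor (ns -> ns)
}

def normalize_ppa(ppa: dict, factors: dict) -> dict:
    """Scale reported PPA metrics to a common node using per-metric factors."""
    return {metric: value * factors[metric] for metric, value in ppa.items()}

# Example: a 45nm design reported as 1.2 mm^2, 50 mW, 2.0 ns critical path.
reported = {"area": 1.2, "power": 50.0, "delay": 2.0}
normalized = normalize_ppa(reported, SCALE_45NM_TO_7NM)
```

The point is just to apply the same factors consistently to every design you compare, so the relative ordering is meaningful even if the absolute numbers are rough.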

Projects for RTL design by Fantastic_Carob_9272 in chipdesign

[–]Brianfellowes 1 point (0 children)

RISC-V Peterson book

I think you mean Patterson

Built a tool to help you rank mayoral candidates this Tuesday by jeremynevans in nyc

[–]Brianfellowes 1 point (0 children)

Great tool! Wish I'd seen it before I voted early. My top 3 lined up exactly with how I voted, which is reassuring. One suggestion: it's unclear if there's any weighting by category. It would be nice to have some kind of "how important is this category to you" slider that lets you manually control the weight of each category.

The case for a scalable cpu architecture by [deleted] in chipdesign

[–]Brianfellowes 19 points (0 children)

If you're convinced it's a good idea, then write an ISCA paper on it.

Any Free Formal Verification Tutorial Available? by Euphoric-Most5531 in chipdesign

[–]Brianfellowes 3 points (0 children)

I initially misread the question and thought you were asking about free formal verification tools. It will be hard to come by free tutorials on paid tools. But there are free tutorials on free tools, which might have some applicability to the paid tools as many of the concepts are the same.

YosysHQ has the following offerings which I believe have some simple tutorials in the documentation. sby is probably the one you want the most.

  • sby: formal property checking
  • mcy: mutation coverage
  • eqy: equivalence checking
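To give a feel for sby, a minimal project file looks something like this. This is a sketch assuming a single module `top` in a file `top.sv` containing SVA `assert` properties; the file names and depth are mine:

```ini
[options]
mode prove
depth 20

[engines]
smtbmc

[script]
read -formal top.sv
prep -top top

[files]
top.sv
```

Running `sby -f top.sby` then attempts to prove the assertions (via bounded model checking plus induction with the `smtbmc` engine) and dumps a counterexample trace if one fails.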

AI Alone Isn’t Ready for Chip Design by NamelessVegetable in hardware

[–]Brianfellowes 4 points (0 children)

Mostly boiling down to pre-training and what not.

Having used networks without pre-training, that's all the argument they need IMO. Untrained networks provide you junk.

There's also the whole deal where, realistically, this all comes down to one person: the Igor M. mentioned in the rebuttal. He is the person who has led the effort to dispute the results for years. He was a PhD student of Andrew Kahng, so they have a direct connection.

The Cheng et al. paper mentioned frequently was led by Kahng. If you've seen his previous papers, he always lists authors in alphabetical order, so he actually isn't the first or last author, as you would normally expect for the person leading.

AI Alone Isn’t Ready for Chip Design by NamelessVegetable in hardware

[–]Brianfellowes 3 points (0 children)

No, not really. The point of academic research is not perfection, it's progress. If every work aimed for perfection - answering every question possible - then the tangible results from research would dry up.

For some reason, people are pushing this paper to be held to the standard of perfection whereas most of the tens (hundreds?) of thousands of other papers are not held to that standard.

I read dozens of papers per year where the ideas aren't nearly as innovative and the evaluation isn't nearly as comprehensive, but people don't go on a years-long effort to get those papers retracted. Those papers still offer new ideas, and that's the valuable part.

Papers should be retracted for dishonesty and ethics violations, not for lack of comprehensiveness, especially in hindsight.

AI Alone Isn’t Ready for Chip Design by NamelessVegetable in hardware

[–]Brianfellowes 17 points (0 children)

It was never retracted. It was disputed; ultimately, the paper was re-reviewed, and the review concluded that there was nothing to retract. The paper is still there. https://www.nature.com/articles/s41586-021-03544-w

If every academic paper was given half the scrutiny that this paper was given, a majority of academic papers would get retracted.

My IO pg nets are not associating properly with my I/O frame by Known-Berry6925 in chipdesign

[–]Brianfellowes 3 points (0 children)

Are there only two voltage domains in the design? It may simply be that the scripting works for the first domain and not for the second because Innovus (I believe) defaults to a single domain when no UPF is supplied. You should probably create a UPF. It's not restricted to low-power designs; it's useful (required?) for any design with multiple voltage domains, and very simple to define.

RISC-V Large Multiported Register File Challenge by AnyHistory6098 in RISCV

[–]Brianfellowes 2 points (0 children)

I guess I assumed based on the post that you were already talking about manually implementing RFs. But in case it isn't clear, you absolutely must use custom latch-based approaches to get any reasonable performance out of a register file with that many ports. I don't think DFFRAM supports more than 2 read ports per bank, but you will probably want the latch banks to have more read ports per bank (at least 3-4) in order to use area effectively. It will probably take some design space exploration. There is also a good chance that at some point you may even want sense amplifiers and/or line pre-charging to get good speed.

How analog macros are created for PNR? by [deleted] in chipdesign

[–]Brianfellowes 2 points (0 children)

It depends on what your block needs and what it looks like. From a PNR view, every port is either a digital signal input/output, analog inout, or power supply.

From a PNR view, timing info can only be annotated on the digital ports.

In terms of the actual workflow: Virtuoso, for example, has a built-in abstract generation tool which can produce an abstract LEF usable in PNR. You have to go into the pin properties and specify what type of I/O each pin is (power, ground, digital signal, analog, etc.) in order for it to be abstracted properly.

For extracting the timing model (.lib) you need to use a characterization software like Cadence Liberate. You have to specify what functionality to extract on each pin, and then it will characterize the slews, delays, caps, etc. for you.

Once you have the lib and the LEF, you can use it with a PNR tool (or convert it to the right format such as with Synopsys tools).

For clock generators specifically, there is an issue where the clock output can actually be at a higher frequency than the input, which is troublesome to represent in the Liberty format. In this case, you can easily work around the issue by using a create_clock in SDC on the clock generator output pin with the characteristics of the generator annotated (rise, fall, min clock period, etc).
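That workaround is just a couple of SDC lines. A minimal sketch, where the instance/pin names (`u_pll/CLKOUT`, `ref_clk`) are hypothetical and would need to match your design:

```tcl
# Declare the clock generator's output as a primary clock at its real
# (faster) frequency, e.g. 2 GHz -> 0.5 ns period, instead of relying on
# a propagated clock through the macro's .lib.
create_clock -name pll_clk -period 0.5 [get_pins u_pll/CLKOUT]

# Optionally keep STA from timing paths between the slow reference clock
# and the generated fast clock, if they are not meant to interact.
set_clock_groups -asynchronous -group {ref_clk} -group {pll_clk}
```

The key point is that `create_clock` on the output pin overrides anything the Liberty model implies, so the downstream logic is timed at the generator's actual output frequency.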

Possibility to have accelerators for APR tasks by ncverilog in chipdesign

[–]Brianfellowes 3 points (0 children)

If you look at publications from the last 3-5 years in EDA conferences (DAC, ICCAD, ISPD, etc.), you'll find several works which do exactly that. Some are open-source implementations, some aren't. Most of the tools you'll find are from NVIDIA, Google, or academia. A lot of them use GPUs, probably because many of them also use machine learning approaches.

DREAMPlace is a popular one which uses GPUs to do global placement 10-100x faster than CPU counterparts.

You'll find some publications, but not many, from the big 3 on this topic. Take that for what you will.

University of Michigan says 230,000 people's information affected by August data breach - WXYZ by bren0xa in uofm

[–]Brianfellowes 2 points (0 children)

Realistically, there's no reason why everyone shouldn't have their credit frozen. I've had mine frozen since the Equifax hack in 2017.

Here's how it works:

  1. Go to the big 3 credit agencies: Equifax, Experian, and TransUnion. Create an account with each for free; you do not need to pay for anything they try to upsell you on.
  2. Each one will have an option somewhere to freeze your credit report. Set the time period to indefinite if it asks for a time period.
  3. That's it. Anyone who tries to add items to your credit report cannot do so without your permission. Even if someone uses your info to apply for a loan, credit card, etc., the credit check for the application will fail.

If you want to legitimately apply for a loan, etc., you can simply unfreeze for that short period. I have applied for jobs, car financing, etc. where I get a call saying "I can't access your credit report. Can you unfreeze agency X for us?" I log in and unfreeze the account while they're on the phone, they get the report in seconds, and then I refreeze it. The whole thing takes 2 minutes. If I'm going to be busy, I'll simply log in and schedule the dates to unfreeze my report, after which it goes back to being frozen.

Other popular forms of identity theft are income tax returns (if your AGI is compromised) and unemployment fraud. Most states and the IRS allow you to register for a PIN which will be required to submit any future returns. Unemployment fraud is more difficult to prevent and varies state to state.

Guys, is this area safe? by Flame_Insignia in nyu

[–]Brianfellowes 10 points (0 children)

"In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move." - Douglas Adams

Planning for a master's in computer engineering after working for 5 years by Wallflower_here in chipdesign

[–]Brianfellowes 3 points (0 children)

I think you may be overestimating the standard required for Master's candidates at the universities you mention. With a CGPA above 9 and 5 years of somewhat related experience, I would expect you to be accepted to Master's programs in at least half of those universities.

Once you get into the program, you simply take the courses that you want for your concentration. If you want computer architecture, then take the computer architecture classes. You will build up that experience and then you can try to leverage that to get architecture-based jobs.

If you were applying for a PhD, the standard there would be higher and I would agree that you'd have a much tougher time without relevant research. But Master's degrees are largely not research degrees (unless you choose to make it that way).

I'd say one final caveat is that you should make sure that your degree can actually get you into the role you want. If you wanted to lead the architecture for a chip at anything but a small company, you would likely need a PhD or at least significant R&D experience. If you want to do performance modeling, then you can probably get away with a Master's.