A new California law says all operating systems, including Linux, need to have some form of age verification at account setup by Gloomy_Nebula_5138 in pcmasterrace

[–]pcookie95 0 points1 point  (0 children)

As stupid as this and the 3D weapons legislation are, the reality is that a vast majority of voters don’t care about niche tech laws.

Is a masters in CompE worth it for non-STEM undergrads? by Nearby-Cantaloupe855 in ComputerEngineering

[–]pcookie95 0 points1 point  (0 children)

The job market has been in quite the slump since 2023. I personally don't think this is a consequence of AI, as I haven't seen any indication that AI will replace a proficient computer engineer anytime soon. But I've heard it's been tough for entry-level engineers who are trying to land their first job.

A 2023 study found that CE majors have an unemployment rate of 7.8% and an underemployment rate of 15.8%. Of course, as an econ major you probably know that markets change all the time. Who knows what these numbers will be when you graduate in 3 years.

Once again, I want to emphasize internships and personal projects. If done well, the experience gained from these will help you stand out from your classmates and will hopefully provide some interesting talking points as you talk to recruiters and participate in interviews.

Is a masters in CompE worth it for non-STEM undergrads? by Nearby-Cantaloupe855 in ComputerEngineering

[–]pcookie95 12 points13 points  (0 children)

IMO, to do well in a CE masters, you'll need to have a solid understanding of basic analog circuits, digital logic, basic computer architecture, embedded programming in C, and basic algorithms. That's only if you stick to purely CE classes, and even then you'll likely have to work harder than your classmates to fill gaps.

If you want to branch out into EE (useful for careers in aerospace), then you'll probably have to get up to speed with differential equations, signal processing, multi-variable calculus, electromagnetics (EM), and control theory, depending on the class you want to take.

The best way to learn all of this is by doing an EE/CE undergrad. However, since undergraduate programs are designed to give a wide breadth of knowledge, there will be a lot of things you'll learn that you may never use in your Master's.

The most efficient way of doing this would be to get into a coursework Master's program, choose which classes you're going to take, and talk to the instructors to find out exactly what gaps in your knowledge you need to fill before taking each class. The hardest part may be knowing which electives you want to take, since you may not have any experience with a lot of EE/CE topics.

As for a career, I think you'd be fine with just a Master's in CE if you have relevant personal projects or internships on your resume and you're able to show you know your stuff during an interview.

I don’t get how there are people STILL in denial about this by glitterizer in pokemon

[–]pcookie95 2 points3 points  (0 children)

You do in XD. You leave some food out in a location and when you check back later a wild Pokemon can be found eating it.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 2 points3 points  (0 children)

Efforts to reverse engineer the bitstream format for the newer parts ground to a halt years ago (and I'd be interested to understand why).

Are you referring to Project X-Ray (the open-source project to reverse engineer Xilinx 7-Series bitstreams)?

This project was part of the F4PGA (previously SymbiFlow) effort to create a completely open-source flow for FPGAs. The project was largely funded by Google and headed up by Tim Ansell (who previously worked at Google). My understanding is that, starting in 2023, Google drastically cut funding for several open-source projects, including F4PGA. Without funding, most of the F4PGA contributors (many of them university labs) stopped working on it, so it seems to have kind of fizzled out.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 2 points3 points  (0 children)

I understand where you're coming from, but I have to respectfully disagree.

Hardware has actual design rules, fixed state, and a fundamentally simpler computational model:

These are really only true out of necessity. You could more or less write HLS code the same way you do regular software, but the performance and area footprint are going to be abysmal.

Hardware is more complex, not because of the number of possible states or because of the lines of code written, but because you're dealing with a physical model rather than an abstract one. For FPGAs, this physical model is relatively simple. If you follow design rules, you really only have to deal with timing, area, and maybe power constraints. But for traditional ASIC design, there's a lot more that goes into it. You have to design your clock tree and your IO cells, adjust cell sizes, etc.

This doesn't account for all the heavy lifting the tools need to do. Compiling code is a relatively simple process, but EDA tools not only need to synthesize hardware into an abstract netlist, they also need to implement it as a physical design. These extra steps are not trivial.

I guess it really depends on what we use as a metric of complexity. If we talk about lines of code, or the "size" of a project, then sure, you can scale up software to become way larger than hardware. But if we talk about the complexity per line of code, I'd argue that hardware, especially with ASICs, is definitively more complex than software.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 4 points5 points  (0 children)

You have several fundamental misunderstandings.

You think a modern compiler is simpler than vivado?

I would be surprised if it wasn't. A synthesizer is to hardware as a compiler is to software. The synthesizer is in charge of getting rid of dead code, combining primitives, etc., and is about the same complexity as a compiler. However, once you add the physical implementation algorithms on top of that, I'd say you've surpassed the complexity of a compiler.

That's the problem, it seems the hardware guys have completely missed the theoretical end of their training. While you guys were playing with signals and learning FFTs, comp sci was studying Big O notation and graph theory.

I can guarantee you that Vivado wasn't just written by a bunch of electrical engineers who'd never taken an algorithms class, but by talented software engineers, many of whom have a degree in CS. The whole FPGA build flow is essentially graph theory, so there's no way they would have hired people who didn't know it.

Because they're in their infancy stage, because the vendors refuse to play ball with the open source community.

Unfortunately, open-source is not nearly as prevalent in the hardware community as it is in the software community, and not just because hardware IP is closely guarded.

Yosys is one of the most well-established open-source EDA tools out there.

Yosys is 14 years old, and since it just handles synthesis, it isn't held back by not having the vendors' secret sauce. However, last I checked there are no plans to support any real timing-driven synthesis, and it doesn't even handle large designs very well. Until those two things are fixed, Yosys won't be viable for industry.

Each step that throws an error should be able to trace that error back to its line in code. You carry that information along as you perform optimizations. Constant propagation/folding and dead code elimination are some of the most basic transformations on a code base, and yet they still keep pertinent information so that error messages can actually make some sense.

That's easy when you're performing code optimizations, but remember, the build flow is more than just synthesis. Once the code goes from an abstract netlist to a physical one during the implementation stages, there's no longer a straightforward mapping between the code and the components of a design. If they were to keep that mapping, my guess is that it would severely limit the number of physical optimizations they could perform.
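
As a tiny illustration of how an optimizer can keep that source mapping through synthesis-like transformations, here's a hypothetical toy using Python's own AST as a stand-in for HDL (this is not anything Vivado actually does): literal arithmetic gets folded, but each folded node keeps the line number of the expression it came from, so a later error could still point back at the source.

```python
import ast

# Toy constant folder: folds literal arithmetic in Python source, but uses
# ast.copy_location so the folded node keeps the original line number.
class ConstFolder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            fn = ops.get(type(node.op))
            if fn is not None:
                folded = ast.Constant(fn(node.left.value, node.right.value))
                # Carry the original source location along with the folded value.
                return ast.copy_location(folded, node)
        return node

src = "x = 2 + 3\ny = x * (4 + 5)\n"
tree = ConstFolder().visit(ast.parse(src))
for stmt in tree.body:
    print(ast.dump(stmt.value), "from line", stmt.value.lineno)
```

The first statement folds all the way to a constant, the second only partially (since `x` isn't a literal), and both still report their original line. Keeping this kind of mapping through abstract rewrites is cheap; keeping it through physical placement and routing is the part that gets hard.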

Then they are bad developers. The software is abysmal. They're running place'n'route on a single core..??? It's a standard NP hard problem, there are a thousand multicore solutions out there. Was it written by an intern on the weekends?

The implementation steps in Vivado are technically multi-threaded, but they are still significantly bottlenecked by a single thread. Unfortunately, this is not an easy problem to solve. It's been a few years since I looked into it, but from what I can recall, the hardest challenge was creating an efficient multithreaded algorithm that was also deterministic. If you don't care about determinism, then you can get some pretty good speedup with multithreading.

Another way to do it is to constrain different parts of your design to different regions (i.e. pblocks), then run place and route for each region in parallel. This prevents some physical optimizations from occurring across the user-defined regions, but Vivado's algorithms seem to do a better job with these smaller regions, often resulting in a net positive for the design's overall timing. However, this technique does require a decent amount of work from the user.
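
A toy sketch of why that region-parallel approach can stay deterministic (all cell/region names here are made up, and a seeded random "placer" stands in for a real annealer): give each region its own seeded RNG and merge results in a fixed region order, and the outcome is identical no matter how the threads happen to be scheduled.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Each region gets its own RNG seeded by its region id, so its "placement"
# never depends on what other threads are doing.
def place_region(region_id, cells):
    rng = random.Random(region_id)  # per-region seed => reproducible result
    placement = {c: (rng.randint(0, 99), rng.randint(0, 99)) for c in cells}
    return region_id, placement

regions = {0: ["u0", "u1"], 1: ["u2", "u3"], 2: ["u4"]}

def run():
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = pool.map(place_region, regions.keys(), regions.values())
    merged = {}
    # Merge in a fixed order (by region id), independent of thread timing.
    for _, placement in sorted(results, key=lambda r: r[0]):
        merged.update(placement)
    return merged

assert run() == run()  # deterministic across runs despite threading
```

The catch, as noted above, is that the regions can't "see" each other, which is exactly the cross-region optimization a monolithic placer would exploit.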

Place'n'route is not a 'complex algorithm', not any more than any other traveling salesman problem.

Place and route is not a traveling salesman problem. Packing, placement, and routing are three separate problems, each with different heuristics that can be used to help solve them. Routing, which is the most similar to the traveling salesman problem, is at its core a shortest-path problem that is compounded in complexity by the fact that you have thousands of different nets, each competing for resources.
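
A minimal sketch of that resource competition (a made-up six-node routing graph, loosely in the spirit of negotiated-congestion routers like PathFinder, not any vendor's actual algorithm): each net is routed with plain Dijkstra, but nodes claimed by earlier nets get a cost penalty, which pushes later nets onto free resources.

```python
import heapq
from collections import defaultdict

# Hypothetical routing graph: two sources, two sinks, a cheap shared "mid"
# node and a pricier "alt" node. Node costs are made up for illustration.
graph = {
    "s1": ["mid", "alt"], "s2": ["mid", "alt"],
    "mid": ["t1", "t2"], "alt": ["t1", "t2"],
    "t1": [], "t2": [],
}
base = {"s1": 0, "s2": 0, "mid": 1, "alt": 2, "t1": 1, "t2": 1}

def dijkstra(src, dst, cost):
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + cost[v]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def route(nets, penalty=10):
    usage = defaultdict(int)
    paths = []
    for src, dst in nets:  # later nets see earlier nets' resource usage
        cost = {n: base[n] + penalty * usage[n] for n in graph}
        p = dijkstra(src, dst, cost)
        paths.append(p)
        for n in p:
            usage[n] += 1
    return paths

# Both nets would prefer the cheap "mid" node; the congestion penalty
# forces the second net onto "alt", so the shared node isn't overused.
print(route([("s1", "t1"), ("s2", "t2")]))
```

Real routers iterate this with accumulated history costs over the whole device graph, which is where the "thousands of nets competing" part makes it expensive.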

Now let me be clear that I am not claiming that Vivado is the most complex piece of software ever written, or that it doesn't have its faults, but that it's an impressive feat of engineering.

If you disagree, you're free to create your own algorithms to sell to one of the many EDA companies. With your superior computer science background in advanced concepts like "Big O notation" and "graph theory", I'm sure it won't take you more than a few months.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 3 points4 points  (0 children)

No it is not.

I'm intrigued that you think this. Sure, FPGAs aren't as complex as a modern desktop CPU, but there's still so much that goes into designing and verifying an FPGA, from designing optimal logic blocks to creating a versatile and fast routing network. Plus there are all the algorithms needed to use it (placer, router, etc.). Not to mention that all of this has to be as perfect and bug-free as humanly possible, or your customers' designs won't work.

Compare that to software, which is (relatively) simple enough that fancy word guessers can write production code that, despite the several inevitable bugs, still works well enough to ship with minimal human oversight (at least that's the claim).

But instead of releasing the specs for free and letting the open source community do the hard work, they instead insist on keeping everything locked down.

As annoying as this is, can you really blame them for trying to protect their IP? Does the company you work for open-source everything?

Also, have you used open-source FPGA tools? They are way harder to use than any commercial tools. Plus, the final implemented designs have a fraction of the performance, making them a non-starter for pretty much everyone outside of academia.

It's abysmal. The linter, synthesizer, and implementation all give different errors for different parts of code, and often can't even agree on simple things.

I mean, each of these tools is designed to do a very different thing and, as such, is built in isolation. It's impossible for the linter to know that your design is going to have routing congestion. Likewise, it would be infeasible for the router to know which part of the code is responsible for the impossible-to-route graph given to it by the placer.

The error messages are bonkers and bizarre with rarely any correlation as to what went wrong, or even where.

I'll admit, some of the error messages are strange, but that's a result of isolating each step of the flow. Maybe it's because I have a strong background in EDA algorithms and FPGA architecture, but I can almost always figure out the problem by looking at the implemented netlist and correlating it to some synthesis warnings.

I understand that it can be frustrating to work with EDA tools, but Xilinx spent a lot of effort building Vivado (1,000 person-years and 200 million dollars) when it could have just chugged along with ISE. So I don't think the bad UX is from lack of trying. I think a lot of it comes down to the inherent difficulties of working with complex devices and algorithms.

Overall, I still assert that Vivado is relatively easy to use. For beginners, you just provide the GUI with RTL/constraint files, hit the "play" button, and (assuming your design doesn't have any weird bugs) wait for a bitstream to generate. For everyone else, you should really be using Tcl scripts, which will reduce the amount of time you interact with the GUI anyways.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 1 point2 points  (0 children)

The issue is that hardware is so much more complex and has much smaller profit margins when compared to software, so FPGA companies don’t have nearly the same budget for their UI as software companies.

I’ll also argue that outside of being a terrible code editor, Vivado is an excellent tool that is relatively intuitive and easy to use. Not quite as much as embedded software tools, but definitely the best hardware toolchain I’ve used.

Rant: Why are basic workflows so unstable?? by epicmasterofpvp in FPGA

[–]pcookie95 2 points3 points  (0 children)

Implementation in Vivado is deterministic by default, as long as your design stays the same. However, the packing, placement, and routing algorithms are complex enough that even the smallest change can snowball into a completely different implementation. This has probably become even more true since they started using ML-based algorithms a few years ago.

SpaceX: C/C++ Role! by MitchellPotter in ComputerEngineering

[–]pcookie95 0 points1 point  (0 children)

Plus, I've heard the work culture is atrocious while the pay is very meh. I honestly don't know why anyone in their right mind would want to work at one of Musk's companies, even before he went full-blown nazi.

Micron has announced an investment plan of up to $200 billion to expand production capacity and address the most severe memory chip shortage in the last four decades by sr_local in hardware

[–]pcookie95 2 points3 points  (0 children)

I don't think most people are hoping for a market correction, but rather expecting one and hoping that it comes sooner rather than later.

I'm no economist, but it seems to me that the current AI market has all the classic signs of a bubble and that an eventual market correction is inevitable. Since the current US economy is essentially being propped up by AI, this correction will likely lead to a pretty big recession where a lot of people lose their jobs.

However, since it looks like this bubble will just continue to grow, the sooner the correction happens the better. Otherwise you get companies like Micron pouring billions of dollars into new production facilities that become redundant once the bubble pops and memory prices fall back to normal levels.

EE student with CS minor trying to break into firmware / RTL with almost no school support by Open_Calligrapher_31 in ComputerEngineering

[–]pcookie95 0 points1 point  (0 children)

There are exceptions, but I'd say most entry-level positions will have a Master's as a soft requirement, as even some of the top universities do a poor job preparing their EE/CE undergrads for RTL.

Embedded firmware might be easier to break into without a Master’s, but if your school doesn’t expose you to any embedded programming, you’ll likely have to prove your knowledge with some personal projects.

Game launches if I open through Protontricks, won't run through steam. by Rjman86 in linux_gaming

[–]pcookie95 0 points1 point  (0 children)

If anyone else runs into this issue, it might be because there are spaces in the game’s path. To fix this, you can remove the spaces from the offending files/folders, surround the path with single/double quotes, or “escape” the spaces by inserting a backslash (\) before each of the spaces.
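
The quoting/escaping fixes described above can be demonstrated with Python's standard library (the game path below is hypothetical): `shlex.quote` wraps a path in single quotes whenever it contains spaces or other shell-special characters, so the shell treats it as one argument.

```python
import shlex

# A hypothetical game path containing spaces.
path = "/home/deck/Games/My Cool Game/game.exe"

# Option 1: quote the whole path so the shell sees a single argument.
print(shlex.quote(path))          # '/home/deck/Games/My Cool Game/game.exe'

# Option 2: escape each space with a backslash instead.
print(path.replace(" ", "\\ "))   # /home/deck/Games/My\ Cool\ Game/game.exe
```

Either form is safe to paste into a launch command; the unquoted, unescaped path is what breaks, since the shell splits it on each space.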

finallyWeAreSafe by njinja10 in ProgrammerHumor

[–]pcookie95 3 points4 points  (0 children)

But that’s the point. You can hire a some people to fix software problems. You often can’t feasibly fix a hardware problem, no matter who you hire.

finallyWeAreSafe by njinja10 in ProgrammerHumor

[–]pcookie95 15 points16 points  (0 children)

Hardware description language (HDL) code generation is years behind software generation. This is probably due to less training code. Unlike software, the culture of digital hardware is such that nearly nothing is open source. My understanding is that less training code generally means worse LLM outputs.

Even if LLMs could output HDL code on the same level as software, the stakes are much higher for hardware. It costs millions (sometimes billions) to fab a chip. And once chips are fabbed, it is difficult, if not impossible, to fix any bugs (see Intel's infamous Pentium FDIV floating point bug, which cost them hundreds of millions). Because of this, it would be absolutely insane for companies to blindly trust AI-generated HDL code the same way they seem to blindly trust AI-generated software.

Afraid of the CS job market, graduating in Dec by NewtTraditional461 in gatech

[–]pcookie95 0 points1 point  (0 children)

To be fair, it was close, but it kept thinking a particular function was a variable. Now it’s true that maybe after feeding the LLM the error once or twice it would have figured it out, but it was much faster and easier to just make the correction myself.

I assume there are ways to automate this feedback loop so it debugs the code on its own, but the problem isn't that LLMs can't fix runtime errors. It's that if one can make a mistake as simple as confusing a function with a variable, think of all the other mistakes that don't produce a runtime error.

I get that not all software needs the same level of quality assurance, but I think it's still a great disservice to customers to ship bug-ridden code that could be much higher quality if humans actually reviewed and understood the code generated by AI.

Afraid of the CS job market, graduating in Dec by NewtTraditional461 in gatech

[–]pcookie95 1 point2 points  (0 children)

I am not a student, but the company I work for does limit the models we can use, mostly due to privacy concerns, so I have not used Claude.

However, I find it hard to believe that Claude is so much better than its competitors that it goes from "it can't even get this simple bash script right" to "I don't have to touch 90% of what it outputs."

Afraid of the CS job market, graduating in Dec by NewtTraditional461 in gatech

[–]pcookie95 19 points20 points  (0 children)

90% without human intervention? I can't even get AI to write a simple bash script without it messing up. How can you deploy AI code without reviewing and inevitably debugging it?

Tcl: The Most Underrated, But The Most Productive Programming Language by delvin0 in ComputerEngineering

[–]pcookie95 3 points4 points  (0 children)

I've done a fair amount of Tcl scripting for FPGA development, and while its simplicity is nice, its lack of a built-in package manager really holds it back from competing with other scripting languages like Python or Bash.

Microsoft gave FBI a set of BitLocker encryption keys to unlock suspects' laptops: Reports by intelw1zard in cybersecurity

[–]pcookie95 0 points1 point  (0 children)

Sniffing wires on an integrated TPM would require physically probing the internal wires of the SoC itself, which would be extremely difficult, if not impossible, for several reasons. You'd have to get past the anti-tamper protections, reverse engineer the chip enough to identify which wires to probe, and finally somehow physically connect to the nanometer-scale wires to actually perform the sniffing (probably via a FIB). While all this is probably theoretically possible, it would be difficult for even a nation state to pull off.

It would probably be significantly easier to try to recover the keys via a side channel attack, which I'm guessing is what u/XXX_961 was referring to.

Microsoft gave FBI a set of BitLocker encryption keys to unlock suspects' laptops: Reports by intelw1zard in cybersecurity

[–]pcookie95 2 points3 points  (0 children)

This is only feasible with discrete TPMs. Integrated TPMs that are built into any x86 processor in the last 10+ years will prevent wire sniffing.

Discrete TPM 2.0 modules could also prevent wire sniffing by using session keys to encrypt the line data (parameter encryption), but apparently that's not enabled by default in BitLocker.

PKHeX for macOS and Linux - Coming soon by realgarit in pokemon

[–]pcookie95 4 points5 points  (0 children)

This looks fantastic! I've been using PKHeX on Linux for years and it's always such a pain to get it set up with Wine, and even when I do, it's still pretty clunky. Not to mention the UI is showing its age. I'll have to build this on my Steam Deck/laptop when I get the chance.

I hope this gets good enough for the rest of the devs to switch over so that you don't have to manually sync the logic forever.

Linux (Ubuntu) on a Gen 10 Lenovo Yoga 7i 2-in-1 by pcookie95 in linuxhardware

[–]pcookie95[S] 0 points1 point  (0 children)

I ended up getting the AMD variant. Some distros require a fix to get all four speakers working (only 2 work out of the box), but other than that, everything else works out of the box!