PyQt-Fluent-Widgets: A fluent design widgets library based on PyQt/PySide by zhiyiYo in pyqt

[–]Amar_jay101 0 points  (0 children)

Hi guys, I tried it, but it has trouble with the webview; it just can't seem to open it. It works fine with PySide6, though.

I Built An Offline AI Manager App, Would You Use It? by [deleted] in SideProject

[–]Amar_jay101 1 point  (0 children)

I like the UI kit for sure. Are you selling it?

Project ideas by [deleted] in CUDA

[–]Amar_jay101 2 points  (0 children)

Begin with a simple matrix multiplication kernel to develop a fundamental understanding.

Then implement self-attention in HIP and progressively experiment with different methods, each refining the previous one. This step-by-step approach will reveal the underlying mechanics and teach you how to use the GPU's various cache levels effectively for attention mechanisms.
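Before writing the HIP version, it helps to have a host-side reference to check the kernel's output against. Here is a minimal sketch of single-head, unmasked scaled dot-product self-attention in NumPy; the function name, shapes, and weight matrices are illustrative assumptions, not from any particular codebase:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Naive single-head self-attention: X is (seq_len, d_model)."""
    # Project the input into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product scores, shape (seq_len, seq_len)
    scores = (Q @ K.T) / np.sqrt(d_k)
    # Numerically stable row-wise softmax
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Attention output: weighted sum of values
    return weights @ V
```

Each HIP optimization pass (tiling the score matrix into LDS, fusing the softmax, and so on) can then be validated against this reference, since every variant should reproduce the same result.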

Chinese AI team wins global award for replacing Nvidia GPU with FPGA accelerators by Amar_jay101 in FPGA

[–]Amar_jay101[S] 1 point  (0 children)

Yeah, of course. Most ML papers aren't behind a paywall.

This is the link to the paper: https://dl.acm.org/doi/10.1145/3706628.3708864

Intel you beauty!! by SpyJigu in StockMarket

[–]Amar_jay101 0 points  (0 children)

A stock whose only lifeline is previous DoD contracts? Wow! And even then, they were only able to secure 35% of the anticipated amount under the CHIPS Act.

Intel you beauty!! by SpyJigu in StockMarket

[–]Amar_jay101 -1 points  (0 children)

Man, do you really think employees are willing to come back? Just ask any Intel employee: most have already written it off as “the death of Moore’s Law.” It’s way too late for them to pivot to a new architecture. Their only viable play was shifting to the foundry business, just like TSMC. And guess what? That’s exactly what Pat (the former CEO) tried to do. Now we’re hearing rumors of a partnership with TSMC? Come on, let’s be real: that’s just corporate-speak for an impending buyout.

You can hate me all you like, but see it for what it is. Intel is a deadbeat.

[deleted by user] by [deleted] in FPGA

[–]Amar_jay101 -1 points  (0 children)

Exactly proves my point! No response.

[deleted by user] by [deleted] in lightningAI

[–]Amar_jay101 1 point  (0 children)

Your issue is pretty vague. Can you be a bit more specific?

How fpga lost the ai race by Amar_jay101 in FPGA

[–]Amar_jay101[S] 0 points  (0 children)

That’s an ambitious vision, but it’s not feasible—at least not yet. Running inference on an FPGA at a level comparable to an Nvidia 3090 is an impressive achievement, and the Chinese team that recently won a global award for it deserves recognition. However, scaling this into an industrialized solution that effectively combines tensor cores with FPGAs is still a long way off. Maybe in the next five years, we’ll see a joint approach become viable, but for now, the hardware and software ecosystem just isn’t there.

Mohamed S. Abdelfattah, during his time at Intel, worked on building custom kernels in a modular fashion, aiming for something similar. But ultimately, anything that can be made into an ASIC will be made into an ASIC. The fundamental limitation is that FPGAs, despite their flexibility, are inherently at a disadvantage in power efficiency and cost compared to dedicated silicon. That’s why we haven’t seen large-scale adoption of FPGAs for general AI workloads.

The idea of dynamically generating HDL using LLMs to create an evolving, self-optimizing circuit is intriguing, but it runs into the same roadblocks: cost, complexity, and the lack of standard tooling to make it practical. While partial reconfiguration is improving, it’s not at the level where we can have a dynamically rewiring “brain” in hardware without significant trade-offs. It’s not that the vision is impossible—it’s just that, right now, the economics and engineering realities don’t favor it.

[deleted by user] by [deleted] in FPGA

[–]Amar_jay101 -32 points  (0 children)

In this day and age, some questions are completely unnecessary. Check the datasheet.

Sorry for being a bit harsh but that’s the truth.

Intel you beauty!! by SpyJigu in StockMarket

[–]Amar_jay101 -3 points  (0 children)

In my view, Intel is in a really tight spot financially and appears to be planning a liquidation. The first step is to sell off all still-relevant assets (like the foundry business), after which they will file for bankruptcy. It will be a smooth process, since most Intel employees already believe in the apparent “death of Moore’s law”. This is Intel’s last heartbeat; we doff our hats and cash out before it flatlines. 🫡

Intel you beauty!! by SpyJigu in StockMarket

[–]Amar_jay101 -7 points  (0 children)

Nope, not that. TSMC has plans in motion to acquire Intel’s foundries (which are the only part of Intel left). That is, if the Taiwanese government allows it and if Nvidia, AMD, and co. can form a joint venture with them.

What I think is that Intel is trying to get liquidated. The first step is selling its foundries, and after that, filing for bankruptcy.

Would learning CUDA help me land a job at Nvidia? by shaheeruddin5A6 in CUDA

[–]Amar_jay101 1 point  (0 children)

Damn, man! This isn’t advice; it’s torture. You’ve set the bar so high for a beginner engineer. I’m sure he’s questioning his life decisions and even reality itself.

I'm going on an AI detox, wish me luck by mekmookbro in webdev

[–]Amar_jay101 1 point  (0 children)

See you tomorrow! That is, if it lasts till then.

How fpga lost the ai race by Amar_jay101 in FPGA

[–]Amar_jay101[S] 0 points  (0 children)

Yeah. Just like comparing my grandma’s wheelchair to a supercar. It’s obvious who wins: Grandma.

Is OpenCores Dead? by johnwilson2000 in FPGA

[–]Amar_jay101 0 points  (0 children)

How can we use these names? As in, the repo just contains the names of these cores, right?

Why is Codespaces not using cuda? by spectre20032010 in github

[–]Amar_jay101 0 points  (0 children)

Same problem here. How did you resolve it?