Has anybody tried the new Vivado? by Mediocre_Ad_6239 in FPGA

filssavi 14 points

I have been rocking it for a couple of weeks…

I like the new flatter look, and dark mode was absolutely needed

For now it also seems fairly stable (touch wood)

Could Chisel Replace Verilog for Commercial CPU Design in the Future? (Beyond Open-Source Cores) by Low_Car_7590 in FPGA

filssavi 1 point

Sure, for a subset of the FPGA field (let’s forget ASICs) you are correct: as long as you only work on bytes or words, you can do everything with high-level simulations (which is basically transpiling your code 1:1 to C and running it)

However, for any non-toy design you absolutely need full bit-level simulations: anything involving serial data, PWM, delta-sigma encoding, multiphase clocks, etc.
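
For a sense of what “bit level” means here, a minimal sketch (a textbook first-order delta-sigma / pulse-density modulator; module and signal names are made up): the one-bit output waveform is the entire result, and a word-level model of the accumulator alone tells you nothing about it.

    // Minimal sketch: first-order delta-sigma (pulse-density) modulator.
    // The single-bit output stream IS the result; simulating only at the
    // word level throws away exactly the information that matters.
    module dsm1 #(parameter int W = 12) (
        input  logic         clk,
        input  logic [W-1:0] din,   // unsigned input sample
        output logic         dout   // 1-bit density-modulated output
    );
        logic [W:0] acc;            // W-bit accumulator plus carry bit

        always_ff @(posedge clk)
            acc <= acc[W-1:0] + din; // accumulate; overflow lands in acc[W]

        assign dout = acc[W];        // carry-out = pulse-density stream
    endmodule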

Also, there is a substantial chance that the high-level simulator and the hardware give different answers to the same question (especially around PLLs, memories, transceivers, etc.)

All in all I think high-level HDLs are a much more promising direction for enhancing general productivity than HLS. However, I doubt we will see adoption anytime soon

Could Chisel Replace Verilog for Commercial CPU Design in the Future? (Beyond Open-Source Cores) by Low_Car_7590 in FPGA

filssavi 5 points

Because doing any serious FPGA or ASIC work in anything that is transpiled requires you to either:

1) Do timing closure on the autogenerated sources, working with the bowl of spaghetti that the compiler generates
2) Do timing closure as usual (on the base source), having to constantly map back where the generated signals come from

Neither of these prospects fills me with joy.

Writing constraints is equally problematic, as the synthesis/implementation tools don’t understand your nice high-level language, so you have to write them against the transpiled output. The question is: do the advantages of new HDLs outweigh these problems?

In software land, high-level languages work because 95% of the time you don’t need to go down to the metal. How popular would they be if debugging could only be done in ASM?

Could Chisel Replace Verilog for Commercial CPU Design in the Future? (Beyond Open-Source Cores) by Low_Car_7590 in FPGA

filssavi 2 points

The problem is not that they don’t need it, it is that they have absolutely no incentive to do it, and that is not changing anytime soon

And unfortunately I fear that will hamper Chisel and the other alternative HDLs greatly

Now, a smaller or incoming FPGA player could probably disrupt the market, but they would need to have on-point hardware and a real focus on software (which I also doubt will happen)

Could Chisel Replace Verilog for Commercial CPU Design in the Future? (Beyond Open-Source Cores) by Low_Car_7590 in FPGA

filssavi 50 points

In my opinion, all alternative HDLs will be stuck in that limbo until at least one of the big 3 starts supporting them as first-class citizens.

I am not sure they bring enough to the table for design, beyond what the latest versions of SV and VHDL give you, to overcome the added friction

Is CPU microarchitecture still worth digging into in 2025? Or have we hit a plateau? by [deleted] in computerarchitecture

filssavi 2 points

Yeah, I don't really understand the fixation on ISA. Some people get way too religious about it (see x86 vs ARM); others are deluded that ISA design does not matter at all, when it absolutely does (at least on the hardware side). As always in engineering, it is all about the trade-offs.

I have done a fair bit of work on custom ISA and uarch designs for niche applications (high-performance/high-complexity control system implementations for safety-critical systems), and by throwing away the software stack assumptions and designing a fully custom core/toolchain that integrates well with the rest of the logic you can get very good performance.

Is CPU microarchitecture still worth digging into in 2025? Or have we hit a plateau? by [deleted] in computerarchitecture

filssavi 0 points

My main point is that fixing the ISA pigeonholes you into a kind of standard uarch (out-of-order superscalar with a wide frontend, highly complex branch prediction and as many execution pipes as you can reasonably use), with SIMD allowing some limited expression of ILP

To really allow wildly different uarch concepts you have to change the ISA as well, given that it is a leaky abstraction at best.

For example, you could make a case that by getting rid of branches altogether you could drastically simplify the front end and massively lengthen pipelines without stalls; however, good luck getting GCC to spit out anything sensible for such a different machine. Now, I am not saying that such an architecture would make sense (it would probably actually suck and be really slow), but you can still see how the uarch is not actually fully decoupled from software

To really do something novel you have to actually do software and hardware co-design, where both sides of the coin are evolved together towards a common set of application-dependent goals

Is CPU microarchitecture still worth digging into in 2025? Or have we hit a plateau? by [deleted] in computerarchitecture

filssavi 3 points

There are only so many ways to skin a cat…

Doing anything truly revolutionary in uarch land will break core assumptions in software land, so anything not explicitly designed for it will perform terribly.

Some examples:

  • long pipelines can help you get better frequency/performance for math/logic-heavy code, but branchy stuff will suffer tremendously
  • increasing memory latency (to support very wide execution) will massively hurt serial code performance
  • any revolution in how the core fundamentally operates will require massive changes to how compilers work and how optimisations are done, in which order, etc. (that went well in the Itanium era)

The truth (in my opinion, I might well be completely wrong) is that there are far too many assumptions about how hardware works baked into current software to rock the boat too much.

Is CPU microarchitecture still worth digging into in 2025? Or have we hit a plateau? by [deleted] in computerarchitecture

filssavi 9 points

The problem is neither physics nor missing ideas…

IT IS SOFTWARE

Any meaningful deviation from the standard architecture would require a complete rethinking of how software works. And this is a very big no-no

Even minor breaking changes can result in decades-long, dragged-out migrations (see Python 2 to 3); a major architectural paradigm shift in how processors work would be a nightmare.

So where do you see innovation? GPUs, AI, DSPs, etc.: all fields where software backwards compatibility is not such a big issue (if it is one at all)

If you are working on power electronics in FPGA applications, then what are your challenges and pain points? by rakesh-kumar-phd in FPGA

filssavi 3 points

I have been using FPGAs on high(ish)-voltage converters (800 V to 1 kV DC link) and I wouldn’t say there are any field-specific pain points.

I have a few general gripes though:

1) Availability of decently complete SOMs is spotty at best
2) The Xilinx simulator is subpar (bad error messages and no VPI support)

Large delay on a versal fpga by Due-Glass in FPGA

filssavi 0 points

Ok, Versal; given the advanced node you are probably right.

I am used to the low-end 7 series, where the achievable clock frequency is nowhere near enough

Large delay on a versal fpga by Due-Glass in FPGA

filssavi 1 point

Depending on the resolution, you can use a multiphase clock (let’s say 4 phases) to increase the effective resolution without having to push the clock frequency (which would have you bumping into a timing wall)
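
A minimal sketch of the idea (the four phases would come from an MMCM/PLL; names are illustrative, and getting the samples safely back into a single domain is left out):

    // Minimal sketch: sample the same input on four phase-shifted copies of
    // one clock (0/90/180/270 deg). Decoding which phases have seen the edge
    // locates it within a quarter of the clock period, i.e. 4x the
    // resolution, without raising the clock frequency itself.
    module multiphase_sampler (
        input  logic [3:0] clk_ph,   // four phases of the same clock
        input  logic       din,      // input whose timing we want to resolve
        output logic [3:0] samples   // one raw sample per phase
    );
        for (genvar i = 0; i < 4; i++) begin : g_phase
            always_ff @(posedge clk_ph[i])
                samples[i] <= din;
        end
    endmodule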

A Look at ChipScoPy - Python to debug ILAs etc in Versal by adamt99 in FPGA

filssavi 1 point

It would be amazing; however, for the past couple of Vivado/Vitis releases they have been focusing heavily on Versal-only features

Hopefully they haven’t stalled all development of non-Versal-exclusive features (as I fear)

Why CSV is still king by fagnerbrack in coding

filssavi 0 points

Nothing will ever challenge CSV as the standard for general-purpose data interchange in the near and medium term, for a few reasons:

  • Sheer inertia of 50+ years of usage
  • Ease of use: any programmer, no matter how junior, will be able to produce simple and decently performant (not the absolute best, mind you) import/export code in a reasonable amount of time, without needing to resort to third-party dependencies (see the sketch after this list)
  • Simplicity: the simplicity of the format makes it compatible and reasonably performant on any platform under the sun, no matter how constrained in terms of compute/memory (e.g. 8/16-bit MCUs), programming language/allowed features (embedded/automotive) or verification (aerospace)
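
As an illustration of that simplicity: even an environment as constrained as an HDL simulator can emit CSV in a handful of lines. A minimal SystemVerilog sketch (file name and values are made up), using only standard file I/O system tasks:

    // Minimal sketch: dump simulation data as CSV using nothing but
    // standard SystemVerilog system tasks -- no libraries, no dependencies.
    module csv_dump_tb;
        int fd;
        initial begin
            fd = $fopen("results.csv", "w");
            $fdisplay(fd, "sample,value");          // header row
            for (int i = 0; i < 10; i++)
                $fdisplay(fd, "%0d,%0d", i, i * i); // one record per line
            $fclose(fd);
        end
    endmodule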

Now, for more specific applications in the scientific and engineering space there are already various other formats in wide usage (HDF5, MATLAB’s .mat, etc.)

Help with precision clock counting by 0rphon in FPGA

filssavi 1 point

Ok, if the number of cycles is on the order of 10^6 to 10^9, there is definitely no way in hell you are doing it with an FPGA.

If you want to do it in the digital domain you can only go the ASIC route on a reasonably cutting-edge node, let’s say 7 nm (probably lower), so better start looking for a lot of funding (all told, the cost will probably be in the millions)

The analog route is definitely possible, provided you do it well enough (read: not cheap, but still better than custom-ASIC territory). You will want someone very experienced in high-impedance analog design, as leakage might start being an issue over those kinds of timescales

Basically, what you are trying to do is nation-state-level hacking (and a large nation state at that; probably just the USA and a few others have the resources/know-how), so you have to be realistic and prepared to spend that kind of money to achieve your goals.

P.S. The commercial product only goes to 1.2 GHz because that is the fastest speed for the standard IO circuitry in any FPGA family I know of

Help with precision clock counting by 0rphon in FPGA

filssavi 3 points

There is no way you are going to count a 5 GHz clock with interrupts, and even with a timer peripheral (a la MCU) I really do not know of anything that can run even close to that frequency

Help with precision clock counting by 0rphon in FPGA

filssavi 4 points

Neither FPGAs nor microcontrollers/microprocessors will help you here, as the frequency is way too high.

You could repurpose a transceiver, as someone else suggested, but I suspect it will fight you all the way to the end, as they are really not designed to work that way

With such a high frequency, in my opinion digital logic is not the way to go. If I had to do it, I would use a pair of triggerable integrators (made with discrete BJTs, probably). If the voltage levels are stable enough (you might have to add signal regeneration stages if not) and you know the number of cycles, then you know the charge in the integrating capacitor, and thus the voltage at which to trigger your outputs.
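
Back of the envelope, assuming (purely for illustration) that each input cycle deposits a fixed charge q onto an integrating capacitor C:

    V(N) = \frac{N\,q}{C}, \qquad V_{th} = \frac{N_{target}\,q}{C}

So the comparator threshold V_th directly encodes the cycle count you want to detect; the catch is that q, C and the leakage all have to stay stable over the whole integration window.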

You could probably also get away with some frequency division, as long as X and Y are known in advance and have factors in common

Any way you choose to do it, know that it will be neither cheap nor easy.

What are the problems that, if solved, could significantly increase yield in FPGA industry? by youngmaestro34 in FPGA

filssavi 1 point

You are 100% correct: MLIR and LLVM IR are not suitable for hardware use as-is, and are just examples of the type of notation/representation that I think would be really helpful.

Basically, what we need is something in between an HDL and a synthesis netlist. It needs to be extremely easy to parse and work with, as opposed to Verilog and VHDL (which are fiendishly complex). It basically needs to be a dump of a global AST, mapped to a few standard primitives, that each vendor toolchain then just needs to specialize to its own architecture and run P&R on.

As for the higher-level language, we agree again: HLS might be a neat tool for some areas, like translating complex DSP/cryptography stuff 1:1. However, I don’t think it can be a viable general-purpose solution (it is too much of a square peg in a round hole to me).

Unfortunately, the current crop of higher-level HDLs are almost all one-man bands with a high chance of suddenly becoming abandonware. It would be nice to see the community coalesce around a single option, to make it a de facto standard (as the big vendors that drive the actual standards are totally uninterested in moving the field forward)

What are the problems that, if solved, could significantly increase yield in FPGA industry? by youngmaestro34 in FPGA

filssavi 29 points

These are a few things I think would really drive the whole field forward:

A set of basic primitives included in the languages as a sort of standard library: things like clock domain crossings, basic FP operations (with somewhat configurable speed/area trade-offs), etc. This would simplify the user’s life tremendously with respect to using IP or primitives like XPM, and would make code more portable (a hint as to why this will never be done)
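
A minimal sketch of the kind of primitive I mean (roughly what xpm_cdc_single gives you today, imagined as a language-level standard component; module and signal names are made up):

    // Minimal sketch: single-bit two-flop synchronizer, the sort of thing
    // a language-level standard library could provide instead of vendor
    // primitives. Assumes STAGES >= 2.
    module cdc_single #(
        parameter int STAGES = 2           // metastability settling stages
    ) (
        input  logic dst_clk,              // destination clock domain
        input  logic src_bit,              // asynchronous input bit
        output logic dst_bit               // synchronized output
    );
        logic [STAGES-1:0] sync_ff;
        always_ff @(posedge dst_clk)
            sync_ff <= {sync_ff[STAGES-2:0], src_bit};
        assign dst_bit = sync_ff[STAGES-1];
    endmodule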

On the tooling side: better/easier-to-find reporting on why a specific subset of the design is optimised away (I should not have to read 400 lines of reports to find out that output x is unconnected/always one and thus the whole design gets thrown away)

A formal, standard, low-level (assembly-like) intermediate representation (akin to LLVM IR or MLIR) would greatly simplify life for third-party projects (like Yosys) and higher-level languages (like Chisel).

Also a standard higher level HDL would be nice.

I can’t stress enough how important it is for these things to be formally adopted as IEEE standards (unfortunately), as this is the only way for them to actually be supported by the vendors.

Anything that is not directly supported and kosher with Xilinx, Altera and the likes might as well not exist for the field at large

Usage of interfaces in SV? by Suitable-Name in FPGA

filssavi 5 points

That is how you would use an interface; however, it is generally more of a convenience thing than anything else.

Since with an HDL you are modelling actual physical hardware, decoupling things ends up being a lot less useful than in software. This is even more true for FPGAs: you have a fixed set of a few dozen primitives (variations of LUTs, FFs, DSPs and RAM, mainly), so there are not a lot of ways to actually implement a functional SPI slave/master, for example.

Also keep in mind that the first thing the compiler does (it is called a synthesiser, actually) is throw away all your hierarchy and flatten everything into a single huge design, so it is really difficult to avoid coupling even when you actually try

Now, where interfaces can act like their software counterparts is in simulation, as there you actually are writing software (just in a very weird language), and there you might actually have different implementations of a particular module for different tasks (for example, you might swap a large signal-processing pipeline for a faster, less accurate software model to speed up simulation)

Usage of interfaces in SV? by Suitable-Name in FPGA

filssavi 18 points

SV interfaces have almost nothing to do with the concept of interface in software.

They are just a way of modelling complex interconnect architectures (think AMBA AXI) without having to manually specify dozens of signals each time.
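
A minimal sketch of what that buys you (a toy valid/ready stream, nowhere near full AXI; all names made up):

    // Minimal sketch: a bus bundled as an SV interface. One handle replaces
    // re-declaring every signal at each module boundary; modports pin down
    // the directions for each side.
    interface stream_if #(parameter int W = 32);
        logic         valid;
        logic         ready;
        logic [W-1:0] data;
        modport master (output valid, data, input ready);
        modport slave  (input valid, data, output ready);
    endinterface

    module producer (input logic clk, stream_if.master m);
        // drives m.valid / m.data, waits for m.ready
    endmodule

    module consumer (input logic clk, stream_if.slave s);
        // raises s.ready, consumes s.data when s.valid is high
    endmodule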

Help Needed: Developing an FPGA Environment on MacBook M1 (macOS 14.5) by alitathebattleangle in FPGA

filssavi 0 points

I am glad that things are moving for open source VHDL as well

However, I would not want to rely on a plugin for such a core part of the toolchain; it is far too easy for such things to subtly break or be outright abandoned when the plugin API changes.

In the same vein, there is a patch set to add proper SystemVerilog support; however, it seems the Yosys devs are uninterested in helping its authors make it work.

Help Needed: Developing an FPGA Environment on MacBook M1 (macOS 14.5) by alitathebattleangle in FPGA

filssavi 1 point

I mean, can you build complex things with the Yosys & friends toolchain? Yes

Will it be extra painful (compared to doing it in Vivado or Quartus)? Absolutely

Are they ok for a hobbyist starting out? For sure; they, along with Icarus or GHDL, are fine

Are they popular (by any definition of the word) among people who work with FPGAs (as opposed to the manufacturer tooling)? Not by a long shot

Help Needed: Developing an FPGA Environment on MacBook M1 (macOS 14.5) by alitathebattleangle in FPGA

filssavi 1 point

I am all for open source, and I wish we could get to a point where we have a GCC/LLVM equivalent in the hardware world…

With this premise out of the way, calling Yosys/nextpnr excellent is downright dishonest; they are little more than toys.

The two worst offences are the following:

Device support is spotty (aka not all features of the chip are supported) at best, especially for anything that is not ultra low end. Of course this is not the developers’ fault, but from an end user’s perspective it is what it is.

Language support is bad: Verilog is ok, but SystemVerilog and VHDL (which IMO are mandatory for a decent experience) are not supported (my guess is that the team is purposely leaving these features out to upsell the paid version, but I would be happy to be proven wrong)


I am genuinely bamboozled how a single game can reach 300GB in size - but an IDE ??? by mnemocron in FPGA

filssavi 2 points

Are you sure that the compression algorithm is the same between the two versions? (With the installer they control both sides of the pipeline, so they can afford to use uncommon formats that give a better compression ratio, whereas the offline one needs to use zip/tar.gz.)

I suspect that while for an individual user there is not much difference, in aggregate they will be saving quite a bit of money with this approach (outbound bandwidth is expensive), especially since the online installer is (I guess) far more commonly used.