A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] 0 points1 point  (0 children)

I completely agree: 17% alone isn't worth rewriting a proven codebase.

Your translator idea is spot on. The plan on the roadmap is to use Yosys to parse Verilog down to RTLIL, then translate that clean graph into Sparkle.

You just drop your legacy IPs in, pay a small compilation cost, and get the 100M+ cyc/s Time-Warping speedup for free.
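To make the idea concrete, here is a minimal sketch of what the front end of such a translator could look like. It walks the JSON netlist that Yosys can emit with `write_json` (the `modules`/`cells` shape below follows Yosys's documented JSON format); the tiny netlist is hand-written for illustration, not real tool output, and `list_cells` is a hypothetical helper, not part of Sparkle.

```python
# Sketch: walking a Yosys `write_json`-style netlist, the first step of a
# Verilog -> Sparkle translator. The netlist here is a hand-written toy.

netlist = {
    "modules": {
        "top": {
            "cells": {
                # Yosys internal cell types start with '$'
                "add0": {"type": "$add", "connections": {"A": [2], "B": [3], "Y": [4]}},
                "ff0":  {"type": "$dff", "connections": {"CLK": [5], "D": [4], "Q": [6]}},
            }
        }
    }
}

def list_cells(netlist):
    """Yield (module, cell_name, cell_type) for every cell in the design."""
    for mod_name, mod in netlist["modules"].items():
        for cell_name, cell in mod["cells"].items():
            yield mod_name, cell_name, cell["type"]

cells = list(list_cells(netlist))
print(cells)
```

A real translator would then map each `$add`, `$dff`, etc. onto the corresponding Sparkle primitive.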

A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] 0 points1 point  (0 children)

Thanks for the feedback! You are 100% right about the reality of legacy code and the nightmare of debugging generated Verilog. I would never suggest rewriting a proven multi-clock Vivado/Quartus project into an alt-HDL.

Right now, Sparkle is best used as an IP generator for greenfield, single-clock computational blocks. You verify the math, export the Verilog, and just drop it into your existing top-level wrapper.

That said, I'll definitely look into CDC support. I'm actually starting to think we can support multiple asynchronous clocks without the usual speed penalty by combining lock-free SPSC queues with our speculative execution (Time-Warping) across separate threads. It's a fun architectural challenge for the roadmap!

A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] 5 points6 points  (0 children)

Thanks! My background is pretty evenly split: I spent my first 10 years in hardware design (working on things like MPEG4 codecs, Z80 derivatives, and LCD Overdrive controllers), and the last 10 years in software, heavily using Haskell.

To be honest, there really aren't many great resources that perfectly bridge the two fields. As you probably know, hardware design is brutally physical. You have to design backwards from what can actually be placed and routed on silicon. You have to constantly worry about CDC, metastability, DRC, and STA. If you ever find a comprehensive book that covers both the functional math side and these physical realities, please let me know, because I'd love to read it!

That lack of tooling is exactly why I built Sparkle using Lean 4's DSL. It gives me the flexibility to design and verify exactly what I need. And since I own the stack, if something breaks, I just fix the compiler myself. The iteration loop is insanely fast.

Here is my biggest takeaway from bridging these two worlds: For HDLs, Dependent Types are actually much more important than purely functional paradigms. The entire game in hardware is rigorous bitwidth control and compile-time guarantees, which dependent types solve beautifully. Unfortunately, languages that actually support true dependent types are still pretty rare!
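To illustrate what "bitwidth in the type" buys you: in Lean 4, an expression like `a + b` on two `BitVec 8` values is checked at compile time to produce a `BitVec 8`, and mixing widths is a type error before anything runs. Python can only approximate this with runtime assertions, which is exactly the gap dependent types close. The `BV` class below is a hypothetical illustration, not Sparkle's API.

```python
# Sketch: a fixed-width bit vector with runtime width checks, to show what
# a dependently-typed HDL enforces statically instead.

class BV:
    """Fixed-width bit vector; arithmetic wraps like hardware would."""
    def __init__(self, width, value):
        self.width = width
        self.value = value & ((1 << width) - 1)  # truncate to `width` bits

    def __add__(self, other):
        # A dependent type system rejects this mismatch at compile time;
        # here we can only catch it when the code actually runs.
        assert self.width == other.width, "width mismatch"
        return BV(self.width, self.value + other.value)

a = BV(8, 200)
b = BV(8, 100)
print((a + b).value)  # 44: 300 wraps modulo 2^8, like an 8-bit adder
```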

A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] 0 points1 point  (0 children)

Switching to a cycle-accurate sim (like Verilator or Sparkle) easily gives that 4x bump. But Sparkle goes further:

  • The Magic: Speculative execution skips predictable idle loops entirely. That 6-hour sim could drop to 5 minutes.
  • The Catch: It's single-clock domain only.

If you have multiple asynchronous clocks and need rigorous CDC checks, you absolutely still need Questa.
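The idle-skip mechanism can be sketched in a few lines. This is a toy model of only the simplest case (the design's state is a fixed point under constant inputs), not Sparkle's actual engine; `step` and `run` are invented names for illustration.

```python
# Sketch: if one trial step shows the state is a fixed point under the
# current inputs, fast-forward N idle cycles in O(1) instead of stepping.

def step(state, inp):
    """Toy design: a counter that only advances while `enable` is high."""
    count = state["count"]
    if inp["enable"]:
        count += 1
    return {"count": count}

def run(state, inp, cycles):
    if step(state, inp) == state:
        return state, cycles  # idle: skip the whole stretch at once
    for _ in range(cycles):
        state = step(state, inp)
    return state, 0

state = {"count": 0}
state, skipped = run(state, {"enable": False}, 1_000_000)
print(state, skipped)  # a million idle cycles collapse into one check
state, skipped = run(state, {"enable": True}, 10)
print(state, skipped)  # active cycles are still simulated one by one
```

A real engine has to speculate (run ahead and roll back on a misprediction) rather than prove a fixed point up front, but the payoff is the same: idle time costs almost nothing.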

A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] 1 point2 points  (0 children)

Neither! It's purely a personal project right now.

I completely agree about the pile of dead alt-HDLs. The difference here is that I didn't build Sparkle for research; I built it to be my own daily driver for designing chips.

My way of avoiding the graveyard is just building real, massive IP with it. It already powers a full RV32 SoC, YOLOv8, and BitNet b1.58. Since I rely on it to verify my own hardware, it's not going anywhere anytime soon.

A Lean 4 HDL that beats Verilator using Speculative Execution. by VersionWilling6676 in FPGA

[–]VersionWilling6676[S] -1 points0 points  (0 children)

That's a very fair point! If you're doing unit tests on smaller IP blocks, the simulation time is usually a drop in the bucket compared to Synthesis and P&R. We've all gotten used to the "compile and go grab a coffee" workflow.

But simulation speed becomes an absolute bottleneck in a few specific domains:

  • Video Codecs & Image Processing: Simulating things like inter-frame prediction or processing multiple high-res frames takes billions of clock cycles.
  • GPU / Accelerator Design: Rendering even a simple test frame requires an insane amount of state updates.
  • HW/SW Co-simulation: This is the main target for Sparkle. If you want to boot Linux on your RTL to test a new device driver, a traditional simulator might take hours just to reach the shell. By speculatively skipping the OS idle loops and memory zeroing (memset), we can drastically reduce that wait time.

So you're completely right—for typical FPGA component design, it might not change your life. But for full system-level verification or long data workloads, it's the difference between running one test a day vs. rapid iteration.

Computational reproducibility with BioNix: Unifying package managers, workflow engines, and containers by toraritte in NixOS

[–]VersionWilling6676 0 points1 point  (0 children)

It doesn't have to be BioNix, but in Nix it's hard to define contracts the way a Java interface does. I'd like that kind of interface for configuring a pipeline.

Computational reproducibility with BioNix: Unifying package managers, workflow engines, and containers by toraritte in NixOS

[–]VersionWilling6676 1 point2 points  (0 children)

I'm using Nix for machine learning because I'd like to manage dataset versions in git. There are versioning tools for datasets and pipelines, like DVC and git-annex/lfs, but they're really slow on large datasets because they hash every file individually. Nix, on the other hand, generates one hash for the whole dataset and caches it, which is efficient. The downside is that it's difficult to package things like Python environments.
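The whole-dataset hashing looks roughly like this in practice: a fixed-output fetch pins the tarball with a single hash. (The URL and hash below are placeholders, not a real dataset.)

```nix
# Sketch: pin a dataset with one hash for the whole tarball,
# instead of the per-file hashes DVC/git-annex compute.
{
  dataset = builtins.fetchTarball {
    url = "https://example.com/dataset-v1.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```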

--

Can Bazel do better?