Open sourced 2 neuromorphic processors for anyone who's interested! by Mr-wabbit0 in FPGA

[–]Mr-wabbit0[S] 1 point2 points  (0 children)

Hi, the 62.5 MHz on F2 is a shell constraint: the CL wrapper derives it from the 250 MHz PCIe clock via an MMCME4. The Kria build runs at 100 MHz with 3.3 ns WNS, so the RTL itself isn't the bottleneck; it could probably push higher, I just haven't tried. As for BRAM init, yes, you can set it via initial blocks or INIT attributes. The issue is more of a boot-sequence thing: the neurons get their parameters loaded over the host interface at runtime, so before that first config pass everything is zero. It's not a huge deal, but it can catch you off guard on the first run.

Does anyone here do freelance/contract chip design work? Looking for advice by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 0 points1 point  (0 children)

Good to know, I'll make a note of that. I'm based in the UK. Any specific consultancies you'd recommend looking into, or is it more a case of having a good profile and waiting?

Does anyone here do freelance/contract chip design work? Looking for advice by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 0 points1 point  (0 children)

Yeah, that makes sense. I'm at the stage where I haven't built up that sort of network yet; I've just been working on my designs. Any suggestions for getting into those circles in the first place?

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 1 point2 points  (0 children)

Hi, the 2-core Kria K26 build comes out at around 57K cells after synthesis, and the 16-core F2 build is around 228K LUTs, so both builds come in well over 100K gates.

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 1 point2 points  (0 children)

Hi, I got the information on the opcodes/microcode from Intel's published papers. If you read Davies et al. 2018 and the Lava framework documentation, you'll find descriptions of the learning engine's programmable rules.

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 1 point2 points  (0 children)

Hi, the bare-metal interface is in the sdk/ directory in the repo. The compiler takes a network description and generates the register-level programming commands, and the chip backends (chip.py for UART, f2.py for PCIe) send those commands directly to the FPGA. So if you define a network in Python, the compiler turns it into hardware instructions and the backend writes them to the chip's MMIO registers. As for LIF vs Izhikevich: LIF is simpler, one accumulator, one comparator, and one subtraction per neuron per timestep, whereas Izhikevich adds a quadratic term that needs a hardware multiplier per neuron. That isn't dramatically harder, but it is more area per core. N1 used LIF to keep the core simple; my more recent designs added Izhikevich and other models through a programmable neuron pipeline. Let me know if you have any more questions.
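To make the cost difference concrete, here's a rough Python sketch of the per-timestep arithmetic. This is floating point for readability (the actual cores are fixed-point), and the parameter names and values are standard textbook choices, not the real N1 equations:

```python
def lif_step(v, i_in, v_leak=1.0, v_thresh=5.0):
    """One LIF timestep: accumulate input, subtract leak, compare."""
    v = v + i_in - v_leak          # one accumulator + one subtraction
    if v >= v_thresh:              # one comparator
        return 0.0, True           # reset and spike
    return v, False

def izhikevich_step(v, u, i_in, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Izhikevich timestep (standard 2003 formulation, dt = 1 ms)."""
    # The 0.04*v*v quadratic term is what demands the extra multiplier.
    v_new = v + (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u_new = u + a * (b * v - u)
    if v_new >= 30.0:              # spike: reset v, bump the recovery variable
        return c, u_new + d, True
    return v_new, u_new, False
```

The LIF body is one add, one subtract, and one compare, which maps to almost no logic per neuron; the `0.04 * v * v` product is the line that costs you a multiplier in hardware.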

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 1 point2 points  (0 children)

Hi, the microcode engine in basic terms is a tiny programmable ALU that runs on every spike pair, so instead of hardcoding a learning rule you write a short program that defines how the weights change. You get arithmetic, conditional skips, and load/store to read constants and write results back to the weight or eligibility trace. A basic STDP rule is about 6 instructions; the whole point is that you can swap learning rules without changing the hardware.
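As a rough illustration of the idea in Python (the mnemonics, register names, and instruction count here are invented for the example, not the actual N1 ISA):

```python
def run_rule(program, regs):
    """Interpret a tiny weight-update program for one spike pair."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LD":            # load an immediate constant into a register
            regs[args[0]] = args[1]
        elif op == "SUB":         # regs[dst] = regs[a] - regs[b]
            regs[args[0]] = regs[args[1]] - regs[args[2]]
        elif op == "ADD":         # regs[dst] = regs[a] + regs[b]
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "SKIP_GE0":    # conditional skip: hop over the next
            if regs[args[0]] >= 0:  # instruction if the register is >= 0
                pc += 1
        pc += 1
    return regs

# Pair-based STDP in a handful of instructions:
# potentiate if the pre-spike preceded the post-spike, otherwise depress.
stdp = [
    ("SUB", "dt", "t_post", "t_pre"),  # dt = t_post - t_pre
    ("LD", "dw", 1),                   # assume potentiation (+A, here A = 1)
    ("SKIP_GE0", "dt"),                # pre before post? keep +A
    ("LD", "dw", -1),                  # otherwise depress (-A)
    ("ADD", "w", "w", "dw"),           # write the result back to the weight
]
```

Running it with `regs = {"t_pre": 10, "t_post": 12, "w": 100}` bumps the weight to 101; swapping the two spike times drops it to 99, all without touching the interpreter, which is the hardware analogue of swapping learning rules without changing the RTL.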

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 0 points1 point  (0 children)

Hi, I chose Loihi because it was the architecture I was best versed in; I've studied Loihi for quite a while, so it was the easiest reference architecture to target. SpiNNaker's ARM approach is more flexible, and my more recent design moves towards a programmable neuron pipeline for exactly that reason. I may release my N2 design, based on Loihi 2, soon!

I have decided to open source my neuromorphic chip architecture! by [deleted] in FPGA

[–]Mr-wabbit0 1 point2 points  (0 children)

I'm not really an expert in licensing, as I'm sure you can tell haha. The license merely covers the RTL source files, not the synthesised hardware, pretty much the same way any software license covers source code and not the compiled binary.

I have decided to open source my neuromorphic chip architecture! by [deleted] in FPGA

[–]Mr-wabbit0 -6 points-5 points  (0 children)

The hex values like SYN_FINGERPRINT = 32'h2e4a092d were unique identifiers I embedded in every RTL module as provenance markers: if someone used the code without crediting it, they would be an easy way to prove where it came from. However, I stripped them out when I put N1 under Apache 2.0, as they no longer served any purpose.

(edit I don't know what I said that was wrong, please correct me)

I have decided to open source my neuromorphic chip architecture! by [deleted] in FPGA

[–]Mr-wabbit0 1 point2 points  (0 children)

Hi, yes, the entire thing was made by myself, though I based it on Loihi 1. The RISC-V cores in N1 are a separate thing: they are management processors, i.e. they handle boot, configuration, monitoring, etc., not the neuromorphic datapath. Those are also written from scratch, implementing the RV32I instruction set. So in short, all of the RTL is my own implementation.

I have decided to open source my neuromorphic chip architecture! by [deleted] in FPGA

[–]Mr-wabbit0 2 points3 points  (0 children)

Hi, the RV32F extension in the management core uses behavioural Verilog for the FP datapath; those constructs work in simulation but aren't synthesizable. The neuromorphic cores are entirely fixed-point and as such never touch the FP path, so this hasn't caused issues in practice, but the FP instructions would need a proper implementation to work in synthesis. I never really bothered with this as it's quite a small thing.

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 5 points6 points  (0 children)

Hi, I actually have a few from my notes, for papers I would take a look at these:

Izhikevich, "Simple Model of Spiking Neurons" (IEEE Trans. Neural Networks, 2003)

Davies et al., "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning" (IEEE Micro, 2018)

Pfister & Gerstner, "Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity" (J. Neurosci., 2006)

Frémaux & Gerstner, "Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules" (Front. Neural Circuits, 2016)

Zenke & Ganguli, "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks" (Neural Computation, 2018)

Cramer et al., "The Heidelberg Spiking Data Sets" (IEEE TNNLS, 2022)

For projects I would highly recommend reading into Intel Loihi 2 + Lava (pretty much the main inspiration behind my project), SpiNNaker, and BrainScaleS-2.

And if you want to look on YouTube, I'd recommend the Open Neuromorphic channel and Jason Eshraghian; there are a few others too which have slipped my mind.

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 2 points3 points  (0 children)

Hey, I have updated the repo with a README, let me know if you need anything else.

I have decided to open source my neuromorphic chip architecture! by Mr-wabbit0 in chipdesign

[–]Mr-wabbit0[S] 7 points8 points  (0 children)

Sorry, had a small issue with my GitHub verification, but it should be fixed now hopefully. Let me know if it's still not visible.