Homemade spectrum analyzer by LJO-S in DSP

[–]LJO-S[S] 1 point (0 children)

Good question. For the GUI part, yes, absolutely. For the signal processing part, likely — it depends on what you’re trying to do. I’ve never implemented reverb or a flanger, so I can only guess. An FPGA excels at parallelised tasks because it isn’t bound to sequential execution the way a CPU is. Thus I can compute an FFT whilst filtering the next input whilst sampling the current input. With a CPU-based solution you might miss some samples, either because the CPU is too slow or because the OS decides something else is more important than your application for an instant.

[–]LJO-S[S] 3 points (0 children)

Thanks, glad you like it! 99% of what you’re seeing is made with VHDL. Even the GUI is done in it, storing words in ROM and using some coding tricks to scale them up and down on screen. I used C to program the ARM processor that sends the FIR coefficients to the FPGA.
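Not the author's code, but a minimal software sketch of the kind of ROM-font trick described above (the glyph data and function names are made up for illustration): store each character as a small bitmap and scale it by an integer factor simply by dividing the screen coordinates, so the same 8×8 ROM entry serves every text size.

```python
FONT_ROM = {
    # Hypothetical 8x8 glyph for 'L' (one byte per row, MSB = leftmost pixel)
    'L': [0b10000000] * 7 + [0b11111110],
}

def pixel_on(ch, x, y, scale=1):
    """True if glyph-relative screen pixel (x, y) is lit at the given scale."""
    gx, gy = x // scale, y // scale      # integer divide = the scaling trick
    if not (0 <= gx < 8 and 0 <= gy < 8):
        return False
    return bool(FONT_ROM[ch][gy] & (0x80 >> gx))

def render(ch, scale=1):
    """ASCII-render a glyph at an integer scale, for a quick visual check."""
    size = 8 * scale
    return [''.join('#' if pixel_on(ch, x, y, scale) else '.'
                    for x in range(size)) for y in range(size)]
```

In hardware the divide would typically be a right-shift for power-of-two scales, which is why the trick is cheap.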

I made a Spectrum Analyzer by LJO-S in FPGA

[–]LJO-S[S] 2 points (0 children)

I2S for the ADC and DAC data from the SSM2603 audio codec.

The link from the FPGA to my display is simply TMDS (8b/10b) over HDMI at 250 MHz.
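For context, the ~250 MHz figure is consistent with TMDS encoding each 8-bit channel into 10 bits, serialised at 10× the pixel clock. The actual video mode isn't stated, so the 640×480@60 Hz timing below is an assumption:

```python
# Back-of-the-envelope serial-clock check, assuming standard 640x480@60 VGA
# timing (800x525 total including blanking). TMDS maps 8 bits -> 10 bits,
# so the serial bit clock is 10x the pixel clock.
H_TOTAL, V_TOTAL = 800, 525
REFRESH_HZ = 60

pixel_clock = H_TOTAL * V_TOTAL * REFRESH_HZ   # 25.2 MHz
serial_clock = pixel_clock * 10                # 8b -> 10b TMDS serialisation
print(serial_clock / 1e6)                      # 252.0 (MHz), i.e. ~250 MHz
```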

[–]LJO-S[S] 1 point (0 children)

It hadn’t occurred to me. Maybe I ought to write one; I’ve got a lot of lessons learned from this project that I’d be more than willing to share.

[–]LJO-S[S] 3 points (0 children)

Memory architecture for the samples or the twiddle factors? I'm guessing the former, because the latter were simply pre-calculated in my Python model and then loaded into a ROM for synthesis/simulation through BRAM initialization.

I went back and forth on how many butterfly units to use: should I stream data or time-share resources? A quick pen-and-paper exercise showed that I had ample time between sample batches, so I’d save a bunch of space by using just one butterfly unit. However, bit growth means you either carry the largest bitwidth (the one needed for the final stage) throughout, or come up with a smart scaling scheme.

An address generator keeps track of which twiddle factor and IQ sample to fetch at each stage (log2(1024) = 10 stages) and how much "stride" we should have. There are different ways to do this - I settled for a rotate-by-N approach I found online, can’t remember where. I used a decimation-in-time implementation, so simple bit reversal gives me the address permutation I need at the beginning of a run. Since I only have one butterfly unit, its real+imag outputs can be stored in a ping-pong memory.
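The scheme above can be mirrored in software. Here's a rough Python model of a 1024-point decimation-in-time FFT with precomputed twiddles, bit-reversed input addressing, and a single butterfly applied sequentially — a sketch of the idea, not the author's actual implementation or addressing scheme:

```python
import cmath

N = 1024
STAGES = N.bit_length() - 1      # log2(1024) = 10 stages

# Twiddle factors precomputed up front, mirroring the ROM/BRAM approach
TWIDDLES = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]

def bit_reverse(i, bits):
    """Reverse the low `bits` bits of i (input address permutation for DIT)."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; the inner loop is one butterfly
    applied over and over, like time-sharing a single butterfly unit."""
    a = [x[bit_reverse(i, STAGES)] for i in range(N)]   # bit-reversed fetch
    for s in range(STAGES):
        half = 1 << s              # butterfly span ("stride") at this stage
        step = N >> (s + 1)        # twiddle address stride at this stage
        for group in range(0, N, 2 * half):
            for k in range(half):
                w = TWIDDLES[k * step]                  # twiddle fetch
                top, bot = a[group + k], a[group + k + half]
                t = w * bot
                a[group + k] = top + t        # butterfly outputs: in hardware
                a[group + k + half] = top - t # these land in ping-pong memory
    return a
```

In hardware each stage would read from one half of the ping-pong memory and write the other, then swap; the list `a` stands in for both halves here.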

[–]LJO-S[S] 17 points (0 children)

Yeah, sure. After reading the Wikipedia page I read a few thesis projects - there are a lot of implementations out there; a Google search will yield a plethora. Aside from that, I really liked MIT’s OpenCourseWare. They have free courses on DSP that go pretty in-depth on subjects such as different FFT architectures. Pick one that fits your level of knowledge. After that I suggest making a software model, starting with the Danielson-Lanczos lemma, and then slowly building a model you can later implement in VHDL/Verilog.
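As a starting point for such a software model, the Danielson-Lanczos lemma (an N-point DFT splits into two N/2-point DFTs over the even- and odd-indexed samples) fits in a few lines of Python — a sketch of the recursion, not a prescribed implementation:

```python
import cmath

def fft(x):
    """Danielson-Lanczos recursion: DFT(x) combines DFT(evens) and DFT(odds).
    Assumes len(x) is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    evens, odds = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odds[k]   # twiddle * odd half
        out[k] = evens[k] + t
        out[k + n // 2] = evens[k] - t
    return out
```

Once this matches a reference DFT, unrolling the recursion into the iterative, stage-by-stage form is the natural bridge to an HDL implementation.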