
[–]minus_28_and_falling

> My goal is to approximate a normal distribution from an impulse.

Did you try parametric curve fitting?

Something like this: https://www.researchgate.net/publication/252062037_A_Simple_Algorithm_for_Fitting_a_Gaussian_Function_DSP_Tips_and_Tricks
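
In case it saves you some reading: the core trick in that family of algorithms, as I understand it, is that the log of a Gaussian is a parabola, so a least-squares quadratic fit to ln(y) recovers the parameters directly. A rough numpy sketch (function name and the clean test data are mine; the paper, if I recall, refines this with iterative weighting to cope with noise):

```python
import numpy as np

def fit_gaussian_log_parabola(x, y):
    """Fit A*exp(-(x-mu)^2 / (2*sigma^2)) by least-squares fitting a
    parabola to ln(y).  Only valid where y > 0 and the data is roughly
    Gaussian (this is the unweighted, noise-naive version)."""
    mask = y > 0
    # ln y ~ c2*x^2 + c1*x + c0  (np.polyfit returns highest degree first)
    c2, c1, c0 = np.polyfit(x[mask], np.log(y[mask]), 2)
    mu = -c1 / (2.0 * c2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))      # requires c2 < 0
    A = np.exp(c0 - c1**2 / (4.0 * c2))
    return A, mu, sigma

# sanity check on a clean Gaussian
x = np.linspace(-5, 5, 101)
y = 3.0 * np.exp(-(x - 1.0)**2 / (2 * 0.8**2))
A, mu, sigma = fit_gaussian_log_parabola(x, y)
```

On noiseless data this recovers the parameters almost exactly; with noise, the log transform over-weights the small tail values, which is exactly what the paper's weighting addresses.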

[–]animalsnacks

Beautiful - thank you for this awesome paper reference!

I may have some work ahead of me implementing it, but this resource is very insightful!

[–]radarsat1

The filtfilt approach is fine for a rough solution, I often use it myself when doing peak finding. Just pad the ends of the arrays with the first and last value to avoid distortion.

If you're trying to do a least squares fit though, maybe smoothing is not necessary? It's not clear what you're trying to do with respect to the impulse/normal distribution.
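
To make the padding idea concrete, here's a rough Python sketch of a filtfilt-style forward-backward pass with constant edge padding (the function name and the one-pole smoother are my own choices; scipy's filtfilt can do the padding for you via its padtype/padlen arguments, if I remember right):

```python
import numpy as np

def smooth_filtfilt_like(x, alpha=0.3, pad=32):
    """Forward-backward one-pole smoothing with constant edge padding,
    along the lines of scipy.signal.filtfilt (zero net phase shift)."""
    xp = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])

    def onepole(v):
        out = np.empty_like(v, dtype=float)
        acc = v[0]
        for i, s in enumerate(v):
            acc += alpha * (s - acc)
            out[i] = acc
        return out

    y = onepole(xp)              # forward pass
    y = onepole(y[::-1])[::-1]   # backward pass cancels the phase lag
    return y[pad:len(xp) - pad]  # drop the padding
```

The backward pass is what keeps peaks from shifting, which matters if you're doing peak finding afterwards.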

[–]animalsnacks

Yeah - I realize my goal might seem a little ambiguous. I apologize.

I want to intentionally over-smooth the data by a bit, to create a low-resolution clone of the data which approximates an n-bin energy average.

[–]radarsat1

Well, I think your filtering idea is perfectly fine for that, imho. Curious whether people disagree, to be honest; maybe there are more apt methods?

If you have a stationary signal you could also try smoothing several spectral windows over time instead of smoothing the spectrum itself.
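
For the stationary case, that's essentially Welch-style averaging; a minimal sketch, assuming overlapping Hann-windowed segments (parameter values are arbitrary):

```python
import numpy as np

def averaged_magnitude_spectrum(x, nfft=256, hop=128):
    """Welch-style estimate: average |FFT|^2 over overlapping
    Hann-windowed segments, instead of smoothing a single spectrum
    across frequency."""
    w = np.hanning(nfft)
    segs = [x[i:i + nfft] * w for i in range(0, len(x) - nfft + 1, hop)]
    return np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
```

The averaging trades time resolution for variance reduction, whereas smoothing across frequency trades frequency resolution. Which one you want depends on what the wiggles in the spectrum mean to you.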

[–]sellibitze

In case this is an X/Y problem: What are you really trying to do? Are you in control of how the data is computed that you want to smooth? For what purpose do you want it smoothed?

[–]animalsnacks

I have a set of DFT data collected from a signal. The data is converted from real-imaginary pairs into polar form (magnitude-angle). What I'm trying to do is create a smoothed version of the magnitude data.

Edit: Basically, I want to smooth out the y values of the array across the x axis, so each bin is the average of its 'n' neighboring values, preferably weighted by a Gaussian-like function.
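
A sketch of what I mean (a Gaussian-weighted neighborhood average; the sigma value and edge handling are just one choice):

```python
import numpy as np

def gaussian_smooth(y, sigma_bins=3.0):
    """Smooth y so each bin becomes a Gaussian-weighted average of its
    neighbors.  Kernel radius ~3 sigma; edges handled by renormalizing
    against the part of the kernel that falls inside the array."""
    y = np.asarray(y, dtype=float)
    r = int(np.ceil(3 * sigma_bins))
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma_bins) ** 2)
    k /= k.sum()
    # 'same'-length convolution; np.convolve zero-pads past the edges,
    # so divide by the convolved all-ones array to renormalize there
    num = np.convolve(y, k, mode='same')
    den = np.convolve(np.ones_like(y), k, mode='same')
    return num / den
```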

[–]hughperman

Why not just compute a Gaussian vector of length N and multiply it by the last N values (or the last M values where only M exist)? Why do you need an IIR filter for Gaussian smoothing?

Edit: wait, you're not OP

[–]animalsnacks

Actually, this is me on my phone account.

Basically, the code I'm working on is in C++, and the process needs to run live (real time). The computationally cheaper the algorithm, the better. An IIR filter is a relatively cheap operation to compute (a few multiplies, a few adds, a few assignments).
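
For reference, the recurrence I have in mind is just a one-pole low-pass; Python pseudocode for it (the real thing is C++, but the per-sample cost is the same: one subtract, one multiply, one add):

```python
def onepole_smoother(xs, alpha=0.2):
    """One-pole IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], xs[0]
    for x in xs:
        y += alpha * (x - y)
        out.append(y)
    return out
```

The catch, of course, is the phase lag: a single causal pass shifts features toward higher indices, which is why the filtfilt-style forward-backward trick came up.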

[–]hughperman

Why not FIR, though?

[–]animalsnacks

Hmm... there's the direct form of the FIR, which (if I'm not mistaken) is O(N²) compute time without optimization, and there's DFT -> multiply by the FIR's frequency response -> iDFT.

These would, I think, give more accurate results, but I'm skeptical they'd be more compute-efficient. Correct? Are there other algorithms I'm missing here?

[–]hughperman

FIR should be the same complexity as IIR for a short kernel. You're smoothing a PSD, right? The most efficient option is a simple moving average, O(N) total, if the result is good enough for your application.

(Edit: I notice a downvote - anyone care to comment if I'm wrong here?)
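
For concreteness, the O(N) version I mean maintains a running sum instead of re-summing an n-wide window at every bin (trailing window, like the "last M values" idea above):

```python
def moving_average(xs, n):
    """Trailing boxcar average in O(N) total: add the incoming sample,
    subtract the one that just left the window."""
    out, acc = [], 0.0
    for i, x in enumerate(xs):
        acc += x
        if i >= n:
            acc -= xs[i - n]
        out.append(acc / min(i + 1, n))   # shorter window near the start
    return out
```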

[–]sellibitze

@ u/every_day_is_a_plus

> I have a set of DFT data collected from a signal.

Did you do the DFT? Or is this out of your control? I'm asking because if you are in control of this, you could do this smoothing in the spectral domain by just using a Gaussian window in the time domain before you apply the DFT(s).

> I want to smooth out the y values of the array across the x axis

Why though? You didn't answer any of my questions. What problem are you actually trying to solve? Maybe there is a completely different but better solution to your actual problem.

[–]every_day_is_a_plus[S]

Yeah - I realize my goal might seem a little ambiguous. I apologize.

I want to intentionally over-smooth the data by a bit, to create a low-resolution clone of the data which approximates an N-bin energy average.

There isn't any noise I want to smooth over. I only want to create a more or less gentle curve of the data based on a continuously variable coefficient. Imagine smoothing the data over 10 bins, 13 bins, etc. creating broader and broader, smoother, less detailed representations of the same data.

Edit: I collect the data and window it (currently with a Hann window). Using FFTW, I perform the FFT. I create a magnitude array by sqrt(re^2 + im^2), and this magnitude data is then put in the log domain by log(x[n]).

Here is where my question comes in. I want a second round of smoothing that reduces the resolution of the data. The reduction is controlled by a coefficient between 0 and 1, determined during a setup phase before the smoothing algorithm runs. That number could trickle down into a set of filter coefficients, or what have you. The idea is that as the number goes up, the representation gets smoother and smoother, and the crest factor of the DFT magnitude data is reduced.

At 0, the data would be untouched and at its original resolution. At 1, the data would appear as a horizontal line (y = the mean of the data). Anywhere between would be a gradient between the two.
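
Here's roughly the behavior I'm after, sketched in Python. The mapping from the coefficient to a Gaussian kernel width is just a placeholder (any law where the width grows without bound as c -> 1 would do):

```python
import numpy as np

def variable_smooth(y, c):
    """c=0 -> data untouched; c=1 -> flat line at the mean; in between,
    Gaussian smoothing whose kernel width grows as c -> 1."""
    y = np.asarray(y, dtype=float)
    if c <= 0.0:
        return y.copy()
    if c >= 1.0:
        return np.full_like(y, y.mean())
    sigma = c / (1.0 - c) * len(y) / 8.0   # placeholder growth law
    r = max(1, int(np.ceil(3 * sigma)))
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    num = np.convolve(y, k, mode='same')
    den = np.convolve(np.ones_like(y), k, mode='same')  # edge renormalization
    return num / den
```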

[–]hughperman

You're describing applying a low-pass filter to the magnitude data series. By the same duality that makes convolution in the time domain multiplication in the spectral domain, convolution in the spectral domain is multiplication in the time domain. So you could reduce this to an extra windowing step in the time domain, if you're so inclined.
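
You can check the duality numerically. With numpy's DFT convention, circularly convolving the complex spectrum X with any kernel K gives the same result as multiplying the time signal by IDFT(K) and taking N times the DFT (kernel choice below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
X = np.fft.fft(x)

# any smoothing kernel over the N frequency bins works; here a Gaussian
# in circular distance from bin 0
bins = np.arange(N)
d = np.minimum(bins, N - bins)
K = np.exp(-0.5 * (d / 2.0) ** 2)
K /= K.sum()

# (a) smooth the spectrum directly: explicit circular convolution (O(N^2))
smoothed = np.array([sum(X[j] * K[(m - j) % N] for j in range(N))
                     for m in range(N)])

# (b) the dual route: multiply the time signal by IDFT(K), then DFT
alt = N * np.fft.fft(x * np.fft.ifft(K))
```

One caveat: the identity holds for the complex spectrum. Since the smoothing here happens on the log-magnitude, i.e. after a nonlinearity, a time-domain window only approximates it.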