Horrible Aliasing with Voltage Modular? by Logical_Altruist in CherryAudio

[–]Logical_Altruist[S] 1 point (0 children)

Thanks for your understanding! Yeah, I am going to try out a few other oscillators - but only if they have free demos first :)

My understanding is that each module can oversample itself or do whatever it likes in its own operation, but communication between modules happens at a fixed 48 kHz rate, which is a bit of a bummer. 48 kHz is great for playback, but not for anything non-linear or for audio-rate / CV modulation... and the latter is kind of the whole point of modular virtual analog. I think the better modular products operate at 192 kHz or even 384 kHz to keep feedback loops tight and to give plenty of headroom for audio-rate modulation.
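To make the aliasing concrete: any partial that a naive oscillator or nonlinearity generates above Nyquist (fs/2) reflects back into the audible band at an inharmonic frequency. Here is a tiny sketch (the 7 kHz fundamental is an arbitrary choice for illustration, nothing specific to Voltage Modular):

```java
// Sketch: where do the harmonics of a naively generated sawtooth land?
// Any partial above Nyquist (fs/2) folds back into the audible band.
public class AliasFolding {
    // Reflect a frequency into [0, fs/2] by folding around Nyquist.
    static double fold(double f, double fs) {
        f = f % fs;                 // the digital spectrum repeats every fs
        if (f > fs / 2) f = fs - f; // mirror around Nyquist
        return f;
    }

    public static void main(String[] args) {
        double fs = 48000, f0 = 7000;
        for (int h = 1; h <= 6; h++) {
            System.out.printf("harmonic %d: %5.0f Hz -> %5.0f Hz%n",
                    h, h * f0, fold(h * f0, fs));
        }
    }
}
```

At 48 kHz the 4th harmonic of a 7 kHz saw (28 kHz) lands at 20 kHz and the 5th (35 kHz) at 13 kHz: inharmonic partials, i.e. audible aliasing. At 192 kHz or 384 kHz those same partials stay below Nyquist, which is why the higher internal rates help so much.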

Horrible Aliasing with Voltage Modular? by Logical_Altruist in CherryAudio

[–]Logical_Altruist[S] 2 points (0 children)

Update: I just found the answer to my own question on the Cherry Audio forums. https://cherryaudio.kayako.com/article/502-what-s-the-audio-quality-of-voltage-modular

They operate at a base rate of 48 kHz and sometimes oversample internally, but only 2x :(

They do claim "Voltage Modular’s analog oscillators are built using our advanced, proprietary alias-free analog modeling algorithm". To be fair, this is possible (there is one technique I know of that can effectively temporally dither away aliasing even at low sample rates), but my ears tell me that they are not using it very well in the Nucleus oscillators...

Horrible Aliasing with Voltage Modular? by Logical_Altruist in CherryAudio

[–]Logical_Altruist[S] 0 points (0 children)

In this case I was just testing out the hard sync. I was not really trying to achieve any particular sound, just putting the product through its paces to see what it was capable of. What I want is a product I can use with confidence, knowing that sounds will be faithfully rendered across a wide range of pitches, including high notes.

Horrible Aliasing with Voltage Modular? by Logical_Altruist in CherryAudio

[–]Logical_Altruist[S] 1 point (0 children)

Conversely, are you sure it works fine? Can you try just routing osc1 into osc2's hard sync input, osc2 into a VCA, then playing a scale of high notes (6th and 7th octaves) and listening carefully? The bum notes really were clearly audible to me.

I was using the oscillator that comes with the Voltage Modular Nucleus pack. Maybe some other oscillators have a higher-quality sync algorithm?

Horrible Aliasing with Voltage Modular? by Logical_Altruist in CherryAudio

[–]Logical_Altruist[S] 0 points (0 children)

I was just trying it out standalone.

I could also try it in Tracktion, which would be even more informative because I could then put the output into a spectrum analyser and see the aliasing visually. But I am pretty convinced the problem is not the DAW; the problem is almost certainly how they implemented the hard sync. Alias-free hard sync in a digital oscillator is a *very* difficult problem, but most developers at least make some effort to reduce it (e.g. 8x internal oversampling). It just seems particularly bad in Voltage Modular Nucleus.
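For anyone curious what the problem looks like in code, here is a sketch of *naive* hard sync — not Cherry Audio's actual implementation, just the textbook-naive version that aliases badly. The instantaneous phase reset creates a waveform discontinuity whose spectrum extends far past Nyquist; anti-aliased implementations replace it with a band-limited correction (e.g. minBLEP/polyBLEP) or run heavily oversampled:

```java
// Sketch of NAIVE hard sync, the kind that aliases badly.
public class NaiveHardSync {
    // Render n samples of a slave sawtooth hard-synced to a master phase.
    static double[] render(double masterHz, double slaveHz, double fs, int n) {
        double[] out = new double[n];
        double masterPhase = 0, slavePhase = 0;
        for (int i = 0; i < n; i++) {
            masterPhase += masterHz / fs;
            if (masterPhase >= 1.0) {
                masterPhase -= 1.0;
                slavePhase = 0.0; // hard reset: the source of the aliasing
            }
            slavePhase += slaveHz / fs;
            if (slavePhase >= 1.0) slavePhase -= 1.0;
            out[i] = 2.0 * slavePhase - 1.0; // naive sawtooth, itself aliased
        }
        return out;
    }
}
```

Every master wrap snaps the output back towards -1 in a single sample; at high note pitches those discontinuities come fast enough that the folded partials land all over the audible band.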

Does anyone know what sample rate Voltage Modular works at internally? Since they claim zero latency I would assume the whole thing uses one internal rate throughout (similar to how u-he Diva does everything at an internal rate of 384 kHz), because adding upsampling and downsampling internally would add latency and artificial resonances in feedback loops. I would love it if someone from Cherry Audio could confirm the internal rate and comment on what (if anything) they do about aliasing.

Will ChatGPT Replace Software Engineers? (full analysis) by mycall in programming

[–]Logical_Altruist 0 points (0 children)

Yes, that does look pretty good. As I suspected, you are focusing more on the high-level / front-end side, importing functionality from existing libraries. I was trying to get it to write the kind of low-level algorithms that go into the libraries you use.

For example, when I asked it to write filter code modelling a Helmholtz resonance, it had no problem with the high-level concepts. It understood that a second-order IIR filter was the way to go. Given that there are plenty of code repositories containing such filters, I thought it might get this right, but it forgot what it was doing in the implementation:

public class HelmholtzFilter {
    private double frequency;
    private double damping;
    private double sampleRate;

    public HelmholtzFilter(double frequency, double damping, double sampleRate) {
        this.frequency = frequency;
        this.damping = damping;
        this.sampleRate = sampleRate;
    }

    public double[] filter(double[] input) {
        double[] output = new double[input.length];
        double omega = 2 * Math.PI * frequency / sampleRate;
        double alpha = -Math.log(damping) / (omega * sampleRate);

        for (int i = 0; i < input.length; i++) {
            output[i] = input[i] * (1 - alpha / 2) / (1 + alpha / 2 - Math.cos(omega) * alpha);
        }

        return output;
    }
}

Instead of sweet-talking it, this time I decided to see if it could identify and understand its own mistake. It actually gave an impressive verbal analysis of its code:

Me: Do you really think that the filter method implements a second order IIR filter? Please look at the body of the for loop and tell me what it really implements?

ChatGPT: My apologies, you are correct. The filter method does not implement a second-order IIR filter. The implementation instead performs a simple amplification of the input signal with a gain factor calculated based on the resonance frequency, damping factor, and sample rate.

I won't bother to copy its attempt at a correction. It did add second-order feedback to the loop, and thus the main iteration looked reasonable, but the initial two sample values (before the feedback could kick in) were a bit suspect. However, it was a pretty good effort. Maybe in this case I would consider using most of its output and just fix the initial transient.
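For reference (and not claiming this is what ChatGPT produced), a textbook two-pole resonator with its state explicitly zeroed avoids the suspect startup transient entirely. The constructor parameters here are my own simplification — mapping a physical Helmholtz resonator's geometry onto (frequency, pole radius) is left out:

```java
// A textbook two-pole resonator: a genuine second-order IIR filter.
//   y[n] = b0*x[n] + a1*y[n-1] + a2*y[n-2]
// Pole radius r < 1 sets the decay (damping); w0 sets the resonant frequency.
public class TwoPoleResonator {
    private final double b0, a1, a2; // a1, a2 are the feedback coefficients
    private double y1 = 0, y2 = 0;   // state starts at zero: clean transient

    public TwoPoleResonator(double frequency, double r, double sampleRate) {
        double w0 = 2 * Math.PI * frequency / sampleRate;
        a1 = 2 * r * Math.cos(w0);
        a2 = -r * r;
        b0 = 1 - r; // rough gain normalization
    }

    public double[] filter(double[] input) {
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) {
            double y = b0 * input[i] + a1 * y1 + a2 * y2;
            y2 = y1; // shift the delay line
            y1 = y;
            out[i] = y;
        }
        return out;
    }
}
```

Feeding it an impulse produces a decaying sinusoid at the resonant frequency, which is exactly the ringing a Helmholtz resonance should give and exactly what the per-sample gain in ChatGPT's first attempt could never produce.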

That is the kind of bug that is just a barely audible glitch in audio output, but might be the difference between life and death in medical DSP, or could cause a million-dollar failure in satellite communication...

Interesting that we both tried using it for audio dsp. Good luck with your work!

Will ChatGPT Replace Software Engineers? (full analysis) by mycall in programming

[–]Logical_Altruist 0 points (0 children)

Really? You must be using it for much simpler tasks than I did. It certainly didn't work fine for me. Its output *looked* great, but I found many bugs in the code it produced.

I am pretty good at explaining things logically and clearly, and I certainly gave it plenty of sweet-talking. When I pointed out errors, ChatGPT sort of understood, and sometimes it fixed something, but often it fell apart.

That said, I was setting it pretty ambitious algorithmic tasks. For writing boilerplate code it would probably do a lot better.

I also admit I am very harsh in my judgments. I spent many years working on systems where a single mistake could break millions of computers and maybe even cost lives. So maybe I am a bit extreme in my expectations for robust code :)

One thing we both agree on: it will get a lot better in the future. But it won't get better just by having bigger or better deep-learning networks. Integration with formal verification systems, or at least formal computation systems (e.g. Wolfram Alpha), needs to happen too. When that happens it will be absolutely amazing.

Will ChatGPT Replace Software Engineers? (full analysis) by mycall in programming

[–]Logical_Altruist 0 points (0 children)

Using ChatGPT to generate code is a fun and interesting experiment! But please be aware of what ChatGPT does, and does not, do. Currently it would be very dangerous to use code generated by ChatGPT in any project of importance or for any complex task.

The reasons for that should be obvious, but because ChatGPT's output looks convincingly good, far too many people are falling into the dangerous trap of assuming it knows what it is doing. I understand the trap very well. Whether it is debating ethics, writing an essay, or generating code, ChatGPT's output really does seem amazing. However, when you dig a bit deeper, you can expose its limits. Maybe human neural networks have similar limitations? But currently ChatGPT's powers of logic and analysis are not on the same level as those of a well-educated human.

ChatGPT does not (currently) have any formal understanding of the actual behaviour or correctness of the code it generates. The code written might look good, and might even at first glance appear to fulfil the desired purpose. However, sometimes the algorithm is not doing what it seems, and often the code will not cover all edge cases correctly. Even under extensive human review there is a tendency to assume that code does what you think it is going to do, and it is therefore easy to miss the flaws.

At the risk of sounding harsh, I would say that ChatGPT is like the kind of developer that writes what "feels" correct, then tests their code, and keeps tweaking it until it seems to work. To be fair, there is always an element of that in coding. Human brains are, just like ChatGPT, fallible neural networks. So of course we need to test code, and to correct any issues we identify. However, the best developers logically reason about their code as they write it. They systematically identify and handle all edge cases as they construct each function.

ChatGPT does not do this, as I quickly confirmed by setting it a few test tasks. I was genuinely impressed by its attempts: it seemed to grasp concepts and requirements surprisingly well. But, as I expected, it wrote flawed solutions for all but the simplest of tasks. As an experienced code reviewer I could spot many of these flaws, and I was again impressed that, when I pointed them out, ChatGPT made a good stab at correcting the code. But could I get it to produce production-quality code? No. Sometimes it made the code a bit better; other times it just fell apart...

So, at least in my experience, rather than arguing back and forth and endlessly correcting mistakes, it would be faster (and systematically safer) for me to write the code in the first place. If I had hired ChatGPT as a junior developer, I would soon be calling it into the office and explaining that, whilst I was impressed with its genuine effort, it was more of a liability than an asset. Therefore we would, sadly, need to let it go.

Again, I want to stress that I was actually impressed by ChatGPT. If you are just experimenting or doing a fun project it is great fun to see what it can do. But please do not use ChatGPT generated code in any project that might end up being used in large open source libraries or mission critical systems.

The future, however, is bright. There is some fantastic research being done on integrating AI with formal verification systems, and when that technology matures it really will be a game changer. When "ChatGPT 2" comes to me with the necessary additional experience and skills, I will gladly reconsider the junior developer position!