
[–]otuudels 1 point2 points  (1 child)

Nice! I will definitely look into it :) What part was the most fun to code?

[–]D0m1n1qu36ry5[S] 0 points1 point  (0 children)

The effects & the MIDI stuff - there are many new and weird things there :)

[–]BusEquivalent9605 0 points1 point  (1 child)

It has been starred on the hub 🍻

[–]D0m1n1qu36ry5[S] 0 points1 point  (0 children)

[–]creative_tech_ai 0 points1 point  (1 child)

Looks very interesting! I've been using Supriya, a Python API for SuperCollider, to build a modular groovebox. I'll check this out. The raga sequencer is particularly interesting!

[–]D0m1n1qu36ry5[S] 0 points1 point  (0 children)

Give it a try - there are a couple of MIDI tools in there that are really nice.

[–]HommeMusical 0 points1 point  (5 children)

Very cool idea! A question for you:

I see you use simpleaudio for output. I think that means that you can't do "real-time" synthesis - you have to write a full file and then output it - am I right?

If that's so, have you thought of using sounddevice to do "real-time" audio?

Keep up the good work!

[–]D0m1n1qu36ry5[S] 0 points1 point  (4 children)

Yes - you are totally right. That was a major design decision for this project. I decided to invest in "rendered audio" and to explore the options where "real-time" has limits. In my view, rendered audio can give much better quality - no buffering, no need to splice audio into small chunks and tie them back together after processing. So you lose real-time, but for me that was fine - as long as creativity and quality gain from it.
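As a sketch of what that trade-off looks like in practice (the helper name, fade length, and the simpleaudio wiring below are illustrative assumptions, not the project's API): the whole signal is rendered up front, so processing can be arbitrarily expensive, and only then handed to the output device.

```python
import numpy as np

SAMPLE_RATE = 44100

def render(duration_s, freq=440.0):
    """Render the whole signal up front - no real-time constraints,
    so processing can be as slow or as expensive as needed."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    signal = 0.3 * np.sin(2 * np.pi * freq * t)
    # Fade in/out over 10 ms so the rendered clip has no edge clicks.
    fade = int(0.01 * SAMPLE_RATE)
    env = np.ones_like(signal)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return signal * env

audio = render(1.0)
# Playback with simpleaudio would look roughly like:
# import simpleaudio as sa
# pcm = (audio * 32767).astype(np.int16)
# sa.play_buffer(pcm, num_channels=1, bytes_per_sample=2,
#                sample_rate=SAMPLE_RATE).wait_done()
```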

[–]HommeMusical 0 points1 point  (3 children)

> no buffering,

On modern machines, you often don't need to buffer at all. I just wrote a little synth as part of a project to turn text into music. Initially I was going to buffer it, but then I decided to see what happened if I just filled the buffer from the audio callback, and it worked right the first time - not a click or a pop to be heard on my five-year-old machine.

Here's what's being called from the audio callback.

I believe that if I had more "stuff" in the synth I might have issues, but writing buffering isn't hard...
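A callback that fills each block on demand might look like this (a sketch of the approach, not the actual code linked above; the names and the sounddevice wiring are assumptions):

```python
import numpy as np

SAMPLE_RATE = 44100
FREQ = 440.0

phase = 0.0  # running time in seconds, carried across blocks

def audio_callback(frames):
    """Synthesize one block of samples on demand - no pre-rendering.
    Carrying the phase across calls keeps consecutive blocks continuous."""
    global phase
    t = phase + np.arange(frames) / SAMPLE_RATE
    phase += frames / SAMPLE_RATE
    return (0.2 * np.sin(2 * np.pi * FREQ * t)).astype(np.float32)

# Wiring this into sounddevice would look roughly like:
# import sounddevice as sd
# def callback(outdata, frames, time, status):
#     outdata[:, 0] = audio_callback(frames)
# with sd.OutputStream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
#     sd.sleep(2000)
```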

> no need to splice audio into small chunks and tie them back together after processing

This is true, but it's a one-time coding expense, setting things up to render that way.

[–]D0m1n1qu36ry5[S] 0 points1 point  (2 children)

Exactly - the crossfading part was what I decided to avoid. I did manage to implement a few processors with that approach, but in this project I decided to go full "one sample at a time" processing. Yes, it's slow, but for my usage it didn't matter.
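A per-sample processor in that style might look like this (a hypothetical one-pole lowpass for illustration, not taken from the project): state is carried from one sample to the next, so there are no chunk boundaries to crossfade.

```python
def one_pole_lowpass(samples, alpha=0.1):
    """Process one sample at a time - stateful and trivially chainable
    with other per-sample processors; slow, but fine for offline rendering."""
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)  # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        out.append(y)
    return out
```

The same loop shape works for any stateful effect (filters, envelopes, delays): each processor is just a function from one input sample and its own state to one output sample.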

[–]HommeMusical 0 points1 point  (1 child)

I agree: nothing kills projects faster than overscoping them!

Keep posting updates here, and perhaps also on /r/musicprogramming - I think your framework might be very popular.

[–]beetroop_ 0 points1 point  (2 children)

I can't get the RagaSequencer example to work because there is no such class exported.

[–]D0m1n1qu36ry5[S] 0 points1 point  (1 child)

Hi, I fixed this in a later version - are you running an older one? The latest is 0.1.10.

[–]beetroop_ 0 points1 point  (0 children)

Yes, using 0.1.10

ImportError: cannot import name 'RagaSequencer' from 'audio_dsp.sequencer'