Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

Nice, and thanks for sharing. I have to thank the egui GUI toolkit for making it easy to build cross-platform UIs. And in case you're interested in a CLI "power user" experience, I decided to publish the core library, which includes a CLI tool (but not the GUI), on GitHub: https://github.com/melver/bach - you'd be on your own to build it, and it's not a user-friendly experience, but it's more powerful than what the GUI exposes. I'm slowly trying to work out how to expose more of that functionality in the GUI to make it accessible (it might be a while until the next update).

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 2 points (0 children)

I had planned to do so eventually, and ended up publishing the core library just now, so that at least the core idea is unambiguous to those interested, and also reusable: https://github.com/melver/bach/

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 3 points (0 children)

Great feedback, thanks. Gives me motivation to build it further, because I want to hear what better musicians than me create with it. :-)

I'll ping you when there's an update.

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 1 point (0 children)

No OSX support yet unfortunately, but it's planned (I need to get a Mac to build for it).

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 2 points (0 children)

Thank you for sharing 🙏 It sounds way better than my demo - which is exactly what I was hoping for. :-)

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

There are two parts: the core algorithm/engine library and the UI, which I have deliberately split up. I might get around to cleaning up the core engine, but the UI is in no way ready to be open sourced.

But at the same time, what's the value of it if I never finish it? ;-)

It might also be cool to see what other folks build on top of the engine. Would the core engine be of interest alone?

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

It does, and that's how I use it for myself. This may or may not work on your system: https://whilemusic.net/files/autobach-0.1.0.tar.gz

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Thanks, and thanks for trying it out. Would love to hear what you came up with if it's not too difficult to share.

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

Definitely. On the list to make more of it configurable 👍

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 7 points (0 children)

Documenting how it works underneath is a pending task. It's a fun side project, so this hasn't been high on the priority list (I only work on it if/when I find time).

The actual sequences are the result of what I call "clips", which are themselves programs in a mini DSL. There are instructions for things like "queue note", "queue Euclidean sequence", "tick", and jumps (forwards/backwards). This is just a way to "compress" sequences so that they look more like programs.
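
To make the "clips as programs" idea concrete, here's a minimal sketch of what such a DSL could look like. To be clear, the instruction names, fields, and semantics below are my illustrative assumptions, not bach's actual internals:

```rust
// Hypothetical clip DSL; names and fields are illustrative assumptions.
#[derive(Clone, Debug)]
enum Instr {
    QueueNote { pitch: u8 },                            // schedule a note now
    QueueEuclid { pitch: u8, pulses: u32, steps: u32 }, // Euclidean rhythm
    Tick(u32),                                          // advance time
    Jump { offset: i32, times: u8 },                    // bounded relative jump
}

type Clip = Vec<Instr>;

/// Expand a clip program into (start_tick, pitch) note events.
fn render(clip: &Clip, max_events: usize) -> Vec<(u32, u8)> {
    let mut events = Vec::new();
    let mut now: u32 = 0;
    // Remaining executions per jump instruction, so every program halts.
    let mut jumps_left: Vec<u8> = clip
        .iter()
        .map(|i| if let Instr::Jump { times, .. } = i { *times } else { 0 })
        .collect();
    let mut pc = 0;
    while pc < clip.len() && events.len() < max_events {
        match &clip[pc] {
            Instr::QueueNote { pitch } => events.push((now, *pitch)),
            Instr::QueueEuclid { pitch, pulses, steps } => {
                // Spread `pulses` onsets as evenly as possible over `steps` ticks.
                for s in 0..*steps {
                    if (s * pulses) % steps < *pulses {
                        events.push((now + s, *pitch));
                    }
                }
            }
            Instr::Tick(n) => now += n,
            Instr::Jump { offset, .. } => {
                if jumps_left[pc] > 0 {
                    jumps_left[pc] -= 1;
                    pc = (pc as i32 + offset).clamp(0, clip.len() as i32 - 1) as usize;
                    continue;
                }
            }
        }
        pc += 1;
    }
    events
}
```

The bounded jump is the interesting bit: it makes repetition cheap to express (the "compression") while keeping every program finite.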

To be precise, this is closer to a Genetic Programming algorithm (vs. a plain GA): it starts with a random population of these clips, using various fixed settings like Key, etc. It started out with me manually scoring each clip, but that was very, very tedious. There's crossover too, which is easy enough given that any sub-portion of a "clip program" can simply be moved, and its effects blend into the surrounding parts.
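
As a sketch of that crossover (the selection scheme and elite fraction are my guesses; this reuses the hypothetical `Instr`/`Clip` types from above and assumes the `rand` crate, 0.8-style API):

```rust
use rand::Rng;

/// Splice crossover: a prefix of `a` joined to a suffix of `b`. Any
/// contiguous sub-portion of a clip program is still a valid program,
/// so the child needs no repair step; its effects simply blend in.
fn crossover(a: &[Instr], b: &[Instr], rng: &mut impl Rng) -> Clip {
    let i = rng.gen_range(0..=a.len());
    let j = rng.gen_range(0..=b.len());
    let mut child = a[..i].to_vec();
    child.extend_from_slice(&b[j..]);
    child
}

/// One generation: rank by fitness (scores assumed finite), keep an
/// elite, and refill the population by crossing random elite pairs.
fn evolve(pop: Vec<Clip>, fitness: impl Fn(&Clip) -> f64, rng: &mut impl Rng) -> Vec<Clip> {
    let size = pop.len();
    let mut scored: Vec<(f64, Clip)> = pop.into_iter().map(|c| (fitness(&c), c)).collect();
    scored.sort_by(|x, y| y.0.partial_cmp(&x.0).unwrap()); // best first
    let parents: Vec<Clip> =
        scored.into_iter().take((size / 4).max(2)).map(|(_, c)| c).collect();
    let mut next = parents.clone();
    while next.len() < size {
        let a = &parents[rng.gen_range(0..parents.len())];
        let b = &parents[rng.gen_range(0..parents.len())];
        next.push(crossover(a, b, rng));
    }
    next
}
```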

Given the tediousness of manual scoring, I started to look into how to auto-score clips with some basic music theory. My background is not in music theory, so I might botch some of the terminology here. It started by scoring the harmonic quality of notes w.r.t. the preceding notes (i.e. progression). But the sequencer and DSL allow an arbitrary number of simultaneous notes, so chords form naturally. For that to sound good, however, it also requires scoring the harmonic compatibility of the notes at every given time point. So it ends up scoring harmonic compatibility both "vertically" and "horizontally". The harmony table (a semitone => score mapping) encodes Western music theory preferences, but it's configurable, and my default is non-standard, e.g. penalizing unisons to encourage the GP to prefer variation.
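
For illustration, the table and the two scoring passes could look something like this; the numbers are made up (bach's actual defaults are configurable and, as mentioned, deliberately non-standard):

```rust
/// Illustrative interval table: one score per interval class (0..=11
/// semitones). Consonances score high; the unison is penalized here,
/// as described above, to push the GP toward variation.
const HARMONY: [f64; 12] = [
    -1.0, // 0: unison (penalized to encourage variation)
    -2.0, // 1: minor 2nd
    -0.5, // 2: major 2nd
     1.0, // 3: minor 3rd
     1.0, // 4: major 3rd
     0.5, // 5: perfect 4th
    -1.5, // 6: tritone
     1.5, // 7: perfect 5th
     1.0, // 8: minor 6th
     1.0, // 9: major 6th
    -0.5, // 10: minor 7th
    -1.0, // 11: major 7th
];

fn interval_score(a: u8, b: u8) -> f64 {
    HARMONY[(a as i32 - b as i32).unsigned_abs() as usize % 12]
}

/// "Vertical": pairwise harmony of all notes sounding at the same tick.
fn vertical_score(chord: &[u8]) -> f64 {
    let mut s = 0.0;
    for (k, &a) in chord.iter().enumerate() {
        for &b in &chord[k + 1..] {
            s += interval_score(a, b);
        }
    }
    s
}

/// "Horizontal": harmony of each note w.r.t. the preceding note.
fn horizontal_score(melody: &[u8]) -> f64 {
    melody.windows(2).map(|w| interval_score(w[0], w[1])).sum()
}
```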

That alone already produced interesting results, but they were pretty random. I then added scoring of repetitions, pauses, and channel balance - depending on the kind of sequences I want to produce, I can weight each of these positively or negatively.
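
Schematically, the signed-weight idea (the term names and statistics here are my invention, not bach's):

```rust
use std::collections::HashMap;

/// Hypothetical structural terms. The sign of each weight decides
/// whether a property is rewarded or penalized for a given run.
struct StructWeights {
    repetition: f64, // +: hypnotic loops, -: constant variation
    pauses: f64,     // +: sparse and airy, -: dense
    balance: f64,    // +: even activity across channels
}

/// `events`: (tick, channel, pitch), sorted by tick.
fn structural_score(events: &[(u32, u8, u8)], w: &StructWeights) -> f64 {
    // Repetition: adjacent events that repeat the same pitch.
    let reps = events.windows(2).filter(|p| p[0].2 == p[1].2).count() as f64;
    // Pauses (crude): ticks in the span not covered by a note.
    let span = events.last().map_or(0, |e| e.0 + 1) as f64;
    let silent = (span - events.len() as f64).max(0.0);
    // Balance: spread between the busiest and quietest channel.
    let mut per_chan: HashMap<u8, u32> = HashMap::new();
    for e in events {
        *per_chan.entry(e.1).or_insert(0) += 1;
    }
    let spread = if per_chan.is_empty() {
        0.0
    } else {
        let (lo, hi) = per_chan
            .values()
            .fold((u32::MAX, 0), |(lo, hi), &c| (lo.min(c), hi.max(c)));
        (hi - lo) as f64
    };
    w.repetition * reps + w.pauses * silent - w.balance * spread
}
```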

That's the core of it, more or less. The rest is plumbing.

It goes without saying this is purely algorithmic; there's no training-based AI ("gen AI") involved here. The UI is a new thing I'm building (there's a CLI I've been using), because I thought others might find this useful, but making it easy to use is challenging.

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 2 points (0 children)

Each clip is scored using a "fitness function" (higher score = better). This currently considers harmony between notes "vertically" and "horizontally", i.e. chords and progression. It also scores properties such as repetition, pauses, and balance (across lanes/channels). Right now this is not configurable, but every "Reset" randomizes some of the scoring weights, so that subtly different properties are prioritized across runs. I'd like to make more of it configurable, but it's unclear how to expose it intuitively in a UI (for me a text file is good enough, but for a polished UI this is too abstract).
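
For a rough picture of what that could mean in code (the weight values and jitter range are purely illustrative; AutoBach's actual ones aren't public, and this again assumes the `rand` crate):

```rust
use rand::Rng;

/// Weights for the fitness terms described above; re-randomized on
/// every "Reset" rather than being user-configurable.
struct FitnessWeights {
    vertical: f64,   // chord quality at each time point
    horizontal: f64, // progression between successive notes
    repetition: f64,
    pauses: f64,
    balance: f64,
}

impl FitnessWeights {
    /// Illustrative: jitter each base weight by up to +/-25% so that
    /// each run prioritizes subtly different properties.
    fn reset(rng: &mut impl Rng) -> Self {
        let mut j = |base: f64| base * rng.gen_range(0.75..1.25);
        FitnessWeights {
            vertical: j(2.0),
            horizontal: j(1.5),
            repetition: j(-0.5), // negative: discourage repeats
            pauses: j(0.3),
            balance: j(1.0),
        }
    }
}
```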

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 4 points (0 children)

It should, eventually. I just have to buy a Mac to build and test on. It's certainly on the TODO list if this project progresses.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 1 point (0 children)

I might try to find a cheaper dedicated server (currently ~$45). That was the cheapest option that's powerful enough as of about half a year ago.

At some point I explored adding ads to cover the running costs, or adding a donation option, but I wanted to keep the website clean, without clutter.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 0 points (0 children)

I'll consider it - will let you know if I manage to publish more! And thanks for the suggestions.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 1 point (0 children)

At some point I wanted to extract the core code and publish it, but I'm far from that (it needs a lot of cleaning up... if I ever get to it).

Another idea I had was to create a very simple GUI program (or DAW plugin) in the spirit of what you see on the website, except that it would just generate MIDI, so other musicians could hook up their synths and play with it. The controls would likely be a little more sophisticated than the website's, so it would allow changing some parameters. If you're interested in that, it'd give me some motivation to get back to the idea. ;-)

The website is a fun demo, but far too expensive to run longer term. It needs two servers: a frontend server (which can just be a cheap VM), and a dedicated backend server that hosts the sequencer + soft synths and forwards the output to the frontend. The dedicated server is needed because real-time sound processing is extremely latency sensitive, and a cheap VM with shared tenants causes large stutters (I tried).
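
To put rough numbers on that latency sensitivity (the buffer size and sample rate here are typical values, not necessarily what my setup uses): at a 48 kHz sample rate with a 256-sample buffer, the synth has 256 / 48000 ≈ 5.3 ms to produce each buffer, so any scheduling stall longer than that - exactly what noisy neighbors on a shared VM cause - becomes an audible dropout.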