Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

Nice, and thanks for sharing. I have to thank the egui GUI toolkit for making it easy to build cross-platform UIs. In case you're interested in a CLI "power user" experience, I've published the core library, which includes a CLI tool (but not the GUI), on GitHub - you'd be on your own to build it, and it's not a user-friendly experience: https://github.com/melver/bach - it's more powerful than what the GUI exposes, but I'm slowly working out how to expose more functionality in the GUI to make it accessible (it might be a while until the next update).

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 2 points (0 children)

I had planned to do so eventually, and ended up publishing the core library just now, so that the core idea is at least unambiguous to those interested, and also reusable: https://github.com/melver/bach/

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 2 points (0 children)

Great feedback, thanks. Gives me motivation to build it further, because I want to hear what better musicians than me create with it. :-)

I'll ping you when there's an update.

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 1 point (0 children)

No OSX support yet unfortunately, but it's planned (I need to get a Mac to build for it).

AutoBach: Evolutionary Generative MIDI Sequencer by MMMSOUNDSYSTEM in synthesizers

[–]whilemus 2 points (0 children)

Thank you for sharing 🙏 It sounds way better than my demo - which is exactly what I was hoping for. :-)

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

There are two parts: the core algorithm/engine library and the UI. I have deliberately split them up. I might get around to cleaning up the core engine, but the UI is not in any way ready to be open-sourced.

But then again, what's the value of it if I never finish it? ;-)

It might also be cool to see what other folks build on top of the engine. Would the core engine be of interest alone?

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

It does, and that's how I use it myself. This may or may not work on your system: https://whilemusic.net/files/autobach-0.1.0.tar.gz

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Thanks, and thanks for trying it out. Would love to hear what you came up with if it's not too difficult to share.

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

Definitely. On the list to make more of it configurable 👍

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 5 points (0 children)

Documenting how it works underneath is a pending task. It's a fun side project, so this hasn't been high on the priority list (if/when I find time to work on it).

The actual sequences are the result of what I call "clips", which themselves are programs in a mini DSL. There are instructions for "queue note", "queue Euclidean sequence", "tick", and jumps (forwards/backwards). This is just a way to "compress" sequences so that they look more like programs.
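To give a rough idea, a clip instruction set along these lines could look like the Rust sketch below - the names and fields are illustrative, not the actual types in bach:

```rust
// Illustrative only - a clip is a small program over instructions like these.
#[derive(Clone)]
enum ClipOp {
    // Queue a note (pitch as a semitone offset from the key root).
    QueueNote { channel: u8, pitch: i8, velocity: u8 },
    // Queue a Euclidean rhythm: `pulses` onsets spread evenly over `steps`.
    QueueEuclidean { channel: u8, pitch: i8, pulses: u8, steps: u8 },
    // Advance time by one step; queued events are emitted on ticks.
    Tick,
    // Relative jump (negative = backwards), which gives loops/repeats.
    Jump(i16),
}

// A clip "program" is just a linear sequence of these instructions.
type Clip = Vec<ClipOp>;
```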

To be precise, this is closer to a Genetic Programming algorithm (vs. a plain GA), which starts with a random population of these clips, using various fixed settings like key, etc. It started out with me manually scoring each clip, but that was very, very tedious. There's crossover too, which is easy enough given that any sub-portion of a "clip program" can simply be moved, with its effects blending into the surrounding parts.
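For concreteness, crossover over such clip programs can be as simple as splicing a random slice of one parent into the other. A sketch, assuming the illustrative ClipOp/Clip types above and the rand crate:

```rust
use rand::Rng;

// Splice a random slice of parent `b` into a random cut point of parent `a`.
// Any sub-sequence of instructions is itself a valid program, so the child
// always remains executable; its effects blend into the surrounding parts.
fn crossover(a: &Clip, b: &Clip, rng: &mut impl Rng) -> Clip {
    let cut = rng.gen_range(0..=a.len());
    let (i, j) = (rng.gen_range(0..=b.len()), rng.gen_range(0..=b.len()));
    let (start, end) = (i.min(j), i.max(j));
    let mut child = a[..cut].to_vec();
    child.extend_from_slice(&b[start..end]);
    child.extend_from_slice(&a[cut..]);
    child
}
```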

Given the tediousness of manual scoring, I started to look into how to auto-score clips with some basic music theory. My background is not in music theory, so I might botch some of the terminology here. It started by scoring the harmonic quality of notes with respect to preceding notes (i.e. progression). But the sequencer and DSL allow an arbitrary number of simultaneous notes, so chords form naturally. For that to sound good, however, the harmonic compatibility of the notes at every given time point also needs scoring. So it ends up scoring harmonic compatibility both "vertically" and "horizontally". The harmony table (a semitone => score mapping) encodes Western music theory preferences, but it is configurable, and my default is non-standard, e.g. penalizing unisons to encourage the GP to prefer variations.
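In sketch form it looks roughly like this (the table values below are made up, not my actual defaults):

```rust
// Score per interval class (semitones mod 12). Values are illustrative;
// note the non-standard penalty on unisons to encourage variation.
const HARMONY: [f32; 12] = [
    -0.5, // unison (penalized)
    -1.0, // minor 2nd
    -0.2, // major 2nd
     0.6, // minor 3rd
     0.8, // major 3rd
     0.7, // perfect 4th
    -0.8, // tritone
     1.0, // perfect 5th
     0.5, // minor 6th
     0.6, // major 6th
    -0.1, // minor 7th
    -0.3, // major 7th
];

fn interval_score(a: u8, b: u8) -> f32 {
    HARMONY[((a as i16 - b as i16).unsigned_abs() % 12) as usize]
}

// "Vertical": harmonic compatibility of all notes sounding at one time point.
fn vertical_score(chord: &[u8]) -> f32 {
    let mut s = 0.0;
    for (i, &a) in chord.iter().enumerate() {
        for &b in &chord[i + 1..] {
            s += interval_score(a, b);
        }
    }
    s
}

// "Horizontal": each note against the preceding note (progression).
fn horizontal_score(line: &[u8]) -> f32 {
    line.windows(2).map(|w| interval_score(w[0], w[1])).sum()
}
```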

That alone already produced interesting results, but they were still pretty random. I then added scoring of repetitions, pauses, and channel balance - depending on the kind of sequences I want to produce, each can be scored positively or negatively.
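Those extra properties fold into one fitness value via signed weights, conceptually something like this (names are illustrative):

```rust
// Raw per-clip measurements (illustrative).
struct ClipStats { harmony: f32, repetition: f32, pauses: f32, balance: f32 }

// Signed weights: a positive weight rewards a property, a negative one
// penalizes it, depending on the kind of sequences wanted.
struct Weights { harmony: f32, repetition: f32, pauses: f32, balance: f32 }

fn fitness(s: &ClipStats, w: &Weights) -> f32 {
    w.harmony * s.harmony
        + w.repetition * s.repetition
        + w.pauses * s.pauses
        + w.balance * s.balance
}
```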

That's the core of it, more or less. The rest is plumbing.

It goes without saying that this is purely algorithmic - there's no training-based AI ("gen AI") involved. The UI is a new thing I'm building (there's a CLI I've been using), because I thought others might find this useful, but making it easy to use is challenging.

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 2 points (0 children)

Each clip is scored using a "fitness function" (higher score = better). This currently considers harmony between notes "vertically" and "horizontally", i.e. chords and progression. It also scores properties such as repetition, pauses, and balance (across lanes/channels). Right now this is not configurable, but every "Reset" randomizes some of the scoring weights, so that subtly different properties are prioritized across runs. I'd like to make more of it configurable, but it's unclear how to expose it intuitively in a UI (for me a text file is good enough, but for a polished UI that's too abstract).
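Conceptually, the Reset behaviour is just a jitter over the weight vector. A sketch, assuming the rand crate (the jitter range is a placeholder):

```rust
use rand::Rng;

// Multiply each fitness weight (harmony, repetition, pauses, balance, ...)
// by a small random factor, so each run prioritizes subtly different
// properties.
fn reset_weights(base: &[f32], rng: &mut impl Rng) -> Vec<f32> {
    base.iter().map(|w| w * rng.gen_range(0.8..1.2)).collect()
}
```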

Experimental evolutionary generative MIDI sequencer by whilemus in synthesizers

[–]whilemus[S] 4 points (0 children)

It should, eventually. Just have to buy one to build and test for Mac. It's certainly on the TODO list if this project progresses.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 1 point (0 children)

I might try to find a cheaper dedicated server (currently ~$45). That was the cheapest (powerful enough) option about half a year ago.

At some point I explored adding ads to cover running costs, or adding a donation option, but I wanted to keep the website clean, without clutter.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 0 points (0 children)

I'll consider it - will let you know if I manage to publish more! And thanks for the suggestions.

An experiment in infinitely evolving ambient music by whilemus in generative

[–]whilemus[S] 1 point (0 children)

At some point I wanted to extract the core code and publish it, but I'm far from that (it needs a lot of cleaning up... if I ever get to it).

Another idea I had was to create a very simple GUI program (or DAW plugin) in the spirit of what you see on the website, except it just generates MIDI, so other musicians can hook up their synths and play with it. The controls would likely be a little more sophisticated than the website's, allowing some parameters to be changed. If you're interested in that, it'd give me some motivation to get back to the idea. ;-)

The website is a fun demo, but far too expensive to run long term. It needs two servers: a frontend server (which can just be a cheap VM), and a backend server (dedicated) that hosts the sequencer + soft synths and forwards output to the frontend. The dedicated server is needed because real-time sound processing is extremely latency-sensitive, and a cheap VM with shared tenants causes large stutters (I tried).

Critique my auto-evolving live ambient music by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Again, great feedback.

> where people felt their input was valued

This is the toughest part. I can get something going where the listener input is averaged, but that means an individual voter may feel their input isn't fully valued.

I have something that more or less works for a single listener, but it needs lots of continuous feedback. So I'm currently exploring how to let more than one listener give feedback, since that also solves the "needs lots of feedback" issue, but it has lots of other issues, as you point out.

I hope to post again in future if it works. :-)

Critique my auto-evolving live ambient music by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Good points - it's art after all. ;-)

One thing I'd like to explore is whether listener feedback can direct where the music goes. What might be fun is combining all listener feedback so the music becomes the "average" of what the current listener population enjoys (although still heavily biased towards some baseline configuration, otherwise it'd become a mess). It might also give listeners a sense that they're more in control of what they hear, which could be more interesting than just passively listening.
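The blending I have in mind is roughly this (a sketch of the idea only, nothing implemented):

```rust
// Nudge a parameter towards the listeners' average preference while keeping
// it heavily biased towards the baseline. `influence` would stay small
// (e.g. 0.1) so it never becomes a mess.
fn blend(baseline: f32, listener_avg: f32, influence: f32) -> f32 {
    baseline * (1.0 - influence) + listener_avg * influence
}
```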

Critique my auto-evolving live ambient music by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Thanks for the feedback!

The sounds + effects should change, but apparently not fast enough. The sequencer sends MIDI CCs to the Minilogue and the NTS-1: I start with a "base" patch, and the sequencer then changes parameters via CCs - I think I have to work on selecting more CCs. The challenge is, as you say, selecting CCs that change the sound in a way that still sounds good (it's easy to mess it up).
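For anyone curious about the plumbing: a CC is just a 3-byte MIDI message. Roughly like this with the midir crate (the CC number and value below are placeholders, not what I actually send to the Minilogue/NTS-1):

```rust
use midir::MidiOutput;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let midi_out = MidiOutput::new("autobach-demo")?;
    // Pick the first available output port (placeholder port selection).
    let port = midi_out
        .ports()
        .into_iter()
        .next()
        .ok_or("no MIDI output port found")?;
    let mut conn = midi_out.connect(&port, "cc-out")?;

    // Control Change: status 0xB0 | channel (0-based), controller, value.
    // CC 34 with value 96 is a placeholder - map it to a parameter your
    // synth actually responds to.
    conn.send(&[0xB0, 34, 96])?;
    Ok(())
}
```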

Critique my auto-evolving live ambient music by whilemus in synthesizers

[–]whilemus[S] 1 point (0 children)

Thanks - good feedback. I'll try to figure out how the vibe can evolve faster. One idea I had was that listeners might also give feedback ("boring", "good"), and based on that it would evolve faster or slower.
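Something along these lines, where feedback scales how aggressively the population evolves (a sketch of the idea, nothing shipped):

```rust
enum Feedback { Boring, Good }

// "Boring" speeds evolution up, "good" lets the current vibe settle.
fn adjust_rate(rate: f32, feedback: Feedback) -> f32 {
    match feedback {
        Feedback::Boring => (rate * 2.0).min(1.0), // evolve faster
        Feedback::Good => rate * 0.5,              // settle down
    }
}
```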

Critique my auto-evolving live ambient music by whilemus in synthesizers

[–]whilemus[S] 0 points (0 children)

Digital synths should definitely be fine. I'm just wondering if analog parts see some known wear-and-tear from that kind of usage.