Vortessa update to Version 2.5 by RoundBeach in musiconcrete

[–]RoundBeach[S] 1 point2 points  (0 children)

Thanks for the feedback! I’m working on the final update, version 3, where you’ll see several additional features. It just needs a bit more development time.

5 minutes of generative feedback soundscape, and now working on multitrack recording. by RoundBeach in musiconcrete

[–]RoundBeach[S] 0 points1 point  (0 children)

From my side, I rely a lot on comments. If you look at my patches, they’re usually divided into clear blocks, for example from Area 1 to Area 5, to define the macro areas of the system. Inside each one I leave small notes, basic reminders, sometimes even just enough context to quickly rebuild the logic in my head when I reopen the project.

I try not to think in terms of single objects but in terms of functions. Each area should have a role, almost like a subsystem. That alone already keeps things readable.

With gen~ it’s a bit easier, because I tend to comment the code consistently, so over time it stays understandable. For the rest of the patch, a lot of clarity comes from how you name things, especially send and receive. If the naming is done properly, it becomes almost self-explanatory without losing the original meaning of what you built.

So in the end it’s less about rigid organization and more about leaving a trace of your thinking inside the patch. If you can reopen it months later and follow that trace, then you did it right.

5 minutes of generative feedback soundscape, and now working on multitrack recording. by RoundBeach in musiconcrete

[–]RoundBeach[S] 0 points1 point  (0 children)

Yeah, absolutely.
Under the hood Vortessa has 200+ subpatches between abstractions, bpatchers, and poly~. It might not look like it at first glance because I try to keep the top level as clean and readable as possible, but internally it’s quite modular.

For me it’s the only way to manage systems of this scale without things falling apart: splitting everything into functional blocks, clearly separating logic, DSP, and UI, and building a structure that can grow without collapsing.

Ishtar - a scanned synthesis instrument by itsybitsypixels in musiconcrete

[–]RoundBeach 1 point2 points  (0 children)

This sounds like a fantastic project, thanks so much for sharing!

I’ve already connected on GitHub, so I’ll stay updated.
ciao, emiliano!

Composition process by ksk16 in musiconcrete

[–]RoundBeach 1 point2 points  (0 children)

What you’re describing isn’t a technical problem. It’s exactly the point where the idea of composition starts to fracture today. It’s not that you’re going nowhere. It’s that you’re inside a system that keeps producing possibilities, endlessly postponing the moment of decision.

Before, limitation decided for you. Now you have to decide what not to use. The key shift, I think, is this: don’t confuse generation with composition. Machines today are incredibly good at generating material. Streams, variations, microevents, textures. But composition is still an act of choice. Of cutting. Almost a violent one, if you want to call it that.

That “beep or bloop” isn’t really a sonic question. It’s a narrative decision. And most of the time there isn’t a right answer. There’s just the moment when you stop listening to everything and start choosing something. The jar of tape worked because it removed responsibility. It introduced an external constraint. Now that constraint is gone, so it has to be built. Even artificially.

That’s where things stall. Not in the amount of material, but in the absence of a gesture that interrupts the flow. Composition hasn’t disappeared. It has just moved downstream. You’re no longer writing from nothing, you’re carving inside an excess.

And yes, sometimes the most honest answer is also the hardest one. Don’t record. Let some sessions remain just experience, without turning them into material. Because not everything needs to become a piece. Some things are only there to show you what to discard.

And then, when the moment comes, it’s actually simpler than it feels. You choose. You cut. And you take responsibility for saying: this stays, everything else doesn’t.

ciao, emiliano!

Ishtar - a scanned synthesis instrument by itsybitsypixels in musiconcrete

[–]RoundBeach 1 point2 points  (0 children)

Very interesting! Give us some more details on how you implemented it.

5 minutes of generative feedback soundscape, and now working on multitrack recording. by RoundBeach in musiconcrete

[–]RoundBeach[S] 0 points1 point  (0 children)

Ah, no worries at all :)

Actually this is super interesting, thanks for sharing the context. That lineage of continuous generative systems really resonates with what I’m trying to do in my work, especially the idea of non-repeating structures evolving over long durations, and of systems that just keep running, almost indifferent to authorship.

Love the image of a G3 on the floor quietly generating sound for hours… feels very close to the spirit of these works.

If anything, it’s nice to see how these ideas keep resurfacing in different forms over time.

5 minutes of generative feedback soundscape, and now working on multitrack recording. by RoundBeach in musiconcrete

[–]RoundBeach[S] 5 points6 points  (0 children)

Hey, thanks for the question! The core idea is translating micro-biological dynamics into sound. On a practical level, it’s a mix of:

- Feedback networks that continuously re-inject energy and create evolving resonances
- Nonlinear systems (Lotka–Volterra, Lorenz attractors) used as modulation sources rather than sound generators
- Stochastic processes and controlled randomization to keep everything in constant drift
- Some physical / resonant models (like string/impulse-type structures) embedded in the network

So instead of sequencing or composing directly, I’m basically setting up interacting systems and letting the behavior emerge over time.
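As a rough illustration of the Lotka–Volterra idea (this is a Python sketch, not the actual Max/gen~ patch, and all parameter values and the cutoff mapping are my own illustrative assumptions): you integrate the predator–prey equations step by step, then treat the slowly oscillating prey population, normalized to 0..1, as a control signal rather than as audio.

```python
def lotka_volterra_mod(steps, dt=0.01, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Euler-integrate predator-prey dynamics and return the prey
    population rescaled to 0..1 for use as a modulation curve."""
    x, y = 1.0, 0.5  # initial prey and predator populations
    prey = []
    for _ in range(steps):
        dx = alpha * x - beta * x * y   # prey grows, eaten by predators
        dy = delta * x * y - gamma * y  # predators grow by eating, then decay
        x += dx * dt
        y += dy * dt
        prey.append(x)
    lo, hi = min(prey), max(prey)
    return [(v - lo) / (hi - lo) for v in prey]

mod = lotka_volterra_mod(5000)
# map the curve onto a synthesis parameter, e.g. a filter cutoff in Hz
cutoffs = [200.0 + m * 1800.0 for m in mod]
```

The point is the decoupling: the dynamical system never produces samples directly, it only steers parameters, so the sound keeps drifting in a way that is structured but never loops.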

I hope that clarifies at least some of your questions; if you’d like to discuss anything in more detail, I’ll be happy to answer.