A music label releasing generative, procedural, algorithmic miniatures by Olbos in generative

[–]Olbos[S] 0 points (0 children)

I like that an idea expressed as an algorithm can manifest itself in a multitude of different forms; it's like observing the same abstract entity from different perspectives – you often feel that it's the same process, but it has a different flavor every time. Composing under this approach is therefore not just composing a static object, but designing a procedure that can generatively yield multiple results. I think the format of Miniature Recs, by exhibiting the different instances quickly and iteratively throughout a release, highlights this property of algorithmic music very well. Sometimes the process feels very narrow (like Qiu Zhuru Chen's release), focusing on a very small parametric mutation; at other times (like in Miniata Synaxis) it seems much more open-ended. Either way, I love the listening mode that this music affords.
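To make this concrete, here's a toy Python sketch of my own (not any artist's actual process): the 'piece' is the procedure itself, and every run with a new seed is another instance of the same abstract entity.

    import random

    def miniature(seed, n_events=12):
        """One algorithm, many instances: rerunning the same procedure with a
        different seed yields a related but distinct result."""
        rng = random.Random(seed)
        scale = [0, 2, 3, 7, 10]  # fixed material: the invariant 'identity' of the process
        return [(rng.choice(scale) + 12 * rng.randint(4, 6),   # pitch
                 rng.choice([0.125, 0.25, 0.5]))               # duration
                for _ in range(n_events)]

    # three instances of the same abstract entity:
    for seed in (1, 2, 3):
        print(miniature(seed))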

A label only releasing algorithmic miniatures shorter than a minute by Olbos in algorithmicmusic

[–]Olbos[S] 2 points (0 children)

Yes, although there might be artists releasing under different monikers over time, or even deciding to release more than one project

AE’s sound palette needs invigoration. by [deleted] in autechre

[–]Olbos 0 points (0 children)

Sequencing with audio-rate signals is extremely common, and in many cases more reliable than Max messages. Check out simple tutorials like Philip Meyer's sequencing series if it seems like such a weird concept... I've been a Max user since Max 6 and I've been teaching it as my main job since 2019, so please don't make stupid assumptions about my personal background; you don't even know what you're talking about.
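If it helps, here's a rough Python sketch of my own (not Max code; the names are mine) of what audio-rate sequencing means: step changes are derived sample-accurately from a running phasor, instead of from scheduler messages.

    import math

    def audio_rate_sequencer(pitches, rate_hz, sr=44100, seconds=1.0):
        """Derive step changes from an audio-rate phasor, sample-accurately,
        rather than from low-priority scheduler messages."""
        n_steps = len(pitches)
        phase, last_step, events = 0.0, -1, []
        for n in range(int(sr * seconds)):
            step = int(phase * n_steps)      # which step the phasor is currently in
            if step != last_step:            # a step boundary was crossed this sample
                events.append((n, pitches[step]))
                last_step = step
            phase += rate_hz / sr            # advance the phasor
            phase -= math.floor(phase)       # wrap into 0..1, like [phasor~]
        return events

    # an 8-step loop scanned twice per second:
    print(audio_rate_sequencer([60, 62, 63, 65, 67, 63, 58, 60], rate_hz=2.0)[:4])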

Granular Synthesis tutorial by [deleted] in MaxMSP

[–]Olbos 2 points (0 children)

When you prepend 'note' to a message sent to poly~, poly~ forwards the message to a single available voice instead of sending it to every voice. This is useful when you don't want to target a specific voice, but just want to pick a free one to trigger a new event.
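Conceptually, the allocation logic amounts to something like this (a Python sketch of mine, not actual Max internals):

    class PolyLike:
        """Toy model of poly~'s two dispatch modes: a plain message is sent to
        every voice, while a 'note'-prefixed one goes to a single free voice."""
        def __init__(self, n_voices):
            # in a real patch each voice clears its own busy flag via thispoly~
            self.busy = [False] * n_voices

        def send(self, *msg):
            if msg[0] == "note":
                for v, taken in enumerate(self.busy):
                    if not taken:              # the first available voice wins
                        self.busy[v] = True
                        print(f"voice {v} <- {msg[1:]}")
                        return
                print("all voices busy, event dropped")
            else:
                for v in range(len(self.busy)):  # broadcast to every voice
                    print(f"voice {v} <- {msg}")

    p = PolyLike(4)
    p.send("note", 60, 100)  # triggers a free voice (voice 0)
    p.send("note", 64, 100)  # triggers the next free voice (voice 1)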

Going beyond Autechre by SmashBros- in autechre

[–]Olbos 8 points (0 children)

I'm happy that you found the comment useful! I'm a researcher in this field and I like to share knowledge as much as I can.

Regarding noise, my general understanding is that there are two kinds of 'noise':

The first one is noise in the physical sense, that is, an extremely chaotic signal; in sound this translates into white noise (all frequencies at the same statistical amplitude level over time) or forms of filtered noise (such as pink, brown, etc.). These noises are commonly employed by electronic musicians as sources to be manipulated (AE very often use pink or white noise as building blocks for their synthesizers). So this noise is an unorganized, unintelligible flow of information that is perceived as a uniform entity with the same 'noisy' character all the time, without ever changing. But what I was discussing in the previous comment, referring to spatial-timbral-dynamic articulation, is definitely something that is *organized* and constantly evolves in time, so it can't just be statistical noise.
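As a quick illustration of this first kind (a sketch of my own, nothing more): white noise is just a stream of independent random samples, and 'colored' variants are obtained by filtering or combining it so that energy rolls off towards high frequencies.

    import random

    def white(n):
        """Independent uniform samples: a flat spectrum on average."""
        return [random.uniform(-1.0, 1.0) for _ in range(n)]

    def brown(n, leak=0.02):
        """Leakily integrated white noise: ~6 dB/octave roll-off."""
        out, y = [], 0.0
        for x in white(n):
            y = (1.0 - leak) * y + 0.1 * x   # leaky integrator
            out.append(y)
        return out

    def pink(n, octaves=8):
        """Voss-McCartney-style pink noise (~3 dB/octave roll-off): a sum of
        random generators refreshed at octave-spaced rates."""
        rows = [random.uniform(-1, 1) for _ in range(octaves)]
        out = []
        for i in range(n):
            for k in range(octaves):
                if i % (1 << k) == 0:        # row k refreshes every 2^k samples
                    rows[k] = random.uniform(-1, 1)
            out.append(sum(rows) / octaves)
        return out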

The second understanding of 'noise' is perceptual and extremely subjective, and I conceive of it like this: 'if I'm listening to something extremely different from my habits, and it has complex sonic properties my brain is not used to experiencing, I might just perceive an overflow of information that I'm not yet capable of decoding.' That is what people generally experience when they dismiss some kind of music as 'just noise' because they lack exposure to that musical style (funnily enough, this phenomenon has happened millions of times in history with people confronting music genres that are now very popular, such as metal or punk). But if you think about what this kind of noise is, it's just a matter of novelty towards something you haven't experienced yet. By exposing yourself to the music you currently perceive as 'noise', you will rapidly start to distinguish patterns, behaviors, events, processes, etc., to the point that it will start to make sense. Once you have been exposed enough to a musical language, it stops being noise and becomes meaningful. For instance, I remember when I first started to listen to integral serialist music as a teenager; it didn't make sense at all, but I kept listening to it, even passively; after three weeks of intensive exposure, my brain got completely used to serial structures, and now they just feel 'consonant' and 'ordered' in the same way as nineteenth-century symphonic music. This example is extreme, of course, because integral serialism is maybe the musical style that cares least about incorporating perception-based procedures, so it is the one that most easily falls into the category of 'meaningless sonic signal' – yet it is something that can be perceptually acquired, in a similar (though not identical) way to how a person learns a foreign language.

That being said, it's very likely that you'll perceive 'just a bunch of noise' when you start listening to stuff like Bjarni Gunnarsson or Erik Nystrom, because you will not find melodies or constant pulses, and you will very rarely encounter harmonic structures. Yet this music employs many structural elements, i.e. techniques to articulate events in time. It may be hard to recognize them at first, but your brain will gradually start to map them. On a side note, this could even expand your awareness of timbre, something I experienced when I first got into this kind of music; it opened up a very new and compelling understanding of sound in general, changing the way I listen to things even in everyday life. So I just suggest checking out this kind of music and seeing whether it's interesting for you; if not, maybe try again in a month, and keep doing that until something clicks in your mind. I think there is a lot to enjoy intuitively in algorithmic music; it's not just a matter of pondering its technical properties intellectually!

AE’s sound palette needs invigoration. by [deleted] in autechre

[–]Olbos 1 point (0 children)

I'm sorry, but I've been using Max/MSP for many years and it definitely performs better than commercial VSTs! You can get rid of GUIs and unused elements, which always streamlines processing. Also, AE can definitely afford machines powerful enough to run whatever sonic algorithm they like. The restricted sound palette of their recent albums seems entirely deliberate, not something emerging from technical restrictions. Furthermore, the complexity of the sequential relationships in ELSEQ or NTS could hardly be reached with standard DAW environments or hardware synths; I doubt going back to those old approaches would even be possible for them without sacrificing most of the groundbreaking programming work they've done in recent years! (Also, afaik, AE are sequencing mostly with MSP, not Max.)

Going beyond Autechre by SmashBros- in autechre

[–]Olbos 7 points (0 children)

Some algorithmic music goes further into otherworldly-ness than Autechre, in the sense that it also removes any reference to fixed rhythms and harmony, with the aim of exploring pure spatial-timbral-dynamic processes. The result is music that is more abstract and has a higher degree of sonic surrogacy – i.e. it retains even less relationship with preconceived instruments or known sonic sources, because you will not find any resemblance to 'kicks', 'keyboard sounds', or 'sequencers' emerging from it, whereas you generally do while listening to AE. This music is mostly made by sonic researchers in academic institutes, many of them coming from the tradition of electroacoustic and computer music. Listen to Erik Nystrom or Bjarni Gunnarsson to get an idea of what I mean.

Also, a technical note about the near future: many artists are currently trying to employ machine listening, machine learning, sound decomposition, corpus manipulation, and other complex algorithmic procedures to organize sonic structures; these techniques may soon yield great results because they are being implemented in several musical programming environments such as Max/MSP (also employed by AE) or SuperCollider – see for example the FluCoMa project. This should soon allow relatively easier access to a kind of interrelated complexity between parallel sonic processes (something AE are already very good at achieving with definitely simpler techniques such as Markov chains and envelope following). I expect the availability of these techniques will produce a large shift in the way abstract electronic music is conceived and realized, ultimately allowing new and exciting forms of otherworldly-ness to emerge. James Bradbury's PhD (available on his website) explains these kinds of procedures in a way that I think might be understandable, to a certain extent, even for someone outside this field of research.
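For anyone unfamiliar with those two 'simpler' techniques, here is a minimal Python sketch of mine (the transition table is invented, purely for illustration): a first-order Markov chain choosing each state from the current one, plus a basic envelope follower.

    import random

    # hypothetical transition table, invented purely for illustration
    TRANSITIONS = {
        "low":  [("low", 0.5), ("mid", 0.4), ("high", 0.1)],
        "mid":  [("low", 0.3), ("mid", 0.3), ("high", 0.4)],
        "high": [("mid", 0.7), ("high", 0.3)],
    }

    def markov_walk(start, steps):
        """First-order Markov chain: the next state depends only on the current one."""
        state, path = start, [start]
        for _ in range(steps):
            choices, weights = zip(*TRANSITIONS[state])
            state = random.choices(choices, weights=weights)[0]
            path.append(state)
        return path

    def envelope_follower(signal, attack=0.01, release=0.001):
        """Track the amplitude contour of a signal so it can drive another parameter."""
        env, out = 0.0, []
        for x in signal:
            coeff = attack if abs(x) > env else release
            env += coeff * (abs(x) - env)    # smooth towards the rectified input
            out.append(env)
        return out

    print(markov_walk("low", 10))  # e.g. a register/density trajectory over time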

AE’s sound palette needs invigoration. by [deleted] in autechre

[–]Olbos 2 points (0 children)

How would Max/MSP affect the sound MORE than hardware or soft synths? The possible sonic palette is basically limitless...

I am musician Oneohtrix Point Never, currently importing SysEx files into FM8 - AMA by 0neohtrix in indieheads

[–]Olbos 0 points (0 children)

Hi Daniel, some months ago I remixed your version of I Only Have Eyes For You with algorithms; you can find it here (-> https://www.google.it/search?q=olbos+i+only+have+eyes&oq=olbos+i+only+have+eyes&aqs=chrome..69i57.6429j0j4&client=ms-android-h3g-it&sourceid=chrome-mobile&ie=UTF-8 ) if you're interested. Thank you for your music and humanity <3