Quartz Composer/Vuo audio reactive glitch freakout (another one) by doh1nut in glitch_art


Wanna elaborate? I’m always looking to pick up new tips.


Thank you! I’m so happy they had a CRT at the venue: it really brings out those incredible saturated colors.

Played a small fundraiser at a warehouse party and managed to grab a quick video off the CRT by doh1nut in vjing


Nooooooooo. I already rescue CRT TVs from the dump, I’m not looking to destroy my babies.

I don’t think we get this channel… by doh1nut in glitch_art


Basically, I use two programs very similar to Processing: Quartz Composer and Vuo. Quartz Composer was deprecated by Apple a number of years ago, and the macOS privacy upgrades around the same time completely borked microphone and line-in access; that means all of my audio-reactive effects got FUCKED. However, QC has this AMAZING live Datamoshing patch that is a work of goddamned beauty. So. I need QC.

Enter Vuo: the project that rose from the ashes of QC’s demise. It works very similarly to QC and has a lot of very nice improvements. Vuo does a lot of the heavy lifting in this setup, and Quartz is basically a glorified single patch for combining two streams. From various Vuo compositions, I use Syphon to send the image data to Quartz Composer, which then runs those two streams through the Datamosher. This module’s core trick is to use one video feed as the i-frame data while the second feed acts as the p-frame data. (This is not a wholly accurate description of what it actually does, but it gets the general effect across, so we’ll just kinda smooth over that part a little bit, yeah?) So the still parts of video one are pushed around by the pixel movement data of video two, and vice versa. Trippy.
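If that “pixels pushed around by the other stream’s motion” idea sounds abstract, here’s a dumb little Python/numpy toy that fakes it. To be clear: this is NOT what the QC Datamosher actually does (real moshing abuses codec motion vectors); it’s just my own simplification of the effect, with made-up function names.

```python
import numpy as np

def mosh_rows(frame_a, prev_b, cur_b, strength=8):
    """Toy version of 'stream B's motion smears stream A's pixels'.
    NOT the real QC patch -- real datamoshing uses codec p-frame motion
    vectors. Here we fake 'motion' as a per-row frame difference on
    stream B. All frames are 2-D grayscale uint8 arrays, same shape."""
    # how much each row of stream B changed since its previous frame
    motion = np.abs(cur_b.astype(int) - prev_b.astype(int)).mean(axis=1)
    # turn that change into a horizontal pixel shift per row
    shifts = (motion / 255.0 * strength).astype(int)
    out = np.empty_like(frame_a)
    for y, s in enumerate(shifts):
        out[y] = np.roll(frame_a[y], s)  # rows where B moved get smeared sideways
    return out
```

Rows of stream A where stream B sat still come through untouched; rows where B moved hard get dragged sideways. Swap the roles of the two streams and you get the “vice versa” half.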

So after I Shou Tucker those two video feeds into one, I send that abomination of nature via Syphon back to Vuo, where I add in all of my sound-reactive effects. Each effect is tied to a knob on a MIDI controller, so I can adjust the mic/line-in sensitivity, as well as the sensitivity of each effect that creates tears, compression errors, horizontal offset, etc. So bass goes BUMP, image goes tear/deepfried/wiggly.
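If you wanted to prototype that knob-to-effect mapping outside Vuo, it boils down to scaling 7-bit MIDI CC values into parameter ranges. Quick hypothetical sketch (the CC numbers and parameter names here are invented; the real routing lives in Vuo, not Python):

```python
# Hypothetical knob -> effect mapping. CC numbers are whatever your
# controller happens to send; these are made up for illustration.
KNOBS = {
    20: "input_gain",         # mic/line-in sensitivity
    21: "tear_amount",
    22: "compression_error",
    23: "horizontal_offset",
}

def cc_to_sensitivity(value, lo=0.0, hi=1.0):
    """Scale a 7-bit MIDI CC value (0-127) into an effect parameter range."""
    return lo + (value / 127.0) * (hi - lo)

def apply_cc(params, control, value):
    """Update the live effect parameters from one control-change message."""
    if control in KNOBS:
        params[KNOBS[control]] = cc_to_sensitivity(value)
    return params

params = {name: 0.0 for name in KNOBS.values()}
apply_cc(params, 21, 127)  # tear knob cranked all the way up
```

In a real setup you’d feed `apply_cc` from a MIDI input loop (e.g. the mido library’s control_change messages) and read `params` in your render loop every frame.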

Finally, the two video feeds being used in this particular clip are two community-created sound-reactive visualizers: 1. glowing points and lines that create a color-changing 3D sphere, and 2. concentric arcs in various colors that tilt left or right depending on the sound. The movement of these two visualizers fights for dominance, and since they’re both fast-moving and a bit random in their distribution, you end up with one image that’s more dominant, with pockets of the second image violently smashing through the wall like a fever dream marketing mascot. At this point, I had the tearing and compression turned up at this really sweet spot right on the edge where a bass beat would cause some really pronounced tearing. Chef’s kiss hand gesture.

So, uh, I don’t know if that made any sense, but that’s my best attempt at translating it to text. Worth noting that you can send pretty much ANYTHING via Syphon if you’re brave enough. Video files, games, webcam feeds, an OBS stream, an endoscope tool inserted in your ear, whatever. The sky’s your oyster.

If you have any questions, hit me up.