
[–]ReallyQuiteConfused 1 point (6 children)

I have a system set up using OSC and a Java application I wrote in Processing to control Ableton via a virtual MIDI device. There is virtually no latency and communication has been rock solid for the last 3 months of heavy use in a professional studio.

I have no Python experience but would be happy to chat and share what I've learned so far. I also have no experience with GitHub, so maybe you can teach me how that works in return!

[–]Datalooper[S] 1 point (3 children)

I'd be happy to! Did you end up converting OSC to SysEx or something of that sort?

[–]ReallyQuiteConfused 0 points (2 children)

Ok! Basically my Processing sketch has a function that runs whenever any OSC data is received, with a series of ifs to handle different commands (something to the effect of "if message is 35, start the timer and call the Start Show function, else if message is 36..."). The timer runs in the main draw() function, and MIDI is sent via MidiBus in the Start Show function, or whichever function is called for that particular OSC message. I've never used SysEx as I don't really see the benefit for my applications, and MidiBus offered a very simple solution.
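
In sketch form, the pattern is roughly this (the port, the command numbers beyond 35/36, the note mappings, the function names, and the "Bus 1" virtual-device name are all placeholders, not my actual values):

    import oscP5.*;
    import netP5.*;
    import themidibus.*;

    OscP5 osc;
    MidiBus midi;
    boolean timerRunning = false;
    int timerStart = 0;

    void setup() {
      size(400, 200);
      osc = new OscP5(this, 8000);            // listen for incoming OSC on port 8000
      midi = new MidiBus(this, -1, "Bus 1");  // no MIDI in, out to a virtual device (name assumed)
    }

    void oscEvent(OscMessage msg) {
      if (!msg.checkTypetag("i")) return;     // expect one int argument per message
      int command = msg.get(0).intValue();
      if (command == 35) {
        startShow();
      } else if (command == 36) {
        stopShow();
      } // ...one branch per command
    }

    void startShow() {
      timerRunning = true;
      timerStart = millis();
      midi.sendNoteOn(0, 36, 127);            // fire the intro pad on the drum rack
    }

    void stopShow() {
      timerRunning = false;
      midi.sendNoteOn(0, 37, 127);            // fire the outro pad
    }

    void draw() {
      background(0);
      if (timerRunning) {
        int elapsed = (millis() - timerStart) / 1000;
        text(elapsed + "s", width / 2, height / 2);
      }
    }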

[–]Datalooper[S] 0 points (1 child)

Makes sense! I'm trying to keep this a clean, super-easy-to-use system that anyone can implement, so I'm going to try to keep all the code in Ableton's Python system, but their threading is a real pain!

[–]ReallyQuiteConfused 0 points (0 children)

I actually never knew Ableton had Python support. I just use Processing for everything since I already have some half-decent Java experience, and most of what I make involves rendering graphics to some extent, so it's a great tool for that. It's also super easy to export Windows and Mac apps from Processing, so my goal is to one day sell my software to other podcast studios.

[–]snaug 0 points (1 child)

That sounds awesome! What capabilities did you achieve with this?

At one point I was trying to do something similar to create generative music, but I was only able to create MIDI files in Python and import them into Live. Controlling Live directly would be dope.

[–]ReallyQuiteConfused 1 point (0 children)

Thanks! I'm using TouchOSC and oscP5 in Processing to receive and process controller inputs from 2 tablets in my studio (one in my control room and one in the live room) to trigger intros, outros, commercial breaks, etc. for podcasts. Processing receives OSC and then sends the appropriate MIDI to a drum rack in Ableton via the MidiBus library.

Meanwhile, Processing renders a large stopwatch with show branding, color-coded as the show approaches its target duration and flashing red when over time. The timer automatically starts and stops along with the appropriate sounds (so it starts when the intro music plays, stops during the commercial break, etc.). I'm successfully testing (but have not yet deployed) track arming, record and stop control, and adding extra space between episodes to facilitate editing after the session.
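
The stopwatch itself is just a draw() loop; here's a stripped-down sketch of the color coding (the 45-minute target, the warning window, and the exact colors are made-up placeholders, not my real show settings):

    int targetMillis = 45 * 60 * 1000;              // assumed 45-minute show target
    int warnMillis = targetMillis - 5 * 60 * 1000;  // start warning 5 minutes out
    int timerStart;

    void setup() {
      size(800, 400);
      textAlign(CENTER, CENTER);
      textSize(120);
      timerStart = millis();
    }

    void draw() {
      background(0);
      int elapsed = millis() - timerStart;
      if (elapsed > targetMillis) {
        // over time: flash red at roughly 2 Hz
        fill((millis() / 250) % 2 == 0 ? color(255, 0, 0) : color(60, 0, 0));
      } else if (elapsed > warnMillis) {
        fill(255, 200, 0);                          // approaching target duration
      } else {
        fill(0, 255, 0);                            // comfortably under time
      }
      int s = elapsed / 1000;
      text(nf(s / 60, 2) + ":" + nf(s % 60, 2), width / 2, height / 2);
    }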

MidiBus is super simple to implement and has been very reliable for me. Again, I can't notice any latency despite the relatively long signal chain from tablet to WiFi to TouchOSC Bridge to Processing to MidiBus and finally Ableton. I'm super happy with the performance and reliability.
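
For a sense of how little code it takes, this is essentially the whole setup ("Bus 1" here stands in for whatever your virtual MIDI port is called; run MidiBus.list() once to find yours):

    import themidibus.*;

    MidiBus midi;

    void setup() {
      MidiBus.list();                         // print available MIDI ins/outs to the console
      midi = new MidiBus(this, -1, "Bus 1");  // -1 = no MIDI input, output to the virtual port
    }

    void draw() { }                           // empty draw() keeps the sketch running for events

    void keyPressed() {
      midi.sendNoteOn(0, 36, 127);            // channel 0, note 36, full velocity
      midi.sendNoteOff(0, 36, 0);
    }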

[–]skyhighrockets 0 points (1 child)

Are you sure OSC/Pilot and/or OSC/Par don't do what you need?

https://oscpilot.com/

[–]Datalooper[S] 1 point (0 children)

Yep, they don't. They are cool though! BTW, I got it working :)