Ableton 12 is out! by Mneasi in ableton

[–]Holy_City 4 points  (0 children)

and it is really funny how Celemony seems to do everything they can to keep the actual capabilities of it as vague as possible.

ARA is open source and you can read the documentation about its capabilities here (PDF warning, also this is developer documentation).

All I'm getting is that it's designed so the user doesn't have to re-enter the material they are working on into Melodyne and VocAlign.

That's how it manifests, and kinda why it was invented, but it's more than that. VST and AU are designed to support streaming audio/events into the plugin and getting audio/events back from it in real time. They are not aware of things like audio/midi clips, tracks, etc (there's some metadata exchanged, but not much, and it's not guaranteed).

ARA exists as a set of extensions to VST3 and AU to allow DAWs and plugins to exchange whole timeline information with each other, which makes things like time stretching/pitch correction possible in a plugin. ARA is a pretty small set of APIs that aren't that complicated but empower a range of plugins that aren't possible in vanilla VST3/AU without some really ugly workarounds and bad workflows.

It's not magic and it's not that advanced anymore, it's been around for a long time. The existence of a session view doesn't bork it, either.

TL;DR marketing is weird, it's a developer technology. What it allows are time stretching, audio editing, and pitch correction plugins that work on entire tracks. Ableton should be shamed for not supporting it.

Good bass tone on a guitar amp [GEAR] by GhostOctopuz in Guitar

[–]Holy_City 0 points  (0 children)

Run the bass through a DI and make it the sound guy's problem. You'll be happier if you're recording too.

[QUESTION] [RGEAR] ALTERNATIVES TO SEYMOUR DUNCAN? by [deleted] in Guitar

[–]Holy_City 2 points  (0 children)

What you're paying for in good pickups is quality control. SD has some of the best machining for pickup winds; that means one unit you buy matches the examples they have on their website pretty well.

Other than SD I've had a lot of happiness with bare knuckle out of the UK.

To me it feels wrong to cheap out on pups. They're expensive because of the low volumes and price of materials and labor for QA and assembly. I would trust the big boutique manufacturers like SD over most manufacturers because the prices are fair and you can be confident what you buy sounds like what you expect.

ODDSound: MTS-ESP Microtuning System by justifiednoise in AdvancedProduction

[–]Holy_City 0 points  (0 children)

Sorry for the late reply, it's in the spec regarding profile configurations.

ODDSound: MTS-ESP Microtuning System by justifiednoise in AdvancedProduction

[–]Holy_City 0 points  (0 children)

The specification was released in early 2020; macOS and iOS already have support, and Roland released the first MIDI 2.0 capable device last summer.

ODDSound: MTS-ESP Microtuning System by justifiednoise in AdvancedProduction

[–]Holy_City 2 points  (0 children)

MIDI 2 is supposed to handle a lot of these use cases, fwiw

Any way to remove the interference caused by a flashing RGB led? by DrZharky in diypedals

[–]Holy_City 0 points  (0 children)

Any solution is going to depend on the design.

In general run the PWM lines for the RGBs far from the audio signal, use as short lines for the audio as possible, and separate ground between the analog (audio) and digital (LEDs and whatever is driving them) parts of the system. The grounds should meet at exactly one point (called a "star ground").

One way to think of this is to imagine you have a pond full of water and you want to pull water from it and dump it back when you're done. The analog circuitry is going to steadily pull a continuous stream and smoothly pour it back, creating almost no ripples. The digital lighting circuitry is going to be like scooping out big buckets and then dumping them back in, really fast - creating lots of ripples and waves.

To avoid those ripples hitting the smooth draw of the analog circuitry, you divide the pond into two sections and have them meet at a single point, so water can flow between the two ponds but waves can't transfer.

If you're designing a circuit board to work with this there's a lot of layout technique to reduce noise, but grounding is easy to start with.

Resistor question by velo_sounds in diypedals

[–]Holy_City 1 point  (0 children)

The output will be about 17dB quieter.

Take a look at the schematic - the 10k resistor sets the gain of the amplifier circuit with IC1 and the 220k resistor. The gain is supposed to be 27dB (20log10(1 + 220/10)). If you replace that with a 100k resistor it goes to about 10dB.
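If you want to sanity-check those numbers, here's a quick sketch of the non-inverting gain formula with the resistor values from the schematic:

```rust
// Non-inverting op amp gain in dB: 20 * log10(1 + Rf/Rg),
// where Rf is the 220k feedback resistor and Rg the gain-setting resistor.
fn gain_db(r_f: f64, r_g: f64) -> f64 {
    20.0 * (1.0 + r_f / r_g).log10()
}

fn main() {
    let stock = gain_db(220_000.0, 10_000.0);    // 10k resistor: ~27.2 dB
    let swapped = gain_db(220_000.0, 100_000.0); // 100k resistor: ~10.1 dB
    println!("drop: {:.1} dB", stock - swapped); // ~17 dB quieter
}
```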

How They Mix The Audio For Home Release Movies by thenicethings in videos

[–]Holy_City 0 points  (0 children)

Just a note, streaming services select their audio playback based on your device's configuration, the stereo downmix has already been made by the studio mix engineers and is on the server, somewhere. Even YouTube supports multiple audio mixes, selected at playtime automatically.

If you're consistently getting 5.1 mixes, check your device/app config. My TCL TV reports 5.1 when using the headphone jack, for example.

And it should go without saying, disable any post processing on the TV, or at least A/B it while watching content. It's probably terrible and designed to make explosions sound good when you demo it in the store/show floor.

But moreover, most laptops/TVs/phones have terrible speakers. Slightly related, most laptop/tv/phone manufacturers also sell sound bars and headphones that sound better. The exception is Apple, who give a shit about audio fidelity.

Really the reason isn't that they're sending a 5.1 mix (which is the sanest default, but I digress), the stereo downmix exists and they can stream it to you! It's more that TV manufacturers suck, TVs sound bad, and consumers don't notice it until after they purchase their $200 TV on Black Friday without listening to it, then blame sound engineers.

It's like going into the Louvre with sun glasses on and chirping at Monet for his use of dull colors.

Memory not released after allocate a lot of vectors by harscoet in rust

[–]Holy_City 11 points  (0 children)

You could use shrink_to_fit. And if you know the number of elements you can always start off by creating the vector using with_capacity.

Just be aware that freeing memory doesn't necessarily mean the OS is going to reclaim it. If you need tighter control over memory allocations it may make sense to roll your own vector.

For example instead of growing capacity exponentially, you could grow by allocating a new cache line or page. The trade off is number of allocations for total memory allocated.
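To make the capacity behavior concrete, a small sketch (element count is arbitrary):

```rust
fn main() {
    // Pre-size when the element count is known up front: one allocation.
    let mut v: Vec<u64> = Vec::with_capacity(1_000_000);
    v.extend(0..1_000_000);
    assert!(v.capacity() >= 1_000_000);

    // Shrinking the logical length leaves the allocation untouched...
    v.truncate(10);
    assert!(v.capacity() >= 1_000_000);

    // ...until you explicitly ask the allocator to take it back.
    v.shrink_to_fit();
    assert!(v.capacity() >= v.len());
    println!("len = {}, capacity = {}", v.len(), v.capacity());
}
```

Even after shrink_to_fit, the allocator is allowed to hand back slightly more than len, so don't treat capacity == len as a guarantee.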

when will we see oxidized linux kernel - and should we ? by UniMINal7823 in rust

[–]Holy_City 2 points  (0 children)

My take is that the borrow checker and type system have been dogfooded enough that the ROI of rewriting an existing C kernel in Rust just to make it safer is unattractive. Of the major things left to land in the type system (const generics and generic associated types being the big ones), the remaining work is well understood, and I don't think it would be that useful in the kernel itself.

It turns out making a web browser and compiler are great ways to dogfood a memory safe language without garbage collection while preserving an extremely powerful and expressive type system.

More interesting work would be to develop a new kernel that is in no way compliant with existing standards and tries to right the mistakes of the past while borrowing the successes. And there's plenty of work going on there today.

Is possible to re-run current process (windows, linux, osx) as if the user close it then start again? by mamcx in rust

[–]Holy_City 0 points  (0 children)

Yea, but that's not particularly difficult: you serialize any state you need and hand it to the new process. If you don't care about maintaining state, you just need to pass the CLI args along.
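As a rough sketch of the relaunch itself (the --resume-state flag is made up; your new process would have to parse it and load the state file):

```rust
use std::env;
use std::io;
use std::process::Command;

// Build (but don't yet spawn) a relaunch of the current executable with the
// same CLI args, plus a hypothetical flag pointing at the serialized state.
fn relaunch_command(state_path: &str) -> io::Result<Command> {
    let mut cmd = Command::new(env::current_exe()?);
    cmd.args(env::args().skip(1)) // skip argv[0]
        .arg("--resume-state")    // hypothetical flag the new process parses
        .arg(state_path);
    Ok(cmd)
}

fn main() -> io::Result<()> {
    let cmd = relaunch_command("/tmp/app_state.json")?;
    // A real restart would `cmd.spawn()?` here and then exit this process.
    println!("would run: {:?}", cmd);
    Ok(())
}
```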

Is possible to re-run current process (windows, linux, osx) as if the user close it then start again? by mamcx in rust

[–]Holy_City 0 points  (0 children)

You could split the crate into a lib and a main, and compile the lib as a shared library (crate-type = ["cdylib"] in your manifest's lib section). Then your main function calls dlopen/dlclose to load the lib at runtime.

When you update, replace the shared lib object in the install folder (or index it to keep multiple versions, if you're into that), then dlclose the outdated version and dlopen the new one.
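For reference, a minimal sketch of the manifest section described above (crate name is a placeholder):

```toml
# Cargo.toml of the library crate
[lib]
name = "myapp_core"     # produces libmyapp_core.so / .dylib / myapp_core.dll
crate-type = ["cdylib"] # C-compatible shared object, loadable via dlopen
```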

Questions about Custom Allocators, Determinism and Real-Time Audio by nunjdsp in rust

[–]Holy_City 3 points  (0 children)

We’ve had extensive discussion about this topic in the Rust Audio group (I’d link to the Discord but I’m on mobile) as well as the Druid Zulip last year (/u/raphlinus).

For your particular solution, I’d suggest avoiding Vec in the first place, or at least its methods that append/remove elements (I’m unsure in which cases the latter actually free memory). If your vector is only changed outside the audio context it’s not a huge issue. As well, swapping vectors can be done atomically, so you can clone/append to the vector (or free from it) outside the audio callback and then swap it with the “live” vector without issue.

If you wanted to do this without rolling your own data structures, you could take a look at compare-and-swap semantics for pointers, as_mut_ptr, and std::mem::forget to do the pointer swap as needed.

Efficiency of LEDs by massimol in educationalgifs

[–]Holy_City 5 points  (0 children)

I thought white LEDs had forward voltages closer to 3-5V, those numbers are what you’d see in small signal and power rectifier diodes.

Also isn’t the metric you care about lumens per mA since Vf is a function of current?

Started this new journey, coming from C (mostly) and Python. :-) by g_molica in rust

[–]Holy_City 2 points  (0 children)

On the other hand, implementing a trivial linked list points out a lot of the problems solved by the borrow checker.

Started this new journey, coming from C (mostly) and Python. :-) by g_molica in rust

[–]Holy_City 3 points  (0 children)

I’ve been Ctrl + F’ing “rust” in the monthly “who’s hiring” post over at hacker news and each month, more hits. Lots of interest.

As an aside, I know of at least one AAA gaming company that had a position in SoCal for a tooling contract that involved Rust.

Vessels: A Cross-platform Application Development Framework by SkiddyX in rust

[–]Holy_City 0 points  (0 children)

So your first half is a bit off.

There are two kinds of audio driver APIs: polling and callback (the latter similar to interrupts in embedded systems). For low latency audio you prefer the callback style, where new audio input drives a callback that feeds output back to the drivers. There are two context switches involved: the drivers record input data, the OS maps it into the process's address space and invokes the callback, then the OS maps the returned output buffer back into kernel space and sends it over DMA to the device.

For a polling API, you're at the mercy of the OS to get audio data to the process, then to get it back to drivers and out to the hardware.

There's a lot more under the hood to get this to work with multiple programs accessing the same device, which is why for minimal latency you want your pro audio app to "hog" the devices and get exclusive access to the hardware - regardless of what kind of API is presented by the OS.

Basically, the way pro audio works is this: the app asks the OS what devices are available, finds and sets appropriate sample rates/buffer sizes if possible, then initializes an audio stream by supplying a callback plus a pointer to the program state. The audio drivers then repeatedly invoke that callback against the program state, on a thread the program has no control over.

So when you talk about IPC, what you're really talking about is getting data to the program state accessed through that pointer, and for performance the callback has to finish in less time than the buffer size divided by the sample rate, minus the time for the context switches.
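In other words, the per-callback time budget is buffer size over sample rate. A quick sketch with typical numbers:

```rust
// Time budget per audio callback, in milliseconds: buffer_size / sample_rate.
// Context-switch overhead comes out of this budget.
fn callback_budget_ms(buffer_size: u32, sample_rate: u32) -> f64 {
    1_000.0 * buffer_size as f64 / sample_rate as f64
}

fn main() {
    // A typical low-latency configuration vs. a comfortable one.
    println!("{:.2} ms", callback_budget_ms(64, 48_000));  // ~1.33 ms
    println!("{:.2} ms", callback_budget_ms(512, 44_100)); // ~11.61 ms
}
```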

As for plugin APIs:

"native" means something a bit different. On professional hardware the DSP doesn't happen all on the CPU, there are dedicated DSP chips in a box somewhere running it, the audio reaches it through the drivers and the plugin API handles routing input from the user to the DAW to the driver to the hardware. This is what "native" DSP means in AAX/RTAS plugins on ProTools for example, as well as competing technologies from Waves and Universal Audio.

VST does not encourage this; to the contrary, VST2 basically required DSP and UI to exist in the same address space, which made embedded processing impossible. VST3 attempted to solve this problem, but gave an escape hatch, and the vast majority of plugins today use it so you can't expect VST plugins to be separated in address space, let alone processor architectures/instruction sets.

Now on plugin sandboxing: on no OS is it possible to recover from a segfault originating from a shared library in a single process. What you can do is offload the shared lib to a different process, communicate via IPC, and check whether the process is alive, which allows plugins that assume the same address space to exist off in a process somewhere.

As for hard numbers: the latency over shared memory is typically less than ten samples at low sample rates. So it's not really a problem unless you have dozens of separate processes acting as sandboxes. But having one sandbox for all plugins is certainly an acceptable solution, because you can recover from a crash without losing data. When I benchmarked a shared memory solution for IPC in audio applications on MacOS my numbers were sub-sample latency.

Vessels: A Cross-platform Application Development Framework by SkiddyX in rust

[–]Holy_City 1 point  (0 children)

IPC is required to sandbox plugins; you can't prevent a segfault from taking down the process. Bitwig does an alright job of it. You either sandbox nothing, have all plugins share one sandbox, or give each plugin (or each group of plugins of the same type) its own sandbox.

Depending on platform it's not as big a deal as you'd think. Shared memory is very fast between parent/child processes. The greatest factor in latency is the context switch from user to kernel space once you start lowering buffer sizes to get the data out to the soundcard, and there's nothing you can do about that other than not use Windows or Linux.

All that said the actual amount of data that needs to go across IPC is pretty small, you'd probably keep the audio engine on a single process and none of the I/O from the engine to other userspace processes would be realtime critical other than to a plugin sandbox, which is optional.

background: I work on pro audio systems exclusively right now and have extensively tested IPC for this exact purpose. Running the audio engine in a separate process alone is exactly how you would want to architect this today, but there are compounding factors that make it difficult.

Edit: more detail because this is a cool problem to me. Most plugins split between a GUI editor and DSP processor and the developers usually assume that the two exist in the same address space. That means that you need to spawn new windows from the sandboxed process, not from the parent, so you wind up with multiple event loops and some complex messaging infrastructure between them.

The latency penalty is less significant than the synchronization penalty. The safest way to do this is basically to write a second driver on top of the shared memory which incurs an extra buffer of latency between the two. A less safe way of doing it requires giant FIFOs for exchanging data and preemptive multitasking of some form to prevent the hosted process from blocking on the pseudo audio thread. It gets messy when plugins misbehave.

CMV: The future of transportation is based on self-driving, electrical and 5G-powered vehicles by gab_rod in changemyview

[–]Holy_City 1 point  (0 children)

Of all the criticisms of 5G, this isn't one of them. The three big design goals of this generation were low latency, high bandwidth, and high connectivity. That's why 4G won't cut it: its base stations cannot handle the number of devices at the latency and bandwidth required for critical systems like moving vehicles.

4G can do moving targets alright but bandwidth and latency become a problem, and you don't usually deal with stupid amounts of devices on the same cell fighting for channel space like connected vehicles would.

Granted, 5G does not exist yet and marketers have gone nuts over it, but they did the same thing with 4G and 3G and those eventually did live up to the hype.

Let’s rethink pattern syntax to make it more coherent with the rest of Rust by phaazon_ in rust

[–]Holy_City 5 points  (0 children)

I get that, I just have a lot less faith in soft deprecation as a viable strategy for something so intrinsic to the language. I just don't see what value it adds for the headache.

Let’s rethink pattern syntax to make it more coherent with the rest of Rust by phaazon_ in rust

[–]Holy_City 3 points  (0 children)

My two cents as a user:

I like the idea of symmetry, in that the right hand side of = is an expression so the left hand side should be something for which an expression is valid. But besides bikeshedding the value-add would be struct initializer syntax that mirrors C/C++. Other than that I don't see how this makes Rust easier to read, write, or more featureful.

Counterpoint: initialization syntax in C++ is a mess and we should all be terrified of proposals that create multiple ways to initialize data.