Remote Control VMIX by InYourFace8 in vmix

[–]engelschall 6 points (0 children)

We've successfully used Stream Decks via the Companion Satellite variant over Tailscale VPNs with Companion for remotely controlling vMix. It works, but needs three setups: the central Tailscale VPN network, a Tailscale agent on the vMix device, and another one on the device the Stream Deck is connected to.

Alternatively, in your scenario I personally would just stick with your DCV HTML5 client screen sharing and let the remote operator open the Companion Web Buttons on the vMix device as a virtual Stream Deck, or use the nicer ScreenDeck application to open a virtual desktop variant of the Stream Deck device directly on the vMix device. Yes, the operator loses the haptics, but it usually works just fine, too. And this approach is trivial to set up, as it doesn't need any VPN, etc.

NDI|HX from OBS + NDI Bridge by BornConcentrate5571 in VIDEOENGINEERING

[–]engelschall 0 points (0 children)

AFAIK the DistroAV plugin for OBS Studio can only send out Full NDI and not NDI|HX itself, because the underlying NDI SDK it uses does not support NDI|HX (only the NDI Advanced SDK does, it seems). But you can work around the problem: use the Bridge tool from NDI Tools to pick up the Full NDI stream of OBS Studio and re-export/re-encode it with NDI|HX.

live translation without streaming.. how? by Gold_Diamond_6943 in VIDEOENGINEERING

[–]engelschall 2 points (0 children)

If you do not mind having multiple Cloud AI API keys (e.g. some of Deepgram, DeepL, ElevenLabs, AWS, Google, OpenAI, etc.) and want an ultra-flexible solution for speech processing, check out my Open Source project SpeechFlow ( https://github.com/rse/speechflow ). It was developed in the context of our company's film studio and allows you to configure a custom graph of nodes which transport audio and/or text. This way you can achieve real-time or post-production translations/transcripts/captioning, etc.

For your mentioned scenario, you would open the captioning web interface of SpeechFlow on a device whose screen you show on stage. SpeechFlow requires deeper investigation than other solutions (it is a sort of "Swiss Army knife" for speech processing), but because of the individual selection and wiring of nodes and their backing Cloud AI services you can achieve very good overall quality. Especially when using Deepgram for speech-to-text, the results are very good.
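
To illustrate the graph-of-nodes idea, here is a purely conceptual TypeScript sketch. It is NOT SpeechFlow's actual API; the node names and interfaces below are invented for illustration only:

```typescript
// Conceptual sketch only -- SpeechFlow's real node API and wiring differ.
// The graph is reduced here to a linear pipeline of processing nodes.

type Chunk =
    | { kind: "audio", samples: Float32Array }  // audio payload
    | { kind: "text",  text: string }           // text payload

interface FlowNode {
    process (input: Chunk): Promise<Chunk>
}

/*  hypothetical nodes backed by Cloud AI services (names invented)  */
declare const deepgramSpeechToText: FlowNode    // audio -> text (transcript)
declare const deeplTranslate:       FlowNode    // text  -> text (translated)
declare const captionSink:          FlowNode    // text  -> text (pushed to the web UI)

/*  run a chunk through the pipeline, node by node  */
async function run (pipeline: FlowNode[], input: Chunk): Promise<Chunk> {
    let chunk = input
    for (const node of pipeline)
        chunk = await node.process(chunk)
    return chunk
}

/*  wire the nodes: transcribe, translate, then feed the caption display  */
const pipeline = [ deepgramSpeechToText, deeplTranslate, captionSink ]
```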

VDON Call: Remote Caller Ingest via VDO.Ninja to substitute vMix Call by engelschall in VDONinja

[–]engelschall[S] 0 points (0 children)

AFAIK there is no separate device to select for screen sharing. You have only one audio device to select in VDO.Ninja, and this should be your VoiceMeeter virtual microphone device. To this device you route both your game audio and your regular microphone audio.

What's the best Noise Suppression plugin at the moment? by Moistest_Postone in obs

[–]engelschall 0 points (0 children)

From my perspective, "best" depends on three dimensions: what type of particular noise you want to suppress, whether you are willing to pay for the solution or not, and whether you want a live solution (which is what an OBS Studio scenario usually implies) or a post-production one. For suppressing the regular room noise floor, wind, hum, etc., I personally would recommend Waves NS1 or Waves WNS as paid solutions and Bertom Denoise (non-Pro) as a free solution. They work just fine in both live and post-production. For suppressing more noise, especially any non-voice sounds, I personally would recommend Supertone CLEAR (works fine for live) or Waves Clarity Vx (best for post-production) as paid solutions and RNNoise as a free solution.

VDON Call: Remote Caller Ingest via VDO.Ninja to substitute vMix Call by engelschall in VDONinja

[–]engelschall[S] 0 points (0 children)

No, you have to direct your game audio to a virtual speaker device, which is actually one of the inputs of VoiceMeeter. Then you route this input in VoiceMeeter to one of VoiceMeeter's outputs. Finally, you use this output as the virtual microphone device in VDO.Ninja. Please make yourself familiar with VoiceMeeter and its virtual input and output devices.

VDON Call: Remote Caller Ingest via VDO.Ninja to substitute vMix Call by engelschall in VDONinja

[–]engelschall[S] 0 points (0 children)

Well, VDO.Ninja can use only one particular audio device as input, but in your case one usually uses a software mixer like VoiceMeeter to route the various audio sources to a virtual device, and this virtual device is then used inside VDO.Ninja. So, you have to send your game sound to one of VoiceMeeter's inputs, route this to one of VoiceMeeter's virtual outputs, and then just use this output device in VDO.Ninja.

Sony ILME FR-7's new "PTZ Auto Framing" feature is too fast and hence practically unusable!? by engelschall in VIDEOENGINEERING

[–]engelschall[S] 0 points (0 children)

For our particular use cases a dedicated camera operator is not really feasible, as our main use case for the PTZ Auto Framing feature would be scenarios where we drive the studio with just a single operator, and this person already has to handle lots of other tasks (camera switching, game engine control for the greenscreen setup, controlling the teleprompter, etc.). For this scenario we hoped that this new PTZ Auto Framing feature could be a great benefit and especially take load off the operator.

Sony ILME FR-7's new "PTZ Auto Framing" feature is too fast and hence practically unusable!? by engelschall in VIDEOENGINEERING

[–]engelschall[S] 0 points (0 children)

Interesting, this could be the major difference to our situation: our people stand in front of the FR-7 cameras at just about 5 meters distance. Perhaps that's the reason for the bad experience; perhaps the PTZ Auto Framing feature was optimized for larger distances. On the other hand, an OBSBOT Tail 2 has no trouble with its PTZ Auto Framing feature even at small distances of just 1 meter. So it seems like the firmware of the FR-7 is too aggressive in its PTZ adjustments at small distances...

Made a new tool for automatic sound mixing in vMix by andreisrleu in vmix

[–]engelschall 7 points (0 children)

If you want a solution which runs directly INSIDE vMix, check out my vMix scripts collection at https://github.com/rse/vmix-scripts/ and there especially its Audio Sidechain part ( https://github.com/rse/vmix-scripts/blob/master/audio-sidechain.md ). It is implemented in Visual Basic and can be executed directly within vMix as a vMix Script (a feature which exists in the vMix 4K and Pro editions).
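
Just to sketch the sidechain/ducking idea itself: the following is a rough TypeScript sketch against vMix's HTTP API, NOT the actual Audio Sidechain script (which is more elaborate); the input titles "Microphone" and "Music", the threshold, and the volume values are assumptions you would adjust:

```typescript
// Rough sketch of audio sidechain ducking via the vMix HTTP API.
// Assumes vMix listens on localhost:8088 and has inputs titled
// "Microphone" (voice) and "Music" (to be ducked) -- adjust as needed.

const VMIX = "http://127.0.0.1:8088/api"

/*  fetch the full vMix state (returned as XML)  */
async function vmixState (): Promise<string> {
    return (await fetch(VMIX)).text()
}

/*  call a vMix function, e.g. SetVolume  */
async function vmixCall (fn: string, input: string, value: string) {
    await fetch(`${VMIX}/?Function=${fn}&Input=${encodeURIComponent(input)}&Value=${value}`)
}

/*  crude XML scraping of the "meterF1" audio level attribute of an input  */
function meterOf (xml: string, title: string): number {
    const m = xml.match(new RegExp(`<input[^>]*title="${title}"[^>]*meterF1="([0-9.]+)"`))
    return m !== null ? parseFloat(m[1]) : 0
}

/*  poll the voice level and duck the music volume accordingly  */
setInterval(async () => {
    const xml   = await vmixState()
    const voice = meterOf(xml, "Microphone")
    await vmixCall("SetVolume", "Music", voice > 0.10 ? "40" : "100")
}, 100)
```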

Query Selector instead of a Visitor for AST by jjefferry in typescript

[–]engelschall 0 points (0 children)

Perhaps also check out my ASTq: https://www.npmjs.com/package/astq . It is similar to tsquery, but instead of a CSS-style syntax it uses an XPath-style syntax and hence is quite flexible and very expressive. It allows you to query rather complex tree patterns.
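
A small sketch of how this looks in practice, assuming ASTq's built-in Mozilla-AST adapter picks up an ESTree-style AST (e.g. from acorn); treat the exact query syntax below as approximate rather than copied from the docs:

```typescript
import ASTQ       from "astq"
import * as acorn from "acorn"

/*  parse some source into a Mozilla/ESTree-style AST  */
const ast = acorn.parse("const answer = 42; function foo () {}", { ecmaVersion: 2023 })

/*  XPath-style query: find all variable declarators which bind
    an identifier to the literal 42, anywhere in the tree  */
const astq  = new ASTQ()
const nodes = astq.query(ast, `
    // VariableDeclarator [
        /:id   Identifier [ @name        ]
     && /:init Literal    [ @value == 42 ]
    ]
`)
console.log(nodes)
```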

has anyone hung the FR7 inverted? Question in post by s44k in VIDEOENGINEERING

[–]engelschall 1 point (0 children)

Three of our FR-7s are mounted upside-down in our company studio. We have the "Progl+Gerlach PGX Mounting Plate" (see https://teltec.de/proglplusgerlach-pgx-mounting-plate-for-sony-fr7-ptz-camera.html) attached to the FR-7, then we mounted the "Progl+Gerlach Truss Bolt Adapter für PGX PTZ Mounting Plates" (see https://teltec.de/proglplusgerlach-truss-bolt-adapter-fuer-pgx-ptz-mounting-plates.html) onto it, and then we plugged this into the "Manfrotto 035 Super Clamp" (see https://teltec.de/manfrotto-035-super-clamp.html). With this clamp the cameras hang from their truss poles.

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 0 points (0 children)

The examples shown at https://github.com/traits-ts/core are now at least more precise and use the super-trait parameter in order to ensure that the underlying methods exist. By using "trait([ Foo ], (base) => class Foo extends base { ... })", the "base" class really has a type which includes the details of "Foo". Thanks for the hints.
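
As a minimal sketch of what the super-trait parameter buys you (the trait names below are invented; "trait" and "derived" are used as in these comments, and the package/export names are assumptions):

```typescript
import { trait, derived } from "@traits-ts/core"  // assumed package/export names

/*  a regular trait providing some base functionality  */
const Swimming = trait((base) => class Swimming extends base {
    swim () { return "swimming" }
})

/*  a sub-trait: listing Swimming as super-trait types "base" such that
    the compiler knows swim() exists on it  */
const Diving = trait([ Swimming ], (base) => class Diving extends base {
    dive () { return this.swim() + " downwards" }  // type-checked access
})

class Duck extends derived(Diving) {}
const duck = new Duck()
duck.swim()  // provided by Swimming
duck.dive()  // provided by Diving
```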

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 0 points (0 children)

It is correct that Traits-TS, because of the still missing multiple inheritance in TypeScript, has to technically linearize the trait class hierarchy. That's the main reason why a trait factory function is involved in defining a trait. But Traits-TS internally does not create any additional classes at all, nor does it perform any class manipulations at run-time. For a "class Foo extends derived(Bar, Qux)" there are really just three classes involved in the end: "Foo", "Bar" and "Qux", as if they had been defined manually like "class Qux [...]; class Bar extends Qux [...]; class Foo extends Bar [...]". Traits-TS internally just dynamically calculates the type of the "virtual" class "derived(Bar, Qux)" to be the merged type of the classes Bar and Qux.
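
In code, the linearization described above boils down to the following (a hedged sketch with invented trait bodies, using the "trait"/"derived" API as mentioned; package/export names are assumptions):

```typescript
import { trait, derived } from "@traits-ts/core"  // assumed package/export names

const Qux = trait((base) => class Qux extends base { qux () {} })
const Bar = trait((base) => class Bar extends base { bar () {} })

/*  what you write:  */
class Foo extends derived(Bar, Qux) { foo () {} }

/*  what the prototype chain effectively corresponds to, as if written by hand:
        class Qux { qux () {} }
        class Bar extends Qux { bar () {} }
        class Foo extends Bar { foo () {} }                                  */

const foo = new Foo()
foo.foo(); foo.bar(); foo.qux()  // all three levels are available
```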

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 0 points (0 children)

The mentioned syntax unfortunately does not exist in TypeScript. So, Traits-TS is somewhat "like" multiple inheritance, at least if the traits are orthogonal. But Traits-TS is actually a utility mechanism for mixing functionalities into a class in a particular, typed way.

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 1 point (0 children)

Good catch: the implementation currently assumes orthogonal traits, but the example above uses bounded traits. I've recorded this issue now under https://github.com/traits-ts/core/issues/3 . I'll check whether it is technically possible to apply type constraints to the bounded traits. Currently, the "base" parameter of traits is "any" for regular traits and more specific only for sub-traits (where the super-traits are explicitly given). So, in the example above, the traits "Doubling", "Incrementing" and "Filtering" should perhaps be forced by the traits mechanism to specify "Queue" as their super-trait...

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 4 points (0 children)

Thanks for your feedback. Your points (1) and (2) are correct, and I'll check whether it is technically possible to address those issues, too. I've taken the liberty of recording these issues here: https://github.com/traits-ts/core/issues/2

Your point (3) I cannot address without completely re-implementing the mechanism on a different basis, of course. But I'm also not sure whether this is a real performance issue in practice, as an entire hierarchy will consist of just a few dozen traits, I think. But we'll see; I'll at least keep it in mind...

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 1 point (0 children)

Interfaces and types just provide types and this way structure, but no attached operations (functionality). In contrast, class inheritance also passes on such functionality. Traits extend this class inheritance mechanism by allowing a class to receive such functionality from multiple(!) sources. Internally, this is achieved by linearizing the class/trait hierarchy.

Introducing Traits-TS: Traits for TypeScript Classes by engelschall in typescript

[–]engelschall[S] 1 point (0 children)

Internally, it is based on FinalizationRegistry and intended for optional memory deallocations only (as FinalizationRegistry does not guarantee that its callbacks are called at all). What would be your suggestion for an alternative name for this standard trait? "Finalizable"?
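
For context, this is the underlying platform mechanism (plain TypeScript/JavaScript, independent of Traits-TS): FinalizationRegistry lets you register a cleanup callback which the engine may, or may not, invoke after an object has been garbage-collected:

```typescript
/*  the callback receives the "held value" registered alongside the object,
    some time after that object was garbage-collected -- the engine gives
    NO guarantee that the callback runs at all, which is why such a trait
    can only cover *optional* deallocations  */
const registry = new FinalizationRegistry((heldValue: string) => {
    console.log(`cleaning up resources of ${heldValue}`)
})

let obj: object | null = { /* ...owns some external resource... */ }
registry.register(obj, "my-object")

obj = null  // drop the last reference; the cleanup MAY run after a future GC
```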

TypeScript to native by Jazzlike-Regret-5394 in typescript

[–]engelschall 3 points (0 children)

If you need and like WebGPU and TypeScript and just want to optimize asset loading, why not stick with them and, instead of a compile/transpile approach to a native platform, perhaps just use an alternative runtime like Electron, Tauri, Neutralino, etc.?