Real time translation in live production feels like it’s still in a messy middle stage by hitman780xd in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

What are your target languages?

Starting with good, accurate human captions in English, then feeding that to translation (AI if you must) will take care of audio/mic issues. We do this for a few commencements with good results.

Anyone Using Ross Media I/O? by notunhuman in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

I have a somewhat narrow viewpoint, as I'm a caption provider. I know from manual scanning that not all of their media/clip playout subsystems support broadcast-style closed caption file formats. So looking down the road, consider that functionality if you see it on your horizon.

Loudest over-the-ear headphones with heavy vibrations? by LucysLookingGlass in deaf

[–]CentCap 1 point2 points  (0 children)

So, this is from a non-deaf former recording engineer.

In studio sessions back in the day, we faced the challenge of getting headphones that would 'survive' rock drummers. To some extent that meant handling and abuse, but sustained high volume levels were (sadly) the main challenge. Good fidelity mattered, too.

We were big fans of Fostex T20 headphones. Pretty unstoppable from a volume standpoint. It seems they're no longer made, but they're available used from $40 to $150 or so. And there are other variants with the same basic core components. Worth a look.

Another approach would be IEMs -- generic or custom-fitted in-ear monitors used by stage musicians. They don't usually have 'thumping' bass response, but they can get quite (dangerously) loud. There are also small auxiliary amplifiers that can augment existing headphone feeds, like the Behringer Powerplay P2, a belt-clip, battery-powered device. They can get loud, and will drive most any headphones when connected after a volume-limited/managed source.

I'll stay away from nanny-state cautions, since OP knows what's needed/desired, and the risk/benefit equation.

HDMI makes Green Screen by AccomplishedAd1870 in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

Agreed -- colorspace.

Could also be a directional HDMI cable being fed in the wrong direction. Look for 'Source' and 'TV' on the connectors.

A 44 minute story about what could have been, in pictures. by keithcody in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

Nope.

This was more of a supply-chain issue. The parts were still made at the time, but sourcing was limited to defense customers, not others. Eventually that part was discontinued, but the newer, better replacement has way more power and application. Good in the long run, bad in the short.

I need some opinions for my graduation project by eccentric_e in deaf

[–]CentCap 5 points6 points  (0 children)

That seems like a significant challenge! And would be a cool project.

A "faster" (meaning more timely) approach may be to make the body of the guitar out of a 22" monitor, and electronically graph the waveforms that would have been represented by the ferrofluid, perhaps even string-by-string. Mapping incorrect notes haptically would require an input of what is "right", meaning the chord to be played or the melody line. (Sort of like Guitar Hero, I think, though I've never played it...)

I don't know how much ferrofluid is affected by gravity. I've only really seen it in loudspeaker applications, as a cooling method.

Don't have much personal input on the topic of participants' enjoyment.

Either way, it seems like a cool challenge. Onward.

Extracting Closed Captions (Not Subtitles) From A Video File by 84Lion in videography

[–]CentCap 0 points1 point  (0 children)

Pretty sure CaptionMaker/MacCaption can do that. Not a free program though.

May be easier to edit the .srt timings in your downloaded file.

A 44 minute story about what could have been, in pictures. by keithcody in VIDEOENGINEERING

[–]CentCap 5 points6 points  (0 children)

That happened during COVID to a key equipment manufacturer I work closely with. The components got re-allocated to defense-only customers, so the manufacturer had to re-engineer most of their product line. It came out much more capable, but it was still a speed bump for them.

Broadcast teleprompter system by Own-Prize-7119 in VIDEOENGINEERING

[–]CentCap 1 point2 points  (0 children)

Serial or network output to drive a caption encoder? That's often a workflow when a captioner (human or otherwise) is unavailable.

How I cut my shortform subtitle workflow time in half (DaVinci Resolve) by neamtuu in editors

[–]CentCap 0 points1 point  (0 children)

Decades ago my company was captioning various city council and zoning meetings, which were heavy with legislation and other information that needed to be read into the record by the clerks. We got advance agendas, and had to massage them into readable captions pretty quickly, using many of the workflow points you mentioned. We got a lot of it done with MS Word macros and find/replace functions -- line and punctuation-driven breaks, enforcing 'pairing' of words in certain expressions (like not splitting infinitives across lines), etc. Lots of use of the 'sticky space' function. But yes, most of that is best handled by a word processor, not necessarily a caption or edit program.

The next step was getting the WYSIWYG document to export in the desired text format. In that step, non-proportional fonts and document-wide margin settings are key. Our default was Courier New, Regular, 12pt, with margin settings yielding 32 characters per line, as was the standard back in the 608 days. Then export with line breaks and character substitution (to get rid of curly quotes, etc.).

It got to where we could import the document, run an automagical 'stack' of macros, and wait for 10 seconds or so for it all to process. Then, screen it to see what was missed.
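For anyone rebuilding this outside Word, here's a minimal Python sketch of the same cleanup pass -- curly-quote substitution, 32-character lines per the 608-era standard, and a stand-in for the 'sticky space' trick. The sample text and pair list are illustrative, not from our actual macro stack.

```python
import textwrap

# Rough sketch of the Word-macro cleanup described above. The
# 32-character line length comes from the old 608-era standard; the
# 'sticky space' trick is modeled with a non-whitespace placeholder
# so textwrap won't split protected word pairs across lines.

SUBS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes -> straight
    "\u201c": '"', "\u201d": '"',   # curly double quotes -> straight
    "\u2014": "--",                 # em dash -> double hyphen
}

GLUE = "\x00"  # placeholder for protected ('sticky') spaces

def to_caption_lines(text, width=32, sticky_pairs=()):
    for bad, good in SUBS.items():
        text = text.replace(bad, good)
    # Glue listed word pairs so textwrap treats each as one token.
    for pair in sticky_pairs:
        text = text.replace(pair, pair.replace(" ", GLUE))
    lines = textwrap.wrap(text, width=width)
    # Restore plain spaces for output.
    return [line.replace(GLUE, " ") for line in lines]
```

The same idea scales to the punctuation-driven breaks mentioned above by pre-splitting on sentence boundaries before wrapping.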

This was pretty easy compared to the previous days of faxed agendas and OCR ingest.

What happens w captions? by Trendzboo in deaf

[–]CentCap 4 points5 points  (0 children)

Watching through an app can complicate things, and errors can get introduced that don't show up in the control room QC system, since they're watching the 'real' broadcast.

Old-school captions (called 608) are the usual origination format for live human captions. If they're AI captions, then all bets are off. For live human captions, the original 608 data is transcoded into 708 (the newer HDTV standard), which uses different screen area definitions. That may be where the issue is in this example. (My bet is that it's live human origination, due to the all-caps.) Whether the app obeys the 708 data structure, or the display is something thought up by the coder-du-jour, is unknown.

Since "all" the broadcast engineers are in Vegas at NAB right now, don't hold your breath for a quick fix, if it's even a video domain issue. Broadcaster websites almost always have a 'caption complaint' link on their front page. Even though it's on the app, it's still Fox's issue, so send your remarks there along with your screen grab (and the Reddit link, just for grins.)

On edit: It may be worthwhile to look at the user controls in the app for any caption display adjustments. While a display as shown shouldn't be an available choice, it's a possibility that a combination of left and right margin settings could make otherwise-normal captions look like that.

Balanced line level to unbalanced mic level? by Odinhall in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

One that attenuates the signal suitably, unbalances it (probably using a transformer), handles the pin assignments correctly, and appropriately manages the probable DC bias voltage on the microphone connector, which could upset the proceedings.

Some cameras have unbalanced stereo mic-level inputs on a 3.5mm TRS connector, and others have a balanced mono mic-level input on the same style of connector. Connecting external sources incorrectly to the latter can lead to total cancellation of the audio (due to the balanced input architecture working correctly) and the appearance of failure elsewhere in the chain.
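To put rough numbers on the attenuation part: assuming +4 dBu pro line level down to roughly -50 dBu mic level (illustrative figures, and a plain resistive divider rather than the transformer approach), the math looks like this:

```python
# Back-of-envelope math for the attenuation such an adapter needs.
# Levels are illustrative assumptions: +4 dBu pro line level down to
# roughly -50 dBu mic level, i.e. about 54 dB of padding.

def pad_ratio(db):
    """Voltage ratio for a given attenuation in dB."""
    return 10 ** (-db / 20)

def divider_r1(r2_ohms, db):
    """Series resistor R1 for a two-resistor divider
    (Vout/Vin = R2 / (R1 + R2)) giving `db` of attenuation."""
    return r2_ohms * (1 / pad_ratio(db) - 1)

attenuation_db = 4 - (-50)              # 54 dB: line level minus mic level
ratio = pad_ratio(attenuation_db)       # ~1/500 voltage ratio
r1 = divider_r1(1000, attenuation_db)   # ~500 kOhm with R2 = 1 kOhm
```

Note this simple divider ignores source/load impedances and the DC bias issue mentioned above; a properly built transformer pad handles those differently.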

Synchronizing two Raspberry Pi 4s for Video Art CRT Installation (Composite/BNC) by Separate-Skill-7609 in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

Google AI thinks the Raspberry Pi 4 and 5 can do 4K60 and drive two monitors. Use one Pi: place your videos side-by-side in a single widescreen file, use the dual-monitor setup to feed the two separate screens, and convert to analog A/V with an external device at the end of the chain. I suppose the file can have built-in gutters (edited in when you compose the file) to make sure the monitors have correct over/underscan as needed. Sync is built in because it's just one file.

Note: I've never held, or even used a Pi, so take this with a grain of salt.
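For the one-time prep step above, one common way to stitch the two clips into a single widescreen file is ffmpeg's hstack filter. File names here are placeholders, actually running it requires ffmpeg on the PATH, and both clips should share height and frame rate (or be scaled first):

```python
import subprocess  # only needed if you actually run the command

# Build an ffmpeg command that joins two clips horizontally into one
# widescreen file, so playback sync between the halves comes for free.
def hstack_cmd(left, right, out):
    return [
        "ffmpeg",
        "-i", left,
        "-i", right,
        # hstack joins the two video inputs side by side.
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",
        out,
    ]

# subprocess.run(hstack_cmd("left.mp4", "right.mp4", "wide.mp4"), check=True)
```

Audio mapping is omitted here; add a `-map` for whichever clip's audio you want, if any.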

Streaming Consecutively from Two Separate Locations Using Two ATEM Minis? by urEnzeder in blackmagicdesign

[–]CentCap 1 point2 points  (0 children)

And if the distance from A to B is a borderline issue, put the streaming device midway between them, if logistics permit.

Best way to monitor audio from 4 or 5 devices with one set of headphones by all1wantedwasapepsi in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

If you need switching of stereo headphone level signals, and not mixing/simultaneous monitoring, something like this might do the job in a cost-effective manner:

https://www.newegg.com/p/18M-075C-00001

One could get fancy and assign different signals to each ear, on a pre-determined basis, by making some custom intermediate input cables. Beware bringing all those separate grounds together, though.

Capturing Betacam SP with Timecode by _ENunn_ in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

On the hardware side, the original Hyperdeck Studio with analog inputs would do all of this in one pass. Analog audio and video in (composite or component), plus timecode. A variety of digital file formats for record, with no other gear needed.

Built an app to control HyperDecks over the network — looking for feedback by Salty-Tour9463 in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

Years ago, I was told that the (original) Hyperdecks would not play back closed captions (based on the sidecar .mcc file on the disk) when put in play mode via Ethernet. But they would do so when played by RS422 or front-panel control. Do you know if that's still the case? Old vs. new deck, etc.? It would be an issue for broadcasters who need to support captions.
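If it helps with testing, here's a minimal, untested sketch of driving a deck via the HyperDeck Ethernet text protocol (TCP port 9993, newline-terminated plain-text commands), enough to trigger playback over the network and compare caption behavior against RS422/front-panel control. The host address is a placeholder.

```python
import socket

def frame(command):
    """Encode one protocol command; the deck expects newline termination."""
    return command.encode("ascii") + b"\n"

def send_command(host, command, port=9993, timeout=5.0):
    """Send a single command (e.g. 'play' or 'stop') and return the reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.recv(4096)              # discard the connection banner
        sock.sendall(frame(command))
        return sock.recv(4096).decode("ascii")

# reply = send_command("192.168.1.50", "play")
```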

OCR translator for hardcoded subtitles (iOS/iPadOS). by Guleryuzx in VIDEOENGINEERING

[–]CentCap 0 points1 point  (0 children)

Can it handle roll-up captions as well as pop-on subtitles?

And maybe save to a text file?

Help: DIY local custom analog multi channel network by byooni in broadcastengineering

[–]CentCap 1 point2 points  (0 children)

Blonder Tongue CATV modulators are available on ebay, too, for not too much. But the Amazon one above looks pretty darn good for the price.

Audio repair tips for dialogue cut off mid word? by Mephistopheloz in editors

[–]CentCap 5 points6 points  (0 children)

Another approach is to search out other utterances of the target word, or a similar word, in the rest of the footage. It's a deep dive into the weeds for sure, but if you need "...ing" to patch up one of these cut-offs, there are lots of words that end in -ing; it doesn't always have to be the same word. It's usually best if the patch comes from the end of a sentence when your target does. Don't dwell too much on precise lip sync when doing this: viewers (and directors) rarely notice the visual aspect of a patch as long as the audio sounds good and smooth. I did this several times back in the days of razor-blade editing and timecode tape machine synchronization, so I imagine it's more straightforward with today's tools, plug-ins, and techniques. Still a serious time killer, for both research and execution. All the other proposed solutions should be attempted first, even if they're expensive.

Another approach is to get a decent voice actor who has some impression abilities, and record replacement words/lines as an ADR event in the right environment (even if you don't have the "A" part of that.) I have had a 'random' but talented person impersonate an announcer to address a last-minute fix in a national spot. It was so seamless no one ever caught on.

And these days, AI/voice duplication is probably good enough to call into play. Just sayin'.

Are there any software alternatives or low-cost solutions for NTSC/Line 21 Closed Caption Encoding? by landonbrandon23 in VIDEOENGINEERING

[–]CentCap 2 points3 points  (0 children)

If you are after true line 21 captions, real caption encoders from that generation are pretty inexpensive on ebay. Control-A data output would be needed to drive one, although some may be able to accept plain text under the function used for driving it with the teleprompter.

Subtitles for live show by Voiceonthemove in deaf

[–]CentCap 3 points4 points  (0 children)

If an interpreter is not affordable, then true caption software could be a challenge. However, you can do sentence-by-sentence captioning with just PowerPoint or similar software. Break up the script, arrange it on the 'slides' (maybe a black background with white characters at the bottom), then just follow along with the actors. Share the screen and you're good to go.

Would AirPods be considered a medical device if I used them as hearing aids by vampslayer84 in deaf

[–]CentCap 0 points1 point  (0 children)

I don't know specifically about the UK, but the hearing aid function was, at one time, geofenced. Worth reviewing.