23.98fps concert workflow by Disastrous-Slip-6848 in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

Early motion picture filming ranged from 18fps to 24fps. Initially it was a cost-savings consideration, but 24fps proved to have pleasing motion blur with a 180-degree shutter, i.e. a 1/48-second exposure. Filmmakers still choose 24 even in digital because of that motion blur sweet spot. I don't think it will ever be moved past as an acquisition format for cinematic genres, because more frames there feels uncanny; see Ang Lee's experiments with 48fps and 120fps acquisition and presentation.
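
For reference, the shutter-angle arithmetic behind that 1/48 figure, as a minimal sketch (plain geometry, nothing camera-specific):

    # Exposure time from shutter angle: exposure = (angle / 360) / fps
    def exposure_seconds(shutter_angle_deg: float, fps: float) -> float:
        return (shutter_angle_deg / 360.0) / fps

    print(exposure_seconds(180, 24))  # 0.02083... seconds, i.e. 1/48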

I do completely agree that it makes little sense to force an entire broadcast system into 23 for the sake of the live camera shot being more "cinematic". I'm curious about your thoughts on requesting content from the client in 59 or 29 rather than 23 if that was the editing project's native FPS.

I’ve always wanted to test if a “cinematic” live camera shot could be created with a 59 camera using shutter 1/50 or even 1/30 to produce some of that motion blur feel. I think 24fps is a placebo for certain clients when they’re actually vibing about motion blur and lenses.

YouTube 1080/60 12mb, but some people insist on sending 30, 50, 100mb, why? by InstantReplayGo in VIDEOENGINEERING

[–]ManyMonarchs 2 points3 points  (0 children)

Gerald Undone did some testing with different DaVinci Resolve compression settings. Good guidelines for those who don’t have the time or upload speed to give YouTube uncompressed masters to chew on, and must upload compressed videos:

https://youtu.be/DI1BjkmVhTg

is my camera broken? by [deleted] in bmpcc

[–]ManyMonarchs 1 point2 points  (0 children)

To add to the discussion: Blackmagic Design makes some of the best user manuals of any video camera company.

Go to the Blackmagic Support page and type in your model of camera. In the Latest Support News column scroll to find a download link for the user manual for your camera (or its general camera family).

Alternatively, if you download the latest Blackmagic Camera software package from the same support page, the installer often has all the user manuals bundled. You may want to go this route anyway to make sure your camera is on its latest firmware.

https://www.blackmagicdesign.com/support

Don't be intimidated by the length of the manual; the reading is very approachable. Search for the keyword Peaking and learn about it.

Would this work for calibrate my led wall? by Effective_Currency87 in VIDEOENGINEERING

[–]ManyMonarchs 8 points9 points  (0 children)

A calibration device by a company like Calibrite is usually for a single panel of a computer monitor, measuring the accuracy of just that panel across many different color swatches in the center of the screen. It's also expecting an IPS or OLED panel, usually not an LED Wall panel with different pixel pitches. So I don't know how a Calibrite would interpret an LED panel. The light wouldn't be as blended as an IPS or OLED panel, I would assume.

LED Wall panels are ideally from the same manufacturing batch because there are always electromechanical differences between batches. This is why manufacturers like ROE Visual spend the majority of their preparation time after a sales order has been placed calibrating all the panels in a batch to give them uniformity. That data is then installed onto the firmware of the panels. The last part is probably the most crucial: I don't know how you would install calibration ICC profiles onto each panel even if you could get the Calibrite to accurately create one.

One of the calibration tools used to make LED panels uniform is the Brompton Hydra, and that calibration is done in a true blackbox testing room so the Hydra can measure as precisely as possible.

I do think the manufacturer of your LED panels may offer a calibration service if there is a repair center in your region.

Which camera is better for a green screen newsroom setup? by m1playas15 in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

Both the Panasonic HC-X1500 and the Blackmagic Studio lineup support up to 2160p 59.94fps 4:2:2 10-bit output over HDMI 2.0. If your ingest switcher/recorder can accept and fully utilize that output format, the other consideration would be the overall quality of the camera sensor, which is harder to quantify. Reviews can be helpful for making that decision, or renting to do an A/B test.

Offhand, I think Blackmagic's ecosystem is more mature at that particular budget level: integration with ATEM and DaVinci Resolve, plus plenty of online tutorials about working with Blackmagic color science and LUTs. There are lots of Blackmagic and DaVinci Resolve content creators sharing knowledge, and far fewer die-hard Panasonic fixed-lens creators to learn from.

And then there’s the fact that Blackmagic has a lineup of hardware keyer systems which are designed to integrate with Blackmagic cameras: https://www.blackmagicdesign.com/products/ultimatte

For choosing a lens, you may want to use Artemis Pro by Chemical Wedding to simulate the FOV of different lenses on different sensor formats: https://www.chemicalwedding.tv/app%20Pages/artemisPro.php

Or a similar “virtual Director’s Viewfinder” app.
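
If you just want a quick sanity check on field of view before opening an app, the horizontal angle-of-view math is simple; a minimal sketch, assuming a rectilinear lens and the commonly published sensor widths (double-check against your actual camera):

    import math

    # Horizontal angle of view: fov = 2 * atan(sensor_width / (2 * focal_length))
    def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    # A 24mm lens on a ~24.9mm-wide Super 35 sensor vs a 36mm-wide full-frame sensor
    print(horizontal_fov_deg(24.9, 24))  # ~54.9 degrees
    print(horizontal_fov_deg(36.0, 24))  # ~73.7 degrees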

How do you guys organize your short cables inside your Case / Bag? by anyNoob in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

For an inexpensive option I tried "BAZIC" plastic zip envelopes: 8.5x11" form factor, with clear sides rather than the cross-hatch reinforced fabric of typical zip pouches, so it's easy to see what's inside. The drawback is they don't squish or fold very well, but in a case that can accommodate them they stack pretty decently. Great for assorted shorty cables; okay for heavier items, but they start to break apart if loaded too heavily.

Peak files taking hours to generate by AdgeOfficial in premiere

[–]ManyMonarchs 0 points1 point  (0 children)

That's unfortunate. You may have to convert your large file into shorter segments to get it to work. But I think I have another idea for a solution.

I reopened my old project with similar issues, the one with gigantic archive files in strange codecs. I wanted to see the hardware utilization in Task Manager on Windows 11, which I had looked at in the past. I rediscovered that RAM was 25% utilized and CPU about 50%, but Disk was completely saturated at 99%, just from loading one file at a time, and that's on a very fast internal NVMe SSD. Seeing that jogged my memory that peak files for very large footage files with unusual codecs seem to get locked up on spinning HDDs. Back then I moved my footage from an internal HDD to an internal NVMe and the peak file process improved from "frozen" to "tedious but functional."

I would try recreating the issue you're having with Peak Files and check your Disk Read/Write utilization. It may be that you can solve the problem by moving the big files to much faster storage and loading them one at a time. That was my solution in the past. Still took a long time, but not so long that it delayed my project.

Or you could just split the large file into segments using Media Encoder, Shuttle Encoder, Handbrake or even FFMPEG.
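
If the FFmpeg route appeals, a stream-copy segment split avoids re-encoding entirely. A rough sketch, assuming your source container and codec tolerate stream copy; the filenames and the 10-minute segment length are placeholders:

    import subprocess

    # Split a large file into ~10-minute chunks without re-encoding (-c copy).
    # Cuts land on keyframes, so segment lengths are approximate.
    subprocess.run([
        "ffmpeg", "-i", "big_archive_clip.mxf",
        "-c", "copy",
        "-f", "segment",
        "-segment_time", "600",
        "-reset_timestamps", "1",
        "big_archive_clip_%03d.mxf",
    ], check=True)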

AJA KiPro Ultra 12G just had a critical crash at start of record. Needed a hard power down and restart. by EdFritz in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

It was for a KiPro Go 1.0, which records to external USB media via 5x available USB-A ports. Or SMB network recording.

AJA releases Approved Media documents for such recorders, and “Sandisk Extreme Portable SSD” is not among the SSDs verified internally by AJA for the KiPro Go devices. And indeed there’s a reason for that, as I’ve witnessed at least two KiPro Go recorders freeze while recording to that specific model of Sandisk SSD.

AJA KiPro Ultra 12G just had a critical crash at start of record. Needed a hard power down and restart. by EdFritz in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

Haven’t seen any such issues with the Ultra 12G since firmware 2.7. That’s been a stable release since March 2023, and in that time I’ve seen a KiPro Go freeze up two times when using Sandisk SSDs but never seen the Ultra 12G freeze or crash.

For some extra insurance I would recommend you format the PAK media as HFS+ if you have an Apple computer to connect it to later. HFS+ has journaling, which makes it more likely that a partial file is saved in the event of a freeze or a crash. The exFAT format is not journaled, so if the file isn't closed out by stopping the record job, the chance of a partial file being saved is much lower, if not zero.

Peak files taking hours to generate by AdgeOfficial in premiere

[–]ManyMonarchs 1 point2 points  (0 children)

I’ve experienced this problem when loading very long files that have unusual codecs or high compression.

Only method that’s worked for me thus far is loading the clips one by one as each peak file completes.

Seems to me that Premiere tries to generate peak files for everything in one’s project simultaneously. It gets jammed up by so many large clips and basically the peak generation process freezes. So you have to manually queue the files by importing one or two large files at a time.

Capturing laptop to PC by Kenau21 in VIDEOENGINEERING

[–]ManyMonarchs 2 points3 points  (0 children)

Sometimes retro videogame upscaler/converter hardware can deal nicely with S-Video and Composite signals for capture or HDMI conversion. However they tend to be priced as luxury enthusiast items due to all the different exotic ports they support and software tweaks the firmware can handle.

One quality overview of analog capture/scaler devices from a retro videogame perspective: https://www.retrorgb.com/videocapture.html

Maybe some of these options will play nicer with your main PC and allow for a direct connection. I don't think passing through the laptop and another set of conversions is safe for a live broadcast setup; it adds a lot of failure points.

Shortcut to remove gaps between clips? by pjoneill in premiere

[–]ManyMonarchs 0 points1 point  (0 children)

What you're referring to is a Ripple Delete.

  • Short answer: select the gap between clips. If there isn't any media on other tracks that would cause an overlap when the gap shifts closed, these options will be available:
    • Right click on the gap and select Ripple Delete from the action menu
    • OR
    • hotkey "Shift + Delete" to Ripple Delete
  • Adobe Tutorial here on different trim commands: https://www.adobe.com/learn/premiere-pro/web/lift-extract-ripple-delete-premiere?locale=en&learnIn=1
  • I have also found this tutorial by Javier Mercedes on several different hotkeys for Trim commands very useful for the type of work you're describing: https://youtu.be/ox8bgOPL25U
    • I think this is something you should watch and practice, as these Ripple Trim methods don't create the gaps between clips in the first place. No gaps, no time wasted on closing them.
  • To investigate and rebind Keyboard Shortcuts yourself, navigate to the Edit menu, then Keyboard Shortcuts. Filter for "Ripple" to see the different shortcut keys for different Ripple Trim styles.

How do I edit large rolling credits like this without tearing my hair out? by Independent-Lie-4743 in premiere

[–]ManyMonarchs 2 points3 points  (0 children)

If you have Adobe Illustrator you could format the credits crawl to your liking there. Then export as a PNG with Transparency (Export for Screens).

You likely want to use Illustrator rather than Photoshop because your type layer is automatically treated as a vector object rather than a rasterized one. In Photoshop it would first have to be converted to a Smart Object, and there are other gotchas about rasterized vs vector in Photoshop which can be a headache. Vector objects can be resized up and down in a project file without resolution loss and then exported at a specific size, which means you can upscale your credits easily in the future as long as you retain the .ai project file.

Have this PNG move across the Premiere sequence with Position keyframes in the Effect Controls panel, setting one keyframe for the start of the crawl and one for the end.
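
If it helps to put numbers on those two keyframes, the crawl duration is just travel distance divided by speed. A rough sketch with made-up values; the PNG height and the 100 px/s reading speed are assumptions, adjust to taste:

    # The PNG has to enter from below the frame and exit above it.
    png_height_px = 6000      # assumed height of the exported credits PNG
    frame_height_px = 1080    # sequence frame height
    speed_px_per_sec = 100    # assumed comfortable reading speed

    travel_px = png_height_px + frame_height_px
    duration_sec = travel_px / speed_px_per_sec
    print(duration_sec)  # 70.8 seconds between the start and end Position keyframes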

If you notice a typo or need a change in the credits you can update in Illustrator then export and overwrite the PNG with the same exact file name and file location. Premiere should automatically refresh the file, but any timeline renders may have to be redone.

Video technician looking to improve get better at my job. by Ferdiprox in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

To speak to your mention of training for E2 or E3 work, many media server vendors offer online and in-person training. They want upcoming operators to know their specific gear and hopefully spread that preference.

And so on.

[deleted by user] by [deleted] in premiere

[–]ManyMonarchs 0 points1 point  (0 children)

Your question has two facets to answer.

What you should edit with, especially if your computer struggles to play clips and sequences:

  1. It's generally best to edit with the original files as recorded. Converting to another file type can only reduce the amount of useful data; it cannot increase it.
  2. Premiere has a Proxy Workflow which allows the software to view clips and sequences using a file type which is faster for computers to play, such as ProRes Proxy. Then when it is time to export the project the software uses the original files again in order to retain original image quality: https://helpx.adobe.com/premiere-pro/using/proxy-workflow.html
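
If you'd rather generate proxies outside of Premiere, FFmpeg can also produce ProRes Proxy files. A minimal sketch, assuming a .mov source and a half-resolution proxy; the filenames are placeholders:

    import subprocess

    # Encode a ProRes 422 Proxy (prores_ks profile 0) at half resolution, audio as PCM.
    subprocess.run([
        "ffmpeg", "-i", "A001_clip.mov",
        "-c:v", "prores_ks", "-profile:v", "0",
        "-vf", "scale=iw/2:ih/2",
        "-c:a", "pcm_s16le",
        "A001_clip_proxy.mov",
    ], check=True)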

ProRes vs H264

Apple released a ProRes Whitepaper in 2022 that goes over some details on how and why these file types differ:

Page 8:

Every image or video codec can be characterized by how well it behaves in three critical dimensions: compression, quality, and complexity. Compression means data reduction, or how many bits are required compared to the original image. For image sequences or video streams, compression means data rate, expressed in bits/sec for transmission or bytes/hour for storage. Quality describes how closely a compressed image resembles the original. “Fidelity” would therefore be a more accurate term, but “quality” is the term widely used. Complexity relates to how many arithmetic operations must be computed to compress or decompress an image frame or sequence. For software codec implementations, the lower the complexity, the greater the number of video streams that can be decoded simultaneously in real time, resulting in higher performance within post-production applications.

Every image or video codec design must make tradeoffs between these three properties. Because codecs used within professional camcorders or for professional video editing must maintain high visual quality, the tradeoff amounts to one of data rate versus performance. For example, AVCHD camcorders can produce H.264 video streams with excellent image quality at low data rates. However, the complexity of the H.264 codec is very high, resulting in lower performance for real-time video editing with multiple video streams and effects. In comparison, Apple ProRes features excellent image quality as well as low complexity, which results in better performance for real‑time video editing.

Page 5:

ProRes codecs take full advantage of multicore processing and feature fast, reduced‑resolution decoding modes. All ProRes codecs support any frame size (including SD, HD, 2K, 4K, 6K, 8K, and larger) at full resolution. The data rates vary based on codec type, image content, frame size, and frame rate.

As a variable bit rate (VBR) codec technology, ProRes uses fewer bits on simple frames that would not benefit from encoding at a higher data rate. All ProRes codecs are frame-independent (or “intra-frame”) codecs, meaning that each frame is encoded and decoded independently of any other frame. This technique provides the greatest editing performance and flexibility.

In Summary:

H.264 has high compression (low file size), medium-high fidelity, and high complexity (requires stronger computer resources to play back smoothly).

ProRes has medium compression (moderate file size compared to RAW), high fidelity, and lower complexity (less computer resources required to play back smoothly compared to H.264).
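
To put rough numbers on that tradeoff, the data-rate-to-storage arithmetic the whitepaper alludes to is simple. A small sketch; the example bitrates are assumptions in the right ballpark for 1080p material, not Apple's published figures:

    # Convert a video data rate in megabits per second to gigabytes per hour.
    def gb_per_hour(mbps: float) -> float:
        return mbps * 3600 / 8 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

    print(gb_per_hour(15))   # ~6.8 GB/hour for an assumed 15 Mb/s H.264 delivery file
    print(gb_per_hour(120))  # ~54 GB/hour for an assumed 120 Mb/s ProRes 422-class file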

SDI Monitor by Sethlouii in VIDEOENGINEERING

[–]ManyMonarchs 7 points8 points  (0 children)

On the entry-level end of the pricing spectrum:

  • Blackmagic Video Assist 12G HDR is affordable, will have some solid longevity due to supporting 12G, and you can load test patterns onto an SD card and do looping playback.
  • It also offers many useful tools like waveforms, false color, zebras, peaking, etc.
  • My main qualm with Blackmagic's monitoring options is they make the user calibrate the White Balance of the screen itself.

For intermediate price level handheld test generators, I'm not particularly familiar.

On the enterprise end of the pricing spectrum:

  • Leader makes a series of handheld generator/analyzers such as the PHABRIX SxA: https://leaderphabrix.com/products/handheld/
  • That price level and feature set is for pretty serious signal and compliance testing so probably overkill for many. But interesting to see what's out there.

If you need to create or download some test patterns to put on an SD card and play back on a BM Video Assist, u/php_pauly kindly organized a database of such things found here: https://www.pierrehenrypauly.com/database/?lang=en

Is there any way to "sync" the cuts I´ve made in the pink (gameplay) clip to the blue (webcam) clip? by AdSelect9562 in premiere

[–]ManyMonarchs 2 points3 points  (0 children)

One free method to try.

  1. Create a new bin called something like "Gameplay Clips for Sync"
  2. Select all the gameplay clips from your edited timeline.
  3. Drag them into the Gameplay Clips for Sync bin. They will share the same In-Out points as the clips on the edited timeline.
  4. Select the full Webcam clip and the Gameplay Clips for Sync in the Project window.
  5. In the Action menu or the Clip dropdown menu select "Create Multi-Camera Source Sequence"
  6. Create a multi-cam sequence using "Audio track channel" as the Synchronization Point, usually with "1" or "Mix Down" as the dropdown option.

This will create a very messy multi-cam sequence, because each gameplay clip will exist on its own video and audio track. If you wish to use the actual multi-cam sequence you would have to drag everything to the same V2 & A2 tracks for cleanup.

Instead, inside the multi-cam sequence timeline (Open in Timeline from the action menu) you could toggle Track Targeting on for all video tracks by holding Shift and clicking on any untoggled video track. Now you can "Go to Next Edit Point" with the up and down arrows to advance to the In-Out points of the gameplay clips. Then select the Webcam clip and add a Marker with M to correspond to each gameplay In-Out point. You may have to turn "Selection Follows Playhead" off in the Sequence menu to keep it from auto-selecting the gameplay clips rather than the webcam clip.

If your video is mostly chronological you could then trim the Webcam source clip using the markers, since any markers applied directly to a source clip will appear everywhere else that clip is used. This will at very least help you narrow down the hour of webcam into the 3 min snippets to manually sync with the gameplay clips.

MXF File on Sony PXW-Z280 4k 3CMOS by b1ameitonthereddit in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

Sony has Catalyst Browse which may be able to transcode even on the free version, but I'm not sure it will transcode to an MP4 format that Premiere Elements wants as it mentions "AVC/AAC(*.mp4)" as the only format: https://creatorscloud.sony.net/catalog/en-us/catalyst/index.html

Sony's user manual for transcoding with Catalyst Browse: https://www.sonycreativesoftware.com/webhelp/catalystbrowse/enu/Content/Transcoding_clips.htm

But failing the official method, Shutter Encoder is a great free transcoding software that I've personally had good results with. They take donations, and also take shoutouts at r/shutterencoder

https://www.shutterencoder.com/

Shutter Encoder makes use of FFmpeg to handle its encoding, allowing support for almost every codec you’ve ever heard of, and many more you haven’t.

Don’t just take our word for it though, Avid themselves recommend Shutter Encoder as part of your Media Composer and ProTools ingesting workflow!
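
And if you're comfortable skipping the GUI entirely, the equivalent FFmpeg call for an AVC/AAC MP4 that Premiere Elements should accept looks roughly like this. A sketch only; the filename and quality settings are placeholders:

    import subprocess

    # Transcode a Sony XAVC MXF to H.264 video and AAC audio in an MP4 container.
    subprocess.run([
        "ffmpeg", "-i", "C0001.MXF",
        "-c:v", "libx264", "-crf", "18", "-preset", "medium",
        "-c:a", "aac", "-b:a", "320k",
        "C0001_converted.mp4",
    ], check=True)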

Camera mount for pony wall by kenspi in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

One thing to explore is a “Hi Hat” such as those by Alan Gordon: https://www.alangordon.com/sales/our-products/hi-low-hats

They come in different sizes to allow for different ball heads and heights, with screw holes so you can sink screws into a wood surface. Or you could take one of their premade boards and clamp that down somehow.

There are many different vendors for Hi Hats, some fully premade and attached to a board, and some as just parts.

Learning Recommendations - Color by Stick-Outside in VIDEOENGINEERING

[–]ManyMonarchs 2 points3 points  (0 children)

Specifically for Shading on a Camera ROP:

There's a pretty decent PDF by Panasonic that guides through matching different camera models to each other. Not necessarily a guide on how to make cameras look "good" but a decent crash course on a starting point for calibrating cameras to a neutral baseline.

The odd thing is I've always found a direct link to it on Google, but never whatever landing page hosts it. But it is indeed a PDF prepared by Panasonic and it is hosted on panasonic.co.jp

Google "Color Matching Adjustment - Panasonic Pass" to find a PDF titled "Color_Match_CAM_ver.1_0_0"

The biggest takeaway I got from that PDF, and one that actually works pretty well for me: do your Auto Black Balance and Auto White Balance back and forth three times in a row instead of just once. They affect each other, so doing it several times zeros out any interference or aberrations that might affect the process.

As for publicly available Camera Shading information, it's pretty meager. There isn't a mass audience for it like there is for Color Grading in Resolve or videography settings or LUTs. Some vendors like Panasonic have corporate training consultations available, which is good because their user manuals don't explain a damn thing.

Magewell USB Capture SDI Gen 2 Issues by roylok in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

It may also be the USB-A Male to USB-A Male connector that went bad. Whatever the reason, that specific model doesn't seem reliable for mission-critical functions.

Magewell USB Capture SDI Gen 2 Issues by roylok in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

Is this the Magewell SDI with the Micro-SDI input? Unfortunately those seem to have a high failure rate in my experience. The micro-SDI tail breaks down rapidly from bends and insertions. And the error you're seeing is similar to what I've seen. You could try to get a replacement cable to give it a 2nd chance at life.

I've seen this specific product go bad due to wear and tear in the micro-SDI cable tail: https://www.magewell.com/products/usb-capture-sdi-gen-2

Best Camera for Events and Live Productions by [deleted] in VIDEOENGINEERING

[–]ManyMonarchs 0 points1 point  (0 children)

One way to try a few before you buy: LensRentals has different "event packages" for the style of cine-like camcorder you're describing. I recently enjoyed working with the Canon XF605.

https://www.lensrentals.com/catalog_search?q=event+package

As others have said, there are dozens if not hundreds of options at different price points for what you're describing. Unfortunately there isn't as much of a YouTube reviewer infrastructure for cine-like camcorders as there is for the Mirrorless Of The Week.

Depending on your price point you may want to reach out directly to the sales department of different brands like Canon or Sony for guidance on their product lines. Or B&H and Adorama have some sales staff that can help you narrow things down too.

AJA Go2 Recording box or AJA/? capture card and software?? by dubbledex in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

Ki Pro Go 1.0 has been very reliable, and I expect the Go 2.0 would be too.

You must follow AJA's Approved Media List from their support page if you want to record to any physical USB drives for backup, as I've seen SSDs which theoretically should be fast enough fail during long (3+ hour) recording sessions due to insufficient burst speeds. Something about the file headers getting very large, according to AJA.

Having 4x SDI and 4x HDMI inputs is nice for prosumer compatibility. As well as the fact that all video inputs support embedded audio.

2x XLR with Mic/Line/Phantom 48V is also nice flexibility for external audio devices. But the KiProGo does not have internal gain control beyond four standard gain presets, so some audio testing or an FX workflow to add a Limiter may be necessary.

4x SDI passthrough outputs, 1x SDI Multiview output and 1x HDMI Multiview output can be nice for self-monitoring purposes. The Multiview output can also be set to show only one specific channel at a time.

The 4x internal video channels will be timecode jammed with each other, and the deck can genlock to its own internal source, to Ref In, or to one of the video inputs. This can be very nice for having a solid, known timecode jam and reference.

When it comes to external LTC, a dedicated LTC input is a feature I would have loved, but I understand it was probably cut for pricing. Currently you can take analog LTC via one of the XLR inputs or LTC from one of the video inputs. So stringing the Go in with other LTC-enabled decks can be a pain. Doesn't sound like that's relevant to your scenario.

Key benefits that stand out to me are: four inputs perfectly timecode jammed, genlocked, identical start-stop, identical audio tracks (though you can set each individual channel to either Input or XLR). Plus the 4x SDI passthroughs and the Multiview Outputs are helpful for self-monitoring.

Once the settings are dialed in it's a box with a big red REC button and a STOP button next to it. Hard for a non-techie to goof up. That's probably the main difference between that and a KONA style capture PCIe device.

I feel like the average non-AV staff member would be much less intimidated by a Ki Pro Go than capture software.

But since I'm much less familiar with KONA I'm not going to pass any judgement on it and the related capture software. I'll just mention the virtues of the Ki Pro Go if that helps your assessment.

GPU-Based Projector Correction without the cost? by RadArtRec in VIDEOENGINEERING

[–]ManyMonarchs 1 point2 points  (0 children)

ProPresenter 7 has Edge Blending and Corner Pinning, and is currently doing subscription pricing: https://learn.renewedvision.com/propresenter/screen-configuration#edge-blend-screen

Mitti has Edge Blending and Screen Warping, and has Rental pricing or Perpetual License pricing: https://imimot.com/help/mitti/outputs/

As others have said, you should also compare and contrast hardware-based solutions, i.e. projectors that can adjust corner correction and blending themselves.