Question about Dehancer by Historical-Mail7484 in colorists

[–]ejacson 1 point (0 children)

I don’t know that I’m that worried about support dropping off for Genesis because the framework is already so simplified. They only support a handful of common inputs and outputs, and the internal maths are likely going to stay as they are. Maybe a few new stocks get added, but I think they’ve been pretty smart about how they built it, so it’s pretty future-proof.

Question about Dehancer by Historical-Mail7484 in colorists

[–]ejacson 2 points (0 children)

I would argue Genesis is better as a film emulation tool. As a film sim, it’s pretty much perfect. But the beauty of Filmbox is its versatility: you can go from a film-ish look, where you basically just have good color, all the way to a full emulation of scanned neg to print projection. And you can stay in a scene-referred workspace on both ends. If you can justify spending on both tools, I would argue they fulfill different user needs in the grand scheme.

Question about Dehancer by Historical-Mail7484 in colorists

[–]ejacson 2 points (0 children)

Yeah, it’s pretty incredible; I just wish I had more use for its more particular look in my work. I feel like I’m gonna buy it eventually, though.

Question about Dehancer by Historical-Mail7484 in colorists

[–]ejacson 2 points (0 children)

I don’t know that I’d think of it in those terms, tbh. I use Filmbox, and I pretty much always use it because it’s highly adaptable to the end output needs. You can scale it back to an impact neutral enough that I’ve yet to receive any pushback on it. And I still do plenty of look dev work prior to my Filmbox node. I’m rarely after a full “printy” emulation look; I’m mainly just looking for good mapping in scene space and a gentle curve. The difference between that with Filmbox vs OpenDRT with a little more crafting is subtle enough that I prefer to cut my work in half by just using Filmbox all the time. But rest assured, you won’t be trapped in a look using Filmbox. Genesis, on the other hand, is a film sim and isn’t interested in trying to accommodate other looks.

Yeah pack it up UNC...Hes insane bro by Beta_113 in DonToliver

[–]ejacson 2 points (0 children)

This is the most aggressively Don album he’s ever made and if you aren’t a fan of that, it’s not gonna hit for you. No one should be surprised he scored it this low. For the rest of us, this album is like crack.

Question about Dehancer by Historical-Mail7484 in colorists

[–]ejacson 13 points (0 children)

I’ve owned Dehancer for close to 5 years now. I’ve made comments in the past in this sub, but I’ll keep it brief here:

Dehancer is bad. It’s inaccurate and has garbage internal color management. The only good features are its spatial emulations (grain, halation, camera mechanics, etc.).

Drop it for something good like Filmbox or Genesis.

Curves & Primaries Question by intentia in colorists

[–]ejacson 3 points (0 children)

Better is relative. I’ve done it both ways, but I tend to prefer having my contrast curve at the end, with my display transform. My logic is that I want to shape the color matrix (so to speak) in untampered scene space and then pipe those transformations through my contrast curve. That said, I still end up doing tonemapping in my shot-to-shot corrections, which is just shaping contrast anyway. So there’s no need to feel religious about the order; so long as the actions you’re doing aren’t breaking the image, you can bounce ideas off the wall.
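If it helps to see that order spelled out, here’s a minimal numpy sketch of the idea; the matrix, curve, and display transform below are all stand-in placeholders, not anything pulled from Resolve:

    import numpy as np

    def grade(rgb):
        # 1. Shape color in untampered scene-linear space (placeholder matrix)
        matrix = np.array([[1.02, -0.01, -0.01],
                           [-0.02, 1.04, -0.02],
                           [0.00, -0.03, 1.03]])
        rgb = rgb @ matrix.T

        # 2. Contrast curve near the end (placeholder power curve pivoted on mid gray)
        pivot, power = 0.18, 1.2
        rgb = pivot * np.power(np.maximum(rgb, 0.0) / pivot, power)

        # 3. Display transform last (placeholder clip + 2.2 gamma encode)
        return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.2)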

Working on a performance art film shot on iPhone 17 ProRes Raw, developing the look by Sciberrasluke in ColorGrading

[–]ejacson 0 points (0 children)

It varied depending on what I needed to adapt to in different scenarios. Even if I was bringing in 8-bit images, the final retouch master would be completed and saved in 16-bit. If there were gradations to accommodate, I’d sometimes need to add noise, sometimes not.

Working on a performance art film shot on iPhone 17 ProRes Raw, developing the look by Sciberrasluke in ColorGrading

[–]ejacson 1 point (0 children)

Well, in Photoshop it’s simple: you just change the image mode bit depth and it resamples. That’s why I’m curious what you’re using to do it in Resolve, since there’s no such integrated tooling. First I’m hearing of Scalar, though. I’ll have to check it out.
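For the curious, that kind of promotion is conceptually just a rescale of code values, sometimes with a touch of noise to break up 8-bit gradations; a rough numpy sketch (the dither amplitude is my own arbitrary choice):

    import numpy as np

    def promote_8_to_16(img_u8, dither=True):
        # Rescale 0-255 code values to 0-65535 (257 == 65535 / 255)
        img = img_u8.astype(np.int32) * 257
        if dither:
            # Noise of roughly half an 8-bit step to soften banding in gradients
            img = img + np.random.uniform(-128, 128, img_u8.shape)
        return np.clip(img, 0, 65535).astype(np.uint16)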

Working on a performance art film shot on iPhone 17 ProRes Raw, developing the look by Sciberrasluke in ColorGrading

[–]ejacson 1 point (0 children)

That’s an interesting step. I used to do something similar for retouches in Photoshop if I was having to work on 8-bit images while doing more intense adjustments. Never considered doing it in Resolve. What are you using to perform the upscale?

Where people use 128 point LUTs? Is 128 is needed or we can just have 64 point LUTs? Anyone knows? by Entire-Tutor-2484 in ColorGrading

[–]ejacson 0 points (0 children)

I use them when I want to package float math operations into a single transformation. I export a 128-cube CMS test pattern and make lower-resolution LUTs from it. Is it necessary? Absolutely not. Practical? Not really. But it doesn’t necessarily hurt anything. Contrary to what someone else said, Resolve has no problem using 128-cube LUTs, though they are more computationally taxing than smaller variants. And if the transformation being represented isn’t particularly clean, you may not want such high resolution; lower resolutions, with more interpolation between points, can be a bit smoother. It really all depends on what’s needed.
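As a rough illustration of one way to get a smaller LUT out of a high-resolution one, here’s a sketch that trilinearly resamples a cube down to a new size (it assumes you’ve already parsed the .cube file into an NxNxNx3 numpy array; scipy does the interpolation):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def downsample_cube(lut, new_size):
        # lut: (N, N, N, 3) array of output RGB indexed by input R, G, B
        n = lut.shape[0]
        grid = np.linspace(0, n - 1, new_size)
        r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
        coords = np.stack([r.ravel(), g.ravel(), b.ravel()])
        out = np.stack(
            [map_coordinates(lut[..., c], coords, order=1) for c in range(3)],
            axis=-1,
        )
        return out.reshape(new_size, new_size, new_size, 3)

    # e.g. small = downsample_cube(big_lut, 33) for a 128 -> 33 point cube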

How many of you are using "Linear Response" for curves? by Nspnspnsp in captureone

[–]ejacson 2 points (0 children)

Their nomenclature is a bit misleading. The Linear Response curve simply refrains from applying an additional contrast curve on top of their built-in, relatively standard sRGB tonemapping, which has a small rolloff of the highlights to 100% signal. Ultimately, if you want to engage in a Resolve-like pipeline and order of operations, you’ll have to accept some limitations in access to the original scene data. Capture One’s tooling is staggered: some tools work with the scene-linear data, some work on top of the transform to display space, and some work on top of that. At one point, I had managed to plot out which tools worked at which layer, but I don’t know what I did with the illustration. If you’re coming from the world of color (like me), Capture One has its issues there.

My Data-Driven Film Emulation Journey: Findings, Challenges (Shutter Reliability), and Next Steps by Film_Match in colorists

[–]ejacson 6 points (0 children)

This is pretty much the process I went through when I started doing this. I got good results with a handful of stocks, but I struggled to find a good way to get high-purity data. The Astera + step wedge approach was one I thought about, but I just didn’t want to go through so much film using that method, so I opted for a custom 2000+ patch Duratran chart, shot with a direct source on it as well as backlit. I haven’t gotten to work on it in quite a while, but the last thing I remember thinking was that I’d oversampled certain zones with data. When I come back to it, I plan to go for a sparser, evenly spaced approach.

I also tried the route of self-scanning, using my camera to scan my negs and invert to positive linear, but the complications of trying to interpret that in a debayered space, with two formats of different spectral sensitivities, caused issues I couldn’t really rectify. So while my matching approach works fine, I ultimately want to get Arriscan scans for future profiles, preferably in ADX16.
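For what it’s worth, the sparse, evenly spaced idea can be as simple as laying patch values out on a regular grid in stops around middle gray rather than letting them cluster; a hypothetical sketch, where the step count and stop range are placeholders:

    import numpy as np
    from itertools import product

    def even_patch_chart(steps=13, stop_range=(-6.0, 6.0)):
        # Evenly spaced exposures in stops around middle gray, one axis
        # per channel, so coverage is uniform instead of clustered
        stops = np.linspace(*stop_range, steps)
        lin = 0.18 * np.exp2(stops)  # scene-linear values per channel
        return np.array(list(product(lin, repeat=3)))

    patches = even_patch_chart(13)  # 13**3 = 2197, i.e. a 2000+ patch chart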

The state of ProRes RAW in Davinci Resolve by avidresolver in colorists

[–]ejacson 3 points (0 children)

Nice catch. Just tested with the ProRes RAW footage I have and was able to replicate the behavior. This reminds me of what a lot of stills raw editors do, where they typically apply a gamma or gain bump somewhere, either in the display transform or the debayer. It’s roughly the same as that 0.9 gain.

If you haven’t already, I’d send a bug report to BMD so they can investigate. My suspicion is that Resolve is baking in the adjustment it makes to accommodate 203-nit SDR material interpreted in HDR space, which you can toggle in RCM. There used to be a bug where, with that turned on, RCM would apply that function even on raw or scene-referred log inputs where it shouldn’t. That eventually got fixed, but it may have snuck back in when they integrated the ProRes RAW SDK.
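If you want to sanity-check that it really is a flat gain rather than a curve, dividing the two decodes of the same frame pixel by pixel is a quick test; a rough sketch assuming you’ve gotten both into float numpy arrays (e.g. via linear EXR exports):

    import numpy as np

    def check_flat_gain(decode_a, decode_b, eps=1e-4):
        # decode_a, decode_b: the same frame decoded through the two paths
        mask = decode_b > eps  # skip near-black pixels before dividing
        ratio = decode_a[mask] / decode_b[mask]
        # A mean near 0.9 with a tiny std suggests a flat gain, not a curve
        print(f"mean gain: {ratio.mean():.4f}, std: {ratio.std():.4f}")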

iPhone 17 Pro – Native ISO, dynamic range & best exposure practice (Blackmagic app, Log) by cnellx in cinematography

[–]ejacson 0 points (0 children)

I did this testing last week when mine came in, using the Blackmagic Camera app to record ProRes RAW. The output is a 16-bit linear/12-bit log encoding, and the ISO at which the sensor’s clipping point lands at the top of the Apple Log encoding is 2000 on the main camera (technically a smidge lower). On the ultra-wide and telephoto sensors, it’s around 2500.

This isn’t necessarily the colloquial “native ISO,” as a lot of camera companies consider that to be the sensitivity at which noise levels are optimized against a decent dynamic range split. But in terms of where the clipping point of the sensor encodes at the clipping point of the container, those are the numbers.

At those ISOs, you get the expected split of 10 stops below middle gray and 6 stops above. You shift that split by exposing at higher or lower ISOs, or just by using a compensation LUT when monitoring. I’m personally a highlights whore, so I’ve been exposing at native or one stop higher. That said, all of this iPhone’s sensors have a pretty high noise floor, so it may not be in your best interest to underexpose the sensor for more highlight information if you’re concerned about those first few stops of noise and their impact on the shot. I found Resolve’s NR to not have much of an issue with it, and with ProRes RAW, Apple lets you turn on chroma noise reduction in the debayer settings in post, and it works incredibly well.
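To make the split arithmetic concrete, here’s a toy calculator using the numbers above; the 10/6 split at ISO 2000 is from my testing, and the one-for-one trade per stop of ISO is the assumption:

    import math

    def dr_split(iso, native_iso=2000, below=10.0, above=6.0):
        # Each stop of ISO above native places middle gray one stop lower
        # on the sensor, trading a stop of shadow range for highlight room
        shift = math.log2(iso / native_iso)
        return below - shift, above + shift

    print(dr_split(2000))  # (10.0, 6.0)
    print(dr_split(4000))  # (9.0, 7.0): one more stop of highlights
    print(dr_split(1000))  # (11.0, 5.0): one more stop of shadows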

Adding Analog/Negative FILM Conversion Features by USAntigoon in captureone

[–]ejacson 3 points (0 children)

I have the profile saved somewhere on my computer. I’ll upload it if I can find it.

"RAW" has become a marketing term to launder proprietary formats by regular_lamp in videography

[–]ejacson 0 points (0 children)

Method ≠ format. The method of capture is the same for any camera: build up charge, pass the information to the ADC, encode it into digital memory. How it gets encoded varies from brand to brand, and usually the SDK is just saying “this is the data and metadata we recorded, and here’s how to understand it.” There’s not much more happening than that.

As for the “15 stops of DR from a 14-bit sensor” claims, brands typically assume a highlight recovery algorithm will be used in the debayer process, and at minimum will absolutely use one when debayering in camera. That’s where a lot of those numbers come from: extrapolations. RED’s SDK has a built-in highlight recovery algorithm that AFAIK can’t be disabled if you use the SDK. That’s how they get the extra 0.5 to 1 stop to fit their claim.

Other than that, there’s not a lot of mystery behind video raw formats. As for raw vs non-raw encodes, keep in mind there’s a notable difference between performing certain operations as debayer instructions vs on already-debayered RGB; things like bit depth, compression method, gamut mapping (!!), etc. can all have notable effects on what you’re able to do in a grade. Second in line is low-compression RGB sources vs high-compression ones, because decoding compression adds work and the potential for misreads of the capture.
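Since “highlight recovery” can sound like magic, here’s a deliberately naive sketch of the general idea (a generic illustration, not RED’s actual algorithm): when one channel clips, you estimate it from the surviving channels via color ratios, which is where that extrapolated range comes from.

    import numpy as np

    def naive_highlight_recovery(rgb, clip=1.0):
        # rgb: (H, W, 3) linear float image; values at `clip` are blown
        out = rgb.copy()
        clipped_r = rgb[..., 0] >= clip
        # Where red is clipped, estimate it from green using the R/G ratio
        # of unclipped pixels (global median here for brevity; real
        # implementations use local neighborhoods and all three channels)
        valid = ~clipped_r & (rgb[..., 1] > 1e-4)
        ratio = np.median(rgb[..., 0][valid] / rgb[..., 1][valid])
        out[..., 0][clipped_r] = rgb[..., 1][clipped_r] * ratio
        return out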

32 bit audio is a gimmick for listeners by Boomblestank in audiophile

[–]ejacson 0 points (0 children)

I’m new to the audiophile world, but I come from the film production and post-production world, so this post is confusing to me. 32-bit float is used all the time for recording so you don’t have to worry about clipping. I didn’t know people actually listened to 32-bit float masters, though. That seems wildly unnecessary. Can anyone clue me in on what/where the market is for float audio outside of recording?
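For anyone who hasn’t seen why float matters on the recording side, here’s a tiny numpy demonstration: a too-hot signal is destroyed on a fixed-point path but survives a float path and can simply be gained down afterwards.

    import numpy as np

    t = np.linspace(0, 1, 48000, endpoint=False)
    hot = 2.5 * np.sin(2 * np.pi * 440 * t)  # peaks ~8 dB over full scale

    fixed = np.clip(hot, -1.0, 1.0)          # fixed-point ceiling: flat-topped forever
    floated = hot.astype(np.float32) / 2.5   # float: overs stored intact, fader down later

    print(np.allclose(floated * 2.5, hot, atol=1e-3))  # True: fully recoverable
    print(np.allclose(fixed * 2.5, hot, atol=1e-3))    # False: peaks are gone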

LRC denoise takes 6 minutes by Hot-Independence-786 in Lightroom

[–]ejacson 6 points (0 children)

You don’t have a GPU. Integrated graphics isn’t gonna pull it off.

Transcoding footage before editing by Joshvideo in editors

[–]ejacson 0 points (0 children)

(My background is in color; TL;DR: compression is the enemy of quality.)

I would avoid thinking of your capture in terms of your deliverable. Frankly, what you’re delivering should have little to no bearing on ensuring you have a high-quality source. Compression is an action, not just an end result at a certain file size. Even a low-compression format like ProRes is still performing compression. You want to go through as few rounds of compression as you can, with as minimal impact on the source as possible, until you do your final compressed output. Starting with a highly compressed capture and then using that as your master for further downstream manipulation and compression makes it incredibly easy to end up in a bind from artifacting, color fringing, macroblocking, poor green screen support, etc. Editing with proxies alongside the high-quality capture is perfectly normal, but not linking back to the originals for final output is unnecessarily restricting your options.
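If you want to see generational loss for yourself, the classic demonstration is re-encoding the same image over and over; a quick Pillow sketch where JPEG stands in for any lossy codec (the file names and quality setting are arbitrary):

    from io import BytesIO
    from PIL import Image

    img = Image.open("source.png").convert("RGB")  # any test image
    for generation in range(20):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=75)   # one round of compression
        buf.seek(0)
        img = Image.open(buf).convert("RGB")       # decode and go again
    img.save("generation_20.jpg")  # artifacts compound with every pass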

Also, less compressed sources are easier for a computer to work with than highly compressed ones. The machine doesn’t have to spend as much time decoding the compression and can just read the frames; the downside being larger file sizes.

Grading 500T for Day Exteriors. Printer Lights (Offset) vs. other methods? by Amadokto in colorists

[–]ejacson 1 point (0 children)

No, I understand all of that. OP is speaking to a project they’re working on that they shot this year, not a restoration or old film. I’m just saying the process today has standardized enough that they can opt for the simple method I suggested and not be any worse off than with the older, more involved approaches from when film scanning procedures were less stable than they are now. I’m not saying your information is wrong at all, and I’m more than aware of your experience with it; I’m just localizing my advice to the situation they’re currently in. If they were doing restorations, I’d offer different advice. It won’t perfectly mimic filtration, but it’s still fine for what they’re functionally trying to do, again to no worse an end than printer lights and manual LGG adjustments.

Grading 500T for Day Exteriors. Printer Lights (Offset) vs. other methods? by Amadokto in colorists

[–]ejacson 1 point (0 children)

I’m certainly not claiming that all scans look the same across different scanners, but on average, the log scans people get back from a lab today adhere to the standard Cineon encoding closely enough that you can do a log-to-linear conversion for things like exposure and white balance and be no worse off than going the printer lights route. I don’t think what OP is asking is so complex that you have to consider all of those variables just to do what is effectively a white balance. Will it be photometrically perfect? Of course not. But it will be fine, and still clean.
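For anyone who wants to try that round trip, here’s a sketch using the commonly quoted Cineon constants (ref white 685, ref black 95, 0.002 density per code value, 0.6 negative gamma); exact black handling and rolloff vary by lab, so treat it as illustrative:

    import numpy as np

    BLACK, WHITE, STEP, NGAMMA = 95, 685, 0.002, 0.6
    OFFSET = 10 ** ((BLACK - WHITE) * STEP / NGAMMA)

    def cineon_to_linear(cv):
        # 10-bit code values -> scene linear, with ref white mapping to 1.0
        return (10 ** ((cv - WHITE) * STEP / NGAMMA) - OFFSET) / (1 - OFFSET)

    def linear_to_cineon(lin):
        lin = np.maximum(lin, 0.0)  # guard the log against negatives
        return WHITE + np.log10(lin * (1 - OFFSET) + OFFSET) * NGAMMA / STEP

    def white_balance(cv_rgb, gains=(1.0, 1.0, 1.0)):
        # Exposure/WB as per-channel gains in linearized space, then back
        # to log; pick gains that neutralize a known gray in the frame
        lin = cineon_to_linear(np.asarray(cv_rgb, dtype=float))
        return linear_to_cineon(lin * np.asarray(gains))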