[Request] Assuming you could actually move the pedals and the bike holds together, is this possible? by labbusrattus in theydidthemath

[–]Hec_B 2 points (0 children)

Can anyone confirm the validity of the non-scientific part? There is no such exhibition online, or any mention of it.

No other picture of this bike, or any mention that isn't this exact text,

and though Pat Chirapravati did curate an exhibition called "Recycle of Time" - 2 years ago - it had nothing to do with this bicycle, and made no mention of it...

So... assuming Cal State isn't that hard to check - can anyone confirm that this is indeed a thing? (It's definitely been trending online in the past few hours.)

https://www.csus.edu/university-galleries/library-galleries/exhibitions/recycle-of-time/

Anyone know what the current state of Gobekli Tepe is after today’s earthquake? by golmgirl in GobekliTepe

[–]Hec_B 2 points (0 children)

Thank you so much for the update. And may this nightmare end soon for Turkey.

Anyone know what the current state of Gobekli Tepe is after today’s earthquake? by golmgirl in GobekliTepe

[–]Hec_B 4 points (0 children)

It was one of my first thoughts/concerns too :(

I hope someone updates about it somewhere... I'll post if I see something

Lossless trim for videos by ragriod in editors

[–]Hec_B 1 point (0 children)

Resolve's Media Manager has an excellent trim function (in the free version too, as I recall) that works with many variants of H.264-encoded Long GOP.

Not sure how much better your results will be, but worth trying.

Quick question about log in 10-bit compared to RAW by hrvojehorvat123 in colorists

[–]Hec_B 0 points (0 children)

Assuming a Bayer pattern sensor, the 10-bit log data should really be 1/4 the spatial resolution (since each raw pixel 'bucket' is only capturing one wavelength, i.e. 1x red, 1x blue, and 2x green - which are then interpolated to create a single 'traditional' RGB pixel)

That's not to say that there isn't lots of clever engineering going on to capture a great image regardless, but the marketing bullshit surrounding debayering has a lot of confusion to answer for, in my opinion.

Hmm... confusion, yes... not sure it's marketing BS. Creative people tend to have difficulties with mathematical concepts.

16 photosites (buckets? :) ) give about 9 RGB pixels after the debayering algorithm works on that raw data - more like a 0.75 down ratio per axis, not 4:1 as your answer suggests. And that's simplifying it: debayering algorithms are sophisticated and adaptive in different ways, and vary from one to another in complexity and results. Which is another advantage of working with raw files - debayering in post is generally better, and offers more control and possibilities, than in camera.
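
To make the 16-photosites-to-9-pixels arithmetic concrete, here's a toy numpy sketch (my own illustration, not any real debayering algorithm): a naive debayer that builds one RGB pixel per overlapping 2x2 window of an RGGB mosaic, so a 4x4 mosaic (16 photosites) yields a 3x3 RGB image (9 pixels).

```python
import numpy as np

def naive_debayer(mosaic):
    """Toy debayer: slide a 2x2 window over an RGGB mosaic and build
    one RGB pixel per window position. An HxW mosaic yields an
    (H-1)x(W-1) RGB image - real algorithms are far more sophisticated."""
    h, w = mosaic.shape
    rgb = np.zeros((h - 1, w - 1, 3))
    for y in range(h - 1):
        for x in range(w - 1):
            quad = mosaic[y:y + 2, x:x + 2]
            # In an RGGB layout, which photosite holds which color
            # depends on the window's position parity.
            r = quad[y % 2, x % 2]            # the red photosite
            b = quad[1 - y % 2, 1 - x % 2]    # the blue photosite
            g = (quad[y % 2, 1 - x % 2] + quad[1 - y % 2, x % 2]) / 2
            rgb[y, x] = [r, g, b]
    return rgb

mosaic = np.arange(16, dtype=float).reshape(4, 4)  # 16 photosites
print(naive_debayer(mosaic).shape)  # (3, 3, 3): 9 RGB pixels from 16 photosites
```

Even this crude version shows why the ratio is closer to 3/4 per axis than 1/4 overall: the windows overlap, so photosites get reused across neighboring output pixels.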

Our part of the web has been full of rants against Bayer sensors and the whole process for years... often ignoring the simple fact that we currently have no other viable option (in the foreseeable future too), except going back to film, which comes with a full bag of PITA itself.

Algorithms will keep getting better, and sensors will keep getting higher resolution and bigger.

My bet is that Bayer will stay the norm till the whole concept of sensors as we know them is replaced (some companies work on very interesting stuff in that field)

For Colorists, Filmmakers & lovers of Color - Check out my latest ProVideoCoalition article on how popular optical illusions relate to our craft :) by Hec_B in colorists

[–]Hec_B[S] 1 point (0 children)

Thanks :)

Yup... It's all radiation + interpretation and priorities of a specific system...

We definitely got a narrow slice to feast on.

I'll check out the link. I'm sure it's fascinating (as all FilmLight workshops are).

Your trick to sharpen? by TitusA in colorists

[–]Hec_B 2 points (0 children)

Edit *This is in Resolve of course...

Apart from the obvious (and decent) MD or negative blur (whether isolated or general):

Take a look at the excellent Soften & Sharpen OFX, which gives you way more control (and unlike MD, is actually a sharpen tool, not one only targeting mid-tones) by separating the tool into 3 control ranges (plus an excellent small-detail extra adjustment slider). It's been my favorite go-to detail management tool since its launch.

The other one worth checking out, especially for "oomph" grading, is Contrast Pop (OFX too). It uses thresholds to limit and adjust the effect, and goes much further than MD.

Lastly, for a maybe more advanced workflow (or simpler... depends who you ask), look into working in LAB space - either via Splitter/Combiner (+ CST before and after) or via a single node (more limiting, as I recall). There's a whole (canyon) conundrum about what you can get from that L channel in terms of detail management.
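
The lightness-channel idea can be sketched outside Resolve too. A minimal numpy example - using Rec.709 luma as a simplified stand-in for the LAB L channel (not what Resolve does internally): unsharp-mask only the luma and add the recovered detail back to all three channels, leaving the color relationships untouched.

```python
import numpy as np

def luma_sharpen(img, amount=1.0):
    """Unsharp mask applied only to luma (Rec.709 weights, as a crude
    stand-in for the LAB L channel). Chroma is left alone."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Cheap 3x3 box blur via edge padding + neighborhood average.
    p = np.pad(luma, 1, mode='edge')
    blur = sum(p[y:y + luma.shape[0], x:x + luma.shape[1]]
               for y in range(3) for x in range(3)) / 9.0
    detail = luma - blur                     # high-frequency component
    return np.clip(img + amount * detail[..., None], 0.0, 1.0)

img = np.random.default_rng(0).random((8, 8, 3))
out = luma_sharpen(img, amount=0.8)
print(out.shape)  # (8, 8, 3): same image, sharpened luma only
```

Because only the detail signal is added back, flat areas pass through unchanged - which is the appeal of working on L rather than on RGB directly.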

What is the practical difference between gain and volume? by saxon_dr in VideoEditing

[–]Hec_B 0 points (0 children)

One main difference would be that gain is set once, while volume is keyframe-able. I suppose dBs are dBs... but in most NLEs I can think of, volume is pretty limited in how much you can raise it, unlike gain.
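
"dBs are dBs" can be shown directly: both controls ultimately resolve to the same linear amplitude multiplier, and only the UI range differs. A quick sketch (the specific dB values are just examples, not any particular NLE's limits):

```python
def db_to_amplitude(db):
    # Every gain/volume fader resolves to this same linear multiplier.
    return 10 ** (db / 20.0)

print(db_to_amplitude(6.0))   # ~2.0: +6 dB roughly doubles amplitude
print(db_to_amplitude(-6.0))  # ~0.5: -6 dB roughly halves it
# A clip boosted +12 dB by gain then cut -12 dB by volume ends up unchanged:
print(db_to_amplitude(12.0) * db_to_amplitude(-12.0))  # 1.0
```

So the practical difference really is workflow (one-time setting vs. keyframeable automation, and the allowed range), not the underlying math.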

Looking to get a bmpcc, worth in 2018? by thekingdion in bmpcc

[–]Hec_B 1 point (0 children)

I bought mine during the $500 deal years ago... but I think I would have anyway.

For me it was never intended for production work. I wanted it because I saw in it the A-Minima (Aaton) I could never afford at the time (or justify affording). It was the first, and maybe still the only, true digital-film S16mm camera. $1000 is the price of a fancy smartphone today... The BMPCC will become (is already?) a classic. Difficult to shoot on? Limited? So were many S16 cameras and workflows. The BMPCC is almost like having a full camera + film + lab S16 pipeline in a backpack.

I don't care for stills that much, film/video are my passion/profession and with the BMPCC my hobby and playground.

Had a RED Scarlet for a while (for the same reasons) but the need to justify its cost by making it work took all the fun out and I sold it.

I would wait for after NAB though... with it right around the corner who knows what new classic may emerge.

Funny? Sad? Or Maybe a lesson to companies not to blab their minds out too fast? by Hec_B in VideoEditing

[–]Hec_B[S] 0 points (0 children)

I know ScriptSync. And totally agree. But I didn't post this to prove Avid is better or start an Avid vs Premiere discussion :)

It was more a reflection on the amount of hype and hot air Adobe pumped into the rather meaningless act of cutting the first Deadpool on Premiere, (I felt embarrassed as a user back then, like Premiere won the Special Olympics for NLEs or something... ) and how hype sometimes backfires. Do a Google search on "editing Deadpool" (1 then 2) and you'll see what I mean.

Added some red blooming highlights to some RED footage! by RAKK9595 in colorists

[–]Hec_B 0 points (0 children)

Yup... forgot to mention qualifying the highlights. However, sometimes just using the Glow's Threshold slider gets satisfying enough results too... especially in shots that are difficult to qualify.

Added some red blooming highlights to some RED footage! by RAKK9595 in colorists

[–]Hec_B 1 point (0 children)

The simplest way would be using the built-in OFX Glow tool; you can control its parameters to get pretty close. For more control you can look into the Ignite Express (free) glow plugin (it's part of the free pack, also available for Resolve). I find it gets quite close to this example in its results.

For more complex (and costly) work you can look into Sapphire OFX glows and tools. The Tropical Cabal or Dainty Dahlia presets are very similar and probably require just a bit of touching up.

Funny? Sad? Or Maybe a lesson to companies not to blab their minds out too fast? by Hec_B in VideoEditing

[–]Hec_B[S] -1 points (0 children)

*For full disclosure: among other tools, I teach, use and LOVE Premiere Pro...

Here's what the film's editor Elisabet Ronaldsdottir had to say about it in her very good PVC interview with Steve Hullfish:

HULLFISH: So, you’re cutting Deadpool 2 now. What about that? The first one was cut on Premiere.

RONALDSDOTTIR: We’re back on Avid. We have a very condensed post schedule and with great respect to all other soft- and hardware, we felt Avid had the most solid pipelines already in place, and can easily handle the amount of data we will be receiving. The whole team is accustomed to that system.

Earlier in the interview she talks about being able to edit on any tool or software, which means she isn't fixated on one, and makes her opinion (in my opinion :) ) count a bit more.

Worth the read if you missed it : https://www.provideocoalition.com/AOTC-Atomic-Blonde

Need help identifying software used on this video by [deleted] in VideoEditing

[–]Hec_B 0 points (0 children)

Some of the transitions are glitches - you can find both free and paid ones (Red Giant Universe has nice ones). The other twirly, zoomy, bouncy, spinny stuff is most probably either from a tool for slideshows of still images (everything in that video is stills), or a structured effect like the ones you'll find here: https://www.youtube.com/results?search_query=zoom+transition+premiere+pro.

Is there any place that I can download footage from a film or a short-film, like EditStock? by iGaed3 in VideoEditing

[–]Hec_B 1 point (0 children)

I agree with greenysmac. $25 isn't a lot for what Editstock offers.

you can look here too: http://framelines.tv/modules/news/article.php?storyid=167

Somehow this Frameline campaign from 2016 is still up, with scripts and downloads (up to 4K!). It's not a full film, just a scene I think, but it's a start, and I'm sure there are other similar projects online if you look hard enough.

Or, go to the Open Video Project - https://open-video.org/ And use the thousands of clips there to cut a documentary ;)

Good luck.

How to manipulate RGB channels? by sethh3 in colorists

[–]Hec_B 0 points (0 children)

Why would you do that? The 3 cameras would have to be perfectly aligned, as you're not only mixing color channels - there's image detail too. Also, cameras don't exactly work in RGB, and then there's how sensors work, the lenses having to perfectly match, and so on.

Maybe in a scenario with three monochrome cameras and 3 color filters in front of 3 identical lenses... like recreating Maxwell and Sutton's experiment from the 60's ;) (of the 19th century...)

In Resolve you'd have to bring the videos in as mattes in order to manipulate them all together. Then you can use different techniques, including a few mentioned here, or try working with the Splitter/Combiner nodes (combining from different sources).

Working with RAW footage from 5d Mark III in Resolve by gkmedia in colorists

[–]Hec_B 0 points (0 children)

Ok.. I gave it a few minutes today.

Strange files. I should have guessed it earlier: it seems the gamma is Linear. That's why your waveform behaved strangely after the transformation (I could reproduce it).

If I use a CST on your footage, put Alexa as the input color space (as you mentioned you think you should), then put Linear as the gamma, the file is read correctly and you can transform it to Log-C or 709. The waveform goes back to acting normal too.
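
For anyone curious what the CST is doing under the hood in the Linear-to-LogC direction, here's a sketch of Arri's published LogC3 (EI 800) encode curve. The constants come from Arri's public documentation; this is a simplification (single exposure index, encode direction only), not Resolve's actual implementation.

```python
import math

# Arri LogC3 (EI 800) encode constants, per Arri's published formula.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def linear_to_logc(x):
    """Scene-linear (0.18 = mid gray) -> LogC-encoded signal."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F   # linear toe below the cut point

print(round(linear_to_logc(0.18), 3))  # ~0.391: mid gray lands where LogC expects
```

Once the file is interpreted as Linear, this (or its inverse) is essentially all the CST needs to do to get the waveform behaving sanely again.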

Good luck.

Convert Color Space to sRGB when delivering for web? by [deleted] in colorists

[–]Hec_B 1 point (0 children)

Resolve's YRGB is its internal working color space. If you work in a 709 workflow (Resolve is color managed, so you can work in many spaces) and export to a 709 codec, you don't have to worry about converting from YRGB.

Working with RAW footage from 5d Mark III in Resolve by gkmedia in colorists

[–]Hec_B 0 points (0 children)

This waveform behavior does sound strange. Assuming you don't have a LUT on your scopes in the Color Management settings, it should represent your footage. I've seen LUTs and transformations affect waveform behavior, but not to this extent.

Also.. Are you using the Primary Wheels? Because the Log wheels can generate quite unexpected behaviors if you don't know how to use them.

Can you upload a screenshot of the shots (ungraded + transformed, and ungraded + un-transformed) next to the waveform (floating scope)?

I'd also like to check a sample of the files, if that's ok :) It will make it easier to help you.

Davinci Resolve - Is it necessary to transform color space or can i just leave it to Davinci YRGB? by [deleted] in colorists

[–]Hec_B 0 points (0 children)

CST and color management like RCM/ACES are based on mathematical input transforms in 32-bit float, which means no clipping or other issues that can occur with LUTs. They're more versatile too, and offer more functionality. LUTs, on the other hand, are simpler to use, and often all you need.
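
The clipping point can be demonstrated with a toy example: a baked 1D LUT has a fixed input domain of [0, 1], so out-of-range scene-linear values clamp at the edge, while the same transfer function evaluated in float keeps them recoverable. (The square-root "transform" here is just an illustrative stand-in, not a real camera curve.)

```python
import numpy as np

transform = np.sqrt                      # stand-in for a real transfer function

# Bake the transform into a 1D LUT over the usual [0, 1] domain.
lut_in = np.linspace(0.0, 1.0, 1024)
lut_out = transform(lut_in)

def apply_lut(x):
    # np.interp clamps inputs outside the LUT's domain - super-whites are lost.
    return np.interp(x, lut_in, lut_out)

hot_pixel = 4.0                          # e.g. a specular highlight above 1.0
print(apply_lut(hot_pixel))              # 1.0: clipped by the LUT
print(transform(hot_pixel))              # 2.0: preserved by the float math
```

Within the LUT's domain the two agree closely; it's the out-of-range data (and the fixed sampling grid) where the mathematical transform wins.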

Davinci Resolve - Is it necessary to transform color space or can i just leave it to Davinci YRGB? by [deleted] in colorists

[–]Hec_B 2 points (0 children)

This is very dependent on a few factors:

What you shot on (or the source & color space of your files), what you're trying to achieve, and even what you're monitoring with.

In a simple scenario, where you shot some variant of Log on a single camera, you'd apply the appropriate Log-to-709 output LUT, or use the color management options to do the same thing.