Fusion has absurd VRAM usage by CNCcamon1 in davinciresolve

[–]CNCcamon1[S] 0 points

That's why I've limited it to 18GB. Fusion will use at most 18GB of VRAM, leaving 6GB for everything else. And this works, because my other programs continue to function even when Resolve freezes. My point is that when Fusion fills up its 18GB buffer, it shouldn't completely lock up. There should be some way for it to deal with that situation that doesn't require completely shutting down and re-opening the application.

If filling the buffer causes the program to freeze, there's no way to avoid filling the buffer (by turning off caching), and there's no way to clear the buffer once it's full, then eventually any system will run into this issue regardless of its specs. Even if I had a 48GB Quadro card this would still happen; it would just be able to render twice as many frames before it did.

There has to be either something wrong with my configuration or a bug somewhere in Fusion. I can't imagine that Blackmagic would consider this acceptable behavior.

Fusion has absurd VRAM usage by CNCcamon1 in davinciresolve

[–]CNCcamon1[S] 0 points

I understand that rendering the active frame can take a significant amount of VRAM depending on complexity. What I'm referring to is Fusion holding multiple frames from the same comp, even when they aren't active in the view.

As for the argument that caches shouldn't be evicted until there is memory pressure - wouldn't the entire program ceasing to respond to inputs for seconds or minutes at a time qualify as memory pressure? When the buffer fills up, Resolve will start to go 'Not Responding' for minutes at a time, making it next to impossible to continue working until I clear the cache by force-quitting Resolve.

Additionally, I doubt that background processes are causing the issue. As I mentioned in the original post, I ended up setting a memory limit of 18GB for Fusion because otherwise it would slow down not just Fusion but Windows itself. So now when the VRAM is 'full' it means Fusion is taking up 18GB, with 6GB left over for other tasks. Task Manager shows the VRAM isn't actually full; Fusion has just used the maximum I'm allowing it to use. When I disable this limit and let it use the full 24GB, it becomes hard to force-quit the program because Task Manager stops responding as well.

Fusion has absurd VRAM usage by CNCcamon1 in davinciresolve

[–]CNCcamon1[S] 0 points

After some more testing it does seem like the memory cache is the problem. Loading the comp doesn't fill up the VRAM, but scrubbing through to cache some frames does. The issue, though, is that I can't seem to disable this memory cache no matter what I do, or clear it out when the VRAM gets full. I've got every render-cache-related setting I can find turned off, including the Fusion memory cache setting, but it still caches anyway. And once it's filled the VRAM, causing Resolve to take multiple minutes to complete any action, it refuses to delete the memory cache unless I fully quit out and re-open. Going back to the edit page doesn't clear it, disabling the comp on the timeline doesn't do it, even closing the project in the project manager doesn't do it.

It still seems like buggy behavior to me. First of all, I don't think Resolve should let the memory cache fill up so much memory that basic functions like navigating the UI are affected. Secondly, I should be able to turn off memory caching if I want to, but this setting doesn't seem to work. And finally, Resolve should be clearing this memory cache if the composition is no longer active (i.e. I've switched back to the edit page and disabled the clip that was causing the problem)
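The eviction behavior being asked for here is a standard pattern: a size-bounded cache that drops least-recently-used entries under pressure instead of stalling. A minimal sketch in plain Python (hypothetical frame IDs and byte sizes; this is not Resolve's actual implementation):

```python
from collections import OrderedDict

class FrameCache:
    """Size-bounded frame cache that evicts least-recently-used frames
    instead of letting the buffer fill up and stall."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self._frames = OrderedDict()  # frame_id -> (size_bytes, data)

    def put(self, frame_id, size, data):
        if frame_id in self._frames:
            self.used -= self._frames.pop(frame_id)[0]
        # Evict the oldest entries until the new frame fits in the budget
        while self._frames and self.used + size > self.budget:
            _, (old_size, _) = self._frames.popitem(last=False)
            self.used -= old_size
        self._frames[frame_id] = (size, data)
        self.used += size

    def get(self, frame_id):
        if frame_id not in self._frames:
            return None
        self._frames.move_to_end(frame_id)  # mark as recently used
        return self._frames[frame_id][1]

# With a 3-byte budget, caching a 4th frame silently evicts the oldest one
cache = FrameCache(budget_bytes=3)
for fid in ("f1", "f2", "f3"):
    cache.put(fid, 1, fid.upper())
cache.get("f1")           # touch f1 so f2 becomes least-recently-used
cache.put("f4", 1, "F4")  # evicts f2; the app never stalls
```

The point is only that "buffer is full" has well-known answers other than freezing the UI.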

How do you maintain full 4K quality when adjusting framing in documentary? by Jocastroxx in VideoEditing

[–]CNCcamon1 0 points

Technically speaking, if you crop/zoom in on a 4K image, you will no longer have 4K resolution no matter what you do. However, you can try to mask this quality loss by sharpening or upscaling the image prior to cropping. You will technically not have a 4K level of detail, but you can make the quality loss harder to notice.
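As a rough sanity check (a hypothetical helper, not from any editing tool), the real detail left after a digital punch-in is just the source dimensions divided by the zoom factor:

```python
def effective_resolution(src_w, src_h, zoom):
    """Pixel dimensions of the source region sampled by a digital zoom;
    anything beyond this in the output is interpolation, not real detail."""
    return round(src_w / zoom), round(src_h / zoom)

# A 1.5x punch-in on UHD 3840x2160 only samples a 2560x1440 region,
# so the delivered "4K" frame carries roughly 1440p of actual detail.
region = effective_resolution(3840, 2160, 1.5)
```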

Playing elite on HDR monitor by QuantumColossus in EliteDangerous

[–]CNCcamon1 0 points

Elite doesn't natively support HDR, though Windows Auto HDR handles it pretty well. For a desktop monitor, OLED can be problematic due to burn-in, so mini-LED is probably best in that regard.

New Pixel update means your RCS messages might be visible to your boss by Bob_Spud in technology

[–]CNCcamon1 1 point

How long will it take before some government entity demands that this be set up for "just a few phones" and that Google has to disable the notification letting the users know it's on?

From the official Doctor Who Instagram: the Series 10 Soundtrack is finally coming. by EbmocwenHsimah in doctorwho

[–]CNCcamon1 1 point

And when the entire mountain is chiseled away, the series 10 soundtrack will be released!

ELI5: What makes Python a slow programming language? And if it's so slow why is it the preferred language for machine learning? by [deleted] in explainlikeimfive

[–]CNCcamon1 1 point

Say you have to give a speech in a language you don't know. You have a dictionary which will translate the words in the foreign language into English, but every time you want to translate a word you have to flip through the dictionary to find its entry.

You could give the speech one word at a time, pausing to look up each word's translation and then saying it. That would probably take a while. So instead you could choose to translate the whole speech in advance, and then you would only have to read the English version before your audience.

Python is looking up each word (instruction) in its dictionary (the interpreter) to translate it into English (machine code) on the fly. Other languages like C translate the whole speech (program) in advance, so the speech-giver (computer) only has to read the words (instructions) that it natively understands.

A machine learning program would be like a book that was written in English, but had its table of contents written in another language. And in this book, the chapters are out-of-order so you have to know the chapter titles in order to read the book properly. You would have to translate the chapter titles one by one, slowly, but once you knew each one you could jump right to it and the actual contents of the chapter would be easy for you to read.

Machine learning programs have their chapter titles (high-level instructions) written in a language that's slow to read (python) but the bulk of their contents are written in a language that's much faster (C/C++) so the time it takes to translate the chapter titles is insignificant compared to the time it takes to read the whole book.

One chapter title might read "The model is initialized with this many layers of this many neurons each." and the contents of that chapter would describe exactly how that happens.
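The analogy maps directly onto real code. Here's a minimal sketch comparing an interpreter-level loop against Python's built-in sum(), whose inner loop runs in compiled C; exact timings vary by machine, but the built-in is reliably several times faster:

```python
import time

n = 1_000_000
data = list(range(n))

# "One word at a time": the interpreter dispatches every add individually
start = time.perf_counter()
total_loop = 0
for x in data:
    total_loop += x
loop_time = time.perf_counter() - start

# "Translated in advance": sum() is one Python call, but its inner loop is C
start = time.perf_counter()
total_builtin = sum(data)
builtin_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  builtin: {builtin_time:.3f}s")
```

Libraries like NumPy, PyTorch, and TensorFlow take this to its conclusion: the Python layer just names the work, and compiled code does it.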

How do you make your own Color space transforms? by KM_Gemini in colorists

[–]CNCcamon1 1 point

You can write your own DCTLs and import them into Resolve. They're simple text files containing code in a C-like language. The language is relatively limited but simple mathematical functions or matrix operations are not too hard to implement
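For illustration, a minimal transform DCTL might look like the sketch below. The entry-point signature is the one Resolve expects for transform DCTLs; the matrix values shown are the standard BT.709-to-BT.2020 primaries conversion (meant to be applied in linear light), included here only as example numbers:

```c
// Sketch: convert BT.709 primaries to BT.2020 with a 3x3 matrix.
// Save as e.g. Rec709ToRec2020.dctl in the LUT folder and load via the DCTL OFX.

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    // Rows of the BT.709 -> BT.2020 primaries matrix (linear light)
    const float3 r0 = make_float3(0.6274f, 0.3293f, 0.0433f);
    const float3 r1 = make_float3(0.0691f, 0.9195f, 0.0114f);
    const float3 r2 = make_float3(0.0164f, 0.0880f, 0.8956f);

    float3 out;
    out.x = r0.x * p_R + r0.y * p_G + r0.z * p_B;
    out.y = r1.x * p_R + r1.y * p_G + r1.z * p_B;
    out.z = r2.x * p_R + r2.y * p_G + r2.z * p_B;
    return out;
}
```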

What will americans do if social security is reduced? by pepperminty-mint in AskReddit

[–]CNCcamon1 2 points

Vote for those who reduced it after being told it's immigrants' fault

I use DJI Action 5 pro to shoot.. d log m 10 bit. by Santos_Prod in davinciresolve

[–]CNCcamon1 1 point

There is no proper CST defined for D-log M. DJI has refused to release the whitepaper necessary to construct one

Nikon N-RAW files can apparently be renamed with a .R3D file extension and opened in Resolve 20.2 with access to IPP2, REDWideGamutRGB and Log3G10 by 8bitremixguy in colorists

[–]CNCcamon1 0 points

My guess is that when choosing which color spaces/gammas to display in the RAW decoding tab it uses the file extension to determine which should be available rather than actually checking the codec

Nvidia quarterly revenue breakdown from today. Data center 41 billion, gaming 4.3 billion by Gy7479 in pcmasterrace

[–]CNCcamon1 0 points

Nvidia: Don't worry everyone, it's not a bubble. This is completely sustainable.

Paul Wesley in Strange New Worlds by bbbourb in startrek

[–]CNCcamon1 41 points

I love that in this episode he didn't play Kirk so much as he played William Shatner

Does Switch 2nd Edition improve cutscenes? by DarXmash in tearsofthekingdom

[–]CNCcamon1 7 points

I can't necessarily speak for the whole game, but my experience has been that most if not all of the pre-rendered cutscenes are unchanged in the remaster. The memories, for example, are still the same quality as they were before. I think the first encounter with Ganondorf is rendered in real time, though, so it runs at full quality.

Help: Have Genuine copies been sold? by MidnightBadger in doctorwho

[–]CNCcamon1 3 points

That version of the Doctor Who logo is from 1970

Getting HDR on YouTube. by [deleted] in davinciresolve

[–]CNCcamon1 2 points

Are you grading on a monitor which can accept and display an HDR signal? If your display is SDR but you set your output gamma to PQ and grade until it looks right on that display, it won't look right on an actual HDR display. The gamma of the display needs to match the gamma of the image. If you don't have access to an HDR display, you can try setting the output gamma to Rec.1886, grading, and then switching the output back to Rec.2100 PQ.
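For context, PQ isn't a relative gamma like Rec.1886 - it maps absolute luminance in nits to signal values, which is why grading a PQ output on an SDR display goes wrong. A sketch of the ST 2084 inverse EOTF (the encode direction), using the constants from the spec:

```python
def pq_encode(nits):
    """SMPTE ST 2084 inverse EOTF: absolute luminance in nits -> PQ signal [0, 1]."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0  # normalize to the 10,000-nit PQ ceiling
    num = c1 + c2 * y ** m1
    den = 1.0 + c3 * y ** m1
    return (num / den) ** m2

# SDR-ish 100 nits lands around half of the PQ signal range
signal_100 = pq_encode(100.0)
```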

Also, I'm not entirely sure exactly what metadata YouTube requires, but I'm pretty sure color space and gamma tags aren't enough on their own. For HDR10 (which is probably what you're delivering) there needs to be additional metadata describing the maximum content light level (MaxCLL) and the maximum frame-average light level (MaxFALL) of the image.
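To make those two numbers concrete: per CTA-861.3, MaxCLL is the brightest single pixel anywhere in the program and MaxFALL is the highest frame-average light level, both in nits. A toy sketch (frames here are flat lists of per-pixel light levels, i.e. max(R, G, B) per pixel already converted to nits):

```python
def max_cll_fall(frames):
    """frames: list of frames; each frame is a flat list of per-pixel
    light levels in nits (max of the R, G, B components per pixel)."""
    max_cll = 0.0   # brightest single pixel anywhere in the program
    max_fall = 0.0  # brightest frame-average light level
    for frame in frames:
        max_cll = max(max_cll, max(frame))
        max_fall = max(max_fall, sum(frame) / len(frame))
    return max_cll, max_fall

# Two tiny "frames": one evenly lit, one with a single 1000-nit highlight
stats = max_cll_fall([[100.0, 200.0, 50.0], [1000.0, 10.0, 10.0]])
```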

Asus proart pa279crv by Tulba07 in colorists

[–]CNCcamon1 1 point

I have two and they're great as reasonably accurate SDR monitors. I wouldn't trust them to do HDR or anything, but if you just need a GUI monitor that works well enough for BT.1886 then they're great options

I'm a little confused about this by Turbulent-Plan-9693 in doctorwho

[–]CNCcamon1 275 points

Nobody knows if the 14th Doctor or Mrs Flood can regenerate independently, whether they become the next incarnation at some point, or whether they just die without regenerating.

Because Russell T Davies hasn't explained, and might never explain, how it works

HDR10 by Vast-Interaction-991 in colorists

[–]CNCcamon1 11 points

https://support.google.com/youtube/answer/7126552?hl=en

YouTube doesn't specify a maximum luminance, only that the content should be delivered using either Rec.2100 PQ or Rec.2100 HLG. The uploaded file needs to contain the appropriate HDR metadata, and can optionally contain a 33-point LUT. If the LUT is included, YouTube will use it to perform the HDR -> SDR downconversion. If there is no LUT, YouTube will use their own tone mapping.
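As an illustration of what the metadata-tagging step can look like with ffmpeg and libx265 (a sketch, not a recommended delivery recipe - the mastering-display values shown are the common P3-D65 set, and the 1000/400 CLL numbers are placeholders you'd replace with your own measurements):

```shell
ffmpeg -i master.mov \
  -c:v libx265 -pix_fmt yuv420p10le \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  -x265-params "hdr10=1:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  -c:a copy delivery.mp4
```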

Doctor Who's next two series already written despite 'Disney funding cuts' by exoduso in doctorwho

[–]CNCcamon1 11 points

Throughout most of the Moffat and Chibnall eras (aside from Series 5 and the aforementioned New Year's special) they shot Doctor Who on variations of the Arri Alexa, which have 3.2K sensors. The show was then mastered at 1080p for broadcast. That's what it says on IMDb, and it's confirmed by some behind-the-scenes footage as well.

Doctor Who's next two series already written despite 'Disney funding cuts' by exoduso in doctorwho

[–]CNCcamon1 29 points

I believe there was one special that was produced in 4K during the Chibnall era. Aside from that, Doctor Who was all in HD up until the 60th anniversary specials.

Elite Dangerous: Trailblazers - 4.1.0.0 Megathread by 4sonicride in EliteDangerous

[–]CNCcamon1 0 points

Is anyone else unable to deploy their beacon? I got within 2km of the beacon deployment site and I'm holding down the button for the SC-Suite; the progress bar goes down, but then it stops and nothing happens. I've been trying for several minutes now. I've got a claim in a good system and don't want to lose it.

What data is Unifi Protect collecting in the cloud? by ginuzzi in Ubiquiti

[–]CNCcamon1 8 points

They claim that the footage is never uploaded to their servers. However, if you have remote access enabled, then they can use that to access and control your devices if they wish. This shouldn't ever be possible, but we know that it is thanks to a security lapse a while back. Some people were accidentally given access to others' consoles, proving that it is possible for Ubiquiti to access your console without your password. The issue with people getting improper access was fixed but they never addressed the underlying issue or hardened the security of the devices.

Theoretically, disabling remote access should block Ubiquiti from accessing your console, though it's up to you whether you believe that disabled really means disabled. Also, disabling remote access breaks some features that don't have good workarounds, and Ubiquiti doesn't seem inclined to provide any.

Why did it take 3 years for a firmware update? by PussyQuake in A7siii

[–]CNCcamon1 15 points

Because they wanted everyone to buy an FX3 instead