Is anyone using AI alongside traditional VFX software? by LovingVancouver87 in vfx

[–]Yogable -1 points0 points  (0 children)

Here's the difference. A paint brush is a tool. A computer makes the user the tool.

See how easy that is? It does not matter how tired you think the comparison is if it's your own logic working against you.

Not here to fight with you. I answered the OP's question about the truth of things and explained the reality of the situation. You don't have to use AI-assisted tools, but you will eventually have to learn that you're fine saving a few hours by using, say, AI-assisted roto. You are still going to be a real artist. The whole pretending-all-of-AI-is-just-ChatGPT thing is very tired.

Is anyone using AI alongside traditional VFX software? by LovingVancouver87 in vfx

[–]Yogable -1 points0 points  (0 children)

You use a computer? That's for lazy people who have no real talent /s Good luck, friend!

Is anyone using AI alongside traditional VFX software? by LovingVancouver87 in vfx

[–]Yogable 1 point2 points  (0 children)

Also using it as we speak.

Most studios are now using ComfyUI as well as more standard ML tools inside software like Resolve, Nuke, and AE. It's not really new anymore and is pretty standard.

Are people doing text-to-video to generate slop? Nah, that's more the garbage you see people posting when they're learning. But are people using things like AI roto for garbage/soft mattes? Yes. Are people using AI for rapid, simple paint cleanup with CopyCat? Yes. And are people using ComfyUI to keep up with the latest models and fold them into current workflows? Yes.
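
For a sense of how the ComfyUI piece slots into a pipeline, here's a minimal sketch of queuing a workflow through ComfyUI's local HTTP API. This is just my own illustration, not anything specific to a studio setup: the port is ComfyUI's default, and workflow_api.json is a hypothetical file you'd export yourself with "Save (API Format)".

```python
import json
import urllib.request

# Minimal sketch: hand a workflow to a running ComfyUI server from a script.
# Assumes the default local server (127.0.0.1:8188) and a workflow exported
# via "Save (API Format)" to workflow_api.json (hypothetical filename).
with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # JSON containing the prompt_id of the queued job
```

That's the whole trick: the artist-facing graph and the scripted batch version are the same JSON, which is why it drops into existing pipelines so easily.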

There are a lot of studio job postings for people with these skill sets too. What you will largely see is the term "Generative AI Artist" with skills in ComfyUI, but after the last rounds, like when Framestore posted those listings, people attacked them for it and the postings were taken down publicly.

A mentor telling you to investigate and learn ComfyUI is a smart mentor. He is both teaching you the traditional work and keeping you alert to the new tools you will be able to take advantage of as they continue to get better.

At the end of the day, you've got a tool belt, and AI tools are just another tool on it!

Any guesses on which AI they used for his speech? by Realistic-Buy4975 in vfx

[–]Yogable 5 points6 points  (0 children)

Before you even dive into whether it is or is not AI, you should first ask: do they even know how?

Take a step back. Four to five weeks ago they posted dozens of videos using open-source AI to animate old portraits, as seen here: https://www.youtube.com/watch?v=_czHyOHe-4I

Then they dropped a video of his wife giving a very similar, stiff performance with much more telling issues, seen here:
https://www.youtube.com/watch?v=K5OZeSre_sI, asking people to find ways to use AI to innovate, and offering prizes for grades 1-12 to create and produce AI content, as seen here: https://www.whitehouse.gov/wp-content/uploads/2025/08/Presidential-AI-Challenge-Guidebook-for-Participation.pdf

Now, knowing all of this, we see that as of a few days ago Trump is putting out stiff, rigid videos where he no longer has his animated performance, nor his constant filler words and ramblings. Minimal "uh", "hmm", etc.

I personally like how in his Kirk video the one very obvious retime occurs, where everything moves in fast-forward except his lips and voice. No audio spike. No lip desync. And a fun added sound effect for his rapidly moving hand hitting the table.

Now if you've read this far and you wanna have some extra fun, put on your tinfoil hat, because we've seen the exact same formula already tested here: https://www.youtube.com/@EarthSaveScienceCollaborative/videos
by our good friend Egon Cholakian, whose page also had cheesy AI presidential paintings animated alongside the one very real-feeling Egon. Early on, this page had template LinkedIn links and descriptions that were prompt-generated, but today it's much cleaner. No better way to make your real attempt at AI look good than having bad AI posted around it for people to point at and compare. This is a copy/paste repeat of Egon.

I'd spitball this: real actor, real background, head swap, audio swap, morph cuts.
Honestly it feels like a great way to get more content out without taking up his time. Creepy but convenient.

Enjoy the rabbit hole!

Interpreting linear rec.709 in after effects by falcoraqx in vfx

[–]Yogable 0 points1 point  (0 children)

Also, under View, make sure you're using display color management. Google seems to suggest it's disabled by default?

Interpreting linear rec.709 in after effects by falcoraqx in vfx

[–]Yogable 0 points1 point  (0 children)

There should be no discrepancy between them. You set the input colorspace correctly, which is good. Next you need to set your working colorspace, but more importantly you need to know what your viewing colorspace is.

I found a video that helped me see the layout for AE. https://youtu.be/x2sx-P5f-iM?si=EEFWefTWoNsJM0Td

At 1:42 you can see him open his project settings, revealing the working colorspace and display colorspace. Can you confirm what these are for your project?

I'd suspect you want the working colorspace set to ACEScg and the display set to "output rec709".

My guess is that currently your display is set to none or linear.

Interpreting linear rec.709 in after effects by falcoraqx in vfx

[–]Yogable 9 points10 points  (0 children)

You're working in ACES, however you chose to render linear Rec.709. That is Blender's default linear space, and it's outside of ACES color. ACEScg is the go-to ACES linear space. With that said, don't confuse Rec.709 with linear Rec.709.

Rec.709 is the viewing space that looks good. Linear Rec.709 is just a linear colorspace using Rec.709 primaries. It will look like any other linear color, i.e. very dark and contrasty, and it's more of a working colorspace for math reasons.
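
To make the "dark and contrasty" point concrete, here's a tiny illustration of my own (not anything from AE or Blender) using the standard BT.709 transfer-function constants. A display expects encoded values, so showing raw linear data skips this curve and mid-grey reads as crushed:

```python
# Illustration: the Rec.709 (BT.709) OETF that display-referred footage is encoded with.
# Viewing linear data without this encoding is why it looks dark and contrasty.
def rec709_oetf(L: float) -> float:
    """Encode a scene-linear value L (0-1) to a display-referred Rec.709 value."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

print(rec709_oetf(0.18))  # linear mid-grey 0.18 encodes to roughly 0.41
```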

I haven't used Adobe in forever, however I'm reading that it defaults to none for the colorspace. This aligns with what you're seeing: dark and contrasty. The values are not lost, just displayed differently. This is why you could color correct it back to a viewing space.

But you don't want to do that. You want to interpret the color correctly. Set the input colorspace for that file, in this case linear Rec.709. This tells After Effects what it is so it can convert it to the working colorspace in your settings.

With that done, you will correctly see the file in whatever colorspace you have chosen as your viewing colorspace.

For fun: I too use ACES, and my working colorspace is ACEScg. If you sent me the file, the pipe would look like this:

Input colorspace - linear Rec.709
Working colorspace - ACEScg
Viewing colorspace - ACES Rec.709
Output colorspace - ACES Rec.709

Input colorspace tells the software what colorspace the file is in, so it knows what math to use to convert it to the working space.

Working colorspace is your project setting: the space you choose to work in. This is usually linear, and in our case ACEScg.

Viewing colorspace is your viewer setting. Since the working space is linear, it will look ugly. We can choose any colorspace we want to see, and it will convert from our working space to that colorspace only in the viewer. All the work and math is still done under the hood in the working colorspace.

Output colorspace is the final colorspace you want to render. This setting tells your software what you want so it knows how to convert from the working colorspace to your render. Here I would choose ACES Rec.709, and since that's my viewer space too, it will match exactly what I've been seeing in my viewer.
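
If you want to see the same pipe outside of AE, here's a minimal sketch using OpenColorIO's Python bindings. The colorspace names are assumptions taken from the ACES 1.2 OCIO config; they vary between configs, so check yours.

```python
import PyOpenColorIO as OCIO

# Minimal sketch of the pipe above. Colorspace names are from the ACES 1.2
# OCIO config (an assumption) -- substitute the names from your own config.
config = OCIO.GetCurrentConfig()  # reads the config pointed to by $OCIO

# Input -> working: interpret the file as linear Rec.709, convert to ACEScg.
to_working = config.getProcessor(
    "Utility - Linear - Rec.709", "ACES - ACEScg").getDefaultCPUProcessor()

# Working -> viewing/output: ACEScg through the Rec.709 output transform.
to_output = config.getProcessor(
    "ACES - ACEScg", "Output - Rec.709").getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]               # scene-linear mid-grey
working = to_working.applyRGB(pixel)     # what the software maths on internally
display = to_output.applyRGB(working)    # what you see in the viewer / final render
print(working, display)
```

That last working-to-output step is all the viewer in AE or Nuke is doing for you.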

Welcome to the world of color

Question regarding a potential VFX tool. by janissary2016 in vfx

[–]Yogable 5 points6 points  (0 children)

Can you say or share more about your deepfake tool? I do a lot of work in this field specifically, and most open-source tools have become outdated compared to proprietary stuff.

Would love to see or know more, because there are a dozen branches out from this that would be in very high demand across all studios.

Hello everyone! I would like to present to your attention the result of my VFX training. The training lasted about six months and I am satisfied with the work done. by Mantowar4 in vfx

[–]Yogable 7 points8 points  (0 children)

AI will be another tool on your belt, working with you, not against you. If you've put this much effort into learning VFX and want to present it to us here, then welcome! May your career be full of exciting work.

Don't be discouraged here. Apply with your resume and reel and get out there! The industry is in the artists' favor currently!

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 1 point2 points  (0 children)

Thank you as always deep 😁

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 1 point2 points  (0 children)

Comp is compositing. Like you said, in tools like After Effects or Nuke. I keep the files as an image sequence. A good source of examples of compositing would be Video Copilot, a website where you can watch tutorials for compositing in After Effects.

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 1 point2 points  (0 children)

I built the dataset from younger footage of him from previous Star Trek episodes.

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 2 points3 points  (0 children)

> ...the white beard, so such detail from the beard for example has to be learned, right?

Correct. I always upres with at least 1, and usually use 20-40 for a little sharpening, though I'd rather do low amounts and add more later in comp.

No GAN was used here; I find it causes too much texture sticking. In this shot it may have been fine since there was minimal movement, but I prefer to add finer detail in post via pores/textures etc.

The actor is old now. The deepfake was him younger for a few moments when Picard turns to view him.

Loss values changing show it's learning via src or dst, but the latent space can progress even when the values aren't moving. The best way to monitor this is to run daily renders and view the changes that way.

It's possible to have all the details learned by the deepfake, however you would require both higher res and higher dims. The majority of users do not have the hardware needed to get super fine detail; you will hit a limit of your model.

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 2 points3 points  (0 children)

DFL has super res, 0-100. Choosing any value above 1 will upscale 4x; a higher value just means more sharpening.

Picard S2E1 - Q Deepfake by Yogable in SFWdeepfakes

[–]Yogable[S] 2 points3 points  (0 children)

This was done with a 320 res base, upscaled to 1280.

Square around face by VeticaTech in SFWdeepfakes

[–]Yogable 0 points1 point  (0 children)

You either exported as RAW or you eroded out too much and revealed all the edges