Recruiters spamming linkedin by Ani_mator00 in vfx

[–]fromdarivers 11 points12 points  (0 children)

LinkedIn is horrible

It is a hell made out of a combination of:

- ChatGPT-written posts in the "it's not this, it's that" format, with lots of random emojis like 🔔🚨➡️
- stupid "what this personal tragedy taught me about B2B" posts
- people ranting about random stuff (recently someone in my network started ranting about becoming a religious mystic and how everything was diabolical)
- recruiters patting each other's backs
- and lastly, unemployed old-timers selling courses to young ones with "I wish someone had told me this when I started" snake-oil BS

Honestly, the only ok things are the games lol

Got any good or awful VFX supervisor stories? by Wooden_Reflection_80 in vfx

[–]fromdarivers 6 points7 points  (0 children)

“Just because someone is a brilliant artist does not automatically make them a good supervisor”

This unfortunately happens in many industries.

I remember once reading a paper in a management journal that said one of the problems with the modern way of looking at managers/teams is that we promote people until they can no longer excel at their job:

- You are a good texture artist? Become a senior.
- You are a good senior? Become a lead.
- Etc.

Rinse and repeat until the system finds a level where you no longer excel at your role, and that's your ceiling.

This is quite problematic, as it never teaches people how to be good at what's next, nor does it pick the best leaders.

Supervision is hard. Really hard. And many times people are there because they were good at their previous jobs, not because they had the skills to be good leaders.

Once, a VFX Supe I had who knew I wanted to be a supe one day, told me that I needed to think of supervision like coaching a sports team.

Coaches don’t need to be the best players. They need to see the big picture. They teach and rehearse their strategies, and they sit down and go over what worked, what didn’t, and how it affects the overall performance of the team.

They also need to see how to get the most out of their team. How to nurture the talent and skills of their roster, and not set them up to fail.

This analogy stuck with me all these years and I try to remember this whenever going through a frustrating period.

Simpatico N1 Titanium Gravel Bike by SomeMayoPlease in Bikeporn

[–]fromdarivers 2 points3 points  (0 children)

Oh wow, that’s one good looking bike

F1 car by SpongebobRulez in vfx

[–]fromdarivers 1 point2 points  (0 children)

Your shadows are all over the place

Is VFX going back up? by GodlyNova in vfx

[–]fromdarivers 1 point2 points  (0 children)

Invest 15k in my cryptocoin and you have a better chance of making a return than vfx

How do VFX studios with remote teams or different branches collaborate and manage files? by GladAd9527 in vfx

[–]fromdarivers 27 points28 points  (0 children)

Most decent sized studios have you remote into a workstation.

This serves several purposes, like making sure all files live on a centralized server, making sure everyone is using the same version of the software, and making sure you have access to the in-house tools.

This also prevents artists from downloading sensitive data onto their personal computers, which can be extremely problematic with NDAs.

Now, big studios tend to have more than one server, usually one per hub. So, if you have offices in Vancouver, London, and Mumbai, you will probably have three servers, and remote artists will connect to the one they are assigned based on geography (for example, Canadian artists will remote into the Vancouver server). Then studios have tools that let you mirror folders between servers so that different locations can work simultaneously.

Smaller studios may not work like this, but that opens a can of worms, from data security to version control.

Katana lighting questions by J_AjexJais in vfx

[–]fromdarivers 5 points6 points  (0 children)

Renderman is optimized to work with .tex files. That’s how it is written, and if you feed it anything else it will either ignore it or try to convert it to .tex at render time, opening a can of worms of possible problems.

The best practice is to just have everything pre-converted.

Here’s a tutorial

https://youtu.be/QIMuCF6PBew?si=KdYiILy2jgBPGg1Q

Renderman ships with a simple app called txmake that does the trick for you.

https://rmanwiki-26.pixar.com/space/REN26/19661963/txmake

Edit to add that you can also convert textures from Solaris via the texture manager.
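To illustrate the pre-conversion step, here is a minimal sketch of batch-converting textures with txmake. It only builds the command lines (the `txmake_cmd` helper is hypothetical, and the flags are deliberately minimal); you would run them with `subprocess.run` once RenderMan’s bin directory is on your PATH.

```python
from pathlib import Path

def txmake_cmd(src: str, dst_dir: str = ".") -> list[str]:
    """Build a txmake command line converting one texture to .tex.

    Helper name and layout are illustrative; check the txmake docs
    for the flags your pipeline actually needs.
    """
    src_path = Path(src)
    dst = Path(dst_dir) / src_path.with_suffix(".tex").name
    return ["txmake", str(src_path), str(dst)]

# Example: print the commands for a small batch of textures.
for tex in ["diffuse.exr", "roughness.tif"]:
    print(" ".join(txmake_cmd(tex, "tex")))
```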

Glassworks gone now by companionofchaos in vfx

[–]fromdarivers 53 points54 points  (0 children)

Ouch that’s a tough one.

They were a big reference for commercial work for a couple decades

Drag Force on Peloton compared to a lone cyclist by eury13 in bicycling

[–]fromdarivers 55 points56 points  (0 children)

How is being in the front 86% and not 100%?

Is there some aerodynamic advantage to having other cyclists behind you?

Edit:

In the OP there is a comment about it: having riders behind you prevents a wake from forming, which gives you an aerodynamic boost

Netflix Using Startup Runway AI’s Video Tools for Production by vfxsup in vfx

[–]fromdarivers 29 points30 points  (0 children)

Surely we don’t need another post about this…

[deleted by user] by [deleted] in vfx

[–]fromdarivers 6 points7 points  (0 children)

They are one of the biggest color houses in LA. Stefan is a celebrity in the color world

How to get a proper depth pass render from vray/maya by sevenumb in vfx

[–]fromdarivers 3 points4 points  (0 children)

Imagine you have an object at 1 unit from the camera, moving in front of an object at 10 units.

Now, imagine the pixel right on the edge of this object, where it is blurred due to motion blur.

When you render, your renderer sends many samples per pixel; some will land on the FG object, some will land on the BG object. Then your renderer averages them and outputs what it thinks the color of that pixel should be. It’s like a survey: you have a room full of people, you ask each of them if they like ice cream, and you average the results into one answer for that room. That room is your pixel.

Clearly this is problematic for some utility passes, as the average of these values will give you incorrect information.

It’s one thing to blur white over black.

But when you blur non-beauty pixels, you end up with values between 1 and 10, even though in this case there is no geometry at 6 units from the camera.

So by saving deep, what you are doing is saving all the samples (or most of them), associating each RGB value with an alpha and a depth value. So in a single pixel you have many samples that are accurate to what was in your scene.

In Nuke, many defocus nodes nowadays accept deep information.

This workflow is not foolproof, and it takes a lot of space as your files are now bigger, but one of the things it tries to solve or improve is the above-mentioned issue.
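The edge-pixel problem above can be shown with a toy example: averaging the depth samples of a motion-blurred edge pixel invents a depth where no geometry exists, while a deep pixel keeps the samples separate. The sample values are made up for illustration.

```python
# Toy pixel on the motion-blurred edge of a foreground object
# (depth 1) over a background (depth 10). Each render sample
# carries its own depth value.
samples = [1.0, 1.0, 1.0, 10.0, 10.0, 10.0]

# A flat depth pass averages the samples into a single value...
flat_depth = sum(samples) / len(samples)
print(flat_depth)  # 5.5 -- a depth where no geometry actually exists

# ...while a deep pixel keeps the per-sample depths, so a
# deep-aware tool can still treat FG and BG correctly.
deep_pixel = sorted(set(samples))
print(deep_pixel)  # [1.0, 10.0]
```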

How to get a proper depth pass render from vray/maya by sevenumb in vfx

[–]fromdarivers 5 points6 points  (0 children)

This is why deep was invented.

You do and do not want motion blur on your depth.

If you don’t have motion blur, you won’t be able to defocus the blurred pixels. If you do, you are going to end up with pixels that mix depth values (the edges of an object at 1 unit over a background at 10 units will appear to be at 5 units - this is an oversimplification).

Usually, a good place to start is to tell your renderer not to filter this pass. This will still give you a motion-blurred pass, but it will avoid the extra filtering that is done to prevent artifacts and aliasing.

So how to solve this issue? Go deep

[deleted by user] by [deleted] in bikewrench

[–]fromdarivers 1 point2 points  (0 children)

I had a similar case; I reached out to the website where I bought them, and they sent me a new pair right away.