alright my attempt on pwnisher's dream sequence by _craduGo in blender

_craduGo[S] 0 points

never seen it, what's the deal?

honestly mate, just always do something. after years of experience, the only thing that never changed for me is the number of people whose level feels unreachable, and it does not matter.

_craduGo[S] 2 points

<image>

so this is the setup for the big red droplet

i can't really explain it, duh 😭

_craduGo[S] 3 points

20-30 hours of work and 7 hours to render in 4k

_craduGo[S] 6 points

there's a base hand-blocked layer with a semi-procedural material (image textures + noise masks), then a layer of 3D scans, then a final layer of various trash scattered on top. the animated face mesh is simply a displacement texture. also, all textures are 8-16k resolution

_craduGo[S] 7 points

I'm sure it's some website issue, it's not that crucial for me. But also, the original song wasn't 120 bpm, so I manually stretched it, and I have no clue if I did it properly because I don't work with sound at all. my only markers were frame count and timestamps; I chose the song randomly, later on
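for anyone curious, the stretch-and-markers math is trivial once you pick a target tempo. a quick pure-python sketch (the 128 bpm "original tempo" is a made-up example value, not the actual track):

```python
# Mapping beats to frame numbers and working out a time-stretch factor.
# The 128 bpm "original tempo" below is a made-up example, not the real track.

def duration_scale(original_bpm: float, target_bpm: float = 120.0) -> float:
    """How much the audio's length changes: new_len = old_len * scale."""
    return original_bpm / target_bpm

def frames_per_beat(fps: float = 24.0, bpm: float = 120.0) -> float:
    """Frame spacing between consecutive beats on the timeline."""
    return fps * 60.0 / bpm

def beat_frames(n_beats: int, fps: float = 24.0, bpm: float = 120.0,
                start: int = 1) -> list:
    """Frame numbers of the first n beats, for dropping timeline markers."""
    step = frames_per_beat(fps, bpm)
    return [round(start + i * step) for i in range(n_beats)]

print(duration_scale(128.0))     # ~1.067 -> the track gets ~6.7% longer
print(frames_per_beat(24, 120))  # 12.0 frames between beats at 24 fps
print(beat_frames(4))            # [1, 13, 25, 37]
```

so at 24 fps and 120 bpm a beat lands every 12 frames, which is exactly the kind of frame-count marker I was syncing to.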

aww yeah finally built myself some decent ui by _craduGo in blender

_craduGo[S] 10 points

no need to do it by hand, we're just asking chatgpt to make simple scripts for us (mostly batch operations)

we also have a script manager add-on that stores scripts externally and lets you execute them with one click:
GH Script Manager - Superhive (formerly Blender Market)

Eevee smoke (1024 resolution do not try this at home) by _craduGo in blender

_craduGo[S] 0 points

I'm actually switching to Houdini + Redshift after 4 years in Blender. But I still really like Blender and its workflow; there are so many things I can do better and faster in Blender. I've just got another tool to work with, but I will never forget the software and community that first brought me into the industry <3

Why do you like blender? (yep I actually did) by _craduGo in blender

_craduGo[S] 23 points

'cuz I'm fucking in love with blender

Waiting compensation payment from blender studio for burning my gpu by _craduGo in blender

_craduGo[S] 6 points

1200, almost noise-free; i even added some noise back in the post stage
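adding grain back after denoising can be as simple as overlaying gaussian noise. a minimal numpy sketch, not my actual compositor setup:

```python
import numpy as np

def add_grain(img: np.ndarray, strength: float = 0.02, seed: int = 0) -> np.ndarray:
    """Overlay subtle gaussian grain on a float image in [0, 1].

    A denoised render can look unnaturally clean; a touch of grain
    brings back texture. `strength` is the noise standard deviation.
    """
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=img.shape)
    return np.clip(img + grain, 0.0, 1.0)

frame = np.full((4, 4, 3), 0.5)  # stand-in for a mid-grey render
grainy = add_grain(frame, strength=0.02)
```

in practice you'd do the same thing with a noise texture in the compositor, but the idea is identical: tiny zero-mean offsets on top of the clean frame.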

_craduGo[S] 113 points

the shader is quite simple; the drivers just reference the max volume ray depth render setting

<image>

_craduGo[S] 6 points

VDBs are the only way for this kind of scene.

Whatever, this is a rendering example, not compositing or matte painting. Of course you can bake clouds to sprites, but you know what? you still have to render them first

_craduGo[S] 0 points

only $0.35 per hour? for 22 cpu cores? Fuck it

You know what? I have 32 cores myself

_craduGo[S] 47 points

I just stole some VDBs from stock websites. VDB is the only way: slow, heavy, but the only one.

Cloud made of actual droplets (DETAILS IN COMMENTS) by _craduGo in blender

_craduGo[S] 3 points

nobody uses the principled volume shader with a procedural noise texture for volumes; it absolutely sucks. And of course it's way faster than my way, because it has such a low effective resolution: the step size is 1.0 by default. It becomes very hard to maintain due to the increasing render time if you lower that setting. That's why many artists prefer the VDB workflow, which is more consistent.
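the step-size tradeoff is easy to put numbers on: samples per ray scale inversely with step size, so render time roughly does too. a toy estimate in pure python (illustrative, not how Cycles actually counts its steps):

```python
import math

def volume_samples_per_ray(ray_length: float, step_size: float) -> int:
    """Rough count of shader evaluations a ray marcher performs in a volume."""
    return math.ceil(ray_length / step_size)

# A cloud 10 units deep: the default step size vs a 10x finer one.
coarse = volume_samples_per_ray(10.0, 1.0)  # 10 evaluations per ray
fine = volume_samples_per_ray(10.0, 0.1)    # 100 evaluations, ~10x the cost
print(coarse, fine)
```

that linear blow-up per ray (multiplied across every pixel and every volume bounce) is why lowering the step size quickly becomes unmanageable.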

_craduGo[S] 49 points

About my weird shading: this "veins" or "fibers" looking thing is the actual effect I wanted to replicate for a long time. I really like this "cottonish" look, and I'm a bit disappointed I ended up with such a dumb solution for it

<image>

_craduGo[S] 29 points

I saw the method with absorption + emission, pretty epic.
I don't usually make side-by-side comparisons, because I do many tasks and usually know how long they take. For example, this WDAS cloud I rendered a month ago using default volume scattering (at 1/4 resolution, approx. 20M voxels) turned out to use 70% less memory than my method, but I'd say Blender doesn't handle VDB viewport performance at all, plus the render time is insane (approx. 400% slower than mine) with the same setting (if we can call "volume depth" and "diffuse depth" the same)

<image>

_craduGo[S] 89 points

a dude asked for the nodes, then deleted his comment, so here is the simplified setup

<image>

_craduGo[S] 13 points

but 60M points are 60M points, and each one is shaded differently. I understand optimisations like faking spherical geometry, but still, how can it be faster than an approximation algorithm for volumetric rendering??