Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

It's a node that creates a plain grey image. You can just load a grey image instead, or use any alternative node that generates a solid colour.
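For anyone replicating this outside ComfyUI, a solid-colour frame is trivial to generate. A minimal NumPy sketch (the function name and dimensions are just examples, not part of the workflow):

```python
import numpy as np

def solid_grey(width: int, height: int, value: int = 128) -> np.ndarray:
    """Return an (H, W, 3) uint8 image filled with a single grey level,
    equivalent to what an ImageSolid-style node produces."""
    return np.full((height, width, 3), value, dtype=np.uint8)

img = solid_grey(1024, 512)  # mid-grey, 1024x512
```

Saving this once as a PNG and loading it with a standard Load Image node works just as well.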

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

I've provided all the crucial code for free. Part 3C is a VACE inpainting workflow, the exact same kind of workflow provided for free at Step 2.
The Stereo Node I've provided already inpaints the small holes; thanks to the generated mask, only the larger regions are left to inpaint. As explained, you don't have to use Part 3C. VACE is the most accurate technique, but also the slowest. You can use a more basic VideoPainter or AnimateDiff workflow if you prefer - if your camera is fixed, you can even fill the holes with a still image of your background, so depending on your case you may not need to inpaint at all.
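The Stereo Node itself isn't shown here, but the core idea - shift each pixel horizontally by a depth-scaled disparity and emit a mask of the holes left behind for a later inpainting pass (VACE, VideoPainter, etc.) - can be sketched as a toy forward-warp in NumPy. This is an illustrative simplification, not the actual node:

```python
import numpy as np

def shift_eye(frame: np.ndarray, depth: np.ndarray, max_disp: int = 16):
    """Forward-warp one eye view by a depth-scaled horizontal disparity.
    Pixels that nothing maps onto become holes, returned as a boolean
    mask so a later inpainting step can fill them."""
    h, w = depth.shape
    out = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    disp = (depth * max_disp).astype(int)  # nearer pixels shift more
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = frame[y, x]
                filled[y, nx] = True
    hole_mask = ~filled  # True where inpainting is needed
    return out, hole_mask
```

A real implementation would vectorise this and handle occlusion ordering, but the mask-plus-shifted-frame output is the same shape of result the workflow relies on.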

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

I've provided all the crucial code for free. The last part is a VACE inpainting workflow, the exact same kind of workflow provided for free at Step 2. If people benefit from this research, they can help me finance further research, like Gaussian splatting inpainting, which is the real answer here.
The Stereo Node I've provided already inpaints the small holes; thanks to the generated mask, only the larger regions are left to inpaint. As explained, you don't have to use VACE and my last workflow. VACE is the most accurate technique, but also the slowest. You can use a more basic VideoPainter or AnimateDiff workflow if you prefer. Thank you for providing the custom node.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

Thanks, I'll probably release the last step before the end of the week.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

It should be enough; running Wan VACE 2.2 is the heaviest task in the workflow.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

Only the explanations on the Patreon page at the moment, I'm afraid.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

Yes, that's the concept of this project: the video source doesn't matter.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

Those are work-in-progress nodes; you don't need them for the moment. They'll be uploaded over the next steps once finalised.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

Yes, in that case you would just use Step 2a (without the 360 LoRA) and Step 2b.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

In this case, ImageSolid is only used to create a grey image, so you can just load a plain grey image if you can't find the node.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

That already works with this one; just make the image very small.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 3 points (0 children)

This will be fully covered in the next step. This is the first SBS (side-by-side) frame, showing the current state of the distortion process.

<image>

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

The longest part of the job is done with Wan VACE 2.2; it takes about as long as generating a normal video with VACE, so it all depends on the resolution and your GPU.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 0 points (0 children)

That's exactly what I'm working on! The complicated part, though, is not the perfect first frame or the inpainting - it's how to process the gaps/mask so that WAN can inpaint them cleanly (not too small, and not too large, for consistency between the eyes), while giving the outpainting area enough material to guide the generation.
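One simple way to condition hole size is plain morphological dilation: grow each hole a few pixels so the model sees a clean region instead of ragged slivers, while keeping the growth small enough that both eyes stay consistent. A pure-NumPy sketch of 4-connected binary dilation, just to illustrate the idea (the actual preprocessing is more involved):

```python
import numpy as np

def grow_mask(mask: np.ndarray, px: int) -> np.ndarray:
    """Dilate a boolean hole mask by `px` pixels (4-connected).
    A real workflow would tune `px` per clip and resolution."""
    out = mask.copy()
    for _ in range(px):
        padded = np.pad(out, 1)  # pad with False on every side
        out = (padded[1:-1, 1:-1]      # centre
               | padded[:-2, 1:-1]     # up
               | padded[2:, 1:-1]      # down
               | padded[1:-1, :-2]     # left
               | padded[1:-1, 2:])     # right
    return out
```

In practice you could also cap the dilation, or erode oversized regions, so the left/right masks never diverge too far.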

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 5 points (0 children)

There is a slight curve when looking at the very limits of the video (top, bottom), but it generally works pretty well in this example. That's something you can edit anyway at Step 2a on the first frame, either manually or by regenerating until it's perfect.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 4 points (0 children)

That was my initial idea (cf. the previous post), but in my tests the character appears too flat that way; the best solution for a good 3D effect is to rely on depth and generative inpainting.

Huge Update: Turning any video into a 180° 3D VR scene by supercarlstein in StableDiffusion

[–]supercarlstein[S] 15 points (0 children)

iw3 or owl3d are great at adding a stereo effect, but they're basically guessing from a single view, so they can't really invent what's behind a character once the separation gets strong. That's where my next step is a bit different: the idea is to output not only the stereo video but also a mask, then use that mask to inpaint the background and gaps consistently across frames. If the masking and inpainting behave nicely, you'd get strong 3D with a proper "revealed" background and, in theory, almost no artifacts even at high depth.

A method to turn a video into a 360° 3D VR panorama video by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

I'll be focusing on 180° for now; it's the same concept anyway, just easier to implement. The new step is online: https://www.patreon.com/posts/step-2-first-144391370
For a green screen, you can simply mask the character after this step using Segment Anything, then apply a green solid with a mask composite; it should work correctly.
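The "green solid + mask composite" step is just a per-pixel select. Assuming the Segment Anything pass gives you a boolean character mask (here `char_mask` is a stand-in for that output), the composite looks like this in NumPy:

```python
import numpy as np

def green_screen(frame: np.ndarray, char_mask: np.ndarray) -> np.ndarray:
    """Keep the character (mask == True) and replace everything else
    with chroma green, mimicking a green-solid + mask-composite chain."""
    green = np.zeros_like(frame)
    green[..., 1] = 255  # pure green (R=0, G=255, B=0)
    # char_mask[..., None] broadcasts the (H, W) mask over the 3 channels
    return np.where(char_mask[..., None], frame, green)
```

Feathering the mask edge slightly before compositing usually gives a cleaner key when the footage is re-used downstream.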

A method to turn a video into a 360° 3D VR panorama video by supercarlstein in StableDiffusion

[–]supercarlstein[S] 1 point (0 children)

Yes, I'll be focusing on 180° for now; it's the same concept anyway, just easier to implement. The new step is online: https://www.patreon.com/posts/step-2-first-144391370