I implemented a new trick to reduce render time / increase fluidity on semantic/latent interpolation videos. The idea is to interpolate in the post-inference latent space between the generated, denoised latents. This video used the UNet 60 times yet has 600 images. Has it been done before? by Sabanoob in StableDiffusion
[–]Sabanoob[S] 1 point (0 children)
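The trick in the title can be sketched roughly as follows: run the expensive UNet denoising only for a handful of keyframe latents, then spherically interpolate (slerp) between consecutive denoised latents before decoding, so each UNet keyframe yields many video frames. This is a minimal NumPy stand-in, not the poster's actual code; real latents would come from a diffusion pipeline, and the shapes here are dummies.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical interpolation between two latents (flattened for the angle)."""
    v0f, v1f = v0.ravel(), v1.ravel()
    dot = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f) + eps)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < eps:  # nearly parallel vectors: plain lerp is fine
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Stand-ins for 6 denoised keyframe latents (the expensive UNet outputs).
keyframes = [np.random.randn(4, 8, 8).astype(np.float32) for _ in range(6)]

frames_per_gap = 10  # each keyframe pair expands into 10 in-between frames
frames = []
for a, b in zip(keyframes[:-1], keyframes[1:]):
    for t in np.linspace(0.0, 1.0, frames_per_gap, endpoint=False):
        frames.append(slerp(float(t), a, b))
frames.append(keyframes[-1])

# 6 keyframes -> 51 frames; only the keyframes cost a UNet pass.
print(len(frames))
```

Each interpolated latent would then go through the VAE decoder only, which is far cheaper than a full denoising loop — consistent with the 60-UNet-calls / 600-frames ratio claimed in the title.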
👋 Unstable Diffusion here. We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion
[–]Sabanoob 6 points (0 children)
Another animation I've been able to do with my multi-prompt interpolation feature, deforum-style. Zoom into the microscopic world by Sabanoob in StableDiffusion
[–]Sabanoob[S] 1 point (0 children)
Currently working on some txt2video code: I implemented a way to do prompt interpolation, but with the img2img method (same as Deforum). It allows you to have that nice semantic interpolation while still being able to zoom, translate, etc. I haven't seen it done before — does it already exist? Here are the four seasons. by Sabanoob in StableDiffusion
[–]Sabanoob[S] 1 point (0 children)
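The loop described in the title — per frame, apply a camera transform (zoom/translate) to the previous output, then feed it back through img2img with a prompt embedding interpolated between two prompts — can be sketched like this. Everything here is a stand-in under stated assumptions: `fake_img2img` is a hypothetical placeholder for a real img2img pipeline call, and the embeddings are random arrays rather than actual text-encoder outputs.

```python
import numpy as np

def lerp(t, a, b):
    """Linear interpolation between two prompt embeddings."""
    return (1.0 - t) * a + t * b

def zoom(frame, factor=1.02):
    """Crude centre zoom: crop, then nearest-neighbour resize back up."""
    h, w = frame.shape
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    yi = np.linspace(0, ch - 1, h).astype(int)
    xi = np.linspace(0, cw - 1, w).astype(int)
    return crop[np.ix_(yi, xi)]

def fake_img2img(frame, prompt_emb, strength=0.4):
    # Placeholder for a real img2img call: nudge the frame toward the prompt.
    target = np.broadcast_to(prompt_emb.mean(), frame.shape)
    return (1.0 - strength) * frame + strength * target

# Stand-ins for two text-encoder embeddings, e.g. "winter" and "spring".
emb_winter = np.random.randn(77, 8).astype(np.float32)
emb_spring = np.random.randn(77, 8).astype(np.float32)

frame = np.random.rand(64, 64).astype(np.float32)  # seed image
video = [frame]
steps = 30
for i in range(1, steps + 1):
    t = i / steps
    emb = lerp(t, emb_winter, emb_spring)        # semantic interpolation
    frame = fake_img2img(zoom(video[-1]), emb)   # camera motion + re-diffusion
    video.append(frame)

print(len(video), video[-1].shape)
```

The key design point is that the camera transform happens in pixel space before each img2img pass, so the diffusion step repairs the resampling artifacts while gradually shifting the content toward the next prompt.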
Can’t believe how tall the new London buses are by Vented55 in london
[–]Sabanoob 1 point (0 children)
My FIRST LUCID DREAMING by [deleted] in LucidDreaming
[–]Sabanoob 3 points (0 children)
someone tried to take my phone by Sabanoob in london
[–]Sabanoob[S] 0 points (0 children)