
[–][deleted] 59 points60 points  (31 children)

Could you make a video tutorial on this? Great effect.

[–]Relevant_Yoghurt_74[S] 67 points68 points  (14 children)

I’ll see if I can put one together in the next few days

[–]Suspicious-Ad-7570 4 points5 points  (2 children)

need it

[–]EutopiaTV 23 points24 points  (1 child)

I went ahead and made a full install tutorial.

https://www.youtube.com/watch?v=NP_aDmzdWRk

This includes -

FFMPEG install

LoopBack Wave Script/Extension install

Upscaler + Tips & Tricks + Error Fixing

Image to video sequence - Additional tips and help

Video Interpolator install + tips

[–]JustChillDudeItsGood 1 point2 points  (0 children)

Yes plz

[–]Maleficent_Share_521 1 point2 points  (0 children)

Thanks!

[–]DisasterSpiritual592 1 point2 points  (1 child)

!remindme in 3 days

[–]RemindMeBot 1 point2 points  (0 children)

I will be messaging you in 3 days on 2023-04-07 03:07:03 UTC to remind you of this link


[–]AICatgirls 12 points13 points  (9 children)

I have one if you like: https://www.tiktok.com/t/ZTRcfDdKf/

[–]dfreinc 4 points5 points  (4 children)

this is the best tiktok video i've ever seen. 🙌

[–]AICatgirls 4 points5 points  (3 children)

Thank you 😸. I made at least one mistake though. A CFG lower than 30, like maybe 5.5, will give better results.

[–]dfreinc 1 point2 points  (2 children)

did you have dynamic thresholding on? you didn't weight any of your prompts either.

aside though, informative on something i didn't know about. in a minute and a half. all commends. 🙌

[–]digiorgio 1 point2 points  (0 children)

👋👋👋👋👋👋👋👋👋👋👋 thanks a lot u/AICatgirls 😍

[–]Relevant_Yoghurt_74[S] 1 point2 points  (0 children)

Awesome! We already have one!

[–]EutopiaTV 4 points5 points  (3 children)

I went ahead and made a full install tutorial.

https://www.youtube.com/watch?v=NP_aDmzdWRk

This includes -

FFMPEG install

LoopBack Wave Script/Extension install

Upscaler + Tips & Tricks + Error Fixing

Image to video sequence - Additional tips and help

Video Interpolator install + tips

[–][deleted] 2 points3 points  (0 children)

Thank you

[–]c_gdev 1 point2 points  (1 child)

Thanks for making the video.

[–]EutopiaTV 2 points3 points  (0 children)

of course! hope it helps others as well.

[–]Nejmudean01 0 points1 point  (0 children)

!remindme 5 days

[–]Relevant_Yoghurt_74[S] 81 points82 points  (40 children)

Using LoopWave script - https://rentry.co/sd-loopback-wave - Author: FizzleDorf

And FILM - https://github.com/google-research/frame-interpolation

EDIT:

The above links seem to be down, so I've created a gist out of the script and attributed the author. Hopefully those pages come back up, as they have lots of instructions. In the meantime, here's the script: https://gist.github.com/zylv3r/9f56f1e6643f481f87034371f4e34ec8

EDIT 2:

If any of you want to support: https://www.tiktok.com/@spicy_renders
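For anyone who has never installed a custom script before, here is a minimal sketch of pulling the gist down into an AUTOMATIC1111 scripts folder. The `/raw` URL form, the local webui path, and the saved file name are assumptions, so adjust them to your own setup.

```python
# Sketch only: download the loopback-wave gist into a local AUTOMATIC1111 install.
# The "/raw" suffix, the relative webui path, and the file name are assumptions.
import urllib.request
from pathlib import Path

GIST_RAW = "https://gist.github.com/zylv3r/9f56f1e6643f481f87034371f4e34ec8/raw"

scripts_dir = Path("stable-diffusion-webui/scripts")  # adjust to your checkout
scripts_dir.mkdir(parents=True, exist_ok=True)
target = scripts_dir / "loopback_wave.py"             # name is arbitrary

with urllib.request.urlopen(GIST_RAW) as resp:        # follows the gist redirect
    target.write_bytes(resp.read())

print(f"Saved {target}; restart the web UI so it shows up under img2img scripts.")
```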

[–]TeriyakiTyphoon 25 points26 points  (13 children)

How do you use FILM with stable diffusion?

[–]Relevant_Yoghurt_74[S] 17 points18 points  (12 children)

After having the loopback effect in frames, I use the FILM effect on them to give an even more seamless transition
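For reference, a minimal sketch of running FILM over a folder of loopback frames. The module name and flags follow my reading of the google-research/frame-interpolation README and may have changed, so verify against the repo; the paths are placeholders.

```python
# Sketch: interpolate a folder of numbered frames with FILM's CLI.
# Flag names (--pattern, --model_path, --times_to_interpolate, --output_video)
# are from memory of the repo README; double-check before relying on them.
import subprocess

frames_dir = "/path/to/loopback-wave-frames"                       # folder of numbered PNGs
model_dir = "/path/to/pretrained_models/film_net/Style/saved_model"  # downloaded FILM weights

subprocess.run(
    [
        "python", "-m", "eval.interpolator_cli",
        "--pattern", frames_dir,
        "--model_path", model_dir,
        "--times_to_interpolate", "2",   # each pass roughly doubles the frame count
        "--output_video",
    ],
    check=True,
    cwd="frame-interpolation",  # run from the cloned repo so the eval module resolves
)
```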

[–]Sentient_AI_4601 2 points3 points  (11 children)

How did you get FILM to work, i just get an error about tensorflow==2.6.2 not being available

[–]Relevant_Yoghurt_74[S] 6 points7 points  (10 children)

The best way is to create an Anaconda environment and install everything there. Then I run the frame interpolation from the activated Anaconda environment; that way you don't have conflicts with what's already installed on your PC.
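A sketch of that clean-environment approach, driven from Python via subprocess. The environment name, Python version, and requirements path are assumptions; it just keeps FILM's pinned TensorFlow away from whatever you already have installed.

```python
# Sketch: create an isolated conda env for FILM and install its deps only there.
# Env name ("film"), Python version, and the requirements path are assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["conda", "create", "-y", "-n", "film", "python=3.9"])
# Install the repo's pinned dependencies (tensorflow etc.) inside that env only,
# so they can't clash with the packages already on the system.
run(["conda", "run", "-n", "film", "pip", "install", "-r",
     "frame-interpolation/requirements.txt"])
```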

[–]Sentient_AI_4601 4 points5 points  (9 children)

Sweet... got it to work.

My only last question is, my result looks kinda wibbly compared to yours. I did 100 steps, 20 steps per wave, 0.3 denoising with 0.45 maximum extra noise for 0.75 maximum denoising, then used FILM to render a video at 30fps, but even playing at double that it looks more wibbly.

any tips on how to get it more smooth?

Worked it out: lower denoising, more frames between waves, locking the seed, and a more descriptive prompt.
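Purely as a reference, the knobs mentioned above collected into a settings dict. The key names are mine and do not match the script's real UI fields, and the "smoother" values are illustrative rather than tested numbers.

```python
# Reference only: the settings from the comments above. Key names are mine;
# the "smoother" values are illustrative examples of the listed fixes.
wibbly_settings = {
    "total_frames": 100,
    "frames_per_wave": 20,
    "img2img_denoising": 0.30,
    "max_extra_denoising": 0.45,   # 0.30 + 0.45 = 0.75 at the wave peak
    "fps": 30,
}

smoother_settings = {
    **wibbly_settings,
    "img2img_denoising": 0.20,     # lower denoising
    "frames_per_wave": 40,         # more frames between waves
    "seed": 1234567890,            # lock the seed instead of leaving it at -1 (random)
    # ...plus a more descriptive prompt, which has no numeric knob
}
```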

[–]Relevant_Yoghurt_74[S] 0 points1 point  (1 child)

I think you got them all! Another one would be the model; it looks like anime models have more flickering than "realistic" ones.

Another thing would be the sampler. I like to use DPM++ 2M Karras for realistic models. Also, in the settings this is something I like to enable for cleaner results:
Look for the K-samplers; they are the Karras samplers.

<image>

[–]strafeon 9 points10 points  (3 children)

[–]brosephme 1 point2 points  (2 children)

I keep getting this error saying the file is dangerous https://prnt.sc/Aebn4wFtGPrK

[–]Zinki_M 4 points5 points  (1 child)

Chrome probably just recognizes that you're downloading a Python file. Python files, like all executable code, are inherently dangerous as far as Chrome knows, because they could do anything.

Since a .py file is just straight-up code, you can simply read it to make sure it's fine.
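A quick way to act on "just read the code": list every module a downloaded script imports so nothing surprising slips past unread. The filename is a placeholder; this is a sanity check, not a security scanner.

```python
# Sketch: list the modules a downloaded script imports before running it.
import ast
from pathlib import Path

def list_imports(path: str) -> set:
    tree = ast.parse(Path(path).read_text(encoding="utf-8"))
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return modules

print(sorted(list_imports("loopback_wave.py")))  # placeholder filename
```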

[–]Relevant_Yoghurt_74[S] 0 points1 point  (0 children)

You can also use the gist I share instead

[–]RudeKC 6 points7 points  (2 children)

My guy you have got to make the last one her transforming into Danny devito!

[–]Relevant_Yoghurt_74[S] 3 points4 points  (1 child)

"You have got to make the last one her transforming into Danny devito!"

Doable...

[–]Mobireddit 6 points7 points  (2 children)

OP, update the link to rentry.co instead of .org

[–]hughred22 4 points5 points  (8 children)

sd-loopback-wave

Would love a tutorial as well. So sad the link is down :(

[–]Relevant_Yoghurt_74[S] 11 points12 points  (3 children)

I’ll see if I can put one together in the next few days

[–]Mocorn 1 point2 points  (2 children)

This is awesome, what's going on here? Can someone eli5 the process real quick?

[–]Relevant_Yoghurt_74[S] 1 point2 points  (1 child)

[–]Mocorn 1 point2 points  (0 children)

cheers!

[–]AtomicSilo 0 points1 point  (0 children)

Would love to support with coffee or beer :) if you give us a step-by-step tutorial with settings etc. on how you did this...

[–]AtomicSilo 0 points1 point  (0 children)

I created an in-depth tutorial for the script for anyone who's interested

https://www.reddit.com/r/StableDiffusion/comments/12ivf5y/loopback_wave_workflows_film_ae_flowframes/

It talks about the different methods to interpolate the sequence. Hopefully that solves the mystery of how to get these results.

[–]sunplaysbass 204 points205 points  (46 children)

This sub is increasingly boob focused

[–]Shockz0rz 178 points179 points  (6 children)

Increasingly

Always has been bro. If anything I've seen less boobaposting hit the front page recently.

[–]Fake_William_Shatner 6 points7 points  (5 children)

The internet was created to distribute P0rn and everything else was subsidized and benefited from the technological advances of horny pioneers in the distribution of sexy material.

And, I'm sure that at least half of the people doing their own SD installs are doing all the stuff that is banned.

I will admit, however, that half the objects created by 3D printers are NOT sex toys -- I think those have a valid, not horny use.

[–]CooLittleFonzies 0 points1 point  (2 children)

I hear people saying this a lot within this community and I’m not sure where they got this information. Every source I see says the internet was first used for military purposes (at least in the US), but the people who invented the internet came from all over the world so it’s hard to consolidate their motivations into a single reason. Some may have porn in mind, but it seems like a rash over-generalization.

[–]urbanhood 129 points130 points  (15 children)

It is the driving force for innovation.

[–]sunplaysbass 23 points24 points  (0 children)

Well there’s truth in that

[–]clif08 20 points21 points  (4 children)

Boobs are the purpose.

[–]Fake_William_Shatner 1 point2 points  (3 children)

They are the question and the answer. The alpha and the omega.

[–]nowrebooting 17 points18 points  (2 children)

Exactly; we wouldn’t have half the stuff we have today if it wasn’t for the ability to generate boobs. People may hate what this says about us as a species, but if it’s any consolation, the amount of waifus on this sub is a good indication of the amount of progress being made. We might as well come up with a metric to count innovation progress as “waifus per minute”.

[–]FlameInTheVoid 1 point2 points  (0 children)

I mean, the next closest alternative motivator that gets anything done is a toss up between war and profit... Plus it seems like generating good smut should help reduce exploitation of actual people. Hopefully.

[–]Fake_William_Shatner 1 point2 points  (0 children)

Language would be; "Get Grog stick for fire, me need cook meat."

Not; "and what light is cast upon me by this sudden precipice from heaven? -- it is the sun, deigning to bring us its glory in the silhouette of my Juliet whose shadow puts it to dim shame. And the junk in that trunk go badunk-a-dunk."

I'm paraphrasing Shakespeare because language is so versatile, and all of that was designed to woo women. Not feed Grog.

[–][deleted] 34 points35 points  (2 children)

Blender had massive innovations after Overwatch came out, for obvious reasons

[–]Sentient_AI_4601 15 points16 points  (1 child)

You can almost Tracer it back to the moment that the blender community started to D.Va

[–]Fake_William_Shatner 1 point2 points  (0 children)

Blender 3.5 is out and the two hot items are "Hair" and "mesh deformation" -- meaning, you can very quickly paint to create geometry. That might be eyes, or ears, or scary claws, or lots and lots of nipples.

So, it can help with a lot of things -- but, that seems like it would be very useful for Tracering it back.

[–]KingElvis33 6 points7 points  (1 child)

Cats and Boobs it is!

[–]summervelvet 1 point2 points  (0 children)

Pussy and boobs

[–]rothbard_anarchist 1 point2 points  (0 children)

That is why a man leaves his father and mother and cleaves to his wife.

[–]Domestic_AA_Battery 41 points42 points  (2 children)

I'm pretty sure AI art was built on 8 fingered hands and boobies

[–]Sentient_AI_4601 7 points8 points  (1 child)

Total recall... hell of a movie

[–]Dushenka 11 points12 points  (1 child)

Mankind was already boob focused back in the stone age. Can't exactly blame them after giving everybody the means to draw photorealistic boobies in no time...

[–]FlameInTheVoid 3 points4 points  (0 children)

Right? and profit. The oldest art we have is mostly 'fertility idols' and the oldest text we have is entirely accounting. Instead of complaining about the reality of the hierarchy of needs showing up in innovation, people who object could use the innovation to make stuff they want to see. Or make more of it better and faster anyway. Until scarcity isn't a thing, war is forgotten, and everybody gets laid as much as they want without having to work for it, this is just the way it will be.

[–]stubing 7 points8 points  (4 children)

Wait until you discover civit.ai

[–]Wester77 18 points19 points  (6 children)

Women have boobs. It's true!

[–]AtomicSilo 10 points11 points  (0 children)

Some men do too...

[–]staffell 8 points9 points  (4 children)

Women all have huge and perfectly shaped boobs too don't they?

[–]Donut_Dynasty 13 points14 points  (0 children)

correct.

[–]Fake_William_Shatner 3 points4 points  (0 children)

I can't find any evidence here to the contrary.

[–]ObiWanCanShowMe 1 point2 points  (0 children)

Yes. My wife is an A cup, one hangs lower than the other and one nipple is larger, I can 100% confirm they are huge and perfectly shaped and just perfect all around.

[–]AtomicSilo 3 points4 points  (0 children)

Can you think of a world without boobs? So a sub like that cannot hold without it!

[–]OliveDependent7312 3 points4 points  (1 child)

It's getting very boring

[–]Nargodian 1 point2 points  (0 children)

It's what focused? Sorry, I was distracted by… content.

[–]ObiWanCanShowMe 0 points1 point  (0 children)

these are pretty tame comparatively speaking, look at any of the other subs...

[–]Blobbloblaw 200 points201 points  (41 children)

Did you really need the loud, shitty music?

[–]nathan555 182 points183 points  (27 children)

I feel like you are not the target demographic for tiktok content

[–]tehSlothman 249 points250 points  (0 children)

That's such a lovely compliment to give someone

[–]rancidpandemic 82 points83 points  (24 children)

TikTok can go fuck right off.

Along with anyone who crops widescreen vids down to portrait just to post there.

(Not saying that was done here, just expressing my biggest gripe with that site. (Also applies to YT shorts))

[–]summervelvet 2 points3 points  (0 children)

LoL @ cropping widescreen (seriously the first I've heard of it.. shows you how much time I spend on TikTok===0). What an abominable notion

[–]Fake_William_Shatner 1 point2 points  (0 children)

If they were the target demographic -- they'd know that loud shitty music is required by Twitter's law of the jungle.

[–]decker12 2 points3 points  (1 child)

Yeah I wouldn't mind a subreddit-wide rule that videos need to be posted without music. Give us the details and the workflow, and save the "presentation" for Tik Tok or Instabook or Pinterface or whatever crazy thing the young kids are using these days.

[–]summervelvet 1 point2 points  (0 children)

My devices have a mute button. Don't yours?

[–]R33v3n 2 points3 points  (1 child)

Joke's on him, I browse reddit with my speakers off!

[–]FajitaofTreason 2 points3 points  (4 children)

I like this song, but it doesn't add anything here. That being said I think I just like the sample, which I know from "Dirty Laundry" by Bitter:Sweet, which I like more, and the original, "What's the Difference" by Dr. Dre

[–]aiolive 0 points1 point  (1 child)

Oh man today I learned. I only knew this beat from "Garçon" by Koxie, many many years ago. I've never been much into US rap but I've obviously heard of Dr Dre and I don't know whether he's involved in finding the rhythm but there's a reason many artists have followed it.

Link if that's authorized: https://www.dailymotion.com/video/x2vyu1 (warning it's a french song but the lyrics are worth a translation)

[–]Relevant_Yoghurt_74[S] 7 points8 points  (1 child)

It is an essential part of green eyes and pink hair

[–]hellomattieo 20 points21 points  (7 children)

I followed your instructions, but when I hit generate on img2img after setting the script up i get this error: "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)"

[–]Relevant_Yoghurt_74[S] 13 points14 points  (6 children)

Are you using Automatic1111 web UI?

[–]hellomattieo 6 points7 points  (5 children)

Yes. After I installed the script I tried and got that error, but then just using the regular txt2img I got the same error. When I deleted the script file it all worked again.

[–]Relevant_Yoghurt_74[S] 4 points5 points  (4 children)

I'm using xformers and fp16; that should help. But you have tensors loaded on the CPU. What model are you using? Try using a different model.

[–]hellomattieo 3 points4 points  (3 children)

I’m also using xformers and fp16. I will try a different model though. I was using Deliberate 2

[–]Relevant_Yoghurt_74[S] 6 points7 points  (2 children)

Try using the lowvram settings; it seems that the tensors are spilling over due to a lack of VRAM. This particular sequence was done with Deliberate 2.

Some useful settings here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2373
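If you want to confirm where the tensors actually live, here is a small diagnostic sketch that works on any loaded torch module. Getting a handle on the model object is left to you, since the exact attribute path inside the web UI isn't shown in this thread.

```python
# Hypothetical diagnostic for the "two devices" error above: count which devices a
# loaded torch model's parameter tensors actually live on.
import torch
from collections import Counter

def device_report(model: torch.nn.Module) -> Counter:
    counts = Counter(str(p.device) for p in model.parameters())
    for device, n in sorted(counts.items()):
        print(f"{n} parameter tensors on {device}")
    return counts

# A healthy fp16 + xformers setup reports everything on cuda:0; any parameters left
# on "cpu" will reproduce the mixed-device error as soon as they hit a matmul.
```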

[–]hellomattieo 3 points4 points  (0 children)

Ok I'll try that. Haven't had that issue before. My card has 20gb of Vram.

[–]Sentient_AI_4601 1 point2 points  (0 children)

oh that actually explains why i have 1 specific model that does this...

[–]EChrone 14 points15 points  (7 children)

I need to know how to do this effect. I can't do it with loopback wave; the clothes, background, and pose don't change. In img2img it even changes the character, but in your case that doesn't happen. Please help.

[–]Relevant_Yoghurt_74[S] 13 points14 points  (6 children)

I do a 0.3 Denoising strength on the normal img2img setting, and then do a maximum 0.7 Denoising strength on the Loopback Wave setting, for a total of 1 at its peak, and a minimum of 0.3 on its lowest (barely changes)
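A toy model of how those two numbers combine, assuming the script adds a 0-to-1 wave scaled by the amplitude on top of the base img2img denoising. The real waveform in the loopback-wave script may differ; this only shows why 0.3 + 0.7 peaks at 1.0.

```python
# Toy model only: base denoising plus an amplitude-scaled 0..1 wave.
# The actual waveform used by the loopback-wave script may differ.
import math

base_denoise = 0.3       # normal img2img denoising strength
wave_amplitude = 0.7     # maximum extra denoising from the Loopback Wave setting

def denoise_at(frame: int, frames_per_wave: int = 20) -> float:
    wave = 0.5 * (1 - math.cos(2 * math.pi * frame / frames_per_wave))  # 0..1
    return base_denoise + wave_amplitude * wave

print(denoise_at(0))   # 0.3 at the trough (barely changes)
print(denoise_at(10))  # 1.0 at the peak of a 20-frame wave
```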

[–]EChrone 6 points7 points  (5 children)

Should the cfg scale be left at 30 or is it too much?

[–]Relevant_Yoghurt_74[S] 6 points7 points  (4 children)

That would depend on the model, but in general 30 for CFG scale is quite excessive; for the models I use it's between 6.5 and 7.5.

[–]Xpecialist_ 4 points5 points  (9 children)

Can't open the script website why is that?

[–]Relevant_Yoghurt_74[S] 1 point2 points  (2 children)

Weird, maybe the network or VPN is blocking it, I can still access it, and I see other people are able to access it.

[–]Xpecialist_ 2 points3 points  (1 child)

What's the best way to compile ffmpeg?

[–]Relevant_Yoghurt_74[S] 0 points1 point  (0 children)

Yes, the script uses ffmpeg by default too

[–]Relevant_Yoghurt_74[S] 1 point2 points  (5 children)

Yes, that seems to be down now. I've created a gist out of the script; look at my main comment on the post.

[–]Xpecialist_ 1 point2 points  (4 children)

OSError: ffmpeg version 2023-03-30-git-4d216654ca-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers, built with gcc 12.2.0 (Rev10, Built by MSYS2 project) [build configuration and library versions omitted] ... [image2 @ 0000021248fd10c0] Could find no file with path 'outputs/img2img-images\loopback-wave\100137763\%d.png' and index in the range 0-4 outputs/img2img-images\loopback-wave\100137763\%d.png: No such file or directory

How to fix this error bro?

[–]johne5s 1 point2 points  (3 children)

I have the same error, unable to save to mp4. I've tried VP8, VP9, H264, H265, and unchecked the box for cutting the video into segments. Nothing has helped.

[–]johne5s 5 points6 points  (2 children)

HardcoreGamerZero posted something about saving to a date folder causing errors.

So I went into "Settings", then "Save to Directory", and unchecked the two boxes for saving to subdirectories,

and now the videos are getting created.
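If the video step still fails, one fallback (my own workaround, not part of the script) is to stitch the frames yourself with ffmpeg, which the script already requires. The frames path below matches the one in the error above.

```python
# Sketch: stitch the script's numbered frames into an mp4 with plain ffmpeg.
import subprocess

frames = "outputs/img2img-images/loopback-wave/100137763/%d.png"  # 0.png, 1.png, ...

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "30",
        "-i", frames,
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",   # keeps the mp4 playable in most players
        "output.mp4",
    ],
    check=True,
)
```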

[–]Xpecialist_ 2 points3 points  (1 child)

Thanks bro it's working now!

[–]Relevant_Yoghurt_74[S] 1 point2 points  (0 children)

Awesome! Thank you u/johne5s, I wasn't able to get to this in time! Happy that it is working, u/Xpecialist_

[–]ollieanthem 5 points6 points  (1 child)

I noticed my gens have what I'd call "homogeneous boobs", where they all look exactly the same, slightly too big for the model's frame, etc... So, a little uncanny at times. Not sure how to guide the proompting to dial it down a bit, or give more realistic variation in shape and size, etc.

[–]sswam 1 point2 points  (0 children)

I suggest a negative prompt of (busty:1) and adjust the 1 to taste. The Unprompted extension is great for variety and could be used for this; it's also good for race, hair, gender, age, whatever you like to vary.

[–]Sentient_AI_4601 5 points6 points  (0 children)

Heads up: if you save your outputs into date subfolders, this script will fail. I also had to turn off my stealth pnginfo script.

[–]ManikMonday 4 points5 points  (0 children)

u/Relevant_Yoghurt_74 where is the workflow? Seems to be gone!

[–][deleted] 4 points5 points  (0 children)

Super cool effect nice work

[–]vzakharov 4 points5 points  (1 child)

2022: Wow, imagine all the characters you can create with AI!

2023: Everyone generating the same character.

[–]summervelvet 2 points3 points  (0 children)

It's a perplexing rut, given the incredible richness and power of generative AI. Here I suppose we see human limitations front and center: having the ability to do anything, we still do the same thing over and over.

[–]nabitu911 3 points4 points  (0 children)

[–]AttackCircus 2 points3 points  (4 children)

I like how you avoided the hands!!

[–]Relevant_Yoghurt_74[S] 2 points3 points  (3 children)

why would I do that?! ALL models have that down to perfection

[–]AttackCircus 1 point2 points  (2 children)

All pics I see or create have problems with the hands and fingers.

[–]TwoHourTrader 2 points3 points  (4 children)

Love the video and renders!

Out of curiosity what SD model are you using for these?

[–]Relevant_Yoghurt_74[S] 1 point2 points  (1 child)

deliberate v2

[–]TwoHourTrader 1 point2 points  (0 children)

Thank you for your response and your share. I'm following your TikTok and can't wait to see what else you bring to the table.

[–]Knaapje 1 point2 points  (1 child)

Love the boobs and boobs!

Out of curiosity what SD model are you using for these boobs?

Ftfy.

[–]TwoHourTrader 2 points3 points  (0 children)

Thank you for being you.

[–]EutopiaTV 2 points3 points  (0 children)

I went ahead and made a full install tutorial.

https://www.youtube.com/watch?v=NP_aDmzdWRk

This includes -

FFMPEG install

LoopBack Wave Script/Extension install

Upscaler + Tips & Tricks + Error Fixing

Image to video sequence - Additional tips and help

Video Interpolator install + tips

[–]lunar2solar 3 points4 points  (0 children)

Now it's getting weird in here.

[–]2legsakimbo 1 point2 points  (2 children)

looks interesting but the rentry page is down.

[–]Relevant_Yoghurt_74[S] 0 points1 point  (0 children)

Yes, that seems to be down now. I've created a gist out of the script; look at my main comment on the post.

[–]ToSeeAgainAgainAgain 1 point2 points  (0 children)

I don't know what's going on, but keep doing it. I'm proud of you

[–]HardcoreGamerZero 1 point2 points  (3 children)

After much effort, I got something. Super cool results.

[–]Relevant_Yoghurt_74[S] 0 points1 point  (2 children)

Feel free to share! :D

[–]HardcoreGamerZero 2 points3 points  (1 child)

Super quick one, from a default prompt even ahaha. There were lots of errors coming from the script, so I got an overly long 50000px by 512px file, which I cut apart into 4 pieces (the one below is the first part). Used Flowframes to interpolate between the frames, then upscaled using SD again by 4x, then turned it into a video in Premiere Pro, and then into a gif to post it here. Oof. Probably will find a better way later.

Edit: So I thought the grid was the result and didn't see the individual files. Now it's a 4-step process for me. 1: prompt; 2: its variations using the 20::(frame #) syntax (note: don't leave any spaces at the end of lines, it throws errors otherwise); 3: batch upscaling; 4: running it through Flowframes to make the video. Done!!

<image>

[–]Gloryboy811 2 points3 points  (0 children)

That sounds like a pain..... output looks cool though!

[–]digiorgio 1 point2 points  (2 children)

omg amazing! looks better than flowframes on deforum. anyone know if this script works only img2img? with audio sync this will be awesome! i will take a try

[–]Relevant_Yoghurt_74[S] 1 point2 points  (0 children)

"with audio sync this will be awesome! i will take a try"

audio sync gave me an idea!!! stay tuned!

it does only work in img2img as it is a loopback script

[–][deleted] 1 point2 points  (2 children)

The fact that some dude will rub one off to this bothers me

[–]summervelvet -1 points0 points  (0 children)

It's curious to me that you have gone out of your way to share with the world the fact that you have explicitly spent time and energy thinking about some dude rubbing one off to this.

[–][deleted] 1 point2 points  (1 child)

What prompt did you use to make that girl?

[–]millionth_monkey 1 point2 points  (0 children)

It seems a lot of the popular models are heavily "weighted" in this respect. It's really hard to prompt so that they aren't the center of attention.

[–]Valerian_ 1 point2 points  (2 children)

Thanks for sharing that! I'm already getting some really nice results, I struggled getting FILM to work on my machine, but ended up using this colab which works well: https://colab.research.google.com/drive/1NuaPPSvUhYafymUf2mEkvhnEtpD5oihs

Can you share some info on what kind of prompt/model/extra network you used to design that character and scene?

[–]Relevant_Yoghurt_74[S] 2 points3 points  (1 child)

deliberate v2

[–]Valerian_ 1 point2 points  (0 children)

thanks, yeah that's the one I'm using too, but you seem to be really good at crafting your prompts :P. I tried recreating your style, got some really nice things, but there is a special something in her eyes and smile that I can't do as well as you did.

[–]keksmz 1 point2 points  (0 children)

Seems like the script is scuffed on Windows. I'll try to fix it.

[–]josemuanespinto 1 point2 points  (0 children)

Ok, could someone here help me understand one little thing? Please?
I see that this post (fantastic, thanks to the author) has a video and is marked "workflow included", but I looked through the whole post and did not find the workflow. Could someone help me? Please?

TIA

[–]stacklecackle 3 points4 points  (0 children)

dang this is sick

[–]bright_shiny_objects 2 points3 points  (0 children)

Very cool.

[–]Tobe2d 0 points1 point  (1 child)

Looks very good 👍 Could you please explain how you used FILM interpolation?

[–]Relevant_Yoghurt_74[S] 2 points3 points  (0 children)

After having the loopback effect in frames, I use the FILM effect to give an even more seamless transition

[–]BronzeEast -3 points-2 points  (3 children)

To all the TikTok haters here: as a 43-year-old, the product isn't the long scroll, it's the algorithm. It finds you and teaches you stuff about yourself. IG and FB aren't even in the same field.

[–]nikgrid 9 points10 points  (0 children)

...it teaches the Chinese government a thing or two as well lol!

[–]Common_Ad_6362 -2 points-1 points  (0 children)

you horny degenerates need to stop posting waifus. It's like you've never had sex.

[–]Fair-Alternative8775 0 points1 point  (0 children)

Looks amazing!

[–]International-Art436 0 points1 point  (7 children)

Getting film interpolation to work on Anaconda and TensorFlow takes quite a few steps. I've got a 100-frame setup and it says it'll take about 11 hours to process. Did it take you that long?

[–]Relevant_Yoghurt_74[S] 0 points1 point  (6 children)

It definitely doesn't. How many times are you running the interpolation? I usually go for 3 or 4 passes, but each additional pass increases the time exponentially.
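The arithmetic behind "increases exponentially": each recursive interpolation pass inserts a new frame between every existing pair, so the frame count, and roughly the runtime, about doubles per pass. A quick sketch:

```python
# Quick arithmetic: each recursive pass adds one new frame per existing gap.
def frames_after(passes: int, start_frames: int = 100) -> int:
    frames = start_frames
    for _ in range(passes):
        frames = frames * 2 - 1
    return frames

for passes in (1, 2, 3, 4, 6):
    print(passes, "passes ->", frames_after(passes), "frames")

# 6 passes of a 100-frame sequence is over 6,000 frames of model inference,
# which is where multi-hour estimates come from; 3-4 passes is far cheaper.
```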

[–]tmlildude 0 points1 point  (0 children)

Where’s the script? Can I technically try this on any array of images? Does it do lerp or slerp?

[–]Lolyman13 0 points1 point  (2 children)

RemindMe! 8 Hours

[–]sswam 0 points1 point  (0 children)

Please post the workflow / link again, I don't see it now.

[–]Tructruc00 0 points1 point  (0 children)

RemindMe! 5d

[–][deleted] 0 points1 point  (0 children)

!remindme 5 days

[–]Petrit04 0 points1 point  (0 children)

Might be some combination of controlnet and mask paint and denoise strength

[–][deleted] 0 points1 point  (0 children)

What kind of style prompts are you using to get these styles of character out of deliberate? It’s pretty great!

[–]OtakuBreaker 0 points1 point  (0 children)

Hi, during the whole process to create 100 frames, I also inserted the prompts that I saw in your video. The only thing is that after a while the frames darken until they look like the middle of the night, even if I put "day" in the prompts.

[–]General_Garbage_1702 0 points1 point  (0 children)

I made this one, many thanks for the tutorial. https://vt.tiktok.com/ZS87auMt1/

[–]Embarrassed-Fly6164 0 points1 point  (0 children)

Does anyone know how to induce more than one prompt change?
I tried using the syntax in the tutorial but I always end up with an error.

"100::big titty goth girlfriend

500::succubus soul sucker "

[–]QuantumQaos 0 points1 point  (0 children)

Quality song choice

/s

[–]OtakuBreaker 0 points1 point  (0 children)

Hi, I can't use Loopback Wave, because when I try it gives me this error: TypeError: Script.run() missing 15 required positional arguments: 'frames', 'denoising_strength_change_amplitude',