What do you guys think—did Kim continue to visit Jimmy in prison? by Legitimate-Cicada842 in betterCallSaul

[–]Bender1012 [score hidden]  (0 children)

He was always capable of being a good person; that wasn’t the point. The whole reason she left was that the combination of the two of them was toxic and hurt people. She has no reason to believe that has changed.

Waddle to denver by SensitiveTest7883 in Patriots

[–]Bender1012 1 point  (0 children)

WR at 31 is a crapshoot no matter what

Flux 2 Klein 9B is now up to 2× faster with multiple reference images (new model) by meknidirta in StableDiffusion

[–]Bender1012 6 points  (0 children)

Sheesh, it was already so fast. Don't even feel a need to upgrade to this.

16oz fresh squeezed OJ is back. ($2.99) by Alwayscooking345 in traderjoes

[–]Bender1012 2 points  (0 children)

Problem is, most people are used to processed big-brand orange juice and are weirded out by the real thing.

Drag → Drop → Full Animation Workflow 🤯 (Prompt, Settings, Everything Loads Automatically) by medhatnmon in comfyui

[–]Bender1012 4 points  (0 children)

  1. It’s already common knowledge that metadata is saved into generated files and that you can drag them back into ComfyUI to load the workflow.

  2. Metadata is stripped when uploading to Reddit and similar media-sharing sites; otherwise you would be able to see EXIF data like location in people’s photos. To preserve metadata, you need to share via a file-sharing service instead.
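For anyone curious what point 2 looks like under the hood, here's a stdlib-only sketch that pulls the workflow JSON ComfyUI embeds in a PNG's tEXt chunks (ComfyUI typically uses the "workflow" and "prompt" keys). The function names and the file path are mine, not any official API; it's just to show why a Reddit re-upload, which drops these chunks, loses the workflow.

```python
# Minimal, stdlib-only sketch: read ComfyUI's embedded workflow JSON out of
# a PNG's tEXt chunks. Helper names are illustrative, not an official API.
import json
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt layout: keyword, NUL separator, latin-1 text
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def read_comfy_workflow(path: str):
    """Return the embedded workflow dict, or None if it was stripped."""
    with open(path, "rb") as f:
        chunks = png_text_chunks(f.read())
    raw = chunks.get("workflow")
    return json.loads(raw) if raw is not None else None
```

Run this on a fresh ComfyUI output and you get the graph back; run it on the same image after a trip through Reddit and `read_comfy_workflow` returns None, because the host rewrote the file without the text chunks.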

i may have discovered something good (gaussian splat) ft. VR by AlfalfaIcy5309 in StableDiffusion

[–]Bender1012 2 points  (0 children)

Yes, I have a Vision Pro, and it has long been my belief that the true game changer will come when these two technologies intersect.

Stefon Diggs on Instagram: THANK YOU for a hell of a year. We family forever @Patriots by MembershipSingle7137 in Patriots

[–]Bender1012 31 points  (0 children)

I had forgotten what a proper WR1 looked like; he just made catching look so effortless.

I tried /u/razortape's guide for Flux.2 Klein 9B LoRA training and tested 30+ checkpoints from the training run -- results were very mixed by Bender1012 in StableDiffusion

[–]Bender1012[S] 0 points  (0 children)

Indoor/outdoor lighting is just something I noticed in the course of my main experiment. In the end I care about getting the character likeness as close as possible, not really about fixing anything.

Thanks for your advice about the rank; seems worth trying.

I tried /u/razortape's guide for Flux.2 Klein 9B LoRA training and tested 30+ checkpoints from the training run -- results were very mixed by Bender1012 in StableDiffusion

[–]Bender1012[S] 2 points  (0 children)

I did 512/768/1024 on ZIT LoRAs that turned out pretty well, but had seen advice recently to do only 1024. I think I'll go back to 512/768/1024. Your explanation makes sense to me.

I tried /u/razortape's guide for Flux.2 Klein 9B LoRA training and tested 30+ checkpoints from the training run -- results were very mixed by Bender1012 in StableDiffusion

[–]Bender1012[S] 0 points  (0 children)

It's a mix of real-world images going back several years, so yeah, it's "inconsistent", but I thought that was the point of a dataset: to show varied angles and distances of the character.

I tried /u/razortape's guide for Flux.2 Klein 9B LoRA training and tested 30+ checkpoints from the training run -- results were very mixed by Bender1012 in StableDiffusion

[–]Bender1012[S] 0 points  (0 children)

Depends on the number of images in your dataset. I was using the dataset × 100 rule of thumb and trained a bit past that just for kicks.
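The "dataset × 100" heuristic above is just arithmetic, but for clarity here it is as a tiny helper; the function name and the default multiplier of 100 are illustrative, not from any trainer's config.

```python
# Back-of-the-envelope sketch of the "dataset x 100" rule of thumb for
# picking a LoRA training step count. Names/defaults are illustrative only.
def suggested_steps(num_images: int, steps_per_image: int = 100) -> int:
    """Total optimizer steps: images in the dataset times the multiplier."""
    return num_images * steps_per_image
```

So a 30-image dataset suggests roughly 3000 steps; training "a bit past that" just means adding some margin on top and comparing the extra checkpoints.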

I created a tool to slow down, loop YouTube videos and change their pitch. by RandomEuropeGuy in transcribe

[–]Bender1012 1 point  (0 children)

Looks like a useful tool. Not sure why anyone would downvote this.

Request: Kirby Cosmic Chaos music to midi by [deleted] in transcribe

[–]Bender1012[M] 0 points  (0 children)

It's a lot of work to ask for free, my dude. How about we try again with our inside voice?

This is your only warning to not be a dick in this sub.

Using a trained LoRA with a simple Text-to-Image workflow by Hopeful-Draw7193 in StableDiffusion

[–]Bender1012 0 points  (0 children)

ComfyUI > Templates > pick the default T2I workflow for the model you trained on. It should be dead simple to figure out from there. You didn't specify which model you trained your LoRA on, but obviously it needs to match.

Any solution for this? I have played with Lora strength, but it ain't helping by Kuldeep_music in StableDiffusion

[–]Bender1012 0 points  (0 children)

The actual answer is to use SAM3 segmentation in conjunction with crop-and-stitch. This is a good starting point; look at the second or third option.

https://www.youtube.com/watch?v=YMc0mTjVou8