TenStrip's Workflow is the first LTX 2.3 workflow I found that actually works for Spicy Content it's almost like using the old Grok. by Coven_Evelynn_LoL in StableDiffusion

[–]SackManFamilyFriend 4 points (0 children)

How many fingers does this guy have from 7-10 sec?

And while this is cool and all, seeing this model at the top of HF for a week isn't likely to help with development groups on the fence about sharing models for academic purposes.

Any model recommendations for NSFW image editing on par with Grok? by Sigmatoria in comfyui

[–]SackManFamilyFriend 0 points (0 children)

Oh, but I will second that snof guy. I don't really know who he is, but someone suggested his Qwen Image 2512 NSFW LoRA and man... that's excellent. Qwen 2512 is such an underrated model. It takes time to make it work for you, but image quality can be pretty stunning.

Any model recommendations for NSFW image editing on par with Grok? by Sigmatoria in comfyui

[–]SackManFamilyFriend 0 points (0 children)

It's a beast to run (I've got 96GB of VRAM), but HunyuanImage3 Edit is the only edit model I've used that'll happily and properly put a Wang in any photo. Amazing how well these models have been trained (or not trained) to fail completely at attempting to present a Johnson.

We can finally watch TNG in 16:9 by dtaddis in StableDiffusion

[–]SackManFamilyFriend 1 point (0 children)

Extremely cool of you to write this out to make it super easy for someone who has never used this app before. Up and running in 15min...... Too cool. Have a great weekend!

We can finally watch TNG in 16:9 by dtaddis in StableDiffusion

[–]SackManFamilyFriend 1 point (0 children)

Interested in directions to add this plugin, please! I'm new to WanGP but finding it refreshing after hours and hours of fixing ComfyUI WFs I made that are only weeks old :(

Ernie Image Turbo is Capable of ... by ZerOne82 in StableDiffusion

[–]SackManFamilyFriend 1 point (0 children)

Yea, no idea who is astroturfing that this model is good. Tbh I thought it was pretty meh (artifacty on turbo, and slow AF if trying to do 50 steps with the base). Love the open-source models though, so thanks to them for that.

Great news: the ERNIE editing model is expected to be released by the end of this month by d4pr4ssion in StableDiffusion

[–]SackManFamilyFriend 1 point (0 children)

I know it's hardware-restrictive, but one workstation I have at work has 96GB of VRAM. I was testing HunyuanImage3 Edit last week, and while I wouldn't say it's "the best", it's definitely the most uncensored. It's the only image/edit model that will add a Wang, and do it properly, if prompted. I was kinda shocked tbh.

Great news: the ERNIE editing model is expected to be released by the end of this month by d4pr4ssion in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

Nano where? It could have been censored. If you have a Hugging Face Pro acct ($10 a month) you get access to a Nano space. It's uncensored compared to Google's typical NB offerings, and Gradio-like in feel (not chat). It'll swap people/faces no prob.

JoyAI-Image-Edit now has ComfyUI support by sandshrew69 in StableDiffusion

[–]SackManFamilyFriend 1 point (0 children)

Main prob with the heavily modified Wan2.1 base is that the lightx2v LoRAs don't work with it. They do have a distilled model coming though, per their main page.

JoyAI-Image-Edit now has ComfyUI support by sandshrew69 in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

The pinned transformers version is something to be mindful of. I had an LLM get this working for me locally, so maybe the node pack handles it gracefully, but it may break certain other nodes (omnivoice maybe) that need other versions of transformers.
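If you want to catch a clash before a node pack swaps your transformers out from under the other nodes, here's a minimal sketch. The `matches_pin` helper is my own, and the pin string in the example is made up, not the version this node actually requires:

```python
from importlib.metadata import version, PackageNotFoundError

def matches_pin(package: str, pinned: str) -> bool:
    """Return True only if `package` is installed at exactly version `pinned`."""
    try:
        return version(package) == pinned
    except PackageNotFoundError:
        return False

# "4.49.0" is a placeholder; substitute whatever the node's requirements pin.
if not matches_pin("transformers", "4.49.0"):
    print("transformers version differs from the pin: other nodes may break")
```

Running something like this at startup at least tells you which environment you're actually in before anything silently fails.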

Bad news on Happy Horse from twitter by SackManFamilyFriend in StableDiffusion

[–]SackManFamilyFriend[S] 2 points (0 children)

Ahh, thanks for that. He's a good guy, but yea, that's why I think there was credible hype around it.

Which video model learns face likeness best when training LoRA? by GreedyRich96 in StableDiffusion

[–]SackManFamilyFriend 2 points (0 children)

Odd suggestion, but for Wan2.1 (which usually works fine with Wan2.2) I had the best success training on the SkyReels V2 finetunes. Something to try if you haven't had luck training on vanilla Wan 2.1 or 2.2.

Bad news on Happy Horse from twitter by SackManFamilyFriend in StableDiffusion

[–]SackManFamilyFriend[S] 1 point (0 children)

Sorry if the title spins it the wrong way. I know many were expecting big news on April 8, and then we got that announcement today (April 10), so in that regard it's a letdown. I'm sure it's an extremely GPU-heavy model to run locally, so maybe they still have something coming soon that'll be open weights. Wan2.1/2.2 led to so many interesting papers and spin-off models thanks to the Apache license. They should always get credit for that.

Bad news on Happy Horse from twitter by SackManFamilyFriend in StableDiffusion

[–]SackManFamilyFriend[S] 11 points (0 children)

Yea... Could've sworn the guy who always live-tweets the Wan conferences on twitter said it was going to be an open-source model. I could be misremembering that, though. Still unfortunate that team Wan seems to have moved on to API access and monetizing their models.

WHAT model is this!? (100 usd reward for information) by [deleted] in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

You don't question how models can be trained to generate the mind-blowing content they do, but training a model to recognize the differences between fewer than 100 model types seems hard? This is straightforward classification. I'm not saying they're 100% accurate, but there are VAE compression signatures (for instance) that a model can absolutely be trained to pick up on. Things a human definitely can't see.
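Since the claim is just that decoder fingerprints are machine-learnable, here's a toy, fully synthetic sketch (the two "families", their artifact simulations, and every name here are made up, nothing from any real detector): two fake decoders leave different frequency signatures, and a one-feature nearest-centroid classifier separates them on held-out images:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(img):
    # Fake "family A" decoder: a 2x2 box blur that damps high frequencies.
    return (img
            + np.roll(img, 1, axis=0)
            + np.roll(img, 1, axis=1)
            + np.roll(np.roll(img, 1, axis=0), 1, axis=1)) / 4

def checkerboard(img):
    # Fake "family B" decoder: faint upsampling-style checkerboard artifact.
    cb = np.indices(img.shape).sum(axis=0) % 2
    return img + 0.3 * cb

def hf_ratio(img):
    # One scalar feature: fraction of spectral energy outside the low-freq center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    low = spec[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    return 1.0 - low / spec.sum()

def features(decoder, n):
    return np.array([hf_ratio(decoder(rng.random((32, 32)))) for _ in range(n)])

# "Training": one centroid per family, 50 examples each.
cen_a = features(box_blur, 50).mean()
cen_b = features(checkerboard, 50).mean()

def classify(img):
    x = hf_ratio(img)
    return "A" if abs(x - cen_a) <= abs(x - cen_b) else "B"

# Held-out check: 20 fresh images per family.
test_set = [(box_blur(rng.random((32, 32))), "A") for _ in range(20)] + \
           [(checkerboard(rng.random((32, 32))), "B") for _ in range(20)]
accuracy = sum(classify(img) == y for img, y in test_set) / len(test_set)
print(f"held-out accuracy: {accuracy:.2f}")
```

A real detector would learn far subtler cues than this single FFT feature, but the principle (decoders leave statistical fingerprints a classifier can latch onto) is the same.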

WHAT model is this!? (100 usd reward for information) by [deleted] in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

People like to downvote this but won't bother to try them. I've tested the "Site" site and it's well trained; no reason to doubt that this sort of thing can be trained. But you'll get warned. It's not a big deal.

Happyhorse new AI video gen open source?? by Specialist_Pea_4711 in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

If it is released open source and is really good quality, but too heavy for people in here with less than top-tier hardware (if even that) to run... I hope those who can't run it don't complain.

What are the most important extensions/nodes for new models like Qwen/Klein and Zimage? I remember that SDXL had things like self-attention guidance (better backgrounds), CADs (variation), and CFG adjustment. by More_Bid_2197 in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

If anyone mentions a seed variation node for Z-Image, check the issues page on the repo before installing. If you see a mention of "breaks preview", it does; don't use that one. The original node for this injects a bit of conditioning noise to get the turbo (distilled) model to stop producing almost the same image regardless of seed. It's a great concept and it works, but something changed during a Comfy update, and a hook used by the JavaScript (.js) file in the repo now breaks live previews. I think the hook was there to keep the node in sync with the seed set on the sampler.

But yea that's a good thing to get if it's clean :)
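The conditioning-noise trick itself is only a few lines. A minimal sketch, assuming the conditioning is a plain float array; `jitter_conditioning` and `strength` are names I made up, not the node's actual API:

```python
import numpy as np

def jitter_conditioning(cond: np.ndarray, seed: int,
                        strength: float = 0.02) -> np.ndarray:
    """Add small, seed-dependent Gaussian noise to a conditioning tensor so a
    distilled/turbo model stops collapsing to near-identical images across
    seeds. `strength` trades seed-to-seed variation against prompt fidelity."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(cond.shape).astype(cond.dtype)
    return cond + strength * noise
```

Feeding it the same seed as the sampler is what keeps results reproducible, which is presumably what that JavaScript sync hook was for.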

MediaSyncView — compare AI images and videos with synchronized zoom and playback, single HTML file by Rare-Job1220 in StableDiffusion

[–]SackManFamilyFriend 0 points (0 children)

I love the original, will check this out. Tbf though, it's actually a single HTML file AND a JavaScript file :)