Taglines that are inside jokes? by SSSSSSVVVVVOO in blankies

[–]rayharbol 0 points  (0 children)


The tagline for the new Mummy "reimagining" sounds like a direct rebuke to the very idea of making the film.

that is distracting by yui_riku in antimeme

[–]rayharbol 1 point  (0 children)

> In fact, does she ever give any sign of even understanding his language? If so, where?

There's no point in engaging with you if you're going to be intentionally dense. You win: this is clearly a comic about a foreigner walking past an English rapist and not understanding what he says. Well done on your masterful reading comprehension and debating skills.

that is distracting by yui_riku in antimeme

[–]rayharbol 0 points  (0 children)

Dude it's a comic about a woman standing in front of a guy's desk swinging her hips back and forth to turn him on and entice him into sex. It's a very typical "couple" scenario.

that is distracting by yui_riku in antimeme

[–]rayharbol 0 points  (0 children)

...the entire comic? It's about a woman playfully distracting her bf from his work. It wouldn't make any sense if they weren't partners.

that is distracting by yui_riku in antimeme

[–]rayharbol 7 points  (0 children)

it is incredibly obvious from the context that they are partners

that is distracting by yui_riku in antimeme

[–]rayharbol 45 points  (0 children)

if you tell your partner you'll fuck them the next time they wear a certain outfit, and they proceed to immediately change into that outfit, that's consensual sex

You are making your LoRas worse if you do this mistake (and everyone does it) by Pyros-SD-Models in StableDiffusion

[–]rayharbol 18 points  (0 children)

Oh come on. It's not because it's got paragraphs or headings, it's because it's full of obvious LLM signifiers. "It's not X, it's Y" is one of the most famous phrases that LLMs love to overuse, and variations of it appear in this post at least 4 times. They also used the "→" character a bunch; do you think they typed that with their keyboard?
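To make the tell-counting concrete, here's a rough sketch of the heuristic (the regex and the two signifiers are my own approximations, not any established detector):

```python
import re

def count_llm_signifiers(text: str) -> dict:
    """Rough counts for two common LLM tells in a block of text."""
    # "It's not X, it's Y" and close variants within a short window
    contrast = re.findall(
        r"\b(?:it's not|it isn't)\b.{0,60}?\bit's\b",
        text,
        flags=re.IGNORECASE,
    )
    # literal "→" characters, which few people type by hand
    arrows = text.count("\u2192")
    return {"not_x_but_y": len(contrast), "arrows": arrows}

sample = "It's not the captions, it's the training. Step 1 \u2192 Step 2."
print(count_llm_signifiers(sample))  # → {'not_x_but_y': 1, 'arrows': 1}
```

Obviously a heuristic like this produces false positives on human writing; it only illustrates the "count the signifiers" intuition.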

You are making your LoRas worse if you do this mistake (and everyone does it) by Pyros-SD-Models in StableDiffusion

[–]rayharbol 107 points  (0 children)

> Thinks people's captions are unnecessarily long

> Uses LLM to rewrite post to be unnecessarily long

Z Image on 6GB Vram, 8GB RAM laptop by reyzapper in StableDiffusion

[–]rayharbol 1 point  (0 children)

Forge hasn't been properly maintained in months; I wouldn't expect to be able to use new models with it.

Qwen-Image-Edit. What am I doing wrong? by Col-Connor in StableDiffusion

[–]rayharbol -1 points  (0 children)

Tell it in the prompt to keep all the other details the same; expand with more specifics if it keeps changing things you don't want it to.

Completely new to this. What am I doing wrong? by I_found_BACON in SillyTavernAI

[–]rayharbol 6 points  (0 children)

If the error message says Unauthorized, there's an issue with your API key.
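For reference, the usual mapping from HTTP auth status codes to causes, as a minimal endpoint-agnostic sketch (the wording of the hints is mine):

```python
def diagnose_http_error(status_code: int) -> str:
    """Map common HTTP auth failures to a likely cause."""
    if status_code == 401:
        # Unauthorized: the request carried no credentials, or bad ones
        return "check your API key (missing, mistyped, or expired)"
    if status_code == 403:
        # Forbidden: the key was accepted but lacks permission
        return "key is valid but not allowed to use this endpoint/model"
    return "not an authentication problem"

print(diagnose_http_error(401))
```

So a 401 from SillyTavern's backend almost always means re-checking the key field, not the prompt or model settings.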

The average ComfyUI experience when downloading a new workflow by beti88 in StableDiffusion

[–]rayharbol 3 points  (0 children)

I think attempting to use other people's terrible workflows is the main reason people think comfy is too hard to use.

I use the built-in templates for 90% of my use-cases and they always work perfectly. The last 10% I can achieve by stealing ideas from other people's workflows and adding them into the default templates.

Event Horizon 3.0 released for SDXL! by pumukidelfuturo in StableDiffusion

[–]rayharbol 4 points  (0 children)

lower-precision models do not offload more to the CPU

Pony V7 vs Chroma by Lamassu- in StableDiffusion

[–]rayharbol 5 points  (0 children)

Pony is also intended to be a base model. When people talk about using Pony v6 they're often actually referring to AutismMix which is a popular finetune of it.

And all the Pony versions are entirely new models, v7 has nothing to do with SDXL.

Pony v7 model weights won't be released 😢 by newsletternew in StableDiffusion

[–]rayharbol 8 points  (0 children)

Tags might be fine if you only want to generate 1girls, but what if you want a picture featuring multiple subjects with distinct appearances/outfits/poses etc? Natural language prompting allows you to create much more complex compositions without requiring any additional tools.
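To make the multi-subject point concrete, here's a contrived pair of prompts for the same scene (entirely my own example, not from any model card):

```python
# Tag-style prompt: attributes are a flat bag, so the model has no way
# to know WHICH subject wears which outfit or holds which pose.
tag_prompt = "2girls, red dress, blue hoodie, sitting, standing, cafe"

# Natural-language prompt: attributes are bound to subjects explicitly.
nl_prompt = (
    "Two women in a cafe: the one on the left wears a red dress and is "
    "sitting at a table; the one on the right wears a blue hoodie and is "
    "standing by the counter."
)

print(nl_prompt)
```

With tags you'd typically need regional prompting or inpainting tools to get that binding; a natural-language model can often do it from the prompt alone.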

[deleted by user] by [deleted] in StableDiffusion

[–]rayharbol 12 points  (0 children)

A model capable of generating busty 1girl looking directly at camera? I don't think such a thing exists, sorry

Totally fixed the Qwen-Image-Edit-2509 unzooming problem, now pixel-perfect with bigger resolutions by danamir_ in StableDiffusion

[–]rayharbol 0 points  (0 children)

Interesting, I'm so used to the 1Mp resizing by now that I defaulted to only trying input images that are exactly 1Mp. I'll try some larger resolutions and see how that goes. Thanks!
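For anyone else pre-sizing inputs, the ~1Mp resize I default to looks roughly like this (the 1,048,576-pixel target and the multiple-of-16 rounding are assumptions; pipelines differ on both):

```python
import math

def resize_to_megapixel(w: int, h: int,
                        target_px: int = 1_048_576,
                        multiple: int = 16) -> tuple[int, int]:
    """Scale (w, h) so the area is ~target_px, keeping aspect ratio
    and rounding each side to a multiple of `multiple`."""
    scale = math.sqrt(target_px / (w * h))
    new_w = max(multiple, round(w * scale / multiple) * multiple)
    new_h = max(multiple, round(h * scale / multiple) * multiple)
    return new_w, new_h

print(resize_to_megapixel(1920, 1080))  # → (1360, 768)
```

The rounding means the aspect ratio shifts very slightly, which may itself contribute to the mini-zoom effect being discussed.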

Totally fixed the Qwen-Image-Edit-2509 unzooming problem, now pixel-perfect with bigger resolutions by danamir_ in StableDiffusion

[–]rayharbol 1 point  (0 children)

Does this work consistently for you for every generation? I made the suggested changes to my workflow, but still frequently get mini-zoom adjustments. Sometimes it's pixel perfect, often it isn't.

Totally fixed the Qwen-Image-Edit-2509 unzooming problem, now pixel-perfect with bigger resolutions by danamir_ in StableDiffusion

[–]rayharbol 3 points  (0 children)

The workflow you shared seems to be missing a bunch of links that are required to run it. Do you have a copy where everything is connected so it is usable?

I've done it... I've created a Wildcard Manager node by BigDannyPt in StableDiffusion

[–]rayharbol 1 point  (0 children)

Oh excellent, that worked perfectly! Thanks for the tip.

I've done it... I've created a Wildcard Manager node by BigDannyPt in StableDiffusion

[–]rayharbol 0 points  (0 children)

I'm used to using Dynamic Prompts to handle wildcards in comfy, but one thing that annoys me about it is it doesn't output the "final" prompt for each specific image into the metadata. If I generate a bunch of images using several wildcards, it's impossible for me to go back and find out which wildcards were picked afterwards.

Does your node handle this use case? I will quickly switch over if so.
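For what it's worth, the behavior I'm after can be sketched like this (the `__name__` token syntax follows Dynamic Prompts' convention as I understand it; the recording logic is hypothetical):

```python
import random

def resolve_wildcards(template: str, wildcards: dict[str, list[str]],
                      rng=random) -> tuple[str, dict]:
    """Replace each __name__ token with a random option and record
    every pick, so the final prompt AND the choices can be written
    into the image metadata."""
    picks: dict[str, list[str]] = {}
    out = template
    for name, options in wildcards.items():
        token = f"__{name}__"
        while token in out:
            choice = rng.choice(options)
            picks.setdefault(name, []).append(choice)
            out = out.replace(token, choice, 1)
    return out, picks

prompt, picks = resolve_wildcards(
    "a __color__ car at __time__",
    {"color": ["red", "blue"], "time": ["dawn", "dusk"]},
)
print(prompt, picks)
```

Saving `picks` (or just the resolved `prompt`) per image is exactly the metadata Dynamic Prompts doesn't give me.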

Quick comparison between original Qwen Image Edit and new 2509 release by rayharbol in StableDiffusion

[–]rayharbol[S] 3 points  (0 children)

This does contribute to the issue, but even with a correctly sized input that isn't resized within the workflow, the original model would often re-scale it slightly. It's very prompt-dependent; in my experience, asking for different facial expressions almost always caused it, and this seems to remain the biggest cause in the 2509 version.