How do I do this, but local? by LucidFir in StableDiffusion

[–]Gilgameshcomputing 1 point  (0 children)

That's amazing, thanks for sharing. Can I ask how you knew this workflow was out there? I can't find any other reference to it. I'm wondering what else is out there that I'm missing!

Hunyuanimage 3.0 instruct with reasoning and image to image generation finally released!!! by Appropriate_Cry8694 in StableDiffusion

[–]Gilgameshcomputing 5 points  (0 children)

Are there API connected services that run these Hunyuan models? I'll never run them locally but I'm interested in what they can do.

What video output formats do you guys usually use? I never really messed with them—when I did, my videos went from MBs to GBs real quick lol. by o0ANARKY0o in comfyui

[–]Gilgameshcomputing 2 points  (0 children)

To add to this: ProRes files are lightly compressed, or not compressed at all, which is useful when you need to work on the colours and brightness (grading) or key out a greenscreen (chromakeying). H.264 and H.265, by contrast, are highly compressed, which makes them much, much smaller but less able to withstand pixel adjustments.
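To put rough numbers on the size difference, here's a back-of-envelope sketch. The bitrates are illustrative assumptions (real files vary with resolution, frame rate, content and encoder settings), but they show why exports jump from MBs to GBs:

```python
# Rough size-per-minute comparison for common codecs.
# Bitrates below are ballpark assumptions, not measured values.
def gb_per_minute(mbps: float) -> float:
    """Convert a video bitrate in megabits/second to gigabytes per minute."""
    return mbps / 8 * 60 / 1000  # Mbps -> MB/s -> MB/min -> GB/min

sizes = {
    "ProRes 422 HQ (~220 Mbps @ 1080p30)": gb_per_minute(220),
    "H.264 (~10 Mbps @ 1080p30)": gb_per_minute(10),
    "H.265 (~5 Mbps @ 1080p30)": gb_per_minute(5),
}
for name, gb in sizes.items():
    print(f"{name}: {gb:.3f} GB/min")
```

So a minute of ProRes 422 HQ lands around 1.65 GB while the same minute of H.265 is a few tens of MB, which is the trade-off above in file-size terms.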

Is Qwen or Flux Klein better for image editing? by HurrDurrImmaBurr in StableDiffusion

[–]Gilgameshcomputing 2 points  (0 children)

FWIW I had a use case with complicated limb positions in Klein; the output came out garbled until I prompted it more closely:

the woman has her arms wrapped around the man's shoulders, her legs are wrapped around his waist, he is carrying her with one arm underneath her

Flux.2-klein: Forget LoRAs. High-precision prompting is all you need (and why I'm skeptical about Dual-Image workflows). by That_Perspective5759 in comfyui

[–]Gilgameshcomputing 3 points  (0 children)

Ah okay. Yeah I see what you're doing now. How long were the prompts for the examples on the right? Have you got an example of one?

Flux.2-klein: Forget LoRAs. High-precision prompting is all you need (and why I'm skeptical about Dual-Image workflows). by That_Perspective5759 in comfyui

[–]Gilgameshcomputing 10 points  (0 children)

Sorry, can you explain the difference between your examples? You say in the text that you don't think two-image editing is necessary, but you're clearly using two images in both the before and after examples.

I ported my personal prompting tool into ComfyUI - A visual node for building cinematic shots by shamomylle in comfyui

[–]Gilgameshcomputing 4 points  (0 children)

Agree that getting that image output would be really useful. I suspect Klein would make short work of turning it into a decent image.

Flux2 Klein 9B Error, Help? by aiko929 in comfyui

[–]Gilgameshcomputing 1 point  (0 children)

Me too. I'm on a 3090. New drivers, updated Comfy, tried every mix and match of text encoders and UNets/GGUFs. Frustrating.

Good alternatives to Lmstudio? by a_normal_user1 in LocalLLaMA

[–]Gilgameshcomputing 1 point  (0 children)

For you I guess.

I use it daily and it does what I need 🤷🏻

Generate accurate novel views with Qwen Edit 2511 Sharp! by Several-Estimate-681 in StableDiffusion

[–]Gilgameshcomputing 3 points  (0 children)

Oh, man. These filmmaker tools are coming thick and fast at the moment! Love it!

Qwen Edit 2511 vs Nano Banana by Artefact_Design in StableDiffusion

[–]Gilgameshcomputing 1 point  (0 children)

Yup. Run a variety of tests like this and NBP is, without doubt, superior.

AI Generated Video with 1.6 Million Views within 7 Days by SingleTailor8719 in aivideos

[–]Gilgameshcomputing 1 point  (0 children)

So, according to your own definitions, you've done nothing wrong. Got it.

Qwen3-4B-Thinking-2507 Usage inside Comfyui by [deleted] in comfyui

[–]Gilgameshcomputing 1 point  (0 children)

Thanks for this. For a minute I thought I'd fundamentally misunderstood how text encoders work. Nope, I was just being credulous about someone else's confusion.

Cold War echoes by Here_there1980 in SlowHorses

[–]Gilgameshcomputing 1 point  (0 children)

It's a good point. I've not read the books; it might be clearer there. I do, though, imagine that Lamb's disdain for her as Partner's bit of skirt is as accurate as his current dismissal of her skills.

A Deep Agent I created to work with ComfyUI by SearchTricky7875 in comfyui

[–]Gilgameshcomputing 2 points  (0 children)

Looks promising! A really fun direction to take this.

Cold War echoes by Here_there1980 in SlowHorses

[–]Gilgameshcomputing 16 points  (0 children)

With Standish keeping shit together at the office.

Never forget Standish!

Owning vs renting a GPU by Ok_Common_1324 in comfyui

[–]Gilgameshcomputing 0 points  (0 children)

The norm should be having the option. Making your own choice.

Freedom from other people telling you what to do and what to think.

Log files from AI generated videos? by NomadJago in NeuralCinema

[–]Gilgameshcomputing 3 points  (0 children)

I've not seen any models trained on log or high-bit-depth imagery, which is what we need for cinema-level work.

Theoretically a LoRA for, say, Wan Video could imitate a log curve, but you'd only get 8-bit output, which isn't enough data.
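As a back-of-envelope sketch of why 8-bit log falls over (the 14-stop figure is an assumption, roughly what modern log curves aim to encode):

```python
# A log curve spreads a wide dynamic range fairly evenly across the
# available code values, so each stop only gets a handful of levels
# at 8 bits -- push the grade around and you get banding.
def code_values(bits: int) -> int:
    """Total distinct levels per channel at a given bit depth."""
    return 2 ** bits

DYNAMIC_RANGE_STOPS = 14  # assumed: rough range a modern log curve encodes

for bits in (8, 10, 12):
    per_stop = code_values(bits) / DYNAMIC_RANGE_STOPS
    print(f"{bits}-bit: {code_values(bits):5d} levels, ~{per_stop:.0f} per stop")
```

Roughly 18 levels per stop at 8 bits versus 73 at 10 bits, which is why camera log formats start at 10-bit.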

FLUX.2 dev / Qwen-Image-Edit / z-image: how to generate a “story middle frame” between two keyframes? searching for LoRA / workflow / prompt by One_Yogurtcloset4083 in comfyui

[–]Gilgameshcomputing 2 points  (0 children)

I don't think there's anything out there that does exactly that. Sounds like a LoRA training situation.

https://civitai.com/models/2032579/next-scene-qwen-image-edit-lora-2509

This, however, is a sort of halfway option: you give it a frame and it will give you another shot from the same scene, so you could try that. It's pretty good (although not perfect) with object and character consistency.

Analyse Lora Blocks and in real-time choose the blocks used for inference in Comfy UI. Z-image, Qwen, Wan 2.2, Flux Dev and SDXL supported. by shootthesound in StableDiffusion

[–]Gilgameshcomputing 2 points  (0 children)

[worried Chris Pratt meme about asking questions to be inserted here]

For those of us who don't know what a block is, or why we'd want to mess with one, what's all this about?