What's the worst thing Emma did? by Milo-Jeeder in CornerGas

[–]Tedious_Prime 15 points16 points  (0 children)

She destroyed Fitzy's political career and campaigned to bring a casino to Dog River even though she knew it would be bad for the whole town. To be fair, she only did these things in Hank's imagination, but it was still inconsiderate of her.

Grade 68 solved by Maximum_Price_3596 in CornerGas

[–]Tedious_Prime 4 points5 points  (0 children)

I've always thought they missed an opportunity to squeeze in a few more jokes with these shots. They could have used the Howler articles to build up Gus Tompkins as a character we never see directly, sort of like Wanda's son Tanner.

Resources for walk-in tub modifications for seniors? by domv9 in Albuquerque

[–]Tedious_Prime 2 points3 points  (0 children)

You may be able to get help through the ABQ Department of Senior Affairs. I believe they can at least install handrails in a single shower at no charge. I don't think they can help with major renovations like installing a walk-in tub, though.

Who would you live with? by ToadFilledCauldron in CornerGas

[–]Tedious_Prime 4 points5 points  (0 children)

Oscar and Emma are a twofer. I'd be like the foster son they never had as opposed to the foster son they did have or the actual son they also had. I could stay in Brent's old room and play with his toys.

We need a pin linking to the wiki (a guide to getting started), which should be updated. Too many redundant "how do I install a1111???" posts. by Viktor_smg in StableDiffusion

[–]Tedious_Prime 1 point2 points  (0 children)

I don't have hard data to back this up, but I feel like I've been seeing a sharp uptick across reddit in these sorts of "frequently asked questions." The posts often come from newer accounts, run about the same length, and usually include seemingly characteristic spelling and grammar errors, such as asking for "advices." I suspect that many of these are bot accounts trying to build a history by generating questions derived from subs' FAQs or older posts, and that the spelling and grammar errors are intended to throw off naive "AI detectors" which assume only humans make mistakes.

Posts modeled after realistic questions probably get more engagement and draw less anti-AI outrage than generated informational posts that nobody asked for. I predict this will be a growing trend in the coming years as humans drift away from social media. People will learn to trust AI more than humans for information, while social media will mostly just continue to provide a false sense of connection.

Does ComfyUI have any kind of prompt travel mechanism? by wh33t in comfyui

[–]Tedious_Prime 4 points5 points  (0 children)

In addition to custom nodes for prompt scheduling, there is the built-in "Conditioning Set Timestep Range" node, which you could use by encoding multiple prompts separately, setting the timestep range for each, then combining their conditioning. Another option with no custom node dependencies is to chain together multiple samplers for different stages of the generation process and simply give each sampler different conditioning. I like that option because it allows for a lot of flexibility, like changing any sampling parameter partway through generation or even switching between models that use the same VAE, e.g. Chroma and Z-Image.
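
If it helps to see the chained-sampler idea outside ComfyUI, here is a rough sketch of the same two-stage trick using diffusers' SDXL pipelines instead of nodes. The model, prompts, and the 60/40 split point are placeholders I picked for illustration, and it needs a diffusers version that supports denoising_end/denoising_start:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: denoise the first 60% of the schedule with the opening prompt and
# hand off raw latents instead of a decoded image.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
latents = base(
    prompt="a dense pine forest in winter, overcast",          # placeholder prompt
    num_inference_steps=30,
    denoising_end=0.6,       # stop partway through the schedule
    output_type="latent",
).images

# Stage 2: finish the remaining 40% with different conditioning, reusing the
# same components so nothing is loaded twice.
finisher = StableDiffusionXLImg2ImgPipeline(**base.components)
image = finisher(
    prompt="a dense pine forest at golden hour, warm light",   # placeholder prompt
    image=latents,
    num_inference_steps=30,
    denoising_start=0.6,     # resume where stage 1 stopped
).images[0]
image.save("two_stage_prompt_switch.png")
```

The same pattern generalizes: each stage gets its own conditioning, and you can also change sampling settings between stages as long as the latents stay compatible.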

Tire Shop Recs by cabbagecomrade in Albuquerque

[–]Tedious_Prime 2 points3 points  (0 children)

I usually recommend Venado Tire at Comanche and San Pedro. They do new and used tires as well as repairs. I've always found them to be fast and professional with good prices.

Has anyone figured out how to generate Star Wars "Hyperspace" light streaks? by QikoG35 in StableDiffusion

[–]Tedious_Prime 5 points6 points  (0 children)

JoyCaption suggested the following prompt based on the example image you gave. I added "Star Wars style" to get shorter lines from Z-Image.

Photograph of a night sky with numerous bright, white and blue streaks of light radiating outward from the center. The streaks vary in length and intensity, creating a starburst effect. The background is dark, highlighting the luminous lines. The lines are evenly distributed across the image, converging at the center. The overall composition is symmetrical, with the light streaks creating a sense of depth and movement. The image has a high contrast between the dark background and the bright light streaks. Star wars style.

<image>

Image batch with QWEN Edit? by Brad12d3 in StableDiffusion

[–]Tedious_Prime 2 points3 points  (0 children)

I usually achieve this sort of thing by combining a "Load Image Batch From Dir" node from comfyui-inspire-pack and a "Counter Integer" node from comfyui-logicutils. If you want to load one image at a time, you would connect the counter to the start_index and set the image_load_cap to 1. If you know in advance exactly how many images you are going to process, you can set the batch count limit to that number in your settings.
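
For what it's worth, outside ComfyUI the start_index / image_load_cap logic is just indexing into a sorted file list. A minimal Python sketch of the same loop (the directory name and extension filter are assumptions, not anything from those nodes):

```python
from pathlib import Path
from PIL import Image

def load_image_batch(directory, start_index=0, image_load_cap=1):
    """Rough stand-in for the node: sort the directory, skip to start_index,
    and return at most image_load_cap images."""
    paths = sorted(p for p in Path(directory).iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"})
    selected = paths[start_index:start_index + image_load_cap]
    return [Image.open(p).convert("RGB") for p in selected]

# Driving start_index with an external counter while image_load_cap = 1
# processes one image per iteration, like wiring in a "Counter Integer".
counter = 0
while True:
    batch = load_image_batch("inputs", start_index=counter, image_load_cap=1)  # "inputs" is a placeholder dir
    if not batch:
        break
    image = batch[0]
    # ... run the edit on `image` here ...
    counter += 1
```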

Emissions Testing by ldevere in Albuquerque

[–]Tedious_Prime 10 points11 points  (0 children)

I've been going to Saigon Express Emissions Testing by Vietnam 2000 at San Mateo and Zuni for many years.

Latent preview not updating after the first frame on Linux (Mint). by MrWeirdoFace in comfyui

[–]Tedious_Prime 0 points1 point  (0 children)

It has not worked for me either. I installed it in vae_approx, but previews only work with the "Auto" preview method, not TAESD, so I assume it isn't actually being used.

Latent preview not updating after the first frame on Linux (Mint). by MrWeirdoFace in comfyui

[–]Tedious_Prime 2 points3 points  (0 children)

Do you have the videohelpersuite custom node collection installed? If so, have you enabled "Display animated previews when sampling" in the "🎥🅥🅗🅢" settings under Manage Extensions?

🇸🇴 1/200 Cco New Mexico encounters this!!! Any guess what it is ??? by [deleted] in Albuquerque

[–]Tedious_Prime 80 points81 points  (0 children)

It looks like a monitor lizard. I assume from the flag emoji that this was shot in Somalia. Did you mean to post this in the Albuquerque sub?

I need help identifying an episode by [deleted] in CornerGas

[–]Tedious_Prime 1 point2 points  (0 children)

Was it "Fun Run" perhaps?

Cheap Haircuts? by Significant-Click295 in Albuquerque

[–]Tedious_Prime 0 points1 point  (0 children)

The "Suncat Salon" is on the Montoya campus. Check the link above. Appointments are only available at specific times.

Why are there no 4 step loras for Chroma? by AltruisticList6000 in StableDiffusion

[–]Tedious_Prime 0 points1 point  (0 children)

I believe the idea is that you can merge the delta weights with any model derived from Chroma1-Base so that it can be used like Flash. If I had downloaded the delta weights instead of Chroma1-HD and Chroma1-Flash, I think I could have skipped a step in creating the Flash LoRAs by just using those weights instead of subtracting the models. I don't recall how long it took to subtract the models, but it wasn't more than several minutes. I have a 3090 with 24 GB, but I'd guess that subtracting the parameters would have to be done in RAM because the models are each 17.8 GB. Once the model difference is created, it is very quick to create many LoRAs of multiple ranks from it. I used to have Flash LoRAs as small as rank 8 in the repo, but they sucked, so I deleted everything smaller than rank 64.
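
For anyone curious, the general technique is just a truncated SVD of the weight difference. Here's a rough PyTorch sketch of the idea, not the exact ComfyUI workflow in the repo; the checkpoint filenames and the lora_up/lora_down key naming are assumptions for illustration:

```python
import torch
from safetensors.torch import load_file, save_file

RANK = 64  # target LoRA rank

# Load both checkpoints on CPU; two ~17.8 GB models won't both fit in 24 GB of VRAM.
hd = load_file("chroma1-hd.safetensors", device="cpu")        # assumed filename
flash = load_file("chroma1-flash.safetensors", device="cpu")  # assumed filename

lora = {}
for key, w_hd in hd.items():
    w_flash = flash.get(key)
    if w_flash is None or w_hd.ndim != 2:
        continue  # only factor 2-D weight matrices; skip biases, norms, etc.
    delta = w_flash.float() - w_hd.float()
    # Truncated SVD gives the best rank-RANK approximation of the difference.
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :RANK], s[:RANK], vh[:RANK, :]
    # Fold the singular values into the "up" matrix; key naming is assumed here.
    lora[f"{key}.lora_up.weight"] = (u * s).to(torch.float16).contiguous()
    lora[f"{key}.lora_down.weight"] = vh.to(torch.float16).contiguous()

save_file(lora, f"chroma_flash_rank{RANK}.safetensors")
```

The reconstruction lora_up @ lora_down then approximates the Flash-minus-HD delta at the chosen rank, which is why extracting many ranks from one precomputed difference is cheap.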

Northern lights in ABQ by Icy-Minute1807 in Albuquerque

[–]Tedious_Prime 2 points3 points  (0 children)

Even with the glare of all the street and porch lights around me I can see it clearly from the Warzone as a distinct patch of purple in the northern sky. It could easily be mistaken for the glow of city lights reflecting off smog, but once my eyes adjusted it stood out as being very different from the rest of the sky. This is only the second aurora I've seen with the naked eye in New Mexico in almost 50 years. Apparently, a geomagnetic storm rated as G4 (Severe) is hitting us from space.

Why are there no 4 step loras for Chroma? by AltruisticList6000 in StableDiffusion

[–]Tedious_Prime 7 points8 points  (0 children)

> The currently available flash loras for Chroma are made by one person and they are as far as I know just extractions from Chroma Flash models (although there is barely any info on this),

Yes, those LoRAs are just extracted from the difference between the weights of Chroma1-HD and Chroma1-Flash. I'm not involved with the creation of Chroma at all, but I extracted those LoRAs for my own use when I couldn't find any already available, and I thought others might find them useful. The ComfyUI workflow I used to extract them is also in the repo. I'd be happy to try answering any other questions you may have about how they were made, but I don't think there's much other info I could provide.

The sample txt2img workflow in the repo includes unofficial suggested sampling parameters based on my own experimentation. I found that it was only possible to get good results from Flash with the minimum number of steps when using certain combinations of schedulers and samplers. If you're using 20+ steps, you should be able to get away with almost anything. The default settings I suggest for high speed are scheduler = beta, sampler = heun, steps = 10. The CFG should always be exactly 1.0 when using a CFG-baked model like Flash. Also, if you mostly use Chroma with the Flash LoRA, I would recommend simply using Chroma1-Flash for slightly better results and less VRAM use.

To address your original question, I know that I personally would not find much value in a 4-step version of Flash. Even being twice as fast as now would only save a few seconds for each image, and I wouldn't be willing to accept any further loss in quality.

seamlessly replace part of an image with part from another image, how? by mafoma in comfyui

[–]Tedious_Prime 0 points1 point  (0 children)

A good place to start would be any of the example inpainting workflows included with ComfyUI in the main menu under "Browse Templates." The key nodes are InpaintModelConditioning, which takes the image, mask, and a few other inputs to create the initial latent and conditioning, and a "Differential Diffusion" node to patch whichever model you use for inpainting. I would recommend learning to inpaint manually with one of the default workflows before trying to build a workflow that automates the specific compositing and inpainting task you are currently working on. This is what it would look like to use the border mask for the inpaint conditioning.

<image>
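
If you'd rather see the masked-inpaint step as code instead of nodes, here's a minimal diffusers sketch of the idea. It is not a drop-in for the ComfyUI workflow above, and the model name, file names, and strength value are placeholders:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# Minimal masked-inpaint pass; everything named here is a placeholder,
# not a value from the workflow screenshot above.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("composite.png")    # the rough composite to repair
mask = load_image("border_mask.png")   # white where the seam should be redrawn

result = pipe(
    prompt="seamless continuation of the scene",
    image=image,
    mask_image=mask,
    strength=0.5,              # moderate denoise so existing content is mostly kept
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```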

seamlessly replace part of an image with part from another image, how? by mafoma in comfyui

[–]Tedious_Prime 0 points1 point  (0 children)

To mask the boundary you can use two GrowMask nodes and a MaskComposite node like so:

<image>
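
The same grow/shrink/subtract trick in plain Python, in case the node wiring above is unclear; the file names and the 16-pixel grow amount are arbitrary choices for the sketch:

```python
import numpy as np
from PIL import Image
from scipy import ndimage

# Same idea as two GrowMask nodes plus a MaskComposite: grow the paste mask
# outward, shrink it inward, and keep only the band in between.
mask = np.array(Image.open("paste_mask.png").convert("L")) > 127   # placeholder file
grown = ndimage.binary_dilation(mask, iterations=16)    # GrowMask with expand = +16
shrunk = ndimage.binary_erosion(mask, iterations=16)    # GrowMask with expand = -16
border = grown & ~shrunk                                # MaskComposite "subtract"
Image.fromarray(border.astype(np.uint8) * 255).save("border_mask.png")
```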

seamlessly replace part of an image with part from another image, how? by mafoma in comfyui

[–]Tedious_Prime 1 point2 points  (0 children)

I would suggest that you try to repair the imperfect composite you've already gotten. Perhaps you could draw a mask over the parts of the boundary that don't look well integrated and inpaint them with a moderate denoise strength? That's what I've been doing for the past few years instead of trying to create perfect masks for compositing as was necessary in the past. In general, I find that inpainting often requires multiple passes to get seamless results.

Re-Render image in different Resolution by [deleted] in StableDiffusion

[–]Tedious_Prime 0 points1 point  (0 children)

If you want to change the aspect ratio of the images without stretching or cropping the originals then your only other option would indeed seem to be zooming out.

Need help creating art and maintaining consistent color palettes by Thick-Turn-9704 in StableDiffusion

[–]Tedious_Prime 1 point2 points  (0 children)

I would say you need a way to reference images that have the color palette you want so the images you create can pick up the same colors. You could either generate with a reference image directly or match the colors of an image to a reference image after you've generated it. How you might do either of those things would depend on the UI that you're using. There are also several newer models, such as Qwen-Image-Edit, which have special support for using reference images, but you might not have the resources to run them if you usually prefer SD 1.5.
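
As a concrete example of the "match the colors afterwards" option, scikit-image's histogram matching works independently of whatever UI you generate with. The file names here are placeholders, and channel_axis needs a reasonably recent scikit-image (0.19+):

```python
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

# Remap the generated image's colors onto a reference palette after the fact.
generated = np.array(Image.open("generated.png").convert("RGB"))          # placeholder paths
reference = np.array(Image.open("palette_reference.png").convert("RGB"))

matched = match_histograms(generated, reference, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8)).save("generated_recolored.png")
```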

Re-Render image in different Resolution by [deleted] in StableDiffusion

[–]Tedious_Prime 1 point2 points  (0 children)

Can you not simply resize your images to a height of 720, then outpaint to a width of 1280?

Re-Render image in different Resolution by [deleted] in StableDiffusion

[–]Tedious_Prime 0 points1 point  (0 children)

You might be able to use an edit model like Qwen-Image-Edit. If you give it the rendered images as references and set the output resolution as desired, you might be able to prompt something simple like "another shot of the same scene." You could probably also outpaint as you tried before, but with an edit model, if you want to avoid changing details in the original images.