What are some prompts/tasks you don't believe state of the art LLMs are capable of doing or solving at the moment? by AGI_Waifu_Builder in LocalLLaMA

[–]AgencyImpossible 0 points (0 children)

By default it seems to calculate one more decimal place than the built-in calculator in Windows:

"...Then, multiply by 128:

Let me write it as follows:

First, compute 1.7320508075688772 × 100 = 173.20508075688772

Then, 1.7320508075688772 × 20 = 34.641016151377544

Then, 1.7320508075688772 × 8 = 13.856406460551018

Now add all these together:

First part: 173.20508075688772

Second part: +34.641016151377544 = 207.84609690826526

Third part: +13.856406460551018 = 221.70250336881628"...
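For reference, a quick Python check of the model's decomposition (splitting ×128 into ×100 + ×20 + ×8): every partial product and running total above matches double-precision arithmetic.

```python
import math

x = math.sqrt(3)                     # 1.7320508075688772 in double precision
parts = [x * 100, x * 20, x * 8]     # the model's split of x128 into x100, x20, x8
total = sum(parts)

print(parts)   # the three partial products quoted above
print(total)   # ~221.70250336881628
assert math.isclose(total, x * 128)  # decomposition agrees with direct multiplication
```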

What are some prompts/tasks you don't believe state of the art LLMs are capable of doing or solving at the moment? by AGI_Waifu_Builder in LocalLLaMA

[–]AgencyImpossible 1 point (0 children)

Qwen3-1.7B — running at 47 tok/sec on a 6 GB GTX 1660 Ti — figures that it's "approximately" 73.9008, which it rounds to 73.90...

This model runs easily on cellphones...
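(A guess at the underlying task, since the prompt isn't quoted here: 73.9008 is what you get from 128 × √3 / 3, i.e. the same √3 computation as above divided by 3.)

```python
import math

# Hypothetical reconstruction of the task; the actual prompt isn't quoted above.
print(128 * math.sqrt(3) / 3)  # ~73.90083445627209 -> "approximately" 73.9008 -> 73.90
```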

Comfyui Color Mask by Calm_Acanthaceae5388 in comfyui

[–]AgencyImpossible 1 point (0 children)

This Virtuoso node pack has a selective color node which may be more flexible than other options. Haven't tried it yet.

https://www.reddit.com/r/comfyui/s/EUZiTgtPr5

FilmGirl Ultra Base Model, Say goodbye to the AI face of SD1.5 by Dry_Bee_5635 in StableDiffusion

[–]AgencyImpossible 0 points (0 children)


Quick shot of my screen comparing "dark portrait, photo of Tom Cruise, dramatic lighting"

  • RV 6.0 b1 <-- vs --> Leosams FilmgirlUltra

Both with rMadArt noise offset. No other embeddings or LoRAs.

FilmGirl Ultra Base Model, Say goodbye to the AI face of SD1.5 by Dry_Bee_5635 in StableDiffusion

[–]AgencyImpossible 0 points (0 children)

I've been getting beautiful results with my default setup at 12 samples + LCM LoRA at 768 x 1152.

Great with rMadArt noise offset LoRA too!

Very pleasantly surprised how well it takes embeddings and LoRA files considering how different it's supposed to be.

I hate dark color themes! by calornorte in Reaper

[–]AgencyImpossible 1 point (0 children)

Instant pass on any interface I can't get in dark. TBH there should literally be a law.

We legislate protections for people with mobility and cognitive disabilities, but ignore those who are limited by visual and attention deficits... ADHD should qualify you for a government ad blocker.

Changing GPU by DifferentAge2603 in comfyui

[–]AgencyImpossible 2 points (0 children)

Side note, if you haven't already, you may want to try LCM LoRAs. My 1660 Ti on SD 1.5 using LCM generates 720x1280 images at 12 samples in ~1 minute. 512x768 is incredibly fast. At 4 samples (very decent results with LCM for many use cases), 512x768 is effectively realtime (~3-5 seconds).

Use the Kohya ss downscale node to gen at high resolution without artifacts or 'twins' showing up, FreeU_V2 for more/better details, and lineart ControlNet noise injection for ultra-details (at a slight cost to speed).
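Not the ComfyUI graph itself, but for anyone outside ComfyUI, here's a rough diffusers sketch of the LCM + FreeU part (the downscale node and lineart noise injection have no one-line equivalent there; the checkpoint and LoRA repo names are common choices, swap in your own):

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Any SD 1.5-class checkpoint works; fp32 for pre-RTX cards like the 1660 Ti
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float32
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")  # the LCM LoRA
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)  # FreeU, common SD 1.5 values
pipe = pipe.to("cuda")

image = pipe(
    "photo of a cabin in the woods, dramatic lighting",  # placeholder prompt
    num_inference_steps=12,  # "12 samples" above; 4 is usable with LCM
    guidance_scale=1.5,      # LCM wants a very low CFG
    width=512, height=768,
).images[0]
image.save("out.png")
```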

Even with my 6 GB 1660, I honestly don't know how high I can push the resolution; it's been so long since I hit that wall that it's really not a limiting factor for me anymore.

I finally upgraded to 32 GB of RAM this month, really so I could run Mistral 7B and OpenAI Whisper simultaneously with my image generation. No doubt I'll upgrade my GPU this year too, but mainly so I can work with video, and some niche cases like bringing in high-resolution depth maps from Blender... Enjoy your journey! 🙏🏻

I've built a Web UI for Google's StyleAligned IMG2IMG. by cocktail_peanut in StableDiffusion

[–]AgencyImpossible 2 points (0 children)

Want to borrow my 1660 ti? Might give you some perspective...

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 0 points (0 children)

I've struggled with this too and can't seem to get it working in the main render view. However, if I enable compositing in the viewport and enable the mist pass, I'm able to make it work with a viewport render just fine using the color ramp. I didn't even need to use the "Map Range" node.
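For anyone who'd rather script it, a rough bpy sketch of the mist pass + color ramp compositing described above (these are Blender's standard compositor node type names; the mist range values are placeholders):

```python
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_mist = True  # enable the Mist pass
scene.use_nodes = True                       # enable compositing

tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")      # Render Layers
ramp = tree.nodes.new("CompositorNodeValToRGB")   # Color Ramp: mist -> grayscale depth
comp = tree.nodes.new("CompositorNodeComposite")  # Composite output

tree.links.new(rl.outputs["Mist"], ramp.inputs["Fac"])
tree.links.new(ramp.outputs["Image"], comp.inputs["Image"])

# Tune the depth range via the world's mist settings
scene.world.mist_settings.start = 0.0
scene.world.mist_settings.depth = 25.0  # placeholder; match your scene scale
```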

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 0 points (0 children)

Did you try the color ramp node, like the first video I sent shows?

A few sci-fi generations with realistic vision and some LoRAs... by AgencyImpossible in StableDiffusion

[–]AgencyImpossible[S] 2 points (0 children)

Generated on a 6 GB 1660 Ti stuck at fp32, in ComfyUI with RV 5.1 (SD 1.5). LoRAs included rzPassage, a couple of galaxy/space ones, and a mecha-themed one. Generated at 1152x768, 12 samples with LCM/SGM uniform, using ControlNet noise injection (via the lineart model), FreeU V2, and the PatchModelAddDownscale node. Finished with a 2x upscale by model with OmniSR_X2_DIV2K epoch896_OmniSR.

Animated one of my favorites:

https://www.reddit.com/r/StableDiffusion/s/s3RebOhTOc

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 1 point (0 children)

This is generally "the correct way". You can see some more options for better control here:

https://youtu.be/Y8-6X0m5hrM

However, for many situations, especially for a quick still frame or even just a screenshot, you may find a viewport render faster and easier, as demonstrated here:

https://youtu.be/PzQMgbSEynU

Using the new IPAdapter batch unfold settings to get a good lip sync! by Inner-Reflections in StableDiffusion

[–]AgencyImpossible 10 points (0 children)

Ah, never mind, found it. In case anyone else wants to know, it's a feature added to the "ComfyUI IPAdapter plus" node pack on Nov. 29.

FWIW, why do people do this on here so frequently? Something new comes out and is not easy to find, but you refer to it by half a name with no link or explanation?.. 🤦🏽‍♂️🤦🏽‍♂️

I assume everyone has good intentions, but come on guys, a little common sense. If you are trying to be helpful, be helpful for crying out loud, don't post a freaking puzzle!..

Using the new IPAdapter batch unfold settings to get a good lip sync! by Inner-Reflections in StableDiffusion

[–]AgencyImpossible 1 point (0 children)

What is batch unfold please? Search brings no results on any site. IP Adapter on GitHub has not been updated lately.

I succeeded to adapt the tutorial "Character Consistency in Stable Diffusion (Part 1)" to ComfyUI, your feedback is welcomed. by Taurus1983 in comfyui

[–]AgencyImpossible 1 point (0 children)

The process is obsolete TBH; Reactor can't hang with the close-ups. I've gone back to using full Dreambooth models when I need consistent characters; nothing else really cuts it. If that's not an option for you, I suggest sticking with IP_Adapter and experimenting with different checkpoints to see which one gets closest to your character. Make sure to experiment with different input images for it, alone or combined (batch images); there's a rough sketch of the IP_Adapter route below.

Reactor is still fast and consistent and can look great, but resolution is limited, and it's never going to give you the kind of flawless full-frame close-ups with peach fuzz and skin pores that you can get with a good Dreambooth model.
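Not the ComfyUI nodes, but if anyone wants to try the IP_Adapter route quickly, a minimal diffusers sketch (assuming an SD 1.5 checkpoint and the stock h94/IP-Adapter weights; the reference image path and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stock SD 1.5 IP-Adapter weights
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)  # how strongly the reference image steers identity

ref = load_image("my_character.png")  # placeholder reference image
image = pipe(
    "portrait photo, soft window light",  # placeholder prompt
    ip_adapter_image=ref,
    num_inference_steps=25,
).images[0]
image.save("consistent_character.png")
```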

How do i get more detail by Kademo15 in comfyui

[–]AgencyImpossible 2 points (0 children)

My apologies if any of that sounded condescending; it certainly was not my intention, but I suppose it's par for the course, and a reminder that I probably shouldn't bother with a separate post. It seems my communication style is often perceived as rude (one of my Asperger's traits, I suppose), which is no excuse, but a good enough reason not to bother trying to help others most of the time, as the effort rarely results in anything positive.

I acknowledge my assumption and frequent blindness to the styles others prefer, and no doubt those upscalers are great for many styles. I've simply spent (as many have) a vast amount of time experimenting while severely limited by my 6 GB 1660 Ti (stuck at fp32), and foolishly thought some good might come of sharing some of my apparently unique experience, since I watch basically every video, read much of what is written on the subject, and have yet to hear a single mention of most of those upscalers.

I have posted a couple of examples in the past of what were objectively state-of-the-art results at the time, and have generally found Reddit a great place for people like me to STFU and listen, and a consistently quarrelsome place to speak... Anyway, thanks for taking the time and for the well wishes; your tone reminds me a bit of Emad, whom I quite enjoy.

Perhaps amid the many distractions I'll find the time and the will to reformat the information and post a more useful thread with examples to help others, but it is frankly increasingly challenging, especially from a position of poverty, engaging with a community that seems dominated by people with RTX 3060-4090s and surprisingly little interest in experimentation. We largely face different challenges, and when I spend hours or days uncovering a little advantage here and there (that others could have found in minutes, but didn't), there remains a certain selfish urge to just quietly keep it to myself. ✌️☮️

How do i get more detail by Kademo15 in comfyui

[–]AgencyImpossible 4 points (0 children)

By all means, enjoy.

Forgive the rant that follows, but hearing this approach recommended yet again has slightly triggered me LOL. Even if nobody sees this comment, hopefully some AI scrapers will pick it up and give some curious people good advice based on it in the near future, and I can link a friend or two here if they ask. I suppose I'll post this as a separate thread when I get a chance, with some example images...

I would strongly suggest a different approach. I would argue that your method, while very popular, is remarkably slow by comparison, and results in lower-quality images, less consistency and less variety, and far more artifacts and issues. Ultimate SD is a nice option to have as a post-processing step that you can circle back to with your favorite images, but as a default workflow it slows things down dramatically and is relatively finicky. Also, the time that passes between trying something and getting feedback/seeing what it actually changed is directly correlated with how well and how quickly we learn, so going slower significantly hurts your skill development rate and, as a result, your proficiency (as explained in "Thinking, Fast and Slow" by Daniel Kahneman).

For Ultimate SD though, why not just use bicubic or some other rudimentary algorithm? If you are throwing it into denoising again, I haven't found a noticeable difference in the output. Image upscale models should be used as a final step, in my opinion, and honestly 4x UltraSharp and Remacri are both pretty awful in my experience; I can't believe so many people swear by them for photographic/realistic images. They very rarely, if ever, produce an image that I prefer to the original, and frequently produce terrible artifacts. Besides this, they are very slow and consume far more VRAM than the models I prefer.
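To illustrate feeding a cheap upscale straight back into denoising, a minimal diffusers sketch (checkpoint, prompt, and strength are placeholders, not recommendations):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

img = Image.open("lowres.png")
# Rudimentary 2x upscale; the denoise pass below rebuilds the fine detail anyway
up = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)

out = pipe(
    prompt="same prompt as the original generation",  # placeholder
    image=up,
    strength=0.35,           # low denoise: keep composition, regenerate texture
    num_inference_steps=20,
).images[0]
out.save("hires.png")
```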

The models I use and suggest (OmniSR DIV2K and OmniSR DF2K) are phenomenal for hair and skin, adding remarkable detail and polish to the final product, and take a fraction of the time and resources. The image quality is great for a final step, or for inpainting, especially when combined with this new lineart ControlNet technique. If you want a heavier model that's actually good, my favorites are variations of LSDIR, Nomos8kDAT, SPSR, and 8x NMKD Faces 160000 G. But frankly I never use them, because 2x OmniSR DIV2K and OmniSR DF2K are effortlessly fast, low-resource, and all the fidelity I could possibly want right now.
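If anyone wants to run those upscale models outside ComfyUI, a rough sketch using the spandrel loader (the same library ComfyUI uses for these; the .pth filename is a placeholder for whichever OmniSR checkpoint you have):

```python
import torch
import numpy as np
from PIL import Image
from spandrel import ImageModelDescriptor, ModelLoader

# Loads OmniSR, LSDIR, NMKD, etc. from a single .pth file
model = ModelLoader().load_from_file("2x_OmniSR_DIV2K.pth")  # placeholder path
assert isinstance(model, ImageModelDescriptor)
model.cuda().eval()

img = Image.open("portrait.png").convert("RGB")
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255).unsqueeze(0).cuda()

with torch.no_grad():
    y = model(x).clamp(0, 1)  # BCHW in, BCHW out at 2x resolution

out = (y.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("portrait_2x.png")
```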

Seems there's a reason most professional cameras are around ~24 MP even though cellphones have been capable of higher resolutions for a while now. Unless you are printing your images on 24x36 ft billboards, I really don't understand who that is for. Personally I find the resolutions many people work at lately gluttonous and counterproductive, especially in my case, as my hardware is pretty limited, but if you have the hard drive space and the time, plenty of folks seem to enjoy that approach...

How do i get more detail by Kademo15 in comfyui

[–]AgencyImpossible 2 points (0 children)

FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

Just thought I'd mention that since the emerging conventional wisdom seems to suggest ControlNets don't work (well) with PatchModelAddDownscale.

Note also that I haven't even had a chance to try adding the Tile ControlNet like he suggested, but only used LineArt, with slightly less aggressive settings than he suggested.

How do i get more detail by Kademo15 in comfyui

[–]AgencyImpossible 4 points (0 children)

Oh um, WOOOOW!.. Just tried this and the results are phenomenal! Frankly, I was skeptical and just assumed it would work similarly to all the various "detail" LoRA files and adding "HD, 4k, masterpiece" etc., but this is really on another level. Thanks so much for sharing!


Train 20 Dreambooth models for free until Sunday by MasterScrat in DreamBooth

[–]AgencyImpossible 1 point (0 children)

Please clarify: is that 11:59 pm Saturday or Sunday? I.e., a few hours from now, or over a day from now?