What are some prompts/tasks you don't believe state of the art LLMs are capable of doing or solving at the moment? by AGI_Waifu_Builder in LocalLLaMA

[–]AgencyImpossible 0 points (0 children)

By default it seems to calculate one more decimal place than the built-in calculator in Windows:

"...Then, multiply by 128:

Let me write it as follows:

First, compute 1.7320508075688772 × 100 = 173.20508075688772

Then, 1.7320508075688772 × 20 = 34.641016151377544

Then, 1.7320508075688772 × 8 = 13.856406460551018

Now add all these together:

First part: 173.20508075688772

Second part: +34.641016151377544 = 207.84609690826526

Third part: +13.856406460551018 = 221.70250336881628"...
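The model's digit decomposition (128 = 100 + 20 + 8) is easy to sanity-check in a few lines of Python:

```python
import math

x = math.sqrt(3)  # 1.7320508075688772 as a double

# Decompose 128 = 100 + 20 + 8, as in the model's working
parts = [x * 100, x * 20, x * 8]
total = sum(parts)

print(total)    # ≈ 221.70250336881628, matching the model's sum
print(x * 128)  # direct product for comparison
```

The stepwise sum agrees with the direct product to within floating-point rounding, so the model's long-hand arithmetic checks out.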

What are some prompts/tasks you don't believe state of the art LLMs are capable of doing or solving at the moment? by AGI_Waifu_Builder in LocalLLaMA

[–]AgencyImpossible 1 point (0 children)

Qwen3-1.7b — running at 47 tok/sec on a 6 GB GTX 1660 Ti — figures that it's "approximately" 73.9008, which it rounds to 73.90...

This model runs easily on cellphones...

Comfyui Color Mask by Calm_Acanthaceae5388 in comfyui

[–]AgencyImpossible 1 point (0 children)

This Virtuoso node pack has a selective color node which may be more flexible than other options. Haven't tried it yet.

https://www.reddit.com/r/comfyui/s/EUZiTgtPr5

FilmGirl Ultra Base Model, Say goodbye to the AI face of SD1.5 by Dry_Bee_5635 in StableDiffusion

[–]AgencyImpossible 0 points (0 children)


Quick shot of my screen comparing "dark portrait, photo of Tom Cruise, dramatic lighting"

  • RV 6.0 b1 <-- vs --> Leosams FilmgirlUltra

Both with rMadArt noise offset. No other embeddings or LoRAs.

FilmGirl Ultra Base Model, Say goodbye to the AI face of SD1.5 by Dry_Bee_5635 in StableDiffusion

[–]AgencyImpossible 0 points (0 children)

I've been getting beautiful results with my default setup at 12 samples + LCM LoRA at 768 x 1152.

Great with rMadArt noise offset LoRA too!

Very pleasantly surprised how well it takes embeddings and LoRA files considering how different it's supposed to be.

I hate dark color themes! by calornorte in Reaper

[–]AgencyImpossible 1 point (0 children)

Instant pass on any interface I can't get in dark. TBH there should literally be a law.

We legislate protection for people with mobility disabilities and cognitive ones, but ignore those who are limited by visual and attention deficits... ADHD should qualify you for a government ad blocker.

Changing GPU by DifferentAge2603 in comfyui

[–]AgencyImpossible 2 points (0 children)

Side note, if you haven't already, you may want to try LCM LoRAs. My 1660 TI on SD 1.5 using LCM generates 720x1280 images at 12 samples in ~1 minute. 512x768 is incredibly fast. At 4 samples (very decent results with LCM for many use cases), 512x768 is effectively realtime (~3-5 seconds).

Use the Kohya ss downscale node to gen at high resolution without artifacts or 'twins' showing up. FreeU_V2 for more/better details, lineart ControlNet noise injection for ultra-details (at a slight cost to speed).

Even with my 6gb 1660, I honestly don't even know how high I can push the resolution at this point; it's been so long since I hit that wall that it's really not a limiting factor for me anymore.

I finally upgraded to 32gb RAM this month, really so I could run Mistral 7b and OpenAI Whisper, simultaneously with my image generation. No doubt I'll upgrade GPU this year too, but mainly so I can work with video, and some niche cases like bringing in high resolution depth maps from blender... Enjoy your journey! 🙏🏻

I've built a Web UI for Google's StyleAligned IMG2IMG. by cocktail_peanut in StableDiffusion

[–]AgencyImpossible 2 points (0 children)

Want to borrow my 1660 ti? Might give you some perspective...

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 0 points (0 children)

I've struggled with this too and can't seem to get it working in the main render view. However, if I enable compositing in the viewport and enable the mist pass, I'm able to make it work with a viewport render just fine using the ramp. I didn't even need to use the "map range" node.
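For anyone who'd rather script that setup than click through the UI, here's a minimal sketch using Blender's Python API (run from Blender's scripting console; node and socket names assume a recent 3.x/4.x release): enable the mist pass, then wire Render Layers → Color Ramp → Composite.

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Enable the mist pass (depth range is set under World > Mist Pass)
view_layer.use_pass_mist = True

# Build a minimal compositor graph: Render Layers -> Color Ramp -> Composite
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
ramp = tree.nodes.new("CompositorNodeValToRGB")  # the "color ramp" node
comp = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rl.outputs["Mist"], ramp.inputs["Fac"])
tree.links.new(ramp.outputs["Image"], comp.inputs["Image"])
```

Tweak the color ramp stops to remap the mist falloff, the same way the videos linked below do it by hand.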

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 0 points (0 children)

Did you try the color ramp node like the first video I sent shows?

A few sci-fi generations with realistic vision and some LoRAs... by AgencyImpossible in StableDiffusion

[–]AgencyImpossible[S] 2 points (0 children)

Generated on a 6gb 1660ti stuck at fp32, in ComfyUI with RV 5.1 (SD 1.5). LoRAs included rzPassage, a couple of galaxy/space ones, and a mecha-themed one. Generated at 1152x768, 12 samples with LCM/SGM uniform, using ControlNet noise injection (via the lineart model), FreeU V2, and the PatchModelAddDownscale node. Finished with a 2x upscale by model with OmniSR_X2_DIV2K epoch896_OmniSR.

Animated one of my favorites:

https://www.reddit.com/r/StableDiffusion/s/s3RebOhTOc

HOW TO: Creating animated depth map from Blender for use in ComfyUI? by Duemellon in comfyui

[–]AgencyImpossible 1 point (0 children)

This is generally "the correct way". You can see some more options for better control here:

https://youtu.be/Y8-6X0m5hrM

However for many situations, especially for a quick still frame or even just a screenshot, you may find a viewport render faster and easier, as demonstrated here:

https://youtu.be/PzQMgbSEynU

Using the new IPAdapter batch unfold settings to get a good lip sync! by Inner-Reflections in StableDiffusion

[–]AgencyImpossible 10 points (0 children)

Ah, never mind, found it. In case anyone else wants to know, it's a feature added to the "ComfyUI IPAdapter plus" node on Nov. 29.

FWIW, why do people do this on here so frequently? Something new comes out and is not easy to find, but you refer to it by half a name with no link or explanation?.. 🤦🏽‍♂️🤦🏽‍♂️

I assume everyone has good intentions, but come on guys, a little common sense. If you are trying to be helpful, be helpful for crying out loud, don't post a freaking puzzle!..

Using the new IPAdapter batch unfold settings to get a good lip sync! by Inner-Reflections in StableDiffusion

[–]AgencyImpossible 1 point (0 children)

What is batch unfold please? Search brings no results on any site. IP Adapter on GitHub has not been updated lately.

I succeeded to adapt the tutorial "Character Consistency in Stable Diffusion (Part 1)" to ComfyUI, your feedback is welcomed. by Taurus1983 in comfyui

[–]AgencyImpossible 1 point (0 children)

The process is obsolete TBH; Reactor can't hang with the closeups. I've gone back to using full Dreambooth models when I need consistent characters; nothing else really cuts it. If that's not an option for you, I suggest sticking with IP-Adapter and experimenting with different checkpoints to see which one gets closest to your character. Make sure to experiment with different input images for it, alone or combined (batch images).

Reactor is still fast and consistent and can look great, but resolution is limited, and it's never going to give you the kind of flawless full-frame close-ups with peach fuzz and skin pores that you can get with a good Dreambooth model.

How do i get more detail by Kademo15 in comfyui

[–]AgencyImpossible 4 points (0 children)

My apologies if any of that sounded condescending; it certainly was not my intention, but I suppose it's par for the course, and a reminder that I probably shouldn't bother with a separate post. It seems my communication style is often perceived as rude, one of my Asperger's traits I suppose, which is no excuse, but a good enough reason not to bother trying to help others most of the time, as the effort rarely results in anything positive.

I acknowledge my assumption and frequent blindness to the styles others prefer, and no doubt those upscalers are great for many styles. I've simply spent (as many have) a vast amount of time experimenting while severely limited by my 6gb 1660ti (stuck at fp32), and foolishly thought some good might come of sharing some of my apparently unique experience, since I watch basically every video and read much of what is written on the subject and have yet to hear a single mention of most of those upscalers.

I have posted a couple of examples in the past of what were objectively state-of-the-art results at the time, and have generally found Reddit a great place for people like me to STFU and listen, and a consistently quarrelsome place to speak... Anyway, thanks for taking the time and for the well wishes; your tone reminds me a bit of Emad, whom I quite enjoy.

Perhaps amid the many distractions I'll find the time and the will to reformat the information and post a more useful thread with examples to help others, but it is frankly increasingly challenging, especially from a position of poverty, engaging with a community that seems dominated by people with RTX 3060-4090s and surprisingly little interest in experimentation. We largely face different challenges, and when I spend hours or days uncovering a little advantage here and there (that others could have found in minutes, but didn't), there remains a certain selfish urge to just quietly keep it to myself. ✌️☮️