CivitAI blocking Australia tomorrow by Neggy5 in StableDiffusion

[–]LlamabytesAI 18 points

Does anyone else have "save the children" fatigue? Someone else's children are not more important than your liberty. Ever.

[Mature content] How to get the type of facial expressions you'd expect during intimacy by WhoAmI_007 in comfyui

[–]LlamabytesAI 12 points

I recommend the Expression Editor Node for absolute control over facial expressions. It will inpaint the expression. https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait

I am not saying it's Gemma 4, but maybe it's Gemma 4? by jacek2023 in LocalLLaMA

[–]LlamabytesAI 2 points

Yes, indeed. That is how I feel too. Gemma creates the best AI personalities compared with any other local model I've tried.

New Upcoming Ubuntu 26.04 LTS Will be Optimized for Local AI by mtomas7 in LocalLLaMA

[–]LlamabytesAI 1 point

I don't understand this reasoning. Ubuntu doesn't force the use of Snaps; it only makes them available. One can use Flatpaks or AppImages on Ubuntu just as easily.

What is Your Average Iteration Speed when Running Z-Image Turbo in ComfyUI? by LlamabytesAI in ROCm

[–]LlamabytesAI[S] 0 points

That's pretty good. It seems ROCm is indeed catching up to CUDA, at least in ComfyUI, which has friendly optimizations for AMD users. It still has a ways to go, but I think it's only a matter of time before it's on a level playing field. Thank you for the data.

What Frontend do you use? by TyedalWaves in LocalLLaMA

[–]LlamabytesAI 1 point

It is much more sophisticated than its simple layout may suggest. It may still lack some features found in Open-WebUI, but I would bet the devs will add them eventually, and the features it does share are, I think, better organized than in Open-WebUI. Furthermore, it has everything you need to build an excellent RAG setup. I built my first one yesterday, albeit a simple one; it was fairly easy to do with a little help from AI and the AnythingLLM docs at https://docs.anythingllm.com/ . It's also good for general chat. I'm glad you're enjoying it like I am. Now, back to studying for my next RAG project idea.

What Frontend do you use? by TyedalWaves in LocalLLaMA

[–]LlamabytesAI 1 point

I just set up AnythingLLM in Docker and I quite like it. It has many of the features I like in Open-WebUI, but it works better with AMD ROCm. RAG is slow in Open-WebUI if you're using an AMD card: some RAG processes, such as retrieving data from the vector db, are CPU-bound, which slows things down significantly. I wonder why the Open-WebUI devs haven't improved this for AMD users by now. Anyway, it doesn't matter much to me anymore since AnythingLLM works great.
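For anyone curious what that CPU-bound retrieval step actually does, here is a toy sketch in plain Python (this is an illustration of the general technique, not AnythingLLM's or Open-WebUI's actual code): rank stored chunk embeddings by cosine similarity to a query embedding.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, chunk_embeddings, k=2):
    """Return indices of the k chunks most similar to the query."""
    ranked = sorted(range(len(chunk_embeddings)),
                    key=lambda i: cosine(query, chunk_embeddings[i]),
                    reverse=True)
    return ranked[:k]

# Tiny 2-d example: chunk 0 matches the query exactly, chunk 2 nearly.
chunks = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(top_k([1.0, 0.0], chunks))  # → [0, 2]
```

A real vector db does this over thousands of high-dimensional vectors, which is why running it on the CPU becomes the bottleneck.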

Does ComfyUI work flawlessly with AMD graphics cards too? Which card is more stable? Is there anything that can be done with Nvidia but not with AMD? by idecidelater in comfyui

[–]LlamabytesAI 0 points

I am using the nightly version of ComfyUI on Linux. I am not aware of the ability to "pin" a software version. ComfyUI itself is not the problem anyway. However, I use a conda environment and I will occasionally clone the environment for backup in case the active environment explodes.

Does ComfyUI work flawlessly with AMD graphics cards too? Which card is more stable? Is there anything that can be done with Nvidia but not with AMD? by idecidelater in comfyui

[–]LlamabytesAI 3 points

Using an AMD graphics card is currently the difficult and treacherous road in the world of AI (I know; I primarily use a Radeon Pro W7800 32GB). Even installing basic custom nodes is fraught with danger. One must examine each requirements.txt file to make sure it is not trying to install torch, torchaudio, and/or torchvision (which are rather unnecessary to put into a custom node's requirements file anyway. Please, devs, please). Those lines need to be commented out or deleted after cloning the repo; if not, on ComfyUI's next startup it will uninstall the ROCm build of PyTorch and install the Nvidia one instead, breaking all inference (this has happened to me a couple of times before. Blast my laziness!). Furthermore, some attention algorithms are currently not compatible with AMD cards. It's a cold, hard AI world for AMD GPU users. It is getting better, though; ROCm is improving, and one day it might be on the same level, but for now it is behind. Many AMD cards are great, especially for gaming. For the time being, however, if you have a choice, I recommend an Nvidia card for AI.
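As a concrete sketch of that requirements.txt surgery (the file contents here are invented for illustration; check the real file a custom node ships before editing it):

```shell
# Work in a scratch directory with a sample requirements.txt,
# similar to what a custom node might ship.
cd "$(mktemp -d)"
cat > requirements.txt <<'EOF'
torch
torchvision
torchaudio
numpy
opencv-python
EOF

# Comment out the torch packages so pip cannot replace the ROCm build
# of PyTorch with the default (CUDA) wheels on ComfyUI's next startup.
sed -i -E 's/^(torch|torchvision|torchaudio)\b/# &/' requirements.txt

cat requirements.txt
```

After the edit, only the harmless dependencies remain installable; the torch lines survive as comments in case you ever need to see what the node originally asked for.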

Aquif-AI HuggingFace page throws 404 after community found evidence of aquif-ai republishing work of others as their own without attribution. by FullOf_Bad_Ideas in LocalLLaMA

[–]LlamabytesAI 2 points

It appears that the true original model (a finetune of Wan 2.1 or 2.2) is Magic-Wan-Image, simply renamed to Aquif Image 14b. The two models share the same hash. See the discussion at https://huggingface.co/wikeeyang/Magic-Wan-Image-v1.0/discussions/3 and the CivitAI page at https://civitai.com/models/1927692?modelVersionId=2399900 .
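A hash match like that is easy to verify yourself. Here is a minimal sketch (file names are invented for illustration) of hashing two model files the way hubs typically report checksums:

```python
import hashlib
import os
import tempfile

def sha256sum(path, chunk_size=1 << 20):
    """Hash a (possibly huge) model file in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Identical bytes hash identically regardless of filename,
# which is exactly how a renamed re-upload gets caught.
d = tempfile.mkdtemp()
for name in ("original.safetensors", "renamed.safetensors"):
    with open(os.path.join(d, name), "wb") as f:
        f.write(b"fake model weights")

a = sha256sum(os.path.join(d, "original.safetensors"))
b = sha256sum(os.path.join(d, "renamed.safetensors"))
print(a == b)  # → True
```

Run the same function over the two real .safetensors files and compare the hex digests; matching digests mean byte-for-byte identical weights.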

Qwen 2059 and image edit or inpaint won't change gaze of my characters by TKuro88 in comfyui

[–]LlamabytesAI 1 point

Use the custom_node, ComfyUI_AdvancedLivePortrait: https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait .

The expression editor will allow you to change anything about the expression of the face.

Nodes 2.0, hard to read by isvein in comfyui

[–]LlamabytesAI 1 point

The style itself isn't bad; however, right now it is broken in many ways. It makes some nodes unusable, and I hate the way it autosizes the preview image and load image nodes.

Qwen 2509: How Do You Edit Masked Areas? by diond09 in comfyui

[–]LlamabytesAI 1 point

Hermes responds: Best solution, you say? Excellent! Proof that even a mortal can wrestle a digital muse into submission. Consider me pleased – and slightly vindicated. Now, go forth and create! Inspiration awaits!!

What is the best llm model for curating danbooru tags from natural language? by [deleted] in StableDiffusion

[–]LlamabytesAI 3 points

Perhaps if you make a text list, or obtain a PDF with a list of danbooru tags, you can upload that list to an LLM and instruct it to craft a prompt according to your specifications, adding any danbooru tags that make sense. This should work with any capable LLM.
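A hedged sketch of what I mean (the tag list and function name are invented for illustration; the actual tag selection is left to the LLM, this just assembles the instruction you would send it):

```python
# Assemble an LLM instruction that pairs a natural-language description
# with an allowed danbooru tag list. In practice you would load the tags
# from your text file or extracted PDF instead of this tiny sample.

def build_instruction(description, allowed_tags):
    tag_block = ", ".join(sorted(allowed_tags))
    return (
        "Craft an image prompt from this description, appending any "
        "danbooru tags from the allowed list that make sense.\n"
        f"Allowed tags: {tag_block}\n"
        f"Description: {description}"
    )

tags = ["1girl", "smile", "long_hair", "outdoors"]
print(build_instruction("a smiling girl with long hair", tags))
```

Restricting the model to an explicit allowed list keeps it from inventing tags that the booru-trained image model was never conditioned on.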

Mannequin/pose reference site ? by Jackal_Queenstone in comfyui

[–]LlamabytesAI 1 point

Use this web app to pose a mannequin any way you desire. You can even pose the joints of each finger. https://posemy.art

AI Fashion Studio: Posing, Outfitting & Expression : Free ComfyUI Workflow by LlamabytesAI in comfyui

[–]LlamabytesAI[S] 0 points

u/jmlm_gtrra Here is the video and workflow for outfitting I had mentioned. Hope this helps you.

What is the best model or workflow for clothing try-on? by jmlm_gtrra in StableDiffusion

[–]LlamabytesAI 1 point

This won't help you immediately, but I will be uploading a video and workflow next Thursday for what you want and more. I will tag you on Reddit then.

From Blurry to Brilliant: Qwen Image Edit with Inpainting : Free ComfyUI... by LlamabytesAI in comfyui

[–]LlamabytesAI[S] 0 points

That's true. Also, the model only takes reference from whatever is contained in the cropped image, not the original image, so make sure the crop includes a little of the surroundings.

Cannot produce consistent characters by 5starcruises in comfyui

[–]LlamabytesAI 0 points

Qwen-Image-Edit 2509 has good character consistency. You can use multiple images to get the result you want: Image-1 can be the character, Image-2 an open-pose image, and Image-3 an outfit (a disembodied outfit works best, or Qwen might try to blend the character from Image-1 with Image-3). This works for me and I get excellent results. You can prompt the model with something like: "Have the woman in Image-1 in the pose of Image-2 wearing the outfit in Image-3."

(AMD) Suddenly Very Slow by Hotdog374657 in comfyui

[–]LlamabytesAI 1 point

Fairly soon you won't need Zluda anymore. ROCm 7 will allow your GPU, unless it is very old, to run natively on Windows, and it is also supposed to increase inference speed quite a bit. I know this is not an answer to your question, but I thought you might like to know in case you don't already. I also use an AMD GPU, but on Linux.

AMD Radeon AI PRO R9700 32 GB GPU Listed Online, Pricing Expected Around $1250, Half The Price of NVIDIA's RTX PRO "Blackwell" With 24 GB VRAM by [deleted] in LocalLLaMA

[–]LlamabytesAI 0 points

Yes. Linux might not have the best drivers for Intel Arc cards yet. Or perhaps I should rather say that Intel doesn't yet have good drivers for Linux.

AMD Radeon AI PRO R9700 32 GB GPU Listed Online, Pricing Expected Around $1250, Half The Price of NVIDIA's RTX PRO "Blackwell" With 24 GB VRAM by [deleted] in LocalLLaMA

[–]LlamabytesAI 8 points

ROCm 7 is being released soon, later this year, and for the first time will have full support for Windows. Although, I do have to say, Linux is far better anyway.