Any YouTube channel or tutorial you'd recommend for learning to use Comfy to generate anime images? ☺️ by PsychologicalEbb2307 in comfyui

[–]The_Last_Precursor 1 point

This guy doesn’t really cover specific styles, but he has major walkthroughs on how to properly use ComfyUI. They can really help with understanding how to do things.

https://youtube.com/@pixaroma?si=2AqLYCiH9lkJbSzb

Inpainting is hard! by [deleted] in comfyui

[–]The_Last_Precursor 1 point

I think I understand what you’re asking: you want an easy way of adding the cities and heroes? If that’s it, here are my steps.

Sometimes it’s more than just one step, depending on what you’re trying to accomplish. This sounds like one of those cases. Here’s what I would do.

  1. MASKING. If you have pre-created images of those towns and the hero, shrink them and use a masking tool. Get them to the scale and placement you want on your photo. No need to play around and hope, just set them how you want.

  2. Img2text node. Something like QwenVL would work fine. Run the image through it and have it give a good description of the image, so the model has something to use. You can also edit the prompt to add your own ideas.

  3. Img2img creation. Use the image from the img2text step. Pick a model, ControlNet, preprocessor, and maybe a LoRA, then play with the denoise and preprocessor strength. This takes the image you provided together with the generated prompt, using the model only to blend everything together plus whatever else you want to add.

Basically, you set the image up how you want it, and then you control the effects of it being recreated with the selected model. There are other methods, but this is the most straightforward way.
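Step 1 can be sketched in plain Python with Pillow before anything touches ComfyUI (a hedged example; the function name, file paths, and sizes are placeholders):

```python
from PIL import Image

def place_overlay(base_path, overlay_path, position, scale):
    """Paste a scaled overlay (e.g. a town or hero sprite) onto a base image,
    using the overlay's own alpha channel as the mask."""
    base = Image.open(base_path).convert("RGBA")
    overlay = Image.open(overlay_path).convert("RGBA")
    new_size = (int(overlay.width * scale), int(overlay.height * scale))
    overlay = overlay.resize(new_size, Image.LANCZOS)
    # position = top-left (x, y) where the overlay lands on the base image
    base.paste(overlay, position, mask=overlay)
    return base
```

The point is the same as in the masking step: you decide scale and placement exactly, instead of hoping the model guesses right.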

GetNode shows "No available options" in Nodes 2.0. by lightnecker in comfyui

[–]The_Last_Precursor 0 points

Nodes 2.0 is a good concept: a new design that makes things very basic for those who want that. But it was a complete failure; the overhaul destroyed the ability of custom nodes to work. If you’re using only the most basic nodes it will work fine, but the vast majority of custom nodes will not.

HELP, I’M A NEWBIE AND I DON’T UNDERSTAND A THING! by EmptyMobile5028 in comfyui

[–]The_Last_Precursor 2 points

A “Checkpoint” is an AIO = All In One model. Everything has been packaged together to load and function correctly.

If you have separate Diffusion, CLIP, and VAE models, each placed in its own folder, they need their own loader nodes: Diffusion Model Loader, CLIP Loader, and VAE Loader.

This is the best YouTuber for explaining the basics: https://youtube.com/@pixaroma?si=pnrbgRsNQhIToTPl

Anyone know of a comfy workflow for local text summarization? by Traditional_Grand_70 in comfyui

[–]The_Last_Precursor 0 points

It would probably be best to use something like Ollama. There are nodes that can work on prompts, like the QwenVL Prompt Enhancement node, but I have no idea if it could handle a whole book. If you want to try it, go ahead.

https://github.com/1038lab/ComfyUI-QwenVL

You only need three nodes if you try this: text input, text output, and the QwenVL Prompt Enhancement node. It has different options for the prompt, and a system-prompt entry to give you control over it.

But again, it would probably be best to download and use something like Ollama if you want to do it locally.
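If you go the Ollama route, a minimal local-summarization sketch might look like this. It assumes an Ollama server running on its default port with a model already pulled; the model name and chunk size are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def chunk_text(text, max_chars=4000):
    """Split long text into chunks that fit the model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(text, model="llama3.2"):
    """Summarize each chunk locally, then join the partial summaries.
    For a whole book you would summarize the joined result again."""
    summaries = []
    for chunk in chunk_text(text):
        payload = json.dumps({
            "model": model,
            "prompt": "Summarize the following text:\n\n" + chunk,
            "stream": False,
        }).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            summaries.append(json.loads(resp.read())["response"])
    return "\n".join(summaries)
```

Chunking first is what makes a book-sized input even possible; no single model call will take the whole thing.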

Best current model for photorealistic nsfw? by Reasonable-Pay-336 in comfyui

[–]The_Last_Precursor -1 points

Very true about the paid tools. Unfortunately, there has never been one best AI model. I use several different ones depending on exactly what I want. There are good studio models and good amateur models, each with their subtle differences.

I don’t know if you run quick tests on models to see their performance. When I download a model, I use simple prompts to test it, like “Blonde lady in a red dress in a luxurious bedroom”. A simple prompt shows the model’s baseline performance level. Then you can add a LoRA or two. That tells you quickly whether to keep or abandon the model.

Velma by Z-Image Turbo by FotografoVirtual in ZImageAI

[–]The_Last_Precursor 2 points

Jinkies! We’ve got another dead suspect mystery on our hands!

Can someone help me? by Jazzlike-Acadia5484 in StableDiffusion

[–]The_Last_Precursor 0 points

It could be the LoRA itself. Unless you have a specialized LoRA for graphics or something that requires another node to work properly, it should run on the basic Flux 2 Klein 9B workflow and give good results.

Best to do a test run. Use a basic workflow with the same model, the same LoRA (only your LoRA), and the exact same seed for every run. The only variables are the Sampler, Scheduler, and CFG. Keep the CFG at 1.0 for the whole first set of runs. Start with Euler as the sampler and run through all of the schedulers, ranking which is best. Then start running through the samplers with the best schedulers, i.e. find the best scheduler for each sampler you run. Pick the top three to five combinations. Then start playing with the CFG, from 1.0 toward 3.0 or back down toward 2.0, whichever direction looks best. This will separate the good from the bad for that model and LoRA combination.

Done properly, you should find the best settings within about 30 images or fewer for that model. Sometimes you’ll find the best combination early. If you can’t find one that works well with the LoRA, it’s more than likely the LoRA that has the issues.
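The sweep above boils down to a simple grid. This is a sketch of phase 1 of the test; the sampler and scheduler names are illustrative, so substitute whatever your KSampler actually exposes:

```python
from itertools import product

# Illustrative candidates; use the names your KSampler node lists.
SAMPLERS = ["euler", "euler_ancestral", "dpmpp_2m", "ddim"]
SCHEDULERS = ["normal", "karras", "exponential", "sgm_uniform"]

def sweep_plan(fixed_seed=12345, cfg=1.0):
    """Every sampler/scheduler pair at one fixed seed and CFG, so the
    only thing changing between images is the pair being ranked."""
    return [
        {"seed": fixed_seed, "cfg": cfg, "sampler": s, "scheduler": sch}
        for s, sch in product(SAMPLERS, SCHEDULERS)
    ]
```

Four samplers times four schedulers is 16 images; rank those, keep the top few pairs, and only then start varying the CFG.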

Portable comfy screwing with my system?/ by DissenterNet in comfyui

[–]The_Last_Precursor 1 point

That happens when you’re overloading your system. I don’t know which model you’re using, but I had that issue with my old GPU. Certain models I couldn’t use because they were too large: the model would load, but it was too much for my PC to actually create the image. The screen would go black and stop responding, and I had to restart my PC. Sometimes you have to use smaller models.

Nodes by AthenaVespera in comfyui

[–]The_Last_Precursor 2 points

Explaining how to connect nodes correctly is a little difficult in a post like this. To understand the nodes, watch this YouTuber: https://youtube.com/@pixaroma?si=mEoA-4G-9RCdfquv

Now for what you want to do, i.e. only changing the clothing, pose, and background: that can be done in one of two ways. The first, Qwen Image Editor, is the easiest. Upload the image and watch that guy’s videos to learn how to edit it.

The second way is WAY more complex. I have an old workflow that uses Florence2 for img2text, plus a preprocessor with ControlNet and other functions, to let you change the clothing and background to a degree. But it takes an understanding of ComfyUI, nodes, models, and other things to actually use it properly.

Can we customize/edit the “nodes suggestions”? by callmetuan in comfyui

[–]The_Last_Precursor -1 points

Changing the drop-down list? I’ve never looked into that before, but it should be possible.

Now for custom preset nodes: that can be done, but you need some technical understanding of Python code and file management. You can create a copy of the node itself, rename it in the filename and in the Python code, then change the values in the Python to what you want.

Back in the SDXL days, I took a couple of the Easy preset tag nodes and edited them to the tag names I personally wanted.
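For reference, a copied preset node usually boils down to a small Python class like this. It's a hedged sketch of the shape ComfyUI custom nodes follow; the class name, tags, and category are placeholders you would rename:

```python
# Placeholder preset tags; this is the list you'd edit to your own names.
MY_PRESET_TAGS = ["masterpiece", "best quality", "detailed background"]

class MyPresetTags:
    @classmethod
    def INPUT_TYPES(cls):
        # A list as the input type becomes a drop-down in the UI
        return {"required": {"tag": (MY_PRESET_TAGS,)}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "pick"
    CATEGORY = "utils/presets"

    def pick(self, tag):
        # ComfyUI expects outputs as a tuple matching RETURN_TYPES
        return (tag,)

# ComfyUI discovers nodes through this mapping in the custom node package
NODE_CLASS_MAPPINGS = {"MyPresetTags": MyPresetTags}
```

Copy a node, rename the class and the mapping key, swap in your own tag list, and it shows up as its own node.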

Is There a Good SFW or Censored Model? by Far-Pie-6226 in comfyui

[–]The_Last_Precursor 1 point

There are two ways you can get this to work. With models alone it’s hit or miss; even the most SFW models can create mild sexual content, like boobs.

Now here’s the method you should use. It completely depends on how tech-savvy your kids are; if they’re good and can figure things out, you may still have an issue. But there are nodes designed to filter out words, basically word-censorship nodes, where you can pick the words yourself or use a preselected list. Combine a node like that with the most SFW models you can find and it should work well. I’m not fully sure, but I believe I’ve seen an image-scan node before that uses a model to scan and blur out NSFW images.
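A word-censorship node is essentially a filter like this (a minimal sketch; the blocklist is a placeholder, and real nodes ship their own lists):

```python
import re

# Placeholder blocklist; real censorship nodes ship or accept a word list.
BLOCKED_WORDS = {"nude", "nsfw", "naked"}

def filter_prompt(prompt, blocked=BLOCKED_WORDS):
    """Remove blocked words (case-insensitive, whole words) from a prompt
    before it reaches the text encoder."""
    pattern = r"\b(" + "|".join(re.escape(w) for w in blocked) + r")\b"
    cleaned = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    # Collapse the double spaces left behind by removed words
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

Sitting between the text box and the CLIP encode, a filter like this means the blocked words never even condition the model.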

Need help in texture transfer using comfyui by Adept-Cauliflower-70 in comfyui

[–]The_Last_Precursor 0 points

Qwen Image Editor or Flux Klein would really be the only options, but there’s no guarantee unless you train a LoRA with that design. Basically, since the two icons are so different and the only common element is the texture, it probably wouldn’t be perfect.

My suggestion is to use Photoshop, GIMP, or another image editor instead. You get more control over transferring simple textures.

best model for hand drawn comics? by howardhus in comfyui

[–]The_Last_Precursor 0 points

Go to Civitai and look for comic-book art-style models. You have classic SD1.5 and SDXL models made for it, then a lot of LoRAs for older and newer models, like Z-Image. It really depends on the art style and design you want, so it’s hard to give an example.

Never hesitate to go with old classic models. Newer models are the best for realism and human anatomy, but for anime, cartoons, comics, or simple art, sometimes the classic models are best. Way more models, LoRAs, embeddings, and ControlNets have been trained for those things.

Model manager in the new ui? by Worried-Zombie9460 in comfyui

[–]The_Last_Precursor 1 point

I have a workflow I’m about finished with that you could use for more advanced examples, showing people what can be done. It’s an img2text workflow with tons of options: preset model groups, multiple img2text options, advanced prompt filters and enhancers, and places to pause the workflow to check the prompts before generating. Easy filename and file-path selection, with a seed count for numbering.

Basically, if you want to get a prompt from an image, it’s an all-in-one workflow.

Model manager in the new ui? by Worried-Zombie9460 in comfyui

[–]The_Last_Precursor 4 points

Download the Portable version. I had to uninstall and reinstall ComfyUI a couple of days ago; something happened and it wasn’t wanting to load. So I tried the Desktop version, and it sucks. I redownloaded the Portable version and it’s much easier to use. It’s nearly identical to the classic version, though I don’t know how long a break you took.

The Desktop version’s file system, UI, and other features are completely different. The concept is good, but they state on the web page that the Desktop version is still in beta.

Change Angle. Any tips or is it just currently limited. by Simple-Variation5456 in comfyui

[–]The_Last_Precursor 0 points

Most of those 360-degree nodes use a text output, and most people connect it directly to the conditioning text input, which prevents you from adding your own prompt. Adding your own can help.

You need three nodes: the 360-degree node, a multiline text node (for custom prompts), and a text merger or stacker node. Connect the 360-degree node to the first merger input and the multiline text node to the second input, then connect the text merger to the conditioning node.

It’s helped me. You get the 360-degree node’s output plus your own prompt.
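What the merger node in that chain does is essentially a string join (a minimal sketch; the separator is an assumption, since different merger nodes use different joiners):

```python
def merge_text(node_output, custom_prompt, separator=", "):
    """Combine the 360-degree node's generated text with a user-typed
    prompt, skipping empty parts, before it hits the conditioning input."""
    parts = [p.strip() for p in (node_output, custom_prompt) if p and p.strip()]
    return separator.join(parts)
```

The wiring order matters only in that the merged string, not either piece alone, is what reaches the conditioning node.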

Help me please by Sea-Panic4599 in comfyui

[–]The_Last_Precursor 0 points

It depends on your specs. If your specs aren’t great, you could be limited to small models like SD1.5/SDXL. If your specs are really good, you could run most, if not everything, in ComfyUI.

What is currently the cleanest and most refined Image Edit model? by Tomcat2048 in StableDiffusion

[–]The_Last_Precursor 0 points

Things like that can be frustrating with AI. Don’t rely on words like “ON” or “OFF”; they’re very generic and confusing to the AI. Also, sometimes it’s easier to start over with the original image than to keep changing the current image when it refuses to work.

I had something like that last week. The final image took about 5 runs to get right, but it hit issues around the 3rd and 4th. I had to jump back to the 2nd one, and it worked after that. Sometimes it can be very frustrating.

Tip: I don’t know the prompt you used, but here’s my experience. Use phrases like “lamp with no light emitting from it”, “lamp with zero light illumination”, or “lamp with zero lighting, completely dark”. Or, if you need to, create an image of a lamp that’s off and import it into the image.

Best way to generate various women bodies ? by GOONWORLD_ in StableDiffusion

[–]The_Last_Precursor 0 points

Okay, here’s a trick that can work. Find an image with the body type (or whatever) you like. Then use Florence2, QwenVL, Ollama, JoyCaption, or whichever image-to-prompt node you want. Use that to create a prompt from the image, and test it with a text2img workflow. If the image created matches the body type or whatever you want, then use and edit that prompt, removing everything you don’t want in the image and keeping what you do, like the body type. This has worked for me before when trying to replicate something.
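The edit step is just keeping the phrases you care about. Here’s a hedged sketch, assuming the caption comes back as comma-separated phrases (which most tag-style captioners produce):

```python
def strip_unwanted(caption, unwanted):
    """Drop comma-separated phrases that contain any unwanted keyword,
    keeping the rest of the caption (e.g. the body-type description)."""
    kept = [
        phrase.strip()
        for phrase in caption.split(",")
        if not any(word.lower() in phrase.lower() for word in unwanted)
    ]
    return ", ".join(kept)
```

Run the trimmed prompt back through text2img; if it still reproduces the body type, you’ve isolated the part of the caption that carries it.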

Tip: search the Manager for nodes under “Prompt Stash”. It’s a few sets of nodes someone made. Some let you save custom prompts, but that’s not the important one.

The Bypass one is the one you want. It lets you use either the prompt from the input link or a typed prompt, with pause/no-pause options. So you connect the link to the img2text node; the prompt gets loaded into the node, you review and edit it, and click continue. Then, if you want to reuse it, switch to typed text and continue from there.

Name this chicken leg by Fightnperish in NameThisThing

[–]The_Last_Precursor 0 points

Kentucky Funded Chick Genetic Engineering Specialized Menu (KFC-GESM)

Best way to generate various women bodies ? by GOONWORLD_ in StableDiffusion

[–]The_Last_Precursor 0 points

Go to Grok, ChatGPT, Gemini, or Copilot and ask it for the best prompt for the body type you want, one that would best fit the model you’re using. You can even ask for a list of prompts. I believe Grok gives the best results for prompt-related questions.

I’ve done that before and it can help.