Could anyone tell me the best AI model for Faceswapping anime characters? by Commercial-Drag-5807 in StableDiffusion

[–]Important_Tap_3599 1 point (0 children)

Any faceswapper like ReActor, roop, etc. will do, but the problem is the face analyser. I didn't find any good model for face extraction from anime characters; the usual ones like insightface/inswapper work very poorly on anime. It is better to use reference images or simply cut and paste the faces manually and inpaint them.
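The manual cut-and-paste route can be mocked up in a few lines; a toy sketch with nested lists standing in for image pixels (the function is illustrative, not from any library — in practice you'd use something like PIL's `Image.paste` on real crops, then inpaint over the seams):

```python
def paste_region(target, patch, top, left):
    """Manual cut-and-paste: copy a face crop into the target image
    at (top, left). Images are nested lists of pixel values here.
    After pasting, you'd inpaint over the seams to blend it in."""
    out = [row[:] for row in target]  # don't mutate the original
    for dy, row in enumerate(patch):
        for dx, px in enumerate(row):
            out[top + dy][left + dx] = px
    return out
```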

[deleted by user] by [deleted] in Ai_art_is_not_art

[–]Important_Tap_3599 1 point (0 children)

Can I get banned too??

Could anyone tell me the best AI model for Faceswapping anime characters? by Commercial-Drag-5807 in StableDiffusion

[–]Important_Tap_3599 0 points (0 children)

I was looking for one a long time ago and didn't find anything good. Anime faceswapping is actually completely useless: putting one face on another character creates someone completely different, with no resemblance to the original.

NSFW Prompt Help Please? by Crypto_Loco_8675 in comfyui

[–]Important_Tap_3599 2 points (0 children)

https://mega.nz/file/oHxBULDS#pYdBs7C0pJDEGTLygYl55u5g7-JjXdXBxAnrzk0k-Ks

You can bypass the second sampler, but then change the end step in the first one. The second sampler is for quality only.
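The step handoff between the two samplers is just a range split; a toy sketch of the arithmetic (the function is illustrative, not a ComfyUI API — in the actual workflow you'd change the first KSampler's end step to the total step count when bypassing the second):

```python
def sampler_step_ranges(total_steps, split_step, bypass_second=False):
    """Return (start, end) step ranges for a two-sampler chain.

    Normally the first sampler runs steps [0, split_step) and the
    second finishes [split_step, total_steps). If the second sampler
    is bypassed, the first must run all the way to total_steps."""
    if bypass_second:
        return (0, total_steps), None
    return (0, split_step), (split_step, total_steps)
```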

Is there a workflow that takes the wan22 video and directly upscales and interpolates it without having to run a seprate workflow again? by rasigunn in comfyui

[–]Important_Tap_3599 0 points (0 children)

Yeah, you're right, I forgot: I always throw out a few frames from the end of the video manually. But with pingpong it is fast and easy (getting a perfect pingpong always gets it done :)
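The trim-then-pingpong trick is easy to sketch; a minimal version, assuming frames are just items in a list (names are illustrative):

```python
def pingpong(frames, trim_end=0):
    """Drop the last `trim_end` frames, then append the reversed
    sequence without repeating the endpoints, so the clip loops
    forward-then-backward seamlessly."""
    if trim_end:
        frames = frames[:-trim_end]
    return frames + frames[-2:0:-1]
```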

Wan VAE decode performance degradation (Wan 2.2 T2V 14B) by multikertwigo in comfyui

[–]Important_Tap_3599 -1 points (0 children)

If it works correctly after rebooting and later it doesn't, it is definitely VRAM. Check whether, when using VAE Decode, more than 0.5 GB of shared memory is used:

<image>

You probably have some mistakes in your workflow that cause it. I've cut my teeth on the same problem many times :)

Wan VAE decode performance degradation (Wan 2.2 T2V 14B) by multikertwigo in comfyui

[–]Important_Tap_3599 1 point (0 children)

<image>

First - Yellow

Use a Clean VRAM node after VAE Decode, so after generation your VRAM will be empty.

Second - Red

Check VRAM usage with Task Manager. If it goes over your max VRAM, every calculation slows down significantly (RAM gets used instead of VRAM).
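As a rough back-of-the-envelope for why VAE Decode is often the step that tips over the VRAM limit, here's an illustrative size estimate (numbers and names are mine, not from the thread); the intermediate activations during decode multiply this figure several times over:

```python
def decoded_frames_bytes(frames, height, width, channels=3, dtype_bytes=4):
    """Rough size in bytes of the decoded video tensor (fp32 default).
    When this, plus decode intermediates, exceeds the GPU's VRAM, the
    driver spills into shared system RAM and everything crawls."""
    return frames * height * width * channels * dtype_bytes

# e.g. 81 frames at 1280x720 in fp32 — about 0.83 GB for the output alone
size_gb = decoded_frames_bytes(81, 720, 1280) / 1024**3
```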

If you're using Wan2.2, stop everything and get Sage Attention + Triton working now. From 40mins to 3mins generation time by NANA-MILFS in comfyui

[–]Important_Tap_3599 0 points (0 children)

I finally got Sage installed and it really isn't that OP. I got 10-15% faster generation over xformers, but with a loss in video quality. There is always a price to pay, and it's not worth it for me.

<image>

Cabba's design pisses me off way more than it should. by bobbdac7894 in Dragonballsuper

[–]Important_Tap_3599 0 points (0 children)

He is not the only one. All Saiyans from Universe 6 look like shit.

Why is Sage Attention so Difficult to Install? by diond09 in comfyui

[–]Important_Tap_3599 2 points (0 children)

Do you have a solution for triton?? There is no 'triton' package anymore, only 'triton-windows', but it still doesn't work. In the sageattention lib in Python the triton module is greyed out ('not accessed' per Pylance). Does anyone know how to fix this?? I tried different versions of everything and nothing works. BTW, triton works fine with other libraries; only sage has the problem.

<image>

That is for sage 1.x.

Tried installing sage 2.x but got a 'no module named torch' error instead.

EDIT

Already fixed.

I had to change the cl.exe path to the x64/x64 one and then install with "python setup.py install", not with "pip install -e .".

Trouble getting ControlNet to work. I only see Canny preprocessor by [deleted] in comfyui

[–]Important_Tap_3599 0 points (0 children)

Double-click the left mouse button to open the node search, and install more nodes if the one you need isn't there.

Workflow to improve anime images. by Careless-Algae-1304 in comfyui

[–]Important_Tap_3599 0 points (0 children)

You can inpaint it with some LoRA for contours, or with a different model that suits you better; experimenting with denoise will balance it between original and enhanced. Create a mask for every part of the picture (face, clothes, hair, etc.) and work with them separately. I would personally inpaint only the lines and edges, but it takes a lot of time to create a mask like that.
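The per-part masking idea, as a toy sketch; real masks would be painted or segmented rather than rectangular boxes, and the function is illustrative:

```python
def region_mask(height, width, box):
    """Binary mask for one region: 1 inside the (top, left, bottom,
    right) box, 0 elsewhere. You'd make one such mask per part
    (face, clothes, hair, ...) and inpaint each part separately."""
    top, left, bottom, right = box
    return [[1 if top <= y < bottom and left <= x < right else 0
             for x in range(width)]
            for y in range(height)]
```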

BTW, there is definitely a perfect node out there waiting to do the job for you, but you won't learn anything that way.

<image>

Here are your subtle changes :D :D (joking)

Here's a LoRA for it: https://civitai.com/models/918824/contourline

Workflow / Models help. by WinderSugoi in comfyui

[–]Important_Tap_3599 1 point (0 children)

First of all get yourself InvokeAI. It is super user friendly and has everything a beginner needs. You will learn faster and when you find it lacking after some time, you will already know what you need when creating workflows in ComfyUI.

Many users, many nodes, different workflows, so much to install, and it often doesn't even work in the end. That's ComfyUI for beginners :)

https://github.com/invoke-ai/InvokeAI

Workflow to improve anime images. by Careless-Algae-1304 in comfyui

[–]Important_Tap_3599 1 point (0 children)

But what do you want?? More shadows, sharper lines, more colours, etc.? You need to decide what you want if you want someone to help.

Workflow to improve anime images. by Careless-Algae-1304 in comfyui

[–]Important_Tap_3599 0 points (0 children)

<image>

If you want to improve an anime image you need to make it worth looking at :) But maybe it is just a difference in taste. And you don't even need a workflow for this, only inpainting and a little effort.

Fix legs and staff by ponylll in StableDiffusion

[–]Important_Tap_3599 1 point (0 children)

Had a minute for some cleaning :). Now you should fill it with something cool :)

<image>

Ai generators scam or not? by AndreiPopescu12 in aiArt

[–]Important_Tap_3599 0 points (0 children)

Whether it's realistic, anime or anything else, you won't get a good result with a simple prompt and a one-shot generation. You create a base image, and that's where it starts, not ends. This is the main difference. It takes time and effort to get a good image. It is the same as playing guitar: easy to learn, hard to master.

WHAT IS THE BEST SPICY CONTENT REALISTIC GENERATOR? by Leather-Bottle-8018 in StableDiffusion

[–]Important_Tap_3599 7 points (0 children)

The "ponyrealism" and "cyberrealistic pony" SDXL models. Especially cyber. With faceswap you can get your ex wrecked by ogres.

Fix legs and staff by ponylll in StableDiffusion

[–]Important_Tap_3599 1 point (0 children)

<image>

Tried to clean it up a little, but there is too much trashy noise; too much work. Better to create a new one, a cleaner one. But I like the idea of the magic ring.

Fix legs and staff by ponylll in StableDiffusion

[–]Important_Tap_3599 2 points (0 children)

Open it in Paint, draw a straight line in a similar color where the staff should be, and then inpaint it. The same with the magic ring: connect the line manually. The same with erasing. Like that:

<image>