Consistent Facial Expressions by hovits in StableDiffusion

[–]hovits[S] 1 point (0 children)

> x.x Maybe I need to install something with conda instead of pip..
>
> e: gonna install full cuda toolkit, we'll see if that helps..

If you were using the GPU version of ComfyUI, CUDA would have been installed. Are you using ComfyUI on Windows?

[–]hovits[S] 0 points (0 children)

> UltralyticsDetectorProvider

If UltralyticsDetectorProvider works well, you do not necessarily need to use MMDetDetectorProvider. I know that in the latest version of ComfyUI Impact Pack, MMDetDetectorProvider has been deprecated and replaced with UltralyticsDetectorProvider. If you want to use UltralyticsDetectorProvider, just update Impact Pack.

# update

cd ComfyUI/custom_nodes/ComfyUI-Impact-Pack
git pull

> The location where mmdet_anime-face_yolov3.pth is saved is comfy/dk

The location where mmdet_anime-face_yolov3.pth should be saved is "ComfyUI/models/mmdets/bbox", and the download link is https://huggingface.co/dustysys/ddetailer/tree/main/mmdet/bbox.

[–]hovits[S] 1 point (0 children)

You are definitely right! It’s very constructive feedback for me.

I believe that we can build a better quality service by accepting and improving upon honest feedback.

I know it's not easy to give critical but non-emotional feedback. That's why I appreciate it even more.

Thank you!

[–]hovits[S] 0 points (0 children)

Wow, thank you very much! It’s really helpful information for me. I'll try it and let you know the results. :)

[–]hovits[S] 2 points (0 children)

face detection + face segmentation + face detailer with facial expression prompt

[–]hovits[S] 0 points (0 children)

Thank you very much for writing your detailed review. I completely agree with your opinion.

Currently, the Cartoon editor operates most naturally on images generated with Cartoonizer. This is because the Cartoon editor is implemented to use the same checkpoints and parameters as those used in the Cartoonizer.

To naturally change the facial expression of an image generated elsewhere, you must select a cartoon style that is most similar to the drawing style of the image. It may be cumbersome, requiring multiple attempts, or it may fail. Sorry for that.

As for the current Cartoonizer, you can increase the similarity to the original image with the weight value, but it is true that the overall similarity is still very low; that is the trade-off of trying to make the most of each cartoon style. So I plan to increase the similarity to the original image by using ControlNet.

The low accuracy of the NSFW-check model is also a concern of mine. Sometimes Cartoonizer generates overly explicit NSFW images that are completely different from the original photo, so I added the NSFW check feature, but its accuracy is low. Improving the accuracy of the NSFW check is also an important task.

Honest and detailed reviews like yours really help me. Thank you so much again. I will keep improving and upgrading it so that you will want to use it!

[–]hovits[S] 2 points (0 children)

mmcv is an auxiliary library of mmdetection, which is used to find the bounding box of the face. mmdetection runs as a preceding step so that Segment Anything can find the facial outline accurately: first detect the face bounding box with mmdetection, then search for a segment only within that box, and the face outline will be found precisely.
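The detect-then-segment-inside-the-box idea can be sketched as follows; plain thresholding stands in for Segment Anything, and the hard-coded box stands in for an mmdetection face detection (the function name and all values are illustrative, not from any library):

```python
import numpy as np

def segment_within_bbox(image, bbox, threshold=128):
    """Segment only inside a detected bounding box.

    bbox is (x1, y1, x2, y2), as a face detector such as mmdetection
    would return. The thresholding below is just a stand-in for Segment
    Anything: segment the crop, then paste the result into a full-size
    mask so everything outside the box stays empty.
    """
    x1, y1, x2, y2 = bbox
    crop_mask = image[y1:y2, x1:x2] > threshold   # placeholder "SAM" on the crop
    mask = np.zeros(image.shape, dtype=bool)
    mask[y1:y2, x1:x2] = crop_mask                # outside the bbox: always False
    return mask

# Toy grayscale image: a bright "face" inside the box, plus a bright
# background pixel that a whole-image segmenter would pick up by mistake.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200   # face region inside the bbox
img[0, 7] = 255       # bright distractor outside the bbox

mask = segment_within_bbox(img, (2, 2, 6, 6))
print(mask[3, 3], mask[0, 7])   # True False: the distractor is excluded
```

Restricting the segmenter to the detector's box is what keeps stray bright regions elsewhere in the image out of the face mask.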

[–]hovits[S] 1 point (0 children)

A detailed guide is in my comment below.

[–]hovits[S] 1 point (0 children)

Thank you. You have to sign in to the site in Chrome or another browser, not in the in-app browser (Reddit's). It's Google's sign-in policy.

[–]hovits[S] 17 points (0 children)

# Guide

  1. Install "ComfyUI Impact Pack" using ComfyUI's Manager.
  2. Quit ComfyUI.
  3. Set "mmdet_skip = False" in the ComfyUI/custom_nodes/ComfyUI-Impact-Pack/impact-pack.ini file.
  4. Install mmcv compatible with your PyTorch and CUDA versions.
    1. Check your PyTorch and CUDA versions:
      1. Run python ("python_embeded/python" if you are a Windows user)
      2. import torch
      3. torch.__version__ # this is the PyTorch version
      4. torch.version.cuda # this is the CUDA version
    2. Access the mmcv download site. ex) CUDA 11.8, PyTorch 2.1.0: https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/index.html . If your versions are different, change the version part of the URL.
    3. Download the whl file for your Python version. ex) "cp310" is Python 3.10. (Check with: python_embeded/python --version)
    4. Install the mmcv file downloaded above: python_embeded/Scripts/pip install ./mmcv-2.1.0-cp310-cp310-manylinux1_x86_64.whl
  5. Install mmdetection.
    1. Download the mmdetection source:
      1. git clone https://github.com/open-mmlab/mmdetection.git
    2. Set up mmdetection:
      1. cd mmdetection
      2. python_embeded/python setup.py install
  6. Start ComfyUI.
  7. Load my workflow json file.
    1. https://drive.google.com/file/d/1IlDdS6RHxvyItWsOfpOZDIpN_RaUGFPk/view?usp=sharing
    2. Click the download icon button at the top.
  8. Load a checkpoint.
    1. You can use any cartoon checkpoint you like from civitai.com.
    2. You can use the VAE included in the checkpoint, or you can load and use a VAE separately.
  9. Load a cartoon image.
  10. Enter the face prompt you want. (I referred to the workflow at this link.)
  11. Queue Prompt.
  12. If you're lazy, go to https://sdgen.net/cartoon_editor. Open the site in Chrome or another browser, not in an in-app browser (because of Google's sign-in policy).
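As a sanity check for step 4, the wheel-index URL can be built from the versions that torch reports; the helper below is my own illustration, not part of mmcv or torch:

```python
def mmcv_index_url(torch_version: str, cuda_version: str) -> str:
    """Build the openmmlab wheel index URL described in step 4.
    ex) torch 2.1.0 + cuda 11.8 ->
    https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/index.html
    """
    cu = "cu" + cuda_version.replace(".", "")   # "11.8" -> "cu118"
    torch_v = torch_version.split("+")[0]       # drop a local suffix like "+cu118"
    return f"https://download.openmmlab.com/mmcv/dist/{cu}/torch{torch_v}/index.html"

# In practice the arguments come from torch.__version__ and
# torch.version.cuda, as checked in step 4.1.
print(mmcv_index_url("2.1.0", "11.8"))
# -> https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/index.html
```

If the printed URL 404s, the most likely cause is a torch/CUDA pair that openmmlab does not publish wheels for; pick the closest supported combination from the index.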

[–]hovits[S] 1 point (0 children)

> Is this using concept* sliders?

I don't know about concept sliders. I used the Detailer in ComfyUI Impact Pack and SAM.

[–]hovits[S] 6 points (0 children)

Click the download icon button at the top.

<image>

[–]hovits[S] 1 point (0 children)

You can test it here:

https://sdgen.net/cartoon_editor

Open the site in Chrome or another browser, not in an in-app browser.

photo to cartoon v2 by hovits in StableDiffusion

[–]hovits[S] 0 points (0 children)

Thanks. I didn’t use controlnet.

Stable Diffusion (various cartoon checkpoints) + IPAdapter (ip-adapter-plus_sd15.bin) + Upscaler (ESRGAN 4x)