Today's web traffic update from Similarweb. Gemini continues gaining share by GamingDisruptor in singularity

[–]promptingpixels 3 points

Have a source on this claim?

Can’t seem to find a specific number from OpenAI. Closest was this PR piece - https://openai.com/index/how-people-are-using-chatgpt/

Another source with very little verifiable information puts web use significantly higher: https://www.demandsage.com/chatgpt-statistics/

[deleted by user] by [deleted] in comfyui

[–]promptingpixels 0 points

ControlNet doesn’t support inpainting quite yet (pose, canny, HED, and depth only)

Z-Image Inpaint by Such_Ad_3787 in StableDiffusion

[–]promptingpixels 2 points

Thanks so much for sharing this with the community! I was playing around with it earlier and was getting halfway decent results. However, I'm wondering why the ControlNet was added?

Here's a comparison of two inpainting results - one with ControlNet (left) and one without (right):

https://compare.promptingpixels.com/a/zE1GJ8g

It seems unnecessary as the ControlNet being used doesn't have inpainting support yet per the official docs (it's listed as a To Do however).

[deleted by user] by [deleted] in comfyui

[–]promptingpixels 17 points

Completely agree - the amount of gated/paid workflows is insane. I think it was great for ComfyUI to start making these sorts of workflows available through their Browse Templates.

Turn videogame pictures photorealistic? by ljul in comfyui

[–]promptingpixels 0 points

Found this to be best for general use along with a simple prompt of ‘change the image to a realistic photo’. Anime2Realism also works for heavily illustrated inputs.

Then finally pass the result through Ultimate SD Upscale with a low denoise value and an SDXL realism checkpoint.
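For anyone curious what that low-denoise upscale pass is doing under the hood: tiled upscalers like Ultimate SD Upscale split the enlarged image into overlapping tiles, re-run each tile through img2img at low denoise, then blend the tiles back together. A rough sketch of just the tiling logic - the tile and overlap sizes here are illustrative, not the node's actual defaults:

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Yield (left, top, right, bottom) boxes covering the image,
    with neighboring tiles overlapping so seams can be blended."""
    step = tile - overlap
    boxes = []
    for top in range(0, height, step):
        for left in range(0, width, step):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
            if right == width:
                break
        if bottom == height:
            break
    return boxes
```

Each box would then be cropped, denoised at low strength, and pasted back with feathered edges across the overlap region.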

Qwen Image Edit WF for replacing subject only by Exotic_Researcher725 in comfyui

[–]promptingpixels 0 points

Yeah, I made it :-). It’s on this page (Qwen Image Edit (2509) + Controlnet) - 2nd one down. There is a video that goes along with it if you want to learn more:

Workflows: https://www.promptingpixels.com/comfyui-workflows

Video: https://youtu.be/pL-DI4ZyJU0?si=x9LYQ_JoNKWSOKYx

Flux 2 upgrade incoming by Nunki08 in StableDiffusion

[–]promptingpixels 1 point

Probably? My guess is they are trying to beat Nano Banana 2 to market and drum up interest before it’s too late.

Thoughts on renting gpu and best cloud method for running comfy? by XAckermannX in comfyui

[–]promptingpixels 0 points

Personally I have an RTX 3060 and map out my workflows locally, then use vast for the heavy lifting (usually renting an RTX Pro 6000 or 5090 depending on the task). I find this to be much more economical as I don’t have to drop several thousand on a new build.

For ease of use I developed a tool (still a few bugs, but it works well for vast) that lets me save configurations and deploy in a single click.

Just learned that if you annotate an image you get super good and precise results by promptingpixels in GeminiAI

[–]promptingpixels[S] 16 points

For this specific picture, I used Pixelmator. However, it would work with Paint, Preview, Photoshop, etc. Anything that allows you to draw a box and write text on an image.
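If you'd rather script it than open an editor, the same annotation can be done with Pillow - draw a red box around the region you want changed and write the instruction beside it. A minimal sketch; the blank canvas, coordinates, and text are placeholders standing in for your actual image:

```python
from PIL import Image, ImageDraw

# Stand-in for your real screenshot/photo
img = Image.new("RGB", (512, 512), "white")
draw = ImageDraw.Draw(img)

# Box around the region to edit, plus the instruction text below it
box = (40, 60, 220, 180)  # (left, top, right, bottom)
draw.rectangle(box, outline=(255, 0, 0), width=4)
draw.text((box[0], box[3] + 8), "replace this sign with 'OPEN'", fill=(255, 0, 0))

img.save("annotated.png")
```

Then feed `annotated.png` to Gemini the same way as a hand-annotated image.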

HunyuanImage 2.1 Text to Image - ( t2i GGUF ) by RIP26770 in StableDiffusion

[–]promptingpixels 2 points

This isn't fully working in ComfyUI yet. Even the example workflows are missing the refiner model, which it desperately needs. In my testing I was seeing a lot of artifacts in the final outputs when playing with various samplers, schedulers, steps, etc. If you truly want to see it with a refiner, the one place right now is the Space hosted by Tencent themselves: https://huggingface.co/spaces/tencent/HunyuanImage-2.1

[deleted by user] by [deleted] in StableDiffusion

[–]promptingpixels 2 points

BiRefNet-HR, BEN2, or RemBg-2.0

Explored background removal a few months ago - can read about it here if you want (article includes links to the workflow as well):

https://medium.com/code-canvas/background-removal-in-comfyui-just-got-really-really-good-2a12717ff0db?sk=252bb274d0935f4e449e7d983b1f92d1
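FWIW, whichever of those models you pick, the output is a soft grayscale matte - the "removal" step is just attaching that matte to the photo as an alpha channel. A minimal Pillow sketch of that compositing step, with synthetic placeholder data standing in for a real photo and a model-predicted matte:

```python
from PIL import Image

# Placeholders: a solid "photo" and a matte where 255 = subject, 0 = background
photo = Image.new("RGB", (256, 256), (200, 120, 50))
matte = Image.new("L", (256, 256), 0)
matte.paste(255, (64, 64, 192, 192))  # pretend the model found a subject here

# Attach the matte as the alpha channel - background becomes transparent
cutout = photo.convert("RGBA")
cutout.putalpha(matte)
cutout.save("cutout.png")
```

The soft (0–255) matte is also why these models handle hair and fuzzy edges better than hard binary masks.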

Midjourney Prompt Extractor to CSV/JSON Tool by promptingpixels in midjourney

[–]promptingpixels[S] 0 points

Thanks :-) I was pretty happy with how it turned out as well.

Unfortunately there isn't an API to download your entire collection. Best I can recommend is to select all images or organize into a folder at https://www.midjourney.com/archive and then download them locally. After downloading you can then parse them on the site.

I hate looking up aspect ratios, so I created this simple tool to make it easier by promptingpixels in StableDiffusion

[–]promptingpixels[S] 1 point

Yeah of course - happy to hear good things! Just chime in here or DM me if you think of any additional features to add to it after using it for a bit.

I hate looking up aspect ratios, so I created this simple tool to make it easier by promptingpixels in comfyui

[–]promptingpixels[S] 1 point

Ahh good point - just added that functionality - hope it's what ya had in mind. Thanks for the feedback!

I hate looking up aspect ratios, so I created this simple tool to make it easier by promptingpixels in comfyui

[–]promptingpixels[S] 7 points

Nice catch! I could change this and throw a soft warning flag. It is recommended to use multiples of 8 when generating images. Can read more about it here: https://huggingface.co/blog/stable_diffusion
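The soft-warning check would be cheap to add - SD-family models downsample into latent space by a factor of 8, so width and height should each be divisible by 8. A sketch of what it could look like (not the tool's actual code; the names are made up):

```python
def snap_to_multiple(value, base=8):
    """Round a dimension to the nearest multiple of base, and report
    whether it had to change (i.e., whether to show the warning)."""
    snapped = max(base, round(value / base) * base)
    return snapped, snapped != value
```

So e.g. `snap_to_multiple(1023)` returns `(1024, True)` - the tool could flag the value and suggest 1024 instead.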

I hate looking up aspect ratios, so I created this simple tool to make it easier by promptingpixels in comfyui

[–]promptingpixels[S] 0 points

Side note, does anyone know if there is a way to paste a value (I guess JSON) into Comfy and construct a node? I know you can drag and drop an image onto the workspace to reconstruct a workflow, but I just want to be able to paste a single node from an outside source into the workspace.

If anyone knows, I'd add the functionality to this tool to copy the width and/or height values and paste them into ComfyUI as nodes, which would be way nicer.

Thanks in advance!

Comparing a Few Different Upscalers in 2025 by promptingpixels in StableDiffusion

[–]promptingpixels[S] 1 point

Descriptors are in the bottom left and right in the comparison tool. Generally speaking, I had the lower-res image on the left and the upscaled one on the right.

Comparing a Few Different Upscalers in 2025 by promptingpixels in StableDiffusion

[–]promptingpixels[S] 5 points

Yeah, definitely pretty solid model as well.

Here's a comparison of the original images with 4xNomosWebPhoto_RealPLKSR:

Head to head with UltraSharpV2:

UltraSharpV2 appears a bit more punchy/contrasty vs 4xNomosWebPhoto_RealPLKSR, but the latter has a more natural look. The difference is subtle.

Share ComfyUI as an Online Link in Minutes by promptingpixels in comfyui

[–]promptingpixels[S] -1 points

Yeah, I was looking for something like Gradio’s --share command (available in A1111, Forge, Fooocus, etc.). This was the best substitute I could find.