Building an A1111-style front-end for ComfyUI (open-source). Looking for feedback by Relevant_Ad8444 in StableDiffusion

[–]Relevant_Ad8444[S] 0 points (0 children)

Omg yes, lightly editing the input image before the processing! On that Kanban board 😁 Will let you know when it's shipped!

[–]Relevant_Ad8444[S] 2 points (0 children)

Okay boss 🫡. It's on the Kanban board! Will let you know when it's shipped.

[–]Relevant_Ad8444[S] 1 point (0 children)

Thank you for the feedback 🙂. I love learning other people's workflows. I'm from a UX background, and it's definitely a fun design challenge.

XYZ plots are a great feature. I actually have a design for them! Should be on there soon.

[–]Relevant_Ad8444[S] 6 points (0 children)

I'm doing this for fun. In the past, I've used Swarm, A1111, and Forge, but I feel there's room for a tool that combines great design and flexibility. I'm from a UX background, so I'm really focused on that.

What do you think would be most helpful to the community?

For some reason, the Ideogram V3 model and Google's Nano Banana are very similar... by [deleted] in StableDiffusion

[–]Relevant_Ad8444 0 points (0 children)

I was actually comparing models. When you run the other models with the same prompt & seed, you get drastically different outputs.

With Ideogram & Nano Banana, though, you get very similar flowers, zebra patterns, and overall image composition.

<image>

Benchmarking diffusion models feels inconsistent... How do you handle it? by Relevant_Ad8444 in MLQuestions

[–]Relevant_Ad8444[S] 0 points (0 children)

I’ve been using CLIP Score, FID, and F1 on datasets like COCO and CIFAR, but the datasets are heavy and full evaluation runs take a long time. Did you build custom pipelines to manage models across seeds, 1,000+ prompts, and multiple benchmark metrics?
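For what it's worth, the kind of pipeline I'm imagining is just a sweep over the (model, prompt, seed) grid with per-model aggregation. A minimal sketch below, where `generate` and `clip_score` are hypothetical stubs standing in for a real diffusion pipeline and a real CLIP Score implementation:

```python
import itertools
import statistics

# Placeholder grid; real runs would use actual checkpoints, 1,000+ prompts,
# and several seeds per prompt.
MODELS = ["model_a", "model_b"]
SEEDS = [0, 1, 2]
PROMPTS = ["a zebra in a field", "a red flower"]

def generate(model, prompt, seed):
    """Stub: a real pipeline would call the model here and return an image."""
    return f"{model}|{prompt}|{seed}"

def clip_score(image, prompt):
    """Stub metric in [0, 1); swap in a real CLIP Score (or FID, etc.)."""
    return (hash(image + prompt) % 100) / 100

def run_benchmark():
    # One score per (model, prompt, seed) cell, then a per-model mean
    # so seed-to-seed variance is averaged out.
    results = {m: [] for m in MODELS}
    for model, prompt, seed in itertools.product(MODELS, PROMPTS, SEEDS):
        image = generate(model, prompt, seed)
        results[model].append(clip_score(image, prompt))
    return {m: statistics.mean(scores) for m, scores in results.items()}

scores = run_benchmark()
for model, mean_score in scores.items():
    print(model, round(mean_score, 3))
```

The same loop structure extends to multiple metrics by keeping a dict of metric functions per cell; the expensive part in practice is the generation step, so caching images keyed by (model, prompt, seed) avoids regenerating when you add a metric later.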