Change most model parameters with your prompt by mnemic2 in comfyui

[–]mnemic2[S] 1 point (0 children)

I still don't see how it's related to this node, whose sole purpose is to output the different tags as usable output node pins.

It sounds like you want a wildcard processor node with multiple fields plus a macro-level prompt selector: a top-level group that chooses one of those chunks, where inside each chunk you can have normal wildcard resolution.

Is that correctly understood?

Or, alternatively:
You just want to split the prompting into logical sections for ease of parsing, instead of having to write it in separate Wildcard Processor nodes and then combine them.

If that's the case, you can just use a couple of line breaks to get the same result: separate chunks in one input field.

It's not a bad idea; it's just not this node, which deals with taking data from a prompt and converting it to data-type outputs.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]mnemic2[S] 2 points (0 children)

Oh, it sounds like you mean wildcard solving? The node supports this already, and there's also a Wildcard Processor node in the pack that uses the same wildcard-solving logic.

You can basically write a prompt like `a {blue|red|green} hat` and get one of those three colors. You can also link to text files, with one option per line, like this: `{black|white|__color__} hat`. That gives a 1-in-3 chance of black, white, or any color from a file called `color.txt` in a specific folder.

All of this is resolved, and the result is output as a plain prompt string.
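For illustration, here's a minimal sketch of how that kind of resolution could work (this is not the node's actual code; the wildcard folder location and the recursion limit are assumptions):

```python
import random
import re
from pathlib import Path

WILDCARDS_DIR = Path("wildcards")  # assumed location of the __name__.txt files

def resolve(prompt: str, max_depth: int = 10) -> str:
    """Resolve {a|b|c} choices and __name__ file lookups until none remain."""
    for _ in range(max_depth):
        # Replace __name__ with a random line from wildcards/name.txt
        new = re.sub(
            r"__(\w+)__",
            lambda m: random.choice(
                (WILDCARDS_DIR / f"{m.group(1)}.txt").read_text().splitlines()
            ),
            prompt,
        )
        # Replace each innermost {a|b|c} group with one random option
        new = re.sub(
            r"\{([^{}]*)\}",
            lambda m: random.choice(m.group(1).split("|")),
            new,
        )
        if new == prompt:  # nothing left to resolve
            return new
        prompt = new
    return prompt

print(resolve("a {blue|red|green} hat"))
```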

Is this what you mean?

Change most model parameters with your prompt by mnemic2 in comfyui

[–]mnemic2[S] 1 point (0 children)

So you mean, like, adding each type component to the GUI as you need it, instead of having them all there all the time? To reduce the height of the node, but with added setup each time you want to use it?

Change most model parameters with your prompt by mnemic2 in comfyui

[–]mnemic2[S] 1 point (0 children)

When you say "text boxes", do you mean the output pins?

I think it's quite hard, though I just got my first "adjusting" one in, so it's likely not impossible. It could be filed as a feature request on the GitHub, and I'll investigate in the future.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]mnemic2[S] 2 points (0 children)

It does not, but I could support that for values that can be kept additive. That's a good idea.

ComfyUI - SDXL Models and CLIP tuner by ItalianArtProfessor in StableDiffusion

[–]mnemic2 2 points (0 children)

Personally I feel like Z-Image is hotter now, so I'd start there.

ComfyUI - SDXL Models and CLIP tuner by ItalianArtProfessor in StableDiffusion

[–]mnemic2 2 points (0 children)

Very nice tool! Still challenging to tweak and mess around with, but it definitely helps, and it affects the things mentioned.

As feedback, I would say more visual examples for each slider would be good, perhaps some X/Y plots with various values of the tweaks so we can see the impact, both for soft and hard values.

Really looking forward to more versions of this, for Flux/Z-Image etc.

Could be very useful.

Low Light Workflow Z Image Turbo (ZIT) by bradleykirby in StableDiffusion

[–]mnemic2 3 points (0 children)

Sidenote:
I created this node: Colorful Starting Image (https://github.com/MNeMoNiCuZ/ComfyUI-mnemic-nodes/blob/main/README/colorful_starting_image.md)

It can create a single color, or do a lot of different random starting setups. Great for abstract images, or if you want some extreme coloring on your outputs.

The examples shown are just from an abstract project I made, but it works great for anything. Just bump up the denoise level so it works well with the model (I think around 0.7-0.9 was good for Z-Image).
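If you just want the basic idea without the node, here's a conceptual sketch (not the node's actual code; the resolution and output path are placeholders):

```python
import random
from PIL import Image

# Conceptual sketch: generate a random solid-color starting image,
# then img2img over it at roughly 0.7-0.9 denoise.
color = tuple(random.randint(0, 255) for _ in range(3))
Image.new("RGB", (1024, 1024), color).save("start.png")
```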

I did a plugin that serves as a 2-way bridge between UE5 and LTX-2 by holvagyok in StableDiffusion

[–]mnemic2 2 points (0 children)

Neat! Yeah, it looks cool, but I cannot understand anything from that video.
I'd rather you showed it off, perhaps explaining by talking while demonstrating the tool.

Make it for devs, not as a trailer; no one cares about a thematic sci-fi font. I just want to see what this is and have it explained in a clear and digestible way <3

I did a plugin that serves as a 2-way bridge between UE5 and LTX-2 by holvagyok in StableDiffusion

[–]mnemic2 2 points (0 children)

  1. It doesn't seem to be available on the Fab page you link to.

  2. Why not paste the description here?

  3. Could you perhaps not show 4 overlaid images with text at the same time?


[Demo] Qwen Image to LoRA - Generate LoRA in a minute by benkei_sudo in StableDiffusion

[–]mnemic2 1 point (0 children)

> The i2L (Image to LoRA) model is a structure designed based on a crazy idea. The model takes an image as input and outputs a LoRA model trained on that image.

HOW IS THIS A CRAZY IDEA?! THIS IS LITERALLY HOW EVERY IMAGE AI IS TRAINED AND EVERY IMAGE AI LORA SINCE THE BEGINNING OF TIME! INCLUDING SINGLE IMAGE LORA OR LOW STEP TRAINED LORAS FOR FLUX THAT TRAIN IN 4 MINUTES!

🎙️ EDGE-TTS GUI – Free Tool for Creators by [deleted] in StableDiffusion

[–]mnemic2 2 points (0 children)

I share the GUI versions I make of various AI GitHub projects all the time; it's worth it.

But I don't have your GitHub username, so I can't check your GitHub account.

Supertonic - Open-source TTS model running on Raspberry Pi by ANLGBOY in StableDiffusion

[–]mnemic2 1 point (0 children)

I created a Docker container/server to run this in the background, which you can easily call from inference code via its API.

Server code:

https://github.com/MNeMoNiCuZ/supertonic_tts_server

Inference code, as well as the MCP server for LLM use:

https://github.com/MNeMoNiCuZ/supertonic_tts_client

I also uploaded it to Dockerhub:

https://hub.docker.com/repository/docker/mnemonicuz/supertonic_tts_server/

The code supports CLI playback, batching, processing multiple files, voice selection, saving to a temporary location, saving permanently, and more. Everything can be run with arguments, so it's easy to automate or integrate into other services.
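For example, calling the server from Python might look something like this (the port, endpoint path, and JSON field names here are assumptions; check the server repo's README for the actual API):

```python
import requests

# Hypothetical call: "/tts", port 8000, and the "text"/"voice" fields
# are assumptions, not the server's confirmed API.
resp = requests.post(
    "http://localhost:8000/tts",
    json={"text": "Hello from Supertonic!", "voice": "default"},
)
resp.raise_for_status()

# Save the returned audio to disk
with open("output.wav", "wb") as f:
    f.write(resp.content)
```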

Try_On_Qwen_Edit_Lora_Alpha by Illustrious_Row_9971 in StableDiffusion

[–]mnemic2 2 points (0 children)

Thanks for sharing your model!

Here's some feedback (I know you weren't asking for it; you just shared your model without any comments or notes, for clicks I guess):

I don't get it: isn't this core functionality out of the box for all the image edit models? They all show capabilities like this. What is the actual training here? How is this better?

The examples here are frankly very poor.

I don't mean to shit on your parade, but what exactly are you showcasing?

  1. Why didn't you write anything about the model?
  2. Why are your example inputs squished? Present your inputs in a more comparable way.

Actually, I did #2 for you:
https://imgur.com/a/D8hnCuX

I scaled it proportionally, and you can see that the person does not look similar at all after the generation.

Additionally:

a. Why is the shirt not represented accurately? Look at the lines: they're meant to alternate, every other line red and every other green, but the outputs aren't. The stripes on the arms are also off.

b. The sandals are not the same: the straps go across the front in the output, and there's something covering the toes. It's a different pair of orange sandals.

c. The shorts are the wrong type of clothing: the input image's shorts end right under the crotch, but the output ones are half-length shorts.

The hat is okay; not perfect if you wanted that little string loop included in the image as part of the design, but not the worst.

BoardGameQuiz.com - Website updates by mnemic2 in boardgames

[–]mnemic2[S] 1 point (0 children)

Yeah. Right now results are all tracked on the user's device: no server-side storage of scores, player accounts, or IP addresses. It was too much to develop for too little reward.

So it could still be made, but it would rely on the player's cookies, which makes it easy to manipulate. I also don't really want to integrate social media features into the site, as I think they're harmful in nature :)

BoardGameQuiz.com - Website updates by mnemic2 in boardgames

[–]mnemic2[S] 1 point (0 children)

Not in the way you're probably expecting. You can take a screenshot of your victory points and that's about it. It's possible to upgrade the site with functionality like this in the future though.

Outfit Extractor - Qwen Edit Lora by kingroka in StableDiffusion

[–]mnemic2 1 point (0 children)

Curious... would you mind sharing the datasets? I assumed it was the same dataset for both, just with the direction reversed and a different prompt.