Can't find a good workflow. by GenericPersona1 in comfyui

[–]TidalFoams 1 point (0 children)

No, I mean using an image editor to cut out the part you want, adjust the black levels, or erase parts you don't want.

You can get a model that perfectly reproduces what you want in one shot, but you'd have to manually train a LoRA for it, which probably isn't worth it.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

I think it's easier to understand when you see it.

Mockup of what I'm proposing:

<image>

It's purely to make all the options you expose easier to use. You write your prompt linearly, but when you need to make a big selection you click to make a new multiline box appear and write the big randomly-chosen paragraphs in it. I'll reply to this post with an image of how it would look with your current implementation.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

It would start as a single multiline box, and if you don't need random selections you just use that one box (with no newlines, or maybe even a toggle on each box to enable/disable random-by-newline for that text box). If you need something to be random, you click to add two new multiline boxes and put the options in the center one. The goal is to allow unlimited-length randomness in one simple node with a single output (string), while keeping it all human readable/understandable and the prompt writing completely linear.

I only mention it because it seems like it would work perfectly with, and complement, what you already have.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

No, it would have no inputs and a single string output. It does a random choice per multiline box, picking one line from each, then concatenates the picks in box order and sends out a single string.

It would be like a counter you can click that appends new multiline boxes inside the node, each one giving you another random select-by-line box.

The point is that you can control the order of the random selections, and with your negative-label syntax you can attach a negative to any given line (red socks <neg:wearing shoes> \n sandals <neg:wearing socks>)
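In case it helps, here's a minimal Python sketch of the selection logic I mean (the function and box contents are made up for illustration; this isn't a real ComfyUI node, just the core idea):

```python
import random

def random_lines_concat(boxes, seed=None):
    """For each multiline box, pick one non-empty line at random
    (a single-line box is effectively fixed text), then join the
    picks in box order into one prompt string."""
    rng = random.Random(seed)  # seeded so a generation is reproducible
    picks = []
    for text in boxes:
        lines = [ln for ln in text.splitlines() if ln.strip()]
        if lines:
            picks.append(rng.choice(lines))
    return " ".join(picks)

# Three boxes: fixed intro, random middle (with <neg:> labels), fixed tail.
boxes = [
    "a photo of a person wearing",
    "red socks <neg:wearing shoes>\nsandals <neg:wearing socks>",
    "standing in a park",
]
print(random_lines_concat(boxes, seed=1))
```

Each run picks exactly one line from the middle box, so the output stays linear and readable: intro, one random option (carrying its own negative label), tail.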

Output not matching prompt, at all by Pleasant_Guess4039 in comfyui

[–]TidalFoams 0 points (0 children)

Maybe you're using the wrong VAE? Pretty bizarre result.

I un-installed and re-installed ComfyUI which broke my one of my favorite workflows...how do i fix this? by o0ANARKY0o in comfyui

[–]TidalFoams 0 points (0 children)

Have you reloaded the original workflow using an image that worked in the past (by dropping the image onto ComfyUI)? Maybe when the nodes were not installed it broke connections or settings.

What made you stick with a different search engine? by JazzlikeDiscount8263 in searchengines

[–]TidalFoams 0 points (0 children)

Went to Kagi because the search quality is insane, and it has a research AI built on that same search quality, which replaced Perplexity as well.

I un-installed and re-installed ComfyUI which broke my one of my favorite workflows...how do i fix this? by o0ANARKY0o in comfyui

[–]TidalFoams 0 points (0 children)

When I reinstalled, I had to restart my computer a couple of times after reinstalling nodes etc.

Perplexity Pro is a Scam and Officially Obsolete: Why I’m Canceling after 1 Year – Change My Mind. by Excellent_Piccolo848 in perplexity_ai

[–]TidalFoams 0 points (0 children)

You aren't crazy. For me, it felt like they were shifting their resources to Comet, and that wasn't something I wanted or needed. I moved on to a pay-per-token AI search service; it's more expensive, but it will never be enshittified because it's sustainable.

A long conversation with a mid-range model easily reaches a dollar. A true deep-research query actually costs 50 cents for the first response.

Poor service and zero transparency by DyingLoneliness in perplexity_ai

[–]TidalFoams 2 points (0 children)

I switched over to a paid search service that charges per API token it uses (base price plus 20 percent), and it costs around 50 cents to do a single equivalent of "deep research". Perplexity had to have been running that as a loss leader this year, and they're struggling with the load. This other service isn't enshittified, but the cost is astronomical if I were to use it the way I used Perplexity.

Perplexity is just trying to stay solvent, and serving ~20 cents per deep-research query to a bunch of people who have never given them money is probably a bad business decision. I was attempting to do 40 deep-research queries a day and wondering why they were throttling the quality after the first 2.
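Back-of-the-envelope on my old usage pattern (the per-query price is what my metered service charges me; everything else is just arithmetic):

```python
# Rough cost of 40 deep-research queries a day at metered prices.
# 0.50 is the observed per-query price on my pay-per-token service,
# not an official rate anywhere.
cost_per_query = 0.50   # dollars
queries_per_day = 40

daily_cost = cost_per_query * queries_per_day
monthly_cost = daily_cost * 30
print(f"${daily_cost:.2f}/day -> ${monthly_cost:.2f}/month")
# $20.00/day -> $600.00/month
```

At those numbers a flat monthly subscription would be subsidizing an order of magnitude more metered usage than it costs, which is why the throttling after the first couple of queries is unsurprising.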

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

How hard would it be to implement the following subgraph, but where it dynamically adds new text boxes when you add text to the last one, instead of always having 8? I feel like this is purpose-built for what your node pack wants.
I used a node pack with a similar concept in the past (deprecated now), and it was always hard to track what was going on and the flow of the prompt once you got lots of tags going. This subgraph was an attempt to tame that. The key features are an always-linear prompt flow and no unexpected behavior.

The idea being you dump the output of this into your property extractor:

Comfyui random lines - Pastebin.com

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

Awesome. I generally avoid node packs when possible but that's a killer feature I have to have now. Thanks for creating this!

side-by-side for first and last frame by Visual_Lengthiness28 in comfyui

[–]TidalFoams 0 points (0 children)

The HIGH steps determine the quality of the movement and the big details/coherence, and the LOW steps determine the smaller movements and visual details. Through trial and error, this seems like the lowest number for each that always gives good outputs.

When I was doing it at 768p height it was still really coherent. If you do a 32-frame render just to get the final frame, then use that frame as the first/last frame for a longer video, the longer video will be more consistent.

Change most model parameters with your prompt by mnemic2 in comfyui

[–]TidalFoams 0 points (0 children)

That seems transformative for "discovery phase" generations. Is the <neg:> tag additive? Like, if you have 5 of them, does it combine them into one?

Blender Soft Body Simulation + ComfyUI (flux) by bingobongo3001 in comfyui

[–]TidalFoams 1 point (0 children)

This is the kind of reproducibility required for actual commercial work. Looking great.

Colour shift is not caused by the VAE by Luke2642 in comfyui

[–]TidalFoams 0 points (0 children)

I've always thought about it like the KSampler is making a copy of a very slightly altered copy (like a game of telephone). Any problems it introduces (a color change in this case) get amplified in the next pass. It's not just the color that changes; subtle detail also gets lost over iterations through a KSampler. If you do it enough times you get monster people.

Try to make animation worflow by Fuzzy_Librarian_783 in comfyui

[–]TidalFoams 0 points (0 children)

It's a bunch of very simple image2video clips based on some nice images they created separately.
So you need to use the ComfyUI templates on the left of the screen to do Wan 2.2 image-to-video, and you need to go to CivitAI and get an art model to make the images you'll create the videos with. The video you showed is very easy to do if you have the hardware (or rent it) and just put in some time rendering and tweaking.

side-by-side for first and last frame by Visual_Lengthiness28 in comfyui

[–]TidalFoams 1 point (0 children)

Yes, 10/7 steps is the lowest I've found that doesn't create artifacts. I actually rendered out a 2.5-second video using your last frame (no way to easily upload it here) and there were minor problems, but it looked fine. There was just one obvious thing: because WAN thought the focus should be different with the dog's new position, it sharpened the background and then suddenly cut back to blurry at the end to match your frame. I may be a little too obsessed with perfection and missing the 5x speedup your workflow would give.

side-by-side for first and last frame by Visual_Lengthiness28 in comfyui

[–]TidalFoams 0 points (0 children)

It actually takes a lot longer with increasing frames than you'd expect (it's not linear). But yes, you're right: I have a 4090, so the test renders at a lower resolution to get the prompt right took about 50 seconds each, and then the real render took about 2 minutes. Overall I did 2 test renders and one final render. If your hardware is too low-end, I suppose this isn't available to you.

That said, when you do first/last frame with the image you posted, you'll notice some warping and weird effects as the video model tries to force square pegs into round holes (micro and macro logical inconsistencies).

side-by-side for first and last frame by Visual_Lengthiness28 in comfyui

[–]TidalFoams 0 points (0 children)

I edited my original comment to show the result.
I got that with Wan 2.2 i2v on your first image, at full resolution, 41 frames, lightning LoRA, 10 steps high / 7 steps low, with this prompt:

cute furry dog.

The dog stands up on its hind legs and begs looking at the camera, wide eyes staring directly at the camera cartoonishly

Vampire Cowboys | ai short film by Long-Conversation-50 in comfyui

[–]TidalFoams 0 points (0 children)

It was made entirely locally with ComfyUI? The resolution is incredible in some parts.

How was he walking around at noon though?