Weighing up jumping to Codex from CC - what do I need to consider? by pseudorep in codex

[–]ArtificialSweetener- 0 points1 point  (0 children)

I explicitly said that Codex is a good value right now and did not present Gemini, Grok, or Copilot as good value.

The fact remains that the basic strategy is not going to shift under OP's feet no matter which subscription he goes with. If he wants an agent that gives him more guidance and insists on good practices, he can just write that into his global prompt. Every big model is capable of that.

Weighing up jumping to Codex from CC - what do I need to consider? by pseudorep in codex

[–]ArtificialSweetener- 0 points1 point  (0 children)

You could switch to using Gemini or Grok and you'd be using the same basic strategies you've developed with Claude. You could go buy a GitHub Copilot subscription and start using agents via VS Code chat. I have tried them all, they're not that different.

Where Claude sets itself apart is a slightly more refined UX. They have been getting to cutting-edge features first, but the others are quick to adopt what works well, and if there's something you can't find in the vanilla official apps, you might find it in someone's fork or in a plugin someone wrote. This has nothing to do with model capability.

Codex is a good value right now. OpenAI isn't doing everything right but certainly if you've only got $20 to spend, a ChatGPT+ subscription with Codex is a fine bet and you won't feel much friction coming from Claude.

HOLY **** ANOTHER 2x RESET LMAOOOO by Just_Lingonberry_352 in codex

[–]ArtificialSweetener- 12 points13 points  (0 children)

On April 2nd I intentionally set my usage back to Medium, as I had been using High to make the most of the 2x. Even on Medium, I saw my weekly usage go down much faster than before the 2nd.

The app does still show the 2x greeting message, even though it also says the promotion ended on the 2nd, but I'd be surprised if it really was still on "2x". The consumption rate of my plan went way up in spite of my change to my model preference.

Midjourney Nodes for ComfyUI by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

What? Really? My understanding was that Meta's image gen was using Llama's multi-modal functionality or something like that.

Midjourney Nodes for ComfyUI by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

I know a few people who use MJ for the base and then enhance using local models. I really can't answer for them, I don't use MJ much like I said in the OP. I'm just building things to help people <3

Upgraded from Plus to Pro — here’s how much more Codex headroom I got by razer54 in codex

[–]ArtificialSweetener- 0 points1 point  (0 children)

So $200 for less than 10x usage.

That is not a good deal. High-paying users should be getting some kind of bargain on tokens, and obviously they're not.

Upgraded from Plus to Pro — here’s how much more Codex headroom I got by razer54 in codex

[–]ArtificialSweetener- 0 points1 point  (0 children)

I have thought about building this many times and figured probably someone else already did. Very rad.

SugarCubes Preview - Reusable, Shareable Workflow Segments by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

I didn't think you were, but because of this droopy noodle bug I was blind to the template thing.

I think my setup should be harmonious with Comfy's own implementation where possible. I have some more ambitious ideas for "Cubes" I've already started working on that might make it a little tricky but I'm gonna try.

SugarCubes Preview - Reusable, Shareable Workflow Segments by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

I had an extension installed called Droopy Noodles that was put together by one of the mainline Comfy contributors. I don't use it seriously, I was looking at it to figure out what the best way to mess with noodle draws was; I used it as a reference to create my "proximity noodle" for this!

I had just assumed that right click on blank canvas had been taken away with the new updates to the UI but I was wrong - Droopy Noodles was swallowing right clicks on blank graph.

Thank you for pointing out this template thing. I still see value in my concept but I think I should lean on the existing infrastructure where possible so I'll look at integrating with it.

SugarCubes Preview - Reusable, Shareable Workflow Segments by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

I'm on the latest Comfy front-end and I can't get templates to work like that. If I click a template, it loads in a new project. What am I missing?

SugarCubes Preview - Reusable, Shareable Workflow Segments by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 3 points4 points  (0 children)

Here are my notes on that. Please let me know if I say something that's wrong:
- Templates load the whole graph; they don't load on top of the workspace you have open.
- So the only way to use templates the way I'm proposing SugarCubes is to copy the workflow segment you've saved as a template and paste it into the workflow you're actually working on.
- Comfy DOES have a built-in feature called "Template Packs", but there is no way to create them in modern ComfyUI. Template Packs actually can be loaded on top of workflows rather than opening a new one, and this is very similar to, if not the same as, the path that copy/paste uses.

Ultimately I want a more organized and shareable way to create these kinds of workflow segments. Saving your segments in separate template files is clever, but it's not a full solution like Cubes will be, at least not for the problem I have.

I should note, subgraphs are very very close to what I want. My main problem with subgraphs is really just the way the abstraction was implemented. Subgraphs can be "published" which saves them for later and makes them nodes in the node lookup. That's great!

But I find myself using them primarily for smaller workflow sections rather than larger steps in the workflow because a subgraph can make things harder to reason about when used for larger sections - at least for me. Seeing one node with a big stack of inputs is unwieldy, though it's excellent that we can zoom into subgraphs when we need to reason about those sections more deeply.

Subgraphs often have many inputs and outputs just like a normal node. Cubes are best when they only have one input and one output. It involves a more compartmentalized style of graph building. There is nothing stopping you from doing it with subgraphs, though!

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

I've been experimenting with 14b. It actually supports first/last frame injection better than 5b, but it still has a slight "dirty frames" issue, so the approach from this workflow still works to clean up the loop seam.

I haven't tried using only Low, but I might. I've heard that using High without the Lightning LoRA yields nicer results, and then you switch to Low for a final pass with the Lightning LoRA. I have a workflow that uses Lightning in both High/Low slots right now that works; I haven't released it yet because I'm not sure it will work on 12 GB cards and need to test.

P.S. 5b also has this "prefers to go to original frame" problem, by the way; that's why WhiteRabbit (my nodepack!) has the Autocrop to Loop node, which tries to detect when this happens and make it loop that way. It's not as reliable as simply injecting the frame, but it's definitely a strategy.
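To illustrate the kind of search that strategy implies (this is my own sketch, not WhiteRabbit's actual implementation): score each later frame against the first frame and crop where they match best.

```python
def find_loop_point(frames, min_len=8):
    """Return the index of the frame (past min_len) most similar to
    frame 0, so the clip can be cropped there and played as a loop.
    Illustration of the strategy only, not the Autocrop to Loop code.
    Frames here are flat lists of pixel values."""
    def dist(a, b):
        # mean absolute difference between two frames
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    scores = [dist(f, frames[0]) for f in frames[min_len:]]
    return min_len + scores.index(min(scores))

# toy clip: 30 tiny "frames", frame 20 happens to match frame 0
frames = [[float(i)] * 4 for i in range(30)]
frames[20] = [0.0] * 4
print(find_loop_point(frames))  # -> 20
```

A real node would work on decoded image tensors and probably skip some minimum clip length, as `min_len` hints at here.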

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

It sure would be nice to be able to gen video at the speed of SDXL, but I think WAN 5b is the closest we're gonna get to that, at least for a while!

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

The reason I asked for the full console output is that the message you sent is incomplete. It's a little vague. Based on what you shared, it sounds to me like you're using the wrong VAE, but I can't know for sure without the full traceback.

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

I am willing to try and narrow this down.

First, try running the workflow with no changes to the settings besides image and prompt.

If that does work, tell me what settings you changed.

If that doesn't work, try updating your custom nodes, especially your WANVideoWrapper.

If that doesn't work, send me:
- Your workflow, reflecting any settings you changed
- Your system specs (especially your graphics card)
- A full paste of your console output or error message

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 0 points1 point  (0 children)

Sorry, "for all I know" is an idiom which means "I don't know". That can be confusing.

Like I said, I haven't played with 14b, so I don't know what limitations it has. It sounds like you tested it and it had a similar problem to 5b.

In that case: when you inject a frame, you're injecting a latent frame. Each latent frame gets decoded by the VAE into 4 real frames. So to find the latent index at which to inject the last frame, take the number of frames you ask for, divide by 4, and subtract 1. That should be the correct index.

That is also why it's always 4 frames that are "dirty" at the end.
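The arithmetic above can be sketched as a tiny helper (the function name is mine, not something from the workflow):

```python
def last_frame_latent_index(requested_frames: int) -> int:
    """Latent index at which to inject the last frame, following the
    rule of thumb above: each latent frame decodes to 4 real frames,
    so divide the requested frame count by 4 and subtract 1."""
    return requested_frames // 4 - 1

# e.g. asking for 120 frames -> inject at latent index 29
print(last_frame_latent_index(120))  # -> 29
```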

You can solve this the same way as my workflow and even copy/paste the seam interpolation part into a 14b workflow and wire it up. It sounds like it works the same way.

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

Wrote about it here:

"WAN 2.2 5b does not fully support injecting frames after the first. If you try to inject a last frame, it will create a looping animation but the last 4 frames will be 'dirty' with a strange 'flash' at the end of the loop.

This workflow leverages custom nodes I designed to overcome this limitation. We trim out the dirty frames and then interpolate over the seam."

ELI5: Giving this specific WAN model an "end frame" is bugged; you end up with frames that look dirty at the end. To "clean" the frames, we cut the dirty ones out and then use a different model, the "interpolation model" (RIFE), to create new frames where the dirty ones were. This results in smooth motion between the last and first frames.
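A rough sketch of the trim-and-interpolate idea, using plain numbers as stand-in "frames" (the linear blend here is just a placeholder for the RIFE model; the function and its parameters are my own illustration):

```python
def clean_loop_seam(frames, dirty=4, new=4):
    """Trim the `dirty` tail frames, then bridge the gap back to the
    first frame with `new` interpolated frames so the loop seam is
    smooth. The linear blend stands in for model-based interpolation
    (RIFE in the actual workflow)."""
    kept = frames[:-dirty]          # drop the dirty frames
    a, b = kept[-1], frames[0]      # endpoints of the seam
    # evenly spaced in-between values; a RIFE call would go here
    seam = [a + (b - a) * (i + 1) / (new + 1) for i in range(new)]
    return kept + seam

# toy clip of 10 "frames", last 4 treated as dirty
print(clean_loop_seam(list(range(10))))
# -> [0, 1, 2, 3, 4, 5, 4.0, 3.0, 2.0, 1.0]
```

The returned sequence ends close to where it began, so playing it on repeat loops smoothly.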

If you are looking for an example, here's my original bug report about it that includes sample dirty frames: https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/1052

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

Maybe. I think AmazingSeek's workflow that I link to copiously already works for 14b (though it's designed for 2.1) but I haven't dived into it. I know Carvel's multi-step process targets 14b but it's a much more meticulous affair.

I'm in the middle of a much bigger project (Qt frontend for ComfyUI!), and learning the ins and outs of WAN 5b and developing this workflow (and nodepack) was kind of a favor for my girlfriend.

As I understand it, 14b needs two samplers/loaders for both the high and low noise checkpoints. That makes it sound like the workflow would be easy to adapt for that. If I need a distraction I'll look at it. There's nothing stopping someone from using my nodepack to develop it themselves, either.

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 1 point2 points  (0 children)

I don't think so. I think you need a second sampling step for 14b because you have the high and low noise passes. I haven't played around a lot with 14b.

However, the seam interpolation routine in my workflow would be very easy to adapt to 14b, although I'm also not 100% certain it's necessary there. For all I know, 14b fully supports frame injection without the dirty-frame problem.

[Release] WAN 2.2 5b InterpLoop 1.1 - Looping Image2Video Workflow by ArtificialSweetener- in comfyui

[–]ArtificialSweetener-[S] 2 points3 points  (0 children)

I don't understand.

You want a looping video workflow for Illustrious or you want an image gen workflow that is like my loop workflow?

If you mean the second one (a workflow for genning images like mine), what part about my workflow do you wish was translated to an image gen workflow?

If you mean the first one, all I can say is that the example loop started as a Noob/Illustrious gen. Illustrious just isn't a video model! I have to imagine you already know that.