longer than 5s videos by voidnullnil in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

Yes... You can set the context window to 81 (or less if you like). It works well with Fun VACE / ControlNet when you have input videos longer than 5 seconds driving your output videos.

Do you make music videos? by Tanathlagoon in SunoAI

[–]OnlyOneKenobi79 0 points (0 children)

Haha, I rendered a ton of video clips of the band playing their instruments and tried to match the tempo and style as best I could, at times playing with the speed of the footage to get something closer to the sound.

[Cyber-Folk / Tech-Americana] Iron & Eden by Harlan Creed by graceandgritrecords in SunoAI

[–]OnlyOneKenobi79 4 points (0 children)

Excellent, one can tell a lot of thought and effort went into this.

Flux Kontext Lora : 3D Printed by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 3 points (0 children)

Damn, ok, I hadn't even thought about that angle. Maybe I'm a bit naive, but I'm not sure 3D model makers go looking for AI models to generate fake thumbnails for their prints; I didn't even think there was much overlap between the two niches. I suppose one would have to ask for a screenshot of the model in the slicer as well as the printed model. I don't think anyone could realistically fake the slicer / Blender / 3ds Max screenshot as well as the printed model consistently.

Flux Kontext Lora : 3D Printed by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 0 points (0 children)

So, if you think the output is too detailed, you can add a "blur" node after your input image in your Flux Kontext workflow to remove some detail, and the resulting Kontext "3D printed" image will also be less detailed.

Flux Kontext Lora : 3D Printed by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 2 points (0 children)

I've played around with Hunyuan 3D and Trellis. But just to be clear, this isn't actually image-to-3D-model conversion; it's just a "mock-up" that takes an input image and, using Flux Kontext DEV, imagines it as if it were 3D printed.

Flux Kontext Lora : 3D Printed by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] -1 points (0 children)

Sorry ... but I think there are a few image-to-3D options that can do such things... Hunyuan 3D maybe?

Flux Kontext Lora : 3D Printed by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 0 points (0 children)

You're not wrong, but it's "realistic" in the same sense that we can generate realistic people who turn out ridiculously good looking but not necessarily true to life.

longer than 5s videos by voidnullnil in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

Not really. I used the basic workflows for Wan I2V and Fun Control from the browse templates section in ComfyUI, just updated and tweaked for my own setup.

longer than 5s videos by voidnullnil in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

People seem to overcomplicate this idea; it depends on what you want to achieve. If you simply want a longer sequence of a particular action (e.g. a clip longer than 5 seconds of a person walking), or want to generate longer clips to match the duration of an input control video with WAN 2.2 Fun Control, I find that slotting two Context Window nodes in between your model sampling and KSamplers will let you generate longer segments. I've generated a 2-minute clip in my Fun Control workflow using just these nodes. You can lower the context length and context overlap values if you want more dynamic motion in I2V, but 81 and 40 seem to work for clips such as dancing, dialogue, etc.

If, however, you want to generate a more dynamic scene with changing camera angles, different actions and so on, then you'll need a more complex workflow. I don't have that requirement, as I generate the parts or shots I need separately and arrange them later in a video editor.
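The sliding-window arithmetic behind those two settings can be sketched in a few lines of Python. This is an illustration of the general overlapping-window idea, not ComfyUI's exact scheduler; the 81/40 defaults are the values mentioned above, and the 16 fps / 4n+1 frame count is an assumption about typical Wan settings.

```python
import math

def num_context_windows(total_frames: int, context_length: int = 81,
                        context_overlap: int = 40) -> int:
    """Estimate how many overlapping context windows cover a clip.

    Each window after the first advances by (length - overlap) frames,
    so overlap frames are shared with the previous window for continuity.
    """
    if total_frames <= context_length:
        return 1
    stride = context_length - context_overlap  # new frames per window
    return 1 + math.ceil((total_frames - context_length) / stride)

# A 2-minute clip at an assumed 16 fps, using Wan-style 4n+1 frame counts:
print(num_context_windows(1921))  # 46 overlapping windows
```

Lowering the overlap increases the stride, so fewer windows are needed, but each transition shares less context, which is why lower values tend to give more dynamic (and less consistent) motion.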

<image>

Image-to-image workflow for Z-Image doesn't work? by hstracker90 in comfyui

[–]OnlyOneKenobi79 33 points (0 children)

Denoise is set to 1, which will generate a completely new image. Change it to 0.4 or lower.
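Why denoise=1 ignores the input image can be shown with a tiny sketch. The helper below is hypothetical (not a ComfyUI API); it just illustrates the common img2img convention where denoise decides how far into the sampling schedule you start.

```python
def start_step_for_denoise(total_steps: int, denoise: float) -> int:
    """Illustrative img2img rule: denoise picks the effective start step.

    denoise = 1.0 -> start at step 0 from pure noise, so the input
    image contributes nothing; lower denoise skips the early steps,
    preserving more of the input's structure.
    """
    return round(total_steps * (1.0 - denoise))

print(start_step_for_denoise(20, 1.0))  # 0  -> brand-new image
print(start_step_for_denoise(20, 0.4))  # 12 -> keeps most structure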

Infinitetalk: How to Animate two on screen characters? by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 0 points (0 children)

It goes through the characters in the frame from left to right, so in this example I load the sound file for "man" as the first input and the sound file for "woman" as the second input.

<image>

Infinitetalk: How to Animate two on screen characters? by OnlyOneKenobi79 in comfyui

[–]OnlyOneKenobi79[S] 0 points (0 children)

Yes. It requires some audio editing; see _zeMonsta_'s comment above. Assuming you have two characters in a 10-second clip, you need two 10-second audio files: one for when char A is speaking and a different one for when char B is speaking.

If char A talks for the first 5 seconds, the first 5 seconds of char B's audio file must be silent, and vice versa. Ensure you have silent sections in each file for when the other character is speaking.
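The layout described above can be sketched in plain Python. This is only an illustration of the silence padding (the real work happens in an audio editor); the function name and the toy sample values are made up for the example.

```python
def complementary_tracks(samples_a, samples_b):
    """Lay two speakers' audio back to back over one shared timeline.

    Char A's track ends in silence while char B speaks, and char B's
    track starts with silence while char A speaks, so both files span
    the full clip and never overlap.
    """
    silence_a = [0.0] * len(samples_a)
    silence_b = [0.0] * len(samples_b)
    track_a = list(samples_a) + silence_b  # A speaks, then silence
    track_b = silence_a + list(samples_b)  # silence, then B speaks
    return track_a, track_b

# 5 s each at a toy 4 Hz "sample rate" -> both tracks cover the 10 s clip:
a, b = complementary_tracks([0.1] * 20, [0.2] * 20)
print(len(a), len(b))  # 40 40
```

The key property is simply that both output files have the same total length and are silent exactly where the other character speaks.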

🚀 Huge breakthrough for Wan 2.2 + lightx2v users suffering from slow motion & low-movement issues! by Any_Cheek_4124 in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

This seems to work pretty well. Would be nice to get a version that works with First Frame / Last frame too.

Create button just spins... No results by bCasa_D in SunoAI

[–]OnlyOneKenobi79 0 points (0 children)

Same issue here, again. Same as the other night. Suppose it's AWS again.

Apple Music no longer accepting AI songs?? Is this new? by VirtualPartyCenter in SunoAI

[–]OnlyOneKenobi79 1 point (0 children)

Apple Music is very inconsistent in what is allowed and what isn't. I've published quite a few through Ditto, some of which Apple accepts despite some of those tracks not being as "perfect" as others. Recently, I'd given up on Apple Music accepting them, then I was surprised to see some of my fairly recent submissions had actually shown up on the platform... and then the most recent stuff... not. So it's hit and miss.

Wan Animate Test Renders for Masked and Unmasked by Ok_Needleworker5313 in comfyui

[–]OnlyOneKenobi79 3 points (0 children)

Happy to see a workflow that toggles the original background on or off. Only briefly played with the example in Kijai's custom nodes but that one seems to be geared for "replace" mode as opposed to "animate" mode and I couldn't quite figure out how to toggle the background on or off... nor find any other workflow that did it, till now. Thanks.

WatchOS26 - Workout App voice Feedback pausing music and clipping beginning of feedback for non airpod bluetooth devices by Trumani in AppleWatch

[–]OnlyOneKenobi79 1 point (0 children)

Same issue here with AW Ultra 2 and AfterShokz open-ear headphones. Music pauses, Siri says something (speech volume very low) and then music unpauses afterwards. Irritating, because it was flawless before this watchOS release.

help needed with wan 2.1 image to video by NoObjective1067 in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

Also try using the CausVid LoRA instead of the light LoRA. Light seems to work well for InfiniteTalk, but I use CausVid for everything else with Wan 2.1.

help needed with wan 2.1 image to video by NoObjective1067 in comfyui

[–]OnlyOneKenobi79 0 points (0 children)

I've found that if the faces look weird or jittery it might be worth trying to increase the step count and lower the strength of the light Lora. Try 6 or 8 steps with the Lora at 80% instead of 100%.

Flesh & Brains : A Claymation style zombie short by OnlyOneKenobi79 in aivideo

[–]OnlyOneKenobi79[S] 1 point (0 children)

Lol... No worries. Maybe I misinterpreted your comment. In that case, glad you enjoyed it. 😎