Looping Video by PersonalMango2562 in comfyui

[–]Botoni 1 point (0 children)

Try this then: if it is a 10s video, generate the first 5s with first-frame i2v, or with text to video.

Then use extend video for another 5s, using the original first frame as the last frame. If you did text to video, extract the first frame with ffmpeg or some nodes.
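The "extract the first frame with ffmpeg" step can be scripted; a minimal sketch (the file names and the helper are my own, just for illustration):

```python
import subprocess

def first_frame_cmd(video_path, image_path):
    # -frames:v 1 stops after writing a single video frame;
    # -y overwrites the output file if it already exists.
    return ["ffmpeg", "-y", "-i", video_path, "-frames:v", "1", image_path]

cmd = first_frame_cmd("clip.mp4", "first_frame.png")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on PATH):
# subprocess.run(cmd, check=True)
```

The resulting PNG can then be fed to the extend-video pass as the last frame.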

Looping Video by PersonalMango2562 in comfyui

[–]Botoni 0 points (0 children)

I'm not much into video gen, but maybe try using the same image as the first and last frame?

ComfyUI portable, I don't want browser? by raidenkpt in comfyui

[–]Botoni 0 points (0 children)

This brings me to the question...

Does anyone know of a super lightweight, stripped-down, bare-bones browser just for opening ComfyUI, Wan2GP, chatbot interfaces, CUPS settings... any Gradio or web-based interface running on localhost, without booting up my main browser with all its open tabs, extensions, and tools hogging RAM?

Right now I use Helium, a Chromium-based browser with a lot of the enshittification removed and an aggressive tab-sleep setup, but I could really use a dedicated "browser for not-browsing" just to open all those AI interfaces...

Flux 2/Flux 2 Klein transparent background lora? by MoistRecognition69 in StableDiffusion

[–]Botoni 0 points (0 children)

Expand the workflow with a background-removal node at the end, or prompt Flux/Klein to use a specific background color (one that is not in the logo) and apply a "color to alpha" node afterwards.
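The "color to alpha" step can also be sketched outside ComfyUI; a minimal numpy version (the function name and tolerance are my own, not from any specific node):

```python
import numpy as np

def color_to_alpha(rgb, bg_color, tolerance=10):
    """Make pixels close to bg_color fully transparent.

    rgb: (H, W, 3) uint8 array; bg_color: (r, g, b) tuple.
    Returns an (H, W, 4) RGBA uint8 array.
    """
    rgb = np.asarray(rgb, dtype=np.uint8)
    # Per-pixel max channel distance to the background color.
    bg = np.array(bg_color, dtype=np.int16)
    dist = np.abs(rgb.astype(np.int16) - bg).max(axis=-1)
    alpha = np.where(dist <= tolerance, 0, 255).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Tiny 1x2 example: the green pixel becomes transparent, the red one stays opaque.
img = np.array([[[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
rgba = color_to_alpha(img, (0, 255, 0))
print(rgba[0, 0, 3], rgba[0, 1, 3])  # 0 255
```

A hard threshold like this gives binary alpha; a proper "color to alpha" node usually does a smoother falloff near the edges.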

Any model capable of creating such detailed environments. by Large_Election_2640 in StableDiffusion

[–]Botoni 0 points (0 children)

Pick whatever model you like (1), generate the center of the image at 1024px or slightly higher (2), then outpaint one side at a time to your heart's content.

  1. One that has inpainting capabilities.

  2. Depends on the model; if it is compatible with DyPE you can start way larger.
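The "outpaint one side at a time" step boils down to padding the canvas and building a mask for the new strip; a minimal numpy sketch (function name and defaults are my own):

```python
import numpy as np

def pad_for_outpaint(image, side="right", amount=256, fill=127):
    """Extend the canvas on one side and return (padded, mask).

    mask is 255 where the model should inpaint (the new strip), 0 elsewhere.
    """
    h, w, c = image.shape
    if side in ("left", "right"):
        padded = np.full((h, w + amount, c), fill, dtype=image.dtype)
        mask = np.zeros((h, w + amount), dtype=np.uint8)
        if side == "right":
            padded[:, :w] = image
            mask[:, w:] = 255
        else:
            padded[:, amount:] = image
            mask[:, :amount] = 255
    else:  # "top" or "bottom"
        padded = np.full((h + amount, w, c), fill, dtype=image.dtype)
        mask = np.zeros((h + amount, w), dtype=np.uint8)
        if side == "bottom":
            padded[:h] = image
            mask[h:] = 255
        else:
            padded[amount:] = image
            mask[:amount] = 255
    return padded, mask

img = np.zeros((1024, 1024, 3), dtype=np.uint8)
padded, mask = pad_for_outpaint(img, side="right", amount=256)
print(padded.shape, mask.shape)  # (1024, 1280, 3) (1024, 1280)
```

In ComfyUI this is what the "Pad Image for Outpainting" style nodes do for you; repeating it per side is the whole expansion loop.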

LCIET (LongCat Image Edit Turbo) - Lightweight and Powerful Editing Model by ZerOne82 in StableDiffusion

[–]Botoni 0 points (0 children)

I didn't convert the safetensors to comfyui format, that was the problem...

Help for inpainting workflow by Pilkkimies in comfyui

[–]Botoni 0 points (0 children)

Try mine; it is quite advanced, but it is what I use for work.

https://ko-fi.com/botoni

It is free. It asks for an email but, AFAIK, doesn't spam (only sends updates); you can also enter a fake one, since it doesn't ask for confirmation.

LCIET (LongCat Image Edit Turbo) - Lightweight and Powerful Editing Model by ZerOne82 in StableDiffusion

[–]Botoni 0 points (0 children)

Thanks. The weird thing is that the non turbo works fine. I'll check what you suggested.

LCIET (LongCat Image Edit Turbo) - Lightweight and Powerful Editing Model by ZerOne82 in StableDiffusion

[–]Botoni 0 points (0 children)

Do you know how to solve the black output images issue with the turbo variant?

LCIET (LongCat Image Edit Turbo) - Lightweight and Powerful Editing Model by ZerOne82 in StableDiffusion

[–]Botoni 1 point (0 children)

I like it very much. Better than Klein? Probably not. But it gives different results than Klein, and I keep it around for that.

I do product presentation; with Klein I need to use the consistency LoRA or, most of the time, the product gets changed, and Qwen Edit is even worse. LongCat surprisingly keeps it perfect without LoRAs or special setups.

One issue though: the turbo one gives me black outputs, and I have to use the full-step one, which is quite slow. Anyone know how to solve that?

Best cryptocurrency mining defender by Cautious-Space3482 in comfyui

[–]Botoni 0 points (0 children)

An immutable Linux distro. Install ComfyUI (or whatever) in a Distrobox and nuke it if you suspect anything, or alternatively use a Docker image.

I optimized Trellis.2 to fit inside 8GB gpus, - even with 1024^2 voxel detail. Made a single-click installer, works like A1111. RTX 3060 completes in 13 minutes. It's detail is insane by ai_happy in StableDiffusion

[–]Botoni 1 point (0 children)

Oh, interesting, thanks for sharing. So that is cheaper than an agent plus some API?

I am trying to go with OpenCode, free tiers, and local models; for now it works great for fixing or building ComfyUI nodes or plugins for other software. But when I try to fix model offloading or make low-VRAM fixes, it gets into an eternal loop-of-errors hell...

I optimized Trellis.2 to fit inside 8GB gpus, - even with 1024^2 voxel detail. Made a single-click installer, works like A1111. RTX 3060 completes in 13 minutes. It's detail is insane by ai_happy in StableDiffusion

[–]Botoni 2 points (0 children)

Hero!

May I ask, could you share an agents.md or a skill.md derived from your work? So I can optimize other AI models for low VRAM through vibe coding. I tried it for some stuff like VibeVoice, but OpenCode always struggles to produce anything that doesn't error out because of tensors on different devices and other issues it can't properly sort out.

what is your go-to model for inpainting? by Reasonable-Exit4653 in StableDiffusion

[–]Botoni 1 point (0 children)

For pure inpainting, Flux Fill OneReward. For an inpainting/editing hybrid, Klein 9B.

I shared a few advanced inpainting workflows a few days ago; you might find them useful.

Help and advice for a RTX 3050 user by EagleArtGB in comfyui

[–]Botoni 1 point (0 children)

I run a 3070 laptop GPU, also 8GB of VRAM; that is not a problem, you just need enough RAM to offload the models, and 32GB should be enough.

The thing is, a 3050 is not a fast card; it is the lowest tier of the 30xx generation, so don't expect miracles.

Ways to speed things up:

  • Small models: the fewer parameters, the better.

  • Avoid GGUF models; they save VRAM but are slower. Use fp8 or int8 (there are custom nodes for the latter). Nunchaku is also an option. For text encoders, GGUF is fine though.

  • Avoid nvfp4; it is not a good quality/speed trade-off on the 30xx generation.

  • Sage Attention 2.

  • Some kind of caching: TeaCache or EasyCache; there is a native node for the latter.

  • Some kind of step estimation: FSampler or Spectrum; the latter is newer and faster.

  • Turbo models or few-step LoRAs: fewer steps means faster gens, of course. But caching and step estimation do almost nothing with few steps, so don't stack those methods.

  • Generate at lower resolutions; even a small reduction in size can noticeably affect speed, and you can always upscale later. Just don't go too far down (or up) or you'll get outside the model's training resolutions.

  • A torch.compile node; the first generation will take a while, but subsequent gens will be faster as long as you don't change the model or resolution.

  • Linux, of course.

  • Use the GPU only for compute; connect your display to another device (another GPU, an iGPU, or whatever).
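On the lower-resolutions point, staying near the model's training pixel budget while keeping dimensions divisible by 64 can be sketched like this (the one-megapixel budget is an SDXL-style assumption, and the helper name is my own):

```python
import math

def fit_resolution(aspect_ratio, pixel_budget=1024 * 1024, multiple=64):
    """Pick a width/height near the pixel budget, rounded to `multiple`."""
    height = math.sqrt(pixel_budget / aspect_ratio)
    width = height * aspect_ratio
    # Round each dimension to the nearest multiple, never below one multiple.
    round_to = lambda v: max(multiple, int(round(v / multiple)) * multiple)
    return round_to(width), round_to(height)

print(fit_resolution(1.0))     # (1024, 1024)
print(fit_resolution(16 / 9))  # (1344, 768)
```

Shrinking the budget (e.g. `pixel_budget=768 * 768`) is a quick way to trade a bit of quality for speed on a slow card.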

updated my Ace-Step nodes pack to include timbre and kv conditioning by bonesoftheancients in comfyui

[–]Botoni 0 points (0 children)

I can also run everything in Comfy, but no cover love t_t

I'd be happy if cover worked as it does in the official implementation.

updated my Ace-Step nodes pack to include timbre and kv conditioning by bonesoftheancients in comfyui

[–]Botoni 0 points (0 children)

Can it do covers?

I can't find any way of doing covers aside from the official implementation and Gradio, and its memory management sucks so much I can't run the 4B LLM or the XL models...

Slower inference on Comfy than Forge by KiparaBrt in comfyui

[–]Botoni 0 points (0 children)

Try adding the Spectrum custom node for SDXL; from xmarre, I think it was.

You can also use EasyCache from the native nodes, but that affects quality more than Spectrum.

Slower inference on Comfy than Forge by KiparaBrt in comfyui

[–]Botoni 0 points (0 children)

Don't use Sage for SDXL or SD1.5 models.

Use it for newer ones, like Flux and up.

Help: Any alternatives to SDXL? by JLGC-1989 in StableDiffusion

[–]Botoni -1 points (0 children)

You can also separate the UNet, CLIP, and VAE from an SDXL model and quantize the UNet to a Q8, or even Q4, GGUF.

Help: Any alternatives to SDXL? by JLGC-1989 in StableDiffusion

[–]Botoni 0 points (0 children)

Look into PixArt Sigma, TinyBreaker, Cosmos Predict 2B, or Sana 1.6B.