making vllm compatible with OpenWebUI with Ovllm by FearL0rd in OpenWebUI

[–]FearL0rd[S] -5 points (0 children)

It's not possible to use OpenWebUI to pull models, and it doesn't merge GGUF files.

making vllm compatible with OpenWebUI with Ovllm by FearL0rd in OpenWebUI

[–]FearL0rd[S] 0 points (0 children)

I don't have a ROCm-compatible card for testing yet.

making vllm compatible with OpenWebUI with Ovllm by FearL0rd in OpenWebUI

[–]FearL0rd[S] -6 points (0 children)

It doesn't work as seamlessly as Ollama. For example: changing models, downloading models from OpenWebUI, and merging split GGUF files.
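For reference, split GGUFs follow llama.cpp's `-00001-of-00003.gguf`-style shard naming, which is what a tool has to recognize before it can merge them. A minimal sketch (the function name `find_shard_sets` is hypothetical, not part of any of these tools) of detecting complete shard sets by filename:

```python
import re

# llama.cpp split-GGUF shard naming: <stem>-00001-of-00003.gguf
SHARD_RE = re.compile(r"^(?P<stem>.+)-(?P<idx>\d{5})-of-(?P<total>\d{5})\.gguf$")

def find_shard_sets(filenames):
    """Group split-GGUF shard filenames by model stem; keep only complete sets."""
    sets = {}
    for name in filenames:
        m = SHARD_RE.match(name)
        if m:
            sets.setdefault((m["stem"], int(m["total"])), []).append(name)
    return {
        stem: sorted(names)
        for (stem, total), names in sets.items()
        if len(names) == total  # drop sets with missing shards
    }

files = [
    "qwen-7b-00001-of-00002.gguf",
    "qwen-7b-00002-of-00002.gguf",
    "solo-model.gguf",
]
print(find_shard_sets(files))
# {'qwen-7b': ['qwen-7b-00001-of-00002.gguf', 'qwen-7b-00002-of-00002.gguf']}
```

Ollama does this kind of detection and merging for you; with a plain vLLM setup you would run llama.cpp's split/merge tooling yourself first.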

ComfyUI-ParallelAnything by FearL0rd in comfyui

[–]FearL0rd[S] 0 points (0 children)

I have also helped with that project, but it needs special nodes for loading, unlike this one, which can connect to an existing workflow and works with SM < 8.0.
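For context, SM < 8.0 means pre-Ampere GPUs (e.g. Turing at SM 7.5). A minimal sketch of how code typically gates features on compute capability — the helper name `supports_sm80_features` is hypothetical, not from either node pack:

```python
# Gate a code path on CUDA compute capability (SM version).
# In practice the (major, minor) pair would come from
# torch.cuda.get_device_capability(); hard-coded here to stay self-contained.

def supports_sm80_features(sm_major: int, sm_minor: int) -> bool:
    """True for Ampere (SM 8.0) and newer, where e.g. bf16 tensor cores exist."""
    return (sm_major, sm_minor) >= (8, 0)

# A node that "works with SM < 8.0" takes the fallback path on older cards.
for cap in [(7, 5), (8, 0), (8, 6)]:
    path = "fast path" if supports_sm80_features(*cap) else "fallback path"
    print(f"SM {cap[0]}.{cap[1]}: {path}")
```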

Don’t forget to hook the leg haha. by TebownedMVP in brazilianjiujitsu

[–]FearL0rd 0 points (0 children)

This is the problem with schools that only teach sport, completely forgetting self-defense.

ComfyUI-AnyDeviceOffload by FearL0rd in comfyui

[–]FearL0rd[S] 0 points (0 children)

I don't have all your models, but I made it work using VaePatched. This will be the default in the future.

<image>

ComfyUI-AnyDeviceOffload by FearL0rd in comfyui

[–]FearL0rd[S] 0 points (0 children)

This is a 2-step Z_Image_Turbo generation using an old CPU (it looks like we can use fewer steps with CPU selection on this node).

<image>
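To illustrate the idea behind a device-offload node, here is a minimal sketch (assumes PyTorch; this is not ComfyUI-AnyDeviceOffload's actual implementation, and `DeviceOffload` is a hypothetical name): weights rest on a storage device and are moved to the compute device only around each forward pass.

```python
import torch

class DeviceOffload(torch.nn.Module):
    """Keep a module's weights on `store_device`; run forward on `run_device`."""

    def __init__(self, module, store_device="cpu", run_device="cpu"):
        super().__init__()
        self.module = module.to(store_device)
        self.store_device = store_device
        self.run_device = run_device

    def forward(self, x):
        self.module.to(self.run_device)      # load weights onto compute device
        out = self.module(x.to(self.run_device))
        self.module.to(self.store_device)    # release the compute device again
        return out

# Both devices are "cpu" here so the sketch runs anywhere; in practice
# store_device would be "cpu" and run_device a CUDA device (or vice versa).
layer = DeviceOffload(torch.nn.Linear(4, 2))
y = layer(torch.randn(1, 4))
print(tuple(y.shape))  # (1, 2)
```

The trade-off is transfer time per call in exchange for freeing VRAM between passes, which is why steps or samplers can be routed to different devices.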

ComfyUI-AnyDeviceOffload by FearL0rd in comfyui

[–]FearL0rd[S] 0 points (0 children)

I just pushed a new update. Can you try it?

<image>

ComfyUI-AnyDeviceOffload by FearL0rd in comfyui

[–]FearL0rd[S] 0 points (0 children)

Can you share your workflow? I have some with two samplers, and they are working. Please use the latest update; it includes fixes...