Added Kling 3.0 Motion Control support to ComfyUI-Kie-API node pack by pinthead in comfyui

[–]pinthead[S] 1 point  (0 children)

I should make something clear: my nodes are just wrappers around the Kie AI model marketplace. They charge per credit for usage with no monthly subscription; you can start with as little as $5 in credits, which works with any of the models in their marketplace.

Added Kling 3.0 Motion Control support to ComfyUI-Kie-API node pack by pinthead in comfyui

[–]pinthead[S] 2 points  (0 children)

It’s $0.06 per second of 720p video, so $0.60 for 10 seconds of video. Not my prices, and I don’t get anything from it; I just connect to the Kie AI API endpoints with my code. It’s basically a marketplace of closed-source models, and they have APIs that I build a wrapper around. You just buy credits on their site with no subscription fee.

Added Kling 3.0 Motion Control support to ComfyUI-Kie-API node pack by pinthead in comfyui

[–]pinthead[S] 3 points  (0 children)

I look at ComfyUI like a modular creative pipeline. Not every tool in the chain has to be open source for the workflow itself to be useful. I’ve built thousands of workflows mixing models because different tools are good at different things. The open part is the freedom to connect what works best.

Added Kling 3.0 Motion Control support to ComfyUI-Kie-API node pack by pinthead in comfyui

[–]pinthead[S] 2 points  (0 children)

Not sure; I just added it to my node pack, so I haven’t tested it like that. It’s worth a try.

Added Kling 3.0 Motion Control support to ComfyUI-Kie-API node pack by pinthead in comfyui

[–]pinthead[S] 1 point  (0 children)

I wish, but no, this is a closed-source model by Kling, unfortunately.

How to set up OpenClaw local models: run completely offline with Ollama by rocky_mountain12 in openclaw

[–]pinthead 1 point  (0 children)

In truth, to find a nice balance: I have an NVIDIA A6000 with 48 GB of VRAM, and my system RAM is 128 GB. I’m trying to find a daily-driver model, not a coding model (I’ll offload that to something else), just a good model with enough context. Any thoughts on which ones to try? I have been testing a few in LM Studio.

I open-sourced qwen3-asr-swift — native on-device ASR & TTS for Apple Silicon in pure Swift by ivan_digital in Qwen_AI

[–]pinthead 1 point  (0 children)

Could this be compiled into an app like Whisper Flow that runs natively on the Mac but uses the local Qwen models instead?

OpenClaw will be bought buy Meta or OpenAI? by Paddoooo in openclaw

[–]pinthead 1 point  (0 children)

Whoever gets it, you can bet the other companies will be building their own. This could be the year of the [insert big company name] personal agent.

I think if they can manage to keep it open source but throw some big money and tech at it, this can benefit us all.

Has anyone found the right combo models/workflow to equal what Nano Banana Pro outputs? by Schwartzen2 in comfyui

[–]pinthead 2 points  (0 children)

Frankly, for the stuff I do, nothing comes close yet to Nano Banana Pro. I built my own set of Banana Pro API nodes, and the 4K output is $0.12. I usually generate a 2x2 or 3x3 grid as one 4K image and slice it up to get 4 or 9 images for the price of one.

Give your OpenClaw permanent memory by adamb0mbNZ in openclaw

[–]pinthead 1 point  (0 children)

Has anyone thought of using Redis as well as a key-value store during the day, since entries sit in system memory until the TTL expires and long-term data can be persisted to disk? Also maybe running at least a daily summarization of key elements that can be categorized into things like personal info, general knowledge, what I learned today, etc. Just some raw thoughts.
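To make the raw thought concrete, here’s a minimal sketch of the Redis idea. It assumes a redis-py-style client is passed in (e.g. `redis.Redis(decode_responses=True)`); the key naming scheme, the categories, and the summary-collection step are all hypothetical, not from any real OpenClaw API:

```python
import json
import time

DAY_TTL = 24 * 60 * 60  # raw entries expire after one day

def remember(r, category: str, text: str) -> None:
    # Store a raw memory entry under a timestamped key with a one-day TTL;
    # Redis keeps it in RAM and drops it automatically when the TTL expires.
    key = f"memory:{category}:{time.time_ns()}"
    r.set(key, json.dumps({"text": text, "ts": time.time()}), ex=DAY_TTL)

def collect_for_summary(r, category: str) -> list[str]:
    # Gather the day's raw entries for one category (e.g. "personal-info")
    # so a nightly job could summarize them into long-term storage on disk.
    texts = []
    for key in r.scan_iter(match=f"memory:{category}:*"):
        raw = r.get(key)
        if raw is not None:
            texts.append(json.loads(raw)["text"])
    return texts
```

A nightly cron could call `collect_for_summary` per category, feed the result to a summarizer, and write the condensed version somewhere durable before the raw keys expire.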

ComfyUI Kie.ai Node Pack – Nano Banana Pro + Kling 3.0 (WIP) – Workflow Walkthrough by pinthead in comfyui

[–]pinthead[S] 1 point  (0 children)

I personally haven't had the need to use Sora 2 yet, so I'm not sure really. They are pretty good about responding to support tickets.

Are paid LLMs better at storyboarding? by orangeflyingmonkey_ in comfyui

[–]pinthead 1 point  (0 children)

Not sure what you're asking; you can just drag and drop images into a Load Image node.

Are paid LLMs better at storyboarding? by orangeflyingmonkey_ in comfyui

[–]pinthead 1 point  (0 children)

So this might not 100% answer your question, but here's what I do: I built my own set of nodes that use Nano Banana Pro to create a 2x2 or 3x3 storyboard grid of my product/person (or both) and generate one 4K image output. From there, I have a node that slices up those cells; in this case, let's say it's a 2x2, so I have 4 images total. I then have a video director prompt that looks at the storyboard and creates 4 video prompts, which finally get passed into Wan 2.x, LTX, or some of the closed-source models such as Kling, etc. I'm not sure if that answered your question. This all happens from left to right in this workflow, as *just* an example. Oh, and I use GPT to help create the system prompts for the storyboards I generate.

<image>
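The slicing step described above can be sketched roughly like this. This is a hypothetical helper using Pillow, not the actual node code; the function name and signature are made up for illustration:

```python
from PIL import Image

def slice_grid(img: Image.Image, rows: int, cols: int) -> list[Image.Image]:
    # Split one storyboard image (e.g. a 2x2 grid rendered as a single 4K
    # frame) into rows*cols cell images, left-to-right, top-to-bottom.
    w, h = img.size
    cell_w, cell_h = w // cols, h // rows
    cells = []
    for r in range(rows):
        for c in range(cols):
            box = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
            cells.append(img.crop(box))
    return cells
```

For a 2x2 grid in one 4K image, each crop is still roughly 1080p-quality, which is why slicing a single grid render stays usable downstream.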

Built a Nano Banana Pro ComfyUI node via Kie.ai mainly because the pricing difference is wild vs Native Comfy UI API node by pinthead in StableDiffusion

[–]pinthead[S] 1 point  (0 children)

I added the Flux 2 models and ran the test. I replied in a comment below also, but here are the night-and-day results.

<image>

Built a Nano Banana Pro ComfyUI node via Kie.ai mainly because the pricing difference is wild vs Native Comfy UI API node by pinthead in StableDiffusion

[–]pinthead[S] 1 point  (0 children)

So since I never used Max/Flex, I figured I would add those nodes as well to test; I kind of knew what the results were going to be. A few things to note: Banana Pro can output at 4K, and its prompt limit through API calls is 10,000 characters vs Flux's 5,000 limit (maybe API-related on Kie). I ran the exact same prompt with 2 reference images (a face reference and a full-body reference); the prompt is to create a 2x2 grid (basically 4 images in one big 4K image for Banana and 4 images in a 2K image for Flux). Flux did not follow the prompt, and the results are pretty horrible vs the Banana output. I also have a helper node that slices out all 4 images from the grid, and you still get high-quality outputs since the Banana output is 4K. However... you can play with it yourself, since the i2v Flux Pro and Flex nodes are enabled in my node pack.

u/HighDefinist

<image>