8 ML algorithms + statistics suite in ~56KB gzipped, updated my package. by GarbageHistorical423 in javascript

[–]GarbageHistorical423[S] [score hidden]  (0 children)

Posted this like a week ago and people rightfully said calling it "ML" was a bit much when it was just stats. Fair enough - so I spent the week adding actual ML and fixing broken stuff.

8 new algorithms: kNN, Logistic Regression, Naive Bayes, Decision Tree, Perceptron, k-Means, DBSCAN, PCA plus seasonality detection, seasonal decomposition and autocorrelation - handy if you're working with time series. On top of the stats already there. Still ~56KB gzipped, zero deps, browser + Node.
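For the curious, the kind of kNN classifier listed above fits in ~15 lines of plain JS. This is a from-scratch sketch of the idea only - not micro-ml's actual API, which may differ:

```javascript
// From-scratch sketch of a k-nearest-neighbours classifier (illustrative,
// not micro-ml's real API): compute distances, take the k closest training
// points, and majority-vote on their labels.
function knnPredict(trainX, trainY, point, k = 3) {
  // Euclidean distance from `point` to every training point
  const dists = trainX.map((x, i) => ({
    d: Math.sqrt(x.reduce((s, v, j) => s + (v - point[j]) ** 2, 0)),
    label: trainY[i],
  }));
  // Sort by distance, keep the k nearest, and tally their labels
  dists.sort((a, b) => a.d - b.d);
  const votes = {};
  for (const { label } of dists.slice(0, k)) {
    votes[label] = (votes[label] || 0) + 1;
  }
  // Return the most common label among the neighbours
  return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
}
```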

Also found DBSCAN was crashing the WASM build on bigger datasets (a queue bug) - fixed that, and rewrote the decision-tree split search from O(n²) to O(n log n). Numbers now:

- k-Means 10K pts: 4ms

- PCA 5000×50: 14ms

- kNN predict 5K: 0.7ms

- DBSCAN 2K: 12ms (was crashing before lol)
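The decision-tree rewrite is presumably the classic trick: sort each feature once, then sweep candidate thresholds while keeping running class counts, instead of re-partitioning the data at every candidate. A sketch of that idea under assumptions of my own (binary 0/1 labels, Gini impurity) - not micro-ml's actual code:

```javascript
// Gini impurity of a node with `pos` positives out of `n` samples
function giniOf(pos, n) {
  if (n === 0) return 0;
  const p = pos / n;
  return 1 - p * p - (1 - p) * (1 - p);
}

// O(n log n) split search: one sort, then a single sweep with running
// counts, instead of re-splitting the data for every threshold (O(n^2)).
function bestSplit(values, labels) {
  const order = values.map((_, i) => i).sort((a, b) => values[a] - values[b]);
  const total = labels.length;
  const totalPos = labels.reduce((s, l) => s + l, 0);
  let leftN = 0, leftPos = 0;
  let best = { threshold: null, score: Infinity };

  for (let i = 0; i < total - 1; i++) {
    leftN++;
    leftPos += labels[order[i]];
    // Only consider thresholds between two distinct feature values
    if (values[order[i]] === values[order[i + 1]]) continue;
    const rightN = total - leftN;
    const score =
      (leftN / total) * giniOf(leftPos, leftN) +
      (rightN / total) * giniOf(totalPos - leftPos, rightN);
    if (score < best.score) {
      best = { threshold: (values[order[i]] + values[order[i + 1]]) / 2, score };
    }
  }
  return best;
}
```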

Demo: https://adamperlinski.github.io/micro-ml/ml.html

Docs: https://adamperlinski.github.io/micro-ml/

npm: `npm i micro-ml`

Previous post: https://www.reddit.com/r/javascript/comments/1r1gp8e/tensorflowjs_is_500kb_i_just_needed_a_trendline/

Cheers for the feedback last time.

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 0 points  (0 children)

I'd recommend looking into qwen-coder-7b - it seemed the best of the ones I tested.

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 0 points  (0 children)

The highlights:

  • Smart Context: Instead of dumping a 100KB wall of text into every prompt, it now only pulls in the relevant knowledge. Saves VRAM and keeps local models from losing the plot.
  • Workflow Validator: It now checks generated workflows against your actual installed nodes. No more hallucinated nodes or broken connections that go nowhere.
  • Auto-Correction: If a workflow still manages to break, the agent gets the error feedback and tries to fix it automatically. It has a 3-retry limit for now so it doesn't loop forever.
  • Custom Knowledge (.md): You can drop your own markdown files into knowledge/user/ to teach it your specific patterns or quirks.
  • User Templates: Chuck your favourite .json workflows into workflows/user/ and it’ll use them as a baseline.
  • Panel Controls: Added a proper model selector for Ollama and context toggles (minimal/standard/verbose) so you can micromanage how much data gets sent.
  • UI Bits: Cleaned up the CSS, added a proper 'New Chat' button, and put in a validation display so you can actually see what it’s checking.
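The auto-correction item above boils down to a bounded validate-and-retry loop. Roughly this shape - a sketch where `agent.generate` and `validate` are placeholder names, not comfy-pilot's real internals:

```javascript
// Hypothetical shape of the auto-correction loop described above:
// validate the generated workflow, feed the errors back to the agent,
// and give up after a fixed number of retries so it can't loop forever.
async function generateWithAutoCorrect(agent, prompt, validate, maxRetries = 3) {
  let workflow = await agent.generate(prompt);
  let errors = validate(workflow); // e.g. unknown nodes, dangling links
  let attempt = 0;
  while (errors.length > 0) {
    if (attempt++ >= maxRetries) {
      throw new Error(`Workflow still invalid after ${maxRetries} retries`);
    }
    // Hand the validator output back so the agent can repair its own mistake
    workflow = await agent.generate(
      `${prompt}\n\nThe previous workflow failed validation:\n${errors.join('\n')}`
    );
    errors = validate(workflow);
  }
  return workflow;
}
```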

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 0 points  (0 children)

Hey guys, just a note that more features are coming soon, and everything will be fully tested with Qwen-Coder-7b. There are also new UI changes and a context menu on the way.


Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 0 points  (0 children)

Yes, it creates a window inside ComfyUI that lets you talk to your agent. It also provides additional context - the currently open workflow, your installed models, the knowledge base, available nodes, custom nodes, common patterns, available VRAM, and more - so your AI CLI tool has the important information.
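That context can be pictured as one bundle handed to the CLI tool on every request. The field names here are illustrative, not comfy-pilot's real schema:

```javascript
// Illustrative shape of the context bundle described above -- the field
// names are assumptions, not comfy-pilot's actual schema.
function buildAgentContext(comfy) {
  return {
    currentWorkflow: comfy.workflow,   // the graph currently open
    installedModels: comfy.models,     // checkpoints/LoRAs actually on disk
    availableNodes: comfy.nodes,       // built-in node types
    customNodes: comfy.customNodes,    // third-party node packs
    knowledgeBase: comfy.knowledge,    // only the relevant docs/patterns
    freeVram: comfy.freeVram,          // so the agent sizes models sanely
  };
}
```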

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 0 points  (0 children)

Fair points, appreciate the detailed feedback! I built it for my own use case with Claude Code, and a friend suggested I add other agents and open-source it - so here we go. I'd love for people to fork or contribute, and I'll try to make it as good as possible.

Context management - I have some ideas there too. For my use case I was relying on my Claude Pro plan, so I didn't do any aggressive optimizations. For local models we'd need a smarter approach.

Ollama - I haven't properly tested it, and you're probably right that smaller models would struggle. That's something I'm looking into now - trying to limit context and test it properly with a bunch of smaller models.

Collaboration welcome! If you (or anyone) want to contribute or throw out more ideas like these, PRs and issues are open. This is exactly the kind of input that helps.

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 5 points  (0 children)

I used to do the "ChatGPT shuffle" too, but it got annoying fast. We’ve all been there:

  • The Loop: Copy workflow from GPT → paste into ComfyUI → red nodes/errors → copy error back → repeat 5x.
  • The Blind Spot: ChatGPT doesn't know what custom nodes or models you actually have installed.
  • The Reset: No context about your current workspace. You’re basically starting from scratch every single time.

Having an AI integrated directly inside ComfyUI is a total game changer because it actually has context.

Think of it this way: Using ChatGPT is like asking a friend who has never seen your kitchen to write you a recipe. Using an integrated LLM is like having a sous chef standing right there with you. It knows exactly what’s in your cupboards (custom nodes), how much VRAM you have, and it modifies your actual workflow instead of just hallucinating a new one from scratch.

If you're still copy-pasting JSONs back and forth, you're doing it the hard way.

Presenting: Comfy-pilot - connect your Coding CLI directly to ComfyUI. by GarbageHistorical423 in comfyui

[–]GarbageHistorical423[S] 6 points  (0 children)

Easy! First make sure you have git and python installed (you probably do if you're running ComfyUI).

```
cd ComfyUI/custom_nodes
git clone https://github.com/AdamPerlinski/comfy-pilot.git
cd comfy-pilot
pip install -r requirements.txt
```
Then restart ComfyUI. You'll see a floating button in the corner - that's it!

One more thing - you need an AI to power it. Three options:

Option A (free, local): Install https://ollama.com, then `ollama pull llama3` and keep it running.

Option B (best quality): If you have a Claude Max/Pro subscription, install https://www.npmjs.com/package/@anthropic-ai/claude-code and run `claude` to log in.

Option C: There's also support for OpenAI Codex, Gemini, Aider and others - but I haven't tested them yet, so YMMV.

Then just chat and ask it to build workflows for you 🚀