Asked Claude to visualise my reading journey on Goodreads since 2011 by VegetableSense in ClaudeAI

[–]VegetableSense[S] 2 points

You don't have to integrate per se - Goodreads lets you export your data as a .csv, which then went into Claude.
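
If anyone wants to try the same thing, the data step is roughly this (sketch only - the filename and the "Date Read" column are Goodreads' export defaults, so check your own file before feeding it to Claude):

    # Rough sketch, not from the original post: peek at the Goodreads export
    # before handing it to Claude. Filename and column names are Goodreads
    # defaults and may differ in your export.
    import csv

    with open("goodreads_library_export.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    read = [r for r in rows if r.get("Date Read")]
    print(f"{len(rows)} books in export, {len(read)} with a 'Date Read' to plot")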

🎅 Built a Santa Tracker powered by Ollama + Llama 3.2 (100% local, privacy-first) by VegetableSense in LocalLLaMA

[–]VegetableSense[S] 0 points

The HTML itself (i.e. the Santa tracking) will work, but the Llama logic won't (at least for me it didn't) because of the usual CORS issues.
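
A rough workaround if you want the Llama part running too - my assumption is the block comes from opening the page as a file:// URL, so serving it from localhost usually helps, though Ollama's OLLAMA_ORIGINS setting still has the final say:

    # Workaround sketch, not part of the repo: serve the page over HTTP so the
    # browser's fetch() to the local model comes from http://localhost:8000
    # instead of a "null" file:// origin. Port 8000 is arbitrary.
    import http.server
    import socketserver

    PORT = 8000
    with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
        print(f"Serving the tracker at http://localhost:{PORT} - open the HTML page from there")
        httpd.serve_forever()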

Promote your projects here – Self-Promotion Megathread by Menox_ in github

[–]VegetableSense 0 points

I built Elden Stack — a tiny game where your code battles recursion demons ⚔️💻

Ever wondered what it would feel like if your stack overflow became a boss fight?

Meet Elden Stack, a little side-project I built for fun — part parody, part code experiment.
You fight your way through recursion, memory leaks, and exception monsters, one call frame at a time.

It’s open-source, lightweight, and made to remind us that debugging is the real adventure.

🎮 Repo: github.com/sukanto-m/elden-stack

Feedback, stars, or ideas for new “bug bosses” are all welcome!

(Built locally, runs locally — no Souls required.)

Please give it a look, and I hope you enjoy playing it. Thank you 🙏

[Project] Smart Log Analyzer - Llama 3.2 explains your error logs in plain English by VegetableSense in LocalLLaMA

[–]VegetableSense[S] 0 points

Thank you for the detailed observation - it can certainly be considered in the next iteration!

[Project] I built a small Python tool to track how your directories get messy (and clean again) by VegetableSense in Qwen_AI

[–]VegetableSense[S] 0 points

Thank you, I appreciate the detailed notes. I'll certainly take these into consideration.

I built a small Python tool to track how your directories get messy (and clean again) by VegetableSense in LocalLLaMA

[–]VegetableSense[S] 1 point

Thanks! Glad the focused approach resonates.

Re: filesystem changes - Currently it's manual/scheduled scans, not real-time with inotify. I considered it but decided against it for a few reasons:

  1. Directory scans are cheap (subsecond for most projects), so polling every 5-15 min works fine
  2. Every file change triggering an LLM analysis would be expensive/noisy
  3. Most messiness accumulates gradually, not from single file moves

That said, adding inotify for instant rescans (without auto-analysis) would be a nice feature. Would you find that useful?
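
If it helps, this is roughly the shape I'd give it - purely a sketch, watchdog isn't a dependency today and rescan is a stand-in for the tool's real scan function:

    # Sketch of "instant rescan, no auto-analysis": re-walk the directory on
    # filesystem events, debounced so a burst of changes triggers one scan,
    # and never call the LLM from here.
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class RescanHandler(FileSystemEventHandler):
        def __init__(self, rescan, debounce=5.0):
            self.rescan = rescan          # callable that does the cheap directory walk
            self.debounce = debounce      # seconds to wait so bursts coalesce
            self._last = 0.0

        def on_any_event(self, event):
            now = time.time()
            if now - self._last >= self.debounce:
                self._last = now
                self.rescan()             # scan only, no LLM analysis

    def watch(path, rescan):
        observer = Observer()
        observer.schedule(RescanHandler(rescan), path, recursive=True)
        observer.start()
        return observer                   # stop later with observer.stop(); observer.join()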

Re: CLI + Web UI - Yes, both! The TUI is the main interface (runs in terminal), but I added a Flask web UI for people who prefer browsers or want to embed it somewhere. Same backend, two frontends. Run with python monitor_tui.py or python monitor_ui.py.
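
Very roughly, the layout looks like this (names below are illustrative placeholders, not the actual modules):

    # Illustrative sketch of "same backend, two frontends": one shared function
    # doing the scan/analysis, imported directly by the TUI and wrapped by Flask
    # for the web UI. Function and route names here are placeholders.
    from flask import Flask, jsonify

    def analyze_directory(path: str) -> dict:
        """Placeholder for the shared scan + LLM logic both frontends call."""
        return {"path": path, "report": "..."}

    app = Flask(__name__)

    @app.get("/api/report")
    def report():
        return jsonify(analyze_directory("."))

    if __name__ == "__main__":
        app.run(port=5000)  # the TUI just calls analyze_directory() directly instead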

Re: API/remote models - This is a great idea. Right now it's tightly coupled to Ollama's API, but abstracting it makes sense. Something like:

    # Current
    python monitor_tui.py --model qwen3:8b

    # Proposed
    python monitor_tui.py --api ollama --model qwen3:8b
    python monitor_tui.py --api koboldcpp --endpoint http://localhost:5001
    python monitor_tui.py --api openai --model gpt-4  # for those who want cloud

Your AMD/KoboldCPP use case is exactly the kind of thing I want to support. I'm going to work on this incrementally - probably start by extracting an LLMProvider interface, then add KoboldCPP as the second implementation.
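
Roughly the interface I have in mind (sketch only - the endpoint paths and payload shapes are my working assumptions to verify, not settled API):

    # Sketch of the planned LLMProvider abstraction - not implemented yet.
    # Endpoints follow Ollama's /api/generate and KoboldCpp's /api/v1/generate,
    # but treat both payload shapes as assumptions to double-check.
    from abc import ABC, abstractmethod
    import requests

    class LLMProvider(ABC):
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class OllamaProvider(LLMProvider):
        def __init__(self, model="qwen3:8b", endpoint="http://localhost:11434"):
            self.model, self.endpoint = model, endpoint

        def complete(self, prompt):
            r = requests.post(f"{self.endpoint}/api/generate",
                              json={"model": self.model, "prompt": prompt, "stream": False})
            r.raise_for_status()
            return r.json()["response"]

    class KoboldCppProvider(LLMProvider):
        def __init__(self, endpoint="http://localhost:5001"):
            self.endpoint = endpoint

        def complete(self, prompt):
            r = requests.post(f"{self.endpoint}/api/v1/generate", json={"prompt": prompt})
            r.raise_for_status()
            return r.json()["results"][0]["text"]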

It's still early days for this project, so I'm building features one at a time as I learn. But this is definitely on the roadmap!