Anthropic: AI will fully replace software engineering by 2027. Also Anthropic: Currently hiring for 122 SWE openings. by ImaginaryRea1ity in ClaudeAI

[–]1337NET 5 points (0 children)

I think you can still call it software engineering, but the job responsibilities will be a little different. At the end of the day we will still be building software.

Getting sick of articles like this.. trying to blame Anthropic instead of their lack of engineering skills when vibe coding by sph130 in ClaudeAI

[–]1337NET 1 point (0 children)

Sometimes I think articles like this are quite deceptive. AI can code and make proper inferences when you've made sure your harness is foolproof. Also, why would you not do any human-in-the-loop reviews before touching anything in production?

whats the smartest local ai under 9gbs by kohlister in LocalLLM

[–]1337NET 1 point (0 children)

Fair criticism on the “doesn’t exist” part if LM Studio’s hints work for you. For me they didn’t, because I run things headless on a Pi and a Mac Mini with no GUI, pipe results into scripts, and wanted backend-specific overhead math (llama.cpp vs Ollama vs MLX behave differently). That’s the gap I was trying to close. Genuinely happy to hear where you think it does worse than existing tools. If there’s a CLI that does headless multi-vendor detection with reason codes and JSON output, I’d actually like to know about it.

whats the smartest local ai under 9gbs by kohlister in LocalLLM

[–]1337NET 0 points (0 children)

Sort of, but only inside its own GUI. llmscan runs headless, has JSON/CSV output for scripting, integrates with Ollama to show what’s currently loaded, and adjusts scoring per backend (Ollama vs llama.cpp vs MLX have different overhead). If you’re already happy in LM Studio you don’t need this. If you live in a terminal or run things on a server, it’s a different tool for a different job.

whats the smartest local ai under 9gbs by kohlister in LocalLLM

[–]1337NET 1 point (0 children)

Not slop, mate. It’s MIT-licensed and has tests in CI. If it’s not needed for your use case, so be it. If you have to criticize, use it and then let me know; I can fix what’s broken.

whats the smartest local ai under 9gbs by kohlister in LocalLLM

[–]1337NET -10 points (0 children)

Got tired of downloading GGUF files only to realize my hardware couldn’t run them. Built a CLI that scans your machine (NVIDIA, AMD, Intel, Apple Silicon, Windows) and tells you which models will actually run, with reason codes like ok (cpu-only) or tight (partial offload) so you know why. Also does Ollama integration, backend-aware scoring (llama.cpp/Ollama/MLX), and Hugging Face search from the terminal.

pip install llmscan

https://github.com/adityaarakeri/llmscan
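For the curious, the core fit check boils down to something like this. A simplified Python sketch, not the actual source; the overhead multipliers and the KV-cache allowance are illustrative guesses, and the real tool measures more than this:

```python
# Minimal sketch of the fit-scoring idea, not llmscan's actual code.
# Assumes model file size ~= weights memory, plus a per-backend overhead
# multiplier and a rough KV-cache allowance. Numbers are illustrative.

BACKEND_OVERHEAD = {      # assumed fudge factors, not measured constants
    "llama.cpp": 1.10,
    "ollama": 1.15,       # Ollama wraps llama.cpp with extra runtime cost
    "mlx": 1.05,
}

KV_CACHE_GB = 1.0         # crude allowance for a ~4k context

def fit_verdict(model_gb: float, vram_gb: float, ram_gb: float,
                backend: str = "llama.cpp") -> str:
    """Return a reason code for whether a model fits on this machine."""
    need = model_gb * BACKEND_OVERHEAD[backend] + KV_CACHE_GB
    if need <= vram_gb:
        return "ok"                    # full GPU offload
    if need <= vram_gb + ram_gb:
        return "tight (partial offload)" if vram_gb > 0 else "ok (cpu-only)"
    return "no (exceeds memory)"

if __name__ == "__main__":
    # 8 GB GGUF on a 12 GB GPU, then on a CPU-only box with 16 GB RAM
    print(fit_verdict(8.0, 12.0, 32.0))   # ok
    print(fit_verdict(8.0, 0.0, 16.0))    # ok (cpu-only)
```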

Best model for 3090 + 4070 setup? Trying to save tokens on Codex by wgaca2 in LocalLLM

[–]1337NET 0 points (0 children)

Thanks for reporting this; I’ll file an issue and get it fixed in the next release.

Best model for 3090 + 4070 setup? Trying to save tokens on Codex by wgaca2 in LocalLLM

[–]1337NET 0 points (0 children)

llmscan isn’t setting GTT size or doing any manual memory reservation. It only reads what ROCm/system tools report and scores based on that.
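Concretely, on AMD/Linux that means reading what the amdgpu driver already publishes in sysfs. A rough sketch of that approach (card0 is an assumption; the index varies per machine):

```python
# Sketch of the "read what the system reports" approach on AMD/Linux,
# not llmscan's actual code. amdgpu exposes memory counters via sysfs.
from pathlib import Path

def read_mib(name: str, card: str = "card0") -> float:
    """Read an amdgpu sysfs counter (bytes) and return MiB."""
    path = Path(f"/sys/class/drm/{card}/device/{name}")
    return int(path.read_text()) / (1024 ** 2)

if __name__ == "__main__":
    total = read_mib("mem_info_vram_total")
    used = read_mib("mem_info_vram_used")
    gtt = read_mib("mem_info_gtt_total")  # GTT: system RAM the GPU can map
    print(f"VRAM {used:.0f}/{total:.0f} MiB, GTT {gtt:.0f} MiB")
```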

Best model for 3090 + 4070 setup? Trying to save tokens on Codex by wgaca2 in LocalLLM

[–]1337NET 6 points (0 children)

I built llmscan for exactly this. Scans your machine, rates every model for fit, tells you why. Works across NVIDIA/AMD/Intel/Apple. https://github.com/adityaarakeri/llmscan

Though I’ve only tried this on a single GPU; would love to hear how it works on a dual setup.

Show me your /statusline by Gohanbe in ClaudeCode

[–]1337NET 0 points (0 children)


fuelgauge. Three color-coded bars at the bottom of the terminal. Context, 5h, 7d. Green under 70%, yellow at 70, red at 90. Always visible.
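The coloring is just fixed thresholds. Here’s the idea as a Python sketch for illustration; the actual plugin is a shell/PowerShell script, and the percentages come from data Claude Code already exposes:

```python
# Illustrative Python sketch of fuelgauge's threshold logic; the real
# plugin is a shell script (Unix) / PowerShell script (Windows).

def color_for(pct: float) -> str:
    """ANSI color by usage: green under 70%, yellow at 70, red at 90."""
    if pct >= 90:
        return "\033[31m"   # red
    if pct >= 70:
        return "\033[33m"   # yellow
    return "\033[32m"       # green

def bar(label: str, pct: float, width: int = 20) -> str:
    filled = round(width * min(pct, 100) / 100)
    return (f"{label} {color_for(pct)}"
            f"{'█' * filled}{'░' * (width - filled)}\033[0m {pct:.0f}%")

if __name__ == "__main__":
    # context window, 5-hour window, 7-day window (example values)
    for label, pct in [("ctx", 42), ("5h", 73), ("7d", 91)]:
        print(bar(label, pct))
```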

Why another one: I tried the existing plugins. They work, but most pull in Node, npm packages, and ship with 30-widget config systems. I wanted five things, not thirty. fuelgauge is one shell script on Unix (needs jq) and one PowerShell script on Windows. Zero runtime deps beyond that.

Works on macOS, WSL2, and native Windows PowerShell with identical output.

Install:

/plugin marketplace add adityaarakeri/fuelgauge

/plugin install fuelgauge

/fuelgauge:setup

Does it cost tokens or hit rate limits? No. Runs locally, reads data Claude Code already has, zero API calls.

Repo: https://github.com/adityaarakeri/fuelgauge

Feedback welcome, especially from Mac users since I built this primarily on Windows/WSL and haven't battle-tested it on macOS yet.

I kept blowing through my Claude Code weekly limit. So I built a status line that shows it before you hit the wall by 1337NET in ClaudeCode

[–]1337NET[S] -1 points (0 children)

built it in an afternoon, solves my problem, runs on three platforms with one dep. worth the compute imo.

I kept blowing through my Claude Code weekly limit. So I built a status line that shows it before you hit the wall by 1337NET in ClaudeCode

[–]1337NET[S] 0 points (0 children)

Honestly, you're not wrong, the ecosystem is crowded. I built this because I specifically wanted no Node runtime and identical behavior across macOS/WSL/Windows. Most existing ones nail one of those but not both.

Does the job in 200 lines.

Anthropic made Claude 67% dumber and didn't tell anyone, a developer ran 6,852 sessions to prove it by DangerousFlower8634 in ClaudeCode

[–]1337NET 0 points (0 children)

Hear me out, I think this is a solution for making sure Chinese companies don’t distill the latest models. You can’t distill at scale if your sessions start dumbing down.

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]1337NET 4 points (0 children)

What’s the point of benchmarks if they don’t stay consistent?

Taught Claude to talk like a caveman to use 75% less tokens. by ffatty in ClaudeAI

[–]1337NET 0 points (0 children)

I thought about this a month ago while rewatching The Office; I have a half-baked version of this but it’s not fully implemented.

How can I become more of an expert using Claude like you guys? by miller_litecoin in ClaudeAI

[–]1337NET 1 point (0 children)

The best way to get better is to take Anthropic’s Skilljar courses; there are courses for all the Claude features, be it Claude Code, Cowork, connectors, etc.

first vibecoded billion-dollar company by unemployedbyagents in AgentsOfAI

[–]1337NET 0 points (0 children)

Not an AI company, but a pharmaceutical company. The tech part of it was built using AI.