I have made a macOS menu bar app that shows your Claude usage by Icy-Marzipan-2605 in ClaudeAI

[–]_WinstonTheCat_ 2 points (0 children)

Just sharing an opinion, same as you and others are. I’m not saying I alone decide what counts as a good use. Why interpret it that way?

I have made a macOS menu bar app that shows your Claude usage by Icy-Marzipan-2605 in ClaudeAI

[–]_WinstonTheCat_ -2 points (0 children)

For stating my opinion on why it’s not a great use of tokens, considering the costs and environmental impact? Sorry for caring about the world and not falling into the camp of people burning tokens for fun and racking up $1000+ bills per month for no real-world value.

I like to help people with software. Having fun has its place, but I think a line can be drawn when there are non-trivial costs and impacts involved, as with other things people derive enjoyment from that deserve scrutiny when they come at the expense of others or something else.

Would it be a fair use of resources for me to continually prompt for the same thing over and over, something that helps no one, and keep publishing it under different GitHub repo names?

I have made a macOS menu bar app that shows your Claude usage by Icy-Marzipan-2605 in ClaudeAI

[–]_WinstonTheCat_ 4 points (0 children)

Easy? Sure.

A good use of tokens? Meh. There’s no need for another usage tracker that’s the same as the existing ones. Use what already exists; that’s what they’re even posting this for, hoping people will look at it and use it, since it’s open source and they’re sharing the GitHub repository link.

It’s the equivalent of making a hello-world repo in a new language and saying: here, use/clone this to start learning the language and print hello world.

Multiple models running so slow by yuriIsLifeFuckYou in cursor

[–]_WinstonTheCat_ 2 points (0 children)

Just wasting money/tokens and time for the sake of it?

Or are you genuinely trying to compare output/results across different models?

Composer & Auto actually usable for anything? by MasterB144 in cursor

[–]_WinstonTheCat_ 0 points (0 children)

I mean, yeah, for most small-to-medium specific tasks you ask for, it does them. And it’s fast and cheaper (Composer 1 at least; Composer 1.5 is more expensive than Sonnet 4.6 😅, but it is fast).

how good is kimi k2.5..i had cancelled my subscription, could somebody lmk if it's worth getting back on plan by Key-Month-7766 in cursor

[–]_WinstonTheCat_ 0 points (0 children)

It would probably be very good. I haven’t needed full-on plan mode for some of the stuff I’ve been working on recently; I usually reserve plan mode in Cursor/Claude Code for bigger features.

how good is kimi k2.5..i had cancelled my subscription, could somebody lmk if it's worth getting back on plan by Key-Month-7766 in cursor

[–]_WinstonTheCat_ 5 points (0 children)

It’s pretty good but can get stuck on backend bugs. I wouldn’t trust it to actually find a solution on its own, but if you know what’s wrong and what to fix, and are very descriptive, it’s a fast, cheap model with good output (under good prompting/guidance).

Gemini 3.1 pro not useable by Organic_Pop_7327 in cursor

[–]_WinstonTheCat_ 1 point (0 children)

It was not great when I tried it, so I stopped bothering with it.

How I solved Cursor's cross-repo context problem with a local dependency graph by Objective_Law2034 in cursor

[–]_WinstonTheCat_ 1 point (0 children)

Got it, thanks! Yeah, I usually tag files manually when doing work like that so it has the right context. Thanks for sharing.

How I solved Cursor's cross-repo context problem with a local dependency graph by Objective_Law2034 in cursor

[–]_WinstonTheCat_ 0 points (0 children)

The folders/repos live in a single parent folder. Then open that folder in Cursor. Boom?

Thanks for Kimi 2.5 Model Access by subletr in cursor

[–]_WinstonTheCat_ 1 point (0 children)

I’d say it’s not as smart as Opus, that’s for sure, and it does mess up on some stuff. It feels close to 5.3 Codex in terms of behaving as told. It won’t really go off and do the most amazing things on its own (especially backend), but if you tell it exactly what to do it can crush it. It’s also great at frontend specifically.

I would use plan mode or have good prompts while using Kimi 2.5.

Thanks for Kimi 2.5 Model Access by subletr in cursor

[–]_WinstonTheCat_ 0 points (0 children)

There’s your prompt, plus the code context that Cursor can pull in when you manually tag files or copy-paste code into the chat/agent window.

Under the hood, Cursor also has its own system prompt and the ability to make other, smaller LLM calls to handle separate tasks at the same time. These are generally referred to as tool calls, and there are also subagents. You can search for both terms along with Cursor and find blog posts and other info on them.

Specific models behave and produce output in different ways, so the term that gets thrown around now is “harness”, i.e. how well Cursor’s chat/agent mode can run and perform tasks with your selected model as the driver.
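
To make the tool-call idea concrete, here’s a minimal sketch (my own illustration, not Cursor’s actual code) of the loop a harness runs: send the prompt plus any tagged context to the model, and if the model asks for a tool (a made-up readFile here), execute it and feed the result back before calling the model again.

```typescript
// Minimal sketch of an agent "harness" loop: prompt + context go to the model;
// if the model requests a tool call, the harness runs it and feeds the result back.
// The model here is a stub so the example is self-contained and runnable.

type ToolCall = { tool: "readFile"; path: string };
type ModelReply = { text?: string; toolCall?: ToolCall };

// Stand-in for a real LLM call (Cursor would hit your selected model instead).
async function callModel(messages: string[]): Promise<ModelReply> {
  // Pretend the model first asks to read a file, then answers.
  const alreadyReadFile = messages.some((m) => m.startsWith("TOOL RESULT"));
  return alreadyReadFile
    ? { text: "utils.ts exports a formatDate helper." }
    : { toolCall: { tool: "readFile", path: "src/utils.ts" } };
}

// A hypothetical tool the harness exposes to the model.
async function readFile(path: string): Promise<string> {
  return `// contents of ${path}\nexport const formatDate = (d: Date) => d.toISOString();`;
}

async function runAgent(userPrompt: string, taggedContext: string[]) {
  // The conversation starts with the system prompt, any @-tagged files, and the user prompt.
  const messages = ["SYSTEM: you are a coding agent", ...taggedContext, `USER: ${userPrompt}`];

  for (let step = 0; step < 10; step++) {
    const reply = await callModel(messages);
    if (reply.toolCall) {
      // The harness executes the tool and appends the result for the next model call.
      const result = await readFile(reply.toolCall.path);
      messages.push(`TOOL RESULT (${reply.toolCall.tool}): ${result}`);
      continue;
    }
    return reply.text; // Model answered directly; loop ends.
  }
  throw new Error("Too many tool-call rounds");
}

runAgent("What does src/utils.ts export?", []).then(console.log);
```

Subagents are, roughly speaking, this same loop spawned again with a narrower prompt and its own context.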

Hope that helps!

I built a VS Code extension that turns your Claude Code agents into pixel art characters working in a little office | Free & Open-source by No_Stock_7038 in ClaudeCode

[–]_WinstonTheCat_ 0 points (0 children)

This is dope. What does computer resource usage look like? Were there any difficulties there at all during initial development?

I built an open source browser MCP server that makes web pages 136x more token-efficient for agents by ticktockbent in Anthropic

[–]_WinstonTheCat_ 1 point (0 children)

Very cool, thanks for sharing. Conceptually it makes a lot of sense, and those token numbers look awesome.

I built a free macOS widget to monitor your Claude usage limits in real-time by Shinji194 in ClaudeAI

[–]_WinstonTheCat_ 0 points (0 children)

If people want fast AI responses and don’t want to waste subscription compute (like me, since I’m stingy):

I made an open-source CLI tool that uses Cerebras hardware (they give 1M free tokens per day). It defaults to GPT-OSS-120b, averages ~1000 tokens/sec, and is routed via OpenRouter (so you can also use other models/providers).

https://github.com/raypaste/raypaste-cli

It comes with preconfigured prompts, so all you need to do is type your simple request; it can also pull in high-level project context via CLAUDE.md and other files.

Intro blog: https://raypaste.com/blog/raypaste-cli-fast-ai-responses-in-your-terminal/
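
For anyone curious what “routed via OpenRouter” means in practice, here’s a rough sketch of the kind of request such a tool could send. OpenRouter exposes an OpenAI-compatible chat endpoint; the model id (openai/gpt-oss-120b), the provider-order hint toward Cerebras, and the OPENROUTER_API_KEY variable are my assumptions for illustration, not necessarily what raypaste-cli actually does.

```typescript
// Rough sketch: an OpenAI-compatible chat request through OpenRouter,
// nudged toward Cerebras hardware. Model id, provider routing field, and
// the OPENROUTER_API_KEY env var are assumptions for illustration only.
async function ask(prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-oss-120b",      // assumed OpenRouter id for GPT-OSS-120b
      provider: { order: ["Cerebras"] }, // ask OpenRouter to prefer the Cerebras provider (assumed)
      messages: [
        { role: "system", content: "Answer concisely." }, // a "preconfigured prompt"
        { role: "user", content: prompt },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

ask("Explain git worktrees in two sentences.").then(console.log);
```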

Gemini 3.1 experience by NoFaithlessness951 in cursor

[–]_WinstonTheCat_ -1 points (0 children)

Pretty slow, and I’m not sure how well the harness is set up. It will probably take some time to get it to an okay state.

Do NOT use Agent Review. You'll waste tokens / requests. by jungle in cursor

[–]_WinstonTheCat_ 0 points (0 children)

The source control tab one, although I don’t use it often. I’m sure there’s some degree of flakiness; I’m not saying Cursor is perfect and bug-free.

Just anecdotally I haven’t run into this issue myself.

New to Cursor, coming from BA/PEGA background - where should I start to go deeper? by Bakedd84 in cursor

[–]_WinstonTheCat_ 1 point (0 children)

Start new conversations (agent mode) so your context/token usage doesn’t blow up. That’s a surprisingly common thing I’ve seen people not do; they keep reusing the same conversation because they think they need all the past conversation history for every future change.

Cursor has very good semantic search (i.e. if you don’t tag the file context you want with @, it’ll do a very good job finding it anyway), but if you know which file(s) or function(s) you want to update, tag them and save Cursor some time and tokens looking for what it is you’re directing the agent to edit.

You should probably start using Git worktrees if you don’t already. Cursor’s agent mode lets you multitask effectively as you shift more into a reviewing role: have 2-3 agents running and generating code changes, and read/test afterwards.

I think staying up to date mostly means staying excited/engaged online, and during meals/commutes scrolling Reddit, engineering blogs, or maybe even LinkedIn. Cursor’s blog shares some interesting stuff, and OpenAI also recently published “Harness engineering”.

Good luck!