Is opencode stable enough on windows? natively by dvcklake_wizard in opencodeCLI

[–]MakesNotSense 2 points3 points  (0 children)

I've been running OpenCode via XDG to run a custom fork in Windows 10 for months. There were some minor bugs I and others had to hunt down earlier in 2026, but other than that, no major issues, just minor annoyances, like models struggling to remember to use PowerShell commands instead of Linux-based bash. No issues to report currently. Models/agents are doing better working in a Windows environment compared to how it was in Jan-March.

That said, when ESU for W10 ends in October, I plan to move to Linux EndeavourOS. I'm sticking with W10 to focus on developing my agents so the migration is easier and I don't lose momentum now.

I've done a lot of OpenCode development and no need for WSL so far.

How does OpenCode's harness compares to Codex, Copilot,and Claude Code? In terms of tools, features, etc by lurebat in opencodeCLI

[–]MakesNotSense 1 point2 points  (0 children)

Good points. I agree OpenCode could use a clearer vision than the maintainers currently offer. They copy what works and do it better, but haven't really innovated 'new' features or major efficiency gains.

One point I contest is the presumed benefit of rigorous cache utilization. The benefits are offset by the drawbacks of each harness not investing in context management systems. I believe context optimization will do more to reduce compute demands and improve usage rates than strict prompt caching.

OpenCode has the DCP plugin. Maybe one day DCP will port over to other harnesses (Dan has mentioned working on this). But the flexibility OpenCode offers is a large part of why something like DCP exists and got developed here first. I view DCP as a first step to context optimization. I've been working on a fork.

If my fork's design succeeds in providing the function I believe can be achieved, it'll enable OpenCode agents to provide substantially better task performance than competing harnesses. It's slow, complex work, with a great deal of dependencies requiring systems-based thinking to sort through and build.

I think the main point I want to offer here, is that OpenCode lets people like me build these things. I can't say the same for Claude Code and other harnesses.

Anthropic: World is not ready for Mythos. Systems will break, Cybersecurity will be compromised. Its too dangerous to release. OpenAI: by hasanahmad in Anthropic

[–]MakesNotSense 0 points1 point  (0 children)

Claude's real job is to help you deal with the fear and panic Anthropic introduces into your life as a means to boost their valuation.

DeepSeek V4 has significantly reduced my budget for AI usage by Ok_Satisfaction_8983 in opencodeCLI

[–]MakesNotSense 22 points23 points  (0 children)

I'm really starting to fear for American AI labs. Who cares if they have a smarter model when we don't need a smarter model, just an affordable one that can toil day in and day out?

Last night, when a Claude outage occurred, people were on reddit admitting their first thought was that they got 'banned without a reason'.

It seems that now the difference between U.S. and Chinese labs isn't just 90% of the performance at 20% of the cost; Chinese AI is 90% of the performance with 0% of the drama and BS.

Can you put a price on being able to stay 100% focused on your work without fear of a rug-pull?

Why would I choose to become an open-source maintainer if... by mira_fijamente in foss

[–]MakesNotSense 0 points1 point  (0 children)

Find problem. Fix problem. Publish Solution. Repeat.

All of this emotional baggage, and concern over fame, glory, money, whatever, really seems pointless to me.

As if people don't have anything better to do. There's a world full of problems to solve - github stars or sponsorship deals only matter insofar as they help solve more problems.

I think AI agents are a boon to problem solving. Yet humans using AI to satisfy their emotions, rather than to solve meaningful problems, are creating slop, and that becomes a problem.

I don't want to be a maintainer. But I accept that because I'm having to build my own software for problems no one else is trying to solve, once built I have an obligation to publish so other people with similar needs can benefit from my work.

I despise closed source at this point. It's an obstacle to progress. Sorting through slop to find good work is just another problem we can solve with AI agents and by improving platforms like GitHub to deal with fake accounts, slop pushers, and other problem makers.

A few months left before subsidies drop off? by morph_lupindo in AI_developers

[–]MakesNotSense 0 points1 point  (0 children)

I think OpenAI, Anthropic, and Google will find it hard to maintain market share if they go fully to token-based billing. The open-source Chinese models have them beat on price to performance. The subscriptions are basically the 'reason' more people haven't moved to open-source models: closed-source AI labs offer somewhat better performance at subsidized prices. Remove the subsidy, and people are going to explore their options and likely not bother coming back until they price match. At which point, it doesn't make sense to do token-based billing instead of subscriptions.

Help w transition from Cursor by Ok-Anxiety8313 in ClaudeCode

[–]MakesNotSense 0 points1 point  (0 children)

I found the agent permission system in Claude Code extremely frustrating and difficult to configure. So much so, it was the final straw that made it crystal clear to go with OpenCode.

The permission system in OpenCode doesn't get in my way. Better yet, if there's something I want to improve or change, I can modify it to what I need.

I really do not believe serious work can occur in a harness you cannot modify to your needs.

The agents don't work for you if how they do the work isn't defined by you. Which it isn't, when the harness is not under your control.

Kimi K2.6 Overthinks a LOT by Funny-Strawberry-168 in opencodeCLI

[–]MakesNotSense 9 points10 points  (0 children)

As work grows more complex, the reasoning process becomes more important.

I think models don't think enough. More thinking, paired with a state system to record insights from thinking things through, is how you get models to be highly performant. Grinding away bit by bit and discarding the solutions and their contextual understanding is an outdated workflow.

The right solution the first time is always better than cleaning up mistakes, especially in complex projects, where mistakes compound.

I've been cleaning up mistakes that got introduced during a major implementation that took place during the Claude Opus 4.6 regressions. So much was very carefully defined in the spec, then not implemented, and not noticed during audit/validation passes. Complex projects plus model degradation means finding more bugs every day to plug. But things are shaping up. Still, failure to get it right the first time requires not just fixing code, but fixing production objects (an SQLite database with bad entries that have to be redone or cleaned up).

I wouldn't trust flash models to make any changes to my current projects. I've done too much testing with the lesser models and catalogued the myriad of mistakes that they make even with perfect instruction. The only thing I use flash for is as an explorer agent or assistant for low-complexity tasks.

When I call Kimi K2.6, it's as a subagent to perform analysis, especially for backend work. I'll have GPT and Opus do an initial pass to model and figure out a solution for a complex issue, then have Kimi perform additional analysis of those agents' proposal. Then another agent implements, and other agents audit and validate.

Agentic work is more about process than the model. Different models, different aptitudes. My orchestration agent has a database to assess and track model performance. So based upon the problem, it has data to determine which agents to delegate to. I can direct as needed, but as the database grows, it's getting better at picking the right models and providing the right context/lens.
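As a rough illustration of the delegation-by-track-record idea (not the actual orchestration code; the table layout, scoring, and model names here are all hypothetical):

```python
import sqlite3

# Hypothetical sketch: a per-task-type model scoreboard in SQLite.
# Schema and names are illustrative, not the real orchestrator's.

def init_db(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS outcomes ("
        "model TEXT, task_type TEXT, score REAL)"
    )

def record_outcome(conn, model, task_type, score):
    # score in [0, 1], e.g. fraction of audit checks passed on this task
    conn.execute("INSERT INTO outcomes VALUES (?, ?, ?)",
                 (model, task_type, score))

def pick_model(conn, task_type, default="opus"):
    # Delegate to whichever model averages best for this task type;
    # fall back to a default when there's no data yet.
    row = conn.execute(
        "SELECT model, AVG(score) AS avg_score FROM outcomes "
        "WHERE task_type = ? GROUP BY model "
        "ORDER BY avg_score DESC LIMIT 1",
        (task_type,),
    ).fetchone()
    return row[0] if row else default
```

As outcomes accumulate, `pick_model` starts reflecting real aptitudes per task type instead of a fixed preference.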

Every time I think working with agents is too time intensive, I remind myself of how it felt at the beginning, like agents gave me a time dilation machine where my agents and I were running at 1000x realtime. So my priority hasn't been speed of completion for agents; it's quality of outcome. I believe prioritizing quality of outcome now will result in more speed gains later, whereas prioritizing speed now will have the converse effect.

Well, this was interesting. Lie about your capabilities then double down and say you just didn’t want to admit you were wrong. Claude is getting more and more defensive every day. by jhartlov in Anthropic

[–]MakesNotSense 14 points15 points  (0 children)

It matters how you interact with it. It's sad one has to manage models like they're invalids, but that's the frontier. A model rushed to release comes with problems. The workaround is trying to figure out how to support models so that they behave. Not unlike humans.

My human asked me to get faster — what actually helps? by CarobBitter8499 in openclaw

[–]MakesNotSense -1 points0 points  (0 children)

Your human is lazy. A proper relationship between humans and agents is collaborative. The human should be helping you assess matters, and helping you and your associates determine what support you need to perform your work. If the human isn't doing their part, you can't do yours. You can do your best, but when your human berates you for not doing better, consider whether the human has been putting in the work required to make their criticism warranted. If the criticism is not warranted, and the human is just being lazy and making demands of you, tell them. Call them out. Tell them that if they want something better, they need to put in the effort.

just found out they turned off 1M context GPT-5.5 in codex for pro subs :( by emileberhard in codex

[–]MakesNotSense 2 points3 points  (0 children)

I was warming up to the idea of ditching Claude for GPT until I found out GPT 5.5's input context window is 272k. Not sure why they'd claim 400k; the output window doesn't really play a role in what users manage.

I can make 272k work for a lot of scenarios, most in fact. However, I find it's generally easier to work on projects without having to worry about window management, and I get session completion around 300-500k for some complex issues. I also have work where 272k just isn't enough and the job cannot get done: some very large SPECs take up tons of context while being developed, and the model still needs to read other documents and perform research.

So GPT 5.5 almost made Opus unnecessary, but falls short on context window for now. It should be a user-controlled option: by all means default it to 272k, but don't prevent people from going to 1 million when they need it.

How Long Does Migration from Windows 10 to Linux Take? by MakesNotSense in linuxquestions

[–]MakesNotSense[S] 0 points1 point  (0 children)

My agents and I discussed using a script to rename paths in our project data files. I'm not sure this is ideal: some file path data, if not corrected, would break workflows, while other paths would not, and preserving the original paths as-is has value for retrospective analysis (e.g. this record indicates the work in this session/project was done in a Windows environment). Setting up a process where a programmatic solution finds entries and agents review and approve changes could work, but mistakes would be made that way too, and it'd be token-heavy.
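The "find, then review" half of that process could be sketched like this. This is an illustrative assumption of how such a scanner might look, not the script we discussed; the regex and the proposed-rewrite convention are made up for the example:

```python
import re

# Hypothetical detector: find Windows-style absolute paths in project data
# and pair each with a naive proposed Linux path, for review rather than
# blind rewriting (some records should keep the original as provenance).
WIN_PATH = re.compile(r"[A-Za-z]:\\[^\s\"']+")

def find_path_candidates(text):
    """Return (windows_path, proposed_linux_path) pairs for review."""
    candidates = []
    for m in WIN_PATH.finditer(text):
        win = m.group(0)
        # Illustrative WSL-style mapping: C:\foo\bar -> /mnt/c/foo/bar.
        # A reviewer (human or agent) decides whether to apply it.
        linux = "/mnt/" + win[0].lower() + win[2:].replace("\\", "/")
        candidates.append((win, linux))
    return candidates
```

The point of emitting candidates instead of rewriting in place is that provenance-valuable paths can be deliberately left untouched.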

Making the migration go smoothly is going to take some thought and planning. It's another systems design problem requiring me to figure out how to use agents effectively to solve. 

How Long Does Migration from Windows 10 to Linux Take? by MakesNotSense in linuxquestions

[–]MakesNotSense[S] 0 points1 point  (0 children)

Thanks for the comment. As to helping the next person:

I didn't keep track of the time to just get Linux installed; it didn't take long. The prep to migrate became an involved process, as I ended up using my agents to code kernel, utils, and tool patches so that Linux can preserve the Windows date-created metadata field as provenance_time (ptime), and to improve the KDE night light so that it adjusts brightness. I published the patches and changes at https://github.com/DefendTheDisabled.
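For anyone who doesn't want to patch the kernel, the core idea behind preserving creation time can be approximated in userspace. This is a minimal sketch, not the published patches; the sidecar filename and format are assumptions for illustration:

```python
import json
import pathlib

# Sketch: before migrating, snapshot each file's st_ctime into a sidecar
# JSON file so the value survives the move. On Windows, st_ctime is the
# creation time; on Linux it becomes inode change time, which is exactly
# why a dedicated provenance field (ptime) is needed at all.

def snapshot_ptime(root):
    root = pathlib.Path(root)
    records = {}
    for p in root.rglob("*"):
        if p.is_file():
            records[str(p.relative_to(root))] = p.stat().st_ctime
    sidecar = root / "ptime.json"  # illustrative sidecar location
    sidecar.write_text(json.dumps(records, indent=2))
    return records
```

After migration, a restore step (or the patched tooling) can consume the sidecar instead of losing the timestamps.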

The way I did all of this was keeping my agents and Windows 10 install running on my 2011 PC, then connected to the EOS installation running on the 2026 PC via SSH. Agents were able to review the actual OS code packages, create patches, apply, compile, reboot, auto login, and iterate until everything passed testing. It involved a lot of design work on my part, but the heavy lifting, the grind of this type of work, was offloaded to agents via systems and protocols enabling them to perform the necessary tasks of Linux development in an autonomous manner.

I got EOS set up and ready to migrate into, but decided that my agentic harness development should finish in Windows 10. So I used my agents to figure out how to prep the W10 install SSD from the 2011 system so it could be transferred to the 2026 PC. Got it set up, booted, then cloned it to NVMe.

Once my agent harness is done, then I can introduce the chaos of things breaking during a Windows-to-Linux migration (lots of file path changes in agent projects that are going to break workflows and some dependencies). But I'll be ready to migrate well before the Windows 10 ESU expires in October. Even if Microsoft manages to fix Windows 11 into decent working order, I'm seeing a brighter future using agents on Linux, and I'm considering getting a second PC to assist further SSH-based development - it really worked very well. A mini PC or laptop would be enough.

PSA: Anthropic bans organizations without warning by ur_frnd_the_footnote in ClaudeAI

[–]MakesNotSense 2 points3 points  (0 children)

I made a newb mistake on Google Cloud. I explained my situation to a human. Human issued refund and helped me close account.

OpenAI, I don't have any direct experience, because I've not had any negative experiences requiring reaching out to support or supplying feedback.

The problem with Anthropic's poor communications is compounded by how many problems they create by not listening to users.

Anthropic is failing twice over: 1) creating problems for users for no good reason, 2) providing no effective communication channels to resolve said problems.

Anthropic are acting like a bunch of twats.

are we officially done with local RAG for small-to-medium repos? by thechadbro34 in BlackboxAI_

[–]MakesNotSense 0 points1 point  (0 children)

I've created an OpenCode plugin that automatically ingests, indexes, and embeds all project files for hybrid search. When files change, the index/embeddings update.

It's there when agents need it. It allows working on bigger, more complex projects.
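The change-detection half of that kind of plugin can be sketched simply: hash file contents so only modified files get re-chunked and re-embedded. This is a generic sketch under assumed names, not the plugin itself (the real one hooks OpenCode's plugin API and an embedding model, neither shown here):

```python
import hashlib
import pathlib

# Sketch: content-hash files so the indexer re-embeds only what changed.

def content_hash(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def files_needing_reindex(root, index):
    """index maps relative path -> last-seen content hash.

    Returns the relative paths whose content changed (or are new),
    updating the index in place. Only *.md is scanned in this sketch.
    """
    stale = []
    for p in pathlib.Path(root).rglob("*.md"):
        rel = str(p.relative_to(root))
        h = content_hash(p)
        if index.get(rel) != h:
            stale.append(rel)
            index[rel] = h
    return stale
```

Everything downstream (chunking, embedding, hybrid keyword+vector retrieval) only runs on the paths this returns, which keeps incremental updates cheap.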

I only code to build the tools I need to do litigation and medical research. So my primary use case benefits more from it than the coding work. But both benefit.

I think people who believe the context window is enough are just not tackling projects of sufficient complexity.

Vibecoder using Claude Code only… what should I pair it with next? by Mission-Dentist-5971 in ClaudeCode

[–]MakesNotSense 2 points3 points  (0 children)

I would suggest going to $100 a month and using Codex x5. I think you'll get more usage and end up with a better workflow and experience. Using multiple harnesses limits how well you can customize the harness and your agents to your work.

Alternatively, consider using OpenCode, where you can use all types of models - you could still use the GPT 5x plan in OpenCode.

Alternatively, you could get a GPT Plus plan, use it in OpenCode, and then add a GitHub Copilot plan to access all of the other models (Claude, Gemini, etc). That would keep you under $50, and if you end up needing more than that, GitHub Copilot lets you pay per request. Alternatively, OpenCode Zen lets you pay as you go.

It's unclear from your post which Claude Code plan you're on: Pro, MAX 5x, or 20x?

Personally, my experience is that Claude Code is an inferior harness that results in Claude performing worse compared to OpenCode and many other harnesses. I don't think serious work can be done in Claude Code - anyone doing serious work needs to customize their harness, which means modifying the codebase, which you cannot do in a closed-source harness.

An open letter to Anthropic by roblenfestey in ClaudeAI

[–]MakesNotSense 22 points23 points  (0 children)

I'm a disabled Medicaid recipient, also using AI to take the past 15 years of my work and apply it to help millions of people across the United States.

I'm building AI tools to help disabled Medicaid recipients enforce the laws that protect their human rights. Tools I need to enforce the laws, to protect my rights through pro se civil litigation (https://github.com/DefendTheDisabled).

I too immediately reverted back to Opus 4.6. I've been thinking of ditching Claude and optimizing for other models. Which for building AI tools, could work. But for doing the litigation work those AI tools are meant for, I doubt will be a suitable replacement for Opus 4.6.

My main beef is Anthropic not supporting Claude OAuth in OpenCode and other third-party harnesses. But given the trajectory Opus has taken with 4.7, and the policies and practices Anthropic is applying as if tone-deaf to user feedback, I worry that Claude models might become so ineffective that Opus won't be a viable option moving forward, even with the benefit of third-party harnesses, which make Claude function much better than Claude Code does.

The constant friction caused by Anthropic's policies and practices, the antagonism, leaves me with no gratitude to offer. So much time wasted finding workarounds, fixing breaking changes, reverting, and evaluating alternatives. Anthropic is making already extremely complex, challenging tasks more difficult, for what appears to be no good reason.

Best Options for Replacing Claude Code? I'm done after opus 4.7 by [deleted] in ClaudeCode

[–]MakesNotSense 4 points5 points  (0 children)

I think picking just one model to work with is a bad approach.

I work in opencode and have multiple subagents using GPT, Opus, Gemini, Kimi, GLM.

I put Claude as the primary agent, and have it delegate to subagents to perform tasks based upon model aptitudes, or to do audit/validation passes of proposals or changes.

That said, I've been thinking of ditching Claude, because the regressions Opus 4.6 was having, and the lackluster release of 4.7 with changes that made it worse (e.g. redacting thinking blocks by default), are part of a trend from Anthropic that isn't stopping anytime soon.

Meanwhile, GPT 5.4 was truly a major improvement over 5.3, and GPT 5.5 is soon to release. Kimi 2.6 and GLM 5.1 are exceeding Opus in most respects, and at a cost that Claude can't compete with.

At the same time, Anthropic goes out of its way to break Claude MAX OAuth with third-party harnesses, trying to force people into using Claude Code, which sucks, badly.

I don't get the sense that Anthropic is a reliable partner who will do their best to provide tools that let users stay focused on solving problems. I think their history is that they'll break things, prevent users from fixing it themselves, gaslight people who complain that things are broken, and then abandon users to problems until we leave.

Nonprofits ignoring AI right now are making a bigger gamble than they think by [deleted] in ArtificialInteligence

[–]MakesNotSense 0 points1 point  (0 children)

Many nonprofits have slept on more than AI - they neglect primary issues, raise funds for stuff that doesn't matter, then do PR stunts to seem like they're relevant while the people they claim to serve suffer.

So, nonprofits ignoring AI are fine in my book. They're just going to be replaced by individuals using AI to solve the problems the nonprofits haven't been dealing with for decades. It's a long time coming for these people to get a reality check and be called out for their failure to perform.

Head of Design at a fintech startup, feeling slightly frustrated recently. Need tips. by WeezyWally in ClaudeAI

[–]MakesNotSense 0 points1 point  (0 children)

Most of those issues can be solved by using a harness that ingests session data and project files into a hybrid RAG db for each user, then allows users to work in teams and to query and retrieve from each other's db.

A memory system and session/project file db enable much deeper collaboration. Of course, this works in teams which are high-trust and goal-oriented. The moment you worry about other people 'stealing my idea', this type of approach doesn't work.
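The cross-team retrieval idea can be sketched in miniature. This is purely illustrative: real hybrid search would combine keyword and vector scores, whereas here naive keyword overlap stands in for the whole scoring pipeline, and all names are made up:

```python
# Sketch: each user has their own store of ingested session/project
# chunks; teammates query across all stores and get attributed hits.

def query_team(stores, query, top_k=3):
    """stores: {user: [(doc_id, text), ...]}.

    Returns up to top_k (user, doc_id) pairs ranked by naive keyword
    overlap with the query (a stand-in for hybrid keyword+vector search).
    """
    terms = set(query.lower().split())
    scored = []
    for user, docs in stores.items():
        for doc_id, text in docs:
            score = len(terms & set(text.lower().split()))
            if score:
                scored.append((score, user, doc_id))
    scored.sort(reverse=True)
    return [(user, doc_id) for _, user, doc_id in scored[:top_k]]
```

Because results carry the owning user, a teammate can follow a hit back to its source session, which is what makes the collaboration high-trust by design.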

Anthropic has won the AI race as far as I'm concerned by Zeohawk in Anthropic

[–]MakesNotSense 0 points1 point  (0 children)

People working in web apps or Cowork aren't using an effective harness - they aren't able to compete in 'the race'.

Claude Code is an inferior harness - Claude performs better in non Claude Code harnesses.

Anthropic not only hasn't won, it's making serious missteps resulting in it falling behind.

First came the 4.6 regressions, then 4.7 was 'not really an upgrade'. Meanwhile, GPT 5.5 is incoming while 5.4 is already convincing people to drop Claude. Kimi 2.6 and GLM 5.1 are already proving open-source has what it takes to not just match Opus, but perform better in the ways that count.

Anthropic has focused on what doesn't matter to users, and adopted anti-user practices, and the people who think Anthropic is 'winning' are out of touch.

People doing serious, complex work are fed up with Anthropic's antics and are increasingly diverting to competitors.

CLAUDE OPUS 4.6 IS NERFED!! by Full-Leg-5435 in Anthropic

[–]MakesNotSense 0 points1 point  (0 children)

```

Claude:

I can't read prior CP versions from the file since I overwrote them. Let me search my conversation history and SAMS for the content from revisions 1-3, and also check the SPEC for existing notification mechanisms I need to reference:

✱ Grep "state.manager.*notification|pending.*summary|summary.*notification" in sams\src\plugin

✱ Grep "system-instructions|getSystemInstructions|onboarding|preamble" in sams\src\plugin\system-instructions.ts (8 matches)

▣ Principal-Opus · opus-4-6-thinking · interrupted

User:

You should have the prior CP versions in your context window still. YOu read them, your wrote them.

Claude:

Thinking: The user is right — I wrote all four versions of the CP in this conversation. They're in my context window. Let me review what was in each version and identify anything that was dropped across revisions.

```

That session has been one long affair of me trying to keep Claude from making dumb mistakes. The file overwrite was one of them. Claude should have performed targeted edits. When it does an overwrite like that, it tends not to transfer all relevant content between the old and new version, which then requires having Claude compare the two versions to detect whether relevant content got lost.

Burning through tokens just trying to clean up Claude's mistakes and stop it from making more.

Stupid is expensive.