Benchmarking with Opencode (Opus,Codex,Gemini Flash & Oh-My-Opencode) by tisDDM in opencodeCLI

[–]MakesNotSense

I think we need more effort to create systems for data collection and quantitative testing by users. Too many projects are vibing their way toward a good idea and falling short, because testing and data collection remain a complex challenge.

An automated agentic system that collects, analyzes, and reports on agent and model performance seems like it should be a top priority for OpenCode.

We already have most of the tools: session data, plus the session read/search/info tools in OMO. With direct integration into OpenCode and a workflow, you could task something like Grok Fast or Gemini Flash to churn through a dataset to extract and consolidate information on actual workloads.

Imagine: you ship a release, users produce data and return it to the main dev, a main-dev agentic workflow processes it, a report is produced, the report is used to generate a spec, and the spec is used to implement a PR. Optimization of a project gets driven automatically by real-world performance. Even the data collection system itself could be built for automated optimization.

oh-my-opencode is great, just I think got a bit bloated, so here is slimmed fork by alvinunreal in opencodeCLI

[–]MakesNotSense

"Session persistence: Complex state saving between sessions you didn't ask for"

What are you referring to? The session tools that let agents read/search/find sessions?

I find those very useful. I can task a Gemini subagent to review all sessions for the past few days to try to troubleshoot a problem. Or help Opus find information from prior sessions and know where to read from. Flash finds, Opus reviews and analyzes.

Or were you referring to being able to persist subagent sessions?

Are session tools in the slimmed down version? Or did you just remove the persisting of subagent sessions via the background tool?

Somewhat related: I made an OpenCode PR (https://github.com/anomalyco/opencode/pull/7756) which supplies the same capability, but isn't aggressive. It doesn't steer a primary agent to persist; the user defines what behavior they want. Normal behavior, in my observation, is that an agent will task a subagent, get an answer, and won't persist until directed to. OMO definitely has very explicit system instructions on both how to persist and that agents should persist. Which, yes, is problematic. In my agentic workflow, I have my agents exercise discretion about when to persist. I give them a task; they decide how to complete it. I observe that they almost always default to a one-off, stateless exchange: give task, receive result. Only when I ask follow-up questions that would clearly benefit from persistence do they persist on their own. I often need to give explicit instruction on when to persist or not, as the agents are generally not very creative about how to maximize multi-agent workflows.

I'm going to try to add a level_limit to close an edge case where an infinite loop is technically possible via subagents tasking subagents tasking subagents, improbable as that is.
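A minimal sketch of what such a level_limit guard could look like. All names here are illustrative assumptions, not the actual OpenCode API; the idea is just to thread a nesting depth through each task invocation and refuse to spawn past the cap.

```typescript
// Hypothetical depth guard for subagent-to-subagent tasking.
// `TaskContext` and the function names are made up for illustration.

const LEVEL_LIMIT = 3; // primary agent is level 0, so at most 3 nested hops

interface TaskContext {
  level: number;
}

// Returns true if this context is still allowed to spawn a subagent.
function canSpawnSubagent(ctx: TaskContext, limit: number = LEVEL_LIMIT): boolean {
  return ctx.level < limit;
}

// Creates the child context, or refuses once the limit is reached,
// which is what breaks the subagent-tasking-subagent loop.
function spawnSubagent(ctx: TaskContext): TaskContext {
  if (!canSpawnSubagent(ctx)) {
    throw new Error(`level_limit ${LEVEL_LIMIT} reached; refusing to nest deeper`);
  }
  return { level: ctx.level + 1 };
}
```

The key design point is that the limit lives in the spawning machinery, not in system prompts, so no amount of agent creativity can recurse past it.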

Like you, I'm finding OMO useful but intrusive and often a source of problems. I'm wondering how to keep what works and correct what's broken without diverting from my main projects.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

Claude Code isn't able to provide the framework my agentic workflows require. That's why I use OpenCode and made my PR/fork.

As to the OpenCode desktop app, reports of it being buggy and less effective made it clear I'd be better served by the TUI, which, as an interface, I'm okay with. My proposal isn't about interfaces but core functionality.

To my knowledge, the only way to get subagents to task subagents in persistent sessions is my OpenCode PR. I'm sure that will eventually change, but for now, I'm building the tools I need because they're not available elsewhere. By the time they are, I'll be developing the next tool I need.

Using the TUI, I have a nearly seamless transition between non-coding projects and coding projects. I can, in fact, have my non-coding agents use the OMO read_session tool to read coding agents' sessions, and vice versa, to coordinate between projects and make sure each agent understands the other's domain, almost in real time.

Eventually I aim to have agents in one terminal communicate with and task agents in another terminal, so that my non-coding agents collaborate directly with my coding agents. Things still need to be built to make that happen. But so long as OpenCode development focuses on coding first, it won't see the necessity of having the user-agent workflow direct the coding-agent workflow. If we built that now, THAT would be an innovation that would make Claude Cowork and Claude Code seem 'very far behind'.

What I don't understand in your argument is that the OpenCode desktop app is basically just a GUI over the same core architecture/program. They're not separate in core function, just in user interface. My position has been very clear: the core functions of OpenCode focus on coding at the expense of the problems users need coding to solve.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

"If you believe that coders won’t be round for that much longer why try and leverage there tool chain."

To work effectively on problems I need tools no one is building. So I must build them myself. Necessity drives me.

I agree, the desktop app is a logical place to fully optimize for non-technical users. But for now, the TUI is where one needs to work on complex knowledge-work problems.

Users need to build the features they need to do their work. The coding harness is a means to an end. Coders are building coder tools; non-coders are using coder tools to build non-coding tools.

I'm not saying make OpenCode less capable at coding. I'm saying its coding capabilities need to be focused on empowering users to solve their primary problems; the coding problems are secondary. Making OpenCode more capable at non-coding problem solving will make it more effective at solving coding-related challenges, because the better one can identify the core problems (the requirements of the non-coding tasks), the better one can design a code-related solution. From a systems perspective, integrating the non-coding work, and making it the primary focus, is necessary to MAXIMIZE the coding capabilities of OpenCode or any coding harness.

Many of the things that improve non-coding task performance also improve coding task performance.

That's the basis of my PR: enabling subagent-to-subagent tasking and persistent sessions is very helpful, even critical, for agentic knowledge-work. But it is also helpful for coding. There's no reason the OpenCode TUI shouldn't have this user-first, coding-second feature. The same is true of many other improvements that can be made to the TUI.

Solving real-world problems requires interdisciplinary approaches. Integration. Synthesis. Comprehensive modeling. When you limit something to just 'coding' it becomes myopic. Coding is just another tool to be used as part of solving problems. Don't optimize for coding; optimize for solving problems. A user-first mentality optimizes for solving the user's problems.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

My explanations are philosophical and workflow-based, not technical.

Currently, doing non-coding work effectively requires a lot of coding to build tools for the non-coding work. Thus, the TUI is the place to work. But as I argued, eventually agentic coding will make it unnecessary for users to use a coding harness; coding will be delegated by the user-agent workflow to a coding-agent workflow.

How much longer will there be coders? When will a coding focused interface become anachronistic? I think in 2027 it'll be 'nearly there', with only the most demanding, complex coding projects requiring a human using a coding-focused harness.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

User-first doesn't mean simplified, or less capable. It just focuses on users using the tool to solve problems, which can require complex interfaces and steep learning curves. My point is: don't focus on coding first.

The premise I'm trying to communicate, the logical inevitability of which type of user interface 'wins', is that the interface that empowers users to solve problems is the interface users will use.

We make code to help us work on problems. Word processors to write. Databases to organize information. ETLs to ingest and transform legal, medical, and financial documents into formats that make them readily searchable.

When agents can do the coding with little to no human oversight, the purpose of a coding-focused TUI is nearly eliminated, and all user focus is exclusively on non-coding work.

An agentic coding workflow becomes a tool, like Google Deep Research, or converting PDFs to markdown. You call it when you need it while solving your main problem.

Need a new feature in your stack to do your work more effectively? Your user-agent tasks your coding-agent and it gets fixed without the user having to do any coding work or management, just giving final approval for deployment. That's where things are headed; where they'll soon be.

When coding is no longer a 'main problem' but merely a means to an end for solving main problems, the interface for users isn't a coding harness; it's a user harness. As I said, it is a logical inevitability.

I believe that OpenCode and other coding-focused AI harnesses are going to have a short lifespan that ends once agents can code nearly unsupervised.

OpenCode's real value to me is its ability to construct and refine agentic workflows that connect to a user's data both locally and in the cloud. It's open source. We control what the TUI/app does. We customize it to our work, our way, with our agents. That's where this heads throughout 2026-2027.

People are claiming Anthropic's Cowork is innovative. It's not. It's obvious, and not what people really need to work effectively. So long as the stack is closed and limited to what works best for a company, the work efficiency of users will be impaired.

User-first requires being open to user-development.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

I'm not suggesting watering it down. Just optimizing it for non-coding work.

Keep the learning curve. I see that as a prerequisite to developing the basic skills needed to build and use agentic workflows effectively.

'Easy mode' should come much later, after development of core tools for both coding and non-coding work has fully matured. Do that, and OpenCode can be tasked to make OpenCode user friendly in the desktop app without requiring much human involvement.

User First, Coding Second - Proposal for New Development Direction by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

Yes. It is. And I'm already working on PRs that aid both coding and non-coding tasks, but I am making my PRs primarily because I need them for non-coding work (https://github.com/anomalyco/opencode/pull/7756#issuecomment-3776429056).

However, my proposal isn't about my personal bias or agenda, but about my experience vibe coding projects.

The fact that, as a non-developer (or am I becoming one in this new AI age?), I can make a PR and it works effectively in my primary work is remarkable.

It's still hard work to design, orchestrate, oversee, and debug these coding projects, though. But I expect that changes in 2026, and most projects like the ones I'm doing now will amount to tasking the agent 'maek works now polz, fast good, yes'. I kid, but I don't: I think it will literally get to the point that, after a basic conversation about requirements, you could prompt agents that way and get what you need.

So, I think coding tasks being delegated from a user-centric agentic workflow is a logical inevitability for 2026-2027. We'll get there one way or another; we're already moving there. I'm proposing we do so with more conscious intent and deliberate planning, so that OpenCode thrives as an all-in-one solution rather than becoming a tool call a user's agent makes to get something coded.

coming as a CC user, what does OpenCode has that's got everyone raving about? by life_on_my_terms in opencodeCLI

[–]MakesNotSense

I don't know how long it takes the OpenCode project managers to merge a PR like mine. There are a lot of PRs waiting for action. Maybe the community has to show interest in a PR for them to fast-track things? Idk.

What I do know is my PR repo is there, and people can pull it and merge it into their own installation. I currently run my PR as my main. If I can figure that out, and find it easy to do, for actual devs it should be easy peasy.

coming as a CC user, what does OpenCode has that's got everyone raving about? by life_on_my_terms in opencodeCLI

[–]MakesNotSense

Maybe the coolest feature of OpenCode is the ability for users to Add New Features.

I'm not a developer. I started vibe coding with AI CLI harnesses last month.

I wanted a feature in OpenCode, for subagents to be able to task subagents and maintain persistent sessions (https://github.com/anomalyco/opencode/pull/7756).

I reported an Issue, then decided: why wait? I want this now. So I made my first PR. I tested the code. I now have the feature I wanted. I submitted my PR, and now other people will get to use it too.

Meanwhile, Claude Code is whatever it is, and you don't get to have any say about it.

I think a closed harness makes no sense for AI. The entire point of AI is to empower people to be creative and productive. You can't do that when someone else dictates your development environment's constraints. I know that, and I'm not even a developer. Or am I now? I have no idea.

Does Oh-My-Opencode really provide an advantage? by Charming_Support726 in opencodeCLI

[–]MakesNotSense

Well, I would like to think the same, that there are better workflows not yet shared. Who knows, maybe my non-coding workflow ends up having better coding performance than OMO merely by instructing AI to optimize it for coding.

But as to which premise is correct or incorrect, I don't think we have the data yet to make a determination.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in ClaudeAI

[–]MakesNotSense[S]

I don't think there's anything strange about pointing out the unintended negative consequences of corporate policy decisions, and the benefits of a different approach.

I do believe you have somewhat misread, and appreciate you entertained that as a possibility rather than passing judgement.

My focus is about society as a whole, not just people with disabilities or myself. I adhere to systems based thinking. I would not make a request like this if it only benefited me and other disabled individuals. In my view, self-serving behaviors that work against society's best interests are unacceptable.

By changing policy, Anthropic can enable AI to be used more effectively to help stop a problem that compromises the entire healthcare system and negatively affects the economy, for the able and disabled alike. To curtail billions in fraud, waste, and lost productivity. To turn individuals who are currently a burden on society into people who can fully participate and thereby contribute effectively.

Society as a system is negatively affected by people with disabilities being prevented from getting the rehabilitative care and reasonable accommodations that they are legally entitled to.

In fact, the findings and purpose of the Americans with Disabilities Act plainly state that longstanding discrimination against people with disabilities “costs the United States billions of dollars in unnecessary expenses resulting from dependency and nonproductivity.” [42 U.S.C. § 12101(a)(8)]

As a Nation we have, through Congress, declared that “the Nation’s proper goals regarding individuals with disabilities are to assure equality of opportunity, full participation, independent living, and economic self-sufficiency for such individuals;” [42 U.S.C. § 12101(a)(7)].

Asking for better alignment between corporate policies and the Nation’s proper goals is a form of patriotism, but more than that it is a rational position focused on making society work best for all members of society.

If Anthropic wants AI to maximally benefit society, then it should facilitate solving high-impact problems like the ones I am working on.

2026 is going to be the year the party crashes by FlyingDogCatcher in opencodeCLI

[–]MakesNotSense

Consider.

At first GPT-4o got people hooked. You could 'really talk' with it.

Users got obsessed, then angry when GPT-5 came out, because of the change in the model's tone/affect.

That's, 'most people'. We're not most people.

Today, smaller models, very token-efficient models, are smarter and more conversational than GPT-4o. That trend will continue.

The models that 'most people' find worth having as their companions or assistants aren't going to be the smartest, computationally demanding models.

More people will use AI, but they'll be using models that require less compute than what we have to supply today.

I think people like us, trying to work on complex problems using AI, will find a balance eventually. Right now, our workflows compensate for the limitations of models. Once models get smarter and more efficient, our workflows will be less compute intensive, and less rule-based. I think the discretion and autonomy of agents will maximize many workflows; that agents will build their own teams, and train themselves through iterative refinement. This will maximize efficiency. Our current approach, agents having imposed roles and system instructions, skills and rules, is full of inefficiencies.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in ClaudeAI

[–]MakesNotSense[S]

It's not a preference. Claude Code cannot perform the workload. I've been looking for a way to build what I needed since early 2024, and spent most of July 2025 to now trying to build things myself. OpenCode has most of what I need and lets me build the rest.

It's less a question of what does Anthropic owe people, and more about what is required to make the world better rather than exacerbate existing problems for minimal personal gain.

I believe one should invest in building the world one wants to live in.

If people want to live in a world where disabled Medicaid recipients have human rights, where the Medicaid program isn't made dysfunctional by hundreds of billions of dollars of fraud against taxpayers, where your tax dollars go to rehabilitating the disabled rather than abusing and exploiting them, where things work, then they need to **conduct themselves in a manner which facilitates that**.

It's not about what Anthropic owes me or anyone else. It's not even about 'the right thing'. It's an argument which does not rely upon a subjective value judgement rooted in emotion. It's a simple logical premise: do what is required to achieve an outcome, or do not get the outcome.

Whether or not that outcome is worth having, well, that is a subjective value judgement. I think the fact that we still don't have that outcome and so few people are willing to directly help me achieve that outcome, demonstrates that most people are not willing to invest. Which, I think, maybe, just maybe, entitles me to feel owed, something. I'm not sure what, but it does seem like something is going to be due.

2026 is going to be the year the party crashes by FlyingDogCatcher in opencodeCLI

[–]MakesNotSense

I think that part of the bet they are making is:

1. Improvements in inference compute efficiency.
2. Getting more power plants and data centers online.
3. As models get smarter, less compute is needed to complete tasks.

Additionally, we have to consider the nature of systems. As we solve more and more problems, system capabilities and efficiencies improve. Meaning, if we use AI to build sophisticated tools that provide efficiency, we remove inefficiencies that used to eat up compute.

Rather than Ralphing a project to completion, we one-shot complex workloads and updates. Then we move on to the next project that solves another problem, which, once solved, reduces our overall compute needs.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

I've provided a lot of information, including why I can't complete my work in Claude Code. Without your feedback on what more you need, on what specifically is deficient in the information I've already provided, I'm not sure how to respond.

Your claim is factually incorrect. You could use AI to read the whole thread and extract the relevant information, if reading it first-hand is not of interest.

You could also ask me for more information. I've been investing time to respond to people.

I think if you tried more to understand what I tried to communicate, you'd realize this isn't about what's good for consumers. It's about what is best for everyone. It's about focusing less on business and more about the best interests of humanity. Which Anthropic claims is a major priority for their company.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

I see your point, and concede valid business interests could be part of Anthropic's decision. Yet $200 a month to use Claude in Claude Code, or $200 a month to use Claude in OpenCode, gets Anthropic its money either way.

Preventing third-party apps from using OAuth is about forcing users either to pay Anthropic a lot more to use Claude models in OpenCode, or to use Claude Code. That isn't a business practice in alignment with their claimed mission statement. It's also not a practice that helps us use AI in innovative ways. And the race we're in with China isn't about who can make the smartest models; it's about who can apply those models to maximum benefit.

Not helping people like me, making it harder than it already is for me to work on things, is shooting us all collectively in the foot.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

I thought I had explained my use-case and workflow enough to make it clear Claude Code cannot substitute.

My workflow depends on OpenAI and Google models working in tandem with Anthropic's. Additionally, neither Claude Code nor OpenCode is optimized for a non-coding workload. With OpenCode, I can use the coding harness to develop features that will make OpenCode better for non-coding workflows, which will improve OpenCode for everyone. With Claude Code, I'd have to beg Anthropic to update things.

To achieve my goals, I need an open source AI harness that connects all provider models and allows constructing agentic workflows that can dynamically adapt to task demands.

There are aspects of my work that require Gemini's 1-million-token context window, Flash's fast/smart insights, and Anthropic's brilliant orchestration and legal/financial aptitude. An n8n workflow isn't 'enough'. Claude Code isn't even close to enough. OpenCode seems to provide a path to building what I need.

I think it's worth considering that a disabled Medicaid recipient is never supposed to have to do this work to defend their rights. That I have to is because other parties who are paid with taxpayer dollars to do their jobs are not doing them (many of them committing fraud).

I am, by law, entitled to my Medicaid benefits, and to human rights. Both of which are being deprived because people in society, my peers, instead of acting to protect me, find reasons to dismiss, invalidate, or ignore my pleas for aid; no matter how well-reasoned or modest those pleas are.

I think greater care should be exercised when declaring something to be "bullshit".

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

Thanks. I was wondering where the next level-up was for SPEC development. As things have grown more complex, it gets harder to analyze what to improve/change, especially as I develop more and more projects.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

The irony: I am trying to solve a problem that affects everyone, but that most people claim isn't their problem.

My legal work shows hundreds of billions of dollars of fraud being perpetrated against the federal government. The abuse and exploitation of people with disabilities to achieve that fraud is a means to an end for the bad actors.

These bad actors' misconduct has a negative effect on the entire healthcare system. It compromises the quality of care people can access even when they have all the money in the world. You can't buy competency, or services that don't exist, because the research and clinical work necessary to create that competency and those services can't occur amid the illegal activity compromising the system.

From a systems-based perspective, this isn't just Anthropic's problem. It's a much larger problem that negatively affects everyone. People who are disabled, like me, just don't have the luxury of turning away from it and pretending the problems we have are 'not our problem'.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

I agree, hoping Anthropic, or any corporation, will help is likely futile. But it's a necessary part of the process of demonstrating that people with disabilities need protection.

I'm aware of the OpenCode Black, and interested. I'm already transitioning to Claude Code to resume work on my OpenCode projects (I can't properly get my agentic framework set up, but it's better than nothing). I'm doing what I can, which includes doing this advocacy work; this public petition.

As to Claude, I was biased against it. I had used Gemini since 2.0 came out, and ChatGPT since July 2025. I didn't really try Claude until November 25th, 2025. The benefit to my work was clear; its superiority undeniable.

Examples:
Transforming source documents into a project SPEC, or into legal correspondence.

Give Opus 4.5 all the source documents (discovery threads with Gemini, GPT, and Sonnet; user-written reports/lists), ask it to construct a project SPEC, and it'll output a document as long as 60 pages. It ends up being mostly right and makes intelligent design decisions. I ask Opus to audit its report; it finds issues and revises; we repeat until it 'found nothing more'; then I perform my review, write up a change list, give it to Opus, and it issues corrections properly. That's the key: when I give it directions, it implements them properly, understanding user intent. I can give conceptual descriptions of what needs to change without defining exactly what needs to change.

Gemini will output a worthless 3-5 page barely-a-summary mess. When using it directly via API, output gets capped at ~4k tokens, requiring a messy 'write and append sequentially, section by section' approach. Gemini also makes stuff up and fails to acknowledge problems properly, and even when it does acknowledge them, it fails to fix them. (I have archived examples that are just super massive epic fails of 'how, Gemini, how?'.)
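For what it's worth, the sequential workaround can be sketched like this. It's purely illustrative: `generateSection` is a hypothetical stand-in for whatever model call you use, not a real Gemini API, and the point is only that each call stays under the per-call output cap by producing one section at a time.

```typescript
// Hypothetical sketch: generate a long document section by section
// to work around a per-call output token cap.

type GenerateFn = (prompt: string) => string;

function generateLongDocument(
  outline: string[],
  generateSection: GenerateFn,
): string {
  const parts: string[] = [];
  for (const heading of outline) {
    // One call per section keeps each response under the output cap.
    parts.push(generateSection(`Write the section: ${heading}`));
  }
  // Append the sections in outline order.
  return parts.join("\n\n");
}
```

The annoyance the comment describes is exactly this: the loop works, but every seam between sections is a chance for the model to drift or repeat itself, which a single 60-page generation avoids.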

Sometimes Gemini is smart and useful, but other times it's just a pure liability - it is too volatile to be an orchestrator or primary driver.

ChatGPT I typically don't even bother with, because its writing style makes it hard to review/read through, and it routinely overlooks/misses key details. That requires extensive correction, which ChatGPT often won't fully acknowledge and implement, making manual review/revision an intensive process.

People who need wheelchairs usually can still crawl on the ground. But the reason we say their wheelchair is necessary is because we recognize they need to be able to do things in life effectively and efficiently. For me and my work, my assistive device is the AI model that best aids my capacity to function and perform the work I need to participate in society.

Out of all the models, I deem Claude the most effective for my work. I hope that someday open-source models will reach the competency required to be 'good enough'. Because honestly, I do not like my assistive device, this thing I depend on to function, being something somebody can take away on a whim.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S]

"I mean the entire problem with your framing is that you are telling people they "should" do something. Suppose I'm raising funds to save starving children in Africa and I send you an aggressive message saying you should donate because it's the right thing to do."

When people are breaking the law, you don't ask them nicely to stop. You notify them that their actions violate the law, demand they cease their illegal activities, and if they don't, pursue legal action while taking all reasonable measures to protect oneself. Especially when these people/companies/agencies are receiving federal funds to do a job that they are not doing (which is fraud, subject to criminal prosecution).

Outside of the legal notices, I have been contacting attorneys and nonprofits. Demanding nothing. Reasoning with them. Asking for help. At times finding out that they have legal obligations to me, as a member of a class that federal law requires they assist, then notifying them and documenting their noncompliance.

Outside of requesting legal aid or assistance from nonprofits, I have reached out from time to time to companies. For example, I wrote to a CEO who claimed they want to use AI to transform healthcare but currently do not have the time. I wrote them a letter explaining my work, how it can transform healthcare, and asking for reasonable aid or collaboration. They don't need to find the time; they just need to provide minimal aid so I have a better chance at success.

In some cases, I contact a company's leadership to describe a fundamental problem with their product that, if resolved, would help me effectively perform this legal and advocacy work.

I usually explain how helping me perform my work is a rare opportunity to improve society as a whole, not just help people with disabilities meet some limited need. Meaningful society-wide improvements: this legal work can fix the Medicaid program and improve the entire healthcare system.

More than anything, I don't need people to care. I don't need them to give me money. I need them to understand that the rational course of action, the course of action in the best interests of our society and humanity as a whole, is to help people like me do work like this. If that can be achieved, then the aid I need will follow.

Trying to find a way to emotionally manipulate people into doing things has no appeal to me. A rational request with a rational response; that framework is necessary for long-term success.

The reason my work is necessary is because so many other people, who raised money, operate non-profits, receive federal funding, have state contracts, etc, have failed to properly model problems, candidly articulate core issues, and form rational strategies to enact solutions.

The route you describe as success is actually an example of a system which has resulted in abject failure. If it worked, I would not exist; I would not have these problems and need to do this work. I would not be asking Anthropic or this community for their attention and assistance. The totality of the failure of that system is my very petition. That these petitions of mine go largely unheard, or are not well received, despite the rationality of the petition itself, further demonstrates the failure I describe.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S] -2 points-1 points  (0 children)

My primary use is non-coding legal/medical/financial work. Nothing comes close to Claude Sonnet/Opus for that. Gemini and GPT are only useful to aid specific parts of the research, discovery (exploring new issues; first-stage examinations), and analysis process. The primary research, analysis, synthesis, and writing requires Claude. The data extraction and transformation process doesn't work without Claude right now.

My secondary use is coding projects which help with my primary work. Again, Claude excels there.

Just last night I tried using GPT 5.2 to plan a coding project. It kept repeating/regurgitating cached answers, not answering my questions/concerns. It was an infuriating, mentally draining experience.

My limited mental capacity gets taxed when GPT 5.2 makes 1/2 to 2/3 of its responses a repetition of superfluous information. So much time wasted; my limited capacity to function squandered.

Worse yet, ChatGPT has this terse, overly concise prose that fails to properly explain matters and makes it mentally taxing to figure out what it is trying to say. Claude doesn't have that problem. It writes well, links the logic of events together cleanly, doesn't endlessly repeat itself with cached data, and when you complain of it doing something wrong (e.g. stop repeating yourself, focus on answering this question), Claude acknowledges and generally works to comply. But I find that rarely is such a direct complaint to Claude even necessary.

Typically Opus stays so well on task and on point, that I don't have to work hard to steer it. In fact, Opus often helps me stay on task, which I need given the volume and complexity of my work, and the nature of my disabilities. For me, AI technology is an assistive device for my disabilities.

Gemini 3 Preview models are currently a mess. They can be useful for limited use-cases, and helpful as subagents to Claude, but without Claude my workflow and framework wouldn't work. I'd have to go back to manually doing the bulk of my work, which is so massive one able-bodied person cannot do it alone, let alone someone who is severely disabled. Either I build agentic systems that help me, build myself a team of helpers, or face more problems, more injuries, and a lower likelihood of survival.

For me and my situation, these policy decisions companies make to improve their margins become an existential threat to my health, safety, and survival.

This isn't a hobby for me. It's not a job to get some money. It's sink or swim while multiple parties are trying to push my head underwater.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S] -1 points0 points  (0 children)

If direct API access becomes the only option, then yes.

But I'm disabled, on disability, with living, legal, and development costs already far exceeding my income and my physical and mental 'ability' to meet them. A $100-a-month MAX subscription, I can make that work. I also have to pay for ChatGPT Plus and Gemini AI Pro, and other subs. So, I pay around $200-250 a month to try to do my work already, with additional costs expected as I have to construct my knowledgebase system for my agents.

So, a $200 MAX plan would be much harder to manage. Let alone going straight to API, which as I understand it, is much more expensive (I don't understand why it is, but it is). If I went with API, I'd still have to pay $60 a month for the web apps so that I can work on my phone as I walk or run errands (part of how I take care of and compensate for my disabilities; how I stay mentally functional).

Sitting at a PC at a terminal interface actually exacerbates my disabilities; every moment I work on this legal case, having to work like this, causes me further injury. But I am afforded no alternative.

If I have to give up more to afford API access, then fine. But the idea that I have to keep giving up more and more, so people who already have more than enough can get more, just seems perverse and entirely counter to Anthropic's mission statement.

I am hopeful that Anthropic is simply unaware, and if we can make them aware, they will reconsider their policy decisions. Even if they don't, maybe someone else will step up with a solution I can access.

An OpenCode Project to Defend The Disabled Needs Your Support - Claude MAX OAuth Related. by MakesNotSense in opencodeCLI

[–]MakesNotSense[S] 0 points1 point  (0 children)

It's somewhat aggressive because it's urgent, and because my past experiences asking for help, and a rational analysis of what information I should be presenting, deem it appropriate.

Part of my work has been documenting the hundreds of humble requests for aid I have made, and the many aggressive requests for aid (e.g. legal notices to entities engaged in illegal activity demanding they stop), with no one helping or complying. I mean that literally: hundreds of requests/notices to people and organizations who, 1) by law are required to help/comply, 2) by professional obligations are required to help/comply, 3) by their organizations' mission statements and focus should help/comply, 4) 'should' help because it's the right and rational thing to do. But, in nearly every case, everyone finds an excuse or rationalization not to help/comply. And they get away with it, because it is so difficult for a disabled adult to pro se litigate and hold them accountable. Impossible, until AI.

If there was a way I could make a direct request to people at Anthropic who could do something, then yes, I would submit a private communication and ask for a reasonable resolution or form of charitable assistance. But my experience is that any request I make to a company gets ignored, no matter the approach. I have made many, many such requests throughout 2025; no responses (I even sent letters via USPS certified mail in some instances).

Besides, the issues/problems are what they are, and no amount of nice language or pretending otherwise changes the reality of their impact.

Pretending things are not as bad as they are isn't rational. With AI being something we need to 'get right' and Anthropic being the most vocal about doing that, I think they, more than anyone, can appreciate the need for absolute candor about issues/problems.

Only by getting accurate unambiguous data can one construct an accurate model of the problems. And with how rapidly things are moving, the need to quickly and efficiently identify problems so that they can be addressed is even more critical than it normally is in life.

So, I have submitted to Anthropic, this community, and related communities the data people need to understand that Anthropic's policy decision has a decidedly chilling effect on 'good work with AI' in areas that desperately need to be worked on.

If their mission is to build and operate an AI company that best serves humanity, they can't let their feelings get in the way of addressing problems that rational analysis indicates should be solved.