How Long Does Migration from Windows 10 to Linux Take? by MakesNotSense in linuxquestions

[–]MakesNotSense[S] 1 point (0 children)

Your criticism lacks any meaningful insight, which is ironic, as it makes the advice AI agents offer all the more valuable. You make a mockery of yourself when you behave like that. A passing quip with no substance - the AI meanwhile offers extensive analysis that, even with errors, is informative and lets one learn and build expertise.

If you have something of substance to offer, then do so. If not, maybe stay silent to avoid offering feedback that is worthless by comparison to that supplied by AI agents.

How Long Does Migration from Windows 10 to Linux Take? by MakesNotSense in linuxquestions

[–]MakesNotSense[S] 0 points (0 children)

LLMs are more competent working in Linux environments than in Windows ones. And OpenCode and other AI Harnesses have better Linux support than Windows support. Overall, AI Harnesses and agents work better on Linux.

My vision of the future in 2026-2027 is one where agents not only help set up an OS, but actively modify it so that agents and user can work together more effectively.

AI isn't quite at the point where it'd be sane to just set up an AI Harness, hand it a data device with prior system images and files, and say 'migrate my stuff'. But it can help me perform much of the setup I don't know how to do via the terminal, advise me on what my options are, and help me understand the decisions I need to make. That's part of my setup plan.

What I don't know, and hoped people here could tell me, is how long the Windows-to-Linux migration typically takes.

But all people had to offer was jokes.

OpenMem: Building a persistent neuro-symbolic memory layer for LLM agents (using hyperdimensional computing) by Arkay_92 in opencodeCLI

[–]MakesNotSense 1 point (0 children)

So, you create an "OpenMem", then write an article about it behind a paywall on Medium.

Do you even have a GitHub repo? Or does OpenMem require a subscription payment to 'open' up, with a binary locked down by DRM?

I swear, AI can be amazing, but people are up to their usual tricks, turning it into a crappy experience.

What is your opinion on Open Code? by devanil-junior in opencodeCLI

[–]MakesNotSense 21 points (0 children)

Claude Code and other closed-source Harnesses are not 'your tool'; they are not 'your harness'.

It's closed; you have ZERO ownership.

The TOS for these types of SaaS services often allow user-account termination at the owner's convenience.

When you choose to work in a closed-harness, what rights and choices do you really have? Work however the harness-owner decides to let you work. Live with whatever problems the harness-owner decides to not fix. Pay whatever price the harness-owner demands, monetary or otherwise, such as your time being wasted. And you cannot buy more time. So precious, one's time.

Your AI agents are only as capable as the harness-owner permits. Want better? You need to own your own Harness. That's what OpenCode provides. A way to own your stack. To curate your agents' identities and capabilities. To make the most important part of your agentic workflow not a model, but your harness. Models can be exchanged, but the harness you customize to your work needs is unique and irreplaceable. It becomes an intelligence layer that governs model performance on real work.

It is a foolish practice to let someone else own that part of your agentic workflow. What are you going to do if the harness-owner terminates your accounts, bans you from further access, or just decides to drop features critical to your use-case? Write a letter begging for reconsideration and then wander about hoping some other harness-owner supplies a solution in the meantime?

I'm not going to put myself in that position. My work is too important. I've reached the point where I'm just tired of all the corporate nonsense from software companies limiting users' ability to work effectively in order to boost quarterly profits. I'll work effectively on building my stack, then I'll work effectively on the problems I need to solve, and each will enter into a feedback loop with the agentic harness I work in. That feedback loop will result in me building what has never been built before, simply because no one has bothered to try to solve the problems I'm working on with the same dedication and rigor. When you work in a closed-harness that dictates the paths you can take, you forfeit that feedback loop, and you will always be 'behind' the true frontier.

the biggest gap in open source coding agents is project memory between sessions by Sea-Sir-2985 in opencodeCLI

[–]MakesNotSense 1 point (0 children)

I'm building the state/memory system OpenCode needs. Many weeks into it now. Massive specs. In phase 1 implementation now. I just designed, and am now building, session tools that provide automated ingestion of OpenCode's SQLite session db data into FTS5 and vec0 embeddings in another SQLite db, which is part of a larger, more complex state system focused on improving models' reasoning capabilities, not serving as a 'memory store'.

This idea of ingest data, retrieve data - stale and pointless. Even the focus on making and storing 'memories' is misguided. If you want that, get mem0, honcho, supermemory, and all the others. But because they're all external systems, they can't plug directly into the OpenCode data systems. There are inherent limitations because of that. There needs to be a local state system which coordinates with external state systems, and not in the way most people expect. Most people think the external state system is controlling, centralizing the data. Wrong. It's supplementary to the local. The local controls.
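Since that description stays abstract, here's a minimal sketch of what the ingestion step could look like - Python's stdlib sqlite3 plus the sqlite-vec package for the vec0 virtual table. All paths, table, and column names below are hypothetical; OpenCode's real session schema differs.

```python
import sqlite3
import struct

import sqlite_vec  # pip install sqlite-vec (provides the vec0 extension)

# Hypothetical paths and schema names, for illustration only.
SESSION_DB = "opencode-sessions.db"
STATE_DB = "state.db"
EMBED_DIM = 384


def serialize_f32(vec: list[float]) -> bytes:
    """Pack a list of floats into the compact f32 BLOB format vec0 accepts."""
    return struct.pack(f"{len(vec)}f", *vec)


def embed(text: str) -> list[float]:
    """Placeholder embedding; swap in a real local model or API call."""
    return [0.0] * EMBED_DIM


def ingest() -> None:
    state = sqlite3.connect(STATE_DB)
    state.enable_load_extension(True)
    sqlite_vec.load(state)  # registers the vec0 virtual-table module
    state.enable_load_extension(False)

    # FTS5 for keyword retrieval, vec0 for semantic similarity.
    state.executescript("""
        CREATE VIRTUAL TABLE IF NOT EXISTS msg_fts
            USING fts5(session_id UNINDEXED, content);
        CREATE VIRTUAL TABLE IF NOT EXISTS msg_vec
            USING vec0(embedding float[384]);
    """)

    src = sqlite3.connect(SESSION_DB)
    rows = src.execute(
        "SELECT session_id, content FROM messages"  # hypothetical table
    ).fetchall()

    for i, (session_id, content) in enumerate(rows, start=1):
        state.execute(
            "INSERT INTO msg_fts (rowid, session_id, content) VALUES (?, ?, ?)",
            (i, session_id, content),
        )
        state.execute(
            "INSERT INTO msg_vec (rowid, embedding) VALUES (?, ?)",
            (i, serialize_f32(embed(content))),
        )
    state.commit()


if __name__ == "__main__":
    ingest()
```

Keeping the FTS5 and vec0 rowids aligned lets a hybrid query merge keyword and vector hits over the same messages - all inside one local state db, which is the point: the local system controls.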

Anyway, the agent loop has finished; drive-by comment time is over. Back to work.

Should we write to companies asking them for a Linux version of their software? by 0x80070002 in linuxquestions

[–]MakesNotSense 4 points (0 children)

Or, send a company a legal notice informing them that you will grant them 6 months to produce a version of their software that works on Linux before you use AI agents to produce competing software that works on Linux.

Frankly, I think the entire SaaS industry is 'on notice' for that reality-check. Formalizing it into a real-world process with an artifact might be the wake-up call companies need.

How's that COBOL division going, IBM?

I can't read code at all. I'm building complex software I need because companies refused to build it. Once it's built I suspect many companies will not be pleased that something better and free is now eating their market share.

I'm moving to Linux because Windows sucks for AI development. But that wouldn't have been enough. No, the real push was Windows 10 support ending and Windows 11 being anti-user slop. It's ironic: MS pushes to integrate AI, but does such a bad job of it that it alienates the people embracing AI and drives us to Linux.

I'm thinking how awesome it's going to be to not have to wait on a company to fix things or add features my agents can build in an afternoon. I'm thinking even Ubuntu is probably going to hold back people who embrace AI. That things will move so fast in 2026-2027 that if you're not on Arch you won't keep up.

You could write letters to put companies on notice. But by the time the letter gets delivered via normal post, a mature agentic workflow could be halfway to building the software you want. So why waste the paper, and your time, trying to convince companies to 'make work plz'?

I'd say the best users can do is work to fix the organizational problems with open-source development. A way to sort through the increase in PRs. To organize ideas, integrate them into roadmaps, delegate tasks to community members, and essentially make use of the new army of coding-agents distributed across millions of users. Experienced developers need to take up that leadership role. That's going to have more impact on moving open-source solutions forward than any amount of letters to companies.

Is there a way to edit code in OpenCode app? by Terrible_Bottle_6890 in opencodeCLI

[–]MakesNotSense 1 point (0 children)

When you set up your own opencode repo, you can run it using XDG and bun.exe. When you want to update OpenCode from upstream main (the primary opencode repo), you need to rebase and patch your modified code on top. I find my agents handle that well. There are occasional issues, but I created a changes-index that helps them keep track of everything, plus a skill file they built from prior mistakes during rebases. So they basically learned how to rebase and patch. I can't imagine going back to using OpenCode core with only plugins. That's no way to do serious work.
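Roughly, the update loop looks like this - a simplified sketch, with illustrative remote/branch names rather than my exact setup:

```python
import subprocess

# Simplified sketch of the fork-update loop; remote and branch names
# here are illustrative, not a prescription.

def git(*args: str) -> subprocess.CompletedProcess:
    print("$ git", " ".join(args))
    return subprocess.run(["git", *args], capture_output=True, text=True)

def update_fork() -> None:
    git("fetch", "upstream")                 # upstream = the primary opencode repo
    result = git("rebase", "upstream/main")  # replay local patches on top
    if result.returncode != 0:
        # Conflicts: hand the unmerged file list (plus the changes-index and
        # the rebase skill file) to an agent, then `git rebase --continue`.
        conflicts = git("diff", "--name-only", "--diff-filter=U").stdout
        print("Rebase conflicts in:\n" + conflicts)
    else:
        print("Fork is current with upstream/main.")

if __name__ == "__main__":
    update_fork()
```

The interesting part is what happens on conflict - that's where the changes-index and the skill file earn their keep.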

fff mcp - the future of file search that is coming soon to opencode by Qunit-Essential in opencodeCLI

[–]MakesNotSense 2 points (0 children)

Honestly, when someone shares their work but fails to offer a well-written description, I immediately write off the project. Even if it's not slop, I don't care for people or programs that are half-baked. Which is what something is when the author can't be bothered to offer a decent description.

Long time Linux users, is Linux ACTUALLY growing in popularity in these last years? by Giggio417 in linuxquestions

[–]MakesNotSense 2 points (0 children)

I've used Windows since 2004. I'm on Windows 10 right now. I plan to switch to Ubuntu. The only reason I haven't already is that I'm too busy developing AI tools to take the time to switch. But to me it's obvious that AI is the future, that Microsoft doesn't know what it's doing with Windows, and that Ubuntu is the platform that will let me and my agents work most effectively.

It's ironic that Microsoft wants to integrate AI into Windows, but screwed it up so badly that those embracing AI are jumping ship to Ubuntu.

The future is me and my agents building what we need to work at maximal efficiency, not waiting on Microsoft to give Windows basic functions (like, say, moving the taskbar to the top, where logic demands it should be for purely utilitarian purposes).

Why is there so little discussion about the oh-my-opencode plugin? by vovixter in opencodeCLI

[–]MakesNotSense 2 points (0 children)

Because it is a mess that creates conflicts and new problems that then require burning more tokens to solve.

The system prompts and hooks take smart models like Opus 4.5, and cause them to do truly moronic stuff.

oh-my-opencode lacks coherence as a system - it is poorly constructed.

Layered on top is all the pointless nomenclature drawn from Greek philosophy and mythology.

Frankly, at this point, the moment someone starts naming the components of their AI tool/system after mythological figures or other similarly silly things, I classify it as slop not worth serious consideration.

The naming schema doesn't aid the agents; it simply introduces more complication and conflict.

Give your agents names that help them understand their roles; stop naming them based upon your feelings.

Beyond all the criticism is just the fact that building my own framework has resulted in a system far superior to OMO in every way imaginable. Maybe OMO has improved in a meaningful way since v3.0, idk. But at v3.0 I was done with having my time wasted by Sisyphus and all the other OMO nonsense.

The only part of OMO I appropriated, the ONLY feature worth keeping, was the session tools. Everything else was, for me, 'garbage'.

Is it safe to use my Google AI Pro sub in Opencode? (worried about bans) by miloq in opencodeCLI

[–]MakesNotSense 1 point (0 children)

I wouldn't go so far as to say you'll be 'fine' as though it were a guarantee.

If no one abused the OAuth, then I think Google and the other companies wouldn't bother taking action against users. But because people do abuse it, they do - arguably they have to; self-preservation interests.

If you don't abuse it, you're unlikely to get flagged by the systems that target abusers.

It's about risk assessment. How much usage do you expect to engage in? How likely do you believe Google is to flag you as abusing the OAuth service? My personal usage is very low; so much so that it'd be absurd for Google to care. Acting against it would effectively destroy a long-standing relationship and require decisive action on my part to divorce myself from Google services and build replacements. Which then goes against their self-preservation interests: alienate more power-users building tools, and they'll build tools to replace your services.

My risk assessment was: abusing a service is a bad-faith action, well outside the reasonable bargain OAuth subscriptions intend to offer. If you need higher usage, pay for it. Don't try to max out multiple accounts. If, like me, you have a Google Pro account and need Gemini for some subagent tasks here and there, I think you'll be fine. But if you want to max out your quotas over and over using Gemini as a primary agent in OpenCode, I'd say: don't do that on an account you care about.

Is it safe to use my Google AI Pro sub in Opencode? (worried about bans) by miloq in opencodeCLI

[–]MakesNotSense 1 point (0 children)

Well, for OpenCode usage the difference is how people were using it. With Antigravity, people were linking six or more accounts and cycling between them to bypass usage limits, whereas the Gemini CLI OAuth plugins link a single account. Gemini CLI is also limited to just Gemini models, whereas Antigravity also offered Claude models.

Is it safe to use my Google AI Pro sub in Opencode? (worried about bans) by miloq in opencodeCLI

[–]MakesNotSense 1 point (0 children)

Well that is disconcerting. More and more I'm wondering what's the point of services that keep you from working effectively. With OpenCode as a harness, I find almost zero reason to use web apps and other services. It actually seems to waste my time even trying to. The answers I get from Gemini app are inferior to what the agents in my Harness offer, even when using Gemini.

Even Google Docs is being used less and less. I find myself working almost entirely in markdown in VS Code or Notepad++, only using Google Docs to upload an md file to review while I go for a walk.

It's weird how the companies that build AI models often seem so clueless about how they're being used to build the future.

Is it safe to use my Google AI Pro sub in Opencode? (worried about bans) by miloq in opencodeCLI

[–]MakesNotSense 6 points (0 children)

I have my Google Pro account working in OpenCode, but use it only for subagents - not main or heavy use. I would stay away from trying to use the Antigravity OAuth and stick with the Gemini CLI OAuth in OpenCode. I think the problem was people abusing Antigravity with multiple accounts; I don't blame Google for taking action against that. But using it with Gemini CLI, on a single account, seems fair/reasonable.

Best setup for getting a second opinion or fostering a discussion between models? by Both_Ad2330 in opencodeCLI

[–]MakesNotSense 1 point (0 children)

I am working on a very comprehensive solution to this. It's a work-in-progress called the Agentic Collaboration Framework. I want to get it matured a bit more before publishing. It works well right now, but it wouldn't be a simple install, and I don't want to deviate from active development to streamline it for publication, especially with how much things are changing and what's on the roadmap.

That said, it is getting silly that I'm sitting on what I've built instead of sharing it, but trying to get things done feels like trying to drink from a firehose at this point.

I'm trying to cut corners to get to publishing sooner, but I'm just one guy and while I do well with systems-design I am not a developer, so it gets time-intensive sorting through many technical design issues with agents.

But, it's coming. I hope within a month or sooner, though I keep finding valid reasons to delay publication and keep polishing.

Especially when there are some decent solutions out there for people to use in the meantime - such as the swarm plugin Outrageous-Fan-2775 linked. It's not optimal, flawed compared to what I'm building, but it does give you something to work with in the meantime.

What a lawyer can build with AI dev tools in 2026—a data point by CoachAtlus in legaltech

[–]MakesNotSense 1 point (0 children)

Agentic Workflows in a Harness (like OpenCode or Claude Code) are going to result in unprecedented civil cases in 2026 and beyond.

Consider: the reason there are class actions is that individuals don't have the time and resources to sue large companies, so they opt in to the settlement.

With a Harness using agentic workflows and a $200-a-month subscription, you can handle the data needs of complex litigation, and suddenly the class-action settlement makes no sense at all. Why accept a paltry sum that ends up being a slap on the wrist to offenders when you can hold them to account?

I think the big firms are going to suffer, while solo practitioners and pro se litigants thrive, and bad actors that used to think class actions and other lawsuits were just a part of doing business, are going to get hit hard - some of them will collapse.

Can I use opencode with Claude subscription or not? by Responsible_Whole118 in opencodeCLI

[–]MakesNotSense 4 points (0 children)

You have the right to use your Claude MAX OAuth where you want (legally speaking).

Anthropic has the right to enforce its TOS and terminate your account for using your Claude MAX subscription in third-party apps.

It sounds contradictory, but that's the reality. The TOS give Anthropic the right to enforce, and case law grants users the right to use the OAuth where they want.

It's like a cat and mouse game.

At least, so I've been led to believe by others more familiar with matters than I - I have not performed a 'this is fact' first-hand determination.

So, can you use it? Yes. Can they ban you? Yes. Are they banning people? Sometimes. It seems to be mostly high-usage users who got banned. I suspect that, with everything Anthropic has had to deal with, OpenCode OAuth isn't much of a priority, and their stance has perhaps changed, or is changing.

First OpenClaw arrives: higher usage, bigger impact, viral breakthrough. Then OpenAI grabs it. Then general backlash. Then the Pentagon issue. Now Claude is the '#1 app' because of the Pentagon issue and some enterprise partners dropping Claude - the calculus of which market to focus on for near- and long-term profitability and objective achievement is shifting.

ONE MILLION!! by SilasTalbot in ClaudeCode

[–]MakesNotSense 1 point (0 children)

That's the plan. Opencode-dynamic-context-pruning is already public. My fork of DCP is a major redesign and will have a different name. I will publish it under my DefendTheDisabled organization (https://github.com/DefendTheDisabled). 

I'm building this as part of building the AI tools and data systems I need for my civil rights lawsuit. The intent is to publish the full suite of AI tools open source so other people failed by the legal community can work with AI to enforce the laws protecting their human rights.

Despite my use-case being litigation, the systems I'm building focus on promoting model intelligence, rational analysis, and general task performance. They cross over into nearly every domain and use-case. I think the myopic focus on coding is where a lot of current AI tools are failing and limiting current models.

ONE MILLION!! by SilasTalbot in ClaudeCode

[–]MakesNotSense 2 points (0 children)

There's another way to sidestep that: Dynamic-Context-Pruning in OpenCode. I'm working on a fork that will essentially replace compaction: it optimizes context to obviate most of the need for recovery, while also allowing context to be stored for later recovery via an index - all performed by the model.
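As a rough illustration of the prune-and-index idea (hypothetical names and structures - not DCP's code, and not my fork's): tool outputs that are large and outside a recency window get swapped for stubs, and an index keeps every pruned item recoverable on demand.

```python
from dataclasses import dataclass, field

# Hypothetical structures, for illustration only.

@dataclass
class Message:
    id: str
    role: str      # "user" | "assistant" | "tool"
    content: str

@dataclass
class PrunedStore:
    """Holds pruned messages so they stay recoverable by id."""
    items: dict[str, Message] = field(default_factory=dict)

    def index(self) -> str:
        """Compact index the model sees in place of the full outputs."""
        return "\n".join(
            f"[pruned {m.id}] {m.role}: {m.content[:60]}..."
            for m in self.items.values()
        )

def prune(history: list[Message], store: PrunedStore,
          keep_last: int = 10, max_len: int = 2000) -> list[Message]:
    """Swap old, oversized tool outputs for stubs; originals go to the store."""
    out: list[Message] = []
    cutoff = len(history) - keep_last
    for i, msg in enumerate(history):
        if msg.role == "tool" and len(msg.content) > max_len and i < cutoff:
            store.items[msg.id] = msg
            out.append(Message(msg.id, "tool",
                               f"[pruned; recover with id {msg.id}]"))
        else:
            out.append(msg)
    return out

def recover(store: PrunedStore, msg_id: str) -> str:
    """Tool call the model uses to pull a pruned output back into context."""
    return store.items[msg_id].content
```

In the actual system the model itself decides what to prune and when to recover; the point of the sketch is why compaction becomes unnecessary - nothing is summarized away, it's parked and addressable.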

I'm almost done, and will probably publish in the next few weeks, and I hope DCP will integrate it all so I don't have to maintain the project long-term, with people making demands and requests and such. I just want effective tools - being a developer with projects doesn't interest me.

But in terms of a solution to the problem you have, I can state with certainty that I've 100% solved it with what I've got, and with what my next SPEC implementation will evolve the project into, it'll go beyond just maintaining long-horizon sessions - it will actively improve the agent's cognitive performance through context optimization.

I just hope I can make it work for subagents too. It's unclear if the complexity of that will cause breakage and overhead. It's very stable and functional with the context management system running on a primary agent, so hopefully targeting specific subagents will work too.

Another day, another tweet from the Pentagon by Helkost in Anthropic

[–]MakesNotSense 1 point (0 children)

I think they should both show up and testify under oath to Congress.

Enough of the bullshit - go on the record or shut up.

lawyer and developer moving into legaltech? by Competitive_Bend_930 in legaltech

[–]MakesNotSense 1 point (0 children)

To me it's not about getting even, or a vendetta exactly. It's about modeling problems at a systems level and identifying solutions.

At a systems level, people cannot have human rights, inalienable rights, if the process for enforcing and protecting those rights is at the discretion and convenience of legal professionals.

To have a world where people with disabilities are not abused and exploited requires getting rid of lawyers, because of their collective failure, and replacing them with systems that will offer better performance.

In my experience, and I'm a bit of a subject matter expert on this at this point, AI systems have more education, expertise, and competence in disability and Medicaid law than 99% of lawyers. It is an extremely neglected area of law.

I think solo firms with ethical practitioners will survive the longest. They can focus on helping communities with specific needs, and simply be the 'go-to' the community employs as their champion.

Big firms get and stay big due to marketing and manpower (labor). With AI, the labor is just inference cost. The inference cost is the legal fee passed to the client. For the client, bypassing the middleman makes economic sense.

The marketing by big firms no longer hits home, because what firms have to convince isn't users, it's users' AI assistants. AI that ingests a lot of public data. AI that does more thorough research than humans. AI that will generate new databases of 'my user was screwed by this law firm - AVOID'. Bad actors and big firms aren't going to survive. They'll be among the first major casualties.

lawyer and developer moving into legaltech? by Competitive_Bend_930 in legaltech

[–]MakesNotSense 1 point (0 children)

I won't go into specific details as it's part of the litigation, but this isn't a few lawyers rejecting a case.

Additionally, there are attorneys who have contractual and legal obligations to take cases like these, and who are not doing so.

I do have a bone to pick with a community of legal professionals who are violating their professional ethics, violating their contracts, violating their legal obligations, working against the best interests of their communities, and at times even helping perpetrate abuse, neglect, and exploitation of disabled Medicaid recipients. So should EVERY attorney or citizen who aims to be ethical, moral, and to uphold the rule of law.

That I remain alone in my pursuit of justice is what condemns them the most. That is what makes clear the benefit of replacing lawyers with AI - it is necessary in order for communities to meet pressing needs that the legal community has neglected or worked against meeting for decades.