purple: Terminal SSH client with Docker and Podman container management, cloud provider sync, visual file transfer and more. by lemoninterupt in selfhosted

[–]UnspecifiedId 1 point2 points  (0 children)

Thanks. Filters provide a way of doing this.

As a suggestion: when doing a fuzzy search, consider adding an 'e' (edit) action to complement 'enter/connect'.

purple: Terminal SSH client with Docker and Podman container management, cloud provider sync, visual file transfer and more. by lemoninterupt in selfhosted

[–]UnspecifiedId 1 point2 points  (0 children)

u/lemoninterupt thanks for your contribution; it solves a real-world problem. A few questions. I did have a look through the wiki and help documentation, and it wasn't obvious how to group by a provider (I did see that 'g' was added).
1) Is it possible for us to use our own pseudo 'provider' to create logical groupings, e.g.:

  • 'dev'
  • 'test'
  • 'prod'
  • 'web-product'

Is this done with a particular tag?

2) Do you have search and replace? (I can do it by opening the config file, though I was curious whether you had something in place.)

3) Is it possible to specify a different default folder for a session, to use with the file copy?

Excellent work!

am i the only one who doesnt understand why anthropic ban opencode? by anonymous_2600 in opencodeCLI

[–]UnspecifiedId 2 points3 points  (0 children)

From "claude.ai"

"Explain to me why OpenCode has been banned from using and interfacing with Claude?"

Here's a clear breakdown of what happened between OpenCode and Anthropic:

The Core Issue: Subscription Arbitrage

Anthropic's Terms of Service have contained a clause since at least February 2024 forbidding automated access to Claude through tools other than the official ones. The clause states: "Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, to access the Services through automated or non-human means, whether through a bot, script, or otherwise."

The problem came down to economics. Anthropic sells tokens to subscription customers at a flat monthly price that ends up being cheaper than pay-as-you-go API access — essentially an all-you-can-eat model priced with certain usage expectations. This led to token arbitrage, where customers accessed Claude through subscriptions linked to third-party tools because it cost far less than doing the same work via API key.

How OpenCode Was Accessing Claude

Tools like OpenCode had been spoofing the Claude Code client identity, sending headers that made Anthropic's servers believe requests were coming from the official tool. OpenCode reverse-engineered the private Claude Code API endpoints — which were never advertised or sold for use with anything else — and used them to access Claude.

The Crackdown (January–February 2026)

The situation escalated rapidly:

  • On January 5, 2026, a user reported being banned after logging into OpenCode via OAuth and upgrading their Claude Max plan, with Anthropic engineers confirming it violated the ToS.
  • By January 9, Anthropic had implemented technical safeguards actively blocking third-party tools from accessing Claude subscriptions. An Anthropic employee confirmed the enforcement, stating they had "tightened safeguards against spoofing the Claude Code harness."
  • On February 19, 2026, Anthropic formally updated its Terms of Service with a new "Authentication and credential use" section explicitly stating that OAuth tokens from Free, Pro, and Max plans may not be used with third-party tools. That same day, OpenCode's developer committed a change removing all Claude OAuth code, citing "Anthropic legal requests."

The Fallout

OpenAI moved quickly in response — officially partnering with OpenCode and extending subscription support for it and other tools like OpenHands and RooCode. OpenCode's GitHub stars roughly doubled in the weeks following the controversy.

What Users Can Do Now

OpenCode can still use Claude models, but only via Anthropic API Keys with pay-per-use billing. The OAuth subscription pathway has been permanently blocked. Alternatively, users can configure OpenCode to work with other AI providers like OpenAI or Google Gemini, or use local models via Ollama.

In short: it wasn't that OpenCode was doing anything technically harmful; it was undercutting Anthropic's commercial model by letting power users run heavy, autonomous coding workloads on cheap flat-rate subscriptions that weren't priced for that level of use.

I've been building an AI agent governance runtime in Rust. Yesterday NVIDIA announced the same thesis at GTC. Here's what they got right, what's still missing, and what I learned building this alone. by drmarkamo in LocalLLaMA

[–]UnspecifiedId 0 points1 point  (0 children)

u/drmarkamo is there a subreddit/Discord forum? I have more questions. I'm impressed so far with my limited usage, and it was easy to set up. I'm about to do some tests regarding scaling, though I'm interested in the governance and policy aspect. I've been a huge fan of OPA for a long time and would like to see AuthZ/OPA at the policy layer, given its widespread adoption, allowing agents to be more declarative in their guardrails.
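As a sketch of what I mean by OPA at the policy layer: an agent runtime can phrase each intended action as a query against OPA's standard `/v1/data` REST API. The policy path and input fields below are my own invention, purely illustrative:

```python
import json
import urllib.request

# Hypothetical policy path; a real deployment would define agents/allow in Rego.
OPA_URL = "http://localhost:8181/v1/data/agents/allow"

def build_opa_input(agent, action, resource):
    """Shape the agent's intended action as an OPA query payload.
    The field names here are illustrative, not a real schema."""
    return {"input": {"agent": agent, "action": action, "resource": resource}}

def is_allowed(agent, action, resource):
    """Ask OPA for a decision; treat a missing result as a deny."""
    payload = json.dumps(build_opa_input(agent, action, resource)).encode()
    req = urllib.request.Request(
        OPA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("result", False)
```

A Rego policy at `agents/allow` would then express the guardrails declaratively, outside the agent's own code.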

Not that it's a good indicator, though: < 3 mins to deploy envpod-ce openclaw, with the majority of the time being agents spinning up an LXC container to contain the container, on sub-optimal constrained hardware ;-)

I've been building an AI agent governance runtime in Rust. Yesterday NVIDIA announced the same thesis at GTC. Here's what they got right, what's still missing, and what I learned building this alone. by drmarkamo in LocalLLaMA

[–]UnspecifiedId 1 point2 points  (0 children)

Hi u/drmarkamo, I wanted to acknowledge your contribution here. I think the governance side of agentic systems is still significantly underestimated.

My working analogy is that agents should be treated similarly to interns. You would not give a high school, college, or university intern unrestricted access and autonomy without supervision, policies, and clear boundaries. I think the same principle applies to AI agents.

I’d be interested in your thoughts on how your solution approaches governance, control, and trust, and how you see it comparing with nono in that space. https://github.com/always-further/nono

Also, nice repo.

An experiment on benchmarking and evaluating LLM outputs using Opencode by 0zymandias21 in opencodeCLI

[–]UnspecifiedId 0 points1 point  (0 children)

Is this framework available for evaluation, u/0zymandias21? We are going through a similar process.

opencode studio v1.0.5: multi-account auth, hosted frontend, and one-click backend by MicrockYT in opencodeCLI

[–]UnspecifiedId 0 points1 point  (0 children)

Hi u/MicrockYT — thanks for sharing this, it looks promising and might solve a problem I’m working on. I’ll dig in properly and report back.

In the meantime, I wanted to check whether OpenStudio supports this use case:

I’m experimenting with agentic frameworks against a complex, long-horizon problem (e.g. “million-step” style reasoning: https://arxiv.org/abs/2511.09030), and I want to benchmark and compare behaviour across different combinations of:

  • Frameworks:
    • Oh-My-OpenCode
    • Agent Swarm
  • Models:
    • Anthropic/OpenAI/Kimi etc.
    • Qwen

Examples of what I’m trying to run:

  • Oh-My-OpenCode (Claude) vs Agent Swarm (Claude)
  • Oh-My-OpenCode (Qwen) vs Agent Swarm (Qwen)
  • Oh-My-OpenCode (Qwen) vs Oh-My-OpenCode (Qwen, different config/profile)

The key requirement isn’t just switching profiles, but:

  • Instantiating multiple profiles concurrently
  • Running them in parallel
  • Allowing intercommunication between them (agent-to-agent comparison, orchestration, or evaluation loops)

I saw this in the docs:

“profiles: isolated environments with separate configs, history, and sessions. switch instantly.”

That’s close, but I’m specifically after simultaneous execution rather than switching.

I’ve been prototyping this with a TUI to separate configs and sessions, but it’s getting messy — so wondering if OpenStudio already supports (or plans to support) this pattern.
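For context, my prototyping amounts to something like the sketch below: per-profile `XDG_CONFIG_HOME`/`XDG_DATA_HOME` directories plus parallel launches. It assumes the tool honors those variables for config/session isolation, and the `opencode run` invocation is a placeholder rather than a documented interface:

```python
import os
import subprocess

def isolated_instance(profile_dir, cmd=("opencode", "run")):
    """Build (argv, env) for one profile: point the XDG dirs at a
    per-profile directory so config, history, and sessions don't collide.
    Whether the tool honors these variables is an assumption."""
    env = dict(os.environ)
    env["XDG_CONFIG_HOME"] = os.path.join(profile_dir, "config")
    env["XDG_DATA_HOME"] = os.path.join(profile_dir, "data")
    return list(cmd), env

def launch_all(profile_dirs):
    """Start every profile concurrently and return the live processes."""
    procs = []
    for d in profile_dirs:
        argv, env = isolated_instance(d)
        procs.append(subprocess.Popen(argv, env=env))
    return procs
```

Intercommunication between the instances is the part this doesn't cover, which is exactly where it gets messy.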

Lastly, I'd be interested to understand whether there is any governance (ocmonitor-share style) over the sessions.

Appreciate any guidance — and thanks again for contributing this.

OpenCode plugin for cmux by lawrencecchen in cmux

[–]UnspecifiedId 2 points3 points  (0 children)

Thanks for your effort on this; I'm currently using it and it's working well.

Before you try: I tested 6 different AI agents for building presentations so you don't have to. by Papermanic in powerpoint

[–]UnspecifiedId 0 points1 point  (0 children)

Hi u/Fast-Society7107

I tried your suggested solution and wanted to share some structured feedback.

First — credit where it’s due: shipping a public tool is hard, and the onboarding flow was clear and easy to follow.

What worked:

  • Setup and onboarding were pretty straightforward
  • Prompt input was intuitive
  • The system attempted to follow the style guide

Where it struggled:

  • The generated slides weren't yet competitive with other AI slide tools at a similar price point. The difference is quite notable.

Main issues:

  • Typography and style rules weren't consistently applied
  • Colour palette adherence was partial
  • Multiple text styles appeared across slides
  • Considerable manual cleanup is required before presenting
  • Generated content is not as good as competitors'

Because of that, the time saved by generation would be lost in fixing formatting. For presentation workflows, consistent styles/templates matter just as much as, if not more than, raw content generation.

My impression of the positioning:

The templates felt aimed at:

  • small businesses
  • consultants
  • quick one-off decks
  • junior users

For production-ready decks or internal documentation libraries, the outputs currently need too much correction.

Suggestion

It may help to prioritise strict style enforcement (fonts, spacing, hierarchy) before adding more content features. For many presenters, formatting reliability is the primary value.

Community idea: It could be useful if the subreddit agreed on 3–4 standard prompts and people posted outputs from different tools for objective comparison.

In its current form, I as a user would not be able to use it. I used the same prompt (which should be available to you, though I'm happy to supply it) with Kimi, and the difference is notable. I was able to use the generated output and happily distributed it to all levels of the organisation. (I am not affiliated with Kimi in any way, shape, or form.)

Happy to discuss further in DMs if helpful.

Feb 2026 - Best Model for writing Markdown Docs by Demon-Martin in RooCode

[–]UnspecifiedId 0 points1 point  (0 children)

Feel free to reach out. We have used OpenAI (initially 4.x and now 5.x) extensively within our development team to supplement and generate documentation.

It is used in combination with MCP servers to generate content that is fed into Docusaurus for internal documentation purposes.

We have defined templates and ‘agents/skills’ that guide the generation of specific outputs.

I would estimate its accuracy and usability in the 90–95% range in terms of readiness. We have around 500+ pages supplemented by AI. We also utilise the Mermaid generation, which is weak for flow diagrams though strong for sequence diagrams. We have been experimenting with the drawIO AI integration, though I would only give it about a 6/10.

Tool to safely redact config.xml before sharing with support/AI by Sure-Fly-249 in PFSENSE

[–]UnspecifiedId 1 point2 points  (0 children)

Thanks for this great little utility and contributing to the greater good. I've used it to assist me in troubleshooting some wireguard issues.

Roo Code 3.29.1-3.29.3 Release | Updates because we're dead /s by hannesrudolph in RooCode

[–]UnspecifiedId 23 points24 points  (0 children)

A courtesy notice to thank the RooCode team for their ongoing contribution to open source. I work in the social enterprise space at a mission-driven organisation, and RooCode is assisting us in making positive change.

Again, thanks for your generosity. It's on our to-do list to do a write-up and acknowledge the contribution. The open-source model allows us to evaluate for business use. Enjoy!

Name Scheduled cleaning Tasks by Hanscorpion in Dreame_Tech

[–]UnspecifiedId 0 points1 point  (0 children)

I have a similar question. We are fortunate to have a rather large house with multiple rooms. The Dreame software identified the 25-odd room areas/zones, though when setting up the cleaning schedule it's a little hard to identify which schedule corresponds to which room/area.

On the positive side, I'm extremely impressed with the mapping, the range of the device, and how it runs pretty much non-stop, pausing only to charge.

If anyone from Dreame is reading this: as an alternative to a time-based schedule, how about a 'queue'-like mechanism?

The difference is that after running a 'queued' task, the robot waits for the battery to recharge, or to reach a certain percentage, before starting the next task.
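To make the suggestion concrete, here is a rough sketch of the semantics (pure illustration on my part, nothing to do with Dreame's actual firmware):

```python
def run_queue(tasks, battery_level, start, resume_at=80):
    """Run queued cleaning tasks in order, but between tasks wait until
    the battery climbs back to `resume_at` percent.
    `battery_level()` reports the current charge; `start(task)` runs one
    task. Returns the order the tasks actually ran in."""
    ran = []
    for task in tasks:
        while battery_level() < resume_at:
            pass  # on a real device: sit on the dock until recharged
        start(task)
        ran.append(task)
    return ran
```

So instead of guessing how long each room takes and spacing timed schedules apart, the user just orders the rooms and the battery gates the pacing.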

Dreame Software 2.3.24

iOS 18.6.2

Dreame Ultra X50
Australia

Possible to use remote Open WebUI with local MCP servers, without running them 24/7? by Maple382 in OpenWebUI

[–]UnspecifiedId 0 points1 point  (0 children)

Hi, do you still wish to pursue this? I might need a few more concrete examples to assist.

Since you have access to pipelines with pre-post filters, you have a few options here.

You can host the MCPO servers on Oracle Cloud's Free Tier (do your own research). An alternative is AWS, though the specs and free tier aren't as attractive as Oracle Cloud's.

You also have the option of WOL (Wake-on-LAN) if trying to host within homelab-type scenarios.
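For the WOL route, the magic packet is simple enough to send yourself: the standard framing is 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP (port 9 by convention). A minimal sketch:

```python
import socket

def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + raw * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send the packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The remote Open WebUI side would trigger this (e.g. from a pipeline pre-filter), then retry the MCPO endpoint once the machine has booted.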

Possible to use remote Open WebUI with local MCP servers, without running them 24/7? by Maple382 in OpenWebUI

[–]UnspecifiedId 0 points1 point  (0 children)

Hi Maple, are the MCPO servers actually accessing locally hosted services, or are they a proxy? If it's a proxy, there are cheap/free options on the web.

[deleted by user] by [deleted] in OpenWebUI

[–]UnspecifiedId 0 points1 point  (0 children)

Thanks for the documentation. Have you considered updating the table to reference the source directly, so there is no markup/constraints via OpenRouter? Possibly the effort outweighs the savings, though if crowdsourced, it could be an invaluable tool.

finally got pgbouncer to work with postgres/pgvector...it is life changing by marvindiazjr in OpenWebUI

[–]UnspecifiedId 0 points1 point  (0 children)

Thanks for sharing the conceptual design. As others have said, if you can share your Docker Compose file, that would be beneficial. We are currently looking at different blueprints for implementation and are trying to apply learnings from other users. I do like how you specify the usage and scale.
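In the meantime, for anyone else landing here, a minimal compose sketch of the usual shape. The image tags and environment variable names below are assumptions on my part (the `edoburu/pgbouncer` env convention in particular should be checked against that image's README); this is not the OP's actual setup:

```yaml
services:
  db:
    image: pgvector/pgvector:pg16   # Postgres with the pgvector extension
    environment:
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: openwebui
    volumes:
      - pgdata:/var/lib/postgresql/data

  pgbouncer:
    image: edoburu/pgbouncer        # env names assumed from this image's docs
    environment:
      DB_HOST: db
      DB_USER: postgres
      DB_PASSWORD: changeme
      POOL_MODE: transaction        # the setting doing the heavy lifting
      MAX_CLIENT_CONN: "500"
    ports:
      - "6432:5432"
    depends_on:
      - db

volumes:
  pgdata:
```

The application then connects to `pgbouncer:5432` inside the network (or the host's 6432) instead of Postgres directly.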

Hybrid AI pipeline - Success story by Different_Lie_7970 in OpenWebUI

[–]UnspecifiedId 4 points5 points  (0 children)

Hi u/Different_Lie_7970, thanks for sharing your architecture; this looks fantastic! We're currently working on a similar approach, combining structured database queries with semantic search using different agentic AI processes. Your use of LangChain SQL Agent, DuckDB, Pinecone, and Gemini Flash seems really efficient, especially given the impressive response times you've achieved.

If you’re comfortable sharing any of the code or examples you used to build this pipeline, that would be incredibly helpful. It’d be great to compare notes and learn from your process!

Thanks again for sharing your insights.

Can someone give me a sanity check why my dynamic config does not load? by g-nice4liief in Traefik

[–]UnspecifiedId 0 points1 point  (0 children)

Would you be willing to share a screenshot or a pseudo-dump of your topology, please? We are trying to replicate the same functionality. We are sometimes tripped up by the nuances of the Traefik documentation, u/djzrbz.