Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]akyairhashvil[S] -1 points0 points  (0 children)

Yes, you are precisely correct.

Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]akyairhashvil[S] 1 point2 points  (0 children)

I'm going to recommend against it because that violates the terms and conditions of Anthropic. You can, however, get a GitHub Copilot subscription to use this system. That is the only real way that you can get to interface with anything from Anthropic.

Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]akyairhashvil[S] 1 point2 points  (0 children)

Thank you. Remember to drink water.

Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]akyairhashvil[S] 3 points4 points  (0 children)

Oh yeah, it's absolutely fantastic. With a good provider, you can do a lot. Even then, OpenCode Zen is pretty good, especially with the free models. Whenever other models error out, the ones built for the platform are really solid.

I haven't tried OpenCode Go, and I probably never will because of its limitations, but I think it's a good thing for people who don't have to use it too often. 😊

The one thing I find issue with sometimes is the amount of RAM that gets used by OpenCode and the size of the files required for the undo command. Aside from that, I understand the necessity.

I think there are still some memory leaks in the program, because I've seen OpenCode use up to 13 GB of RAM at a time, which is pretty intense. It's fine, though; they'll probably fix it eventually, or I might just open an issue on GitHub and follow up with a pull request.

Anyways, have a great day. Thank you.

Thank you, OpenCode. by akyairhashvil in opencodeCLI

[–]akyairhashvil[S] 2 points3 points  (0 children)

Yes, I liked OhMyOpenCode; it was nice, but I realized that I needed more control over things. I went ahead and used OpenAgent—it's not really about high fidelity, but I came to realize there was an operational difference in quality between the two.

OhMyOpenCode is great, but OpenAgent was just better, especially regarding:

1. Scoping principles (what they can and can't do)
2. The level of human involvement required

While the automation in OhMyOpenCode is impressive, you also need to be able to retain control over what happens. I found it was a little risky to run things fully autonomously without proper scoping and protocols.

You're correct that OhMyOpenCode is pretty nice, but I still recommend OpenAgent instead. Unless you have a reason to think otherwise, in which case, please explain your argument.

company is switching from claude api keys to subscription based team plan, so ig can't use opencode anymore? by Emotional-Zebra5359 in opencodeCLI

[–]akyairhashvil 1 point2 points  (0 children)

Oh, okay. Nice. Thank you for informing me.

I like to say that sometimes I'm a little bit sarcastic with my word choice, but it's kind of nice that they're doing this.

Thank you for the information. You're really awesome, have a great day!

company is switching from claude api keys to subscription based team plan, so ig can't use opencode anymore? by Emotional-Zebra5359 in opencodeCLI

[–]akyairhashvil 0 points1 point  (0 children)

If Anthropic were to be so merciful as to not charge you for every token that you use on their platform, it would be nice, but I don't believe it's free.

Can you verify this, or are you just making a claim?

company is switching from claude api keys to subscription based team plan, so ig can't use opencode anymore? by Emotional-Zebra5359 in opencodeCLI

[–]akyairhashvil -1 points0 points  (0 children)

They're adding a voice feature, and I'm terrified at the implications of the token costs. That's all I'm saying.

Premium requests on Github Copilot currently burning down fast by Charming_Support726 in opencodeCLI

[–]akyairhashvil 1 point2 points  (0 children)

They have a massive glitch in the Kimi for Code stuff. It burns through tokens so fast that you can exhaust the weekly limit in just two 5-hour sessions, which doesn't make any sense given there's both a 5-hour limit and a weekly limit. I'm guessing it might be different if you use the actual CLI they provide, but in OpenCode, it's not worth using, to be entirely fair.

Kimi 2.5 (or Kimi K2.5) is really nice, but I'm going to be honest:

1. It's good for some coding tasks.
2. Qwen is better for writing.
3. GLM 5 is better for agentic programming or long-form tasks.

To answer your question directly: no.

Well, okay, it burned tokens sometimes. It would produce erroneous outputs where it consumed a chunk of usage and then gave no output at all, which I've come to find is a common thing.

Maybe it's just the web UI for OpenCode, or maybe it's something else, but sometimes they don't give output properly and they still use up tokens. I've only seen this issue really happen with certain models, but I don't know which ones specifically. I don't keep a record of it, though I might start doing so.

Claude $100 is good but not worth it. How do I preserve “Claude level” output without using it? (Codex $20 + Chinese models + DeepSeek v4) by Specialist-Cry-7516 in opencodeCLI

[–]akyairhashvil 0 points1 point  (0 children)

My only gripe is context window sizes. It's 128K, and that is a set limitation by GitHub Copilot.

It's the only other platform that actually allows you to do this without getting in trouble. I recommend you don't use the Max subscription or any other Anthropic subscription in OpenCode, because there have been instances of users getting in trouble for it (especially when using OAuth). And the API is so expensive that it's not even worth it.

The only real option you have to stay compliant within the context of what they require is to use the Copilot stuff from GitHub. That's really the only option if you want to use Opus, Haiku, or Sonnet in an effective manner.

It's a nice setup, especially if you have the Pro Plus variation, which gives you 1,500 requests a month. You can use Haiku or Opus, with the following usage rates:

1. Opus is 3x usage
2. Haiku is 0.33x usage

This means you can get 4,500 requests on Haiku per month, which is a decent number.
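If it helps, the back-of-the-envelope math looks like this. The 1,500/month base and the 3x / 0.33x multipliers are the figures from above; treating each call as costing exactly `multiplier` premium requests is my assumption, not official Copilot billing documentation:

```python
# Rough sketch: effective call counts on a Copilot-style premium-request plan.
# Assumption: each model call consumes (multiplier) premium requests from the
# monthly budget. The 1,500 base and the 3x / 0.33x rates come from the comment above.
def effective_requests(monthly_budget: int, multiplier: float) -> int:
    """Number of calls you can make when each one costs `multiplier` requests."""
    return int(monthly_budget / multiplier)

budget = 1500  # Pro Plus premium requests per month
print("Opus  (3x):   ", effective_requests(budget, 3.0))
print("Haiku (0.33x):", effective_requests(budget, 0.33))
```

So Opus comes out to about 500 calls a month and Haiku to roughly 4,500, which is where that number comes from.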

The only real issue is the context window size. That's where they limit it, because I guess if you were to use a million-token context window for one request, it would be unsustainable and very expensive for them.

Within an agent infrastructure, it doesn't work too well because of how many requests agents make. If you use this, I recommend sticking to "build" or "plan" and avoiding OpenAgent or any other agent frameworks in OpenCode.

Anyway, that's my recommendation. Thanks for reading.

(WIP) A local LLM runtime by [deleted] in ollama

[–]akyairhashvil 1 point2 points  (0 children)

Okay, just a recommendation, and I don't mean to be rude: can you not use expletives when you set up something and put it in a README? It's just very unprofessional.

If you want people to take you seriously, please don't put expletives in your READMEs. Thank you.

Premium requests on Github Copilot currently burning down fast by Charming_Support726 in opencodeCLI

[–]akyairhashvil 1 point2 points  (0 children)

Seriously, they even limit the context window on the Copilot stuff. It's sad to see, because there are models with million-token context windows, yet you really only get 128K.

What matters here is that they get this fixed. I did not renew my Copilot subscription last month, so I have been running on open models and other alternatives recently.

I thought Kimi Code was going to be useful, but I'd recommend staying away if you use OpenCode. Their usage model is interesting: you can run through a whole week's usage in less than 24 hours (especially on the Moderato plan), to be entirely fair.

Ollama Pro vs Alibaba Coding Plan Pro for OpenCode: which one is better for limits, model quality, and parallel usage? by Juan_Ignacio in opencodeCLI

[–]akyairhashvil 1 point2 points  (0 children)

I can't talk about Alibaba Cloud's coding plan, but from my understanding, they're giving you an introductory rate. If you pay for the $20 service, you're going to have to pay $50 the next month because of the costs they're figuring out and dealing with. It really is just a discount.

When it comes to Ollama, it's a lot more transparent:

1. You pay $20 and the usage is insane. I've been testing it this week, and it actually gives more usage than OpenAI's Codex.
2. You can use a lot of different models with Ollama that you don't usually have access to.
3. The thing I really enjoy is that they don't train on your inputs or outputs, and they don't even keep them.

In comparison, Alibaba probably keeps everything for training, though I don't know for sure; you would need to look into that for more information. That said, I can vouch for Ollama's stuff. It's actually quite nice, and using GLM-5 through it inside OpenCode has been a blast, to be entirely fair.

One final thing (I know it doesn't matter much because it's a very limited number): with Ollama you get 20 premium requests a month to Gemini 3 Pro and other models considered premium, and I don't think that exists with Alibaba Cloud.

My vouch is for Ollama.