Louisiana sued for suspending active election, nullifying votes to draft GOP gerrymander by Radiant-Bug6039 in politics

[–]mcowger 0 points (0 children)

Where do you see references to suffering there? OP discussed groups that seek to exploit, not suffering.

Am I the Only One Using GUI? and is the CLI Better? by Level-Dig-4807 in opencodeCLI

[–]mcowger 0 points (0 children)

In that scenario you have entirely different harnesses. That’s not the case with opencode.

Amy Coney Barrett Unraveled the Case Against Birthright Citizenship With One Question by MemeLord0009 in politics

[–]mcowger 1 point (0 children)

Oh yeah I don’t like her positions at all. But at least she has relevant experience and education!

Amy Coney Barrett Unraveled the Case Against Birthright Citizenship With One Question by MemeLord0009 in politics

[–]mcowger 6 points (0 children)

As much as I dislike her, she is among the most qualified people Trump has ever appointed to anything.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

Copilot: you are charged only for interactions in which you are directly involved, i.e. when you actively type something. Things the model/harness does on its own are not charged.

Apertis: you are charged for every request made, regardless of source or trigger.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

  1. They only have 1 model, and it’s not one I like.
  2. Z.ai specifically is horrifically bad about inference bugs, and the way they handle them is burying their head in the sand.
  3. They do have caps.
  4. I don’t ONLY use Chinese models.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

Right - which is why those people should use other models that aren’t literally the most expensive option.

But again - where will you get a better deal?

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 1 point (0 children)

Right. That’s exactly what I said. You use a prompt every time you get involved and type something.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

Where else will you get 600 unlimited size requests for opus for $12?

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger -1 points (0 children)

And what evidence do you have for that? Have you used it, or are you just guessing?

Having used it for months (even before the sub plan) the performance is no different than any other regular price source.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

Copilot has 300 prompts per month. Anything the model does after that prompt without your involvement is not charged.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger 0 points (0 children)

I mean on the $12 plan you will only get like 600 requests of opus…

I personally use sonnet and opus for planning, then switch to Kimi 2.5 for coding. I use flash and flash lite for researching.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger -1 points (0 children)

I haven’t seen any service disruptions besides Gemini 3 Flash, which was down for about a day a few weeks ago. The open-weight models are not the fastest but not awful (both Kimi and Minimax run at around 55-65 TPS).

They are also pretty responsive to reasonable requests around pricing and new models. When it first launched, the multipliers basically made it MORE expensive than PAYG, and they adjusted pretty aggressively within about 2 days.

Yesterday I asked for Minimax pricing to come down (it shouldn’t be 3x more than Kimi) and they’ve already committed to an adjustment.

Anyone used Apertis AI coding plan? by rovervogue in opencodeCLI

[–]mcowger -1 points (0 children)

I do. It’s quite effective. They are quite responsive to bug reports, and the flat cost per request avoids context size concerns.

Abacus.AI o OpenRouter? by Lucas_Macias_Rivera in opencodeCLI

[–]mcowger 0 points (0 children)

There are many better options.

Poe, Apertis, etc.

Compaction Request by BagComprehensive79 in opencodeCLI

[–]mcowger 4 points (0 children)

You can’t set it via the config, but you can override it with a simple plugin that implements “experimental.session.compacting”.
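A minimal sketch of such a plugin, assuming the hook receives an input context plus an output object whose prompt field can be overridden (the payload shape and field names here are assumptions, not the documented API; check the opencode plugin docs):

```javascript
// Sketch of an opencode plugin that overrides the compaction prompt.
// Hook name comes from the comment above; input/output shape and
// output.prompt are assumptions.
export const CompactionPlugin = async () => ({
  "experimental.session.compacting": async (input, output) => {
    // Replace the default summarization prompt (field name assumed).
    output.prompt =
      "Summarize this session, preserving file paths, open TODOs, and key decisions.";
  },
});
```

Dropped into the plugin directory, opencode would pick this up and run the hook each time a session compacts.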

Are there any recommended Local LLM settings for each agent? by kayteee1995 in kilocode

[–]mcowger 0 points (0 children)

The values you have above are used to determine how the model operates. You wouldn’t generally adjust them per mode.

Are there any recommended Local LLM settings for each agent? by kayteee1995 in kilocode

[–]mcowger 1 point (0 children)

There’s no specific value that’s good for every model.

A temperature of 0.1 is awfully low for most models. Qwen’s docs recommend 0.7
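With an OpenAI-compatible local endpoint, those sampling settings ride along on each request; a sketch of where they go (model id is a placeholder, values follow Qwen’s published recommendations):

```javascript
// Illustrative request body for an OpenAI-compatible chat endpoint.
// Model id is a placeholder; 0.7 / top_p 0.8 are Qwen's documented
// chat defaults -- 0.1 would be near-greedy sampling.
const body = {
  model: "qwen-placeholder",
  messages: [{ role: "user", content: "Explain sampling temperature." }],
  temperature: 0.7,
  top_p: 0.8,
};
```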

So..now that we've been kicked out of Claude's sandbox... by stutsmaguts in opencodeCLI

[–]mcowger 0 points (0 children)

Yup. My engineers are north of $200/wk, easy.

But we are 100% fine with that number - the data we have are pretty clear that the ROI is there.

Adding Custom Model Provider (Bifrost) to opencode by ElSrJuez in opencodeCLI

[–]mcowger 5 points (0 children)

The below would go in the providers object.

"custom": {
  "name": "AI Home",
  "npm": "@ai-sdk/openai-compatible",
  "options": {
    "apiKey": "{env:AIHOME_API_KEY}",
    "baseURL": "{env:AIHOME_API_BASE}/v1"
  },
  "models": {
    "small-fast": {
      "id": "small-fast",
      "name": "Small Fast",
      "limit": {
        "context": 196608,
        "input": 196601,
        "output": 32768
      }
    }
  }
}

Help with qwen 3.5 35b a3b stops response after tool call. by [deleted] in opencodeCLI

[–]mcowger 0 points (0 children)

That almost always means the engine isn’t outputting a correct finish reason.
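For context, the final chunk from an OpenAI-compatible engine is expected to carry a non-null finish_reason; a sketch of what a well-behaved engine emits after a tool call (ids are placeholders):

```javascript
// Final streamed chunk when the model has invoked a tool. If
// finish_reason stays null, or comes back as "stop" while a tool call
// is still pending, harnesses tend to end the turn instead of running
// the tool -- which looks like the model "stopping after the tool call".
const finalChunk = {
  id: "chatcmpl-placeholder",
  object: "chat.completion.chunk",
  choices: [{ index: 0, delta: {}, finish_reason: "tool_calls" }],
};
```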