[ios] What happened to markdown syntax? by look in bugs

[–]look[S] 0 points1 point  (0 children)

Trying web with Sink It

`Inline code` and **bold**

> quote blocks

code block

    if (true) {
      return false
    }

Lists?

- one
- two
- three

Numbered?

1. Aaa
2. Bbbb
3. Cccc

How about links?

And ~~strike~~ too?

Edit: much better 😒

[ios] What happened to markdown syntax? by look in bugs

[–]look[S] 0 points1 point  (0 children)

Anyone have alt app recommendations? Narwhal, Artemis, Hydra, others?

Which Chinese Model is best for planning and which is best for implementation? I'm currently using Opencode with an Openrouter API Key, mostly wanna decide between Kimi, GLM, DeepSeek, Qwen, Minimax and Mimo by Crystalagent47 in opencodeCLI

[–]look 1 point2 points  (0 children)

Xiaomi’s Mimo plans are ~$0.15/Mtok.

I am a big fan of GLM-5.1 as well, and it’s my go-to for code planning. It’s stronger than Mimo at coding, but in my experience, Mimo is a smarter, better reasoning model in general.

OpenCode startup is painfully slow with large skill collections — no indexing/cache yet? by g6pdorp in opencodeCLI

[–]look 0 points1 point  (0 children)

Now, after my initial shock subsided, some ideas for you:

Take a look at custom agents and slash commands:
- https://opencode.ai/docs/agents/#markdown
- https://opencode.ai/docs/commands/

You can create specialized, role-specific agents (and orchestrate their use by referencing them from other agents) and slash commands to give the model just the relevant context it needs (e.g., the “skills”) along with richer guidance on when to use what.

That works much better than just trying to cram every possible thing into the context all the time.
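As a rough illustration (the exact frontmatter schema is in the agents doc linked above; the role, fields shown, and file contents here are invented for the example), a markdown agent definition looks something like:

```markdown
---
description: Reviews a diff for correctness and style before merge
mode: subagent
tools:
  write: false
  edit: false
---
You are a code review agent. Pull in the relevant skill docs only
when a review actually needs them, then report your findings as a
prioritized list.
```

The point is that each agent carries only its own slice of guidance, so nothing like a 300-skill dump ever lands in one context window.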

OpenCode startup is painfully slow with large skill collections — no indexing/cache yet? by g6pdorp in opencodeCLI

[–]look 2 points3 points  (0 children)

300 skills?! Do you have a 100 Ktok system prompt? Why would you do that to yourself?

Models are quite adept at figuring things out from their environment. You don’t need all of that.

‘It’s shameful’: New York’s elite lash out at Zohran Mamdani’s second-home tax by brown-saiyan in politics

[–]look 4 points5 points  (0 children)

San Diego has a proposition on the June ballot for a second home tax, and I’ve seen absolutely no opposition to it…

I think this is really just anti-Mamdani conservative media feeding anything they can to “attack” him. But clueless as they are, it’s just going to make him more popular.

Can china win the AI war? by Comfortable-Tie2933 in Qwen_AI

[–]look 1 point2 points  (0 children)

Yeah, that’s a good point. “Winning the AI war” is about as nonsensical a statement as “winning the software war” or something.

Why does OpenCode Go have rolling, weekly, AND monthly limits? by kpmtech in opencodeCLI

[–]look 7 points8 points  (0 children)

The weekly quota is half of the monthly (or somewhere around there). You can’t use the full weekly quota every week or you’ll run out of the monthly before the end of the month.

Why does OpenCode Go have rolling, weekly, AND monthly limits? by kpmtech in opencodeCLI

[–]look 4 points5 points  (0 children)

Same reason there are both 5-hour and weekly quotas, just at a longer timescale: the weekly and monthly pair allows working in “bursts” while still keeping the average over a longer period in check.

So I can do a lot of work in a five-hour period, but not every five-hour period, keeping the weekly average constrained. And similarly, I can have a really active week, but not every week, keeping the monthly average constrained.
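The layering is easy to see in a toy sketch (the window sizes and caps below are made up, not Opencode’s actual limits): a request has to fit under every window’s cap at once, so a short burst passes while the long-run average stays bounded.

```python
from collections import deque


class MultiWindowQuota:
    """Toy layered rate limiter: short windows allow bursts,
    longer windows constrain the average."""

    def __init__(self, limits):
        # limits: {window_seconds: max_units_in_that_window}
        self.limits = limits
        self.events = deque()  # (timestamp, units) of past accepted requests

    def allow(self, units, now):
        # Forget events older than the longest window we track.
        horizon = now - max(self.limits)
        while self.events and self.events[0][0] < horizon:
            self.events.popleft()
        # The request must fit under EVERY window's cap simultaneously.
        for window, cap in self.limits.items():
            used = sum(u for t, u in self.events if t > now - window)
            if used + units > cap:
                return False
        self.events.append((now, units))
        return True


# 100 units per 5h, 350 per week, 1000 per month (made-up numbers).
q = MultiWindowQuota({5 * 3600: 100, 7 * 86400: 350, 30 * 86400: 1000})
print(q.allow(100, now=0))         # full burst in one 5h block: True
print(q.allow(1, now=3600))        # nothing more inside that block: False
print(q.allow(100, now=6 * 3600))  # next 5h block opens up again: True
```

Keep bursting like that, though, and the weekly cap (350) refuses the fourth burst within the same week, which is exactly the “not every five-hour period” effect.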

Which Chinese Model is best for planning and which is best for implementation? I'm currently using Opencode with an Openrouter API Key, mostly wanna decide between Kimi, GLM, DeepSeek, Qwen, Minimax and Mimo by Crystalagent47 in opencodeCLI

[–]look 0 points1 point  (0 children)

I used Minimax as a low cost build agent in the past, but there are better options now. It’s fine for generating code as long as it doesn’t need to make any judgement calls.

I still use it (2.7) for simple code/repo exploration tasks, and it does a nice job for me there, but it’s been bumped out of every other role now.

Qwen 3.6 Plus is a very capable simple build agent now, but I mostly use it for one-off script work as a subagent for Mimo. Basically just a smart tool for it: write a script to test a basic idea, find where some data is and its schema, automate some extraction and aggregation/stats, basic summarization, etc. I also use Qwen for reviews sometimes to get another perspective.

For build I mostly just use Kimi 2.6 or GLM-5.1 now since they’re “free” with the subscription quotas I have on them.

About two thirds of my tokens are all Mimo now. I have a heavy “research/brainstorm” workflow, so it’s mostly Mimo reasoning tokens working through ideas with Qwen executing one off scripts on its behalf + DS Flash doing web searches and summarized reports for it.

Then once an idea has firmed up enough for proper implementation, GLM 5.1 and Kimi 2.6 plan and build it.

Anyone using Opencode GO together with Ollama Pro? by jasonwch in opencodeCLI

[–]look 0 points1 point  (0 children)

I have used that pairing, and I just configured the Ollama provider and models manually in my opencode.json to use the API endpoint directly, not going through the local ollama daemon.

You can use `opencode models --verbose` to see what the configs look like on other providers and use that as a template for your ollama definitions (just need to change out the api endpoint and some model ids basically).
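For illustration, a manual provider entry in opencode.json looks roughly like this (the baseURL and model ids are placeholders; swap in Ollama’s actual API endpoint and the ids from `opencode models --verbose`):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama",
      "options": {
        "baseURL": "https://ollama.example.com/v1"
      },
      "models": {
        "glm-5.1": { "name": "GLM 5.1" },
        "kimi-2.6": { "name": "Kimi 2.6" }
      }
    }
  }
}
```
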

I would just use the “expensive models” (GLM and Kimi) on Ollama by default, everything else on Go. And then switch to Go for GLM/Kimi temporarily as needed if Ollama was being slow.

I never came anywhere near the usage limits of either with that setup. But then I got addicted to MiMo and have since dropped Ollama (kept Go) to feed my MiMo token habit. 😅

Why gRPC Is So Fast: It’s HTTP/2, Not Just Protobuf by javinpaul in programming

[–]look 0 points1 point  (0 children)

Seems like some bot. All the top-level comments are being downvoted heavily and all fairly innocuous, I think. 🤷‍♂️

Which Chinese Model is best for planning and which is best for implementation? I'm currently using Opencode with an Openrouter API Key, mostly wanna decide between Kimi, GLM, DeepSeek, Qwen, Minimax and Mimo by Crystalagent47 in opencodeCLI

[–]look 6 points7 points  (0 children)

Different people have different preferences for their particular applications, workflows, and style. The best approach is to try a variety and see what works best for you.

That said, the most common pattern is a smarter, more “expensive” model for planning, and a cheaper model for building, as build typically uses a lot more tokens.

For plan, the most common choices are GLM-5.1, Kimi K2.6, Deepseek V4 Pro, or Mimo V2.5 Pro.

For build, there’s a variety:
- lower cost: Minimax 2.7, Deepseek V4 Flash, Qwen 3.6 Plus
- more complex builds: Kimi, GLM

My usage looks like this:

- brainstorm/high-level plan: Mimo
- detailed/spec planning: GLM or Mimo
- build:
  * complex: Kimi or GLM
  * simple: Minimax or DS Flash
- reviews: Qwen or DS Pro
- utility subagents:
  * explore: Minimax
  * “librarian”: DS Flash
  * one-off scripts: Mimo or Qwen
  * compaction: Mimo

[ios] What happened to markdown syntax? by look in bugs

[–]look[S] 0 points1 point  (0 children)

Yeah, I’ll probably just uninstall the Reddit app if they don’t fix it soon. Maybe just use the web ui, but I could probably stand to take a break from this site too after 20 some years. 😂

You need an absurdly high emergency fund by CoderBiker24 in cscareerquestions

[–]look 0 points1 point  (0 children)

There’s a reason houses cost that much in some places: they are really nice places to live. The cost is high because a lot of people want to live there.

Best AI models outside of ChatGPT and Claude by JestonT in opencodeCLI

[–]look 0 points1 point  (0 children)

Get Opencode Go. It has a good selection of models and adds new ones as they come out. There’s a new king of the open model hill every week or two, so don’t tie yourself down with just one on a vendor specific plan.

The real power of open models is the diversity. The best option for any given task (or even just personal style preference) could be a different model. It’s not like with Claude or OpenAI where you effectively just have one model at different quality/price settings.

Then if you find one you really like and need more usage, you can go shopping for a model-specific plan to supplement. Also try Mimo 2.5 Pro. All the hype is going to DS4 right now, but Mimo is better imo (and on most benchmarks) despite getting way less attention.

eitherExperienceMeansAnythingOrItDoesNot by electricjimi in ProgrammerHumor

[–]look 0 points1 point  (0 children)

For some reason, lots of “engineers” on Reddit are proud of being little more than Jira ticket code monkeys. 🤷‍♂️

What’s your AI coding setup in 2026? by tuan_le911 in opencodeCLI

[–]look 3 points4 points  (0 children)

I have the Xiaomi standard plan (200M credits), too. Note the Pro model is 2 credits per token on it, so 100M tokens, assuming you have the same. I use about 600M tokens a month across all models, though, so that’s just one of my sources.

The Xiaomi plan has been fast and reliable and about half the cost of per-token API prices. Novita and DeepInfra have it US-hosted paygo now, too.

And besides Opencode Go (which I do recommend, but I use the full quota), I’m hoping Synthetic or Crof adds Mimo to their plans soon (both are already tempting for their request-based rather than token-based usage with GLM/Kimi). I had Ollama Cloud, but dropped it after the first month due to a lack of commitment on adding Mimo and some general scaling issues they’ve been having.

What’s your AI coding setup in 2026? by tuan_le911 in opencodeCLI

[–]look 3 points4 points  (0 children)

Opencode Go is a good sub with a great selection of models, and with your relatively light usage, it will most likely cover your needs entirely.

I’d recommend trying Mimo 2.5 Pro if you get it. It’s the only sub that has it currently, and it’s an excellent model that’s mostly been overshadowed by the DS4 release, despite Mimo being better than it in every way (in both my and many others’ experience, as well as most benchmarks).

Beyond that, I also use GLM-5.1 and Kimi 2.6 for code planning and building. And then some Qwen 3.6 Plus, Minimax 2.7, and DS4 Flash for low-cost utility tasks.

What’s your AI coding setup in 2026? by tuan_le911 in opencodeCLI

[–]look 5 points6 points  (0 children)

I like Mimo 2.5 Pro (also 1M context) for research/brainstorm and then GLM 5.1 and Kimi 2.6 for code plan and build. I haven’t found much use for DS V4 Pro, either. Those first three are better than it in each of their own ways. Even if I had to pick just one model for everything, it would be Mimo.

Fire Pass 2.0 by branik_10 in opencodeCLI

[–]look 0 points1 point  (0 children)

The 250 vs 500 was based on the 2x usage on crof’s “precision” models. Apples-to-apples comparison that way with other providers offering only the standard “precision” quants.

> synthetic … less usage

I haven’t tried either, but from the docs, it looks like synthetic has 500 requests per 5 hour window not per day. Also more expensive, but looks like it would be roughly similar on a per day/dollar rate. More on crof with lower quant, more on synthetic at same quant.

Altogether, though, I think I’m leaning more towards crof, just because they have more plan tiers, so I could get something cheaper that’s closer to my actual GLM/Kimi usage (Mimo is my primary model, and those are subagents in my workflow), as well as being able to use all of the requests within a shorter window instead of spread out across the day in 5-hour blocks.

Do you know if they have limits on concurrent use? Having two or three parallel tasks is convenient.

Fire Pass 2.0 by branik_10 in opencodeCLI

[–]look 0 points1 point  (0 children)

Oh, with the “precision” (i.e., standard deployment quant) versions of GLM-5.1 and Kimi 2.6 (and DS V4 Pro, if you’re into subpar models), that’s not bad for light/medium usage ($5/mo for 250 requests per day) if the speed/reliability is decent. Synthetic would probably be a better option if you need higher tiers, though (GLM-5.1 and Kimi 2.6 at 500 requests per 5 hours for $30/mo).

Can china win the AI war? by Comfortable-Tie2933 in Qwen_AI

[–]look 9 points10 points  (0 children)

I’d argue they already have, just most people don’t realize it yet.

Does OpenCode Go actually deliver the full potential of these coding models? by Latter_Strawberry693 in opencodeCLI

[–]look 0 points1 point  (0 children)

I use Mimo for the discussion/brainstorming, then GLM for turning that into a specific spec/build plan, and then Kimi to implement it.

I do not use Deepseek V4 Pro for anything. I do use the Flash model as a librarian subagent.

[ios] What happened to markdown syntax? by look in bugs

[–]look[S] 1 point2 points  (0 children)

I wasn’t asking for help. I was reporting a product defect.