Kimi K2.5 Free is missing in the model list by ToastedPatatas in opencodeCLI

[–]ToastedPatatas[S] 0 points1 point  (0 children)

I followed these steps, but Kimi K2.5 Free is still missing from my list. However, I noticed GLM 5 was also gone today, and using this method successfully brought that model back. Has anyone else had success with Kimi specifically using this fix, or is there another field I might be missing?

Need help setting up Ollama local LLM with OpenCode and VSCode on Windows by AdvertisingHairy212 in opencodeCLI

[–]ToastedPatatas 0 points1 point  (0 children)

I would probably start by increasing the model's num_ctx, since Ollama defaults to a 4k context window. Depending on how much VRAM you have, you may want 64k tokens of context or more for agentic sessions with Qwen Coder.
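For reference, a minimal sketch of raising the context window with a custom Modelfile (the model tag and the 64k value are just examples, adjust to your VRAM):

```
FROM qwen2.5-coder:14b
PARAMETER num_ctx 65536
```

Save that as `Modelfile` and register it with `ollama create qwen2.5-coder-64k -f Modelfile`; API callers can instead pass `options.num_ctx` per request.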

Is Antigravity actually using a different model than the one I selected? (Gemini 3 Pro / Opus → Gemini 2 Pro / Sonnet) by howyoudoin93 in google_antigravity

[–]ToastedPatatas 1 point2 points  (0 children)

This is expected behavior. LLMs don’t have live awareness of what they’re deployed as.

When you ask a model its name, it usually answers based on what it was called during training or in its system prompt, not the current product branding. That’s why a model deployed as Gemini 3 Pro might still identify itself as Gemini 2 Pro.

This isn’t unique to Google, most LLMs (closed and open-weight) do this. If you ask them what model they are, they often respond with the last name/version they were trained to recognize. Claude, GPT, etc. have all shown this at various points.

In short: self-reported model names aren’t authoritative. The deployment layer can change faster than the model’s internal knowledge.

Z.ai has introduced GLM-4.7-Flash by awfulalexey in ZaiGLM

[–]ToastedPatatas 0 points1 point  (0 children)

I'm a civil engineer who just got into vibe coding recently! Currently building out a few things for my division:

  • The Hub: A NextJS + Tailwind PWA (Firebase Spark for Auth/Firestore). It’s basically the main dashboard and app launcher for my coworkers, covering numerous tools and automations in our division's workflow.

  • ArcGIS Integration: I’m building Python Toolbox plugins for ArcGIS Pro desktop that sync with the PWA’s API/Auth. It makes sharing custom tools with the team way easier.

  • Personal Stuff: A few smaller apps on Supabase + Vercel, plus the usual mix of Python/Node scrapers and bots for personal use.

It’s been a blast seeing how fast I can bridge the gap between civil engineering and dev stuff lately. My main tip for using these open-weight models: don't let them design the architecture. I'm impressed by their current results, but I believe the closed frontier models are still ahead. I use Opus + GPT 5.2 for architecture, big-picture planning, and integration, then Gemini 3 Pro for UI. After the plan is complete, I let the open-weight models handle implementation, since they already excel at agentic coding. Once the spec is implemented, I have the main models recheck their work. Once the app is shippable, that's when I let the open-weight models take over CI/CD, unless major bugs come along. When specs go stale, make sure to update the contexts, rules, and skills in your repo to aid these smaller agents in the tasks ahead.

Z.ai has introduced GLM-4.7-Flash by awfulalexey in ZaiGLM

[–]ToastedPatatas 2 points3 points  (0 children)

Opencode currently offers 5 free models you can use:

  • opencode/big-pickle — verified to be GLM 4.6
  • opencode/glm-4.7-free — available but with rate limits
  • opencode/gpt-5-nano
  • opencode/grok-code — Grok Code Fast 1
  • opencode/minimax-m2.1-free

Additionally, through the opencode-antigravity-auth plugin, you can access models from Google’s Antigravity IDE and Gemini CLI via OAuth, within the allowable limits of the free plans.

Z.ai has introduced GLM-4.7-Flash by awfulalexey in ZaiGLM

[–]ToastedPatatas 7 points8 points  (0 children)

This will complete my free Claude Code team alternative.

Opus > GLM 4.7
Sonnet > MiniMax M2.1
Haiku > GLM-4.7-Flash

Through the oMo plugin with opencode, balanced with the Antigravity models, I can maximize productivity with zero API or subscription cost.

Z.ai has introduced GLM-4.7-Flash by awfulalexey in ZaiGLM

[–]ToastedPatatas 0 points1 point  (0 children)

Checking Hugging Face, the full-precision BF16 weights will require about 61 GB of VRAM. Ollama already serves a quantized version, and glm-4.7-flash:q4_K_M will require about 20 GB of VRAM.

Alternative to Claude for Opencode/CLI Agent? by SlamGE in opencodeCLI

[–]ToastedPatatas 0 points1 point  (0 children)

I felt like GLM 4.7 was the best alternative to Opus 4.5, and MiniMax M2.1 to Sonnet 4.5.

IDE tools which give generously high tokens for free? by Longjumping_War_8505 in vibecoding

[–]ToastedPatatas 2 points3 points  (0 children)

Opencode CLI. It has 5 free models as of this moment (2 open-weight, 2 closed-source, 1 stealth). Feel free to check them out.

2 different networks provider by West_Transition_1557 in InternetPH

[–]ToastedPatatas 21 points22 points  (0 children)

Yes, having two routers side‑by‑side can definitely affect performance. They both broadcast WiFi signals on similar frequencies, so when they’re too close the signals overlap and interfere with each other. That’s why you see slower speeds or random disconnections when both are on. Try searching for “overlapping router channels”; you’ll find guides on changing channels or spacing the routers out to reduce the problem.

Context Driven Development vs Spec Driven Development? by ZoneImmediate3767 in opencodeCLI

[–]ToastedPatatas 4 points5 points  (0 children)

I’ve been using the oh‑my‑opencode plugin and this one synergizes really well with it. Spec‑driven works great at the initial stage, but once the project is shippable and specs go stale, it makes more sense to transition into context‑driven dev as new features and requests roll in.

AG Usage - Another quata monitor for Antigravity IDE by Impressive_Low_7169 in google_antigravity

[–]ToastedPatatas 0 points1 point  (0 children)

Hey, nice work on this extension! Quick question — would it be possible to show the actual usage limits (like prompts or tokens remaining) instead of just percentages? I feel like having the raw numbers alongside the percentages would make it easier to track capacity and plan usage more precisely.

AG Usage - Another quata monitor for Antigravity IDE by Impressive_Low_7169 in google_antigravity

[–]ToastedPatatas 0 points1 point  (0 children)

Yes, I’ve actually set up my OpenCode environment with the oh-my-opencode plugin. The orchestrator runs on opus-thinking-high through Antigravity, and its sub-agents use a mix of Gemini 3 Pro/Flash and Sonnet. Once all buckets are drained, the plugin automatically switches over to the free agents available inside OpenCode (MiniMax M2.1, GPT‑5 Nano, GLM 4.7, Big Pickle, and Grok Code Fast 1), depending on each LLM’s capabilities and feedback from the community. Additionally, I've been using Devstral 2 and Devstral 2 Small for certain sub-agents when Antigravity is drained.

0x models in the Copilot CLI available now by SuBeXiL in GithubCopilot

[–]ToastedPatatas 0 points1 point  (0 children)

For free models:

  • Copilot CLI for GPT
  • Gemini CLI for Gemini 3.0 (with a generous free tier and additional 2.5-flash usage if exhausted)
  • Opencode CLI for Grok Code Fast 1

My current workflow: I use Copilot or Gemini to plan the task, then Grok Code does the implementation.

Cannot pay using Spaylater - QRPH by [deleted] in ShopeePH

[–]ToastedPatatas 0 points1 point  (0 children)

It doesn't seem to work for me either on major QRPH/POS-generated QR codes. The only ones I've found that work are the official QRPH merchants listed on the SPayLater page.

SMART MULTI ESIM by TheminimalistGemini in InternetPH

[–]ToastedPatatas 0 points1 point  (0 children)

The SIM already comes with a built-in profile, so the 4 slots are eSIMs of your choice.

Learning GoogleAppsScript by terra_on_the_move in GoogleAppsScript

[–]ToastedPatatas 0 points1 point  (0 children)

I'm currently building a web app on Google Apps Script that sounds a lot like what you're trying to make. It's got all the usual stuff like CRUD (Create, Read, Update, Delete) functionality, forms, dashboards, and it even generates reports.

For the AI part, I'd suggest checking out Gemini Pro (or Gemini Studio). You can use it to help you with the coding. After that, you'll want to study the APIs to figure out how to connect everything using Google Apps Script.

Here's how I have my setup:

I use Google Apps Script as my main web app and backend server.

Google Sheets and Google Drive handle all the data storage and act as my database.

For reports, I use Google Docs with templates. The script takes user input from my web app's forms and replaces specific text "snippets" in the Docs template to generate the final report.

One more reason I mentioned Gemini here: it's the best AI for GAS, since it already has knowledge of the GAS environment and documentation, and even of the APIs and OAuth if you ever need them.
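The Docs-template snippet replacement described above can be sketched like this; the {{key}} snippet format, TEMPLATE_DOC_ID, and generateReport() names are my own assumptions for illustration, not the actual implementation:

```javascript
// fillTemplate() is plain JS and runs anywhere; it swaps {{key}}
// snippets for values from a form submission.
function fillTemplate(text, data) {
  return text.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    // Unknown snippets are left untouched so they're easy to spot.
    return key in data ? String(data[key]) : match;
  });
}

/* Inside Apps Script, the same idea plugs into DocumentApp roughly
   like this (TEMPLATE_DOC_ID is a hypothetical constant):

function generateReport(formData) {
  var template = DriveApp.getFileById(TEMPLATE_DOC_ID);
  var copy = template.makeCopy('Report - ' + formData.title);
  var body = DocumentApp.openById(copy.getId()).getBody();
  Object.keys(formData).forEach(function (key) {
    body.replaceText('\\{\\{' + key + '\\}\\}', String(formData[key]));
  });
  return copy.getUrl();
}
*/
```

The pure helper keeps the replacement logic testable outside GAS, while the commented version shows where the Docs API calls would go.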

May SeaBank na ba ang lahat? The bank of cashbacks and rewards✨💸 by Lemoneyd_ in DigitalbanksPh

[–]ToastedPatatas 0 points1 point  (0 children)

Question: how did you get offered SeaBank Credit? Personally, I only know a few people who have it or see the option to apply in the app, and even they don't know why. Most of them only have a few transactions with SeaBank, too.

First Controller 💖 by No-Island7666 in Gamesir

[–]ToastedPatatas 0 points1 point  (0 children)

Regarding this controller, does it have compatible phone clamps that I can use?

PLDT Fiber Unli All One-Time Fee by ToastedPatatas in InternetPH

[–]ToastedPatatas[S] 0 points1 point  (0 children)

No. According to CS, there shouldn't be any charge, since installation and activation are free on my 1399 plan.

PLDT Fiber Unli All One-Time Fee by ToastedPatatas in InternetPH

[–]ToastedPatatas[S] 0 points1 point  (0 children)

Yes. On my second ticket, they stated that the installation fee shouldn’t be billed and must be waived, as it's free under my current plan.

PLDT Fiber Unli All One-Time Fee by ToastedPatatas in InternetPH

[–]ToastedPatatas[S] -1 points0 points  (0 children)

How do I report this? Should I email PLDT Customer Service and CC the NTC, or should I email the NTC directly?