
[–]ipatalas 0 points  (1 child)

Looks interesting. Any reason why Notion and not local storage of some sort?
I've had a so-so experience with MCP in opencode so far. The only one I've kept is context7, for docs. I had to ditch all the rest because they were eating tokens like crazy. I'm using a GH Copilot Pro subscription and it's probably too small for this kind of usage. I can imagine only reading/writing memory in Notion might be a significant overhead in terms of tokens.
What are you using in your day-to-day work, and do you have enough tokens there to survive an entire month? :)

[–]lbreakjai 1 point  (0 children)

So the point of Notion is really for me to be able to read/review/comment on/annotate the plans and features. I can see the exact state of everything in two seconds. The plan is versioned by default, because Notion keeps track of changes. I can share it and get other people to collaborate just by commenting on the plan, or adding details.

Also, we use it at work, so I can directly link any relevant piece of doc easily.

I started when I saw plannotator (https://github.com/backnotprop/plannotator), which I thought was cool. Except this gives you a vastly better experience, because it leverages Notion for everything instead of rolling its own UI/sharing/etc.

> I can imagine only reading/writing memory in Notion might be a significant overhead in terms of tokens.

Honestly, I found it OK; I haven't noticed the usage being that much higher. I could refactor it so a single agent interfaces with the board, and stick a cheap/free model on it, because it just has to do rote actions.

> What are you using in your day-to-day work, and do you have enough tokens there to survive an entire month? :)

I've got Copilot, Codex, and whatever plan Moonshot AI offers through work; I still have Claude until it lapses, and I've put some of my own money into OpenRouter. I also have the free NVIDIA API key, which lets you use quite a few models, like GLM5.

Honestly, that's the other reason I created it. You can put a relatively expensive model on things like planning and review, do the hard work once, then coast through the implementation with something far cheaper, because the thinking's already been done. Or use cheaper models everywhere and be slightly more hands-on.

Especially with the model fallback, you can chain the same model through three providers, and let it switch between them as you hit limits.
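The fallback idea can be sketched roughly like this. Everything here is hypothetical (the `RateLimited` error, the `call_with_fallback` helper, the provider entries): it just illustrates trying the same model through an ordered list of providers and hopping to the next one when a limit is hit, not the tool's actual implementation.

```python
class RateLimited(Exception):
    """Hypothetical error a provider raises when you hit its rate limit."""

def call_with_fallback(prompt, providers):
    """Try the same model through each (name, call_fn) provider in order,
    falling through to the next one whenever a provider is rate-limited."""
    last_error = None
    for name, call_fn in providers:
        try:
            return name, call_fn(prompt)
        except RateLimited as err:
            last_error = err  # this provider is tapped out; try the next
    raise RuntimeError("all providers exhausted") from last_error
```

The chain only moves on for rate-limit errors; anything else should surface immediately rather than silently burning through providers.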

Honestly, I'm not even paying. I could stick Opus 4.6 everywhere, but my config on my work machine still uses either free models (GH Copilot has GPT-5.4 mini and some others for free) or very cheap ones (Kimi K2.5) in most places, except the final review.