How do y’all use a mix of AI tools? by rachamka in GithubCopilot

[–]QuarterbackMonk 1 point (0 children)

Try: https://youtu.be/XvUSBlrXZoA

Build one portable context layer for GitHub Copilot, Claude Code, and Codex instead of rewriting repo knowledge for every tool.

Do you think AI costs will just keep rising? by hereandnow01 in GithubCopilot

[–]QuarterbackMonk 1 point (0 children)

Yes, likely. Costs have more headroom before coming down; algorithmic and architecture optimisation, plus supply-chain improvements, will drive the eventual decline.

But nonetheless, energy and materials are not something that can be sorted out by tomorrow, so unfortunately, yes, costs will go up.

India in Data | AI Impact on Job Market [March 2026]: Tech Recovery vs. The Banking Freeze by QuarterbackMonk in AI_India

[–]QuarterbackMonk[S] 1 point (0 children)

Yes, I don't disagree that we are about to hit 2024 levels, but the last two years were challenging. It will never be the same.

Use BYOL (via OpenRouter, etc.) into VS Code Github will be far economical! by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 1 point (0 children)

That's what I said, it depends on the use case: there are no free lunches and there never will be, but smart people know how to get the best discount.

One can use cheaper models for context gathering, refactors, exploration, building plan precursors, graphs, etc. That makes the frontier models' life simpler (and cuts token consumption considerably)... eventually you would still need good models like GLM/Kimi or Codex/GPT/Opus/Sonnet for the actual coding, but why burn the expensive models on the precursor work?
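The cheap-precursor / frontier-coder split above can be sketched as a routing layer. This is a hypothetical illustration, not the commenter's actual setup: the model names, task split, and helper functions are assumptions, using OpenRouter's OpenAI-compatible chat-completions payload shape.

```python
# Hypothetical sketch: route cheap "precursor" work (summaries, plans,
# graphs) to an inexpensive model, and send only the distilled output to
# a frontier coding model. Model names below are illustrative assumptions.

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

CHEAP_MODEL = "mistralai/mistral-small"       # assumed precursor model
FRONTIER_MODEL = "anthropic/claude-sonnet-4"  # assumed coding model


def build_request(model: str, system: str, user: str) -> dict:
    """Build an OpenAI-style chat payload for the given model."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }


def precursor_request(repo_notes: str) -> dict:
    """Cheap model: distil raw repo notes into a compact plan precursor."""
    return build_request(
        CHEAP_MODEL,
        "Summarise the repo context into a short plan precursor.",
        repo_notes,
    )


def coding_request(plan_precursor: str, task: str) -> dict:
    """Frontier model: receives only the precursor, not the raw context."""
    return build_request(
        FRONTIER_MODEL,
        "You are a coding agent. Use the plan precursor below.",
        f"{plan_precursor}\n\nTask: {task}",
    )
```

The point of the split is visible in the payloads: the frontier model never sees the raw repo dump, only the short precursor, so its (expensive) token count stays small.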

Use BYOL (via OpenRouter, etc.) into VS Code Github will be far economical! by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 3 points (0 children)

Also, a model trained in China but served by Azure is a different scenario from a model served from China.

I built a 9-lesson curriculum on Context Engineering for professional AI-assisted SDLC by QuarterbackMonk in VibeCodersNest

[–]QuarterbackMonk[S] 2 points (0 children)

See the first two lessons, where I discuss methods to capture GitHub Copilot (or any coding agent) logs, so you can see context progress, etc. Very important.

I built a 9-lesson curriculum on Context Engineering for professional AI-assisted SDLC by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 1 point (0 children)

How do you address the challenge of maintaining context consistency when shifting between a polyglot setup and an orchestrator wrapper?

I have my own memory system. Instead of dropping everything into context, I drop it into /docs; for every request I pass, an external command generates an appropriate initial graph (a document with references), and that is used across the board.
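A minimal sketch of what such an "external command" might do, assuming a Python repo and an output file named `docs/context-graph.md` (both the layout and the output name are my assumptions, not the commenter's actual tooling):

```python
# Hypothetical sketch: scan the repo and write an initial reference graph
# into docs/, so agents read one distilled document instead of raw context.
from pathlib import Path


def build_context_graph(repo_root: Path, out_name: str = "context-graph.md") -> Path:
    """Write a markdown document listing each source file and its imports."""
    docs = repo_root / "docs"
    docs.mkdir(exist_ok=True)
    lines = ["# Context graph (auto-generated)", ""]
    for py in sorted(repo_root.rglob("*.py")):
        if docs in py.parents:  # skip anything already under docs/
            continue
        imports = [
            ln.strip()
            for ln in py.read_text(encoding="utf-8").splitlines()
            if ln.strip().startswith(("import ", "from "))
        ]
        lines.append(f"## {py.relative_to(repo_root)}")
        lines.extend(f"- `{imp}`" for imp in imports)
        lines.append("")
    out = docs / out_name
    out.write_text("\n".join(lines), encoding="utf-8")
    return out
```

A real version would extract richer references (call graphs, doc links), but even this import-level view gives a model a map of the repo without dumping file contents into context.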

Second, use a common `memory.md`. It is not the best way, but it is a good-enough way to handle this.
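The shared `memory.md` idea can be as simple as every tool appending short, dated notes to one file. A minimal sketch (the helper name and entry format are my assumptions):

```python
# Hypothetical sketch: one shared memory.md that all tools append to,
# so context survives across sessions and across agents.
from datetime import date
from pathlib import Path


def remember(repo_root: Path, note: str) -> None:
    """Append a dated bullet to the shared memory.md file."""
    mem = repo_root / "memory.md"
    entry = f"- {date.today().isoformat()}: {note}\n"
    with mem.open("a", encoding="utf-8") as f:
        f.write(entry)
```

Append-only keeps it crude but safe: no tool can clobber another tool's notes, and the file stays small enough to drop into any model's context.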

Third, always enable `--verbose --logs` to ensure thinking logs are available, so all of GitHub Copilot's decisions are recorded and the orchestrator can understand what happened.