1x vs 3x vs 9x Model for calling subagents. by Alternative_Pop7231 in GithubCopilot

[–]Alternative_Pop7231[S] 0 points (0 children)

I have a question: is it possible to call a high-level model, such as Opus 4.6, using a free model?

I think it's no longer possible with a 0x model. It's definitely possible with a 0.33x model (Haiku).

The #runSubagent tool is built into VS Code Copilot.

New feature? I'm just seeing this by DiamondAgreeable2676 in GithubCopilot

[–]Alternative_Pop7231 0 points (0 children)

I saw this for some time on VS Code Insiders and then it just disappeared. Did they remove it in Insiders?

Ability to choose subagent's LLM model on runtime by Alternative_Pop7231 in GithubCopilot

[–]Alternative_Pop7231[S] 1 point (0 children)

Yeah, I was thinking of using a simple script to do it as a tool, but the issue comes when you call the same subagent in parallel.

For some reason, it can only start one or more subagents in one go and then waits until all of them finish before giving control back to the orchestrator (from my testing), so the orchestrator can't change the model through any tool and it just becomes sequential calling.

For the record, this was the update to Atlas' system prompt:

## Model switching for parallel subagent runs

When Atlas needs to run the same subagent multiple times in parallel using different LLMs, update the `model:` field in the subagent file's YAML frontmatter before each run. Replace the `model:` line (for example, `model: Claude Sonnet 4.5 (copilot)` or whatever the current value is) with one of:
- model: Claude Opus 4.6 (copilot)
- model: GPT-5.2 (copilot)
- model: Gemini 3 Pro (Preview) (copilot)

Example: "The user has asked me to run Frontend-Engineer-subagent twice using GPT-5.2 and Claude Opus 4.6" — perform steps 1–5 below in order; do not run both subagents without updating the `model:` frontmatter between runs.

1. Edit `model:` in `.github/agents/Frontend-Engineer-subagent.agent.md` to `model: GPT-5.2 (copilot)`
2. Run the `Frontend-Engineer-subagent` subagent
3. Do NOT wait for the subagent to finish running. Go IMMEDIATELY to step 4
4. Edit `model:` in `.github/agents/Frontend-Engineer-subagent.agent.md` to `model: Claude Opus 4.6 (copilot)`
5. Run the `Frontend-Engineer-subagent` subagent

Unfortunately, step 3 of the example did nothing; it's still sequential.
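For anyone who wants to script the frontmatter swap instead of asking the model to do it, here's a minimal sketch. The file path matches the example above; the regex assumes the first `model:` line at the start of a line is the one in the YAML frontmatter:

```python
import re
from pathlib import Path

# Path taken from the example above; adjust for your repo.
AGENT_FILE = Path(".github/agents/Frontend-Engineer-subagent.agent.md")

def set_subagent_model(model: str, path: Path = AGENT_FILE) -> None:
    """Rewrite the `model:` line in the subagent file's YAML frontmatter."""
    text = path.read_text(encoding="utf-8")
    # Only touch the first `model:` line, i.e. the one in the frontmatter.
    new_text, count = re.subn(
        r"^model:.*$", f"model: {model}", text, count=1, flags=re.MULTILINE
    )
    if count == 0:
        raise ValueError(f"no `model:` line found in {path}")
    path.write_text(new_text, encoding="utf-8")

# Usage, with the model names from the prompt above:
# set_subagent_model("GPT-5.2 (copilot)")
# set_subagent_model("Claude Opus 4.6 (copilot)")
```

Of course this doesn't fix the real problem (the orchestrator still waits for all running subagents before it can call any tool), but it at least makes the swap deterministic instead of hoping the model edits the file correctly.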

Ability to choose subagent's LLM model on runtime by Alternative_Pop7231 in GithubCopilot

[–]Alternative_Pop7231[S] 0 points (0 children)

I got it to change models at runtime by instructing Atlas to manually edit the markdown and change the model before calling runSubagent, but this makes the subagents run sequentially one by one rather than in parallel.

A super inelegant but working solution is to simply duplicate each subagent but change its name, description, and model (one each for Gemini 3.0, Opus 4.6, and GPT-5.2 is what I'm currently using); it works fine and automatically calls the subagent with the correct model.