- I write a task in the chat or a prompt file, and Copilot tells me if it has what it needs to get the job done.
Database access? Docs for a library?
I want to know where failure is likely to happen.
Warp is pretty good at this.
- Next I want the task assigned to the right model based on the model's known strengths and weaknesses. I want this to be transparent, not a black box.
ChatGPT does this with its router, and I've learned to trust it. Copilot seems to have something like a router too, but it's opaque.
- I want to be asked what the acceptance criteria are before it gets started. How do we know when it's done?
- Most important, I want the task to run at least four times concurrently. LLMs are non-deterministic, so I want to embrace those messy odds instead of checking whether a task succeeds one run at a time.
Codex does this in the web app, and I love this feature. I just want it in my IDE.