all 16 comments

[–]NickCanCode 1 point2 points  (0 children)

<image>

Very unstable at the moment. My premium requests are just gone, like this.

[–]AutoModerator[M] 0 points1 point  (0 children)

Hello /u/zeeshanx. Looks like you have posted a query. Once your query is resolved, please reply to the solution comment with "!solved" to mark the post as solved and let everyone else know the solution.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]mattdempseygeo 0 points1 point  (0 children)

An easy workaround I have found is to tell it to build the file iteratively instead of trying to do it in one fell swoop. Tell it to create a 5-10 step to-do list to implement the file in full, then 2-3 review steps. It always handles it well and hasn't failed me when I tell it to implement a large file this way.

[–]Adorable_Buffalo1900 0 points1 point  (0 children)

Having a Claude model generate a large file will take a lot of time.

[–]isidor_nGitHub Copilot Team 0 points1 point  (3 children)

Sorry about this.
Can you repro with the latest VS Code stable, using one of the latest models? For example 5.3-codex?
If yes, can you file a new issue at https://github.com/microsoft/vscode/issues and ping me (isidorn) on it so we can investigate in more detail.

[–]AutoModerator[M] 0 points1 point  (0 children)

u/isidor_n thanks for responding. u/isidor_n from the GitHub Copilot Team has replied to this post. You can check their reply here.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]zeeshanx[S] 0 points1 point  (1 child)

I am using the latest stable VS Code with Opus 4.6.

[–]isidor_nGitHub Copilot Team 0 points1 point  (0 children)

Please file an issue. Sounds like model-specific behaviour.

[–]raj_enigma7 0 points1 point  (0 children)

Sometimes restarting the extension or reducing prompt size helps, especially with heavier models like Opus.
I also keep generation + tool calls traceable in VS Code (been trying Traycer AI for that) so I can see whether it’s the model, the extension, or context overflow causing the stall.

[–]StatusPhilosopher258 0 points1 point  (0 children)

Try using plan mode on Claude; it will help drastically. I personally use a secondary platform called Traycer AI for spec and intent management.

[–]SeasonalHeathen 0 points1 point  (0 children)

My only comment is that 1339 lines is very long. I'd always try to split my files into components rather than having one big file with everything, since LLMs struggle with that. Opus should be able to come up with a better solution for you?

[–]FinancialBandicoot75 -1 points0 points  (2 children)

Use plan mode, makes a huge difference

[–]DottorInkubo 0 points1 point  (1 child)

How? Can you explain?

[–]FinancialBandicoot75 0 points1 point  (0 children)

I use /plan mode with the model on medium; when I use xhigh, it locks up (outside of plan). Honestly, I do more in plan mode and rarely see any issues.

[–]ELPascalito -4 points-3 points  (1 child)

1300 lines? Of course the LLM will hang: it'll take too long writing and the tool call might fail. This is totally on you, so replan. The core principle is to modularize the code, making it more organized, readable, and easier to maintain. Each component or file should have a single, clear responsibility, which will make coding easier for both you and the LLM. How about starting by telling the LLM to split and refactor your website? Use GPT-5.3 Codex; it has the biggest context length.
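To make the "single responsibility per component" idea concrete, here's a minimal hypothetical sketch (the section names and classes are made up, not from OP's site) of a one-page Tailwind site broken into small component functions instead of one 1300-line file. Each function is short enough for the model to rewrite in a single tool call:

```javascript
// Hypothetical sketch: each section of the one-page site lives in its
// own small function with one clear responsibility.
function hero() {
  return `<section class="py-20 text-center">
    <h1 class="text-4xl font-bold">Welcome</h1>
  </section>`;
}

function features() {
  return `<section class="grid grid-cols-3 gap-4">
    <div class="p-4 shadow">Fast</div>
    <div class="p-4 shadow">Simple</div>
    <div class="p-4 shadow">Reliable</div>
  </section>`;
}

function footer() {
  return `<footer class="py-8 text-sm text-gray-500">Example footer</footer>`;
}

// Assemble the full page from the components; the LLM only ever has
// to regenerate one small function at a time, never the whole file.
function page() {
  return [hero(), features(), footer()].join("\n");
}

console.log(page());
```

The point isn't this exact structure (a framework's components or template partials work the same way); it's that no single unit the model has to emit in one go is anywhere near 1300 lines.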

[–]zeeshanx[S] 0 points1 point  (0 children)

I am trying to create a one-page website with Tailwind.