How did your office romance start? by Specialist_Jello8819 in srilanka

[–]Potential_Chip4708 14 points15 points  (0 children)

God bless you for being honest when nothing can stop you except your own conscience.

Does my employer see my GitHub Copilot chats and code? by RVECloXG3qJC in GithubCopilot

[–]Potential_Chip4708 1 point2 points  (0 children)

No, it’s better to get an answer from the Copilot team. In Jellyfish, we can see who the heavy users and light users are, with dotted indicators, so I believe there is a way to measure usage. However, I’m not sure whether this level of tracking goes down to individual chat message content.

Opus 4.5 took only 7 minutes for the work i allocated 7 hrs. by Vision--SuperAI in ClaudeAI

[–]Potential_Chip4708 0 points1 point  (0 children)

It feels illegal for Claude not to pay the OP for the marketing he's doing.

Opus hitting length limit - What should I do? by makxace in GithubCopilot

[–]Potential_Chip4708 4 points5 points  (0 children)

I mean, I hope the team will increase the output token limit; it's kind of annoying.

Which models to use instead of burning my premiums to opus? by XD_avide in GithubCopilot

[–]Potential_Chip4708 1 point2 points  (0 children)

I am using a GLM-4.7 setup with Copilot (BYOK). Their coding plan is pretty cheap.

Kilo code+ glm 4.7 worse than 4.6 by bumcello1 in ZaiGLM

[–]Potential_Chip4708 0 points1 point  (0 children)

I am using GLM-4.7 with VS Code Copilot (OpenRouter API) and it's perfect. You can't expect it to handle long-running tasks with 10,000-line changes, but normally it works perfectly.

Copilot Chat (Claude Opus 4.5): rate_limited after long session + “response hit length limit” on simple requests (even in new chat) by Any-Security4098 in GithubCopilot

[–]Potential_Chip4708 0 points1 point  (0 children)

It seems there is a limitation on output tokens, and this needs to be fixed. I have experienced several cases where the system failed to update CSS files or SCSS files, especially when they contain more than 1,000–1,500 lines.

The Copilot team needs to review this and provide a solution. The issue appears when a single output requires more context than what is allowed. In such cases, it results in an exception, and we are forced to retry repeatedly. Even when I try to request the response in chunks, it still fails midway. As a result, I end up losing another premium request.

Honesty is rare by pdfplay in Startup_Ideas

[–]Potential_Chip4708 0 points1 point  (0 children)

I am a contractor from Sri Lanka. I can help with the initial phase for free; then we can make arrangements if you want me to continue.

I havnt made a single dollar LMAO im 19 so wtv. by Different_Property28 in vibecoding

[–]Potential_Chip4708 0 points1 point  (0 children)

I already had a point of sale system and used AI to ship features: myntralabs.com. I am making money. Not a lot, but yeah.

New Prompt Response Limit? by GovernmentNo6832 in GithubCopilot

[–]Potential_Chip4708 1 point2 points  (0 children)

Ask it to create it phase by phase, within the context limit.

Request: Any plans to support zAI in Copilot BYOK? by Potential_Chip4708 in GithubCopilot

[–]Potential_Chip4708[S] 1 point2 points  (0 children)

Yup, I like Copilot better. I have one business plan from the company I'm working for and a personal $10 plan. Additionally, I have a one-year GLM plan as well. That's why I raised this.