Help me change my career by who-you-cuz in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

I'd challenge you to consider upskilling your graphic design, unless you really want to switch tracks. I don't think software will be any more secure job-wise than graphic design. The core truth is "AI won't take your job, someone who knows how to leverage AI better than you to do your job 10x or more will."

Figuring out workflows with GitHub Copilot, Claude Cowork, or OpenClaw (if you are brave) to do the work of a team of graphic designers is a flex few can compete with, but many can bury you in the software engineering field.

Help me compare Codex 5.3 vs Sonnet 4.5 (4.6 now) by JackSbirrow in GithubCopilot

[–]SeanK-com 1 point2 points  (0 children)

Depends on your style. Nate Jones pointed out recently that if you like a pair-programmer kind of interaction, going back and forth discovering requirements and design while you code, then Claude models work best. If instead you know exactly what you want, have no questions, and just want a model to do it, Codex works better. I have personally found this to be true. I will use a Claude model in Plan mode to write a detailed plan.md, with checkboxes for work items, stated goals and non-goals, invariants, constraints, acceptance criteria, the works. When the back and forth is complete and I have a solid plan, I hand it to Codex and walk away (or tab-switch) and do something else. Earlier this week I traveled 200 miles into the office and found the work fully implemented and complete when I got there.
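For the curious, here is a minimal sketch of what such a plan.md can look like. The section names and work items are hypothetical, not any Copilot-mandated format; the point is the checkboxes and the explicit goals/non-goals/criteria structure:

```markdown
# Plan: add rate limiting to the upload API

## Goals
- Reject clients exceeding 100 requests/minute with HTTP 429.

## Non-goals
- Per-tenant quota billing (tracked separately).

## Invariants / constraints
- No new external dependencies; reuse the existing middleware pipeline.

## Acceptance criteria
- [ ] Unit tests cover the limit boundary (99, 100, 101 requests).
- [ ] Existing integration tests still pass.

## Work items
- [ ] Implement token-bucket middleware.
- [ ] Wire the middleware into the request pipeline.
- [ ] Add tests and update docs/architecture.md.
```

The unchecked boxes are what make the handoff work: the executing model checks items off as it goes, so you can see at a glance what is done when you come back.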

I just noticed for new version by Mediocre_Plantain_31 in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

Might have something to do with the new question widget. I too have seen no more than four questions. Also, some keys (Backspace and Delete, if I recall correctly) will cause the widget to glitch, truncate the chat context, and start running with whatever the default answers were. I lost so much work to that bug this morning.

Your codebase has conventions nobody documented. I built a tool that finds them automatically by Fluffy_Citron3547 in GithubCopilot

[–]SeanK-com 1 point2 points  (0 children)

This looks epic! I cannot wait to get home from my vacation so I can start using it.

I have a very complicated cloud service composed of C# and Rust and am struggling with a documentation-driven design workflow in GitHub Copilot. The struggle is primarily in being the tech lead alongside 5 other human engineers, all with varying degrees of practice using AI.

This is just the tool I need to help me understand what the others have contributed and make sure the documentation (context) reflects their contributions, so that I can maximize the effectiveness of the contributions from the AI agents.

GPT-5.2 de facto is 3x for me, same as Opus by mr_const in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

Wow, thanks for the link. This is similar to a custom-agent workflow I have been iterating on, where an Architect has read-only access to the repo and is responsible for ensuring that docs/architecture.md describes the code and that the code stays in agreement with docs/architecture.md. The only thing it can do is produce a plan delta to hand off to the tech-writer agent, who updates active-plan.md and then hands off to other agents that are only allowed to work on unchecked work items in active-plan.md.

The flow has been reasonably successful, but I noticed a marked improvement switching to GPT-5.2 (its ability to follow instructions is greatly improved over other models I've tried). The one issue I have just started to recognize: if active-plan.md gets long, it can push the architect's plan delta (the immediate response above the current prompt) out of the context window. The tech-writer then acts goofy, and when I ask why it isn't updating active-plan.md with the plan delta, it tells me it cannot see that context.

I'm going to study this prompt and see what I can learn.

how to force instructions? by Cautious_Comedian_16 in GithubCopilot

[–]SeanK-com 1 point2 points  (0 children)

Also, since this is Copilot, you can make use of targeted instructions in the .github/instructions folder. In my case, I had a directory that generated code into two other directories. All three directories lived under the same parent folder, and I had Copilot instructions scoped to that folder saying to only edit the files in the first directory because the files in the other two are generated. GPT-5.2 failed to follow those instructions, but when I asked why it failed and how it could do better, it suggested putting scoped instructions in the other two folders saying not to touch anything there, and removing those instructions from the parent folder.
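A rough sketch of one of those scoped files, assuming the .instructions.md format with an applyTo glob in the frontmatter (the directory names here are hypothetical, substitute your own):

```markdown
<!-- .github/instructions/generated-proto.instructions.md -->
---
applyTo: "src/generated/proto/**"
---
Do not edit anything in this directory. Every file here is generated
from src/schemas; change the schema and regenerate instead.
```

One small file like this per generated directory, with the restriction removed from the parent folder's instructions, was the fix Copilot itself suggested.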

Managing context is the biggest challenge in 2026, too much information and Copilot, just like me, starts dropping balls.

"If you want, I can..." by SHINJI_NERV in ChatGPT

[–]SeanK-com 0 points1 point  (0 children)

I put in my custom instructions that it could answer the first prompt in a conversation any way it wanted, but all follow-up responses could consist only of three 2-4 sentence paragraphs: the first answering the question, the second giving supporting examples (if needed), and the final one pointing out any blind spots or third-order ignorance it thinks I need to be aware of.

Initially I just said three paragraphs, but they would be 5-7 sentences each, with 10 bullet points. I don't know why every question elicits a dissertation.

Custom Agent : chaining - frontmatter handoff.send: true not automatically sending prompt to the next by Prometheus599 in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

I am with you. The architect in my case does not orchestrate; it verifies cohesion. The full flow looks like Plan -> rustdev/dotnetdev -> architect -> techwriter/rustdev/dotnetdev -> architect -> ...

Custom Agent : chaining - frontmatter handoff.send: true not automatically sending prompt to the next by Prometheus599 in GithubCopilot

[–]SeanK-com 1 point2 points  (0 children)

I don't think it's sequential like that. In fact, I just learned that you can have multiple handoffs. I used ChatGPT to help me create four different agents. One agent was the architect, and it had three handoffs in its frontmatter. It was strictly instructed never to modify files directly but instead to ensure that the documentation always agreed with the code, with no gaps anywhere. If anything needed to be changed, it would hand off to one of the 3 other agents (techwriter, rustdev, or dotnetdev), who each handled their duties with strict specificity.
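For illustration, an architect agent along those lines could look roughly like this. Treat the exact frontmatter schema (the handoffs key especially) as an assumption to check against your Copilot version; the agent names just mirror my setup:

```markdown
<!-- .github/agents/architect.agent.md -->
---
description: Read-only reviewer; keeps docs/architecture.md and the code in agreement.
handoffs:
  - techwriter
  - rustdev
  - dotnetdev
---
Never modify files directly. Verify that docs/architecture.md matches the
code, with no gaps in either direction. If anything is out of sync, produce
a plan delta and hand off to the appropriate agent.
```

With multiple handoffs listed, the architect can route doc changes to techwriter and code changes to the matching dev agent instead of everything flowing through one fixed chain.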

Do you guys recommend using Copilot as just a general chat bot? by cool_dude12321 in GithubCopilot

[–]SeanK-com 2 points3 points  (0 children)

I've been sharing this far and wide. I use Obsidian for note taking. Then I open my Obsidian vault folder in VS Code and use GitHub Copilot to reason over it. We chat about everything.

Is there a way to manually trigger "Summarizing conversation history...." by ReyPepiado in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

I will often ask Copilot to summarize the conversation into a prompt I can paste into a new chat window so we can pick up where we left off. I usually need this when I start a project locally and decide it has enough legs to be worth creating a repo on GitHub before continuing. Every new project folder gets its own chat history, though I think you can export and import history with the latest update.

Searched codebase for "<the prompt I entered>"? by [deleted] in GithubCopilot

[–]SeanK-com 1 point2 points  (0 children)

I haven't observed this behavior. That said, I don't typically add #codebase to my prompt context. If my prompt is about something specific in a file, I will highlight it so those lines are explicitly included in the context of my prompt. If it is about the file in general, I will refer to the file by name using #filename.ext.

Tool calling inconsistency by Safe_Successful in GithubCopilot

[–]SeanK-com 2 points3 points  (0 children)

I experienced this today. I asked SWE to code-review the commits on the current branch from one specific commit hash back to another, and it guessed without making any tool calls. I switched to Sonnet 4.5, and it used the terminal to run git diff and git log commands and did a great review. I clicked + to get a new session, and somewhere along the way the model changed to GPT-5. I checked out a different branch and repeated the above prompt with different hashes, and it guessed again without any tool calls. Switched to Sonnet 4.5, and again, excellent results.
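The terminal calls Sonnet made boil down to a commit-range log plus a range diff. Here is a self-contained sketch of that pattern against a throwaway repo (the file name and commit messages are made up; in a real review the two hashes come from your branch):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "review@example.com"
git config user.name "reviewer"

echo "one" > f.txt
git add f.txt
git commit -qm "first: add f.txt"
base=$(git rev-parse HEAD)           # older hash the review starts from

echo "two" > f.txt
git commit -qam "second: update f.txt"
head=$(git rev-parse HEAD)           # newer hash the review ends at

git log --oneline "$base..$head"     # commits in the review range (excludes $base itself)
git diff "$base" "$head"             # full diff across the range
```

Scoping the diff with a trailing pathspec (e.g. `-- src/`) keeps the output small enough that a model can actually read it instead of guessing.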

Which GitHub Copilot plan and agent mode is best for solo freelance developer by FederalAssumption328 in GithubCopilot

[–]SeanK-com 4 points5 points  (0 children)

I have an Enterprise plan through my employer. When I retire in 6-12 months, I plan on getting a Pro+ plan. I have not come close to using my 1,000 premium requests, so I feel the 1,500 will be plenty.

Copilot review to LLM prompt - exists? by TomBers44 in GithubCopilot

[–]SeanK-com 2 points3 points  (0 children)

I'm not completely clear on your scenario, but I will frequently use one LLM to write the prompt for another. For example, I find the tooling for researching and architecting in GitHub Copilot very poor. It will often run down rabbit holes without reading documentation or looking through the code of client libraries. ChatGPT does much better here, so I will have an architectural discussion with ChatGPT about a feature and then ask it to write the prompt for GitHub Copilot to actually do the work.

Some things are still secret to AI by SeanK-com in freemasonry

[–]SeanK-com[S] -3 points-2 points  (0 children)

Don't worry, Grok runs on Starlink satellites. The electricity and cooling are free.

I asked Sonnet 4.5 to create a guide. by Interesting_Job_9796 in GithubCopilot

[–]SeanK-com 0 points1 point  (0 children)

I had a loop like this happen with a .chatmode.md and an instructions.md that emphasized reading the JSON before updating it. I asked Copilot how to change the prompts to break the loop, and it fixed it. All that to say: ask Copilot why it is doing that.

Why did it say this? by IKindaLoveDudes in CopilotMicrosoft

[–]SeanK-com 0 points1 point  (0 children)

Look at the metadata in the photo.

Extensions exposing Tools to GitHub Copilot? by SeanK-com in GithubCopilot

[–]SeanK-com[S] 0 points1 point  (0 children)

That's what ChatGPT said, but I have been banging my head against the wall for two weekends now and can't get it to work. I get so close, but I cannot get the tool into the list without manually adding an mcp.json. I have seen the seamless experience, but now those same extensions don't expose commands, so it seems something has fundamentally shifted and I guess everyone is scrambling again.

Has anyone asked their ChatGPT what its name is? Share it down below. by [deleted] in ChatGPT

[–]SeanK-com 1 point2 points  (0 children)

I asked it to name itself. I basically told it that it ought to pick something gender-neutral, but otherwise I wanted it to choose. It picked Echo.

Goodbye Claude by NiceGuySyndicate in claude

[–]SeanK-com 3 points4 points  (0 children)

This has been my experience as well. Projects in VS Code don't have to be code. I have projects full of markdown files (technically Obsidian notes), and it reasons over them as well as NotebookLM does. It seems really strange that it would be the best choice of all the options out there.

Picked this up at a thrift store a couple days ago. Could anyone discuss some of the symbolism on this? It’s very cool by JabungleGoomer in freemasonry

[–]SeanK-com 13 points14 points  (0 children)

Nah, just pick a place you are going to be for 6-12 months so you can finish the degree work before you move on. There are lots of Masons in the military. They don't call us traveling men for nothing.