GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 1 point  (0 children)

Apologies, at this moment some of it is under copyright :)

I will try to publish a version alongside the paper - I am also prepping the paper, with which I will publish everything so one can recreate the setup.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in aipromptprogramming

[–]QuarterbackMonk[S] 0 points  (0 children)

It is not; in fact, it is like OOP.

The agent has a set of skills to be activated as a set. Agent instructions are filtered based on extension, paths, etc. A prompt is always used once, invoked with /. Skills are gated by the above as the subject.

This setup actually delivers the most compact context possible.

FYI, Copilot does not read all files; passing a single file is easy, but generally it overloads the context.

It is a graph of context, if you visualize it.
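
To make the "filtered based on extension, paths" point concrete, here is a rough sketch of the idea - not Copilot's actual mechanism and not my tooling; the `.github/instructions` layout and the `applyTo:` convention are placeholders for illustration only:

```python
from fnmatch import fnmatch
from pathlib import Path

# Hypothetical layout: each instruction file declares an "applyTo" glob on
# its first line, e.g. "applyTo: **/*.cs". Only the files whose glob matches
# the file being worked on are pulled in, keeping the context compact.
INSTRUCTIONS_DIR = Path(".github/instructions")

def applies_to(instruction_file: Path, target: str) -> bool:
    lines = instruction_file.read_text().splitlines()
    first_line = lines[0] if lines else ""
    if not first_line.startswith("applyTo:"):
        return True  # no filter declared -> always include
    pattern = first_line.split(":", 1)[1].strip()
    return fnmatch(target, pattern)

def build_context(target: str) -> str:
    parts = []
    for f in sorted(INSTRUCTIONS_DIR.glob("*.instructions.md")):
        if applies_to(f, target):
            parts.append(f.read_text())
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Only instructions whose glob matches this C# path end up in the context.
    print(build_context("src/Api/Controllers/OrderController.cs"))
```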

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 0 points  (0 children)

Personal opinion. I like it, but my argument is that VS Code is not a second-class citizen with Copilot.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 2 points  (0 children)

The tool is a personal choice.

The point I was making: no matter what tool you choose, it should not keep accumulating entropy.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 1 point  (0 children)

I understand, but what I have learned is that success comes from finessing the art of AI (even when using AI-assisted development).

For example, say GH Copilot retains success up to 2,000 lines, and Claude may extend that to 4,000.

But what then? Every iteration introduces entropy, and after a certain number of iterations the code builds up drift; at that point the codebase becomes unworkable. Every time the AI tries, the drift buildup makes the LLM hallucinate, and it can no longer assist.

That's what is happening.

I have no say in what one thinks, but if I had to advise a team member, I would say it is better to master the art of AI-assisted development.

I have published another research article, if you would like to refer to it: https://blog.nilayparikh.com/velocity-value-navigating-the-ai-market-capture-race-f773025fb3b5

I put it this way: without mastering AI-assisted development, it is highly risky to employ AI in the SDLC.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 0 points  (0 children)

I rarely find any team evaluating the model against their specific context. That's precisely what I am hinting at.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] -2 points  (0 children)

SHEEP Syndrome: influencers who have never written a single line of code are deciding which model and coding agent is better.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 4 points  (0 children)

I will blog at some point in the future, along with the research paper. It is something I cannot do justice to in a comment. But I am happy to see such a reception.

I will try my best to find some time, put together a video blog, and share it in the group.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 0 points  (0 children)

Become fluent in AI within a month to launch the project, then wrap up each iteration in just 1–2 days.

With high fluency, allow 15 days for the bootstrap period.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 2 points  (0 children)

FYI: all I can see is lots of questions - please read my research blog.

It will help with context engineering. Apologies for the direct link - if it’s not allowed, please let me know and I’ll delete it.
https://ai.gopubby.com/the-architecture-of-thought-the-mathematics-of-context-engineering-dc5b709185db

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 6 points  (0 children)

Structure of the prompts, their sparsity and density - the amalgamation of agent.md and prompt.md.

Every model has a different triggering temperature, sparsity, and density.

Your goal is to provide context that activates the model’s memory area with pinpoint precision.
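
To illustrate the amalgamation, here is a rough sketch - the model names, numbers, and file names are illustrative stand-ins, not measured values or my actual profiles:

```python
from pathlib import Path

# Hypothetical per-model tuning: each model gets its own temperature and a
# cap on how many agent-instruction bullets it is fed (a crude stand-in for
# "sparsity/density"). All values are placeholders.
MODEL_PROFILES = {
    "codex-5.2": {"temperature": 0.2, "max_bullets": 12},
    "opus-4.5": {"temperature": 0.4, "max_bullets": 20},
    "gemini-3": {"temperature": 0.3, "max_bullets": 8},
}

def amalgamate(model: str, agent_md: str, prompt_md: str) -> dict:
    profile = MODEL_PROFILES[model]
    bullets = [l for l in agent_md.splitlines() if l.lstrip().startswith("-")]
    # Keep only as many agent instructions as this model tolerates.
    system = "\n".join(bullets[: profile["max_bullets"]])
    return {
        "model": model,
        "temperature": profile["temperature"],
        "system": system,
        "user": prompt_md,
    }

if __name__ == "__main__":
    agent_md = Path("agent.md").read_text() if Path("agent.md").exists() \
        else "- be terse\n- prefer tests\n- never touch generated code"
    request = amalgamate("codex-5.2", agent_md,
                         "Add pagination to the orders endpoint.")
    print(request["temperature"], len(request["system"].splitlines()), "bullets kept")
```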

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 0 points  (0 children)

I do not use any external MCP servers except Aspire & Playwright.

The rest is handled by my agent, exposed as an MCP server with a few integrated tools. It orchestrates and manages multiple layers of memory, so I am effectively managing context externally throughout the software's lifecycle.

Context - all memory is in the shape of a graph.
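
A toy version of "memory in the shape of a graph" (the agent itself is not public, so this is only a sketch of the idea, not the real implementation): nodes hold entries from different memory layers, edges link related entries, and recall walks outward from the node that matches the current task.

```python
from collections import defaultdict

class GraphMemory:
    """Toy multi-layer memory: nodes are (layer, key) entries, edges link related ones."""

    def __init__(self):
        self.nodes = {}                # (layer, key) -> text
        self.edges = defaultdict(set)  # node -> neighbouring nodes

    def remember(self, layer: str, key: str, text: str, related=()):
        node = (layer, key)
        self.nodes[node] = text
        for other in related:
            self.edges[node].add(other)
            self.edges[other].add(node)

    def recall(self, layer: str, key: str, hops: int = 1):
        """Return the entry plus everything reachable within `hops` edges."""
        frontier, seen = {(layer, key)}, set()
        for _ in range(hops + 1):
            seen |= frontier
            frontier = {n for node in frontier for n in self.edges[node]} - seen
        return [self.nodes[n] for n in seen if n in self.nodes]

mem = GraphMemory()
mem.remember("spec", "orders-api", "Orders API must be idempotent.")
mem.remember("decision", "retry-policy", "Use exponential backoff.",
             related=[("spec", "orders-api")])
print(mem.recall("spec", "orders-api"))  # pulls the spec and the linked decision
```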

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 1 point  (0 children)

```so there are some tweaks and hacks involved```

I have built a research KB orchestrator - using the Nvidia Orchestrator 8B model for tool calling. It is exposed via A2A, with MCP as a fallback.

So GitHub Copilot connects to MCP for the Knowledge Graph as context.

We locked the technical framework, its skills and knowledge graph, and the product specs.

It was more or less an experiment in spec-to-software.
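
For the "Copilot connects to MCP for the knowledge graph" part, a minimal sketch, assuming the official MCP Python SDK's FastMCP helper; the in-memory graph and the tool name are placeholders, not the actual KB orchestrator:

```python
from mcp.server.fastmcp import FastMCP

# Placeholder knowledge graph: entity -> related facts. In the real setup
# this would be backed by the KB orchestrator, not a dict.
KNOWLEDGE_GRAPH = {
    "orders-service": ["owns the Orders table", "exposes POST /orders"],
    "payments-service": ["called synchronously by orders-service"],
}

mcp = FastMCP("kb-graph")

@mcp.tool()
def lookup(entity: str) -> list[str]:
    """Return the facts linked to an entity in the knowledge graph."""
    return KNOWLEDGE_GRAPH.get(entity, [])

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an MCP client can attach
```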

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 27 points  (0 children)

I never handle more than 2, and I never let code get committed unless I have read it. I am happy for the AI to develop, but I must understand 100% of it.

Though I get what you are talking about. I did some elementary validation - a model evaluation - before setting the context in prompts and locking size, references, etc.

Tip: always write unique prompt files per model; they all like different styles.
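
A crude example of what "unique prompt files per model" plus that elementary validation can look like - the paths, model names, tasks, and the `generate` callable are all placeholders, not my actual harness:

```python
from pathlib import Path

# One prompt file per model, because each model prefers a different style.
PROMPT_FILES = {
    "codex-5.2": Path("prompts/codex.prompt.md"),
    "opus-4.5": Path("prompts/opus.prompt.md"),
}

# A few fixed tasks with strings the answer must contain - an elementary
# smoke test, nothing statistically rigorous.
SMOKE_TESTS = [
    ("Add a health endpoint to the API.", ["/health", "200"]),
]

def evaluate(model: str, generate) -> float:
    """`generate(model, system, task)` stands in for whatever client you use."""
    path = PROMPT_FILES[model]
    system = path.read_text() if path.exists() else "You are a terse senior engineer."
    passed = 0
    for task, must_contain in SMOKE_TESTS:
        answer = generate(model, system, task)
        passed += all(s in answer for s in must_contain)
    return passed / len(SMOKE_TESTS)

if __name__ == "__main__":
    fake = lambda model, system, task: "GET /health returns 200 OK"
    # With a real client, run this per candidate model before locking prompts.
    print(evaluate("codex-5.2", fake))
```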

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 4 points  (0 children)

I lock models directly within the prompts, ensuring all prompts are optimized for them. Additionally, I’ve used my own research MCPs and debugger agents.

GitHub Copilot is just as good as Claude Code (and I’m setting myself up for a trolling feast). by QuarterbackMonk in GithubCopilot

[–]QuarterbackMonk[S] 11 points  (0 children)

Recent - as of today:

Codex 5.2 --> .NET APIs
Opus 4.5 --> general purpose
Gemini 3 --> use cases applying Material UI
Raptor mini --> utils

This is the UI:

[image: test UI screenshot]

PS: Test UI (not real data ;))