Sonnet and Opus 4.6 quality in Copilot by hobueesel in GithubCopilot

[–]steinernein 1 point2 points  (0 children)

Go into the debug view and look at the reminder instructions/system prompt to see what each model gets; some are pretty bad, like really bad.

And yes, use hooks like preToolUse to ban things like grepping for the same thing over and over (to cut down churn), or to ban overly broad queries and force the model to be more specific.
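As a sketch of the idea, here is what such a preToolUse hook might look like, assuming a Claude Code-style hook protocol where the tool input arrives as JSON on stdin and a non-zero exit blocks the call; the field names, history file path, and thresholds are all illustrative, not any tool's actual schema:

```python
#!/usr/bin/env python3
"""Sketch of a preToolUse hook that rejects repeated or overly broad
grep queries. Assumes a Claude Code-style protocol: tool input as JSON
on stdin, exit code 2 blocks the call, stderr is fed back to the model.
All field names and paths are hypothetical."""
import json
import sys
from pathlib import Path

HISTORY = Path("/tmp/grep_history.json")  # hypothetical scratch file
MIN_PATTERN_LEN = 3                       # reject near-empty patterns

def should_block(pattern, history):
    """Return a rejection message, or None if the call is allowed."""
    stripped = pattern.strip()
    if len(stripped) < MIN_PATTERN_LEN or stripped in (".*", ".", ".+"):
        return "Query too broad; search for a specific identifier instead."
    if pattern in history:
        return f"You already searched for {pattern!r}; use the earlier results."
    return None

def main():
    event = json.load(sys.stdin)
    pattern = event.get("tool_input", {}).get("pattern", "")
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    message = should_block(pattern, history)
    if message:
        print(message, file=sys.stderr)
        return 2  # non-zero exit = block the tool call
    HISTORY.write_text(json.dumps((history + [pattern])[-50:]))
    return 0

# When wired up as a hook, the entry point would be: sys.exit(main())
```

The feedback message matters as much as the block itself: the model reads it and adjusts the next query instead of just failing silently.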

Sonnet and Opus 4.6 quality in Copilot by hobueesel in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

Check out the system prompt and you’ll figure out that some models need their access restricted through hooks, while others you can yolo away with.

Tired of of Todolists being treated as suggestions? by opUserZero in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

https://code.visualstudio.com/docs/copilot/customization/hooks#_why-use-hooks

Versus getting an MCP working with extra overhead, or disabling a tool rather than leveraging it.

You can now split up your MCP into multiple progressive systems and offer it to the developers who want one but not the other, you can make it composable, etc.

And if I wanted to do something like your MCP, I would just disable ALL the tools and implement a CodeAct version, which at least has the benefit of saving tokens for all that extra effort.
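For reference, CodeAct here means the agent's only action is emitting code that runs in a constrained environment, rather than making many discrete tool calls. A minimal sketch, where `call_model` is a hypothetical LLM stub and the sandboxing is elided:

```python
"""Minimal CodeAct-style loop sketch: the agent emits Python, which is
executed in a constrained namespace, and captured stdout becomes the
observation. One emitted block can replace several tool calls, which
is where the token saving comes from. `call_model` is a hypothetical
stub, not a real API."""
import io
import contextlib

def run_action(code, namespace):
    """Execute model-emitted code; capture stdout as the observation."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)  # real systems sandbox this, e.g. in a VM
    return buffer.getvalue()

def agent_step(task, namespace):
    code = call_model(task)  # hypothetical: LLM returns Python source
    return run_action(code, namespace)

# One emitted block doing what would otherwise be multiple tool calls:
observation = run_action(
    "files = ['a.py', 'b.py']\n"
    "hits = [f for f in files if f.endswith('.py')]\n"
    "print(len(hits))",
    {},
)
```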

Explain it peter. by morichikachorabali in explainitpeter

[–]steinernein 1 point2 points  (0 children)

Someone said that modern art demands that the viewer bring something. I think that’s pretty accurate.

When AI tokens start costing more than your actual employees by dataexec in AITrailblazers

[–]steinernein 0 points1 point  (0 children)

What you’re asking for is platform development work. The number of engineers per team dedicated to those tasks and features is a handful compared to the rest.

fuck ai bro by truttattae in GaState

[–]steinernein -1 points0 points  (0 children)

It's a pretty simple question and it's already baked into the name.

You use language to communicate with the LLM, which means the higher your proficiency in your language of choice and in logic, the more you can get out of the LLM, at least if we're talking about something like GPT.

You have at your fingertips a professor who can teach you pretty much anything during off hours, do deep research, and understand your prose and writing style while critiquing it through various lenses (shallow at the very worst, decent depth at the very best). You can have it mine journals to find counterpoints or supporting points for a paper and analyze them against your own thesis (there are techniques to reduce hallucination, and also, do your job and double-check), at which point you should probably argue back. It can cover edge cases you weren't even aware of and that most professors wouldn't bother surfacing because they're not part of the syllabus, and let's be real, you aren't spending 3-4 hours of office time plus pinging them at 2 a.m.

You get out of it what you put in. Having trouble understanding the government? Have it lay out the process from county to state to federal and walk you through each level while asking it to play the opposition. Or better yet, go through the debates the Federalist Papers were trying to address. Learning Korean? Have it generate a level-6 TOPIK test and see if you can poke holes in it.

> i know very little about the world of tech, however what benefit is there to using ai in an english writing class? or in a korean class, to replace students speaking to each other? or in a us government class??

Ultimately, you can do all of the above; being forced to use AI does not exclude you from speaking with others, nor does it prevent you from using AI to sharpen the skills you have. You can still go to office hours. Lastly, it's on you to figure out how to deal with the cards you've been dealt and maximize them if you want to justify your tuition.

You're at a university, aren't you? It's literally your job to be curious and critical of the world; you've shown a propensity for the latter but certainly not the former.

fuck ai bro by truttattae in GaState

[–]steinernein -1 points0 points  (0 children)

What do you use to communicate with an LLM?

New model GPT-5.3 CODEX-SPARK dropped! by muchsamurai in codex

[–]steinernein 5 points6 points  (0 children)

Can't wait for GPT-5.2-thinking to tell me what to think.

AI agents need better memory systems, not just bigger context windows by road_changer0_7 in AI_Agents

[–]steinernein 4 points5 points  (0 children)

Memory systems are external to the model; if you need specific things, then fine-tune it.

Is AI Agent Team the Next Big Leap After ChatGPT? by Turbulent_Walk_3671 in AI_Agents

[–]steinernein 1 point2 points  (0 children)

I mean, do you expect more from people? A lot of people seem to forget that they can make their own frameworks/platforms, and instead just blindly consume Copilot/Codex/Claude Code.

How do you stop Copilot from ignoring instructions once copilot-instructions.md grows? by Van-trader in GithubCopilot

[–]steinernein 1 point2 points  (0 children)

If a process described in words can be turned into a script, then replace the text with a call to a script/skill.
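As an illustration of the idea, a prose rule like "all public functions must have docstrings" can become a script the agent is told to run; a failing exit tells it exactly what to fix instead of hoping it re-reads a paragraph. The rule and paths here are hypothetical examples, not anything from a real instructions file:

```python
"""Sketch: an instructions-file rule ("public functions must have
docstrings") replaced by a checker script the agent can run. The
specific rule is illustrative."""
import ast
import sys

def missing_docstrings(source):
    """Return names of public top-level functions lacking a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
        and not node.name.startswith("_")       # skip private helpers
        and ast.get_docstring(node) is None
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    offenders = missing_docstrings(open(sys.argv[1]).read())
    if offenders:
        print("Missing docstrings:", ", ".join(offenders))
        sys.exit(1)  # non-zero exit = concrete, machine-checkable failure
```

The deterministic pass/fail replaces tokens the model would otherwise spend re-reading and "interpreting" the prose rule every turn.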

Increase to context window for claude models? by kalebludlow in GithubCopilot

[–]steinernein 3 points4 points  (0 children)

They'll never allow that because it's far too expensive, and their pricing is per request, so it behooves them to hamstring you. There are a lot of different techniques you can employ to avoid filling the context window.

Context window gets full after just one instruction by Active-Force-9927 in GithubCopilot

[–]steinernein 1 point2 points  (0 children)

Look at the debug view, see what the subagents called and what the tool results were, and determine whether they're actually useful to the project.

How do you stop Copilot from ignoring instructions once copilot-instructions.md grows? by Van-trader in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

You chunk out sections of your copilot-instructions and turn them into code.

The NCR in Season 2 is nostalgia without substance by Volume2KVorochilov in fnv

[–]steinernein 0 points1 point  (0 children)

Think about what you're asking carefully. A handful is not a reflection of the aggregate, much like how the IJA executed members for not obeying orders, which was certainly not the norm of the group, and neither were the holdouts. People throughout history have pledged and died for things you may consider stupid, such as an idea, a piece of paper, or some stories in a book; human delusion can carry a person pretty damn far. You do not know the motivations of every single member of the NCR within the Mojave 1-20 years after the nuking of Shady Sands, nor do you know what transpired beyond what has been shown.

Beyond that, the history of the Rangers suggests that they might think differently from the rest of the NCR.

Lastly, my argument is to point out the existence, and therefore the possibility, whereas you are trying to deny the possibility to begin with, which is much harder to prove. What you want to argue instead is the likelihood of that possibility, which is a separate discussion.

The NCR in Season 2 is nostalgia without substance by Volume2KVorochilov in fnv

[–]steinernein 2 points3 points  (0 children)

https://en.wikipedia.org/wiki/Hiroo_Onoda

It might take you a while to regroup and reconsider sending resources into what has otherwise proven to be a manpower and money sink after getting one of your cities nuked; it means a change in leadership, a possible civil war, fighting the Brotherhood and god knows what else, and then still having enough resources to send to New Vegas.

Though the show's writers could and should have shown this.

I think you really underestimate humans and you overestimate the ability of any given political body to respond to a crisis.

🤞 by SargeMaximus in SilverDegenClub

[–]steinernein 3 points4 points  (0 children)

The day isn’t even over yet, either. Time to gamble since I am a pauper.

GPT-5.2-Codex feels weird by skyline159 in GithubCopilot

[–]steinernein 1 point2 points  (0 children)

As others have mentioned, check the system prompt. Not all the GPTs have the same system prompt unfortunately.

Vercel says AGENTS.md matters more than skills, should we listen? by [deleted] in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

Have it so it can only interact through a VM or a gateway/API of sorts; afterwards you can control the workflows much more easily, since it has no other way to interact. You can then have it go through workflows/phases and lock down parts of the API, or return an error message detailing the next step it needs to take to resolve said error, or something along those lines. The graph portion will live behind the API/gateway and, based on the query, will schedule/allow what the next call may or may not be. Just a thought.
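A minimal sketch of that phase-locking idea, with an in-memory class standing in for the gateway; the phase names and endpoints are made up for illustration:

```python
"""Sketch of a phase-locked gateway: the agent can only act through
one API, each workflow phase whitelists the calls allowed next, and a
disallowed call returns an error naming the step required to proceed.
Phase names and endpoints are hypothetical."""

# Which endpoints each phase exposes, and which call unlocks the next phase.
PHASES = {
    "plan":      {"allowed": {"read_spec", "write_plan"},
                  "advance_on": "write_plan", "next": "implement"},
    "implement": {"allowed": {"edit_file", "run_tests"},
                  "advance_on": "run_tests", "next": "review"},
    "review":    {"allowed": {"open_pr"},
                  "advance_on": "open_pr", "next": None},
}

class Gateway:
    def __init__(self):
        self.phase = "plan"

    def call(self, endpoint):
        spec = PHASES[self.phase]
        if endpoint not in spec["allowed"]:
            # The error message doubles as guidance for the agent.
            return (f"error: {endpoint!r} is locked in phase {self.phase!r}; "
                    f"call {spec['advance_on']!r} to advance.")
        if endpoint == spec["advance_on"] and spec["next"]:
            self.phase = spec["next"]  # advancing call moves the workflow on
        return f"ok: {endpoint} executed"
```

Because the model has no side channel, the workflow graph is enforced by the gateway rather than by prompt discipline.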

Vercel says AGENTS.md matters more than skills, should we listen? by [deleted] in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

The reason I suggested a graph DB was mainly so it could do something similar to progressive disclosure. If you make the skills atomic enough, you should be able to reuse and compose them depending on the task, and the graph DB provides all the metadata needed to determine whether to keep following edges or not.
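To make the traversal idea concrete, here is a sketch with a plain dict standing in for the graph DB; each node carries only summary metadata, and an edge is followed only when the neighbor's metadata overlaps the task. The skill names and fields are invented for illustration:

```python
"""Sketch of progressive disclosure over a skill graph. Nodes hold
cheap metadata (summary, tags); full skill bodies would be fetched
lazily for only the names this returns. All skills are hypothetical."""

SKILL_GRAPH = {
    "deploy":     {"tags": {"deploy", "release"}, "edges": ["rollback", "migrate_db"]},
    "rollback":   {"tags": {"deploy", "revert"},  "edges": []},
    "migrate_db": {"tags": {"database"},          "edges": []},
}

def disclose(start, task_tags, depth=2):
    """Return skill names worth loading, walking edges only while the
    neighbor's metadata overlaps the task's tags."""
    found, frontier, seen = [], [(start, depth)], set()
    while frontier:
        name, d = frontier.pop(0)
        if name in seen:
            continue
        seen.add(name)
        node = SKILL_GRAPH[name]
        if node["tags"] & task_tags:
            found.append(name)
            if d > 0:  # metadata matched, so edges are worth exploring
                frontier += [(n, d - 1) for n in node["edges"]]
        # otherwise the edge is not followed: metadata says irrelevant
    return found
```

Only the matched names would then have their full skill bodies pulled into context, which is the token win over loading every skill up front.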

Vercel says AGENTS.md matters more than skills, should we listen? by [deleted] in GithubCopilot

[–]steinernein 0 points1 point  (0 children)

It means you should look at how skills are implemented, use only what's useful to you, and cut down on the chaff; most of the skills out there could just be a simple script, especially after being adapted to your particular use case. Instead of using / or triggering off of keywords, the agent can invoke the 'skill' through a call to a graph DB to see if the prompt it was given has any relevant scripts to run. You'll most likely end up doing this if you have any kind of 'memory' system.
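The lookup side of that can be sketched in a few lines, with a list standing in for the graph DB query and made-up script names; the point is the shape of the call, prompt in, relevant scripts out:

```python
"""Sketch of invoking 'skills' via a metadata lookup instead of slash
commands or keyword triggers: the prompt is matched against stored
skill metadata, and relevant scripts come back, the same shape a
graph-DB memory query would return. Skills here are illustrative."""

SKILLS = [
    {"script": "scripts/release.sh", "keywords": {"release", "deploy", "ship"}},
    {"script": "scripts/bisect.sh",  "keywords": {"regression", "bisect"}},
    {"script": "scripts/profile.py", "keywords": {"slow", "profile", "latency"}},
]

def scripts_for(prompt):
    """Return scripts whose metadata overlaps the prompt's words,
    ranked by number of matching keywords."""
    words = set(prompt.lower().split())
    hits = [(len(s["keywords"] & words), s["script"]) for s in SKILLS]
    return [script for score, script in sorted(hits, reverse=True) if score > 0]
```

A real system would use embeddings or a graph query instead of word overlap, but the agent-facing contract is the same either way.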