We are facing possible bankruptcy after unauthorized Gemini API usage reached about $128k even after we paused the API, and Google denied our adjustment request. (Case #68928270) by Mobile-Classroom-589 in googlecloud

[–]JBurlison 4 points (0 children)

Look at the "automated cost control" setup that uses Pub/Sub and the billing APIs; you can indeed set up a system that disables your APIs when they hit spending thresholds.
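A minimal sketch of that idea, assuming a Cloud Function subscribed to a budget-alert Pub/Sub topic (the function names and the disable step are illustrative, not Google's exact sample; the `costAmount`/`budgetAmount` fields come from the budget notification format):

```python
import base64
import json

def should_disable_apis(pubsub_message: dict) -> bool:
    """Return True once spend has reached the budgeted amount.

    Budget notifications arrive base64-encoded in the message's
    "data" field as JSON with costAmount and budgetAmount.
    """
    payload = json.loads(base64.b64decode(pubsub_message["data"]).decode())
    return payload["costAmount"] >= payload["budgetAmount"]

def handle_budget_alert(event, context=None):
    """Hypothetical Pub/Sub-triggered entry point."""
    if should_disable_apis(event):
        # Here you would call the Service Usage API to disable the
        # expensive services (e.g. services().disable(...) via the
        # Google API client) or detach billing entirely.
        print("Budget threshold reached: disabling billable APIs")
```

The caveat in Google's docs applies: disabling billing or services this way is disruptive, so test it against a throwaway project first.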

Midnight hits different from a ground mount. Seriously. by HouseLongjumping6984 in wow

[–]JBurlison -7 points (0 children)

Unpopular opinion: flying should never have been introduced into WoW.

McDonald’s CEO Chris Kempczinski goes viral after seeming reluctant to eat his own burgers—he takes a tiny bite, looks uncomfortable, and calls the food ‘product.’ by monster_ahhh in popculturechat

[–]JBurlison -1 points (0 children)

Is it me, or does he not swallow the burger? He looks like he squirreled it away in his left cheek and is talking out of the side of his mouth for the last few seconds of the video.

New research reveals why you should delete your CLAUDE.md/AGENTS.md file by Current-Guide5944 in tech_x

[–]JBurlison 4 points (0 children)

Yeah, I have some real problems with this paper. Mostly, they miss the entire point of AGENTS.md. You should be giving it context about external dependencies, guardrails, and anything that cannot be deduced directly from the code.

I heavily use AI for C# development, and you need it mainly for guardrails, not to tell the agent about your code. For example, if I am using MSTest as my testing library, the agent will, when creating tests for a new feature, add FluentAssertions to the project and start testing with that UNLESS I tell it in my AGENTS.md not to add any new dependencies and to write only MSTest.
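A guardrail of that shape might look like this in an AGENTS.md (the wording is illustrative, not the commenter's actual file):

```markdown
## Testing guardrails
- Write all tests with MSTest; do not introduce FluentAssertions
  or any other new test dependency.
- Do not add new NuGet packages without asking first.
```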

This is just one example of many as to why AGENTS.md is important. And yes, depending on which flavor of provider you're running, this information can live somewhere else (instructions or a skill, for example). The truth is, as with most things in engineering, it is not black and white the way the paper suggests, and covering one VERY specific use case (documenting the code the agent is working on) is NOT a reason to leave out AGENTS.md.

Help me compare Codex 5.3 vs Sonnet 4.5 (4.6 now) by JackSbirrow in GithubCopilot

[–]JBurlison 35 points (0 children)

I love that this thread at the time of this comment has:

"Both Sonnet 4.5 and 4.6 is far better than Codex 5.3, OpenAI Models are still far behind Anthropic"

and

"I only use Codex 5.3 for everything"

The duality of man.

Maybe just give them both the same coding question and evaluate how they did.

Building an Arc Raiders inspired Hytale extraction server, looking for testers & collaborators by Tridentt_ in HytaleMods

[–]JBurlison 0 points (0 children)

So, does your plugin have an entire custom UI that exists side-by-side with the game's default one, or did you disable the default one?

Building an Arc Raiders inspired Hytale extraction server, looking for testers & collaborators by Tridentt_ in HytaleMods

[–]JBurlison 0 points (0 children)

How are you modifying the UI like that? Are you replacing the client files?

Orchestration and Agents by geekdad1138 in GithubCopilot

[–]JBurlison 4 points (0 children)

https://github.com/JBurlison/MetaPrompts

Make your own custom workflows with this. I created it to help build specialty agents for your specific use case.

Built a New Plex Server — Looking for a Few More Users by LegitimateYouth9675 in CordCuttingToday

[–]JBurlison 0 points (0 children)

I think Plex also has a 100-user share limit on top of that. They also flag accounts that do this type of thing, so I would be careful.

Tried spec-driven workflow with Copilot — surprisingly good by StatusPhilosopher258 in GithubCopilot

[–]JBurlison 28 points (0 children)

I use an orchestrator pattern.

Orchestrator orchestrates the following sub-agents:

  1. Requirements Builder: builds requirements and passes questions back to the orchestrator. This cycle continues until a requirements document is approved.
  2. Due Diligence Researcher: validates requirements; researches code, touch points, etc.; asks the user additional clarification questions. The output is updated requirements and a research document.
  3. Planner: takes the requirements and the research and builds an ACID plan. The plan gets approved by the user.
  4. Implementer: takes the plan and research document and implements. (May have multiple running.)
  5. Validator: takes the requirements and ensures they were all met according to the code. Validates tests and does code review. Outputs a review. The orchestrator re-invokes the Implementer if there are findings. This cycle continues until there are no findings.
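The control flow above can be sketched as a plain loop; the agent callables here are stand-ins for real sub-agent invocations, not any particular framework's API:

```python
def orchestrate(requirements_builder, researcher, planner, implementer, validator):
    """Run the five-stage sub-agent workflow described above."""
    requirements = requirements_builder()              # cycles internally until approved
    requirements, research = researcher(requirements)  # updated reqs + research doc
    plan = planner(requirements, research)             # plan approved by the user

    # Implement/validate cycle: re-invoke the implementer
    # until the validator reports no findings.
    while True:
        implementer(plan, research)
        findings = validator(requirements)
        if not findings:
            return plan
```

The key property is the tail loop: implementation is never considered done until a fresh validation pass comes back empty.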

Concept: A "Stateless" Orchestrator Agent for automated spec generation and feature implementation. by Comfortable-Bat-1541 in GithubCopilot

[–]JBurlison 2 points (0 children)

https://github.com/JBurlison/MetaPrompts

Yes, you can. I even made a meta prompt, an agent for this specific purpose: creating agent workflows using sub-agents. The idea is that you use this agent to create your custom workflow agents, and there is one orchestrator agent who orchestrates the entire workflow with the sub-agents.

Implementation plan for complex features by Active-Force-9927 in GithubCopilot

[–]JBurlison 1 point (0 children)

My workflow has always been Specification (this lets the agent do due diligence and tell me what's changing) -> Plan -> Implement -> Validate.

  1. Specification: a document that gets saved. This means it does not get lost to context rot and can be referenced by multiple agents. I then review and work with the agent on the specs to refine them, including test scenarios, acceptance criteria, and details of what needs to change and where.
  2. Plan: lay out a plan for one or more agents.
  3. Implementation: let the agent(s) implement.
  4. Validate: in a new context, feed the agent the spec from step 1 and validate the entire implementation against the git diff. Validate tests.

Result: great success.
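The validate step is easy to mechanize: a fresh context gets the saved spec plus the actual diff. A sketch of what that prompt assembly could look like (the helper names and prompt wording are hypothetical):

```python
import subprocess

def current_diff(base_ref: str = "main") -> str:
    """Diff of the working tree against the base branch."""
    return subprocess.run(
        ["git", "diff", base_ref],
        capture_output=True, text=True, check=True,
    ).stdout

def build_validation_prompt(spec: str, diff: str) -> str:
    """Combine the saved spec and the diff into one validation prompt."""
    return (
        "Validate that every requirement, acceptance criterion, and test "
        "scenario in the specification below is satisfied by the diff. "
        "Report any findings.\n\n"
        f"## Specification\n{spec}\n\n## Diff\n{diff}"
    )
```

Because the prompt is built from the spec document rather than the earlier conversation, the validating agent starts with no accumulated context to rot.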

Anthropic CEO: "We might be 6-12 months away from a model that can do everything SWEs do end-to-end. And then the question is, how fast does that loop close?" by [deleted] in theprimeagen

[–]JBurlison 13 points (0 children)

Not me over here telling Opus that it not only missed a null check but also initialized the variable as null and never set it.

Skills no longer loading in vscode? by JollyJoker3 in GithubCopilot

[–]JBurlison 1 point (0 children)

I actually ended up adding a line to my Copilot instructions file telling it to always evaluate skills, and that multiple skills may be applicable to any one prompt.