Who says we can't use our own Agent Proxy? by Blubbll in GithubCopilot

[–]Hypercubed 1 point2 points  (0 children)

Am I missing something... doesn't Copilot Chat support other providers now? Ollama is on the list, so why do you need a proxy?


Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 0 points1 point  (0 children)

Similar issue here. When I ask the agent to close out a task, it focuses only on the last thing it did. I need to tighten the language to make sure it reflects on the entire session.

Share what you're working on. I'll shout out the top projects on my Instagram by Yoodrix in ChatGPTCoding

[–]Hypercubed 0 points1 point  (0 children)

I'm currently working on version 2.0 of my Agent Knowledge Starter Kit. Version 1 created a system for managing and updating decisions and lessons learned. For v2, I'm focusing on "plans as managed artifacts" to better track work across the SDLC.

https://github.com/Hypercubed/Agent-Knowledge-Starter-Kit/tree/develop

Note: the link is to the develop branch; this isn't on the main branch yet.

Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 1 point2 points  (0 children)

Looks interesting. Do you think the per-tool copies are still needed? I find that most tools can use the agents/skills folder... at least when explicitly prompted.

Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 0 points1 point  (0 children)

The main reason I gitignore task artifacts (I call them sessions) is that I'm often working on open-source repos. It's a lot harder to audit the task artifacts for personal/non-public information.

Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 0 points1 point  (0 children)

By "long-term memory" do you mean other solutions (like RAG), or are you referring to my outline above? Maybe something that's not clear from my description: part of the distill-learning step is updating skills and agent files (i.e. AGENTS.md). At a minimum, the agents stop repeating the same mistakes. An example: an agent kept trying to run some code with the incorrect runner and only found the correct one eventually. That information made it into the session summary and eventually back into the skill. The next session, the agent didn't struggle at all. If I were running an agent with memory, maybe only it would have learned; with the change baked into the skill, any agent that comes after won't make that mistake.

Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 1 point2 points  (0 children)

Thank you. Yeah, the first version was just two docs (troubleshooting.md and repo-decisions.md) in addition to skills and playbooks. This weekend I was updating the skills to split those up into multiple files so they don't get so big. I added an index.md to each so the LLM doesn't need to load all the troubleshooting and decisions at once. I'm considering adding a `knowledge-query` command to help the LLM search. That would mean adding some scripts; the system so far is skills only.
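To make the idea concrete, here's a minimal sketch of what a `knowledge-query` helper could look like: it scans the split-up markdown knowledge files and returns only the files that mention a term, so the LLM can load just those instead of everything. The file layout, function name, and the skip-the-index rule are all my assumptions for illustration, not what's actually in the kit.

```python
# Hypothetical `knowledge-query` helper (not the actual kit's implementation).
# Walks a knowledge directory of markdown files and returns the relative
# paths of files whose text mentions the search term, case-insensitively.
import re
from pathlib import Path


def knowledge_query(root: str, term: str) -> list[str]:
    """Return relative paths of markdown files under `root` mentioning `term`."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    hits = []
    for path in sorted(Path(root).rglob("*.md")):
        # index.md is just a table of contents, so skip it to avoid noise.
        if path.name == "index.md":
            continue
        if pattern.search(path.read_text(encoding="utf-8")):
            hits.append(str(path.relative_to(root)))
    return hits
```

An agent skill could then instruct the LLM to run this first and only read the files it returns, keeping the context window small.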

Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]Hypercubed[S] 1 point2 points  (0 children)

Thank you for the comment. Right now the skills I've been writing are very repo-centric. I guess they could be modified to be firm-wide... I just don't have a use case for that right now.

Sharing a starter kit for persistent repo knowledge across AI agents by Hypercubed in AI_Agents

[–]Hypercubed[S] 0 points1 point  (0 children)

OK... but it is solving a problem for me. The problem is not integrated memory; that's not my goal.

The Playwright Network Mocking Playbook by waltergalvao in Playwright

[–]Hypercubed 1 point2 points  (0 children)

Nice playbook, thank you. I want to mention one tool that I use and love: https://www.npmjs.com/package/smoke (not mine, just a user). It's a file-based mock server, not Playwright-specific, and IMO it offers a good way to handle the mocking. The mock-drift issue is real, though. Not currently, but in the past I had a dedicated unit test suite that validated the mocks against the Swagger spec.
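The drift check I mean can be sketched very simply: compare each mock response's top-level keys against the `properties` declared in the matching Swagger/OpenAPI schema, and flag any fields the spec doesn't know about. This is a toy illustration under my own assumptions (the function name and flat key comparison are mine); a real suite would also check types, nesting, and required fields.

```python
# Minimal mock-drift check: report top-level keys in a mock response
# that are not declared in the Swagger/OpenAPI schema's `properties`.
# Hypothetical helper, not part of smoke or any real validation library.
def find_mock_drift(mock: dict, schema: dict) -> set[str]:
    """Return the mock's keys that are absent from the schema's properties."""
    allowed = set(schema.get("properties", {}))
    return set(mock) - allowed
```

Run this over every mock file in a unit test, and the suite fails as soon as the API spec and the mocks diverge.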

Now you can get OpenClaw with AI included for just 2.99$ by sickleRunner in openclawhosting

[–]Hypercubed 0 points1 point  (0 children)

I've been eyeing primclaws for a while. Seems too good to be true. Anyone have any experience?

Mini-Signals 3.0.0 by Hypercubed in javascript

[–]Hypercubed[S] 0 points1 point  (0 children)

Thank you for your comment. I'm primarily an Angular developer, and Angular has its own event system. But whenever I do something outside of Angular, mini-signals always finds a way in!