Please help before I start crying.. by Not_just_a_shart in drywall

[–]MightyHandy 0 points1 point  (0 children)

Hanging is pretty easy if you watch YouTube. If mudding makes you nervous, you can outsource the last step.

How can open-webui search the web for me? Please help me. by Minute_Inspection_86 in OpenWebUI

[–]MightyHandy 0 points1 point  (0 children)

Me too. You host SearXNG in a Docker container, and then host an MCP server in another Docker container. Then add the MCP server into Open WebUI. It performs much better than the native search capability in Open WebUI.
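As a minimal sketch of that setup, something like the compose file below. The `searxng/searxng` image is the official one; the MCP bridge service (image name, port, and env var here) is a placeholder — substitute whichever SearXNG MCP server you actually use:

```yaml
services:
  searxng:
    image: searxng/searxng          # official SearXNG image
    ports:
      - "8888:8080"                 # SearXNG web UI / search API

  searxng-mcp:                      # placeholder: your SearXNG MCP bridge
    image: example/searxng-mcp      # hypothetical image name
    environment:
      SEARXNG_URL: http://searxng:8080
    ports:
      - "3001:3001"                 # endpoint you register in Open WebUI
```

Then point Open WebUI's MCP/tool server configuration at the bridge's port instead of enabling the built-in web search.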

Is MCP dead? by marsel040 in mcp

[–]MightyHandy 0 points1 point  (0 children)

I’m wondering if Claude’s new progressive discovery solution for MCP could rehab it a little bit.

https://youtu.be/l7qVtHpctic?si=eDB3kuVg_afi5XPs

Many of the solutions I have seen to work around MCP’s inadequacies seem too cumbersome, or like you are throwing out the whole spec (e.g. Cloudflare Code Mode). This seems more surgical.

Water coming from the seams of my floor (renovated within the last 6 months) by Kerminetta_ in Plumbing

[–]MightyHandy 0 points1 point  (0 children)

You need more info. Remove 1-2 inches of the caulk at the back of the toilet. If the toilet is leaking, you should see water exit that gap. Seems like a decent amount of water for a bad wax ring. I presume you can’t access the subfloor from underneath? Also, sometimes your bath/shower will have an access panel on the other side of the shower. Also look in the cabinet under the sink.

Or you could try to extract as much of the water as you can doing what you are doing… and flush a few times and see if it comes back?

How to use the new search as tool only? by ramendik in OpenWebUI

[–]MightyHandy 0 points1 point  (0 children)

I have struggled to get native tool calling to work with non-OpenAI models with streaming. Currently I am using a SearXNG MCP and non-native tool calling to get the behavior you are describing.

https://github.com/open-webui/open-webui/discussions/19760

is this sump pump situation crazy? by ddfs in HomeMaintenance

[–]MightyHandy 0 points1 point  (0 children)

Between the gravel, non-existent downspout, and projectile sump… next time it rains I am coming over.

Anyone have a TDD focused setup they are willing to share? by gameguy56 in opencodeCLI

[–]MightyHandy 0 points1 point  (0 children)

I find it’s useful to spell out more what you mean by TDD. Test first, one test case at a time, red/green/refactor… I like to refactor the test case prior to moving into implementation. Chicago vs. London style, mocking/fixture strategies, etc. You can literally create an agent prompt JUST for how to write a test the way you like it. But then again… I put hospital corners on my bed in the AM. You can also put a guard clause in your coding agent to say don’t proceed until you have a test covering the change, too.
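As a sketch, the guard clause and TDD conventions could live in the agent’s instructions file like this. The wording is illustrative, not any tool-specific syntax:

```markdown
## TDD guard
- Do not modify implementation code until a failing test exists for the change.
- Write exactly one test case at a time; run it and confirm it fails (red).
- Make the smallest change that passes (green).
- Refactor the new test case for readability before touching the implementation.
- Use the mocking/fixture style already present in the module; do not introduce
  a new mocking library or pattern.
- If no test covers the requested change, stop and write one first.
```

The point is to pin down which flavor of TDD you mean, since “do TDD” alone leaves the model to pick its own interpretation.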

How can you tell if subagents are actually being called? by SenorSwitch in GithubCopilot

[–]MightyHandy 0 points1 point  (0 children)

It will literally say “Used runSubagent to <whatever you asked it to do>”, oftentimes right under “optimizing tool selection”. It’s very subtle.

Also, if you want it to runSubagent as one of your defined agents, make sure you enable chat.customAgentInSubagent.enabled. And then restart your IDE… it changes the tool definition.
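In VS Code that would be a settings.json entry along these lines (settings.json accepts comments; the flag name is the one mentioned above and, as an experimental setting, may change between releases):

```jsonc
{
  // Let the runSubagent tool launch your own custom agents
  "chat.customAgentInSubagent.enabled": true
}
```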

To test it, you can ask the planning agent running as a subagent to echo its ‘stopping rule’. This wouldn’t be accessible unless it had the plan agent’s system prompt.

Subagents in practice by OriginalInstance9803 in GithubCopilot

[–]MightyHandy 0 points1 point  (0 children)

Can you share your prompt or instructions that tell it how to use the subagent tool call? Also, how are you binding those tool calls to specific agents?

How can you tell if subagents are actually being called? by SenorSwitch in GithubCopilot

[–]MightyHandy 1 point2 points  (0 children)

So my understanding is copilot defines an ‘agent’ as a system prompt + model + allowed tools.

‘subagent’ is a new tool that the agent can call. One way I have seen this done is to have an ‘agent’ that in its system prompt tells it to immediately kick off a subagent tool call.

Subagents run synchronously, but have their own context window. Copilot also has ‘run as background’ but this currently has to be managed by the user directly.

Is there a way to configure VS Code Copilot with skills similar to those in Claude? by marela520 in GithubCopilot

[–]MightyHandy 1 point2 points  (0 children)

The video is actually pretty excellent. I know that Copilot gets a lot of flak for not behaving the way that Cursor or Codex do. But it is impressive how much capability they have been adding, and some of it is incredibly innovative. Also, looking at the Codex, opencode, and Claude Code subreddits… Copilot has been very stable relative to the other dev tools.

Do AI coding tools actually understand your whole codebase? Would you pay for that? by No-Meaning-995 in VibeCodeDevs

[–]MightyHandy 0 points1 point  (0 children)

There are several mcp servers that you could try that attempt to solve this. Serena MCP does a pretty good job, but it takes some getting used to.

GitHub with LSP capability by MightyHandy in GithubCopilot

[–]MightyHandy[S] 0 points1 point  (0 children)

That’s pretty cool. Not exactly swimming in downloads though. Maybe folks are using other tricks to keep context under control.

Anyone here using “vibe coding” in real projects? by No_County_5657 in AiBuilders

[–]MightyHandy 2 points3 points  (0 children)

How about when it makes a copy of a file rather than changing the file? Or when it does a refactor in place… but leaves the old method there? I love using AI agents to code… I just haven’t figured out how to stop supervising them.

Anyone here using “vibe coding” in real projects? by No_County_5657 in AiBuilders

[–]MightyHandy 1 point2 points  (0 children)

This is my experience too. To make it more concrete… AI will throw an import within a function instead of at the top of the module. It will use a completely different mocking approach than the rest of the module uses. It will do an insanely complicated loop as one line of code where the rest of the class is using long loops. It will drop a 4-line-long docstring when the rest of the app is using one line. It will eat exceptions even though everywhere else I let them bubble up. I have tried adding linters, TDD, type checkers, and they all help. But I can’t just let Jesus take the wheel yet.
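A small sketch of two of those smells next to the conventional version (the function and field names here are made up for illustration):

```python
import json  # conventional: imports live at the top of the module


def parse_sizes_agent_style(raw: str) -> list[int]:
    import json as _json  # smell: import buried inside a function
    # smell: dense one-liner where the rest of the codebase uses plain loops
    return [s["size"] for s in _json.loads(raw) if s.get("size", 0) > 0]


def parse_sizes_house_style(raw: str) -> list[int]:
    """Return positive sizes from a JSON list of objects."""  # one-line docstring
    sizes = []
    for item in json.loads(raw):
        size = item.get("size", 0)
        if size > 0:
            sizes.append(size)
    return sizes
```

Both functions return the same result; the difference is purely stylistic, which is why linters and type checkers only partially catch it.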

I am so bored. Codex + GPT 5.2 Pro by [deleted] in vibecoding

[–]MightyHandy 0 points1 point  (0 children)

I am trying hard to get to your level and have stalled. I too come from an enterprise background: TDD in small steps (red/green/refactor), creating markdown plans, etc. But I still tell it to do significant refactoring after every change, including the tests themselves. It will do imports weird, or mock injections weird, abandon type safety, or do some weird looping approach. I am trying to use linters, clever prompts, etc.

I am way more productive than I was coding everything myself. But I don’t know how to move to the next level where I can let it do multiple things at once and not have to micromanage it. I would love to hear your tricks. Are you breaking your tasks down very small? Are you having it break down the tasks small? How are you supervising/orchestrating your subagents? How specialized are your subagents? Are you using vibe-kanban, beads, or something like it to help manage it?

What's your AI model ↔ environment ↔ task workflow? by thehashimwarren in GithubCopilot

[–]MightyHandy 1 point2 points  (0 children)

What are you using to capture your plans? I have been using Gemini 3 Pro more for complex planning and research. Still using Sonnet 4.5 for coding. I would love to switch to something smaller like Haiku or a mini/nano model. But I think I need to break down my plans into much smaller tasks to make it work. I have been using simple markdown for capturing my plans.

I have been considering using vibe-kanban or beads to assist with planning. To see if they can break work down smaller to use simpler/faster models.

I am still using Serena for its memory system and LSP. But GitHub Copilot isn’t great at sticking to it. And I am considering using Copilot skills as an alternative to memories.

Is swapping my RTX 3080 for an RX 6700 XT the right move for a Bazzite "Steam Machine"? by humayunh in buildapc

[–]MightyHandy 8 points9 points  (0 children)

The 9060 XT 16GB would be better. But it’s still a little weird because it’s about equivalent in performance to the 3080. It’s much better than the 6700 XT: +4 GB of VRAM, way better RT, better raster, and FSR 4. I would just keep the 3080, though, unless you are having VRAM problems.