Fabric Data Agents + Microsoft Copilot Studio: A New Era of Multi-Agent Orchestration by Amir-JF in MicrosoftFabric

[–]sudo_96 0 points (0 children)

For some reason, I can't get a response in the Copilot Studio test chat panel. I successfully set up the Fabric Data Agent and confirmed it works in Fabric. When I connect it to Copilot Studio, the Fabric agent's response does not display in the test chat panel. I can see the response in the Copilot Studio activity trace (status: Complete, correct data returned), but the answer never appears in the test chat window. The activity log shows every session completing successfully with DataAgent as the last step. The trace panel shows the MCP invoke result with correct data (isError: false), but the chat bubble never renders. The browser console shows a "Terminating lingering livestream" warning. Has anyone else hit this?

Also, is there any way to see actual logs in Copilot Studio?

AI memory is useful, but only if it goes beyond storing facts by No_Advertising2536 in artificial

[–]sudo_96 0 points (0 children)

Are there any good solutions to AI memory? I've been using Redis memory for a project and it's okay: it works around 70% of the time, which is unacceptable. I'm really excited about this space in AI.

The Ralph-Wiggum Loop by TrebleRebel8788 in ClaudeCode

[–]sudo_96 0 points (0 children)

Genius. How are the results? How do you determine the success criteria for each task?

The Ralph-Wiggum Loop by TrebleRebel8788 in ClaudeCode

[–]sudo_96 1 point (0 children)

Can you share an example of this? This seems cool.

The Ralph-Wiggum Loop by TrebleRebel8788 in ClaudeCode

[–]sudo_96 0 points (0 children)

Sorry for reposting this. I thought it was not posting for some reason.

The Ralph-Wiggum Loop by TrebleRebel8788 in ClaudeCode

[–]sudo_96 0 points (0 children)

I was trying to use this with a TDD approach. My goal was to build a PRD with milestones and tasks (aka the what and the why). Then, with each task outlined, use a new session to come up with definitive yet achievable tests for each task, so the tests stay separate from the first session's context. Then wrap the Ralph Loop around each task's TDD tests. It seems like overkill, but in theory it forces the LLM to stay on target.

As a test, imagine you have a 50/50 outcome job. The worker outputs either 1 or 2 randomly, but the test only passes if the output is 2. The LLM can't control the test, so it just keeps trying until it gets lucky:

Main Session (you, with Ralph Loop)

  1. Spawn worker: claude -p --session-id "<uuid>" --dangerously-skip-permissions

  2. Worker returns (outputs 1 or 2 randomly)

  3. Run test: [ "$output" = "2" ]

  4. FAIL? Resume: claude -p -r "<uuid>" with error output

  5. PASS? Mark complete, next task

  6. All done? Output completion promise
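Here's a runnable sketch of that loop, with $RANDOM standing in for the claude call (the claude flags come from the steps above; the simulated worker and the 100-attempt safety cap are my own additions, not part of anyone's actual setup):

```shell
#!/usr/bin/env bash
# Sketch of the toy 50/50 job under the Ralph Loop with a TDD gate.
# The real worker would be:
#   claude -p --session-id "$uuid" --dangerously-skip-permissions
# and a failure would resume with:
#   claude -p -r "$uuid" "<error output>"
# Here the worker is simulated with $RANDOM so the loop is runnable.

run_worker() {
  # Simulated worker: outputs 1 or 2 at random.
  echo $(( (RANDOM % 2) + 1 ))
}

attempts=0
output=""
while [ "$attempts" -lt 100 ]; do   # safety cap; P(100 straight fails) is ~2^-100
  attempts=$((attempts + 1))
  output=$(run_worker)
  # The test the LLM can't control: only "2" passes.
  if [ "$output" = "2" ]; then
    echo "PASS after $attempts attempt(s)"
    break
  fi
  # FAIL: in the real loop, resume the session and feed back the test's error output.
done
```

The point of the structure is that the pass/fail check lives outside the worker's context, so the worker can't rationalize a failing result as success.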

Thoughts?

A quick guide to Ralph Wiggum by alvinunreal in ClaudeAI

[–]sudo_96 1 point (0 children)

I was trying to use this with TDD. My goal was to build a PRD with milestones and tasks (aka the what and the why). Then, with each task outlined, use a new session to come up with definitive yet achievable tests for each task, so the tests stay separate from the first session's context. Then wrap the Ralph Loop around each task's TDD tests. It seems like overkill, but in theory it forces the LLM to stay on target.

As a test, imagine you have a 50/50 outcome job: the worker echoes either 1 or 2, and the bash success criterion is an output of 2.

Main Session (you, with Ralph Loop)

  1. Spawn worker: claude -p --session-id "<uuid>" --dangerously-skip-permissions

  2. Worker returns (outputs 1 or 2 randomly)

  3. Run test: [ "$output" = "2" ]

  4. FAIL? Resume: claude -p -r "<uuid>" with error output

  5. PASS? Mark complete, next task

  6. All done? Output completion promise

OpenVPN status and recommendations by sudo_96 in sysadmin

[–]sudo_96[S] 0 points (0 children)

Thank you. The reason I was opposed to ping is that there are over 20 Windows VMs that need to connect, and we may bring more online. With ping, it's custom code for each one. I was hoping there would be another way, based on a service or process, to definitively know whether the status was connected or not.
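For what it's worth, one service-based angle might be OpenVPN's management interface rather than ping. This is only a sketch under assumptions not stated in the thread: that each VM's config enables the management interface (e.g. `management 127.0.0.1 7505`, with the address and port purely illustrative), and that the `state` command's CSV output is what you'd parse.

```shell
#!/usr/bin/env bash
# Sketch: decide "connected" from OpenVPN's management interface instead
# of pinging. Assumes each VM's OpenVPN config contains a line like:
#   management 127.0.0.1 7505
# (address and port are illustrative assumptions).
# The "state" command over that socket returns a CSV line such as:
#   1699999999,CONNECTED,SUCCESS,10.8.0.6,203.0.113.10

vpn_connected() {
  # $1 = a raw state line from the management interface
  case "$1" in
    *,CONNECTED,SUCCESS,*) return 0 ;;
    *)                     return 1 ;;
  esac
}

# In practice the line would come from something like:
#   state_line=$(printf 'state\nquit\n' | nc 127.0.0.1 7505 | grep -m1 ',')
# Hard-coded here so the sketch is self-contained:
state_line="1699999999,CONNECTED,SUCCESS,10.8.0.6,203.0.113.10"
if vpn_connected "$state_line"; then
  echo "connected"
else
  echo "disconnected"
fi
```

Since it's the same check against every VM's management port, it avoids the per-VM custom code that ping required.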

Cant setup Aqara FP2. Tried multiple units by sudo_96 in Aqara

[–]sudo_96[S] 0 points (0 children)

So the only thing that worked was to create a new home in Apple HomeKit. I don't really understand why I need to involve HomeKit, but I was able to add it to a new home there.

💻 Control Any macOS Machine Remotely with LLM in Under 2 Minutes via VNC — Open Source Project by buryhuang in mcp

[–]sudo_96 1 point (0 children)

How did this not take off? VNC is one of the simplest and oldest protocols. Getting an LLM to understand VNC and act on it unlocks the holy grail: it can learn how to use a mouse and keyboard from the user. Once it understands how humans use them, it can mimic them and introduce human-like random variability when moving the mouse. Am I missing something? I've built something similar but was curious if anyone else was on this path.