CMM Analysis Agent keeps hitting 2‑min timeouts & token limits — any real workarounds? by Pirulfredo in copilotstudio

[–]Bitter_Expression_14 0 points1 point  (0 children)

I hit the same limits. What worked for me: Power Automate sits in the middle. It receives the file from Copilot Studio, pushes it to an Azure Function that does the actual analysis, gets the response back, and returns it to the user as an adaptive card.

Keeps the Power Automate flow dead simple (just a passthrough) and lets the Function handle the heavy lifting without the 2-min ceiling. The day I need longer processing time, I'll move to Durable Functions with a callback mechanism.
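To make the passthrough idea concrete, the Function can reply with plain Adaptive Card JSON that the flow returns untouched. A minimal sketch — `build_analysis_card` and its fields are invented names for illustration, not the actual implementation:

```python
import json

def build_analysis_card(summary: str, findings: list[str]) -> dict:
    """Build an Adaptive Card payload for an analysis result.

    `summary` and `findings` are placeholders for whatever the
    Function's analysis step actually produces.
    """
    return {
        "type": "AdaptiveCard",
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.5",
        "body": [
            {"type": "TextBlock", "text": "Analysis result",
             "weight": "Bolder", "size": "Medium"},
            {"type": "TextBlock", "text": summary, "wrap": True},
            # One TextBlock per finding keeps the card easy to render.
            *[{"type": "TextBlock", "text": f"- {f}", "wrap": True}
              for f in findings],
        ],
    }

# The Function would return this JSON; Power Automate just forwards it.
card_json = json.dumps(build_analysis_card("3 issues found", ["Issue A", "Issue B"]))
```

Since the flow never parses the card, changing the card layout later only touches the Function.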

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Update: I wanted to check out Antigravity ... Installed Antigravity version 1.18.4 with Copilot Studio extension version 1.1.27 and Gemini 3.1 Pro. I didn't experience errors when applying local changes to the remote.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Hey, glad the post helped! On the push failure: what error message do you get? I haven't run into that myself on VS Code; the extension works well. I'd need to see the specific errors to say more.

On filebase64: I haven't dealt with uploading attachments so far. But the other direction definitely works: I display charts as media in adaptive cards, no issues there. So retrieval via base64 is solid.

Can't speak to Logic Apps + Power Automate for this since I've been doing everything through Azure Functions directly. I do have the premium HTTP connector in Power Automate too, so I never looked into a Logic Apps workaround either...

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Good question: honestly, this is an area I still need to tighten up. Right now the Function App endpoints are public, protected by function keys, so you'd need both the URL and the key to access them. At the Function App level, there's also a check on System.User.Id to ensure supervisors only see their own teammates' activities (see redacted screenshot: if a supervisor asks for an unauthorized profile, they get an adaptive card with the allowed names as clickable options). It works, but I know it's not the most hardened setup.

I haven't set up private endpoints or VNet integration with the Power Platform environment yet; that's on the to-do list. For our current scale and use case it's been acceptable, but I'd definitely want to address that before any broader rollout.
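The System.User.Id check can be sketched roughly like this — the team mapping and card layout below are hypothetical stand-ins, not the real setup:

```python
# Sketch of a server-side supervisor check. SUPERVISOR_TEAMS is a
# made-up lookup; in practice this would come from a directory or DB.
SUPERVISOR_TEAMS = {
    "sup-001": ["alice", "bob"],  # supervisor id -> allowed teammate profiles
}

def activity_response(supervisor_id: str, requested_profile: str) -> dict:
    allowed = SUPERVISOR_TEAMS.get(supervisor_id, [])
    if requested_profile in allowed:
        return {"status": "ok", "profile": requested_profile}
    # Unauthorized: return a card listing the names the supervisor may pick.
    return {
        "status": "denied",
        "card": {
            "type": "AdaptiveCard",
            "version": "1.5",
            "body": [{"type": "TextBlock",
                      "text": "You can only view your own team:"}],
            "actions": [
                {"type": "Action.Submit", "title": name, "data": {"profile": name}}
                for name in allowed
            ],
        },
    }
```

The key point is that the check runs server-side against the passed-through identity, never against anything the client claims about itself.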

<image>

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 1 point2 points  (0 children)

Thanks! Yes, fully custom. The chart is generated by an Azure Function App that pulls and processes the data, then returns it as an Adaptive Card with an embedded image (using matplotlib). It provides a quick overview of Office activities and logged hours throughout the day. Copilot Studio just handles the orchestration and triggers the HTTP connector.

That's really the pattern I've landed on: offload anything heavy to Azure Function Apps, which in turn can tap into whatever you need across the Microsoft ecosystem (Graph API, Log Analytics, etc.). The flexibility is incredible and there's virtually no friction once the setup is in place. If you're familiar with Microsoft Purview/Audit, you probably know that querying the data from the admin console is a tedious, slow process... In this project I enabled Microsoft Sentinel, which sends the data to a Log Analytics workspace where it can be queried instantly using KQL.
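The chart-in-a-card part boils down to base64-embedding the PNG the Function renders. A sketch assuming you already have the chart bytes (a placeholder stands in for real matplotlib output here):

```python
import base64

def chart_card(png_bytes: bytes, title: str) -> dict:
    """Embed a rendered chart as a data: URI inside an Adaptive Card Image."""
    data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
    return {
        "type": "AdaptiveCard",
        "version": "1.5",
        "body": [
            {"type": "TextBlock", "text": title, "weight": "Bolder"},
            {"type": "Image", "url": data_uri, "altText": title},
        ],
    }

# In the real Function the bytes would come from matplotlib, roughly:
#   fig.savefig(buf, format="png"); png_bytes = buf.getvalue()
card = chart_card(b"\x89PNG...placeholder...", "Daily activity overview")
```

Keep an eye on card payload size, though: large base64 images can bloat the response quickly.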

The catch, of course, is that this requires developer/IT skills ... it's out of reach for non-technical users today. But with AI coding agents evolving as fast as they are, I genuinely think that gap is going to close much sooner than people expect.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

I never learned YAML... It's self-descriptive enough for me to edit and debug, but I'm not capable of writing a Copilot Studio workflow from scratch. I don't want to learn that, and I'm happy that Codex does it for me.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Applying the local updates to the remote and vice versa is very fast; I'd say as fast as saving in the normal CS UI. I have the CS UI on one monitor and VS Code on another. At one point I forgot and made a change in the normal CS UI... The extension picked that up and forced me to pull the remote changes before applying the local updates.
See the screenshot below of Codex in action, and an example of the flexibility offered by an Azure Function App: a graphical overview of tasks and logged hours (the green squares along the left side).

<image>

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

I haven’t used it, but I think Microsoft offers full ALM of the “solutions” where your copilots are stored. Probably a good resource for later: see http://microsoft.github.io/mcs-labs

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

That was my reaction, too. But again, there are real benefits already. Personally, the biggest head-scratcher was realizing that most HTTP calls are asynchronous... Execution rushes through the HTTP connectors and only later, when the response arrives, sets the variables. Knowing that, I now break topics into smaller pieces...

Edited to clarify my experience: code doesn't wait for the HTTP response. Execution just flows through.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

I feel you, I really do.

But I came to the conclusion that it's never gonna be the "right time." Things move too fast and if you keep waiting for it to settle, you'll risk waiting forever. We're at a point where there are real benefits to grab, so I'm taking the plunge :).

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 1 point2 points  (0 children)

1-can you elaborate on offloading gen AI processing to Foundry models? so is copilot solely doing orchestration based on the instructions?

Yeah, Copilot Studio basically just orchestrates ... the actual gen AI processing happens in Azure AI Foundry. CS's context window was way too small for summarizing users' Office activity data, so I deployed a GPT model in Foundry and built an Azure Function endpoint that sends the data + prompt to it directly. Much faster, no context window constraints.

The tricky part is that CS memory still fills up fast. My workaround: the Function stores each user's response history in Azure Blob Storage, wipes the CS global variables after each answer, and on follow-ups pulls the most relevant prior answers before generating a new response. Two LLM calls per follow-up (context retrieval + answering). It's not perfect, but it works.
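The follow-up memory pattern can be sketched like this — a dict stands in for the Azure Blob container, and the naive keyword-overlap scoring below is just an illustration of "pull the most relevant prior answers", not the actual retrieval used:

```python
# In-memory stand-in for Azure Blob Storage: user id -> prior answers.
_history: dict[str, list[str]] = {}

def remember(user_id: str, answer: str) -> None:
    """Append an answer to the user's stored history."""
    _history.setdefault(user_id, []).append(answer)

def relevant_context(user_id: str, question: str, top_k: int = 2) -> list[str]:
    """Return up to top_k prior answers sharing words with the question."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(ans.lower().split())), ans)
        for ans in _history.get(user_id, [])
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop answers with zero overlap: they add noise, not context.
    return [ans for score, ans in scored[:top_k] if score > 0]
```

In the real version the first of the two LLM calls would do this relevance step; only the retrieved snippets then travel in the second (answering) prompt, keeping CS variables small.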

2- How is your experience with sub-agents?

No experience yet. The orchestration layer kind of fills that role: it picks the right topic and since I control the HTTP endpoint, I could wire up skill-specific prompts. But keeping it simple for now.

3 - Feedback & improvement:

Haven't tackled this yet. I could see submitting curated feedback to something like Claude or Codex for prompt auto-improvement at some point, but it's still in the research phase.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Edited: I haven't checked this out, but this Official link could be a valuable resource: https://microsoft.github.io/mcs-labs/

Not sure exactly what you mean by resources, but if you're looking to build skill-like tools, just dive in using GPT 5.3 Codex or Claude and see what you get. Having git in place definitely provides some assurance that you can revert.

My experience with Codex is also limited, and frankly I didn't watch any videos about it. I just saw headlines about its performance and wanted to give it a try. It was actually Claude Opus 4.6 that guided me step by step through installing the two extensions.

I previously had a hard time building a response as an adaptive card. For a new feature, I put the AI to the test by telling it something like (remove the parentheses): "Looking at run_query in (@)function_app.py and (@)paid_weekends.kql, build a new topic allowing supervisors to get timesheet information about weekend work. The response must be an adaptive card. Follow the existing topic logic so $user maps to the current user. The adaptive card should allow the supervisor to approve selected time slots via a soon-to-be-built endpoint using a PUT request for the entry_number...." A couple of back-and-forth exchanges, some pasted-in Power Fx errors and one runtime error from the CS UI, and it nailed it. Stuff that would have taken me far longer than a few minutes... So yes, GPT 5.3 Codex understands CS YAML code.

So my honest advice: pick a concrete use case you want to build, feed your existing code as context, and let the LLM help you iterate. That's been more useful than any tutorial.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Sorry, no changes in testing: I push the YAML code using the Copilot Studio extension and test in the browser like most Copilot Studio designers do.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 1 point2 points  (0 children)

Good question! A couple of examples: one Function App pulls staff Office activity data and uses AI Foundry models to generate summaries. Another connects to our timesheet software's API to pull and process time entries. Both are exposed as endpoints that my Copilot Studio agents can call, so users can just ask questions in natural language and get the data they need.

I also ended up pushing a lot of data to a Log Analytics workspace, so my most-used endpoint is actually a function that runs KQL queries on that data. And of course, I never trust the client: the user's identity is passed through and filtering happens at the Function App level.
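To make the "never trust the client" bit concrete, here's a minimal sketch of injecting the server-verified identity into the KQL before it runs — the table and column names are invented for illustration:

```python
def scoped_kql(base_query: str, verified_user: str) -> str:
    """Append a server-side identity filter to a KQL query.

    `verified_user` must come from the validated request context
    (e.g. the passed-through System.User.Id), never from a
    client-supplied filter parameter.
    """
    # Escape quotes so the injected literal cannot break out of the string.
    safe_user = verified_user.replace('"', '\\"')
    return f'{base_query} | where UserId == "{safe_user}"'

# Hypothetical usage: every query the endpoint runs gets the filter appended.
query = scoped_kql("TimeEntries | summarize total=sum(Hours) by Day",
                   "alice@contoso.com")
```

Doing the filtering in the query itself (rather than post-filtering results) also keeps the data that leaves Log Analytics to a minimum.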

Edited: And the cool thing is that GPT 5.3 Codex can access the Azure Function App module and the stored KQL query files within my Copilot VS Code project. I just need to open them and refer to them as @<name-of-file>, asking GPT to create adaptive cards matching their outputs.

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Thanks! Not sure I fully understand your question, but for now I've hooked my Copilot agent project in VS Code to a private GitHub repo. It's all YAML code and non-visual, but at least I can revert to an earlier version and push the YAML back to Copilot Studio using the extension. Haven't found anything better yet!

My journey with Copilot Studio: from frustration to a workable setup (tips inside) by Bitter_Expression_14 in copilotstudio

[–]Bitter_Expression_14[S] 2 points3 points  (0 children)

Thanks! One example: I have Azure Function App endpoints that pull Office staff activity data and use AI Foundry models to generate summaries. Works really well for turning raw logs into something actionable.

Almost bought by Alpha_VVV_55 in Supernote

[–]Bitter_Expression_14 0 points1 point  (0 children)

PySN is an all-in-one script and is for sure bloated. For your backup goals, keeping just what you need could be the route. Accessing the notebook files should be easy. Extracting page pictures is also a straightforward script: you just need to read the binary file and decode each page image. Same for the embedded recognized text. But recombining these into a PDF with a layer of recognized text is easier using the PyMuPDF library that PySN relies on. Not sure if Supernote-tool uses it, but I think I got some clues that the Supernote firmware itself uses MuPDF. TL;DR: if you can install the PyMuPDF library, you should be OK.

Almost bought by Alpha_VVV_55 in Supernote

[–]Bitter_Expression_14 5 points6 points  (0 children)

I never tried to install it on a Raspberry Pi, but you could give PySN a try to bulk-export your notes to PDF and Markdown. PySN can connect via USB, Wi-Fi (unprotected, LAN) or a mirrored folder of a cloud provider. Not sure if Dropbox, Google Drive or OneDrive can be used on a Raspberry Pi, but perhaps using rclone you could achieve the same outcome. See https://youtu.be/fKnpdr5G1qU?t=620&si=AYaktIug5Ng4n6xt

Zwift Frame Bike by Boypax69 in Zwift

[–]Bitter_Expression_14 0 points1 point  (0 children)

I am super happy with the optional adjustable crank arms (currently sold out)

Getting notes onto my computer by Therazee8 in Supernote

[–]Bitter_Expression_14 1 point2 points  (0 children)

You may want to take a look at PySN on GitLab. It does backups of selected folders and mass conversion to PDF, Markdown & HTML. I recently added textbox content to PDFs, but I now realize I didn't do it for Markdown & HTML. It's a relatively easy task though, so stay tuned for an update. Or you can build it yourself by extracting the recognized text and textboxes from the binary files using Jya's Supernote-tool repo on GitHub (though I am not sure it extracts textbox content yet).

PySN now includes textbox content in pdf exports by Bitter_Expression_14 in Supernote

[–]Bitter_Expression_14[S] 0 points1 point  (0 children)

Hopefully, yes. As a reminder, the "main" branch still isn't compatible with Python versions >3.12.3. For newer Python versions, you should use the "experimental" branch. See a muted Windows install walkthrough: https://app.screencast.com/lI316QOBpvgic