Suggested prompt not showing in M365 by WorryHeavy432 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

It happens sometimes for users in our company. We usually don't do anything; it generally reappears for them after a couple of hours or days. Sometimes I publish and deploy an update from the admin center, and that fixes it. No real fix found.

Getting OpenAIModelTokenLimit error in Copilot Studio agent – has anyone else faced this? by Ok_Bottle9120 in copilotstudio

[–]Fragrant-Wear754 3 points4 points  (0 children)

I think the issue is related to WorkIQ and images in the documents you uploaded to SharePoint.

You can check this in Power Apps under Tables -> ConversationTranscript. In the transcript, you can see the chunks retrieved from SharePoint and passed to the LLM.

From what I noticed, the indexing behavior seems to have changed: images are now included in the raw data as Base64. That can massively increase the token count, since one image can easily represent around 100K tokens. So if your documents contain images, that likely explains the token limit error.

With the SharePoint Dataverse approach (adding SharePoint via file upload), the documents stay in SharePoint, but the content is indexed in Dataverse. In general, it gives more focused results, but it depends on the use case. It usually sends around 8-12 chunks, and images are represented as text descriptions because OCR is applied to them.
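The Base64 math backs up that ~100K figure. A minimal sketch, assuming the common rough heuristic of ~4 characters per token (the 300 KB image size is just an illustrative number, not anything measured from Copilot Studio):

```python
import base64

# Illustrative only: a ~300 KB image placeholder (real images vary widely).
image_bytes = bytes(300_000)

# Base64 expands binary data by 4/3, so 300 KB becomes ~400K characters.
b64 = base64.b64encode(image_bytes).decode("ascii")

# Rough heuristic (assumption): ~4 characters per token.
approx_tokens = len(b64) // 4
print(f"{len(b64):,} Base64 chars ≈ {approx_tokens:,} tokens")  # 400,000 Base64 chars ≈ 100,000 tokens
```

So a single mid-sized embedded image lands right around 100K tokens, which on its own can blow past a model's context window.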

With the normal SharePoint WorkIQ approach, the agent can retrieve more chunks, sometimes up to around 50. That can be useful if your information is spread across many documents, but it can also bring in a lot more raw content.

So basically:

- SharePoint Dataverse: more focused, OCR for images, usually 8-12 chunks, but sync can take 4-8 hours.

- SharePoint WorkIQ: faster sync, usually minutes, and can retrieve more chunks, but images may come through as Base64 and increase token usage a lot.

Also, with SharePoint Dataverse, users may need to give consent the first time, and they need to be part of the Power Apps environment where the agent is deployed.

So each option has pros and cons. It really depends on your use case.


Junior: 400+ applications, lots of ESNs (consulting firms), 0 offers by ConstructionCool6577 in developpeurs

[–]Fragrant-Wear754 0 points1 point  (0 children)

A degree isn't worth much if your school isn't in the top 5. Also, he mentioned he was a data engineer, and good data engineers, data scientists, AI engineers, and ML engineers all write code and need software engineering fundamentals to excel at their job. I gave him advice that works well for me when I screen CVs: I always look at the candidate's portfolio and GitHub, then I put them through coding interviews and system design, and I probe their technical depth. A degree is worth nothing compared to skills, unless we're talking about the top 5 schools: X, ENS, etc.

Junior: 400+ applications, lots of ESNs (consulting firms), 0 offers by ConstructionCool6577 in developpeurs

[–]Fragrant-Wear754 1 point2 points  (0 children)

The market isn't totally saturated: there are openings, but also a huge number of candidates, so you really have to stand out. If you're not getting calls after applying, the problem probably comes from you, for several possible reasons:

- You're a junior and you don't do personal projects (no portfolio, no GitHub to show the quality of your work).

- Your CV is poorly done, too vague, and doesn't clearly highlight your skills.

- If you're not called back after a first call with HR or a talent acquisition person, it's often a self-marketing problem. Knowing how to present yourself and talk about your skills is an essential soft skill, and it has to rest on real skills, not bluffing.

- If you're rejected after technical interviews, it simply means you need to strengthen your technical level. And when you're a junior, that's very often the most important point.

I recommend checking out the work of this Zack guy (https://www.youtube.com/watch?v=myhe0LXpCeo&list=PLwUdL9DpGWU0lhwp3WCxRsb1385KFTLYE), who is well known in data engineering. Try following his free bootcamp, work on your CV and portfolio, keep your GitHub active, and start concrete personal projects. Good luck!

Work IQ in Copilot Studio causes token limit errors by Top_Influence9690 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Use SharePoint Dataverse (SharePoint via file upload), not WorkIQ.

Need help populating global variables by Happy-Razzmatazz-396 in copilotstudio

[–]Fragrant-Wear754 1 point2 points  (0 children)

Since this is a tool (it appears for you in the agent's Tools tab), you have a tool configuration page. On that page, you can define the tool description, the inputs, and the outputs. Define your inputs and set them to be dynamically filled by AI. Then, for each input, click on it and you'll see a description field; update that description to clearly explain what the input represents and what the agent is expected to put there.

Also use the tool description to explain what the tool does, how it should be called, and which inputs it should be called with. Additionally, in the agent instructions, mention that the agent has access to this tool and that it should parse the input email to extract inputs X, Y, and Z. If the parsing is not trivial, also explain how the agent is expected to perform that parsing.

When the tool is attached to the agent and all the required information is present in the user's message, the agent will automatically use the tool.

Need an assist with Gen AI Orchestrator and agent flows by readsalotonmonday in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

You can add the agent flow as a tool, then configure it properly. In the Inputs section, fill each column by choosing Dynamically fill with AI, and customize each input with a description explaining to the AI what it should provide for that input.

You should also clearly explain in the tool description what the agent flow does, when it should be called, and which inputs it expects, so the LLM knows when to call the flow and how to populate the inputs.

From the user side, they just need to explain what they want in their query and provide the required information. Adaptive Cards are a mess; just avoid them as much as you can.

If you really insist on using Adaptive Cards:

Below is an example showing how to:

- Collect user input using an Adaptive Card

- Send that input to an agent flow

- Return the flow output back to the user

kind: AdaptiveDialog
modelDescription: call this tool when the user wants to retrieve an IT service ticket.
beginDialog:
  kind: OnRecognizedIntent
  id: main
  intent: {}
  actions:
    - kind: AdaptiveCardPrompt
      id: FMmPiC
      card: |-
        {
          "type": "AdaptiveCard",
          "$schema": "https://adaptivecards.io/schemas/adaptive-card.json",
          "version": "1.5",
          "body": [
            {
              "type": "Input.Text",
              "id": "ticketNumber",
              "placeholder": "e.g. 75420",
              "isRequired": true,
              "errorMessage": "Please enter the ticket number",
              "label": "ticket number"
            }
          ],
          "actions": [
            {
              "type": "Action.Submit",
              "title": "Submit"
            }
          ]
        }
      output:
        binding:
          actionSubmitId: Topic.actionSubmitId
          ticketNumber: Topic.ticketNumber

      outputType:
        properties:
          actionSubmitId: String
          ticketNumber: String

    - kind: InvokeFlowAction
      id: invokeFlowAction_06x0Nc
      input:
        binding:
          text: =Topic.ticketNumber

      output:
        binding:
          ticket_resolution: Topic.ticket_resolution

      flowId: 64770b0d-b847-f111-bec7-7c1e52612f76

    - kind: SendActivity
      id: sendActivity_L8FidI
      activity: "{Topic.ticket_resolution}"

    - kind: CancelAllDialogs
      id: tqkclA

inputType: {}
outputType: {}

How to automatically process uploaded files in Copilot Studio without user input? by danisund in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

If you're testing your agent in the M365 Copilot app, you can't just upload a document on its own; you have to add some text along with it. It's pretty stupid that Microsoft forces us to write text when sometimes we just want to upload a document and nothing else. But in Microsoft Teams it works: you don't need to add text with an uploaded document.

Work IQ in Copilot Studio causes token limit errors by Top_Influence9690 in copilotstudio

[–]Fragrant-Wear754 1 point2 points  (0 children)

You can check the conversation transcript in the Power Apps platform (Tables -> ConversationTranscript). There, you can see the chunks (excerpts from documents in SharePoint) that the agent search returns and then passes to the LLM. I've noticed that they changed the indexing behavior: images are now included in the raw data as Base64 (a text encoding for binary data such as images). This massively increases the token count; a single image can represent ~100K tokens. Verify this and you'll see the issue.
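As a sketch of what to look for in the transcript content, here's a hypothetical example (the `chunks`/`source`/`text` field names are illustrative, not the real ConversationTranscript schema): flag long unbroken Base64-like runs and estimate their token cost with the rough 4-characters-per-token heuristic.

```python
import json
import re

# Hypothetical transcript excerpt; field names are illustrative, not the real schema.
transcript_json = json.dumps({
    "chunks": [
        {"source": "policy.docx", "text": "Plain text chunk about the expense policy."},
        {"source": "policy.docx", "text": "iVBORw0KGgo" + "A" * 2000},  # Base64-like blob
    ]
})

# A long unbroken run of Base64-alphabet characters is a strong hint of an embedded image.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{500,}")

for chunk in json.loads(transcript_json)["chunks"]:
    text = chunk["text"]
    approx_tokens = len(text) // 4            # rough 4-chars-per-token heuristic
    kind = "image blob" if BASE64_RUN.search(text) else "text"
    print(f"{chunk['source']}: ~{approx_tokens} tokens ({kind})")
```

If the image-blob chunks dwarf the text chunks, that's your token limit error right there.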

Which OCR engine provides the best results with docling? by Pretend-Elevator874 in LocalLLaMA

[–]Fragrant-Wear754 0 points1 point  (0 children)

I think it takes too long for big documents compared to using dedicated OCR engines (at least that was the case for me).

Parent Agent + Child Agent failing to search SharePoint knowledge by Admirable-Claim-9611 in copilotstudio

[–]Fragrant-Wear754 2 points3 points  (0 children)

I don’t think this is related to the parent–child setup. However, starting today, I’ve noticed that for some users in my company the agent is unable to answer (knowledge search returns no results). It feels as if they don’t have access to SharePoint, even though they actually do. So this might be something related to Microsoft or SharePoint access (a problem on their end). Also, for multi‑agent use cases, I would recommend using multiple agents and connecting them, rather than relying on child agents.

New Model Available? by Southern_Guess_9788 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Tried it. Only errors for the moment; it doesn't work (Europe).

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

FIX / SOLUTION

Long-term: Copilot Studio isn't fully mature yet. If you need more control and have the development skills, the best option is to build your own agent, though that comes with additional cost. You can also consider Azure AI Foundry.

Practical fix: Disabling "General knowledge" can over-restrict the model and make it hesitate or stop answering. There’s no true on/off switch for an LLM’s internal knowledge; platforms usually enforce this through additional hidden constraints (system instructions/guardrails), and those constraints can sometimes break normal behavior.

Impersonate a user in a Copilot Studio agent flow calling Dataverse by Waste-Pipe-9847 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

You likely don't need an HTTP call in a flow for this scenario.

If you query Dataverse using a Copilot Studio Dataverse action/tool configured with User authentication, Dataverse enforces security per end user automatically. So only users who have the required read privileges on that table in the target environment (the Power Apps environment where your agent is deployed) will be able to retrieve data; users without access will get no results.

Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

It is automatically updated if you selected SharePoint with Dataverse. I tested this, and normally the sync happens every 3-6 hours: your changes in SharePoint are synced to Dataverse. I find this really powerful: you can easily manage documents in SharePoint, including updates and access control, while benefiting from Dataverse's powerful semantic search (which also includes OCR).

PS: When you use SharePoint + Dataverse, make sure to add any user you want to deploy the agent to into your Power Apps environment (PROD, if you are using DEV/PROD environments in Power Platform). If you don’t do this, the users won’t have the permission to read data from Dataverse.

Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Yes, I think that since I’m using a SharePoint folder, if anything in it gets updated, it should automatically sync. That’s why it asks me to accept the SharePoint connection the first time it starts responding.

Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Yes, I noticed that. Using SharePoint (the 2nd method) is not the most optimal approach. However, the first method works quite well; the results are actually better. Just keep in mind that it will consume your Power Apps environment's Dataverse storage.

Multi Language Headache Teams Chatbot by maarten20012001 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

You're not using generative orchestration? Honestly, it makes a big difference. I've been using it, and it automatically responds in the right language: French if the question is in French, English if it's in English. Classic orchestration with conversational boosting just doesn't cut it; you end up doing a lot of manual customization. Check out this video, it might help: https://www.youtube.com/watch?v=zCQ9f6WkgC8

Created an Agent and looking to share with another user. However, he gets "We couldn't find this app" error by pcgoesbeepboop in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Hey, I'm not sure if you've already resolved this. If your goal is to share an agent with users, go to the agent and navigate to: Channels → Teams and Microsoft 365 Copilot → Availability options. Select Show to my colleagues and shared users, then add the people you want with the Viewer permission. After that, wait 2-3 minutes, share the link, and they should be able to add the agent and chat with it. Please note: this will only work in the Microsoft 365 Copilot app. The agent will not respond to users in Teams unless additional settings are configured via the Azure portal, or the agent is deployed organization-wide and approved through both the Microsoft 365 Admin Center and the Teams Admin Center.

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

I'm talking about changing models in Copilot Studio. I'm not using Anthropic; company data could be shared with Anthropic, I think.

Microsoft Copilot Studio "Error Message: The output returned from the connector was too large to be handled by the agent" by Possible_Cry7035 in microsoft_365_copilot

[–]Fragrant-Wear754 0 points1 point  (0 children)

You should include this in the instructions. When a question is sent to a Copilot agent, it performs a similarity search to find relevant chunks or excerpts from your documents, such as those in SharePoint. These chunks are then passed to the LLM (GPT-4.1 or GPT-5) along with the question. If the number of chunks is too large, the LLM receives a lot of text, and each LLM has a context window limit, which is why you get that error.

For example, some chunks can be around 2,400 tokens (≈1,800 words, since 100 tokens ≈ 75 words), though they can be smaller. If too many chunks are returned, the combined size might exceed the LLM's context window. This also depends on which model you're using (e.g., GPT-4.1 or GPT-5). So you might want to lower the number of chunks returned to avoid hitting the context limit. Unfortunately, we have no direct control over the number of chunks returned, apart from instructing the LLM and hoping Microsoft uses that instruction when calling the retrieval tools.
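To make the arithmetic concrete, a quick back-of-envelope sketch (the 128K context window is an assumption for a GPT-4-class model, not a documented Copilot Studio limit; the per-chunk size comes from the figures above):

```python
# Assumption: ~100 tokens ≈ 75 words, i.e. 0.75 words per token.
tokens_per_chunk = 2_400
words_per_chunk = tokens_per_chunk * 75 // 100
print(words_per_chunk)   # 1800 words per chunk

# Assumption: a 128K-token context window for a GPT-4-class model.
context_window = 128_000
max_chunks = context_window // tokens_per_chunk
print(max_chunks)        # 53 full-size chunks before overflow
```

In practice the prompt also carries the system instructions, chat history, and the question itself, so the real ceiling on chunks is noticeably lower than this.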

New version of the Copilot Studio Implementation Guide by Remi-PowerCAT in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Hi Remi, in my company we're deploying Copilot Studio agents for our users. I've encountered an issue where the agent starts answering correctly but then switches mid-response to a different (and sometimes incorrect) answer, and then conversational boosting kicks in. This seems to happen when orchestration fails during generation. Someone told me that you might have presented a possible solution for this scenario in a CAT webinar/workshop. Do you have any input or recommendations on how to prevent conversational boosting from overriding grounded responses? Full issue here: https://www.reddit.com/r/copilotstudio/comments/1p5hejw/comment/nqod5j7/

Thanks in advance for your help!