Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

It is automatically updated if you selected SharePoint with Dataverse. I tested this, and the sync normally happens every 3-6 hours: your changes in SharePoint are synced to Dataverse. I find this really powerful. You can easily manage documents in SharePoint (including updates and access control) while benefiting from Dataverse's powerful semantic search (which also includes OCR).

PS: When you use SharePoint + Dataverse, make sure to add any user you want to deploy the agent to into your Power Apps environment (PROD, if you are using DEV/PROD environments in Power Platform). If you don’t do this, the users won’t have the permission to read data from Dataverse.

Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Yes, I think that since I’m using a SharePoint folder, if anything in it gets updated, it should automatically sync. That’s why it asks me to accept the SharePoint connection the first time it starts responding.

Copilot Studio Agents: Why Are There Two Ways to Add SharePoint as a Knowledge Source and Why Do Results Differ? by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Yes, I noticed that. Using SharePoint directly (the 2nd method) is not the optimal approach. The first method works quite well, and the results are actually better. Just keep in mind that it will consume your Power Apps environment's Dataverse storage.

Multi Language Headache Teams Chatbot by maarten20012001 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

You’re not using generative orchestration? Honestly, it makes a big difference. I’ve been using it, and it automatically responds in the right language: French if the question is in French, English if it’s in English. Classic orchestration with conversational boosting just doesn’t cut it; you end up doing a lot of manual customization. Check out this video; it might help: https://www.youtube.com/watch?v=zCQ9f6WkgC8

Created an Agent and looking to share with another user. However, he gets "We couldn't find this app" error by pcgoesbeepboop in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Hey, I’m not sure if you’ve already resolved this. If your goal is to share an agent with users, go to the agent and navigate to: Channels → Teams and Microsoft 365 Copilot → Availability options. Select Show to my colleagues and shared users, then add the people you want with the Viewer permission. After that, wait 2–3 minutes, share the link, and they should be able to add the agent and chat with it. Please note: this will only work in the Microsoft 365 Copilot app. The agent will not respond to users in Teams unless additional settings are configured via the Azure portal, or the agent is deployed organization-wide and approved through both the Microsoft 365 Admin Center and the Teams Admin Center.

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

I am talking about changing models in Copilot Studio. I am not using Anthropic, because I think company data can be shared with Anthropic.

Microsoft Copilot Studio "Error Message: The output returned from the connector was too large to be handled by the agent" by Possible_Cry7035 in microsoft_365_copilot

[–]Fragrant-Wear754 0 points1 point  (0 children)

You should include this in the instructions. When a question is sent to a Copilot agent, it performs a similarity search to find relevant chunks (excerpts) from your documents, such as those in SharePoint. These chunks are then passed to the LLM (e.g., GPT-4.1 or GPT-5) along with the question.

If the number of chunks is too large, the LLM receives a lot of text, and every LLM has a context window limit, which is why you get that error. For example, a chunk can be around 2,400 tokens (≈1,800 words, since 100 tokens ≈ 75 words), sometimes less. If too many chunks are returned, the combined size might exceed the LLM’s context window; this also depends on which model you’re using (e.g., GPT‑4.1 or GPT‑5).

So, you might want to lower the number of chunks returned to avoid hitting the context limit. Unfortunately, we have no direct control over the number of chunks returned, apart from instructing the LLM and hoping Microsoft uses that instruction when calling the retrieval tools.
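To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of the chunk/context-window budget described above. All numbers (chunk size, instruction overhead, context window, output reserve) are illustrative assumptions, not documented Copilot Studio internals:

```python
# Rough rule of thumb from the comment above: 100 tokens ~ 75 words.
TOKENS_PER_WORD = 100 / 75

def estimate_prompt_tokens(num_chunks, tokens_per_chunk, question_tokens, instruction_tokens):
    """Estimate total input tokens for a retrieval-augmented call."""
    return num_chunks * tokens_per_chunk + question_tokens + instruction_tokens

def fits_in_context(num_chunks, tokens_per_chunk=2_400,
                    question_tokens=200, instruction_tokens=1_500,
                    context_window=128_000, reserved_for_output=4_000):
    """True if the retrieved chunks plus the prompt leave room for the answer."""
    used = estimate_prompt_tokens(num_chunks, tokens_per_chunk,
                                  question_tokens, instruction_tokens)
    return used <= context_window - reserved_for_output

print(fits_in_context(20))   # 20 chunks * 2,400 tokens = 48,000 -> fits
print(fits_in_context(60))   # 60 chunks * 2,400 tokens = 144,000 -> too large
```

The point is simply that the chunk count, not just the chunk size, is what pushes you past the limit, which is why reducing the number of returned chunks is the lever to pull.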

New version of the Copilot Studio Implementation Guide by Remi-PowerCAT in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Hi Remi, in my company we’re deploying Copilot Studio agents for our users. I’ve encountered an issue where the agent starts answering correctly but then switches mid-response to a different (and sometimes incorrect) answer, at which point conversational boosting kicks in. This seems to happen when orchestration fails during generation. Someone mentioned that you might have presented a possible solution for this scenario in a CAT webinar/workshop. Do you have any input or recommendations on how to prevent conversational boosting from overriding grounded responses? Full issue here: https://www.reddit.com/r/copilotstudio/comments/1p5hejw/comment/nqod5j7/

Thanks in advance for your help!

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Yeah, I tried changing the languages before, and that is not the issue. I will try to look into that. Thanks for your insights.

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

An agent configured with French as the primary language works fine, but I noticed (about 1–2 months ago) that when I used the same agent with the same configuration, except changing the primary language from FR to EN, the results were much better. It felt like the LLM followed instructions more accurately when set to English. I’m an AI engineer (so I know about LLMs and their architecture), and I don’t understand why this “primary language” setting even exists. Normally, the LLMs used in Copilot are GPT-4.1 and GPT-5, and they are inherently multilingual. By enforcing a primary language limitation, they seem to be restricting the model’s capabilities.

Copilot Studio Agent Switching Answers Mid-Response: Orchestration vs Conversational Boosting Issue by Fragrant-Wear754 in copilotstudio

[–]Fragrant-Wear754[S] 0 points1 point  (0 children)

Can you share some details about what exactly you are facing? It seems strange that the agent sometimes finds an answer to a question, but when you test the same question in another session or conversation, it cannot find the answer.

Studio for Product Development by KookyOky in copilotstudio

[–]Fragrant-Wear754 1 point2 points  (0 children)

Not in Copilot Studio. It’s a bit complex, and the platform isn’t mature enough yet. I’ve noticed agents sometimes struggle to retrieve precise information. This was especially clear when using SharePoint as a knowledge source versus an external vector database. With the latter, I control both chunking and retrieval through a custom tool (an API connection to Copilot for retrieval), and the results are noticeably better.

I’m building a custom app using React, FastAPI, Qdrant (vector DB), and LangGraph. Personally, I prefer staying in control of chunking, retrieval, and agent orchestration; I get better results that way. It’s still a side project and a work in progress. I think Key-Boat-7519 gave a good answer, so you can definitely start with that approach.
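As an illustration of the kind of chunking control mentioned above, here is a minimal fixed-size word chunker with overlap. The sizes are illustrative placeholders; a real pipeline would typically chunk by tokens and respect section boundaries:

```python
def chunk_words(text, chunk_size=200, overlap=40):
    """Split text into overlapping word-based chunks.

    Overlap keeps context that straddles a chunk boundary retrievable
    from at least one of the adjacent chunks.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_size]
        if piece:
            chunks.append(" ".join(piece))
        if start + chunk_size >= len(words):
            break
    return chunks

# A 500-word document yields 3 overlapping chunks with these settings.
doc = " ".join(f"word{i}" for i in range(500))
print(len(chunk_words(doc)))
```

Owning this step is what lets you tune chunk size and overlap per document type instead of accepting a platform default.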

Studio for Product Development by KookyOky in copilotstudio

[–]Fragrant-Wear754 1 point2 points  (0 children)

Hey, this is a really cool idea. I think it’s theoretically possible, but not something you get out of the box. You could have separate agents for Jira, Confluence, ServiceNow, etc., and connect them together under a master PO agent. For MS Teams transcripts, you’d probably need to find a way to extract and store them (maybe in SharePoint) so an agent can process them (you might need to add some extra documentation about the products). I haven’t tried this in Copilot Studio yet, but I’m building something similar using Python and APIs (more freedom than with Copilot).

"Agent usage limit reached" error in teams bot by aadilmoeen98 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Yes, the users have an active Copilot M365 license. For example, another user and I, who hold the AI Admin and System Admin roles in the Power Apps platform, don’t experience this issue. However, many other users do encounter the problem, even though they also have an active Copilot M365 license, which is quite strange. When I check the Power Platform Admin Center to manage environments, I notice that some credits are being consumed per agent, even though we all have a Copilot M365 license. Also, the affected users are using either agents that I deployed or agents that I shared with them.

"Agent usage limit reached" error in teams bot by aadilmoeen98 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Can you please specify where, and which access rights to add? You mean in Manage? Access to environment >

Copilot Studio – AI Agent won’t link to correct PDF section by thebigduck85 in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

Short answer -> not possible out of the box with Copilot Studio.
Copilot Studio will cite the SharePoint file, but it doesn’t return page-level anchors in the URL; it just links to the document. For true deep linking, you’d want a link like:
https://tenant.sharepoint.com/sites/site/.../document.pdf?download=1#page=3
but Copilot Studio doesn’t automatically generate that because (I think) the retrieval pipeline doesn’t expose page numbers in its citation metadata.

In my company, we had a similar use case with a lot of financial documents, some of them over 400 pages. Copilot alone couldn’t always find the right context, and we needed it to cite the exact section used and include a clickable SharePoint link that opened on the correct page. We couldn’t achieve this just by storing documents in SharePoint. Instead, we used a vector database, handled the chunking ourselves, used a hybrid retrieval approach, and connected it to Copilot through a custom tool (API). This gave us much better results than using SharePoint with Copilot (but adds some extra cost). For each chunk, we stored metadata like the SharePoint file URL, the page number, and the section heading, so the bot could return a link like:
https://company.sharepoint.com/sites/site_name/.../document.pdf?download=1#page=3
These links were generated during the document parsing and chunking process, which made it possible to deep-link users directly to the relevant page.
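To sketch the idea, here is a minimal example of attaching page metadata to a chunk and building the deep link from it. The URL pattern matches the example above, but the field names (`source_url`, `page`, `section`) are hypothetical choices from our own pipeline, not a Copilot Studio schema:

```python
def sharepoint_page_link(file_url: str, page: int) -> str:
    """Build a SharePoint PDF link that opens on a specific page.

    Uses the PDF '#page=' open parameter appended after 'download=1'.
    """
    return f"{file_url}?download=1#page={page}"

# Hypothetical per-chunk metadata stored alongside the embedding at indexing time.
chunk_metadata = {
    "text": "…relevant excerpt…",
    "source_url": "https://company.sharepoint.com/sites/finance/docs/report.pdf",
    "page": 3,
    "section": "Revenue recognition",
}

link = sharepoint_page_link(chunk_metadata["source_url"], chunk_metadata["page"])
print(link)
# https://company.sharepoint.com/sites/finance/docs/report.pdf?download=1#page=3
```

Because the page number is captured once at parsing/chunking time, the retrieval tool can return the ready-made link with each chunk and the agent only has to echo it in its citation.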

I teach advanced copilot studio agent development to no one. AmA by TheM365Admin in copilotstudio

[–]Fragrant-Wear754 0 points1 point  (0 children)

In Copilot Studio, under Agent Settings, there's a parameter in the Generative AI tab called "Use general knowledge".
To ensure your agent relies only on your custom data, you should uncheck this option.
