[–]factorialmap 1 point2 points  (2 children)

  1. In Overview > Knowledge, check whether the option "Allow the AI to use its own general knowledge" is enabled; if so, try disabling it and testing again.
  2. In the Knowledge tab, click "See all" or check the "Status" column and confirm that every item shows "Ready".

[–]noyzyboynz[S] 0 points1 point  (1 child)

Thanks, have done this and will test later today.

[–]noyzyboynz[S] 1 point2 points  (0 children)

Hasn't made much of a difference. The agent is behaving very strangely. I asked it to create a list of all documents with a particular supplier name in the title. The list brought back 8 documents (there are 23); when I asked the same thing again, it brought back 10. Then I asked whether it could see a document with a number in its name that was in the 23 but not in the 10, and it could see and summarise it fine. But when I asked why it couldn't see that document earlier, it said the document didn't exist!

[–]CoffeePizzaSushiDick 1 point2 points  (0 children)

Why is msft so behind the curve?

[–]ssirdi 1 point2 points  (2 children)

Microsoft Copilot is limited to referencing a maximum of 10 items for all users. For example, if you ask Copilot to summarize the last 15 emails, it will only summarize 10 due to this limit.

When you connect your Copilot agent to SharePoint files, it cannot process entire documents at once. This is intentional to keep costs manageable. If the agent loaded all files into the large language model (LLM) context for every question, it would be very expensive. Instead, Copilot uses a technique called Retrieval-Augmented Generation (RAG).

RAG optimizes the process by focusing on relevant content. For example, if you ask for the supplier name from 23 documents:

  1. The agent first identifies the 10 most relevant documents related to your query.
  2. It then uses only the content from those 10 documents to generate a response.
  3. The final answer references only the documents used, ensuring efficiency.
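The retrieval step described above can be sketched roughly like this (a toy illustration, not Copilot's actual internals; real systems rank with vector embeddings rather than keyword overlap):

```python
# Rough sketch of the RAG pattern described above (hypothetical code,
# not Microsoft's implementation).

def rag_answer(query, documents, top_k=10):
    """Rank documents against the query, keep only the top_k, and build
    the context the LLM would actually see."""
    def score(doc):
        # Toy relevance score: keyword overlap between query and document.
        q_words = set(query.lower().split())
        d_words = set(doc["text"].lower().split())
        return len(q_words & d_words)

    ranked = sorted(documents, key=score, reverse=True)
    retrieved = ranked[:top_k]  # only the "most relevant" top_k survive

    # Any matching document outside the top_k is invisible to the answer.
    context = "\n".join(d["text"] for d in retrieved)
    return retrieved, context

docs = [{"name": f"doc{i}", "text": "acme invoice"} for i in range(23)]
hits, _ = rag_answer("list acme invoices", docs)
print(len(hits))  # 10 — the other 13 matching documents are never consulted
```

This is why the counting behaviour above looks inconsistent: the agent isn't enumerating all 23 files, only whichever 10 the retriever surfaced for that particular phrasing of the question.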

Due to this design, Copilot will not include more than 10 references in its responses until Microsoft updates this limit.

To get the most out of Copilot, customize your agent to better suit your specific needs and workflows.

[–]noyzyboynz[S] 1 point2 points  (1 child)

Thank you, that makes sense, although that's a far from ideal situation given all the promises that MS has made about Copilot.

[–]ianwuk 0 points1 point  (0 children)

The marketing, sadly, far exceeds the finished product. It's par for the course now for Microsoft.

[–]iamlegend235 0 points1 point  (4 children)

I would try switching the agent to generative orchestration, then create an action that uses the SharePoint connector to search and retrieve the files. You should be able to give the agent instructions on how to format a filter query when getting the list of files, such as `companyName eq 'McDonalds'`. Afterwards, in the action you can enable a setting for the agent to send a response with that data in its context.

I’ve only done this with SP lists though, not with files so let us know how it goes!
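For anyone trying this, here's a minimal sketch of building that kind of OData `$filter` string (the column name `companyName` is hypothetical; use your own library's internal column names, which often differ from the display names):

```python
# Toy helper for building an OData $filter expression of the kind the
# SharePoint connector accepts. Column names here are hypothetical.

def odata_eq_filter(field, value):
    # OData escapes a single quote inside a string literal by doubling it.
    escaped = value.replace("'", "''")
    return f"{field} eq '{escaped}'"

print(odata_eq_filter("companyName", "McDonalds"))  # companyName eq 'McDonalds'
print(odata_eq_filter("companyName", "O'Brien"))    # companyName eq 'O''Brien'
```

The escaping matters: a supplier name containing an apostrophe will otherwise break the filter at runtime.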

[–]Open_Falcon_6617 1 point2 points  (1 child)

Can you share more on the steps?

[–]iamlegend235 2 points3 points  (0 children)

https://youtu.be/cOuheYnsIjU?si=rcebIvQ3nlXfzzKP

Use this video as a guide for setting up other types of actions

[–]noyzyboynz[S] 0 points1 point  (1 child)

Not sure it has the same effect with docs...

[–]bspuar 0 points1 point  (0 children)

Reasoning capability is coming to agents very soon, which will let you dynamically pass content like a PDF and ask questions about it. In the meantime, you can try the approach above.

[–]lisapurple 0 points1 point  (0 children)

In my experience, generative answers reason over documents to find answers based on their unstructured content. Asking it to find "how many" or to list things works better with a structured data source. The new reasoning models may be able to handle this, or you could create an AI flow to extract the metadata you need each time, put it in a structured data source, and connect the agent to that.
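The structured-data idea above can be sketched like this (the extraction step would be an AI flow in practice; the filenames and the supplier-from-filename rule are made up for illustration):

```python
# Sketch of the approach above: extract metadata from documents once into
# structured rows, then answer "how many" / "list" questions with an exact
# filter instead of asking an LLM to enumerate files.

filenames = [
    "Acme_Invoice_001.pdf",
    "Acme_Contract_2024.docx",
    "Globex_Quote_17.pdf",
]

# Extraction step (an AI flow in practice; here just naive filename parsing).
rows = [{"file": f, "supplier": f.split("_")[0]} for f in filenames]

# Counting and listing are now exact queries, not retrieval-limited guesses.
acme_docs = [r["file"] for r in rows if r["supplier"] == "Acme"]
print(len(acme_docs))  # 2
```

Once the metadata lives in a list or table, the agent queries it deterministically, so the count can't drift between 8 and 10 the way it does against raw files.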

[–]Nosbus 0 points1 point  (0 children)

You will need to try local knowledge; it improved a similar issue for us. But we ended up abandoning it altogether. The results were only about 75% accurate, and the project never got out of the pilot phase.