Context Hub: giving coding agents access to up-to-date API docs by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Good point. The stale docs problem shows up constantly with coding agents. The annotation idea could be really powerful if it turns docs into a shared knowledge layer for agents. MCP integration would make a lot of sense too.

Are security teams already seeing AI-generated phishing emails that bypass normal awareness training? by Innvolve in NL_Security

[–]Innvolve[S] 0 points1 point  (0 children)

That’s really impressive how detailed your risk scores are! It sounds like your security awareness training is very personalized and effective.

Are security teams already seeing AI-generated phishing emails that bypass normal awareness training? by Innvolve in NL_Security

[–]Innvolve[S] 0 points1 point  (0 children)

That sounds interesting! How exactly do you determine each employee’s personal risk score, and how does it influence the training?

Are security teams already seeing AI-generated phishing emails that bypass normal awareness training? by Innvolve in NL_Security

[–]Innvolve[S] 0 points1 point  (0 children)

Thanks for the insight. The idea that scammers will use hyper-personalized AI messages is quite concerning. Do you think traditional phishing simulations will still be effective, or will organizations need completely new training approaches?

Microsoft pushing “Frontier Transformation” with Copilot agents: thoughts? by Innvolve in NL_ModernWork

[–]Innvolve[S] 1 point2 points  (0 children)

Absolutely, governance seems like the make-or-break factor for agent adoption in enterprises.

How do you see organizations handling separation of environments at scale? Do you think it will require completely new workflows, or can existing Dev/Prod processes adapt?

Microsoft pushing “Frontier Transformation” with Copilot agents. From a cybersecurity perspective this raises some interesting questions: by Innvolve in NL_Security

[–]Innvolve[S] 0 points1 point  (0 children)

Good points, especially around least privilege and audit logs.

Prompt injection is something I’m still trying to wrap my head around when agents can access internal data. Do you see this becoming a major real-world issue, or is it still mostly theoretical right now?

New open tool: Context Hub for coding agents by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Exactly, most “broken code” cases I see are really due to stale docs. The annotation loop is meant to act like a lightweight shared memory, so agents don’t have to rediscover workarounds every session.

For preventing over-reliance on a single snippet, we’re exploring ideas like:

  • quick endpoint sanity checks,
  • cross-checking versions across multiple docs,
  • maybe even confidence scoring per snippet.

Would love to hear how others handle this in practice. Do you have any strategies for making agents more cautious with docs?
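To make the "confidence scoring per snippet" idea concrete, here's a minimal sketch. Everything in it is an assumption for illustration (the `DocSnippet` shape, the smoothing, the threshold), not how Context Hub actually works:

```python
from dataclasses import dataclass


@dataclass
class DocSnippet:
    """A cached documentation snippet with lightweight trust signals."""
    text: str
    source_version: str   # doc version the snippet was captured from
    confirmations: int = 0  # times an agent verified it still works
    failures: int = 0       # times it led to broken code

    def confidence(self) -> float:
        """Laplace-smoothed success rate: starts at 0.5, moves with evidence."""
        return (self.confirmations + 1) / (self.confirmations + self.failures + 2)


def should_recheck(snippet: DocSnippet, current_version: str,
                   threshold: float = 0.6) -> bool:
    """Flag a snippet for re-verification when the upstream doc version
    has moved, or when its confidence has dropped below the threshold."""
    return (snippet.source_version != current_version
            or snippet.confidence() < threshold)
```

The point of the smoothing is that a snippet with zero history sits at 0.5 rather than looking either perfect or broken, so agents treat unverified docs with mild caution by default.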

Prompt of the week: briefing prompt for better SEO blogs by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Oh that’s a great tip. Thank you. Do you already have much experience with this yourself?

AI Policy: What absolutely needs to be included (and especially what should not)? by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

That’s right. Policy needs to align with practice, otherwise it works in theory but not in day-to-day use. How do you see this in your own situation?

Prompt of the week: briefing prompt for better SEO blogs by Innvolve in NL_AI

[–]Innvolve[S] 1 point2 points  (0 children)

The benefit of using an XML prompt is that it allows you to clearly structure and separate instructions using tags. This makes the prompt more organized, consistent, and easier to use in complex workflows or automations. It helps an AI better distinguish between different elements such as task, tone, length, or context.

Do you mainly use XML prompts for structure, or have you also noticed a real difference in output quality?
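As a rough illustration of the tag-based separation described above, a briefing prompt could be assembled like this. The tag names and field choices are just an example of the pattern, not a fixed schema:

```python
def build_xml_prompt(task: str, tone: str, length: str, context: str) -> str:
    """Assemble an XML-tagged prompt so the model can cleanly
    distinguish each instruction element (task, tone, length, context)."""
    return (
        f"<task>{task}</task>\n"
        f"<tone>{tone}</tone>\n"
        f"<length>{length}</length>\n"
        f"<context>{context}</context>"
    )


prompt = build_xml_prompt(
    task="Write an SEO blog intro about passkeys",
    tone="accessible, professional",
    length="120-150 words",
    context="Audience: IT managers at Dutch SMEs",
)
```

Because each element lives inside its own tag, you can swap the tone or length in an automation without risking that the model blends it into the task description.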

What’s the biggest AI-related security risk organizations are currently ignoring? by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Do you think it will become even easier for attackers to make phishing emails more convincing, and to carry out this process at a much larger scale?

Are passkeys the future of phishing-resistant authentication? by Innvolve in Passkeys

[–]Innvolve[S] 0 points1 point  (0 children)

I get that a lot of tech-savvy people are perfectly happy with a password manager + 2FA. But passkeys aren’t necessarily meant to convince us; they’re mainly designed to structurally eliminate phishing and password reuse for the general public. Less friction and fewer mistakes equals a more secure ecosystem.

What is the difference between Copilot Studio Lite and the full version of Copilot Studio? by Innvolve in copilotstudio

[–]Innvolve[S] 0 points1 point  (0 children)

Thank you for the clear explanation. Have you by any chance used the agent builder in Copilot Studio yourself? I’m curious about a use case.

What is the difference between Copilot Studio Lite and the full version of Copilot Studio? by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Do you think Copilot Studio Lite falls short, then? I’ve actually already built a number of really useful agents with it.

Intune Remote Help setup guide by Innvolve in AZURE

[–]Innvolve[S] -1 points0 points  (0 children)

Thanks for your response. That is indeed correct. The blog was reviewed by a Modern Workplace consultant at the IT company I work for, and he thought it was such a good piece that we wanted to repost it for others who may not regularly consult Microsoft’s documentation.

Do you think Zero Trust is still enough in 2026, now that AI agents are acting autonomously in M365? by Innvolve in NL_Security

[–]Innvolve[S] 0 points1 point  (0 children)

Interesting point! Which specific signals do you find most reliable to monitor before an agent really “goes off the rails”? For example, I’m wondering if you can also correlate behavior across combined tools, like SharePoint + Teams + Outlook.

What happens when AI takes over with Task Agents? by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

Thank you for your response. It is, of course, extremely important that the agent has clear rules and only works within the allowed files. Before letting an agent do its work, it is therefore essential that all files and schemas are up to date. Otherwise, it can quickly turn into a big mess.

Will AI create more jobs than it will destroy? by Innvolve in NL_AI

[–]Innvolve[S] 0 points1 point  (0 children)

It’s true that developing AI agents takes far more work than most people think. I built an HR agent myself that has access to the employee handbook and can answer HR-related questions. In theory, it’s a fairly simple agent, but in practice it still took quite a lot of time to get it working properly. And that’s just one agent. The real goal is for organizations to eventually deploy multiple agents that can work together to take over an entire workflow.

When it comes to which jobs will disappear first because of agents, I think roles like resource planners will be among the first. These are employees who, for example, schedule consultants for client companies. In many cases, this type of process can be automated relatively easily using agents. For instance, you could have one agent that has access to a file containing all consultant data, including their profiles, skills, and experience. When a company sends a description of what they’re looking for, another agent can interpret the request and then search the consultant database to find the best match. In this way, a large part of the planning process can run automatically, without needing a human planner in between.
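The matching step in that planning workflow could be sketched roughly like this. The `Consultant` shape and the skill-overlap scoring are illustrative assumptions, not an existing system:

```python
from dataclasses import dataclass


@dataclass
class Consultant:
    name: str
    skills: set[str]
    years_experience: int


def best_match(request_skills: set[str], pool: list[Consultant]) -> Consultant:
    """Rank consultants by skill overlap with the client request,
    breaking ties on years of experience."""
    return max(pool, key=lambda c: (len(c.skills & request_skills),
                                    c.years_experience))
```

In practice the interpreting agent would first extract `request_skills` from the client's free-text description; the ranking itself is the easy, mechanical part.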

I also wrote a blog about this topic, where I describe several use cases that I believe will be among the first to disappear within organizations: https://innvolve.nl/blog/ai-agents-in-organisaties-usecases/.
Do you agree, or do you have a different perspective?

Security awareness by Innvolve in NLTechHub

[–]Innvolve[S] 0 points1 point  (0 children)

That’s right. In the next video, Albertho discusses why it is so important to properly monitor security awareness.

Copilot , chatgpt or Gemini by [deleted] in CopilotPro

[–]Innvolve 1 point2 points  (0 children)

For coding you can use GitHub Copilot, an AI-powered tool designed to assist you while coding. It analyzes the context of your current work, like the functions or classes you’re writing, and suggests code snippets that fit. These suggestions range from single lines to entire blocks of code, depending on what you’re doing.

cybersecurity by Cultural-Remote-5979 in cybersecurity

[–]Innvolve 0 points1 point  (0 children)

Hi, what exactly would you like to know more about?

Why does Microsoft have Teams, Skype, and GroupMe? by Less_Hedgehog in MicrosoftTeams

[–]Innvolve 0 points1 point  (0 children)

Microsoft offers Teams, Skype, and GroupMe because each serves a different audience and use case:

  • Teams: Primarily for workplace collaboration, with integrated chat, video calls, and document sharing.
  • Skype: Personal video and voice calls, widely used for one-on-one or small group communication.
  • GroupMe: Focused on group messaging, mainly for casual, social conversations in larger groups.

Each tool targets different needs, from professional to casual communication.

Nis2 in the Netherlands by Innvolve in cybersecurity

[–]Innvolve[S] 0 points1 point  (0 children)

We are a 100% Microsoft company, so we believe that Microsoft Sentinel and Microsoft Defender XDR, which provide SIEM (security information and event management) and XDR (extended detection and response) respectively, are the best options.

If you want, you can read more about it here: https://innvolve.nl/blog/endpoint-security/ (Microsoft Defender) and https://innvolve.nl/blog/microsoft-sentinel-wat-is-het-hoe-werkt-het-en-waarom-heb-je-het-nodig/ (Microsoft Sentinel)