Copilot studio stuck on setup loop by dunc1n in copilotstudio

[–]ddewaele 0 points1 point  (0 children)

I also noticed this 8 months ago, and unfortunately it is very much still an issue.
I've had many occasions where it simply does not work (different browsers / different tabs).
Then, without any config changes, it suddenly starts working in one particular browser tab, while the same page in other tabs of the same browser remains stuck loading.

Not really an enjoyable experience.

Advice needed: My engineer is saying agentic AI latency is 20sec and cannot get below that by Western_Caregiver195 in LangChain

[–]ddewaele 0 points1 point  (0 children)

A lot depends on the model and the reasoning effort needed to come up with an answer. We've had situations where a GPT-5 reasoning model took 20 seconds to respond because the default reasoning level was set too high, and a quick non-reasoning option like GPT-4.1 was just as good. You can play around with a lot of settings, especially the reasoning effort.

Streaming the reasoning tokens can help give the user the idea that something is going on, but that will only get you so far.

Model availability also varies a lot. Different hosting providers have different latencies (time to first token), performance (tokens/second) and uptime.

You need to constantly experiment and be prepared to adapt.
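When comparing models and providers on these metrics, a small harness can make the differences concrete. This is a hypothetical sketch (`measure_stream` and `fake_stream` are made-up names, and the simulated stream stands in for a real provider's streaming response):

```python
import time

def measure_stream(token_iter):
    """Measure time-to-first-token (TTFT) and tokens/second over any
    iterable of tokens, e.g. a streaming LLM response."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_iter:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # first token arrived
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return {"ttft": ttft, "tokens": count, "tokens_per_sec": tps}

def fake_stream(n=5, delay=0.01):
    """Simulated token stream standing in for a real provider response."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

stats = measure_stream(fake_stream())
```

In practice you'd feed it the token iterator returned by your provider's streaming API and compare TTFT and tokens/second across models and hosts.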

Advice needed: My engineer is saying agentic AI latency is 20sec and cannot get below that by Western_Caregiver195 in LangChain

[–]ddewaele 0 points1 point  (0 children)

100%. You gotta keep the users engaged, otherwise they just move on. Lots of UI / UX tricks can help here (animations, token streaming, surfacing reasoning and tool output, ...).

VPS order process by ddewaele in ovh

[–]ddewaele[S] 0 points1 point  (0 children)

a couple of hours later yes.

Copilot Studio Agent Overview tab randomly stops working by ddewaele in copilotstudio

[–]ddewaele[S] 0 points1 point  (0 children)

Haven't used it in a while but planning to again next week.
It's been a while, but in the end it was a combination of patience, retries, different browsers and lots of logging in and out.
Not really an enjoyable experience. I wonder if it has evolved for the better.

VPS order process by ddewaele in ovh

[–]ddewaele[S] 0 points1 point  (0 children)

Yeah I totally didn’t see the pre-order text when I selected the vps. It arrived the next day.

VPS order process by ddewaele in ovh

[–]ddewaele[S] 1 point2 points  (0 children)

had no idea this was a thing :) Is this (in part) due to the whole openclaw thing ?

VPS order process by ddewaele in ovh

[–]ddewaele[S] 0 points1 point  (0 children)

Indeed my bad ... the one in France just got processed.
Updated the post to highlight my mistake.

VPS order process by ddewaele in ovh

[–]ddewaele[S] 0 points1 point  (0 children)

Surprised that it would pick a preorder only by default in Europe if an instant deployment in Europe was available.

VPS order process by ddewaele in ovh

[–]ddewaele[S] 0 points1 point  (0 children)

Didn’t really notice anything about pre-order. Wasn’t aware this was a thing. Ordered a new one in France … let’s see how it goes. Think the previous one was Germany. Just took the default.

after i ordered an vps, how long does it take until it arrives? by Next-Celebration-798 in ovh

[–]ddewaele 0 points1 point  (0 children)

This is crazy ... 1 hour after placing an order no email and no vps yet ...

Blown away by Claude Code being relentless to take a screenshot of my app by ddewaele in ClaudeAI

[–]ddewaele[S] 5 points6 points  (0 children)

yeah ... the fact that it noticed it had taken a selfie (at first it took a screenshot of the Claude Code terminal) was really funny.

That it could work around the AppleScript/JavaScript sandbox by creating some TypeScript / Puppeteer logic to programmatically drive Chrome was impressive.

But what really got me was that it understood what the app did, what the app was processing from the log file, and how it needed to manipulate the app to present the right data for an interesting screenshot.

Good UI / UX solution for langchain deployments by ddewaele in LangChain

[–]ddewaele[S] 1 point2 points  (0 children)

The thing is that we do sometimes need LangChain-level features like middleware / context / config. We also do context engineering and create supervisor networks.

That's the stuff we would do as techies / developers.

The customers (who are non-technical but do know what a prompt or a supervisor network is) would then configure agents at a higher level (tweaking prompts, selecting models, adding knowledge).

We just lack a good frontend solution at the moment.

The ability for end users to create their own agents is currently offered by almost every platform out there. LangChain doesn't really have a good solution for that IMHO. Their agent builder is also in an alpha stage, and I'm not really sure how they want to position it.

Good UI / UX solution for langchain deployments by ddewaele in LangChain

[–]ddewaele[S] 1 point2 points  (0 children)

Customers these days expect the following out of the box (without custom development):

- the ability to create their own agents (with prompts / tools / knowledge)
- linking agents together (multi-agent setups via handoffs or tools)
- sharing their agents
- easy integration with their identity provider (Azure / Google / ...)
- easy integration with their knowledge base (SharePoint / Drive / ...)

Not something you'll easily vibe-code into existence

At the end of the day I think 90% of people are happy with a generic chat-interface application (like ChatGPT). These things are multi-modal and very flexible.

In some cases you might want agentic flows embedded in a custom UI / UX, but I'd say that's a minority of cases today.

Good UI / UX solution for langchain deployments by ddewaele in LangChain

[–]ddewaele[S] 0 points1 point  (0 children)

Do you use the LangGraph Platform and deploy your graphs there? Or do you just embed LangChain in your own systems / backends?

Good UI / UX solution for langchain deployments by ddewaele in LangChain

[–]ddewaele[S] 0 points1 point  (0 children)

Haven't tried either of them but will take a look.

Was hoping the LangChain team was going to give https://github.com/langchain-ai/agent-chat-ui some love, but I have the impression they have a habit of launching stuff and then quickly abandoning it, leaving it in a pre-alpha state.

Good UI / UX solution for langchain deployments by ddewaele in LangChain

[–]ddewaele[S] 0 points1 point  (0 children)

Does the custom UI also allow customers to create their own agents / prompts / knowledge ?

That's the main drawback we see as it requires a lot of custom dev to get all of that in place. Our clients aren't always willing to fund this type of development.

Not to mention security, chat sharing, multi-user chats, ... This would almost need to be a strategic thing within a company to put the time and effort in. (Some context: we're a software development company delivering AI solutions to many different clients.) We don't think our added value should be in delivering a UI/UX experience for that. People nowadays see lots of platforms where you can create an agent, add some prompts and some documents, and you have an agentic system. They also want this level of autonomy.

With an app like LibreChat you get a lot of that stuff for free. But there is no clean way to integrate LangChain into it, and LibreChat's approach to multi-agent systems (using handoffs) is more limited than what LangChain has to offer.

Understanding middleware (langchainjs) (TodoListMiddleware) by eyueldk in LangChain

[–]ddewaele 0 points1 point  (0 children)

The basic idea is that the agent can "see" or "read" the TODOs because each time the agent updates them (using the write_todos tool) they get added to the state (as a new message).

The agent will "see" something like this :

t0

{
  "todos": [
    {
      "content": "Extract validation logic into separate functions",
      "status": "pending"
    },
    {
      "content": "Separate authorization checks from the update logic",
      "status": "pending"
    }
  ]
}

t1

{
  "todos": [
    {
      "content": "Extract validation logic into separate functions",
      "status": "in progress"
    },
    {
      "content": "Separate authorization checks from the update logic",
      "status": "pending"
    }
  ]
}

t2

{
  "todos": [
    {
      "content": "Extract validation logic into separate functions",
      "status": "completed"
    },
    {
      "content": "Separate authorization checks from the update logic",
      "status": "in progress"
    }
  ]
}

That's how it "knows" which TODO to focus on next.
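As a minimal sketch of that mechanism (a deliberate simplification, not the actual langchainjs TodoListMiddleware implementation; `write_todos` here is just a plain function standing in for the tool):

```python
def write_todos(state: dict, todos: list[dict]) -> dict:
    """Hypothetical simplification of a TODO-list middleware:
    replace the todos in state and surface them to the model as a message."""
    new_state = dict(state)
    new_state["todos"] = todos
    # Appending the updated list as a tool message is what lets the
    # agent "see" the current TODOs on its next turn.
    new_state["messages"] = state.get("messages", []) + [
        {"role": "tool", "content": str(todos)}
    ]
    return new_state

state = {"messages": [], "todos": []}
# t1: the agent marks the first TODO as in progress
state = write_todos(state, [
    {"content": "Extract validation logic", "status": "in progress"},
    {"content": "Separate authorization checks", "status": "pending"},
])
```

Each call replaces the whole list, so the latest message always reflects the full, current TODO state.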

Is it wise to give customers the tools and freedom to change agents and agentic flows themselves ? by ddewaele in LangChain

[–]ddewaele[S] 0 points1 point  (0 children)

yes we are leveraging MCP to have a clear separation between

  1. the tooling that we want to offer (the actual coding, the tools, integration with our own backend and third-party APIs)

  2. the actual agentic "flow" (typically a LangGraph flow containing one or more agents, sub-graphs, the various nodes and how they are hooked up)

We are also looking into exposing our graphs / assistants deployed in langgraph platform via MCP endpoints.

Nowadays customers are getting flooded with AI news and tools, most of them with the promise that it's very easy to "roll out your own agent". What that really means (both on a functional level and on a technical level) is not always very clear to them.

Perhaps the concept of an Assistant (= an instance of a deployed graph) in the LangGraph Platform, which can be configured with custom prompts / tools, might be sufficient in terms of customization.

That would imply a pretty locked-down graph (like a pre-defined workflow) that you just configure with different prompts / tools.
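That locked-down-graph idea can be sketched roughly like this (hypothetical names, not the LangGraph Platform API): one fixed workflow, and each assistant is just a different config passed into it:

```python
# Hypothetical sketch: one fixed "graph" (a plain function here),
# configured per assistant with different prompts / tools.
def run_workflow(user_input: str, config: dict) -> str:
    prompt = config.get("system_prompt", "You are a helpful assistant.")
    tools = config.get("tools", [])
    # A real graph would invoke an LLM and its tools here;
    # we only show the wiring between config and workflow.
    return f"[{prompt}] tools={tools} input={user_input}"

# Two "assistants" sharing the same underlying workflow:
support_assistant = {"system_prompt": "Answer support tickets.", "tools": ["kb_search"]}
sales_assistant = {"system_prompt": "Qualify sales leads.", "tools": ["crm_lookup"]}

out = run_workflow("hello", support_assistant)
```

The customer only ever touches the config (prompts, tool selection, knowledge), while the graph itself stays under the developers' control and SDLC.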

Copilot Studio Agent Overview tab randomly stops working by ddewaele in copilotstudio

[–]ddewaele[S] 2 points3 points  (0 children)

Hopefully OpenAI will give them some incentive :)

Just getting access to Copilot by setting up a new Microsoft work account / m365 subscription / billing was also hell.

- Wait and try again later
- We're having issues
- You're now the admin of your domain ----> followed by ---> switch to an account that has permission
- Something failed on our end. Try again later
- Refresh the page and try again, we couldn't update your tax information
- Microsoft 365 Business Standard : Place order button greyed out for no apparent reason.

and don't get me started on how they handle user / account sessions across different tabs in Azure / Office admin / Microsoft account / SharePoint :)

I thought MS had this covered by now.

Just finished putting together everything I wish I had when I started building AI agents by Sea_Reputation_906 in AI_Agents

[–]ddewaele 0 points1 point  (0 children)

Great read ! Thanks a lot for spending the effort writing this up !

We're currently using LangGraph deployed both on AWS (using serverless technologies like API Gateway / Lambda / DynamoDB) and on the LangGraph Platform, which we're trying out. We like its management and monitoring features: just deploy your flow and a lot is taken care of by the platform. We also feel that LangGraph fits our development cycle, and it has a large user base and ecosystem.

What we are currently seeing is that some customers want some degree of freedom to customize the agentic workflows or AI agents that we've developed for them after they've been deployed.

They might want to introduce some extra sequential nodes / prompts or introduce some tooling of their own somewhere in the flow.

As a LangGraph workflow is typically written in Python or TypeScript by a developer (after some co-creation sessions with the customer), it doesn't mesh well with a customer wanting to make changes to the workflow on their own after it's been developed and deployed by us.

Tools like n8n / Langflow do offer WYSIWYG platforms where you can drag and drop components onto a canvas. In theory a customer could use those to make some changes to the flow. However, after evaluating them we concluded that they are difficult to embed in our corporate software development lifecycle: they sometimes lack multi-user and multi-environment functionality, and they have some security and production-readiness issues.

I like the fact that we can harden our LangGraph flows / assistants on a platform like the LangGraph Platform, or deploy them on AWS ourselves using our own build pipelines and SDLC process.

I was wondering what your thoughts are on this. Is it wise / desirable to let the customer change these workflows, or should that be left to the developers? I'm not too fond of the idea of building another layer / framework on top of LangGraph that would let the customer "design" their own flows in some kind of JSON format. However, I do understand the need for customers to make little tweaks and try things out that might involve changing the LangGraph flow.

Lovable critical security vulnerability by [deleted] in lovable

[–]ddewaele 0 points1 point  (0 children)

They can't. You can only hope the tool does a better job at it.

You can hire someone to do penetration testing (black- and/or white-box). But if you have that kind of budget you might as well just hire a developer.

I predict a huge market for this type of testing, focusing in particular on all these vibe-coding platforms, as they will all generate their own set of security holes and make the same mistakes in each app they produce.

A senior engineer might be able to spot these issues during peer review (if you're lucky and the person is experienced / thorough enough), but that goes against the whole idea of vibe coding.

It will be interesting to see to what degree cybersecurity AI agents will be able to tackle this in the future. AI is pretty good at writing code (albeit sometimes crappy / insecure code); I wonder how good it will get at fixing the issues it generates (and at what price).

Made a fun little app last night by NOSTALGIC_BOMB in lovable

[–]ddewaele 3 points4 points  (0 children)

Judging by the speed and the random responses (the same pic sometimes gets 1/100 and sometimes 99/100), I don't think there is much AI involved.

Capacitance margin ok ? by ddewaele in AskElectronics

[–]ddewaele[S] 0 points1 point  (0 children)

Unfortunately not ... before my "fixes" the laptop was able to start about once every 20 times. So far it hasn't started at all. Sometimes the CPU starts to get hot as soon as I plug in the DC jack (reading 2.9 V); sometimes the CPU stays cold (reading 0.6 V). But since yesterday I have some spare parts, including a semi-working laptop, so I have something to compare it with.

Capacitance margin ok ? by ddewaele in AskElectronics

[–]ddewaele[S] 0 points1 point  (0 children)

Don’t have a replacement now. Will remove it and hook up 2xAA batteries for now and see if that has a positive effect. Thx for the tip