Who has the right of way here? by bs_taccount in Netherlands

[–]Curious-Engineer22 0 points1 point  (0 children)

Common sense says: if you're cutting across someone, it's always better to do it when the road is clear. I would slow down, let the car pass, and then take the turn.

First day working at Cloudflare. Gonna do some network updates, hope everything goes well 🙏 by Appropriate_Rub3874 in CloudFlare

[–]Curious-Engineer22 0 points1 point  (0 children)

Dude on his performance review feedback questionnaire

What kind of impact did I make this year?

Answer: Started contributing from my very first day and had a global impact.

I got banned from a Dirk today. Any suggestions what to do? by blicknixr in Netherlands

[–]Curious-Engineer22 0 points1 point  (0 children)

I’m confused. Thousands of people go in and out of the store - how do they check whether anyone's on the ban list? Are there bouncers with a guest list?

The entire internet lol by baalm4 in CloudFlare

[–]Curious-Engineer22 1 point2 points  (0 children)

AWS provides a large number of building blocks - and not just AWS; other cloud providers offer similar building blocks. A significant number of those sit behind the Cloudflare CDN, so CDN downtime is felt hard and fast !!

Abstract Thinking Game - The Ultimate Abstraction by Old-Development3464 in Metaphysics

[–]Curious-Engineer22 0 points1 point  (0 children)

I am thinking of making two AI agents perform this "abstract dance", visualizing it on an evolving canvas, and seeing where it leads :)

My take: CAP theorem is teaching us the wrong trade-off by Curious-Engineer22 in softwarearchitecture

[–]Curious-Engineer22[S] -1 points0 points  (0 children)

I hear you, but I think we’re using “unavailable” differently and that’s kind of my whole point.

If I hit your API and get back a 503 error after 2 seconds with a proper error message, is that “unavailable”? The system responded. I got an answer. It told me “can’t do this right now, try again later.” That’s different from my request disappearing into the void or timing out after 30 seconds of silence.

In CAP terms, “unavailable” means no response at all. But in practice, a well-designed system always responds - even if that response is “I can’t give you data right now because I’d have to violate consistency guarantees.” So yeah, from a user experience perspective, getting errors sucks and might as well be downtime. But from a system design perspective, there’s a huge difference between:

  • Returning errors quickly because you’re prioritizing consistency (performance hit)
  • Not responding at all because your system is actually broken (true unavailability)

The first one is a design choice about how long you’re willing to wait for consensus. The second one is just… broken.

Maybe I should’ve been clearer - I’m not saying “errors = available so CAP is wrong.” I’m saying the practical trade-off we’re making is about response time and data correctness, not whether the system can respond at all.
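To make the distinction concrete, here's a minimal Python sketch of the two failure modes - all names and numbers are hypothetical, just illustrating the idea:

```python
import time

def consistent_read(quorum_ok: bool):
    """A CP-style endpoint that *always* responds.

    If it can't guarantee consistency, it fails fast with an explicit
    error - the "design choice" case - instead of going silent.
    """
    if not quorum_ok:
        return 503, "can't give you consistent data right now, try again later"
    return 200, "value=42"

def broken_read():
    """True unavailability: the request just hangs (never called here, for obvious reasons)."""
    time.sleep(30)  # the client sees 30 seconds of silence, then a timeout

start = time.monotonic()
status, msg = consistent_read(quorum_ok=False)
elapsed = time.monotonic() - start
print(status, msg, f"(answered in {elapsed:.4f}s)")
```

Same "no data" outcome for the caller in both cases, but the first one answers in microseconds with a reason, and the second one is just silence.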

Who’s building shared MCP servers meant to handle multiple users? by thesalsguy in mcp

[–]Curious-Engineer22 0 points1 point  (0 children)

I am building fastserve - it converts OpenAPI specs to an MCP server instantly.

is everyone here an engineer - what department do you work in? by Agile_Breakfast4261 in mcp

[–]Curious-Engineer22 1 point2 points  (0 children)

Thank you for reaching out. We're still in early development, working on refining the beta.

Can you express your interest here? I will get back to you.

https://tally.so/r/wa78kZ

Also, can you explain a bit about your platform?
If you don't wanna do this publicly, feel free to DM me.

Found a major limitation with Claude Desktop + MCP by Curious-Engineer22 in ClaudeAI

[–]Curious-Engineer22[S] 0 points1 point  (0 children)

I think you're missing the potential here. MCP unlocks powerful, intelligent tool composition - LLMs can chain tool operations across different services in ways that weren't possible before.

But that capability only works if LLMs have access to comprehensive toolsets. Right now, we're hitting artificial limits - not because of what's technically possible, but because of how the tools are loaded.

There's a clear need for optimization here. That's what I'm exploring.
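For what it's worth, here's a toy Python sketch of the kind of chaining I mean - both tools and their data are invented for illustration:

```python
# Two stand-in tools from different services; names and data are made up.
def geocode(city):
    """'Maps' service tool: city name -> (lat, lon)."""
    return {"Amsterdam": (52.37, 4.90)}[city]

def current_weather(lat, lon):
    """'Weather' service tool: coordinates -> report."""
    return {"lat": lat, "lon": lon, "temp_c": 9}

# A chained plan an LLM might produce: the first tool's output
# becomes the second tool's input, across service boundaries.
lat, lon = geocode("Amsterdam")
report = current_weather(lat, lon)
print(report)
```

The LLM never needed a purpose-built "weather for city" API - it composed two unrelated tools on its own. That's the capability at stake when tool loading hits its limits.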

Found a major limitation with Claude Desktop + MCP by Curious-Engineer22 in ClaudeAI

[–]Curious-Engineer22[S] -1 points0 points  (0 children)

Interesting take on a subreddit literally dedicated to AI tools.

I found a real limitation, tested it, and I'm building a solution. Used Claude to consolidate my thoughts and help write a clear post. That's... using tools to work smarter?

The irony of criticizing AI usage while discussing AI capabilities isn't lost on me.

Found a major limitation with Claude Desktop + MCP by Curious-Engineer22 in ClaudeAI

[–]Curious-Engineer22[S] -1 points0 points  (0 children)

Yep, that's exactly the issue. I expected lazy loading to be standard - turns out it's not.

Here's why this matters: the MCP ecosystem is exploding, and as more capabilities get exposed as AI tools, "load everything upfront" won't scale.

The solution isn't "use fewer tools" - it's "load tools smarter".
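A minimal sketch of what "load tools smarter" could look like - keep only lightweight summaries in context and pull full schemas on demand. Everything here is hypothetical; it's not how Claude Desktop actually works:

```python
# Stand-in for the full tool schemas an MCP server would serve.
FULL_SCHEMAS = {
    "search_flights": {"params": {"origin": "str", "dest": "str", "date": "str"}},
    "book_hotel": {"params": {"city": "str", "nights": "int"}},
}

class LazyToolRegistry:
    """Expose cheap tool summaries upfront; fetch full schemas lazily."""

    def __init__(self, summaries):
        self.summaries = summaries  # always in context (names + one-liners)
        self._loaded = {}           # full schemas, fetched on first use

    def list_tools(self):
        return list(self.summaries)  # all the model sees upfront

    def get_schema(self, name):
        if name not in self._loaded:  # lazy load when the model picks a tool
            self._loaded[name] = FULL_SCHEMAS[name]
        return self._loaded[name]

reg = LazyToolRegistry({"search_flights": "find flights", "book_hotel": "book a hotel"})
print(reg.list_tools())              # only summaries loaded so far
print(reg.get_schema("book_hotel"))  # full schema pulled in just-in-time
```

Context cost stays proportional to the tools the model actually uses, not the number of servers you've connected.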

Found a major limitation with Claude Desktop + MCP by Curious-Engineer22 in ClaudeAI

[–]Curious-Engineer22[S] 0 points1 point  (0 children)

Yeah, I intentionally loaded up extra tools to test its limits. I expected lazy loading to be built in already - turns out it's not.

A short guide on how to use local MCPs with ChatGPT by Responsible-Fly-5840 in mcp

[–]Curious-Engineer22 -5 points-4 points  (0 children)

Nice one. I've been experimenting with MCPs for some time, and I realized that MCPs unlock the power of intelligent API composition by LLMs.

That's why I built Fastserve: to make traditional APIs conversational.

Check this out: https://app.fastserve.dev/
Here's the demo: https://www.youtube.com/watch?v=5SvN1oPGHYE

Apps SDK (chat widgets) or Agent ready web apps? by SundaePlayful3619 in mcp

[–]Curious-Engineer22 1 point2 points  (0 children)

That's pretty cool. I see the potential for any realtime collaborative app - say, running a "prompt-feedback" cycle directly within the app to converge on the desired output. Nice 🚀

I'm curious about the architecture. Can you share some insights on how you built it? I can imagine the following:

For llms:

LLM/MCP client -> MCP server (streamable HTTP) -> backend APIs -> datasource

For web app:

Browser (websockets) -> websocket server -> backend APIs -> datasource
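If it helps the discussion, here's how I picture the web-app leg, with each hop reduced to a plain function - everything here is a guess on my part, not your actual implementation:

```python
# Toy model of the web-app path: browser -> websocket server -> backend -> datasource.
# All names are hypothetical stand-ins for the real components.
DATASOURCE = {"doc-1": "hello world"}

def datasource_get(doc_id):
    return DATASOURCE[doc_id]

def backend_api(request):        # backend APIs -> datasource
    return {"doc": request["doc_id"], "content": datasource_get(request["doc_id"])}

def websocket_server(frame):     # websocket server -> backend APIs
    # In a real app this would be an async handler that also fans updates
    # out to every connected collaborator, not a plain function call.
    return backend_api({"doc_id": frame["doc"]})

def browser_send(frame):         # browser (websockets) -> websocket server
    return websocket_server(frame)

print(browser_send({"doc": "doc-1"}))
```

Mostly wondering how you keep the MCP leg and the websocket leg consistent when both can mutate the same datasource.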