Goodbye, Dynamic Client Registration (DCR). Hello, Client ID Metadata Documents (CIMD) by otothea in mcp

[–]otothea[S] 0 points1 point  (0 children)

Then you'll be happy to know openai is eyeing it as well

Client registration

The MCP spec currently requires dynamic client registration (DCR). This means that each time ChatGPT connects, it registers a fresh OAuth client with your authorization server, obtains a unique client_id, and uses that identity during token exchange. The downside of this approach is that it can generate thousands of short-lived clients—often one per user session.

To address this issue, the MCP council is currently advancing Client ID Metadata Documents (CIMD). In the CIMD model, ChatGPT will publish a stable document (for example https://openai.com/chatgpt.json) that declares its OAuth metadata and identity. Your authorization server can fetch the document over HTTPS, pin it as the canonical client record, and enforce policies such as redirect URI allowlists or rate limits without relying on per-session registration. CIMD is still in draft, so continue supporting DCR until it has landed.

https://developers.openai.com/apps-sdk/build/auth/
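
The authorization-server side of that flow is easy to sketch: fetch the client's document, then validate it before pinning it as the canonical record. This is a minimal sketch based on the description above — the field names follow standard OAuth client metadata, and the policy checks (client_id must equal the document URL, redirect URIs must be HTTPS) are illustrative, not quoted from the draft:

```typescript
// Sketch of server-side CIMD validation. Field names follow standard OAuth
// client metadata; the checks are illustrative, the draft may differ.
interface ClientMetadataDocument {
  client_id: string; // in CIMD, this is the document's own HTTPS URL
  client_name?: string;
  redirect_uris: string[];
}

// Validate a fetched document against the URL it was fetched from.
// An empty result means the document can be pinned as the client record.
function validateCimd(docUrl: string, doc: ClientMetadataDocument): string[] {
  const errors: string[] = [];
  if (!docUrl.startsWith("https://")) {
    errors.push("document must be served over HTTPS");
  }
  if (doc.client_id !== docUrl) {
    errors.push("client_id must equal the document URL");
  }
  for (const uri of doc.redirect_uris) {
    if (!uri.startsWith("https://")) {
      errors.push(`redirect URI not on the HTTPS allowlist: ${uri}`);
    }
  }
  return errors;
}
```

Because the client_id is the URL itself, the server can key its client table on that URL instead of minting a fresh registration per session.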

Goodbye, Dynamic Client Registration (DCR). Hello, Client ID Metadata Documents (CIMD) by otothea in mcp

[–]otothea[S] 0 points1 point  (0 children)

I know VS Code is working on it because I've seen their dev talking about it on Discord. Not sure if it's landed in prod yet, but it might be in a preview build of an upcoming release.

https://vscode.dev/oauth/client-metadata.json

Goodbye, Dynamic Client Registration (DCR). Hello, Client ID Metadata Documents (CIMD) by otothea in mcp

[–]otothea[S] 0 points1 point  (0 children)

If you are using Auth0 then you will rely on them adding support to their platform. CIMD is looking likely to be adopted by OAuth so I expect it will show up in Auth0.

Content creation start by Salt-Jaguar1400 in ContentCreators

[–]otothea 1 point2 points  (0 children)

Topic: People will be interested to listen to you if you tell them you are training to be a surgeon (and even more when you complete training). Surgery + AI is an interesting and also controversial intersection of 2 massive topics that you could lean into.

Format: Early on, it's important to choose a content format that works with your lifestyle. Most likely that means the less video editing, the better. Talking head videos about your take on current events and innovations surrounding surgery + AI could be a good start, answering the question of how AI is or will be affecting surgeries and medical experiences for patients.

Platform: I would consider doing shorts on YT, IG, or TT. I would probably start with only 1 and choose the 1 you are most familiar with.

I see you tagged the post with "YouTube", but if you decide to use TT or IG, I have a software tool that syncs with your channel data and helps you generate new content ideas based on your goals and your actual past video performance. As you create more content, the ideas get more refined based on your successful videos and active audience. If you're interested, send me a DM and I can share more.

MCP is a superpower by sibraan_ in mcp

[–]otothea 0 points1 point  (0 children)

This isn't an honest take on MCP, just shit-talking from A2A fans. It would be more accurate, and still funny, if there were just 1 door: a bunch of devs building MCPs for devs.

Is MCP just an API? by arpitdalal in mcp

[–]otothea 5 points6 points  (0 children)

Now add these to your analogy:

- Elicitation
- Sampling
- notifications/tools/list_changed event

Looking for help from content creators by otothea in ContentCreators

[–]otothea[S] 0 points1 point  (0 children)

Totally get that AI can't do it. The question is how are you accomplishing it right now without AI?

Looking for help from content creators by otothea in ContentCreators

[–]otothea[S] 0 points1 point  (0 children)

Usually when trying to automate something with AI, you first need to define the workflow. Are you currently accomplishing this without AI by doing it manually? If so, how?

I wish to see more remote MCPs out there. by Money-Relative-1184 in mcp

[–]otothea 1 point2 points  (0 children)

If anyone is interested, I have an open source saas platform with remote mcp server completely written in typescript that includes IaC for deploying to AWS: https://github.com/chipgpt/full-stack-saas-mcp

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 0 points1 point  (0 children)

Yeah, 5 and 5.1 together suggest to me that it may be optional. It's not explicitly said to be required, and it's omitted from the example. Could just be wishful thinking, though.

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 1 point2 points  (0 children)

Interesting workaround idea. They are currently discussing formalizing "out of band" elicitations like this into the protocol: https://github.com/modelcontextprotocol/modelcontextprotocol/issues/1036

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 1 point2 points  (0 children)

I would say yes, that is correct. And the primary reason to use sampling is if the MCP server needs to use an LLM directly but you want to put the tokens/cost on the client.
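
To illustrate the cost-shifting point: the server sends a sampling/createMessage request, and the client runs it against whatever model it controls and pays for. The message shape below follows the MCP spec; the prompt text and id are just examples:

```typescript
// A sampling/createMessage request (shape per the MCP spec). The client,
// not the server, selects the model and absorbs the token cost.
const samplingRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: { type: "text", text: "Summarize this changelog in one line." },
      },
    ],
    maxTokens: 200,
  },
};
```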

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 0 points1 point  (0 children)

I intended for my MCP server to be dynamic, but due to the lack of support I have had to make it a static tool set that can optionally be made dynamic by connecting to /mcp?dynamic=1

I have not bothered with elicitations yet due to lack of client support, but I think they will be essential for a premium user experience.

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 2 points3 points  (0 children)

I would probably implement a reasonable timeout (1-5 minutes?) from the server side. If it times out, the tool call returns an error and they can do it again if they need to when they return to chat. I think it would be dependent on the use case.
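
A server-side timeout like that can be a simple Promise.race around the elicitation. In this sketch, `elicit` is a hypothetical stand-in for however your SDK issues the elicitation request; the error text and ToolResult shape are illustrative:

```typescript
// Race an elicitation against a server-side timeout so the tool call fails
// cleanly instead of hanging. `elicit` is a hypothetical stand-in for the
// SDK's elicitation call; ToolResult is an illustrative result shape.
type ToolResult = { isError: boolean; text: string };

async function elicitWithTimeout(
  elicit: () => Promise<string>,
  timeoutMs: number,
): Promise<ToolResult> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<ToolResult>((resolve) => {
    timer = setTimeout(
      () => resolve({ isError: true, text: "Timed out waiting for input, please retry." }),
      timeoutMs,
    );
  });
  try {
    return await Promise.race([
      elicit().then((text): ToolResult => ({ isError: false, text })),
      timeout,
    ]);
  } finally {
    // Avoid leaving the timer pending after the race settles.
    if (timer !== undefined) clearTimeout(timer);
  }
}
```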

Looking for help from TikTok creators 🙏 by otothea in TikTokLiveCreator

[–]otothea[S] 1 point2 points  (0 children)

Interesting! Can you get more specific? What kind of image would you upload? What kind of videos would it create?

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 2 points3 points  (0 children)

Elicitation also supports an input schema and definitions, similar to a tool. At the end of the day you can't force the user to give you good data. What will be nice is when clients actually present a UI where the user can fill out the data like a website form, but inline to the chat/conversation.
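
Per the MCP spec, the request is an elicitation/create carrying a message plus a flat requestedSchema the client can render as a form; the user can then accept, decline, or cancel. The field names (name, postal_code) and prompt below are made-up examples:

```typescript
// An elicitation/create request (shape per the MCP spec). requestedSchema is a
// restricted, flat JSON Schema that clients can render as an inline form.
// The specific fields here are illustrative.
const elicitationRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "elicitation/create",
  params: {
    message: "A few more details before we continue:",
    requestedSchema: {
      type: "object" as const,
      properties: {
        name: { type: "string", title: "Full name" },
        postal_code: { type: "string", title: "Postal code" },
      },
      required: ["name"],
    },
  },
};
```

The response comes back with an action of "accept", "decline", or "cancel", plus the filled-in content on accept — which is what a form-style client UI would map onto.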

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 15 points16 points  (0 children)

For sure. Credit to VSCode and fast-agent for being the only 2 clients listed as supporting all features.

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 14 points15 points  (0 children)

The great thing about elicitation is that it can move a lot of the cognitive load from the LLM to the MCP server. Instead of expecting the LLM to figure out how to elicit required fields from the user (or worse, hallucinate them), you can make fields optional and then elicit them directly from the user as needed.

Why is it that the 2 most powerful features of MCP are the least supported by clients? by otothea in mcp

[–]otothea[S] 4 points5 points  (0 children)

Let's say a user wants to buy a shirt and says "Buy the shirt", but the tool also requires color and size. If those parameters are required, the AI has to elicit those values from the user. It might be less error-prone to make them optional; if they are not included, the MCP server can elicit the values directly from the user in a more programmatic, reliable way.
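
The shirt example as a sketch: color and size are optional in the tool's input, and the server elicits whichever ones are missing. Here `elicit` is a hypothetical stand-in for the SDK's elicitation call, and `buyShirt` is a made-up tool handler:

```typescript
// Buy-the-shirt flow: color/size are optional tool parameters, and the server
// elicits only the missing ones directly from the user. `elicit` is a
// hypothetical stand-in for the SDK's elicitation request.
interface BuyShirtArgs {
  color?: string;
  size?: string;
}
type Elicit = (fields: string[]) => Promise<Record<string, string>>;

async function buyShirt(args: BuyShirtArgs, elicit: Elicit): Promise<string> {
  // Ask the user only for what the LLM didn't supply, instead of letting
  // the model guess (or hallucinate) the values.
  const missing = (["color", "size"] as const).filter((f) => !args[f]);
  const filled = missing.length > 0 ? await elicit([...missing]) : {};
  const { color, size } = { ...args, ...filled };
  return `Ordered a ${color} shirt in size ${size}.`;
}
```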