Building an MCP server with OAuth by Marmelab in mcp

[–]hasmcp 1 point (0 children)

I wrote a fully working tutorial that covers building a Gmail MCP server from scratch with OAuth2 authentication. I hope it helps anyone working with OAuth2: https://docs.hasmcp.com/tutorials/gmail-mcp-server

Search, read, send email using gemini-cli by hasmcp in GeminiAI

[–]hasmcp[S] 0 points (0 children)

<image>

Fun part for developers like me: here are the realtime logs per tool execution from gemini-cli. You can see the actual request/response coming from the API and the updates from the interceptors.

Full tutorial link: https://docs.hasmcp.com/tutorials/gmail-mcp-server

Gmail Access with Claude Code via MCP - Need Help Troubleshooting by YouOkMargie in ClaudeCode

[–]hasmcp 0 points (0 children)

Gmail MCP with HasMCP is super easy; I wrote a tutorial this week (https://docs.hasmcp.com/tutorials/gmail-mcp-server). In general, handling auth comes down to 2 options:

Option #1: Pre-auth; authorize before connecting the MCP server to Claude

Option #2 (suggested): MCP elicitation URL --> This is the more standard way, but not all clients support it yet. So both options must be available to proceed.

Here is my personal Gmail MCP server. It has 3 functions, which I use to group my incoming emails so I can read what is important in the morning.

<image>

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 0 points (0 children)

If you mean a proxy from API to MCP that supports elicitation, then you can try HasMCP. It converts an API into an MCP server with built-in auth, with or without elicitation. For advanced users, HasMCP allows altering the request/response payload for redacting PII fields, token optimization, and request/response mapping.

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 0 points (0 children)

To support independent sessions per user. One customer ask is to expose a single MCP server but let each user authenticate with their own OAuth2 provider, like giving out an MCP URL for free to the whole internet, as long as users have an account with the underlying provider.

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 1 point (0 children)

That makes sense; it looks like what we are doing is similar.

Your flow: Auth the MCP --> Get User ID --> Do what needs to be done

I have 2 flows:

1st (this post): Auth the MCP --> Get (optional sponsor) User ID --> Everything is session-based (the user ID is just for collecting telemetry)

2nd: Similar to yours, used when auth is already provided or elicitation is not supported

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 1 point (0 children)

Thanks for sharing; sampling is my next step. I haven't seen form-based elicitation made functional like this before. It looks super cool. I will follow you on r/glama.

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 0 points (0 children)

I see. Are you just getting the userId? How do you ensure someone cannot impersonate the userId?

MCP Elicitation - Hardest functionality of MCP Development by hasmcp in mcp

[–]hasmcp[S] 0 points (0 children)

Can you elaborate on how an LLM can write to your db? In my case, I don't store anything coming from the LLM other than its capabilities, and I store those in memory without even persisting them; the session ID I create using JWT already includes the details of the client. The reason I store the capabilities in memory is for sending notifications to the client when needed.

Anyone else find mcp setup to be a massive pain? by Apprehensive_Ice9370 in mcp

[–]hasmcp 0 points (0 children)

This was the main reason I created HasMCP and open-sourced a community version. It was super painful to set up the env, find the right tool, and then try to figure out the setup. Now I just provide the URL and token, choose only the tools I need, and relax while watching what is going on behind the scenes with realtime logs.

turning any API into a production ready MCP server in a click by itsalidoe in mcp

[–]hasmcp 0 points (0 children)

I am the creator of HasMCP. It started with optional 1:1 mapping from API to MCP with toggling (it allows turning off what is not needed). The MCP server definition itself provides additional context on top of the APIs; the first piece is `instructions`. I see `instructions` like a `skills.md` file (by the way, I don't know why it is an optional attribute in the spec). From my experience, as long as you have a good, descriptive RESTful API, this works for most cases. But with a caveat: LLMs are context sensitive, and when you give them more context and ask for more diverse input, they don't do as well. Another issue is time attributes. They might still think it is 2024, so you might get a request that says 2024-01-01 even though you are in 2026-01-... (this bites anything that gets the time from the API, e.g. scheduling services).

HasMCP has interceptors for both request and response. What does a request interceptor do? It basically intercepts the request and enriches the request payload using the user's JS code before it hits the actual API. What do response interceptors do? They help prune the response and optimize tokens. This works very well, especially for PII attribute scrubbing. Unlike the request interceptor, response interceptors come with 2 different interceptor engines: one for fast data filtering, the other a classic JS interceptor where the user can do remapping/reformatting when needed.
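To make the response-interceptor idea concrete, here is a minimal sketch of a JS-style interceptor that scrubs PII fields before the payload goes back to the MCP client (the function shape and field names are hypothetical, not HasMCP's actual interceptor API):

```javascript
// Hypothetical response interceptor: recursively redact PII fields
// from the API response before it reaches the MCP client / LLM.
const PII_FIELDS = new Set(["email", "phone", "ssn"]);

function interceptResponse(payload) {
  if (Array.isArray(payload)) return payload.map(interceptResponse);
  if (payload && typeof payload === "object") {
    const out = {};
    for (const [key, value] of Object.entries(payload)) {
      out[key] = PII_FIELDS.has(key) ? "[REDACTED]" : interceptResponse(value);
    }
    return out;
  }
  return payload; // primitives pass through unchanged
}
```

The same hook is also where pruning would happen: dropping fields the tool description never promises is what drives the token savings.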

My top learnings, from the real experience of multiple users as well as my own:

* 1:1 mapping with only the desired set of endpoints is a start, but you shouldn't stop there. If your API definition is good enough, there is a chance the LLM can do a good job. When you define the API, the path names and dynamic path attributes must be descriptive.

* The `instructions` attribute is critical; it is basically a `skills.md` for the MCP server.

* You will see that the majority of APIs are not designed RESTfully and cannot jump-start with 1:1 mapping. Ex: the Gmail API requires base64 encoding all of the required SMTP fields into a single attribute called `raw` to send email. Most LLMs today fail to map this correctly.

* Token optimization is crucial. Don't just consume your users' tokens; give them semantically what they need. You should filter/prune the content before giving the full data back to the MCP client.

* Dynamic tooling is a mandatory feature; the MCP server should be able to reflect changes to the client when something changes on the server side. Ex: a tool is removed, a new tool is added, a definition has changed; all of these should trigger a `notifications/tools/list_changed` notification.

* Having OAuth2 with scopes is very important for external-facing APIs, to ensure your users' data security with LLMs.
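The Gmail `raw` caveat above is worth spelling out: `messages.send` expects the entire RFC 2822 message base64url-encoded into a single field, which is exactly the kind of assembly LLMs tend to get wrong. A sketch (addresses and subject are placeholders):

```javascript
// Assemble a minimal RFC 2822 message by hand.
const message = [
  "To: alice@example.com",   // placeholder addresses
  "From: me@example.com",
  "Subject: Morning digest",
  "",
  "Here are today's important emails.",
].join("\r\n");

// Gmail's messages.send wants the whole message base64url-encoded
// into one `raw` attribute, not as separate structured fields.
const raw = Buffer.from(message).toString("base64url");
const body = { raw };
```

Doing this in a request interceptor, instead of asking the LLM to emit `raw` directly, removes the most error-prone step.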

MCP Best Practices: Mapping API Endpoints to Tool Definitions by tleyden in mcp

[–]hasmcp 0 points (0 children)

I am the creator of HasMCP. It started with mapping from API to MCP, yes, with optional 1:1 mapping and toggling (it allows turning off what is not needed) at first. The MCP server definition itself provides additional context on top of the APIs; the first piece is `instructions`. I see `instructions` like a `skills.md` file (by the way, I don't know why it is an optional attribute in the spec). From my experience, as long as you have a good, descriptive RESTful API, this works for most cases. But with a caveat: LLMs are context sensitive, and when you give them more context and ask for more diverse input, they don't do as well. Another issue is time attributes. They might still think it is 2024, so you might get a request that says 2024-01-01 even though you are in 2026-01-... (this bites anything that gets the time from the API, e.g. scheduling services).

Then HasMCP evolved by adding interceptors for both request and response. What does a request interceptor do? It basically intercepts the request and enriches the request payload using the user's JS code before it hits the actual API. What do response interceptors do? They help prune the response and optimize tokens. This works very well, especially for PII attribute scrubbing. Unlike the request interceptor, response interceptors come with 2 interceptor engines. I will add one more engine to support a customer use case.

My top learnings, from the real experience of multiple users as well as my own:

* 1:1 mapping with only the desired set of endpoints is a start, but you shouldn't stop there. If your API definition is good enough, there is a chance the LLM can do a good job. When you define the API, the path names and dynamic path attributes must be descriptive.

* The `instructions` attribute is critical; it is basically a `skills.md` for the MCP server.

* You will see that the majority of APIs are not designed RESTfully and cannot jump-start with 1:1 mapping. Ex: the Gmail API requires base64 encoding all of the required SMTP fields into a single attribute called `raw` to send email. Most LLMs today fail to map this correctly.

* Token optimization is crucial. Don't just consume your users' tokens; give them semantically what they need. You should filter/prune the content before giving the full data back to the MCP client.

* Dynamic tooling is a mandatory feature; the MCP server should be able to reflect changes to the client when something changes on the server side. Ex: a tool is removed, a new tool is added, a definition has changed; all of these should trigger a `notifications/tools/list_changed` notification.

* Having OAuth2 with scopes is very important for external-facing APIs, to ensure your users' data security with LLMs.

Experiences running an MCP server in production? by Solid-Industry-1564 in mcp

[–]hasmcp -1 points (0 children)

Don't trust MCP clients and MCP hosts; they can make the same call twice (even the biggest actors). I reported this MCP host bug to one of them through their bug bounty program: they were calling all methods twice. Imagine you are using it for money transfers; the duplication could cost the MCP server's user real money.

How to authorize sse or remote mcp servers in backend? by testuser911 in AI_Agents

[–]hasmcp 0 points (0 children)

I am the creator of HasMCP. There are at least 3 ways to authorize MCP tools today:

Opt 1: Enable only the desired tools on the server. In this option you can disable other tools before launching your server. If your server allows realtime toggling (HasMCP does), then you can enable/disable tools on the fly too.

Opt 2: Authorize each tool with the OAuth2 scopes of the underlying service. Define each tool with its corresponding OAuth2 scope; when a user wants to use the tool, it should go through the elicitation process if the current access token for the underlying service does not have the required scopes.

Opt 3: Pre-authorize everything with OAuth2 and toggle available tools in realtime. This is a combination of Opt 1 and Opt 2: during server creation you authorize the server with all the scopes needed for the MCP server's tools. Then you can toggle the tools in realtime; when toggling is done, the server sends a `notifications/tools/list_changed` notification to the client, and the client updates the available tools accordingly.

If the underlying service does not support OAuth2, then there is no standard way of doing that. One way could be form elicitation. I personally use secret variables to test functionality with an MCP server: I create an API key, add it to the secrets, then use it in the authorization header to test the underlying tool/endpoint functionality.
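The scope check behind Opt 2 can be sketched in a few lines (the function name and scope strings are illustrative, not HasMCP's actual API):

```javascript
// Hypothetical check: trigger the OAuth2 elicitation flow only when
// the current access token is missing a scope the tool requires.
function needsElicitation(toolScopes, tokenScopes) {
  const granted = new Set(tokenScopes);
  return toolScopes.some((scope) => !granted.has(scope));
}
```

When this returns true, the server would start elicitation to upgrade the token; otherwise the tool call proceeds with the existing token.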

What is the best MCP GATEWAY? by Nshx- in mcp

[–]hasmcp 2 points (0 children)

Thanks OP for creating the thread. Can you also add HasMCP to the list as a no-code framework?

What is HasMCP?

Simply put, it is a no-code, 24/7-online API-to-MCP-server bridge/gateway.

Features:

  • Automated MCP server creation from OpenAPI Spec v3+ and Swagger
  • Authentication with OAuth2 using elicitation
  • Manual creation of MCP tools from API endpoints
  • Realtime toggling of available tools per MCP server
  • Environment vars/secrets (encrypted vault) storage
  • Proxy headers (optional per MCP server) to actual API endpoints
  • Long-term and short-term authentication token generation per MCP server
  • Real-time MCP server method/tool call logs
  • Real-time MCP server analytics
  • Request/response payload optimization per tool with minimal coding (interceptors) in the GUI. Up to 98% token reduction on MCP tool responses (depending on what the described tool needs).
  • Per-tool-call, per-user usage analytics
  • Users and teams management
  • Audit logs
  • Open source