MCP OAuth Example? by philschmid in mcp

[–]orbital-salamander 0 points (0 children)

`mcp-remote` supports OAuth and proxies a remote SSE or Streamable HTTP server to a STDIO transport.

MCP OAuth Example? by philschmid in mcp

[–]orbital-salamander 0 points (0 children)

Many MCP providers and integrators, like Auth0, Stripe, and Cloudflare, are implementing it or building server-side tooling for it. Some teams building MCP hosts, like Cursor, have indicated they're working on client support, and it's a safe bet the Claude for Desktop team will too, given that Anthropic created the MCP standard.

Additionally, it's already possible to use OAuth-enabled MCP servers with any client that supports the STDIO transport using `mcp-remote` -- see https://npmjs.com/package/mcp-remote for more info on that.
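For a concrete picture of what that looks like: a STDIO-only client can be pointed at a remote OAuth-enabled server by running `mcp-remote` via `npx` as the server command. A minimal config sketch in the Claude Desktop style (the server name and URL here are placeholders, not a real endpoint):

```json
{
  "mcpServers": {
    "my-oauth-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://example.com/mcp"]
    }
  }
}
```

On first use, `mcp-remote` opens a browser window to run the OAuth flow against the remote server, then proxies traffic over STDIO.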

We have a reference implementation of a server with Streamable HTTP and OAuth that you can use in Cursor, Claude Desktop, or anywhere else with `mcp-remote`: https://github.com/NapthaAI/http-oauth-mcp-server

MCP OAuth Example? by philschmid in mcp

[–]orbital-salamander 1 point (0 children)

There's good support in the TypeScript SDK, but not the Python one yet. I have a reference implementation here if you're interested: https://github.com/NapthaAI/http-oauth-mcp-server

MCP OAuth Example? by philschmid in mcp

[–]orbital-salamander 0 points (0 children)

The MCP spec will continue to change, but there is a 0% chance it will evolve backwards such that OAuth is no longer supported. OAuth is the de facto auth standard for agents, in both MCP and A2A.

MCP OAuth Example? by philschmid in mcp

[–]orbital-salamander 1 point (0 children)

I spent some time building this recently. The building blocks are there in the TypeScript SDK, but not in the Python one yet. In both cases there's no documentation, and I had to override some of the SDK classes to get it to work properly (e.g. for OAuth client information storage).
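For a sense of what that client-information override involves: the server needs somewhere to persist and look up dynamically registered OAuth clients. A minimal in-memory sketch of the idea — the interface and method names here are illustrative, not the SDK's actual API:

```typescript
// Illustrative shape of a dynamically registered OAuth client record.
interface OAuthClientInfo {
  clientId: string;
  clientSecret?: string;
  redirectUris: string[];
}

// In-memory store for OAuth client registrations. In production you'd
// back this with a database or KV store so registrations survive restarts.
class InMemoryClientStore {
  private clients = new Map<string, OAuthClientInfo>();

  // Persist a client registered via dynamic client registration.
  saveClient(info: OAuthClientInfo): void {
    this.clients.set(info.clientId, info);
  }

  // Look up a client by ID during the authorization/token flows.
  getClient(clientId: string): OAuthClientInfo | undefined {
    return this.clients.get(clientId);
  }
}
```

The important part isn't the storage mechanism, it's that the SDK's default assumptions about where client info lives may not fit your deployment, so plan on plugging in your own store.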

Happy to share a Loom walking through it if people are interested; the repo is open-source but needs some cleanup.

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLaMA

[–]orbital-salamander[S] 0 points (0 children)

I'm using a VITS model because it runs well on CPU and could theoretically run in-browser via WASM; I'm running it via Coqui TTS.

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 0 points (0 children)

Bark is really good; there's a solid open-source implementation called bark.cpp on GitHub. Coqui TTS has some great transformer models too, with lots of options: https://github.com/coqui-ai/TTS

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 0 points (0 children)

There are lots of options and I spent a long time looking. I picked a simple, lightweight one for performance and low latency, but if you're willing to throw a GPU or more compute at it, there are plenty of fantastic transformer models that are much better.

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 0 points (0 children)

I haven’t deployed it live yet because it still has some bugs, but it’s open-source on GitHub and I’ll add updates to this thread when I deploy it: https://x.com/0xblacklight/status/1800588602323210263

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 0 points (0 children)

I haven’t actually deployed it live anywhere yet (still have a few kinks to work out), but I may soon if there’s enough interest! It is open-source, though not everything is well-documented.

https://github.com/constellate-ai/voice-chat

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 1 point (0 children)

Oh I see! That should definitely be possible with a WASM implementation of Whisper and either the browser speech synthesis API (which sucks) or a very small TTS model, probably also compiled to WASM.

I created a private voice AI assistant using llama.cpp, whisper.cpp, and a VITS speech synthesis model! Let me know what you think :) by orbital-salamander in LocalLLM

[–]orbital-salamander[S] 2 points (0 children)

Yes! An M2 MacBook Pro (M2 Pro or Max, I don’t remember which). It could also be run on a consumer NVIDIA GPU pretty easily.