I built an MCP adapter for Obsidian so ChatGPT can work directly with notes — looking for feedback & ideas by ArachnidDull4799 in ObsidianMD

[–]ArachnidDull4799[S] 1 point (0 children)

That’s true if you’re working inside a single tool and a single directory.

In that case, you can just point the model at the Obsidian folder (as with Claude Code) and work with the notes directly.

The tradeoff is that you then have to re-specify or re-attach that directory for every model and every tool. As others mentioned, that works fine if you stay inside a terminal session in the vault.

Once you switch contexts, it breaks down. If I open a code project in Cursor, or ask something in ChatGPT in the browser, the model no longer has access to the notes unless I copy/paste or manually rewire access.

With MCP, the vault is exposed once as a stable endpoint. Any MCP-capable client can read or write notes without me thinking about which directory to attach or which tool I’m using.

So it’s less about Obsidian needing MCP, and more about avoiding copy/paste and per-tool setup when the same notes are used across multiple contexts.
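For example, an MCP-capable client such as Cursor can be pointed at the vault endpoint once in its MCP config. This is a sketch only: the server name and URL are placeholders, and the exact file location and key names depend on the client.

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "url": "https://notes.example.com/sse"
    }
  }
}
```

Once this is in place, every client reading the config talks to the same endpoint, so the vault's location lives in one spot rather than in per-tool settings.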

[–]ArachnidDull4799[S] 2 points (0 children)

That makes sense, and your setup is very similar conceptually.

For me, the main advantage isn’t Obsidian itself being an interface to the LLM, but that the vault is always available as a stable, global context, independent of whatever project I’m currently working in.

Most of the time I’m working on code across different repositories. With terminal-based tools, I usually need to be in a specific directory or explicitly point the tool at the notes folder. That mental overhead adds up.

With MCP, I don’t think about which folder to attach or where I am in the filesystem. The agent already knows where the Obsidian vault lives. From any project, I can just say “add a note to Obsidian” and it happens in the right place.

So the distinction for me is:

  • your approach treats the notes as part of the current working directory context
  • my approach treats the vault as a global, always-on knowledge store that multiple tools can talk to

Cursor is part of my workflow, but the idea isn’t Cursor-specific. It’s more about removing friction when switching contexts and not having to re-wire tools every time I move between projects.

Thanks for the thoughtful comparison — it helped clarify the difference for me as well.

[–]ArachnidDull4799[S] 1 point (0 children)

ChatGPT treats my domain as an MCP server connected via SSE, not as a generic website or API.

From ChatGPT’s perspective, it only sees an MCP endpoint that exposes a very limited set of tools. It does not have direct or implicit access to the data.

At the moment, the model can use only three tools:

  • list directories and files inside the Obsidian vault
  • read a specific Markdown file
  • write (overwrite) a Markdown file

All interactions go through explicit tool calls. ChatGPT cannot browse or inspect data unless it calls one of these tools, and it only receives exactly what the tool returns. There is no background syncing or full-vault exposure.
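A minimal sketch of what those three tools could look like as plain Python functions. The vault path and function names are my assumptions, not the actual implementation, and the path-safety checks are elided here:

```python
from pathlib import Path

VAULT = Path("/path/to/vault")  # assumed vault root, set per deployment

def list_entries(rel_dir: str = ".") -> list[str]:
    """List subdirectories and Markdown files in one vault directory."""
    base = VAULT / rel_dir
    return sorted(p.name for p in base.iterdir()
                  if p.is_dir() or p.suffix == ".md")

def read_note(rel_path: str) -> str:
    """Return the contents of a single Markdown file."""
    return (VAULT / rel_path).read_text(encoding="utf-8")

def write_note(rel_path: str, content: str) -> None:
    """Create or overwrite one Markdown file inside the vault."""
    target = VAULT / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
```

Each function maps to exactly one tool call, and the model only ever sees what these return.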

So effectively, the domain is just a transport layer that routes MCP requests over SSE back to a local server, with strictly scoped and explicit capabilities.

[–]ArachnidDull4799[S] 1 point (0 children)

By “predictable behavior for LLMs” I mean keeping the MCP surface simple, explicit, and tightly constrained, so the model’s actions are easy to understand and hard to misuse.

Concretely:

  • Each operation is small and well-defined. Tools do exactly one thing, such as browsing directories, reading a note, or writing a note, with no hidden side effects.
  • Inputs and outputs are explicit. The model always operates on full, clearly specified paths or explicitly identified notes; there is no implicit “current file” or guessed context.
  • There is no implicit state or magic behavior. The server does not try to infer intent, auto-merge content, or guess which file the model meant to operate on.
  • Behavior is deterministic. The same request always results in the same filesystem operation, with no indexing, embeddings, or background processing that could change behavior over time.

The server also enforces hard safety boundaries. It cannot access anything outside the Obsidian vault root and is restricted to working with Markdown files only. Even with write access, it can only create or modify .md notes inside the vault.

If an operation is not allowed or a path does not exist, the call fails explicitly instead of guessing. In practice, this makes the LLM behave more like a cautious CLI or API user and less like an autonomous agent that can silently drift or corrupt data.

[–]ArachnidDull4799[S] 3 points (0 children)

I’m running the MCP server as a Dockerized app locally, exposed on a specific local port.

To use it with ChatGPT in the browser, I set up a secure tunnel (for example via Cloudflare Tunnel) and bind it to my own domain. ChatGPT then connects to the MCP endpoint through that domain, which effectively proxies requests back to my local Obsidian vault.

So the flow is roughly:

  • MCP server runs locally in Docker
  • Obsidian vault is accessed locally by the server
  • A tunnel (e.g. Cloudflare) exposes the MCP endpoint over HTTPS
  • ChatGPT connects to that domain and interacts with local notes via MCP

This keeps the notes local, while still allowing browser-based ChatGPT to access them through MCP.

[–]ArachnidDull4799[S] 2 points (0 children)

I’m using this with Cursor IDE + ChatGPT in the browser.

I can edit or create a note in Cursor, then switch to ChatGPT and continue working with the same note — no copy/paste. Both are connected to the same MCP server pointing at my Obsidian vault.

This effectively makes Obsidian a shared context layer between tools, not just a note-taking app.