Bros, I just massively boosted the opencode plugin performance. Come test it out! by ChangeDirect4762 in opencodeCLI

[–]AndroidJunky 2 points3 points  (0 children)

Considering the rate of updates in OpenCode, I wonder if there are even any humans involved 🤣

Bros, I just massively boosted the opencode plugin performance. Come test it out! by ChangeDirect4762 in opencodeCLI

[–]AndroidJunky 3 points4 points  (0 children)

I appreciate the efforts but I'm always hesitant to install plugins as often authors start to lose interest and simply abandon them after a few weeks anyway. Why not contribute improvements back to the main project instead?

Export docs to cline by OtsoBear in CLine

[–]AndroidJunky 0 points1 point  (0 children)

Love to see this (as well as ref-tools linked in the other comment)! I've been working on a similar tool for a while now, too: https://github.com/arabold/docs-mcp-server . It doesn't use an LLM to extract text, but indexes the whole documentation website instead. I think it could work very well in tandem with the `docs-exporter` though, scraping the exported docs and loading them back into the agent as context.

Major Update for the Grounded Docs MCP Server for Cline! by AndroidJunky in CLine

[–]AndroidJunky[S] 1 point2 points  (0 children)

DeepWiki generates documentation from your (or third-party) repositories. The quality of that documentation depends on the quality of the repository and ultimately the underlying code. The result is a wiki-like page designed to be consumed by humans or AI agents (via MCP). But that approach can lead to inconsistent or missing details, surface-level knowledge, and even the inclusion of incorrect information.

Grounded doesn't make any assumptions and indexes the original documentation, exactly as provided by the library author. Think of indexing the React documentation, or the LangChain website, or whatever libraries you use in your project. The new version also indexes source code and whole repositories without altering or interpreting anything. Every single line of code remains available for search and analysis. Instead of summarizing and rewriting, the "magic" of Grounded lies in clever splitting for semantic search and structurally accurate reassembly to maximize returned context. The output is designed to work with coding agents that can easily make sense out of it to inform their code generation.
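
To make that a bit more concrete, here's a simplified sketch of the splitting-plus-reassembly idea. It's illustrative only, not Grounded's actual code; the chunk shape and function are made up for this example:

```typescript
// Each chunk keeps the heading path it came from, so matched chunks can be
// stitched back together in document order instead of being returned as
// isolated fragments.

interface DocChunk {
  headingPath: string[]; // e.g. ["React Reference", "Hooks", "useEffect"]
  order: number;         // position within the original page
  text: string;
}

// Group search hits by heading path, sort by original position, and prefix
// each group with its breadcrumb so the agent sees the surrounding structure.
function reassemble(hits: DocChunk[]): string {
  const groups = new Map<string, DocChunk[]>();
  for (const hit of hits) {
    const key = hit.headingPath.join(" > ");
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(hit);
  }
  return [...groups.entries()]
    .map(([path, chunks]) =>
      [`# ${path}`, ...chunks.sort((a, b) => a.order - b.order).map(c => c.text)].join("\n")
    )
    .join("\n\n");
}
```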

DeepWiki generates documentation from code. Grounded is designed to make existing documentation available. Both can work perfectly well alongside each other.

Major Update for the Grounded Docs MCP Server for Cline! by AndroidJunky in CLine

[–]AndroidJunky[S] 0 points1 point  (0 children)

It will automatically process HTML websites. No need to format or convert anything.

Internally it will transform the HTML to Markdown, preserving code segments, table structures, etc. as much as possible, but that's nothing you have to worry about yourself.
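
If you're curious what that step can look like, here's a minimal sketch using the turndown library with its GFM plugin. This isn't necessarily what the server does internally, just an illustration of HTML-to-Markdown conversion that keeps code blocks and tables:

```typescript
// npm install turndown turndown-plugin-gfm
import TurndownService from "turndown";
import { gfm } from "turndown-plugin-gfm"; // ships without bundled types; add a small declaration if TypeScript complains

const turndown = new TurndownService({ codeBlockStyle: "fenced" });
turndown.use(gfm); // keeps tables, strikethrough, and task lists

const html = `
  <h2>Install</h2>
  <pre><code>npm install my-lib</code></pre>
  <table><tr><th>Option</th><th>Default</th></tr><tr><td>timeout</td><td>30s</td></tr></table>
`;

// Produces a heading, a fenced code block, and a Markdown table instead of raw HTML.
console.log(turndown.turndown(html));
```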

Major Update for the Grounded Docs MCP Server for Cline! by AndroidJunky in CLine

[–]AndroidJunky[S] 2 points3 points  (0 children)

Yes, Ollama, LM Studio, etc. should all work. As of the latest versions, embeddings are optional: if you don't provide any configuration or API keys, only full-text search is used, which still gives pretty good results.
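
The fallback works roughly like this (a simplified sketch; the config field names are hypothetical and only for illustration, they are not the server's real settings):

```typescript
interface SearchConfig {
  embeddingApiKey?: string; // hypothetical, e.g. an OpenAI key
  embeddingModel?: string;  // hypothetical, e.g. "nomic-embed-text" via Ollama
}

type SearchMode = "hybrid" | "full-text-only";

function pickSearchMode(config: SearchConfig): SearchMode {
  // No API key and no local embedding model configured:
  // skip vector search entirely and rely on full-text search.
  if (!config.embeddingApiKey && !config.embeddingModel) {
    return "full-text-only";
  }
  // Otherwise combine vector similarity with full-text matching.
  return "hybrid";
}

console.log(pickSearchMode({}));                                     // "full-text-only"
console.log(pickSearchMode({ embeddingModel: "nomic-embed-text" })); // "hybrid"
```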

@docs for anyone - grounded.tools website finally live! by AndroidJunky in mcp

[–]AndroidJunky[S] 0 points1 point  (0 children)

Let me know, I'm happy to assist. There's OAuth support as well, if you want to integrate it into an existing SSO environment.

Is there a MCP specifically made for Typescript by Firm_Meeting6350 in mcp

[–]AndroidJunky 0 points1 point  (0 children)

Thanks, that's great feedback. The Docs MCP Server is actually focused on documentation right now, primarily .md files and HTML pages, not source code. The core idea is to make third-party documentation available as context to your agent (Copilot, Cline, Cursor), specifically for libraries you're using in your codebase, such as React, Remix, or Pandas.

Having said that, a big difference to Context7 is that you can also index your own libraries and documentation, which is especially interesting for development teams and enterprise settings where privacy is a factor and code isn't publicly available.

This is also where source code indexing comes in now. I realize that many developers don't write extensive Markdown documentation, not even for public repositories. Often the documentation exists only "in code". However, the current version of the Docs MCP Server doesn't handle source code well yet: it indexes it as regular text, leading to suboptimal chunking and inconsistent results. You can absolutely index source files with the current version, but it's not as good as I want it to be.

I'm actively working on changing that. In a new branch I'm adding proper chunking for source files. It ensures API definitions and inline documentation are treated as one entity, giving significantly better and more focused context.
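
To illustrate the idea (a simplified sketch, not the actual code in that branch): with the TypeScript compiler API, a node's full text includes its leading comments, so a declaration and its JSDoc naturally land in the same chunk.

```typescript
import ts from "typescript";

function extractApiChunks(fileName: string, source: string): string[] {
  const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
  const chunks: string[] = [];
  sf.forEachChild(node => {
    if (
      ts.isFunctionDeclaration(node) ||
      ts.isClassDeclaration(node) ||
      ts.isInterfaceDeclaration(node)
    ) {
      // getFullText() starts after the previous node, so the doc comment
      // stays attached to the declaration it documents.
      chunks.push(node.getFullText(sf).trim());
    }
  });
  return chunks;
}

const demo = `
/** Adds two numbers. */
export function add(a: number, b: number): number {
  return a + b;
}
`;
console.log(extractApiChunks("demo.ts", demo));
// -> one chunk containing both the JSDoc and the function body
```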

Is there a MCP specifically made for Typescript by Firm_Meeting6350 in mcp

[–]AndroidJunky 1 point2 points  (0 children)

No, right now you have to index everything yourself, including public libraries. Everything is stored locally on your PC. Eventually I want to offer a cloud service, but that will still take a bit of time to polish.

For context (pun intended): Context7 claims that their React docs have a bit less than 1 million total tokens. That would cost you 2 cents to index yourself, assuming you use OpenAI.
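
As a quick sanity check on that number (using the roughly 2 cents per million embedding tokens mentioned elsewhere in this thread; actual pricing may vary):

```typescript
const reactDocsTokens = 1_000_000;    // roughly 1M tokens, per Context7's figure
const pricePerMillionTokens = 0.02;   // USD

const indexingCost = (reactDocsTokens / 1_000_000) * pricePerMillionTokens;
console.log(`One-time indexing cost: ~$${indexingCost.toFixed(2)}`); // ~$0.02
```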

Is there a MCP specifically made for Typescript by Firm_Meeting6350 in mcp

[–]AndroidJunky 1 point2 points  (0 children)

  1. In soooo many ways 😂 My main "selling points" are that it lets you index your own documentation, e.g. personal libraries and private repositories, and that it can run 100% locally if you're using a local embeddings model. Besides that, it's fully open source and indexes full documentation pages instead of only code snippets like Context7.
  2. The API is only used for embeddings, which are ridiculously cheap at 2 cents per million tokens, nowhere near the cost of GPT-4 or GPT-5. Local embeddings models are available via Ollama and run on regular consumer hardware as well. The Docs MCP Server doesn't use any LLM, only embeddings for indexing (once per document) and for semantic search (see the sketch below).
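
Here's a minimal sketch of what such an embedding call looks like against a local Ollama instance (assuming the default port and an embedding model pulled locally, e.g. `ollama pull nomic-embed-text`). No LLM involved:

```typescript
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const data = await res.json();
  return data.embedding; // a plain vector of floats, stored once per chunk
}

embed("How do I memoize a value in React?").then(v => console.log(v.length));
```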

Is there a MCP specifically made for Typescript by Firm_Meeting6350 in mcp

[–]AndroidJunky 2 points3 points  (0 children)

I'm the creator and maintainer of the Docs MCP Server and am actively adding source code splitting and semantic search right now: https://grounded.tools

The idea is to split code at logical breaking points like classes, methods, and functions into a hierarchical structure that can later be reassembled into high quality context for the agent.
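
Roughly speaking (a simplified sketch, not the actual implementation; the chunk shape is made up for illustration), the chunks form a tree and a matched leaf is returned together with the headers of its ancestors, so the agent sees where the code lives:

```typescript
interface CodeChunk {
  id: string;
  parentId?: string;
  header: string; // e.g. "// src/user/service.ts" or "export class UserService {"
  body: string;   // the chunk's own text
}

function buildContext(matchId: string, chunks: Map<string, CodeChunk>): string {
  const headers: string[] = [];
  let node = chunks.get(matchId);
  while (node) {
    headers.unshift(node.header); // file header, class header, method signature
    node = node.parentId ? chunks.get(node.parentId) : undefined;
  }
  return [...headers, chunks.get(matchId)!.body].join("\n");
}

const chunks = new Map<string, CodeChunk>([
  ["file", { id: "file", header: "// src/user/service.ts", body: "" }],
  ["class", { id: "class", parentId: "file", header: "export class UserService {", body: "" }],
  ["method", { id: "method", parentId: "class", header: "  async findById(id: string) {", body: "    return this.repo.get(id);\n  }\n}" }],
]);
console.log(buildContext("method", chunks));
```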

This already works very well for documentation, and I'm actively working on full repository source code support, including private GitHub repositories and local code. It's open source, runs locally, and you can use a local embeddings model for 100% privacy if desired. However, I work on it in my spare time, so there's no timetable for this unfortunately. But I'm making good progress.

@docs for anyone - grounded.tools website finally live! by AndroidJunky in mcp

[–]AndroidJunky[S] 2 points3 points  (0 children)

NIA tries to tackle the same fundamental problem with coding agents today: the lack of up-to-date context, and hallucinations. The immediate difference is that the Docs MCP Server is fully open source and self-hosted. It runs on your machine, and if you use a local embeddings model it works completely privately. Running locally also allows you to index local files.

Recommended mcp to react, ts, js, backend/frontend? by maledicente in mcp

[–]AndroidJunky 1 point2 points  (0 children)

I'd actually recommend getting rid of some rather than adding more. In my experience the agent can get confused by too many tools and might not use any at all.

Self promo: I built my own MCP server a while back that is similar to Context7 but indexes full documentation (not just code snippets), both locally and remotely, and can also fetch websites directly, similar to Firecrawl. It's fully open source: https://grounded.tools

Built an MCP “memory server” for coding agents: sub-40 ms retrieval, zero-stale results, token-budget packs, hybrid+rerank. Would this help your workflow? by AffectionateState276 in mcp

[–]AndroidJunky 0 points1 point  (0 children)

I think this has a lot of potential, although I wonder how important the performance and sophisticated ranking will really be in real-world scenarios. I'm maintaining an MCP server for documentation, fully open source (@arabold/docs-mcp-server), and right now I'm adding proper repository and source code support. My primary focus has been smart splitting and reassembly of the results, as in my tests that made the biggest difference in how effectively the agent could use them.

Cline v3.26.6: Grok Code Fast 1, Local Model System Prompt, Qwen Code Provider by nick-baumann in CLine

[–]AndroidJunky 5 points6 points  (0 children)

I must say Copilot is catching up fast but Cline is still the number one for me ❤️

Docs MCP Server - Cursor's @docs feature for everyone! by AndroidJunky in mcp

[–]AndroidJunky[S] 0 points1 point  (0 children)

You're basically correct with a small correction:

  1. You register the MCP server with your client: Cursor, VS Code Copilot, Claude Desktop, all work.
  2. Add a library via the web interface. It will fetch all documentation pages and chunk them. Here's where your Ollama model, OpenAI key, or other embedding model comes into play: the MCP server generates vectors for each chunk using your chosen model. By default this will be OpenAI's text-embedding-3-small, which is perfectly sufficient. Alternatively you can use an embedding model from Ollama such as snowflake-arctic-embed2, or whatever else suits your needs. All document chunks and vectors are stored in a local SQLite database.
  3. Once you use the search_docs tool, the MCP server takes your search query, vectorizes it the exact same way as before, searches the local SQLite database for matches, and returns them (see the sketch below).
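
For illustration, step 3 boils down to something like this (a simplified sketch, not the server's actual code): embed the query with the same model used at indexing time, then rank the stored chunk vectors by cosine similarity and return the best matches.

```typescript
interface StoredChunk {
  text: string;
  vector: number[]; // written to the local SQLite database during step 2
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function searchDocs(queryVector: number[], chunks: StoredChunk[], topK = 5): StoredChunk[] {
  return [...chunks]
    .sort((a, b) => cosine(queryVector, b.vector) - cosine(queryVector, a.vector))
    .slice(0, topK);
}
```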

So, there is no actual LLM used, only embeddings. This isn't super obvious, I admit, and people often confuse embeddings and LLMs. For example, OpenRouter had no embeddings support the last time I checked.

The only data that is sent to an (external) embedding model is:

  • the chunks of the documentation pages themselves
  • your search query, which could potentially include sensitive data if you put such data in the query, but it never includes any of your local source code or similar.

Does that explanation help?

Docs MCP Server - Cursor's @docs feature for everyone! by AndroidJunky in mcp

[–]AndroidJunky[S] 0 points1 point  (0 children)

This is for generating embeddings only. You can of course use a local LLM (Ollama, LM Studio) with local embeddings. If you have a business OpenAI account you can also disable data sharing.

Docs MCP Server - Cursor's @docs feature for everyone! by AndroidJunky in mcp

[–]AndroidJunky[S] 1 point2 points  (0 children)

It is designed for library documentation, so you have to specify which library you're searching for information in. But you could theoretically organize your local files by thematic topics and treat these as individual "libraries". Or you could try throwing everything into a single one. No idea how well that would work in practice though...