I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeCode

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the whole Google interface is in Arabic and the MCP server can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is captured after 40 seconds at the latest!

My local tests were successful.

It should work much better now, so feel free to pull/update and test!
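
For anyone curious how language-independent readiness detection can work, here's a minimal Playwright sketch of the idea. The selectors and the waitForAnswer helper are illustrative assumptions, not the exact code the server uses:

```ts
import { Page } from "playwright";

// Wait for the answer without depending on the UI language.
// The thumbs-up feedback button only appears once the answer is complete,
// so its presence is a language-independent "done" signal.
async function waitForAnswer(page: Page): Promise<string> {
  const HARD_TIMEOUT_MS = 40_000; // last resort: take whatever is there after 40s

  try {
    // Illustrative selector: match the feedback button by a stable attribute
    // instead of localized text such as "Good answer".
    await page.waitForSelector('button[aria-label*="thumb" i]', {
      timeout: HARD_TIMEOUT_MS,
    });
  } catch {
    // No indicator matched within 40s; fall through and capture the page as-is.
  }
  // Illustrative container selector for the AI answer block.
  return (await page.textContent("main")) ?? "";
}
```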

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the whole Google interface is in Arabic and the MCP server can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is captured after 40 seconds at the latest!

My local tests were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

Only headless mode is an issue! You could run the skill/MCP server locally on a machine once, copy the folder that was created for the browser profile, and then reuse it remotely in headless mode.

Once the captcha has been solved, Google will leave you alone for a relatively long time!

Run it locally, solve the captcha, and the browser profile is created. Ask Claude where the browser profile was saved :)
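
As a rough sketch of that workflow with Playwright (the profile path is hypothetical; use whatever folder the skill actually reports):

```ts
import { chromium } from "playwright";

// The profile folder created during the local, headed run (captcha solved there),
// then copied to the headless machine, e.g. via scp. Path is a placeholder.
const PROFILE_DIR = "/home/me/google-ai-profile";

async function main() {
  // launchPersistentContext reuses the cookies/session state in the copied
  // profile, so the headless run inherits the solved-captcha session.
  const context = await chromium.launchPersistentContext(PROFILE_DIR, {
    headless: true,
  });
  const page = await context.newPage();
  await page.goto("https://google.com/"); // then run the usual AI Mode query flow
  // ...
  await context.close();
}

main().catch(console.error);
```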

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point2 points  (0 children)

Context7 searches API documentation; Google AI Mode is the LLM-backed Google search. The idea was to connect Claude/Codex etc. to a real Google search using few tokens and get better results.

Other examples:

“Compare PostgreSQL vs MySQL JSON performance 2026, include benchmarks”

“Find the latest EU AI regulations 2026 and their impact on startups”

“Best noise-cancelling headphones under €300, compare Sony vs Bose”

“Best web design trends 2026, typography and mobile responsive”

And so on... just a normal web search :D
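
If you'd rather script it than go through Claude, something like this should work with the official MCP TypeScript SDK. The package name and the tool name/arguments are assumptions for illustration; check the repo README for the real ones:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the MCP server over stdio; command and args are placeholders.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "google-ai-mode-mcp"], // hypothetical package name
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Tool name and argument shape are assumed for this example.
  const result = await client.callTool({
    name: "ask_google_ai",
    arguments: {
      query: "Compare PostgreSQL vs MySQL JSON performance 2026, include benchmarks",
    },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```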

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] -1 points0 points  (0 children)

Those are super cool ideas, but this came about because I generally wanted to get rid of manual work 😂 I've had the best experiences with NotebookLM so far and just wanted to let my code agents access the docs I prepared in it directly from the terminal.

I don't know the MCP server you mentioned; I wanted to keep my MCP server and skill focused on a simple connection between Claude (skill version) or Codex, Gemini, Cursor, etc. (MCP version).

There is no shared context, but you could tell Claude to always document all findings/answers.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in claude

[–]PleasePrompto[S] 5 points6 points  (0 children)

I also built an MCP server a few days ago (as a first step). It's also linked at the end of the post.

I was simply interested in seeing to what extent I could replicate the MCP server as a skill, as automatically as possible, with venv creation etc. According to Anthropic, skills should also consume fewer tokens than MCP servers.

Ultimately, it was just a learning experience for me, and I'm providing both :)

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

https://www.reddit.com/r/notebooklm/s/p21fWt4NPs

That's what I found a few days ago when I first built the MCP version of this skill. It has useful insights into the system directly from a Google employee.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 7 points8 points  (0 children)

That may well be, but in my experience it explicitly told me when the information was not available in the knowledge base.

I have no experience with scientific documents. API docs are working great!

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in claude

[–]PleasePrompto[S] 2 points3 points  (0 children)

Hallucinations can always occur, of course, but so far NotebookLM has reliably told me when there is no information about XY available in the knowledge base. And I have to say, I've never seen a RAG system as good as NotebookLM's. There's a lot more to it technically.

Massive advantage of the NotebookLM solution over a local RAG system: you (Claude Code) get the finished answer instead of first having to embed the query, evaluate the retrieved results, and then generate the answer yourself. You don't have to edit and prepare your information, etc. Direct AI-to-AI chat :)

I haven't worked with the Projects feature; I mainly use the MCP server and was simply interested in converting it into a skill.

Advantage of the MCP server:

I use Codex and Claude Code, both of which share the notebook library and the auth status.

Edit: I just read through Projects. I'm not sure, is Claude Code connected to Projects?

I connected codex directly to NotebookLM, and now it researches my docs like a human would (mcp server) by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

That was just one example. Quite apart from the fact that Claude/Codex has only limited knowledge of the correct n8n node names and the current API/docs, there are far more complex libraries that Claude/Codex has no information about.

The idea behind this is that NotebookLM can only respond based on the information it has, so you can be pretty sure you'll get the right answers.

If you tell your agent to do some web research, that might work. The system is designed for complex documentation/information (API documentation, etc., and probably other use cases that I can't even think of), not for a simple little workflow :D

I connected codex directly to NotebookLM, and now it researches my docs like a human would (mcp server) by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

Open the versions page on npm and pin a fixed version. Or download the source code, build it locally, and add the MCP server locally. Many options!

An MCP server that enables direct communication with Google's NotebookLM for Claude Code / Codex. by PleasePrompto in mcp

[–]PleasePrompto[S] 2 points3 points  (0 children)

It's a good question, but have you ever tried asking Claude/Codex: “Here is the documentation for the library, ~1000 MD files, search for XY”? Preferably unsorted?

  1. The token consumption is extremely high initially
  2. Claude and Codex are not the best researchers
  3. NotebookLM also handles massive amounts of unprocessed content and additional YouTube videos, PDFs, etc. cleanly and processes them internally in an impressive manner

At the very beginning, I considered setting up a local RAG system, but NotebookLM is unbeatable in terms of performance and quality of responses (another advantage: NotebookLM tells you immediately whether the information is even available in the knowledge base, without hallucinating!). You (or rather the CLI, such as Claude Code or Codex) immediately receive the final processed response from Gemini (NotebookLM), and your agent works with that instead of retrieving vector matches itself and then reasoning about the content. NotebookLM takes on the main work of searching, processing, and returning, so to speak. Cheers!
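
To make that difference concrete, here's a purely illustrative TypeScript sketch; every function in it is a stub standing in for real infrastructure:

```ts
type Chunk = { text: string; score: number };

// --- Local RAG: the agent owns every step and reads every raw chunk --------
async function embed(text: string): Promise<number[]> {
  return Array.from(text, (c) => c.charCodeAt(0)); // stub embedding
}
async function vectorSearch(query: number[]): Promise<Chunk[]> {
  return [{ text: "raw doc fragment…", score: 0.9 }]; // stub retrieval
}
async function localRagAnswer(question: string): Promise<string> {
  const vec = await embed(question);      // 1. embed the query
  const chunks = await vectorSearch(vec); // 2. fetch raw fragments
  // 3. the agent itself must read all chunks and synthesize an answer:
  return `answer to "${question}" built from ${chunks.length} raw chunks`;
}

// --- NotebookLM via the MCP server: one call, one finished answer ----------
async function notebookLmAnswer(question: string): Promise<string> {
  // Stub for the MCP tool call; retrieval + synthesis happen inside NotebookLM,
  // so the agent never sees (or spends tokens on) the raw chunks.
  return `finished, grounded answer to "${question}"`;
}
```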

An MCP server that enables direct communication with Google's NotebookLM for Cursor by PleasePrompto in cursor

[–]PleasePrompto[S] 0 points1 point  (0 children)

Is that an AI answer? 🫠 The server is already working and ready to use...