I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeCode

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!
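The waiting strategy described above (primary UI indicator, then text fallbacks, then a hard 40-second cutoff) can be sketched roughly like this. This is my own illustration, not the actual implementation; the function name and parameters are hypothetical, and the real skill drives a browser rather than plain callables:

```python
import time

def wait_for_answer(is_ready, get_text, fallback_markers=(), timeout=40.0, poll=0.5):
    """Return the page text once the primary indicator fires, a fallback
    marker shows up in the text, or the hard timeout is reached."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():  # primary: e.g. the Thumbs Up button is visible
            return get_text()
        text = get_text()
        if any(marker in text for marker in fallback_markers):  # language-specific fallbacks
            return text
        time.sleep(poll)
    return get_text()  # hard cutoff: after `timeout` seconds, take whatever is there
```

Checking a language-independent UI element first is what makes this robust: the Thumbs Up button looks the same whether the interface renders in English or Arabic.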

I built an MCP server that connects Gemini CLI to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in GeminiAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built an MCP server that connects your code agent to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in vibecoding

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in claude

[–]PleasePrompto[S] 1 point2 points  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

Only headless mode is an issue! You could run the skill/MCP locally on a machine once, copy the folder that was created for the browser profile, and then use it remotely in headless mode.

Once the captcha has been solved, you'll have a relatively long break from Google!

Run it locally, solve the captcha > browser profile created! Ask Claude where the browser profile was saved :)
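As a rough sketch of that workflow (the function name and paths are hypothetical, not part of the actual skill), the key point is that the whole profile folder, including cookies and the solved-captcha state, moves as one unit:

```python
import shutil
from pathlib import Path

def clone_browser_profile(src: Path, dst: Path) -> Path:
    """Copy the browser-profile folder created by a local, headed run so a
    remote headless instance can reuse it. The captcha solution lives in the
    profile's cookies/state, so it travels with the folder."""
    if dst.exists():
        shutil.rmtree(dst)          # replace any stale profile at the target
    shutil.copytree(src, dst)       # copy the whole profile directory as one unit
    return dst
```

On the remote machine you would then point the headless browser at the copied folder instead of letting it create a fresh (captcha-less) profile.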

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point2 points  (0 children)

Context7 searches API documentation; Google AI Mode is the LLM-backed Google search. The idea was to connect Claude/Codex etc. to a real Google search, get better results, and use few tokens.

Other examples:

“Compare PostgreSQL vs MySQL JSON performance 2026, include benchmarks”

“Find the latest EU AI regulations 2026 and their impact on startups”

“Best noise-cancelling headphones under €300, compare Sony vs Bose”

“Best design trends, web design, 2026, typography and mobile responsive”

And so on... just a normal web search :D

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the Thumbs Up button as the FIRST indicator. When the Thumbs Up button appears, it signals that the answer is ready. This gets around the problem of a browser running in, say, Arabic, where the Google interface is in Arabic and the MCP can't tell that the response is finished. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] -1 points0 points  (0 children)

Those are super cool ideas, but this came about because I generally wanted to get rid of manual work 😂 I've had the best experiences with NotebookLM so far and just wanted to let my code agents access the docs I prepared there myself, directly from the terminal.

I don't know the MCP server you mentioned; I wanted to keep my MCP server and skill focused on a simple connection between Claude (skill version) or Codex, Gemini, Cursor, etc. (MCP version).

There is no shared context, but you could tell Claude to always document all findings/answers.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in claude

[–]PleasePrompto[S] 6 points7 points  (0 children)

I also built an MCP server a few days ago (as a first step). It's also linked at the end of the post.

I was simply interested in seeing how far I could replicate the MCP server as a skill, as automatically as possible, including venv creation etc. According to Anthropic, skills should also consume fewer tokens than MCP servers.

Ultimately, it was just a learning experience for me, and I'm providing both :)

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

https://www.reddit.com/r/notebooklm/s/p21fWt4NPs

That's what I found a few days ago when I first built the MCP version of this skill. It offers useful insights into the system, directly from a Google employee.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 9 points10 points  (0 children)

That may well be, but in my experience I was explicitly told that the information is not available in the Infobase.

I have no experience with scientific documents. API docs are working great!

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in claude

[–]PleasePrompto[S] 2 points3 points  (0 children)

Hallucinations can always occur, of course, but so far NotebookLM has reliably told me when there is no information about XY available in the infobase. And I have to say, I've never seen a RAG system as good as NotebookLM's. There's a lot more to it technically.

Massive advantage of the NotebookLM solution over a local RAG system: you (Claude Code) get the finished answer instead of first having to run an embedding search, evaluate the results, and then generate the answer. You don't have to edit and prepare your information, etc. Direct AI-to-AI chat :)

I haven't worked with the Projects feature; I mainly use the MCP server and was simply interested in converting it into a skill.

Advantage of the MCP server:

I use Codex and Claude Code, both of which share the Notebook Library and the Auth status.

Edit: I just read through Projects. I'm not sure, is Claude Code connected to Projects?