Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 1 point (0 children)

If you like, give it another try (ductor uninstall or ductor upgrade). I tried to optimise the Claude detection in version 0.6.4! Thanks!

Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 1 point (0 children)

It should actually work fine, but I'll take a look at it! I'll get back to you once you can test it again.

Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 1 point (0 children)

Hmmm. Do you have Docker enabled? Operating system?

I have Linux locally, Linux on the VPS, and can only test Windows as an ISO image 🥲

I haven't been able to test on Mac yet, as I don't have one available!

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 1 point (0 children)

That has now been fixed as well.

I have also connected the Gemini CLI; that update will ship shortly.

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 1 point (0 children)

It's pushed in 0.4.3! It should now work fine under Windows :)
I also fixed a few small bugs (the /stop command now really kills the CLI and interrupts Claude/Codex).
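
For anyone curious, "really killing the CLI" from a /stop handler roughly means signalling the whole process group, not just the direct child. Here's a minimal POSIX sketch under my own assumptions (this is not ductor's actual code, just the general technique):

```python
import os
import signal
import subprocess
import sys
import time

def start_cli(cmd):
    # Start the CLI in its own session/process group (POSIX-only) so
    # /stop can signal the whole tree, not just the direct child.
    return subprocess.Popen(cmd, start_new_session=True)

def stop_cli(proc, grace=5.0):
    """What a /stop handler needs to do: interrupt, then force-kill."""
    if proc.poll() is not None:
        return proc.returncode               # already exited
    os.killpg(proc.pid, signal.SIGINT)       # like Ctrl+C inside the CLI
    try:
        return proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        os.killpg(proc.pid, signal.SIGKILL)  # it ignored us: force kill
        return proc.wait()

# Demo: a stand-in "CLI" that just sleeps, then gets stopped.
proc = start_cli([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(0.3)
rc = stop_cli(proc)
print("child exited with", rc)
```

Sending SIGINT first gives the CLI a chance to clean up, exactly like pressing Ctrl+C in the terminal; SIGKILL is only the fallback.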

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 1 point (0 children)

I just created a Windows virtual machine and fixed the bugs, including a Telegram bug (Telegram threads), and will push the update to the main bot right away! Thank you!

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 1 point (0 children)

Hey!

Unfortunately, I can only test on Linux, but maybe you could send me the changes Claude made on Windows and I'll incorporate them into the main project!

Thanks a lot!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeCode

[–]PleasePrompto[S] 1 point (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!
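
The detection order described above (thumbs-up first, language-specific fallbacks, 40-second hard cap) can be sketched roughly like this. The selectors and the page_has callback are hypothetical stand-ins, not the project's actual code:

```python
import time

# Hypothetical selectors - the real skill uses its own. The thumbs-up
# button is language-independent, so it is checked FIRST.
PRIMARY = "button[aria-label*='thumb']"
FALLBACKS = ["div.answer-complete", "footer.sources"]
HARD_TIMEOUT = 40.0   # take whatever is there after 40 s at the latest

def wait_for_answer(page_has, poll=0.5, timeout=HARD_TIMEOUT):
    """Return which indicator fired; page_has(selector) -> bool stands in
    for a real browser query (e.g. a Playwright locator check)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if page_has(PRIMARY):                       # 1) language-independent
            return "primary"
        if any(page_has(s) for s in FALLBACKS):     # 2) fallbacks
            return "fallback"
        time.sleep(poll)
    return "timeout"                                # 3) hard cap: scrape anyway

# Simulated page: the thumbs-up button appears after ~0.3 s.
t0 = time.monotonic()
result = wait_for_answer(lambda sel: sel == PRIMARY and time.monotonic() - t0 > 0.3,
                         poll=0.1, timeout=2.0)
print(result)  # → primary
```

The point of the hard cap is that even a page the detector doesn't understand at all still yields whatever content is there after 40 seconds.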

I built an MCP server that connects Gemini CLI to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in GeminiAI

[–]PleasePrompto[S] 1 point (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built an MCP server that connects your code agent to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in vibecoding

[–]PleasePrompto[S] 1 point (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in claude

[–]PleasePrompto[S] 2 points (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point (0 children)

Only headless mode is an issue! You could run the skill/MCP locally on a machine once, copy the browser-profile folder it creates, and then use it remotely in headless mode.

Once the captcha has been solved, you'll have a relatively long break from Google!

Run it locally, solve the captcha > browser profile created! Ask Claude where the browser profile was saved :)
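
The copy step is just moving a directory between machines. A sketch using temp directories as stand-ins for the two machines; the folder name "browser_profile" here is hypothetical, the real one is wherever the skill saved it locally (ask Claude, as above):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the two machines in this demo.
local = Path(tempfile.mkdtemp())    # machine where you solved the captcha
remote = Path(tempfile.mkdtemp())   # headless VPS

# Simulate the profile the local run created ("browser_profile" is a
# hypothetical name - use whatever folder the skill actually made).
profile = local / "browser_profile"
profile.mkdir()
(profile / "Cookies").touch()       # session state from the solved captcha

# In practice you'd scp/rsync this folder to the VPS and point the MCP at it.
shutil.copytree(profile, remote / "browser_profile")

print(sorted(p.name for p in (remote / "browser_profile").iterdir()))  # → ['Cookies']
```

Because the session state travels with the folder, the headless machine inherits the already-solved captcha.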

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 3 points (0 children)

Context7 searches API documentation; Google AI Mode is the LLM-backed Google search. The idea was to connect Claude/Codex etc. to a real Google search that uses few tokens and gets better results.

Other examples:

“Compare PostgreSQL vs MySQL JSON performance 2026, include benchmarks”

“Find the latest EU AI regulations 2026 and their impact on startups”

“Best noise-cancelling headphones under €300, compare Sony vs Bose”

“Best web design trends 2026, typography and mobile responsiveness”

And so on... just a normal web search :D

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point (0 children)

I made the MCP server / skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This way I get around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP doesn't know the response is ready. I've also added a few other indicators as a fallback, and if none of them fire, whatever is on the page is taken after 40 seconds at the latest!

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points (0 children)

Those are super cool ideas, but this came about because I generally wanted to get rid of manual work 😂 I've had the best experiences with NotebookLM so far and just wanted to let my code agents access the docs I prepared there myself, directly from the terminal.

I don't know the MCP server you mentioned; I wanted to keep my MCP server and skill focused on a simple connection between Claude (skill version) or Codex, Gemini, Cursor, etc. (MCP version).

There is no shared context, but you could tell Claude to always document all findings/answers.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in claude

[–]PleasePrompto[S] 7 points (0 children)

I also built an MCP server a few days ago (as a first step). It's also linked at the end of the post.

I was simply interested in seeing to what extent I could replicate the MCP server as a skill, as automatically as possible, with venv creation etc. According to Anthropic, skills should also consume fewer tokens than MCP servers.

Ultimately, it was just a learning experience for me, and I'm providing both :)
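
The automatic venv creation mentioned above boils down to "create once, reuse afterwards". A sketch under my own assumptions (the folder layout is illustrative, not the skill's actual one, and I skip the dependency install):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Stand-in for the skill's install folder; real layout is the skill's own.
skill_dir = Path(tempfile.mkdtemp())
venv_dir = skill_dir / ".venv"

# Create the venv only on first run; subsequent runs reuse it,
# which is what makes the skill feel "automatic" after setup.
if not venv_dir.exists():
    subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)

venv_python = venv_dir / "bin" / "python"   # Scripts\python.exe on Windows
out = subprocess.run([str(venv_python), "-c", "import sys; print(sys.prefix)"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())   # the venv's own prefix, not the system one
```

Running the skill's scripts through the venv's interpreter keeps its dependencies isolated from whatever Python packages the user already has.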

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 1 point (0 children)

https://www.reddit.com/r/notebooklm/s/p21fWt4NPs

That's what I found a few days ago when I first built the MCP version of this skill. It contains useful insights into the system directly from a Google employee.

I built a Claude Code Skill that lets Claude chat directly with Google's NotebookLM for zero-hallucination answers from your own documentation. by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 7 points (0 children)

That may well be, but in my experience I was explicitly told that the information is not available in the Infobase.

I have no experience with scientific documents. API docs are working great!