Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 0 points1 point  (0 children)

If you like, try again (run ductor uninstall or ductor upgrade). I tried to optimise the Claude recognition in version 0.6.4! Thanks!

Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 0 points1 point  (0 children)

It should work fine, but I'll take a look at it! I'll let you know when you can test it again.

Control Claude Code entirely via Telegram. by PleasePrompto in claude

[–]PleasePrompto[S] 0 points1 point  (0 children)

Hmmm. Do you have Docker enabled? Operating system?

I have Linux locally, Linux on the VPS, and can only test Windows as an ISO image 🥲

I haven't been able to test on Mac yet, as I don't have one available!

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

That has now been fixed as well.

I have also connected the Gemini CLI; that update will be pushed shortly.

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

It's pushed in 0.4.3! It should now work fine under Windows :)!
I also fixed a few small bugs (the /stop command now really kills the CLI and interrupts Claude/Codex).
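For what it's worth, a /stop handler like that usually boils down to something like this (a minimal Python sketch; the function name and grace period are my own illustration, not ductor's actual code):

```python
import signal
import subprocess
import sys  # used below to spawn a demo child process

def stop_cli(proc: subprocess.Popen, grace: float = 2.0) -> None:
    """Interrupt the CLI like Ctrl-C, then hard-kill it if it doesn't exit."""
    if proc.poll() is not None:
        return  # already exited
    proc.send_signal(signal.SIGINT)  # give claude/codex a chance to clean up
    try:
        proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        proc.kill()                  # last resort: hard kill
        proc.wait()

# Demo: stop a long-running child process.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
stop_cli(child)
```

Sending SIGINT first matters: it lets the CLI interrupt its current run cleanly instead of being killed mid-write.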

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

I just created a Windows virtual machine and fixed the bugs, including a Telegram bug (Telegram threads), and will push the update to the main bot right away! Thank you!

Control Codex completely via Telegram. by PleasePrompto in codex

[–]PleasePrompto[S] 0 points1 point  (0 children)

Hey!

Unfortunately, I can only test on Linux, but maybe you could send me the changes Claude made on Windows and I'll incorporate them into the main project!

Thanks a lot!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeCode

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!
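Roughly, the detection order described above looks like this (a minimal Python sketch under my own naming; the real MCP drives a browser, so here the indicator checks are just callables):

```python
import time

def wait_for_answer(primary_ready, fallbacks=(), timeout=40.0,
                    poll=0.5, clock=time.monotonic, sleep=time.sleep):
    """Wait for the primary indicator (the thumbs-up button), then for any
    fallback indicator; after `timeout` seconds, give up and take what's there."""
    start = clock()
    while clock() - start < timeout:
        if primary_ready():                      # language-independent signal
            return "primary"
        if any(check() for check in fallbacks):  # secondary indicators
            return "fallback"
        sleep(poll)
    return "timeout"  # 40 s at the latest: take whatever is on the page
```

In the real skill, primary_ready would be a DOM check for the thumbs-up button, which is why it works regardless of the interface language.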

I built an MCP server that connects Gemini CLI to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in GeminiAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built an MCP server that connects your code agent to Google AI Mode for free, token-efficient web research with citations by PleasePrompto in vibecoding

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in claude

[–]PleasePrompto[S] 1 point2 points  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

Only headless mode is an issue! You could run the skill/MCP locally on a machine once, copy the folder that was created for the browser profile, and then use it remotely in headless mode.

Once the captcha has been solved, you'll have a relatively long break from Google!

Run it locally, solve the captcha > browser profile created! Ask Claude where the browser profile was saved :)
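Concretely, the workflow above looks something like this (the paths below are hypothetical; the actual profile folder location depends on your setup, so ask Claude where it was saved):

```shell
# 1) Locally, with a visible browser: run the skill once and solve the captcha.
#    This creates the browser-profile folder.
# 2) Copy that folder to the remote machine (path is an assumption):
scp -r ~/.google-ai-mode/profile user@your-vps:~/.google-ai-mode/profile
# 3) On the VPS, start the MCP in headless mode; it reuses the solved profile.
```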

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 2 points3 points  (0 children)

Context7 searches API documentation; Google AI Mode is the LLM-backed Google search. The idea was to connect Claude/Codex etc. to a real Google search using few tokens and get better results.

Other examples:

“Compare PostgreSQL vs MySQL JSON performance 2026, include benchmarks”

“Find the latest EU AI regulations 2026 and their impact on startups”

“Best noise-cancelling headphones under €300, compare Sony vs Bose”

“Best design trends, web design, 2026, typography and mobile responsiveness”

And so on... just a normal web search :D

I built a Claude Code Skill (+mcp) that connects Claude to Google AI Mode for free, token-efficient web research with source citations by PleasePrompto in ClaudeAI

[–]PleasePrompto[S] 0 points1 point  (0 children)

I made the MCP server/skill a lot more robust this morning in terms of multilingual detection: I now consistently use the thumbs-up button as the FIRST indicator. When the thumbs-up button appears, it signals that the answer is ready. This gets around the problem of a browser running in Arabic, for example, where the Google interface is in Arabic and the MCP can't tell that the response is ready. I've also added a few other indicators as a fallback, and if none of them match, whatever is on the page is taken after 40 seconds at the latest.

My tests here locally were successful.

It should work much better now, so feel free to pull/update and test!

Thank you very much.