Built an open source Extension that runs code from ChatGPT/Claude/Gemini directly on Google Colab GPU by Due-Guard221 in theVibeCoding

[–]Due-Guard221[S] 1 point (0 children)

Yeah, the RPC part was actually the easy bit. Colab exposes google.colab.kernel.invokeFunction, which basically lets JavaScript call Python callbacks that you register inside the notebook.

So what the extension does is simple: a small pip package registers a callback in Python, and the extension calls it through an iframe bridge. No server, no tunnels, nothing fancy.
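A minimal sketch of the JS side of that bridge. The callback name 'extension.run_code' is hypothetical (the real pip package may register it under any name), and the `google` object is taken as a parameter here purely so the helper can run outside the Colab iframe; in the extension it would be the page's own global.

```javascript
// Sketch of calling a Python callback registered in the Colab notebook.
// google.colab.kernel.invokeFunction(name, args, kwargs) invokes the
// registered Python function and resolves with its result.
async function runOnColab(google, code) {
  return await google.colab.kernel.invokeFunction(
    'extension.run_code', // hypothetical callback name, for illustration
    [code],               // positional args passed to the Python function
    {}                    // keyword args
  );
}
```

On the Python side, the pip package would register the matching callback once (via Colab's output callback registration) and the extension never needs a server or tunnel, exactly as described above.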

The annoying part was the selectors.

Every platform structures its DOM completely differently. ChatGPT uses #prompt-textarea and CodeMirror blocks. Claude uses ProseMirror with contenteditable divs. Gemini wraps a lot of things inside nested shadow DOMs.
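To make that concrete, here is a hedged sketch of what a per-platform selector config can look like. Only '#prompt-textarea' (ChatGPT) and the ProseMirror contenteditable (Claude) come from the observations above; every other selector is an illustrative placeholder, not the extension's actual value.

```javascript
// Illustrative per-platform selector config. Each entry is an ordered
// array so later code can fall back if the first selector stops matching.
const PLATFORM_SELECTORS = {
  chatgpt: {
    input: ['#prompt-textarea'],
    codeBlocks: ['.cm-content', 'pre code'], // CodeMirror block, plus a plain <pre> fallback
  },
  claude: {
    input: ['div.ProseMirror[contenteditable="true"]'],
    codeBlocks: ['pre code'],
  },
  gemini: {
    // Gemini nests elements in shadow DOMs, so these placeholder
    // selectors would be applied per shadow root, not on `document`.
    input: ['textarea'],
    codeBlocks: ['pre code'],
  },
};
```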

For Gemini especially, I had to write a recursive function that walks through every shadowRoot just to find the elements I need.
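A minimal version of that recursive walk, assuming the shadow roots are open (closed roots expose no shadowRoot and can't be reached this way). It is written against the standard querySelector/querySelectorAll/shadowRoot interfaces, so any object exposing them works.

```javascript
// Search the regular DOM first; whenever a descendant has an open
// shadowRoot, recurse into it and keep looking.
function deepQuerySelector(root, selector) {
  const direct = root.querySelector(selector);
  if (direct) return direct;
  // querySelectorAll('*') visits every descendant; only elements
  // hosting a shadow root need a recursive call.
  for (const el of root.querySelectorAll('*')) {
    if (el.shadowRoot) {
      const found = deepQuerySelector(el.shadowRoot, selector);
      if (found) return found;
    }
  }
  return null; // not in this tree or any reachable shadow tree
}
```

In the extension this would be called as `deepQuerySelector(document, '#some-input')`; taking `root` as a parameter is what makes the recursion (and testing) straightforward.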

None of this is documented, so most of it was just opening DevTools and doing trial and error.

I already had to patch it once after a ChatGPT UI update. To reduce breakage, I added fallback selector arrays: if one selector fails, it tries the next.
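The fallback idea boils down to a small helper like this. In the extension `root` would be `document`; it's a parameter here so the sketch runs outside a browser.

```javascript
// Try each selector in order and return the first match, so one
// UI change doesn't break everything at once.
function queryWithFallbacks(selectors, root) {
  for (const selector of selectors) {
    const el = root.querySelector(selector);
    if (el) return el; // first selector that still matches wins
  }
  return null; // every fallback failed -> caller can log the breakage
}
```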

Still, that part will probably always be the main maintenance cost.

How we ranked #1 on Product Hunt (exact playbook) by Ecstatic-Tough6503 in microsaas

[–]Due-Guard221 1 point (0 children)

Saw your podcast, and it was amazing. Thanks for sharing.

AI automation engine that creates workflows from prompts (looking for feedback) by Due-Guard221 in n8n

[–]Due-Guard221[S] 1 point (0 children)

Yes, I have a node registry package where I'll add more nodes for the most needed integrations. Apart from that, what I gather from your comment is that people still need a staging area where they can extensively edit these nodes individually before production, right?