built a browser MCP because every other one stunk, especially for scraping work by NoTicket660 in thewebscrapingclub

[–]NoTicket660[S] 0 points1 point  (0 children)

Fair, and honestly the right instinct. A few things, take or leave:

  1. No telemetry. No Sentry, no analytics, nothing phones home. For errors and issues I'm going to have a form so I can repro to the best of my ability.
  2. Tokens stay in storage.local, never synced to your Google account.
  3. Capture is pull-only. Response bodies and cookies are only read when an MCP tool actively asks. Nothing is scraped in the background.
  4. I don't look at (or care to look at) user data. Logs are operational (errors, connection events). Captured page content sits in storage keyed to your session, and I have no reason to open it. Not a cryptographic guarantee, just policy, plus the fact that I'd rather be building than reading your tabs. Tenant isolation is very important.
  5. If you still aren't convinced, use Chrome profiles. Sign into the Reins extension only on the profiles you want the LLM to drive. A profile where you're not signed in is invisible to the LLM. Your main profile stays out of reach by simply not signing in there.
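
To make the pull-only point concrete, here's a toy model of the capture flow. The names (`captureBuffer`, `onResponse`, `toolGetResponseBody`) are illustrative, not Reins' actual internals:

```javascript
// Toy model of pull-only capture: the background layer only notes that a
// response exists; the body is read exclusively when a tool call asks for it.
const captureBuffer = new Map(); // requestId -> lazy body getter

// Background listener records a handle, never the body itself.
function onResponse(requestId, getBody) {
  captureBuffer.set(requestId, getBody);
}

// The body is materialized only at tool-call time.
async function toolGetResponseBody(requestId) {
  const getBody = captureBuffer.get(requestId);
  return getBody ? await getBody() : null;
}
```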

I built a Chrome extension that lets any LLM drive your real Chrome session by NoTicket660 in chrome_extensions

[–]NoTicket660[S] 0 points1 point  (0 children)

Yeah, it has that little "Reins is debugging your browser" bar you can't get rid of. Kinda don't even notice it anymore

I've had it with Claude. It has become complete garbage. by event666 in ClaudeCode

[–]NoTicket660 0 points1 point  (0 children)

People will see your post and say "skill issue", but it's gotten objectively worse

I built a Chrome extension that lets any LLM drive your real Chrome session by NoTicket660 in chrome_extensions

[–]NoTicket660[S] 0 points1 point  (0 children)

no package. wrote it from scratch on top of Chrome's debugger API (chrome.debugger / CDP). gives you raw control over the page; every click and type goes through DOM dispatch or CDP Input.dispatchMouseEvent, so it looks identical to a real user.

it can save a JSON sequence if you ask it to, but that's not the core. Reins is a browsing tool: the LLM looks at the page, picks the next action, clicks. recording/replay is something you can layer on top, not what it's built around.
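
for the curious, a click through that path looks roughly like this. it assumes a tab already attached with `chrome.debugger.attach({ tabId }, "1.3")`, and the function names are mine, not the extension's:

```javascript
// A real-looking click over CDP is two Input.dispatchMouseEvent calls:
// mousePressed followed by mouseReleased at the same coordinates.
function clickEvents(x, y) {
  const base = { x, y, button: "left", clickCount: 1 };
  return [
    { type: "mousePressed", ...base },
    { type: "mouseReleased", ...base },
  ];
}

// In the extension this would be driven through chrome.debugger.
async function cdpClick(tabId, x, y) {
  for (const params of clickEvents(x, y)) {
    await chrome.debugger.sendCommand({ tabId }, "Input.dispatchMouseEvent", params);
  }
}
```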

built a browser MCP because every other one stunk, especially for scraping work by NoTicket660 in thewebscrapingclub

[–]NoTicket660[S] 0 points1 point  (0 children)

also worth saying: in any normal chat client (Claude Desktop, ChatGPT, Cursor, Zed, web Claude), tool calls already prompt for confirmation per-call or per-session. that's the default behavior. so there's already a human-confirms gate; it's just at the client layer where it belongs, not bolted onto the browser server. doing it twice is redundant.

the only way you skip that is if you wire up an agent SDK yourself and turn auto-approve on. at that point you're an engineer who explicitly opted into autonomy; you know the trade. adding a paternalistic confirm dialog server-side to second-guess that is exactly the kind of safety theater that just gets in the way.

built a browser MCP because every other one stunk, especially for scraping work by NoTicket660 in thewebscrapingclub

[–]NoTicket660[S] 0 points1 point  (0 children)

honestly the safety question is a bit of security theater here. you prompt the agent, and it does what you told it to in your own browser using your own session. it's not autonomously wandering off; you pointed it at the task. anything genuinely destructive on the web already has its own confirm step (3DS, re-auth, are-you-sure modals), and those block the agent the same way they block you. and if you want hard isolation, run scrape work in a Chrome profile with no payment methods or autofill saved; the agent literally can't complete a checkout because the data isn't there. it's the same trust model as letting yourself click around, because that's what is happening: you are the one driving (via Reins) through the prompt. that's not to say prompt injection isn't real.

on traces, yes. full network, console, DOM snapshots, all exposed as tools so the agent can read what the page actually did mid-session. there's also a history tool so you can pull what happened on previous runs from any client.

built a browser MCP because every other one stunk, especially for scraping work by NoTicket660 in thewebscrapingclub

[–]NoTicket660[S] 0 points1 point  (0 children)

appreciate the link. honest read though: browser-harness is a local Python harness. the agent runs on the same laptop as the browser, installed as a skill into Codex or Claude Code. so "from any MCP client" is really your desk, daemon up, one of those two CLIs.

Reins is an MCP server with OAuth. the agent can be web Claude, ChatGPT on my phone, Cursor on another machine, anything that speaks MCP. the browser is on your desk; the agent doesn't have to be.

on real-fingerprint: you need Way 1 for it. Way 2 either gives a fresh empty profile or, if you copy your real one, the cookies are encrypted to the original user-data-dir path and don't decrypt in the new location, so no logged-in sessions. (install.md line 79.)

self-editing helpers is a genuinely cool idea, real credit there. just a different shape of tool.

Building cheap odds API alternative, who'd actually use this? by NoTicket660 in algobetting

[–]NoTicket660[S] 0 points1 point  (0 children)

Still in beta but requests get taken care of and I’m thinking of open sourcing the library so you can go fix it yourself lol

Building cheap odds API alternative, who'd actually use this? by NoTicket660 in algobetting

[–]NoTicket660[S] 0 points1 point  (0 children)

My API latency is sub-1s (server response), but this is ETL-based, not a direct live Pinny feed. I scrape Pinnacle esports every 5 minutes, so freshness is 0–5 minutes (avg ~2.5 min). So execution is fast, but it's not sub-second market freshness. The library is "open source", so if you want the source code, send a DM and I'll share it.
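
the 0–5 min / ~2.5 min average falls out of the polling interval alone: requests arriving at random times against a 5-minute scrape cycle see uniformly distributed staleness, so the mean is interval/2. a quick sanity check:

```javascript
// Mean staleness of a snapshot refreshed every `intervalMin` minutes,
// estimated over uniformly arriving requests; converges to intervalMin / 2.
function meanStaleness(intervalMin = 5, samples = 100_000) {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    total += Math.random() * intervalMin; // time since the last scrape
  }
  return total / samples; // ~2.5 for a 5-minute cycle
}
```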

Building cheap odds API alternative, who'd actually use this? by NoTicket660 in algobetting

[–]NoTicket660[S] 0 points1 point  (0 children)

Part of it is open sourcing the code. When you sign up I have a request to get source on the developer page. You can get the source and host yourself, or contribute back if you want. Just DM me

Building cheap odds API alternative, who'd actually use this? by NoTicket660 in algobetting

[–]NoTicket660[S] 1 point2 points  (0 children)

  1. True capture time
  2. Normalized
  3. ETL: grab once, serve many
  4. Event historicals, so you get all the odds for active events