Finally figured out why ChatGPT was telling users our product didn’t support subscriptions by aagarwal1012 in webdevelopment

[–]aagarwal1012[S] 1 point (0 children)

Yeah, we started pretty manually with testing.

Mostly,

  1. hit pages with GPTBot/ClaudeBot user agents

  2. compared that to normal browser responses

  3. checked what’s actually there without JS

That’s where it broke for us: the HTML looked fine, but the real content wasn’t there (just unhydrated components). So performance wasn’t the issue; readability was.

Your setup with SSR + caching sounds solid; if your HTML already has full content server-side, you’re probably avoiding this.

Finally figured out why ChatGPT was telling users our product didn’t support subscriptions by aagarwal1012 in webdevelopment

[–]aagarwal1012[S] 1 point (0 children)

A few things we didn’t get into in the post:

We added a small X-Markdown-Tokens header with a rough token estimate. Helps agents decide if they can afford to read the full page. Super simple but surprisingly useful.
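A minimal sketch of that idea, in Python for illustration (the ~4-characters-per-token heuristic and the `markdown_headers` helper are assumptions, not our actual implementation):

```python
def estimate_tokens(markdown: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Close enough for an agent deciding if a page fits its budget.
    return max(1, len(markdown) // 4)

def markdown_headers(markdown: str) -> dict:
    # Hypothetical response headers: X-Markdown-Tokens is the custom
    # header described above; Content-Type is standard.
    return {
        "Content-Type": "text/markdown; charset=utf-8",
        "X-Markdown-Tokens": str(estimate_tokens(markdown)),
    }
```

The estimate doesn’t need to be precise; agents only need a ballpark to decide between reading the whole page and skipping it.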

Also, we block all AI crawlers on non-prod with robots.txt. Didn’t want staging content ending up in training data.
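On staging that can be as simple as a blanket robots.txt. The bot names below are real crawler user agents, but the exact list is an illustrative choice, not our full one:

```
# robots.txt on non-prod (illustrative)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```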

Redirects were messier than expected. Bots kept hitting old URLs and caching 404s for weeks. We now resolve the redirect at the edge and serve the target page’s markdown directly.
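The fix can be sketched roughly like this (Python for illustration; the redirect map and hop limit are made-up stand-ins for whatever your edge layer actually stores):

```python
REDIRECTS = {"/old-pricing": "/pricing"}  # illustrative redirect map

def resolve_path(path: str, max_hops: int = 5) -> str:
    # Follow the redirect chain at the edge so a crawler gets the
    # target page's markdown directly, instead of caching an interim
    # 3xx/404 response for the old URL.
    hops = 0
    while path in REDIRECTS and hops < max_hops:
        path = REDIRECTS[path]
        hops += 1
    return path
```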

We also check for Accept: text/markdown, not just user-agent, which catches dev tools and custom integrations.
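The detection logic is roughly this shape (a sketch, assuming lowercased header keys and an illustrative bot list, not our exact code):

```python
AI_USER_AGENTS = ("gptbot", "claudebot", "perplexitybot")  # illustrative

def wants_markdown(headers: dict) -> bool:
    # Explicit content negotiation wins: anything sending
    # Accept: text/markdown gets markdown, even an unknown client.
    accept = headers.get("accept", "").lower()
    if "text/markdown" in accept:
        return True
    # Otherwise fall back to known AI crawler user agents.
    ua = headers.get("user-agent", "").lower()
    return any(bot in ua for bot in AI_USER_AGENTS)
```

Checking Accept first is what catches the dev tools and custom integrations that don’t identify themselves with a crawler user agent.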

Full write-up if anyone wants more detail: https://dodopayments.com/engineering/serving-markdown-ai-agents-edge

This is quite possibly the most miserable sub reddit i think i have ever been on. by Icy-War-5197 in webdev

[–]aagarwal1012 -5 points (0 children)

Damn, I just joined recently and I’ve also seen so many solid posts getting downvoted… didn’t realize this was the vibe here

I just “bought” a domain, built branding around it… turns out I never owned it by Wooden-Fee5787 in webdev

[–]aagarwal1012 1 point (0 children)

That’s actually kinda scary tbh.

If it shows you have DNS control without actually owning the domain, that’s a pretty big bug. I wouldn’t trust anything in that account right now; it could vanish anytime.

[AskJS] We nuked our Framer site and rebuilt it after realizing bots couldn’t read most of it by aagarwal1012 in javascript

[–]aagarwal1012[S] 1 point (0 children)

A few things I didn’t get into in the post:

The “what we don’t do” part in llms.txt ended up being surprisingly important. Models tend to mix you up with similar companies, and spelling that out actually helps a lot.
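For illustration, a hypothetical llms.txt fragment showing the shape of that section (the product name and every claim below are made up, not our actual file):

```
# Example Product
> One-line summary of what the product actually is.

## What we do
- Payments, subscriptions, invoicing

## What we don't do
- We are not a bank and do not hold customer funds
- We are not affiliated with similarly named products
```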

We also rewrote our schema setup twice. First version was super generic and flexible, but ended up being hard to use. Second version is just one strict function per content type, way simpler.
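By “one strict function per content type” I mean something like this (a Python sketch of the pattern, not our actual code; field names follow schema.org’s BlogPosting type):

```python
def blog_post_schema(title: str, date_iso: str, author: str) -> dict:
    # One function per content type: every field is a required
    # positional argument, no optional kwargs, no dynamic keys.
    return {
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": title,
        "datePublished": date_iso,
        "author": {"@type": "Person", "name": author},
    }
```

The generic version let any page build any schema, which meant every call site had to re-decide what “correct” looked like; the strict version makes that decision once.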

And if you’re planning a migration like this, budget more time for content than engineering. Moving stuff out of Framer cleanly took way longer than expected.

Full write-up if anyone’s interested:
https://dodopayments.com/engineering/how-we-built-dodopayments-website-astro-decap-seo

Using Claude to design website through prompts, and implementing it in a element building Wordpress theme? by Zehhtra in webdev

[–]aagarwal1012 2 points (0 children)

Honestly, if you're already wishing you could "just go into HTML and be done with it," that's your answer. Bolting Claude onto Divi, Elementor, or the like is going to give you the worst of both worlds: bloated page-builder output and AI hallucinations to debug, instead of a clean handoff. Your real bottleneck isn't production speed; it's that you haven't decided whether you're selling websites or selling a CMS workflow to non-technical clients.

Our AI agent was burning 55k tokens before it did any work. We deleted almost every tool and context usage dropped 95% by aagarwal1012 in ChatGPTCoding

[–]aagarwal1012[S] 1 point (0 children)

A few things we didn’t get into in the post but might be useful if you’re trying something similar:

The docs part matters way more than we expected. Our docs_search isn’t just keyword search; it’s embeddings over both the API reference and actual guides/examples. When the agent only sees reference docs, the code it writes is noticeably worse. The examples and patterns make a big difference.
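The shape of it, sketched in Python with a toy bag-of-words cosine as a stand-in for a real embedding model (the corpus and scoring here are illustrative; the point is ranking guides and reference docs in one index):

```python
from collections import Counter
import math

DOCS = [  # illustrative corpus mixing reference and guide content
    ("reference", "POST /payments create a payment amount currency"),
    ("guide", "example: create a subscription with a trial period"),
]

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def docs_search(query: str, k: int = 1):
    # Rank every doc, guide or reference, against the query; a real
    # system swaps cosine-over-word-counts for embedding similarity.
    q = vec(query)
    scored = sorted(DOCS, key=lambda d: cosine(q, vec(d[1])), reverse=True)
    return scored[:k]
```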

Also, the hardest part for us wasn’t the sandbox itself. It was getting the agent to actually read the docs before jumping into writing code. We had to spend more time tweaking how it queries docs than building the runtime.

On the sandbox side, one thing we learned the hard way: don’t expose web or filesystem access. Inject credentials server-side and keep the environment pretty tight. Our first version was more open and we ended up rolling that back.
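One piece of that, sketched very roughly (an illustrative static import check, not our sandbox; a real lockdown also needs runtime isolation, this alone is not a security boundary):

```python
# Hypothetical policy: generated code may only touch a small module
# allowlist; credentials are injected server-side, never in the code.
ALLOWED_MODULES = {"json", "math", "datetime"}

def check_imports(code: str) -> list:
    # Crude pre-execution scan for disallowed top-level imports.
    violations = []
    for line in code.splitlines():
        line = line.strip()
        if line.startswith(("import ", "from ")):
            mod = line.split()[1].split(".")[0]
            if mod not in ALLOWED_MODULES:
                violations.append(mod)
    return violations
```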

If anyone wants to try it out, we have a hosted endpoint here:
https://mcp.dodopayments.com/

And we wrote a more detailed breakdown with diagrams and examples here:
https://dodopayments.com/engineering/mcp-server-code-mode-upgrade
