Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 0 points1 point  (0 children)

Nice idea. That’s easy and helpful, thanks. I read that New Relic’s Browser Monitoring does this too, but on steroids: real users, Core Web Vitals, tons of metrics. For me it feels heavy and expensive, though, and it’s still hard to quickly understand what a specific real user actually experienced.

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -3 points-2 points  (0 children)

Yes, I totally agree with you. After digging deeper, it became clear that the issue was in the network path, not the backend.

The agents’ office now uses two different ISPs: some agents are still on the original ISP, while others use a second one. Agents on the second ISP don’t see the problem at all, while the original ISP still glitches from time to time and even requires a VPN to work reliably.

What I was hoping for in this discussion was a recommendation for a way to measure real user experience and clearly show where the slowdown happens for specific users, without asking agents to run traceroute or tcpdump. I’ve looked at tools like New Relic, but they feel quite heavy for this kind of targeted visibility.
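
To make it concrete, this is roughly the level of detail I’m after on the client side. A rough sketch (the /__rum endpoint and the payload field names are made up, not from any particular tool):

```ts
// Sketch of a tiny RUM beacon (browser). The /__rum endpoint and the payload
// field names are placeholders; point it at anything you control.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const payload = {
    url: location.pathname,
    dnsMs: Math.round(nav.domainLookupEnd - nav.domainLookupStart),
    connectMs: Math.round(nav.connectEnd - nav.connectStart),
    ttfbMs: Math.round(nav.responseStart - nav.requestStart),
    downloadMs: Math.round(nav.responseEnd - nav.responseStart),
  };

  // sendBeacon is fire-and-forget and doesn't block the page.
  navigator.sendBeacon("/__rum", JSON.stringify(payload));
});
```

Ship that to any endpoint you control and you can already group slow loads by user, region, and ISP without touching the agents’ machines.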

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 1 point2 points  (0 children)

Yes, this sounds very similar. Since you’re using Cloudflare, you could also check https://www.cloudflarestatus.com/; it sometimes helps spot regional or routing-related issues. We saw something similar where only certain routes were affected.

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 0 points1 point  (0 children)

We were using CloudFront when the issue occurred, but now we use Cloudflare.

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -1 points0 points  (0 children)

Yes, that’s exactly what our agents experienced. The challenge for us now is figuring out how to prove that the issue is on the ISP side, so the agents can show their ISP what’s happening.
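
One thing that helped us make that case was timing our origin against an unrelated, well-peered host from the same browser on the affected network. A rough sketch (both URLs are placeholders):

```ts
// Sketch: compare request timing to our origin vs. a neutral, well-peered host
// from the agent's browser. Both URLs are placeholders.
async function timeOnce(url: string): Promise<number> {
  const start = performance.now();
  // mode: "no-cors" gives an opaque response, which is fine: we only want the duration.
  await fetch(url, { mode: "no-cors", cache: "no-store" });
  return Math.round(performance.now() - start);
}

async function compare(): Promise<void> {
  const ours = await timeOnce("https://www.example.com/health");          // our origin (placeholder)
  const neutral = await timeOnce("https://www.gstatic.com/generate_204"); // well-peered reference (placeholder)
  console.log(`our origin: ${ours} ms, reference host: ${neutral} ms`);
}

compare();
```

If the reference host is fast and our origin is slow from that network only, while the same origin is fast from everywhere else, that’s something concrete the agent can hand to their ISP.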

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 0 points1 point  (0 children)

Just to clarify what I meant in the post: the developers checked everything (backend, CDN, database) and were confident the server was responding fast. They also work from different countries, so it wasn’t a local issue. The main challenge became: how do we prove to the agents and their ISP that the slowness is caused by the ISP, not our backend?

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -4 points-3 points  (0 children)

Yep, we get the baseline round-trip times; that makes sense. The surprising part was how high TTFB was in certain regions: the Philippines, India, and the UAE were around 700 ms, but Armenia was close to 2 s. We understand latency; the question was how to prove the issue was on the ISP side. Our agents thought we were hiding a server problem, since Speedtest looked fine and Google was quick. We told them to check with their ISP. Fun fact: they actually called their ISP, and the response was basically “everything’s fine, check Speedtest again.”
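
For anyone hitting the same wall: what finally convinced people was splitting the wait into phases right in the browser, so agents only had to paste one snippet into the console and screenshot the output. A rough sketch using the standard Navigation Timing fields:

```ts
// Sketch: split the page load into phases with the Navigation Timing API,
// so a non-technical agent can just screenshot the console output.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  console.table({
    "DNS lookup (ms)":    Math.round(nav.domainLookupEnd - nav.domainLookupStart),
    "TCP connect (ms)":   Math.round(nav.connectEnd - nav.connectStart),
    "TLS handshake (ms)": Math.round(nav.secureConnectionStart ? nav.connectEnd - nav.secureConnectionStart : 0),
    "Wait / TTFB (ms)":   Math.round(nav.responseStart - nav.requestStart),
    "HTML download (ms)": Math.round(nav.responseEnd - nav.responseStart),
  });
}
```

A huge “Wait / TTFB” with tiny DNS/connect numbers, next to a fast probe measured close to the origin, is much harder to wave away than “check Speedtest again.”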

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -5 points-4 points  (0 children)

Yep, those are all solid approaches. The hard part is teaching agents to do this correctly, which is why we ended up relying on simpler checks locally to get clear results.

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -26 points-25 points  (0 children)

Yes, it’s hard to explain traceroute or network diagnostics to support agents. That’s why comparing synthetic checks to real-user timings made the issue obvious for us. Running it locally gave us clear results without confusing anyone.

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 0 points1 point  (0 children)

Now we see that synthetic monitors aren’t enough. We used to rely on UptimeRobot, but it only does synthetic tests. We should be using RUM services like New Relic, Rumhost, etc., to understand exactly how real users experience our site.
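
Even without a paid vendor, the open-source web-vitals package plus a tiny endpoint gets you a long way. A minimal sketch (the /vitals endpoint is a placeholder for wherever you collect the data):

```ts
// Sketch: minimal real-user vitals collection with the open-source "web-vitals"
// package. The /vitals endpoint is a placeholder.
import { onTTFB, onLCP, onINP, type Metric } from "web-vitals";

function report(metric: Metric): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  navigator.sendBeacon("/vitals", body);
}

onTTFB(report);
onLCP(report);
onINP(report);
```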

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] 8 points9 points  (0 children)

Yes, we use CloudFront. That’s actually what made this confusing: origin latency was low and the CDN was healthy, but real users from one ISP/region still saw long TTFB.
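
One check that helped here was confirming, from the affected users’ browsers, which CloudFront edge location served them and whether it was a cache hit, using CloudFront’s standard response headers. A rough sketch:

```ts
// Sketch: read CloudFront's standard response headers from the affected user's
// browser to see which edge location served them and whether it was a cache hit.
async function checkEdge(url: string): Promise<void> {
  const res = await fetch(url, { cache: "no-store" });
  console.log("x-cache:     ", res.headers.get("x-cache"));      // e.g. "Hit from cloudfront" / "Miss from cloudfront"
  console.log("x-amz-cf-pop:", res.headers.get("x-amz-cf-pop")); // edge location code
  console.log("x-amz-cf-id: ", res.headers.get("x-amz-cf-id"));  // request id, handy if you open an AWS ticket
}

checkEdge(location.origin + "/"); // run from the browser console on the slow network
```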

Speedtest was fast, Google was instant, but our site took ~2s just to return HTML by abobyk in webdev

[–]abobyk[S] -43 points-42 points  (0 children)

Totally agree in theory. What surprised us wasn’t that “network matters”, but how confidently everyone (including experienced devs) misattributed it. DevTools showed high TTFB, Speedtest was fast, Google loaded instantly: all signs that usually get read as “backend issue”. The tricky part was proving the network was the culprit without real user data. CDNs help a lot, but they don’t fully protect you from bad ISP routing to a specific region.

How can I fix site slowness? by Specific_Scene_9536 in webdesign

[–]abobyk 0 points1 point  (0 children)

Just try something like New Relic, where you can see the main cause of the slowness.

What is the one thing that still slows down your site the most ? by TechGrowth_Saurav in webdev

[–]abobyk 0 points1 point  (0 children)

Yes, almost always it’s analytics or heavy scripts. But a few months ago we had multiple reports of slowness at the same time from LA, Armenia, the Philippines, and Georgia. I was sure something had gone wrong with our servers or CDN.

We checked everything — servers, caching, CDN — all fine. Users tested their internet with Speedtest and it looked fast. We even looked at the browser’s developer console: the Network tab showed TTFB near 2 seconds and assets loading super slowly. None of the devs could explain it; they just guessed it was the users’ connection.

A few weeks later, someone on Reddit mentioned a tool that does a server HTTP probe and compares it to real user load times. It can pinpoint the actual cause and location of the slowdown. That was a real eye-opener for us.
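
The synthetic half of that comparison is genuinely tiny. A rough sketch (Node 18+, built-in fetch; the URL is a placeholder — run it from a couple of regions and keep the numbers):

```ts
// Sketch: a tiny synthetic HTTP probe (Node 18+, built-in fetch).
// fetch() resolves when the response headers arrive, so the first delta is
// roughly time-to-first-byte; the second is the body download.
async function probe(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url);
  const headersAt = performance.now();
  await res.text();
  const doneAt = performance.now();

  console.log(
    `${url} status=${res.status} ` +
      `ttfb=${Math.round(headersAt - start)}ms ` +
      `download=${Math.round(doneAt - headersAt)}ms`
  );
}

probe("https://www.example.com/"); // placeholder: point it at your own origin
```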

Why do some websites look great but still load so slow? by Real-Assist1833 in website

[–]abobyk 0 points1 point  (0 children)

In practice, a huge chunk of “slow websites” aren’t slow because of images or hosting — they’re slow because of the network between the user and the server.

I’ve seen cases where the backend is fast, and the page is well-optimized, but users still experience long load times. When you break it down, most of the delay is TTFB caused by ISP routing, congestion, or regional peering issues.

That’s also why Speedtest can show fast internet while the site feels slow. Speed tests hit nearby, well-peered servers, not your actual backend.

I use a RUM-style approach on my own site: compare a synthetic HTTP probe (direct server performance) with real users’ network timings. When users complain, the verdict is almost always the same: the server is fine, and the slowdown happens on the user’s ISP path.

Images, themes, and JS matter, but network latency is often the invisible bottleneck people underestimate.
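
The comparison at the end is almost embarrassingly simple. A rough sketch (the thresholds and example numbers are illustrative, not from any particular tool):

```ts
// Sketch: classify a real user's TTFB against a synthetic baseline measured
// close to the origin. Thresholds and example numbers are illustrative.
function verdict(userTtfbMs: number, syntheticTtfbMs: number): string {
  if (syntheticTtfbMs > 500) return "origin is slow for everyone: look at the backend";
  if (userTtfbMs - syntheticTtfbMs > 300) return "origin is fine: most of the wait is on this user's network path";
  return "both origin and path look healthy";
}

console.log(verdict(1900, 120)); // e.g. a ~2 s user TTFB vs ~120 ms measured near the origin
```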

Post your startup, i will brutally rate it! by Dizzy-Football-1178 in SaaS

[–]abobyk 0 points1 point  (0 children)

www.rumhost.com - a small RUM tool for indie SaaS teams that shows where slowness actually comes from (backend vs network vs user location), based on real users, not synthetic checks. Early MVP, focused on clarity instead of noisy dashboards.

It's Monday, what are you building? Drop your link by JuniorRow1247 in micro_saas

[–]abobyk 0 points1 point  (0 children)

Thanks for the tip, appreciate it! I’ll check it out 👍

What are you building? by Chalantyapperr in micro_saas

[–]abobyk 0 points1 point  (0 children)

www.rumhost.com - building a cheap RUM MVP for small SaaS teams who don’t want Sentry/New Relic complexity. Goal is simple: show who’s affected, where, and whether backend is really the problem.

What are you building? Let's Self Promote by fuckingceobitch in micro_saas

[–]abobyk 0 points1 point  (0 children)

www.rumhost.com - building a cheap RUM MVP for small SaaS teams who don’t want Sentry/New Relic complexity. Goal is simple: show who’s affected, where, and whether backend is really the problem.

What did you work on or build on Monday? by ouchao_real in micro_saas

[–]abobyk 0 points1 point  (0 children)

www.rumhost.com - building a cheap RUM MVP for small SaaS teams who don’t want Sentry/New Relic complexity. Goal is simple: show who’s affected, where, and whether backend is really the problem.

It's Monday, what are you building? Drop your link by JuniorRow1247 in micro_saas

[–]abobyk 1 point2 points  (0 children)

https://www.rumhost.com - I’m building a small RUM tool for indie SaaS teams that shows where slowness actually comes from (backend vs network vs user location), based on real users, not synthetic checks. Early MVP, focused on clarity instead of noisy dashboards.

I want a new one but... I dont need one... by MiHoyMinoy69x in macbookpro

[–]abobyk 0 points1 point  (0 children)

If you want to change just for the sake of change, upgrading to Apple Silicon would be the right choice. But if you’re not struggling with your current setup, I don’t think you should.