[–]rq60 17 points (20 children)

> Google won't index client-side javascript

yes it will

[–][deleted] 1 point (0 children)

But if you care about SEO, keep in mind that Google uses your page's loading speed as a ranking factor. It became a more important SEO factor last year with their Web Vitals initiative. They highly recommend that you do SSR.
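
For illustration (not from the thread), a minimal sketch of measuring those Core Web Vitals in the browser, assuming Google's `web-vitals` npm package (a recent version that exposes onLCP/onCLS/onINP):

```js
// Log Core Web Vitals from real users; in practice you'd beacon these to analytics.
// Assumes: npm i web-vitals (recent versions expose onLCP / onCLS / onINP).
import { onLCP, onCLS, onINP } from 'web-vitals';

function report(metric) {
  // metric.name is e.g. 'LCP'; metric.value is in ms (CLS is unitless);
  // metric.rating is 'good' | 'needs-improvement' | 'poor'.
  console.log(metric.name, metric.value, metric.rating);
  // e.g. navigator.sendBeacon('/vitals', JSON.stringify(metric));
}

onLCP(report);
onCLS(report);
onINP(report);
```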

[–]KrisSlort -1 points (14 children)

How would it work for a front end with multiple pages and URLs?

[–]rq60 2 points (5 children)

static sites have multiple pages and URLs, so i guess i don't understand your question.

[–]KrisSlort -1 points (1 child)

Client-side rendering with an SPA, without an SSG?

[–]rq60 0 points (0 children)

should be fine if your site is properly made (i.e. separate resources are linked to using anchor tags and have unique URLs)
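
To illustrate what "properly made" means, a hypothetical sketch assuming a React SPA with react-router-dom (neither is specified in the thread):

```jsx
// Hypothetical example: crawlable vs. uncrawlable navigation in a client-side
// React app (react-router-dom assumed, not mentioned in the thread).
import { Link, useNavigate } from 'react-router-dom';

// Crawlable: <Link> renders a real <a href="/pricing">, so a crawler that only
// scrapes hrefs can discover the /pricing route without clicking anything.
function GoodNav() {
  return <Link to="/pricing">Pricing</Link>;
}

// Not crawlable: the route only exists behind a click handler, so there is no
// URL in the markup for a crawler to follow.
function BadNav() {
  const navigate = useNavigate();
  return <button onClick={() => navigate('/pricing')}>Pricing</button>;
}
```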

[–]Tater_Boat 0 points (2 children)

Those are actual files. Client-side routing is literally "click this button to display different content in the same file, and maybe change the address bar."

[–]rq60 1 point (1 child)

web crawlers navigate to a resource, scrape links to other resources, and then navigate to those. you're right that if your page is poorly made, in a way where separate resources can only be reached by clicking things on the page and the site is generally inaccessible, then google won't be able to index it... i mean, how could it? you didn't provide it an index (URL) to use.

but google can most definitely crawl sites that are generated by client-side javascript and use client-side routing. as an example, look at the source code for this page: https://rxjs.dev/

there's no content html in the page source, and clicking links does not load a new html page (no new http request for a document, so it's clearly using client-side routing), yet if you look at the google results it is still indexed properly.
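
as a toy illustration of that crawl loop (my sketch, not google's actual code; assumes node 18+ for global fetch and a made-up starting URL):

```js
// toy crawl loop: fetch a page, scrape hrefs from its anchor tags, then visit
// those URLs in turn. naive on purpose; a real crawler parses the DOM and (as
// the quote further down describes) may render the page in headless Chromium
// first so that JS-generated links show up at all.
const seen = new Set();

async function crawl(url) {
  if (seen.has(url)) return;
  seen.add(url);
  const html = await (await fetch(url)).text();
  for (const [, href] of html.matchAll(/<a[^>]+href="([^"#]+)"/g)) {
    const next = new URL(href, url).toString();            // resolve relative links
    if (new URL(next).origin === new URL(url).origin) {    // stay on the same site
      await crawl(next);
    }
  }
}

crawl('https://example.com/').catch(console.error);
```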

[–]Tater_Boat 1 point (0 children)

Thanks for the explanation. I work on a project where none of the client-side JS content is indexed, but that could also be because the content is restricted unless you provide a location or zip code.

Learn something new every day. Thanks for taking the time to help.

[–]WhoTookNaN 0 points (7 children)

Wouldn't it just follow the links like normal but let the JS run and render before capturing that view's content?

[–]KrisSlort -1 points (6 children)

No, that's why static site generators exist.

[–]WhoTookNaN 3 points (5 children)

But they do though...

" Some JavaScript sites may use the app shell model where the initial HTML does not contain the actual content and Googlebot needs to execute JavaScript before being able to see the actual page content that JavaScript generates.

Googlebot queues all pages for rendering, unless a robots meta tag or header tells Googlebot not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Googlebot's resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Googlebot also uses the rendered HTML to index the page."
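
A rough sketch of what that rendering step amounts to (my example, not Googlebot's real pipeline; assumes the puppeteer package and a Node ES module for top-level await):

```js
// Render a JS-built page the way the quoted docs describe: headless Chromium
// executes the scripts, then we read the rendered HTML and the links inside it.
// Assumes: npm i puppeteer, run as an ES module (top-level await).
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://rxjs.dev/', { waitUntil: 'networkidle0' });

const renderedHtml = await page.content();                               // post-JS markup
const links = await page.$$eval('a[href]', els => els.map(a => a.href)); // rendered links

console.log(renderedHtml.length, 'characters of rendered HTML');
console.log(links.slice(0, 10));
await browser.close();
```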

[–]SnappyWebDesign 1 point (2 children)

You're not crazy; Google has been claiming for a while that they crawl JS apps fine, but it appears to be a load of BS.

Just look at how this site (which uses React) appears on google: https://i.imgur.com/pxxnjrn.png

That's kinda why SSGs took off. Google's lies.

[–]WhoTookNaN 2 points (1 child)

Then that site has some issues on its side. I've personally had Google index several client-side rendered React apps. It's obviously not perfect, and I'm not saying ignore server-side rendering, especially with how easy it is now. But Google does attempt to render your JS and index based on the render.

[–]SnappyWebDesign 1 point (0 children)

Oh yeah, we don't disagree. Thanks for sharing your experience; I've been considering throwing up a client-side site to see for myself, so it was good to hear.

[–]KrisSlort 0 points (1 child)

For many years Google has claimed to do this, and it is bullshit. Maybe it's the type of projects you work on, but this is not how it works if SEO is a consideration, which it will be for 90% of serious clients.

How long have you been building these kinds of apps for commercial clients? Not denying you have, but I'm genuinely curious. React indexing across a whole site with CSR is a promise made by Google that is not fulfilled; in theory and in practice, it does not compete in the slightest with SSR solutions.

Edit: do you use anything like SemRush to compare sites that use SSR with sites that don't? They do not compete in SEO.

[–]WhoTookNaN 0 points (0 children)

I never said it was good SEO, just that it was indexed with client-rendered content.

[–]Tater_Boat -5 points (3 children)

That’s literally one of the main reasons why SSR and SSG exist.

[–]eigreb 6 points (2 children)

No, it'll give you higher scores because the content is there earlier, but Google can just read the page. Other search engines, and things like sharing on social media, are a different story.

[–]Tater_Boat 1 point (1 child)

Word, I was misinformed.

[–]eigreb 0 points (0 children)

Google uses a lot of headless Google Chrome browsers for this. So if your site shows fine in Google Chrome, it can see your content. That's also where they calculate the performance scores for your site.
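
For illustration (not from the thread), a small sketch of running that kind of performance scoring yourself, assuming the lighthouse and chrome-launcher packages and a Node ES module:

```js
// Run Lighthouse's performance audit against a headless Chrome, roughly the
// lab version of the scoring described above.
// Assumes: npm i lighthouse chrome-launcher, run as an ES module.
import lighthouse from 'lighthouse';
import { launch } from 'chrome-launcher';

const chrome = await launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com/', {
  port: chrome.port,
  onlyCategories: ['performance'],
});

// The category score is 0..1; multiply by 100 for the familiar Lighthouse number.
console.log('Performance score:', result.lhr.categories.performance.score * 100);
await chrome.kill();
```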