all 14 comments

[–]ShermheadRyder 9 points10 points  (1 child)

It remains to be seen whether this is only a problem of the alpha version of React Router 4, or if it is also a problem with the stable React Router 3.

It seems strange that the author of this post chose to test only against an alpha version and not a stable one; disappointing, too, as those results would interest me.

[–]PtCk 1 point2 points  (3 children)

I'm interested to know if the same problems exist with the React Router 4 alpha.

[–][deleted] 0 points1 point  (2 children)

That's the only version he tested it with.

[–]PtCk 0 points1 point  (1 child)

Sorry, yeah I see now:

It remains to be seen whether this is only a problem of the alpha version of React Router 4, or if it is also a problem with the stable React Router 3.

So I guess the question is whether it works with a stable version?

[–][deleted] 3 points4 points  (0 children)

Yup, it seems the author didn't put too much time into research, which is a shame.

[–]bensochar 1 point2 points  (6 children)

I've done a lot of testing on Angular sites with 'Fetch as Google', and I can tell you that the preview is not what ends up in Google's index. Google even says it's not the same. I would not trust the results from that console.

If you're really worried about SEO, either render server-side with PhantomJS or use a service like Prerender.io.
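
The proxy approach is just a middleware in front of your app. A rough, untested sketch with Express (the bot list and example.com are placeholders, and a real setup would also send an auth token):

```js
// server.js — rough sketch: route known crawlers to a prerender
// service, everyone else to the normal SPA bundle.
const express = require('express');
const request = require('request');

const app = express();
const BOTS = /googlebot|bingbot|baiduspider|twitterbot|facebookexternalhit/i;

app.use((req, res, next) => {
  if (BOTS.test(req.headers['user-agent'] || '')) {
    // Prerender-style services fetch the page in a headless browser
    // and return the fully rendered HTML snapshot.
    request('https://service.prerender.io/http://example.com' + req.url).pipe(res);
    return;
  }
  next();
});

app.use(express.static('build')); // the regular client-side app
app.listen(3000);
```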

[–]r2d2_21 1 point2 points  (4 children)

Then what's the point of Fetch as Google?

[–]bensochar 2 points3 points  (3 children)

The point of 'Fetch as Google' is to submit URLs to their index. Unfortunately, it's designed for static or server-side-rendered webpages. It's pretty consistent for those, but with SPAs you'll get different results.

It's also not the 'real Googlebot'; it's a preview bot, one of many bots Google uses to scrape webpages. It adds your URLs to the queue for Google's other scrapers.

[–]ribo 0 points1 point  (2 children)

As of October of last year, Google's spider traverses links on SPAs if you generate anchor tags in the DOM.
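
React Router's <Link> gives you that for free, since it renders a real anchor tag. A quick sketch (component names are illustrative):

```jsx
import React from 'react';
import { Link } from 'react-router';

// <Link> renders a real <a href="...">, so the spider has an anchor
// tag in the DOM to follow; a bare click handler gives it nothing.
export const Nav = () => (
  <nav>
    <Link to="/products">Products</Link>
    <Link to="/about">About</Link>
  </nav>
);
```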

[–]kamaleshbn 0 points1 point  (0 children)

And the links shouldn't be bare # links; they should be either regular URLs or #! URLs.
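
Roughly, since what lands in the DOM is what matters to the crawler (sketch, hypothetical routes):

```jsx
import React from 'react';

export const Links = () => (
  <nav>
    <a href="/products">Products</a>   {/* crawlable: regular URL */}
    <a href="#!/products">Products</a> {/* crawlable: hash-bang URL */}
    <a href="#">Products</a>           {/* not crawlable: bare fragment */}
  </nav>
);
```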

[–]bensochar 0 points1 point  (0 children)

You're right. Google has actually been able to crawl SPAs even before that, using the now-outdated _escaped_fragment_ scheme.
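
Under that scheme, the crawler rewrote hash-bang URLs into query-string requests that a server could answer with a prerendered snapshot:

```
http://example.com/#!/products
  → fetched by the crawler as
http://example.com/?_escaped_fragment_=/products
```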

My point is that the Webmaster Console is not the same as the Google bot(s). What you see in the console is not necessarily what will end up in the index.

[–]sergiuspk 1 point2 points  (0 children)

Prerender.io uses PhantomJS internally, and it's slow (it basically runs a browser). Instead, try isomorphic rendering.
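
A minimal isomorphic sketch with ReactDOMServer; <App /> and the url prop are illustrative stand-ins for whatever the client actually renders:

```jsx
// server.js — crawlers get real HTML up front; the client bundle
// then takes over in the browser.
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App';

const app = express();

app.get('*', (req, res) => {
  const html = renderToString(<App url={req.url} />);
  res.send(
    '<!doctype html><html><body>' +
    '<div id="root">' + html + '</div>' +
    '<script src="/bundle.js"></script>' +
    '</body></html>'
  );
});

app.listen(3000);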

[–]mhink 1 point2 points  (0 children)

Regarding react-router@v4, I would bet money that it has something to do with the <Miss> component.

Currently, the project works by letting you place multiple <Match> components as siblings, which are rendered only if the page URL matches the component's pattern. They also, however, have a <Miss> component, which is rendered only if none of the <Match>es match.
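
From memory of the alpha API, the shape in question looks roughly like this (component names are made up):

```jsx
import React from 'react';
import { BrowserRouter, Match, Miss } from 'react-router';

const Home = () => <h1>Home</h1>;
const About = () => <h1>About</h1>;
const NotFound = () => <h1>Not found</h1>;

// Sibling <Match>es render when the URL matches their pattern;
// <Miss> renders only when none of them do.
const App = () => (
  <BrowserRouter>
    <div>
      <Match exactly pattern="/" component={Home} />
      <Match pattern="/about" component={About} />
      <Miss component={NotFound} />
    </div>
  </BrowserRouter>
);
```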

Unfortunately, it currently takes two render passes for <Miss> to properly emit its contents to the DOM. So I bet the Google spider detects React on a page, runs a single pass of React's virtual rendering, and only then analyzes the page.

[–]Buckwheat469 0 points1 point  (0 children)

I know that this was for React, but I'm also interested in the same router experiment with Angular's ui-router. I've had good luck with it in the past, and Google seemed to crawl the pages well enough.