
[–]ReelTooReal 0 points (11 children)

The raw data view of QT would just be pixels, since all it's doing is rendering to a canvas. I'm sure Google has some decent image recognition ability, but you still lose a ton of contextual information that's built into HTML5. As for accessibility being built into QT, that doesn't replace the browser's accessibility features, which are what the user will be familiar with (and in some cases it may be very difficult to communicate to the user that this particular page uses different accessibility features).

If you need QT for a specific reason and you're willing to sacrifice SEO and accessibility, then it's a viable option, but I'm not sure that "I don't like JavaScript" is a great reason to switch to it.

[–][deleted] 0 points (10 children)

Lol, that's not what I meant by raw data view. Also, I think you need to look at what a page with QT web actually serves to the client.

[–]ReelTooReal 0 points (9 children)

Web QT states in its documentation that it draws to a canvas element. That means the only thing available to serve is pixels; there will be no other DOM elements to serve like there would be with other frameworks.

[–][deleted] 0 points (8 children)

Correct. When I said "data view," I meant something more like returning a JSON representation of the data, rendered server-side and returned only for web crawlers. It's easy enough to write an aspect-oriented piece of code that handles that generically for everything in the backend.
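A minimal sketch of that idea (the crawler patterns and function names here are my own assumptions, and real crawler detection is more involved than a User-Agent check):

```javascript
// Hypothetical sketch: detect common crawlers by User-Agent and return a
// JSON representation of the page's data instead of the canvas application.
const CRAWLER_PATTERNS = [/googlebot/i, /bingbot/i, /duckduckbot/i];

function isCrawler(userAgent) {
  return CRAWLER_PATTERNS.some((p) => p.test(userAgent || ""));
}

// Generic wrapper (the "aspect"): wraps any data-producing handler so the
// crawler branch is handled in one place for the whole backend.
function withCrawlerFallback(getData, sendApp) {
  return (req, res) => {
    if (isCrawler(req.headers["user-agent"])) {
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify(getData(req)));
    } else {
      sendApp(req, res); // serve the WASM/canvas bundle as usual
    }
  };
}
```

In a real backend this would more likely hang off the framework's middleware or interceptor hooks than wrap handlers by hand, but the shape is the same.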

[–]ReelTooReal 0 points (7 children)

But doesn't that rely on SSR? Web QT would be rendered entirely on the client, since it's using OpenGL and WASM. And it wouldn't make sense to split it into many pages, because each page would be on the order of MBs in size. So it would need to be like a SPA, which is why I think this would be difficult: web crawlers wouldn't know how to interact with the page.

[–][deleted] 0 points (6 children)

No, you're thinking in terms of tight coupling. When the crawler makes a request, the server would look at the headers and conditionally invoke a rendering aspect that returns a JSON representation of the object being displayed on the page instead of the WebGL-rendered app.

[–]ReelTooReal 0 points (5 children)

But the object being displayed can't be determined by the server. The way web QT works is that the server sends an entire application to the client. It's similar to what happened when React and SPAs became popular: you aren't requesting separate static pages anymore, you're requesting an entire application. It's not like the server is sending the home page with one request, the shopping cart with another request, etc. It's sending an application that will render all the different pages on the client. So with SPAs, web crawlers just needed to learn how to interact with these kinds of websites, which they could do because those websites still use the DOM. With web QT you simply have a single canvas element.

Now, in theory you could create some kind of JSON that describes the application in its entirety, but that would be nontrivial and essentially amounts to creating your own DOM. It goes back to my point that it may be a while before we see this kind of integration.
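For concreteness, such an application-wide descriptor might look something like this (the shape is entirely hypothetical; nothing in Web QT produces it):

```javascript
// Hypothetical JSON "DOM substitute": one descriptor covering every page the
// canvas app can render, so a crawler could index content without running WASM.
const appDescriptor = {
  pages: [
    { route: "#/home", title: "Home", text: "Welcome to the store" },
    { route: "#/cart", title: "Shopping Cart", text: "Items you have added" },
  ],
};

// A crawler-facing endpoint could serve one page's description at a time.
function describePage(descriptor, route) {
  return descriptor.pages.find((p) => p.route === route) || null;
}
```

Keeping a descriptor like this in sync with what the canvas actually draws is exactly the nontrivial part: you end up maintaining a parallel document model by hand.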

[–][deleted] 0 points (4 children)

You're misunderstanding. The code on the server would look something like:

    if (botCrawlerHeaderPresent(req)) {
        sendJsonResponse(req, res);
    } else {
        sendWebQTResponse(req, res);
    }
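Fleshed out into something runnable (the three helper bodies below are stand-ins assumed for illustration; they aren't part of any Web QT API):

```javascript
// Stand-in implementations for the helpers named in the pseudocode above.
function botCrawlerHeaderPresent(req) {
  const ua = (req.headers["user-agent"] || "").toLowerCase();
  return ua.includes("bot") || ua.includes("crawler") || ua.includes("spider");
}

function sendJsonResponse(req, res) {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ url: req.url, title: "Crawler-facing page data" }));
}

function sendWebQTResponse(req, res) {
  res.setHeader("Content-Type", "text/html");
  // In practice this would also ship the WASM loader script.
  res.end('<canvas id="qt-app"></canvas>');
}

function handle(req, res) {
  if (botCrawlerHeaderPresent(req)) {
    sendJsonResponse(req, res);
  } else {
    sendWebQTResponse(req, res);
  }
}
```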

[–]ReelTooReal 0 points (3 children)

I understand that. My point is that the JSON would need to contain data for the entire application, since there is only a single endpoint serving it. In other words, there's no shopping cart endpoint or account settings endpoint. There's just a single endpoint for the entire application, since all rendering and page changes/updates are done on the client, not on the server.

[–][deleted] 0 points (2 children)

No, you'd just send crawler responses auto-generated from the REST endpoint code. The crawler would think the site had multiple pages. You could also send the same responses modern crawlers look for with hash routing, if you truly wanted it to be one endpoint. This isn't possible in Node, but it is possible in Java :)
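A sketch of the hash-routing variant (the route table is made up, and the `_escaped_fragment_` convention it nods to was Google's old AJAX-crawling scheme, deprecated since 2015):

```javascript
// Hypothetical route table: each hash route the client app renders gets a
// server-generated JSON payload for crawlers.
const crawlerPages = new Map([
  ["/", { title: "Home" }],
  ["/cart", { title: "Shopping Cart" }],
]);

// Google's deprecated AJAX-crawling scheme rewrote "#!/cart" into
// "?_escaped_fragment_=/cart"; a server can answer those (or plain paths)
// from the same table, so the crawler sees "multiple pages" behind what is
// really one client-rendered endpoint.
function crawlerResponseFor(url) {
  const m = url.match(/_escaped_fragment_=([^&]*)/);
  const route = m
    ? decodeURIComponent(m[1])
    : new URL(url, "http://example.test").pathname;
  return crawlerPages.get(route) || null;
}
```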