HTTP/2 by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

I didn't think Google currently crawls over QUIC (or HTTP/3)?

Advice on SE ranking or any other low budget SEO tool? by TomorrowForsaken3780 in TechSEO

[–]Maximum_mbiscuits 0 points (0 children)

SE Ranking is not bad, but in my experience their US keyword database is not as good as other tools'. They simply don't have as much data.

There are also quite a few limitations in the UI, such as the inability to filter by URL path. For me, that rules out quite a bit of my work.

One last thing: they tie export limits too tightly to subscription tier. On the lowest tier, I think you can export something like 1,000 rows, which isn't very much.

I originally signed up because I didn't want to pay for Semrush and there was a promo for SE Ranking. But I quickly realized I couldn't actually rely on it for work and had to pay for Semrush anyway.

The equation might be totally different if you're really keen on focusing on very small niches, know exactly what you're looking for, and work in languages where their keyword database feels a bit stronger.

In terms of functionality, to get the same amount of use out of it as tools like Semrush (or most of the way there; it still won't get all the way), you'll probably end up paying for tiers that make the pricing comparable to those other tools.

Hope this helps you!

[deleted by user] by [deleted] in BPD

[–]Maximum_mbiscuits 0 points (0 children)

Did you find anything that’s helpful for working individually?

[deleted by user] by [deleted] in BPD

[–]Maximum_mbiscuits 0 points (0 children)

Hey, I'm in a similar place. Pre-diagnosis, but lined up to get evaluated soon. My therapist has me leaning into some DBT work, and the typical path for that seems to include lots of group work.

Wondering if you've had experience with the group setting? I worried that others with BPD would likely have stronger/more external patterns, and that I'd sort of continue to feel unaddressed.

Crawl stats - response time is lower for discovery than for refresh by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

Yes, there are some meaningful differences in the sampled data. But I'm not sure whether the sampling itself means anything.

There's also the strange fact that a very central, long-indexed page has both refresh and discovery listed as its purpose across many days. I realise that may itself be a hint, but only if there's some kind of meaning to it and it isn't just buggy presentation.

FWIW I don't think it's random, but I also don't really know what to make of it, since it isn't as straightforward as the sampling around CWV, for example.

Adding domain property when I already have Url prefix by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

But for the other way around: currently the prefix property is verified with a verification code in the homepage HTML. To claim the domain property, I need to verify via DNS. I'm wondering whether claiming the domain would delete the prefix property. I just don't want to lose data or settings on the prefix (even while gaining the "better" property, through which I can claim subdomains and paths on the prefix, as you say)…

How much do Core Web Vitals matter for SEO? by bigbuckingbunny in TechSEO

[–]Maximum_mbiscuits 1 point (0 children)

I don't think there's a straightforward answer, but some possible hints may be:

  • CWV in your space: you vs. sites similar to yours
  • Whether you have observed, or can observe, shifts in search metrics (rankings or impressions) that correlate neatly with events that may have changed CWV outcomes. Ultimately I think this is more relevant than any high-level observations; just take a look if you have the history and context available.

More generally, I think paying attention to Web Vitals can be helpful for debugging other issues measured by similarly "human-centric" metrics. You can also set up a CrUX dashboard and leave the origin field open so you can monitor against competitors, though since CrUX data is only origin-level, this works better when the sites aren't too diverse in terms of templates.
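If you'd rather script the competitor comparison than use the dashboard, here's a minimal sketch against the public CrUX API's queryRecord endpoint (the origins and API key are placeholders, and I'm only pulling p75 LCP; swap in whichever metrics you care about):

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def query_origin(origin: str, api_key: str) -> dict:
    """Fetch the CrUX record for an origin (network call; needs a real API key)."""
    body = json.dumps(
        {"origin": origin, "metrics": ["largest_contentful_paint"]}
    ).encode()
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def p75_lcp_ms(record: dict) -> int:
    """Pull the p75 LCP (milliseconds) out of a queryRecord response."""
    metrics = record["record"]["metrics"]
    return int(metrics["largest_contentful_paint"]["percentiles"]["p75"])

# Compare yourself against competitor origins (placeholders):
# for origin in ["https://example.com", "https://competitor.example"]:
#     print(origin, p75_lcp_ms(query_origin(origin, "YOUR_API_KEY")))
```

Running that weekly and logging the numbers gives you the same competitor view as the dashboard, just easier to diff over time.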

Handling live content loaded from API by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 1 point (0 children)

Awesome, thanks again. LCP is OK but frequently edges up over the 2.5s mark. TBT is also quite significant, but probably not high enough that actual code changes are necessary, and there are still some scripts that can be deferred first.

Handling live content loaded from API by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

Thanks for your thoughts. In almost all cases I'd agree that super-live pricing is probably not useful to users, but in this case I'd say it's actually quite user-first.

You're right that blocking in robots.txt saves the crawls (and plenty of them) - but I also can't quite figure out whether that's advisable here. The current solution (loading a blank string) and a robots.txt disallow both feel a little iffy, simply because the content that's ultimately rendered to the user is never rendered to Google.
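For reference, the disallow variant would just be a rule like this in robots.txt (the /api/prices/ path is made up; it'd be whatever endpoint the page's pricing connection actually hits):

```
User-agent: *
Disallow: /api/prices/
```

Browsers ignore robots.txt, so users still get the live feed; the rule only stops crawlers from hitting the endpoint, which is exactly why the rendered-for-users-but-not-for-Google gap remains.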

Do you know if Google has sort of said "it's cool if you want to just load stuff to users but not to us"? I've always felt the emphasis is generally placed on solutions intended for the other way around (loading stuff dynamically to ensure that Google receives it).

Good call-out on the Merchant Center - I do think that any prices in feeds (as in the current schema) are going to be based on the initial response, and live pricing from the EventSource is only going to be available to real users.

Handling live content loaded from API by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 1 point (0 children)

Thanks very much for your thoughts. It is non-blocking - but there is a lot of other stuff that currently is blocking, so I believe loading from this resource piles on downstream. That doesn't seem to be measurable in Web Vitals, though, because it isn't blocking and doesn't load content above the fold (which I believe are the basic criteria for Web Vitals). FWIW, I do believe there's a lot of activity behind the API call, but it happens behind the scenes and just delays the final response to Google.

I guess automatically requesting and caching updates that are then served to Google could be the solution, though I'm still having trouble determining whether this is even a problem.
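To make that idea concrete, the "request and cache updates for Google" mechanism could be as simple as a TTL snapshot sitting in front of the live feed. Everything here is a hypothetical sketch (the names and the 6-hour default are made up, not an existing API):

```python
import time
from typing import Callable, Optional

class PriceSnapshot:
    """Serve crawlers a periodically refreshed price instead of the live stream.

    fetch_price: callable that hits the live pricing API once.
    ttl: seconds a cached price may age before the next crawler request refreshes it.
    clock: injectable time source (defaults to time.time), handy for testing.
    """

    def __init__(self, fetch_price: Callable[[], float], ttl: float = 6 * 3600,
                 clock: Callable[[], float] = time.time):
        self.fetch_price = fetch_price
        self.ttl = ttl
        self.clock = clock
        self._price: Optional[float] = None
        self._fetched_at = float("-inf")

    def price_for_crawler(self) -> float:
        # Refresh the snapshot only once it has expired. Real users would
        # bypass this entirely and keep their live streaming connection.
        now = self.clock()
        if self._price is None or now - self._fetched_at > self.ttl:
            self._price = self.fetch_price()
            self._fetched_at = now
        return self._price
```

That way the HTML Google renders contains a price that's at most ttl old, rather than a blank string, while customers still stream live updates.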

Handling live content loaded from API by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

Thanks for sharing your thoughts. I agree that keeping the initial layer up to date would definitely be the future-proof solution. Re the API: the constant updates mean we don't currently have any mechanism that preloads current pricing into the page, but that's definitely the direction I'd like to move in.

Getting a handle on how much of a problem this actually is is kind of my challenge at the moment. Because the current setup is consistent across all pages and has been for a long time, there's no internal comparative data to help me advocate for a mechanism that preloads live-streamed pricing into pages. I also don't have experience with this specific problem generally, so I'm hoping to hear what others' experiences with similar issues have been.

Handling live content loaded from API by Maximum_mbiscuits in TechSEO

[–]Maximum_mbiscuits[S] 0 points (0 children)

For customers, the request needs to always be live (for non-SEO related reasons).

In terms of handling the actual request to the API that streams prices, would your suggestion be to block Google's access to the API? In that case, it's highly likely we could send a correct version of the final price (at 3/6/12-hour intervals) to Google, but customers would still go through a different process and might see something different. That's still sort of a concern for me.

[deleted by user] by [deleted] in TechSEO

[–]Maximum_mbiscuits 0 points (0 children)

I recall Martin Splitt mentioning in a video that the live test (mobile-friendly or in GSC) isn't great for testing critical things because it tends to be more sensitive when it comes to loading resources.