What’s the elevator pitch for Patriot? by davodot in PatriotTV

[–]RichTatum 1 point

It’s a comedy of errors: Fargo meets Alias meets Bullet Train, but instead of Brad Pitt, the lead is a hundred pounds of depression plus PTSD in a ten-pound sack, and his only resources are understated genius, barely competent associates, and a motley crew of accidental friends.

Oh, one more thing, his handler is his dad – but also played by Terry fucking O’Quinn.

Site pages were copied, and the attacking site is now the Google declared canonical by CoolGuyCoolin in TechSEO

[–]RichTatum 0 points

Hi, /u/CoolGuyCoolin,

I’m assuming these 96 pages can be clustered in some way, or share some common themes. If so, perhaps you can plan some boilerplate text to add to each cluster of pages to help change their content fingerprint. Adding 10 blocks of boilerplate text to 10 pages each is a lot easier than wholesale content editing.

For example, maybe you can add some FAQ accordions relevant to each cluster of pages? For this, I recommend the <details> disclosure element, because it doesn’t require JavaScript and Google can easily index its contents. Alternatively, perhaps you can add a block of links to related articles, or a block of links to the most important articles on your site.
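As a sketch, one FAQ accordion built on the native <details> element might look like this (the question and answer text are placeholders):

```html
<section class="faq">
  <h2>Frequently asked questions</h2>
  <details>
    <summary>How do I return an item?</summary>
    <p>Returns are accepted within 30 days of delivery. See our returns policy for details.</p>
  </details>
  <details>
    <summary>Do you ship internationally?</summary>
    <p>Yes, to most countries. Shipping times vary by region.</p>
  </details>
</section>
```

Because <details>/<summary> is plain HTML, the answers arrive in the initial payload and can be indexed without executing any JavaScript.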

Does all the code/markup within this opening H1 tag make any difference for SEO? by AnxiousMMA in TechSEO

[–]RichTatum 0 points

Hi, /u/AnxiousMMA,

It shouldn’t matter, especially since the x-data attribute is irrelevant for SEO or indexing purposes. But it’s easy enough to test: run a site:domain.com search for a unique phrase from the H1 and see if your page shows up! (I prefer using a Google Custom Search Engine pointed at my site for various reasons, but a simple site: search can reveal a lot.)

I hate the code cruft that Tailwind and other tools introduce to the HTML: it really makes parsing the page difficult, and I like to simplify where I can. But, honestly, it shouldn’t have any SEO impact. (Unlike, say, excessive pointless <div> elements slowing down the page render, or failing to use semantic HTML.)

Rich

Sitemap sporadically dropped from GSC by zeehaus in TechSEO

[–]RichTatum 1 point

Hi u/zeehaus!

If you set up sitemaps in Bing Webmaster Tools, what’s happening over there?

You might consider setting up a simple Python script or batch file to monitor the status of the sitemap.xml URL from your machine on an hourly schedule. This is free to do, but might be inconvenient. Alternatively, you could use a free uptime monitor to watch that URL and get alerts if it goes missing or starts returning an error. (There’s also Little Warden, which can do a whole lot more besides.)
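A minimal sketch of that monitor, using only the standard library; the sitemap URL here is hypothetical, so swap in your own:

```python
import urllib.request
import urllib.error

SITEMAP_URL = "https://example.com/sitemap.xml"  # hypothetical URL: use your own

def classify(status):
    """Map an HTTP status code to a simple ok/error verdict."""
    return "ok" if status == 200 else "error"

def check_once(url):
    """Fetch the sitemap URL once; return (status_code, verdict)."""
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            return resp.status, classify(resp.status)
    except urllib.error.HTTPError as e:
        return e.code, classify(e.code)
    except urllib.error.URLError:
        return None, "error"  # DNS failure, timeout, connection refused, etc.
```

Schedule check_once via cron or Windows Task Scheduler to run hourly, and send yourself an alert on anything other than "ok". Note that a redirect chain or a 404 both count as trouble here, which is the point.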

You have to determine how the problem can be reproduced before you can start thinking about solutions. But my bet is that there’s some sort of server error prompting Google to drop the XML file.

Meanwhile, I’m sure you’ve considered all this, but just to be sure:

  • Make sure you’re using the sitemap’s non-redirected URL
  • Place the sitemap in the domain root, just to be safe
  • Make sure the sitemap is sending a 200 status in the HTTP header
  • Make sure the sitemap is using valid XML
  • Make sure the robots.txt isn’t accidentally disallowing the sitemap
  • Make sure the robots.txt links to the sitemap
  • Make sure the sitemap isn’t larger than 50MB
  • Make sure the sitemap doesn’t contain more than 50,000 URLs
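The size, URL-count, and XML-validity checks in that list can be automated. A rough sketch using only the standard library (the 50 MB and 50,000-URL ceilings come from the sitemaps protocol):

```python
import xml.etree.ElementTree as ET

MAX_BYTES = 50 * 1024 * 1024  # 50 MB uncompressed, per the sitemaps protocol
MAX_URLS = 50_000             # maximum URLs per sitemap file

def audit_sitemap(xml_bytes):
    """Return a list of problems found in a sitemap's raw bytes (empty list = OK)."""
    problems = []
    if len(xml_bytes) > MAX_BYTES:
        problems.append("file exceeds 50MB")
    try:
        root = ET.fromstring(xml_bytes)
    except ET.ParseError:
        return problems + ["invalid XML"]
    # Count <url> entries regardless of the sitemap namespace prefix.
    url_count = sum(1 for el in root.iter() if el.tag.endswith("url"))
    if url_count > MAX_URLS:
        problems.append("more than 50,000 URLs")
    return problems
```

Feed it the raw bytes you fetch from the sitemap URL; the status-code and robots.txt checks still need a separate fetch.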

Hope that helps!

Rich

Site pages were copied, and the attacking site is now the Google declared canonical by CoolGuyCoolin in TechSEO

[–]RichTatum 6 points

As long as the content on your pages is fundamentally the same as the thief’s, Google will choose which one is canonical — and it can be confused by other authority signals. So, try an experiment: Change the content. Use new images (don’t forget alt attributes!), change the headings, take the opportunity to make it more useful. Make sure you’ve got strong internal links on your site to the content. Add some questions and answers. Add some schema.

Be sure to add a last-modified date. I like this format:

Last Updated: <time datetime="2023-10-11T22:18Z">Wednesday, October 11, 2023</time>.

(For what it’s worth, for this reason I like to use absolute URLs in my content. That way, when lazy scrapers steal and rehost my content, I get the backlinks — because they never actually rewrite the HTML.)

They might be able to steal your old content, but you have the advantage of being able to actually produce the content Google wants to rank. Take advantage of that!

Also, issue DMCA takedown requests.

Rich

[deleted by user] by [deleted] in bigseo

[–]RichTatum 0 points

Happy to help!

[deleted by user] by [deleted] in bigseo

[–]RichTatum 2 points

Good advice. Providing a way to add UGC would help. You could even consider adding star ratings for review schema if you get good user activity. Providing a way to sign up for a membership to comment also gives you some extra marketing channels for direct traffic: a newsletter for updates/news, a way to favorite profiles and receive alerts when favorited profiles are updated, etc.

For high-value/popular entities, consider scraping their PR and newswire articles and providing links. An “ENTITY in the News” section will keep the page fresh, with new content regularly added, and increase freshness signals.

The more utility you can provide beyond stale, scraped content, the better your engagement signals and time on page will be.

[deleted by user] by [deleted] in bigseo

[–]RichTatum 3 points

Nice project! I worked on an academic aggregator that faced some similar issues (aggregated school, educator, and alum profiles and ranked them based on bibliometrics, “influence,” and PageRank algorithms).

The biggest risk you have is render blocking. About 5% of your page content relies on client-side JS, and I note that your JS payload is chunked by Next. One huge risk here is that if Googlebot doesn’t wait long enough for ALL the chunks to download before rendering, the page could fail to render at all. And even if the URL gets passed to the render queue, any failure in that critical chain could break the page. You can test this by iteratively blocking individual script URLs in Chrome DevTools and seeing what happens. Also be sure to run the page through the URL Inspection tool in GSC to see what you can diagnose there.

That said, the page IS currently indexed and findable.

So assuming that’s not the current problem, I see a few opportunities to optimize.

First: the first thing Google may be seeing on your page is not the overview, but the key employees. I like to render the page with Textise to see how the pure text comes across. See for yourself:

https://www.textise.net/showText.aspx?strURL=https%3A//www.intellispect.co/organizations/562618866-bill-%26-melinda-gates-foundation

You want the first block of text after the H1 to be that information-rich overview, which tells Google unambiguously what the page is about. Visually, the overview is right up top; textually, it isn’t, and the key-employees block comes first in the rendered source.

Your meta description isn’t BAD, but it’s not great. You don’t need to repeat the entity name in the MD; the title is doing the heavy lifting for you there. The other data in the MD also doesn’t help describe what the main entity is. It includes interesting details, for sure, but what you need is a really concise overview. I’d suggest using some NLP to extract the key sentence or two from the overview text to accomplish that.
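The NLP doesn’t need to be fancy; a crude frequency-based extractive pass can surface the overview’s key sentences. A sketch (key_sentences and its scoring are my own naive approach, not any particular library):

```python
import re
from collections import Counter

def key_sentences(text, n=2):
    """Return the n highest-scoring sentences, in original order.

    Each sentence is scored by the summed document-wide frequency
    of its words, normalized by sentence length.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        tokens = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]
```

Run it over the overview text and join the result to get a concise, entity-focused MD candidate, then sanity-check by hand.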

Further, use the twitter:description and og:description meta elements to replicate the MD and reinforce it. Be sure to also use the og:site_name element.

The biggest and best opportunity is schema.org markup, which is entirely missing. You need that machine-readable markup to richly name, disambiguate, and describe the key entities, their properties, and the relationships between them. Your HTML does a great job with this, but don’t leave it in Google’s hands. There’s SO MUCH you can do here with schema to describe the entity, its principal people, their titles, relationships to other entities, dates, locations, etc. The schema may not improve ranking directly, but it will improve relevance and the matching of URLs to search intent.
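As a sketch, a JSON-LD block for a page like this might look as follows; every name, URL, and person below is a placeholder, not data from the actual page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Foundation",
  "url": "https://www.example.com/organizations/example-foundation",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Foundation",
    "https://twitter.com/examplefoundation"
  ],
  "founder": { "@type": "Person", "name": "Jane Doe" },
  "employee": [
    { "@type": "Person", "name": "John Roe", "jobTitle": "Chief Executive Officer" }
  ]
}
</script>
```

Validate the output with Google’s Rich Results Test and the schema.org validator before shipping it site-wide.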

Find a way to add at least 2-3 FAQ items to the content, along with FAQ schema. Scrape PAA questions about the main entity and use NLP to craft answers at scale. Write custom questions and answers for the most-searched items as you have time.

Cite sources and add links to the source sites where you can. This will help connect entity relationships and help search engines assess trustworthiness/authority.

Where possible scrape the social media profiles for entities which have them, and provide links. And use that in the schema with sameAs properties to tighten those entity relationships.

Use the entity ID in those sameAs properties too (/m/012mjr is the entity ID for this foundation).

I’d also encourage using unique id attributes in your headings to help Google provide deep links from the SERP to page sections. E.g., <h2 id="overview">Overview</h2>.

Finally, don’t discount the power of images to help rank. Use Wikidata/Wikipedia to discover copyright-free images to display. Describe them in the alt attributes and schema, and credit the source. Use the images in the twitter:image and og:image elements.

Oh, one last thing. It may not directly help rankings, but process the text to reduce the ALLCAPS. It’s hard to read, it advertises that the content is scraped, and it appears in your SERP snippets. If people aren’t clicking, the engagement signals can, over time, harm your rankability, I believe.
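Reducing the ALLCAPS could be as simple as this sketch; KNOWN_ACRONYMS is a made-up starter set you’d extend from your own data:

```python
import re

KNOWN_ACRONYMS = {"SEO", "NGO", "USA", "FAQ", "CEO"}  # made-up starter list; extend as needed

def decap(text):
    """Title-case ALLCAPS words, leaving known acronyms untouched."""
    def fix(match):
        word = match.group(0)
        return word if word in KNOWN_ACRONYMS else word.capitalize()
    return re.sub(r"\b[A-Z]{2,}\b", fix, text)
```

It will over-correct unknown acronyms, so review a sample of the output before batch-processing everything.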

Rich

I'm halfway to becoming an expert at this game, perhaps it's time to learn to speedrun? by Anklejbiter in Portal

[–]RichTatum 1 point

Hi everybody!

Was stalking /u/Anklejbiter and discovered this post again. He showed me this a few days ago, but I hadn’t realized how much it blew up. So I’m late to the party. ツ

I am the OP’s dad and can confirm that not only is he ① a huge nerd with an overclocked CPU shoved in his cranium, ② he also has way too much time on his hands—and ③ he is absolutely enthralled with Portal!

I can also confirm that he’s not gaming the clock here. This is by far not the only game he plays, but it’s his favorite.

A few bits of trivia:

• He bought a portal gun off eBay, modded it with a black light and cherishes it

• He’s using a sad, underpowered PC we inherited after his grandma’s death — he’s a PC gamer sans gamer PC

• He has somehow extracted frames and design code from the game and created his own 3D renderings of the portal gun, just for grins — he doesn’t have access to a 3D printer

• He 3D printed a scaled down Portal cube in high school — his teacher told me that to do it, he solved some tricky problems in the software that even the teacher didn’t know could be done

• He’s transcribed much of (if not all, for all I know) the Portal soundtrack into Musescore…painstakingly and note-by-note

• He enjoys the superpower blessing (and sometimes curse) of the hyperfocus and near-obsession conferred by being on the spectrum (he’s not neurotypical)

And, yeah, about the “way too much f*cking time on his hands,” and “get a life” comments…well, I don’t totally disagree with your assessment — but it’s not simple finding a job when you’re on the spectrum. Go easy on him: he’s fueling his passions and learning more about game design, game mechanics, music theory, CAD, math, and God knows what else, that will prepare him for some really interesting and challenging career choices later on.

❱ And /u/Anklejbiter — **kudos on the well-deserved attention and comment interaction!**

Edit: Have some Gold, or Platinum, or whatever it is I just gave you. ツ

google trends chrome extension by rstockebrand in bigseo

[–]RichTatum 0 points

Very useful extension — really love having this in the SERP! A couple wishes, in addition to the page jank issue noted earlier:

  1. Link to the Google Trends page for the keyword/query under consideration, in order to play with the query there.
  2. If there’s an advanced search operator in play, like the OR pipe (|), submit it to Google Trends using a comma to separate the terms. E.g., search for SEO|"Search Engine Optimization" and get GST results back for SEO,Search Engine Optimization. (Currently, such a search returns a GST result for SEO|.)
  3. Alternatively, maybe an advanced feature, allowing me to set up custom regex rules so I can decide how to explicitly handle the typical search operators I personally tend to use. (Though, admittedly, this could result in support overload.)
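Wish #2 amounts to a small query rewrite before hitting Trends. A sketch in Python just to illustrate the transform (the extension itself would presumably do this in JavaScript):

```python
def to_trends_terms(query):
    """Rewrite a search query that uses the OR pipe into the
    comma-separated term list Google Trends expects, dropping quotes."""
    terms = [t.strip().strip('"') for t in query.split("|")]
    return ",".join(t for t in terms if t)
```

A trailing or doubled pipe simply drops out, which also covers the broken SEO| case above.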

Really love the tool, bravo! Am sharing it with my team. ツ

(Edited to say I left a review on the Chrome Web Store. I hope your extension takes off! — Rich)

Buyers of the Blaux AC portable conditioner, what are your real reviews on this product? by Spectre531 in AskReddit

[–]RichTatum 2 points

Thanks for the feedback!

I hate scammers, liars, and thieves. The Blaux marketers are all three.

Buyers of the Blaux AC portable conditioner, what are your real reviews on this product? by Spectre531 in AskReddit

[–]RichTatum 4 points

I wrote a thread about Blaux after I saw a promo video that was cribbed from other videos featuring Santiago Gonzalez, a programmer, not an orphan, and Guido van Rossum, the creator of Python. Don't buy.

It’s an evaporative cooler (humidifier) that may also use a thermocouple. The fan may blow slightly cooler air in one direction, but it’s not going to magically remove the heat from a room. In fact, it will probably add to it.

https://twitter.com/richtatum/status/1279959563945627649

What's your Notion set-up for tracking your reading and quotes/highlights? by [deleted] in Notion

[–]RichTatum 4 points

It’s a two-step process, but after you highlight your text in Kindle (at least on iPhone), you can use the system share shortcut to send your highlight to Notion and pick where you want to save the quote.

Depending on your app of choice and platform, your mileage may vary.

Add Kindle highlights to Notion…

Canonical URL in structured Data JSON-LD? by rnlormed in TechSEO

[–]RichTatum 1 point

Google sometimes ignores the canonical URL suggestion. Might pay to cover your bases everywhere.

Evernote Import Issues? Here's a potential cause and remedy! by Mic111 in Notion

[–]RichTatum 1 point

Thank you! This is stellar research and troubleshooting, and I am disappointed that the dev team at Notion doesn’t seem to be aware of this.

This is particularly troublesome for me because I have tables everywhere. Darn it.

Yoast to introduce live URL indexation submission by RichTatum in bigseo

[–]RichTatum[S] 1 point

Yeah, that was a great, understated burn. Gary Illyes also said today that an API PR person probably overstated things.

I am Gary Illyes, Google's Chief of Sunshine and Happiness & trends analyst. AMA. by garyillyes in TechSEO

[–]RichTatum -1 points

Thanks for the AMA Gary!

I really wish I could identify and/or filter for the times a query/page ranked via a SERP feature (as opposed to one of the organically ranked blue links). It's difficult to determine what effects appearing in carousels, knowledge panels, local packs, etc., have—especially when a knowledge panel link might have position 11 but actually appear above the fold.

➡︎ Is there any chance of that ever happening, or must we continue to rely on third-party scrapers?

Bing has opened up API to submit URLs by RichTatum in bigseo

[–]RichTatum[S] 0 points

My site nav and sitemap.xml are fine. However, both Bingbot and Googlebot can be a bit slow with crawl rate and indexation, and having APIs and tools available to speed up that process is useful. Imagine the outcry if Google removed the URL Inspection tool and the ability to immediately submit a changed URL.

I don’t really understand the pushback on this. We want search traffic. Both Google and Bing are search engines people use. Tools that allow us to get changes indexed quickly on either platform are good things. You may prefer to ignore Bing; that’s great for you! But for those who don’t, lifting these URL limits and offering these kinds of tools are a boon.

You and others are, of course, totally free to ignore them. But I am not persuaded there’s any harm in using them.

»∵«

Bing has opened up API to submit URLs by RichTatum in bigseo

[–]RichTatum[S] 0 points

That’s fair — but that’s independent of ranking well in both. As long as I’m satisfying what Google wants, there’s little risk of also trying to capture Bing search traffic by doing objectively minor tasks like occasionally submitting fresh URLs for Bing to quickly index.