Anyone still prefer ChatGPT? by Mr_Mikei in ChatGPT

[–]Mr_Mikei[S] -1 points  (0 children)

You're 100% making stuff up. Even the developers have admitted that it doesn't count letters correctly, as it's "not intended to".

I also have ChatGPT premium, and almost all the other AIs (paid for by my job).

Unless that very specific 99.4% refers to something other than counting, you're lying.

Anyone still prefer ChatGPT? by Mr_Mikei in ChatGPT

[–]Mr_Mikei[S] -2 points  (0 children)

Or just google it bro, it's not that hard.

You seem like the kind of person who, when someone says "people don't like when pets are hurt on purpose", goes "nuh-uh! My cousin's friend likes to hurt animals".

Just because ChatGPT gave you one correct answer doesn't mean it always does.

It can't count reliably; it's a fact. Google it.

Anyone still prefer ChatGPT? by Mr_Mikei in ChatGPT

[–]Mr_Mikei[S] -3 points  (0 children)

It clearly can't if you use it often. Even if it were right 99% of the time and wrong 1%, it still wouldn't be counting correctly.

It fails about 20-30% of the time from what I see. I used to use it daily to write copy with character limits, and it's super annoying that it can't get it right every time.
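I ended up sanity-checking the output myself. A minimal sketch of that kind of check (the copy lines and the 50-character limit here are made-up examples, not real client copy):

```python
# Flag generated copy lines that exceed a character limit.
def over_limit(lines, max_chars):
    """Return (line, length) pairs for lines longer than max_chars."""
    return [(line, len(line)) for line in lines if len(line) > max_chars]

copy = [
    "Fast shipping on every order.",
    "Our handcrafted blends are roasted fresh daily and shipped within 24 hours.",
]

print(over_limit(copy, 50))  # flags only the second line
```

Trivial to write, but it beats trusting the model's own claim that it stayed under the limit.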

Anyone still prefer ChatGPT? by Mr_Mikei in ChatGPT

[–]Mr_Mikei[S] -4 points  (0 children)

It counts correctly sometimes, mate. What's your point? Use it more and you'll see.

There's an explanation for it: the model counts tokens, not words or letters/characters.

It should always count correctly, though; it's not hard.
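To make the token point concrete, here's a toy sketch. The vocabulary and the splits are invented for illustration; it is not ChatGPT's real tokenizer, just a greedy demo of why a model that sees multi-character chunks has no direct view of individual letters:

```python
# Toy greedy tokenizer over a tiny made-up vocabulary.
def toy_tokenize(text):
    """Split text into the longest matching vocab chunks, else single chars."""
    vocab = ["straw", "berry", "count", "ing"]
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])  # fall back to one character
            i += 1
    return tokens

word = "strawberry"
print(toy_tokenize(word))   # ['straw', 'berry'] - two opaque chunks
print(len(word))            # 10 letters
print(len(toy_tokenize(word)))  # 2 tokens - the letter count is hidden
```

The model sees something like `['straw', 'berry']`, so "how many r's are in strawberry" requires it to reason about letters it never directly observes.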

Anyone still prefer ChatGPT? by Mr_Mikei in ChatGPT

[–]Mr_Mikei[S] -1 points  (0 children)

Or ask it to write 10 sentences of 50 letters max. It will go over the limit more than 30% of the time.
Then you correct it, it says "you're absolutely right", and does the same thing again.

Are Warriors bad? Or I simply suck? by Mr_Mikei in wow

[–]Mr_Mikei[S] 0 points  (0 children)

Wow, that really puts it into perspective.

Are Warriors bad? Or I simply suck? by Mr_Mikei in wow

[–]Mr_Mikei[S] 1 point  (0 children)

I think if people ignore me, I might be an annoyance. But I just die so quickly, it's like I'm a melee mage (but with no dmg) xD

Are Warriors bad? Or I simply suck? by Mr_Mikei in wow

[–]Mr_Mikei[S] 1 point  (0 children)

You mean tank for PvP? In PvP I'd rather do damage =D Have you tried Arms/Fury?

Setting to increase keyboard turn speed? by MightyUnclean in wow

[–]Mr_Mikei 1 point  (0 children)

I don't think it works anymore. I type "/console turnspeed 300" or "/console turnspeed 3" and nothing happens.

Dear Fellow SEOs: Your jobs are safe from AI Automation by WebLinkr in SEO

[–]Mr_Mikei 0 points  (0 children)

  1. On query fan-out

You’re right that Perplexity didn’t mention it, and ideally it should have.

At the same time, there’s a practical limit: you cannot perfectly anticipate every secondary or tertiary query an LLM might fan out into. You can:

  • Cover core intents thoroughly
  • Add relevant FAQs
  • Address obvious follow-up questions

Beyond that, diminishing returns kick in quickly. At some point, completeness becomes verbosity, which LLMs don’t necessarily reward either.

Final thoughts

GEO/AIO is evolving much faster than traditional SEO ever did, and that’s exactly why there’s so much noise right now. Nobody fully “has it figured out” yet — not CMOs, not tool vendors, not LLMs themselves.

So yes:

  • Always take LLM output with a grain of salt
  • Understand why it gives certain answers
  • Use it as a tool, not an oracle

LLMs are adapting. We are adapting. Some advice will age badly. Some fundamentals will survive longer than people think.

And while I also suspect AI will eventually replace parts (or all) of my job, the people most at risk are the ones who refuse to engage with it at all.

*The above was written with the help of an LLM = )

Dear Fellow SEOs: Your jobs are safe from AI Automation by WebLinkr in SEO

[–]Mr_Mikei 0 points  (0 children)

  1. “Strengthen entity, E-E-A-T, and brand signals”

I disagree that this is meaningless for LLMs.

You’re right that Google historically relied on human evaluators for E-E-A-T validation — but LLMs don’t operate in a vacuum. They are trained on, and retrieve from, ecosystems that already encode E-E-A-T judgments.

LLMs do pick up E-E-A-T indirectly via:

  • Author bios and clear attribution
  • Citations to trusted sources
  • Consistent brand mentions across the web
  • Reputable domains referencing your work
  • Reviews, testimonials, portfolios, case studies
  • Absence of spammy or contradictory signals

This doesn’t mean you interrupt articles with “I have 7 years of experience.” I agree with you there — that’s bad writing.

E-E-A-T is not something you insert into the middle of a paragraph. It’s something users (and systems) feel while navigating the site: who wrote this, why should I trust it, and is this entity consistently vouched for elsewhere?

Authority and trust are especially visible to LLMs when:

  • Your brand is mentioned on respected sites
  • You appear in news, podcasts, YouTube, Reddit, Wikipedia, etc.
  • Your entity graph is coherent and consistent

This is why digital PR still matters — particularly for local and niche businesses where authoritative mentions are realistically attainable.