Debunking common Google E-E-A-T misconceptions

Learn why Google doesn't use author bios and other assumed E-E-A-T elements for search ranking and what they may actually use.


For the longest time, I’ve avoided writing or talking about E-E-A-T.

Having been a Google quality rater myself (almost a decade ago now), I quickly realized what E-E-A-T was: human language describing the ultimate goal of the algorithms, so that raters without access to Google’s internal data can evaluate the results those algorithms produce.

With the recent clarification that E-E-A-T is not a ranking signal, factor, or system, I want to jump in and hit on several key points.

First of all, what is E-E-A-T? 

As you probably know, E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness. The Experience part is the newest. The concept was originally launched as just E-A-T. 

Many have argued that it should be E-E-A-T-T to include timeliness, but in that case, I think we could come up with some much more interesting acronyms.

Where did E-E-A-T come from? 

E-E-A-T comes from Google’s Search Quality Rater Guidelines. It’s important to remember that the QRG is not a list of ranking factors, systems, or signals. It’s a guide for human raters to use for various tasks. 

Those tasks can include comparing sets of search results and seeing which is better or comparing pages to see which is more relevant to a query. 

The rater data can be used when evaluating proposed algorithm changes or to create test sets that Google uses in other internal evaluations. However, the raters have no direct impact on actual ranking algorithms, penalties, etc. 

Why are you talking about E-E-A-T right now? 

Thanks to some wording changes in the SEO starter guide and tweets by Google Search Liaison Danny Sullivan, questions are popping up around the topic. That led me to do an X thread, and several people replied asking for a blog post, so here we are. 

It all started with this tweet, in which Sullivan says that the common elements of E-E-A-T SEOs talk about aren’t actually ranking factors.

Here, Sullivan is talking about E-E-A-T in general and what SEOs think make up E-E-A-T. He clarifies that none of them are actually ranking factors. 

For a while, SEOs have been talking about tactics rumored to make up E-E-A-T, such as:

  • Having author bios and profiles on pages.
  • Making sure the advice says it has been reviewed by an expert.
  • Including relevant contact information on the page.
  • Linking to or getting links from authorities.

The catch is that none of these are ranking factors, because there is no such thing as an E-E-A-T score.



Why doesn’t Google use these things?

The web is huge and diverse. There are so many ways to code things and so many ways to screw up coding things that it’s hard to glean specific types of information from pages.

This is one of the reasons search engines like Google and Bing created structured data (Schema.org markup) and XML sitemaps – to make their job easier.

Remember when Google used to have rel=author markup? How many SEOs abused that? The answer is lots!

If you’ve ever tried creating your own web crawler (and you should!), you’ll know how hard it is just to extract a date from a page. Dates appear in so many formats, markup styles, and page locations that entire libraries exist solely for guessing them.
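To make that concrete, here’s a minimal sketch of the date-guessing problem. The pattern list below is invented for illustration – real date-extraction libraries handle far more formats (and still guess wrong sometimes):

```python
# Illustrative sketch: guessing a publish date from raw HTML.
# The patterns and formats here are hypothetical examples, not any
# real library's API.
import re
from datetime import datetime

DATE_PATTERNS = [
    (r'"datePublished"\s*:\s*"([^"]+)"', "%Y-%m-%dT%H:%M:%S"),  # JSON-LD
    (r"(\d{4}-\d{2}-\d{2})", "%Y-%m-%d"),                       # ISO date
    (r"(\w+ \d{1,2}, \d{4})", "%B %d, %Y"),                     # "March 5, 2024"
    (r"(\d{1,2}/\d{1,2}/\d{4})", "%m/%d/%Y"),                   # US-style
]

def guess_date(html: str):
    """Return the first parseable date found in the HTML, or None."""
    for pattern, fmt in DATE_PATTERNS:
        for match in re.findall(pattern, html):
            try:
                return datetime.strptime(match, fmt).date()
            except ValueError:
                continue  # the matched text wasn't actually a valid date
    return None
```

And this only covers four conventions – multiply that by every CMS, locale, and hand-rolled template on the web and you see the problem.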

It’s the same with authorship and contact information: it’s not easy to crawl and scrape at the scale of the web. Using the stuff SEOs think Google uses in a robust, scalable way would be difficult. 

They could probably figure it out, but then there’s the whole SEO problem. SEOs love to manipulate this stuff. 

As soon as SEOs started saying we need author profiles to rank (reminder: we don’t), all the black hats started creating fake authors and profiles for their AI-generated content. They started saying that it was reviewed by an expert, etc. 

Should they get a ranking boost for that? How do you tell that they just made it up instead of actually doing it? Humans can easily tell this with research and critical thinking – but can a bot? Should a bot?

If concepts like expertise and authority were just derived from taking your word for it on the page, we wouldn’t even need concepts like expertise and authoritativeness in the first place. 

Search engines can do better than taking your word for it

Search engines have lots of signals they can use that don’t rely on taking your word about your E-E-A-T.

Side note: I use terms like token, factor, signal, and system to mean distinct things. In Google’s documentation, though, as Sullivan clarifies, they are often used interchangeably. 

For clarity, here’s how I use the terms:

  • Token: The smallest piece of data from a query, document, etc. It could be a word part, a word, an n-gram, etc.
  • Signal: Any characteristic of a document, link, query, etc. 
  • Factor: Something with a weight used in ranking. It could be a signal, a combination of signals, the output of a system, etc.
  • System: Processes factors and/or signals. It can adjust rankings or output signals or other factors.

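To make the token definition concrete, here’s a toy sketch. The tokenizer and n-gram builder below are purely illustrative – not how any search engine actually tokenizes text:

```python
# Toy tokenization: split a query into word tokens, then build n-grams
# (sliding windows of n consecutive tokens). Purely illustrative.
def tokenize(text):
    """Lowercase and split on whitespace -- the crudest possible tokenizer."""
    return text.lower().split()

def ngrams(tokens, n):
    """All runs of n consecutive tokens, joined back into strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = tokenize("Best running shoes")     # ["best", "running", "shoes"]
bigrams = ngrams(tokens, 2)                 # ["best running", "running shoes"]
```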
Using my definitions, E-E-A-T isn’t a signal, a factor, or a system. Let’s get that out of the way. 

So, if search engines aren’t using the stuff they mention in the QRG, what might they be using?

If I had to guess, I’d say that the actual signals used to reward authoritative sites boil down to a version of PageRank (i.e., link authority) and aggregate click data from search logs that feed into some sort of machine learning algorithm.

What do I mean by aggregate click data? It’s about looking at massive amounts of click data in aggregate – not at “for this query, users clicked this site.” 

We’re talking about data like “across 100 million clicks, the most-clicked results all had higher PageRank, included the keyword in the title, and shared 700 other things….” 

Could there be some domain-level metrics here? Maybe, but it really doesn’t matter for the scope of this article. 

Rather than take your word on your authoritativeness, search engines can instead take the word of their users as a whole. If your site is more authoritative and trustworthy, people will link to it more.
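As a refresher on how link authority works, here’s a toy power-iteration version of PageRank over an invented three-site graph. The 0.85 damping factor comes from the original PageRank paper; the graph and iteration count are made up for illustration:

```python
# Toy power-iteration PageRank. The link graph below is hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with rank spread evenly
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:             # otherwise split its rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "authority.example": ["blog.example"],
    "blog.example": ["authority.example"],
    "spam.example": ["authority.example"],  # links out, but nobody links in
}
ranks = pagerank(graph)
# The site with the most inbound links ends up with the highest score.
```

Notice that `spam.example` can link out all it wants; with no inbound links, its own score stays at the floor.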

But links aren’t enough; they can be spammed. That’s where aggregate click data comes in.

If your site is authoritative, users are going to click on it. Remember, I’m talking at the aggregate macro level here. Log file analysis! I’m not saying clicks to an individual site for a specific query are a ranking factor. That’s a whole different debate. 

Look at the SERP as a whole, though. If one ranking algorithm variant gets more clicks on the higher-ranked sites than another, it might be doing a better job rewarding the more trustworthy sites. 

A machine learning algorithm can quickly figure out if the top-clicked sites share the same common features. A search engine can use this type of data to evaluate algorithms or adjust rankings.

(Again, this is not based on individual clicks but on finding the common set of features that the top-clicked sites share. These are likely all weird math things about the content and links.) 
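To illustrate “finding the common set of features” in its simplest possible form, here’s a toy sketch. The features, results, and click counts are all invented, and a real system would use far more sophisticated machine learning than this click-share tally:

```python
# Toy aggregate click analysis: given logged results with binary features
# and the clicks each received, tally what share of all clicks went to
# results carrying each feature. Everything here is invented.
from collections import defaultdict

results = [
    # (features present on the result, clicks it received)
    ({"keyword_in_title", "high_pagerank"}, 900),
    ({"keyword_in_title"}, 400),
    ({"high_pagerank"}, 350),
    (set(), 50),
]

def feature_click_shares(results):
    """For each feature, the fraction of total clicks going to results that have it."""
    total_clicks = sum(clicks for _, clicks in results)
    shares = defaultdict(float)
    for features, clicks in results:
        for feature in features:
            shares[feature] += clicks / total_clicks
    return dict(shares)

shares = feature_click_shares(results)
# Features shared by the most-clicked results rise toward the top.
```

Scale that tally up to hundreds of features across millions of queries and you have the kind of aggregate evidence that doesn’t depend on any single site’s clicks – or on taking anyone’s word for anything.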

So, where does the QRG definition come in? 

Remember the raters? They:

  • Don’t have access to link data or click data. 
  • Don’t have machine learning outputs.
  • Don’t have hundreds of signals about every site to look at. 
  • Aren’t directly affecting any site’s ranking.
  • Aren’t training the algorithm. 

Rather, they provide consistent data for Google engineers to measure algorithm changes. 

To do this, they need human language for what types of things a human thinks align with expertise, authoritativeness, and trustworthiness. 

Ideally, the algorithmic signals will align with the human ones – and if they don’t, Google will keep tweaking.

The good news is that since none of those traditional E-E-A-T signals (author bios, etc.) are fed into the machine learning algorithms, you don’t really need them (or need to fake them) to rank. 

If ranking is the only thing you care about, then, no, you don’t need them. 

That said, most of us care about users, conversions, sales, etc. – and users love this stuff.

For many searches, users prefer to read content written by a real person. But that doesn’t mean your dictionary definition or sweatpants product descriptions need human author bios. No real human wants that.

Likewise, humans searching for medical information want factual information from a doctor or reviewed by one. Still, it doesn’t mean you need to have a doctor review your article about recycling tires or building a treehouse.

Almost everything SEOs recommend doing for E-E-A-T is good for users – you know, your actual audience. So yes, do that stuff if it makes sense for your users. 

The better their experience, the more likely they are to link to you, share your content, pass on your business card, or click on your results. That stuff might actually help you rank higher. 

Please make sure it makes sense for your users before spending a ton of money on experts you might not need, and your users might not want.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Ryan Jones
Contributor
Ryan Jones is a Senior Vice President of SEO at Razorfish, where he co-leads the SEO practice. Prior to being an SEO, Ryan worked as a software engineer. His vast technical and marketing experience gives him a unique lens into SEO and technical problems, as well as the ability to rapidly prototype or speak to various stakeholders. Ryan has created several industry tools, including SEOdataviz.com and serverheaders.com, as well as the satirical blog WTFSEO.com. When he's not doing SEO, Ryan enjoys playing hockey, softball, golf, and attempting to take over the world - which he would have already gotten away with had it not been for those meddling kids and their dog.
