What Today’s Google Search Ranking Volatility Says About Tomorrow’s SEO Strategies

By Ethan Lazuk

Last updated: March 2024

The Old Man and the Google Sea, artwork by Ethan Lazuk: an old man on calm water with a Google-colored wave in the background.

One helpful thing about being an SEO is that we tend to build content around evergreen themes. That’s come in handy here, as this is now the third major update of this blog post, pun intended. 😉

When I first published this article about Google Search rankings volatility on July 29th, 2023, the Semrush Sensor reached 9.2 out of 10, a “Googlequake.”

Semrush Sensor at a 9.2 Googlequake

What was the reason for the summer’s volatility?

Well, many in the SEO community speculated about what could be causing the fluctuations and what might be coming down Google’s pipeline as a result.

Here’s a summary of the announced Google Search updates thereafter:

One that stands out is the September 2023 helpful content update, otherwise known as the “third HCU” or just “the HCU.”

When I last updated this post on March 1st, 2024, talk of the HCU was still ongoing (as we’ll recap below).

At the time, we hadn’t seen any announced updates from Google since November 2023. That meant speculation about which ranking updates were soon to come, including high hopes for HCU recoveries, was rampant.

Well guess what? Just four days later, on March 5th, Google began rolling out its March 2024 core update and a spam update. Its webspam team also unleashed manual actions.

Significantly, this March core update marked a change for the helpful content system, which is now integrated into Google’s core ranking systems and is no longer site-wide but page-level.

Expect a fourth update to this post in April 2024, after we get a resolution to the volatility.

Below is a summary of discussions around rankings volatility, covering the lead-up to the major updates of 2023 and, most recently, what preceded these latest March 2024 updates.

Before diving into the chatter around Google updates, though, let’s summarize a couple of key themes:

Ongoing rankings volatility is our new reality, given Google’s AI-based ranking systems.

Volatile search rankings are likely our new normal, given how some Google ranking systems are AI-driven (using machine learning, neural networks, and deep learning).

Here’s a reference to the dynamic nature of those systems based on a custom GPT Marie Haynes built that references Pandu Nayak’s testimony:

ChatGPT prompt: “How does Google Search use machine learning in its ranking systems?”

ChatGPT response: “Deep Learning Systems Integration: Google incorporates various deep learning models, such as RankBrain, DeepRank, and RankEmbed, into its ranking processes. These systems are part of a larger set of algorithms that work to understand the intent behind queries, the relevance of content, and the quality of sources.

User Engagement Signals: Google’s machine learning models also use user engagement signals (such as clicks, scrolls, and mouse hovers) to train and optimize its ranking systems. These signals contribute to understanding what users find helpful and relevant, influencing how content is ranked.

Continuous Training and Retraining: Machine learning models in Google Search are continually trained on fresh user data, including clicks and queries, as well as feedback from human quality raters. This ongoing process ensures the models stay updated with the latest information and user preferences, maintaining the relevance and accuracy of search results.

Google’s use of machine learning in search ranking systems signifies a shift from static, rule-based algorithms to more dynamic, learning-based approaches. These systems are designed to better understand user intent, content relevance, and quality, providing more accurate and personalized search results.” [Highlights added.]

– ChatGPT: Understand more about Search (GPT)
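
Purely as a toy illustration of those engagement-signal and retraining points (Google’s actual implementation isn’t public), here’s a minimal Python sketch of how click data could train a model to reweight ranking features. The feature names, labels, and weights are all hypothetical:

```python
import numpy as np

# Toy feature matrix: one row per (query, document) pair.
# Columns: [content_relevance, source_quality, freshness]
X = np.array([
    [0.9, 0.8, 0.3],
    [0.4, 0.9, 0.7],
    [0.2, 0.3, 0.9],
    [0.8, 0.7, 0.6],
])
# Engagement label: 1 = the result was clicked, 0 = skipped.
y = np.array([1, 1, 0, 1])

w = np.zeros(X.shape[1])  # learned feature weights
lr = 0.5                  # learning rate

# Logistic-regression training loop: nudge the weights toward
# whatever feature mix best predicts the observed clicks.
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))      # predicted click probability
    w += lr * X.T @ (y - p) / len(y)  # gradient ascent on log-likelihood

# Score a new document: a higher score ranks higher in this toy model.
new_doc = np.array([0.7, 0.6, 0.5])
print("score:", new_doc @ w)
```

Retraining on fresh click data, as the quote describes, would simply mean rerunning a loop like this on new batches of examples.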

The degrees of SERP volatility may change, leading to periodic observations of “high volatility” on Google Search, but the volatility itself will be persistent.

As a result, keyword rankings are becoming obsolete.

This is more of a personal belief, but I’ve heard others agree that keyword rankings are becoming obsolete as a measure of SEO performance.

Instead, our SEO strategies should emphasize cumulative brand visibility across user journeys in different types of organic search results and surfaces, all while aligning content with E-E-A-T criteria and measuring goals by “qualified” clicks, (assisted) conversions, and business revenue.

Who should read this article?

If you’re in a position to care about organic keyword rankings and traffic on Google Search, either because you oversee a website’s SEO strategy or because you rely on organic traffic for your business goals, it helps to know the context behind reported rankings fluctuations, the ranking system updates that follow, and what it all could mean for the future of Google’s search results and your SEO strategies.

We’ve got a long and fascinating journey ahead of us.

Here’s a summary of what we’ll cover:

This article’s information spans from today (March of 2024) back to July of 2023.

It can be easy to get caught up in the moment, yet by putting rankings volatility in context, we can appreciate how current Google rankings fluctuations and updates relate to yesterday’s volatility and tomorrow’s SEO strategies.

*New: The state of Google Search volatility and related discussions (as of March 2024)

Last summer, the SEO world buzzed for months over rankings volatility in Google Search. (That Semrush Sensor chart above gives you some idea why.)

In December of 2023, I updated this post with explanations about how that volatility was a harbinger of several major ranking system updates that rolled out later that year.

Google also made other changes that impacted rankings, like the completed rollout in November of its hidden gems improvements to core ranking systems, which had actually been live for months.

The other big news was the release of more details about how Google’s ranking systems work, based on testimony from Pandu Nayak (as referenced above), as well as the release of exhibits about search rankings from Google’s antitrust trial.

Since then, we’ve continued to hear chatter about search ranking volatility, plus news of Google Search changes, but (as of March 1st) still no announced updates … yet. 😉

The volatility chatter continues into March.

Talk of rankings volatility in December was first related to Google’s last announced updates:

Followed by unannounced (and speculated) updates or rankings system adjustments:

And this continued through the tail end of February:

Other recent changes have impacted Google Search.

We’ve seen increasing reports of spam in Google’s search results from early 2024 on.

I’ve also been noticing a greater variety of social media content appearing in Search, which I believe is part of a hidden gems/Perspectives paradigm shift that began in March of 2023 and amplified from November onward.

Specifically on that topic, Reddit has not only increased its visibility in terms of indexed URLs on Search, but Google penned a deal that likely ensures we’ll see more Reddit content:

We also have Google’s growing investment in consumer AI products, like Gemini, which has replaced the Bard branding and is incorporated into Workspace as well as the mobile Google Search app.

That rise of Gemini has also led to discussions like this recent article from Wired about how Search may no longer be king at Google:

Wired article: “Google Prepares for a Future Where Search Isn’t King.”

Anticipation and dissatisfaction with Google ensue for SEOs.

Two major themes ran through this environment.

On the one hand, there was a tremendous amount of anticipation in late-February to early-March that something big was on the horizon — perhaps a major spam update, a fourth helpful content update, or a core update. (More on this in a moment.)

On the other hand, there was dissatisfaction amongst SEOs with Google:

Let’s speak about the dissatisfaction first.

I tend to read the comments on SERoundtable articles to get a sense of community feelings around stories, specifically Google ranking updates and volatility. It’s often fascinating to see the sheer number of comments these topics attract.

Between when I last updated this post (December 2023) and February 29th, there were 14 SER articles written about Google updates. Six mention “volatility,” and the majority have 200+ comments.

Search Engine Roundtable articles about Google updates.

One of the articles from December had 100+ comments after one day, and then one week later, it had over 1,000 comments:

Seeing how people describe their experiences in those comments is also interesting.

Here’s an early comment from the December article above:

Person who left a review on Search Engine Roundtable article about SERP volatility

And here’s another from a week later:

Comment about Google rankings in a SERP volatility article on SERoundtable.

On first impression, it appears people tend to comment when they’re upset about the SERP changes, presumably about being outranked for queries they care about. 

We saw a fair bit of this sentiment from niche site owners right after the HCU, and it would appear that update is still a topic of fierce conversation.

That was in December.

But how about comments from recent articles in late February?

Comment about the HCU on SER in reference to dissatisfaction with Google.

Five months later, the HCU was still being referenced in comments, in this case related to that poll about dissatisfaction with Google, not even rankings directly.

This theme of dissatisfaction with Google hasn’t always been so prevalent, though, at least judging from SER article comments.

February 2024 also marked the 13th anniversary of Google’s Panda update rollout:

The first thing that popped out when reading the comments from that 2011 article was the tone of optimism and satisfaction around Google for that algorithm update:

2011 SER comment about Panda.
Positive 2011 SER comment about Panda.
Positive SER comment about Panda.

Now, compare those comments to this perspective left in the same article’s comment thread but in February of 2024:

2024 comment on 2011 SER Panda article mentioning HCU.

Not only is it critical, but it also mentions the HCU.

Relatedly, SEOs like Eli Schwartz have also drawn parallels between the lessons content creators learned after Panda and now after the HCU:

“Helpful Content Updates incorporate AI signals to beat AI content and the secret to beating this algorithm is not to be even more sneaky with quality signals, it is to integrate the lesson I and many others did after Panda: focus on the user.” [Highlights added.]

– Helpful Content Update = Panda of Today, Eli Schwartz (Product Led SEO newsletter, 2/29/24)

That also plays into the anticipation (and speculation) side of things.

Spam and HCU anticipation.

Would we see a fourth helpful content update, or something else?

First, here was my speculation.

An interesting trend (albeit based on a tiny sample size) was the proximity of Google’s two most recent helpful content updates with spam updates.

The second HCU rolled out in December of 2022 and was sandwiched between an October and December spam update. The third HCU in September of 2023 (the one that still gets talked about) was then followed by an October spam update.

Excerpt from Google's ranking updates page.

We’ve heard a lot of reports and criticism about spam in Google’s search results over the last couple of months.

I thought perhaps we’d see an upcoming spam update that corresponded with a fourth helpful content update, maybe before or after a core update in March or April of 2024.

(Update: As it turned out, we got a core update and a spam update, and the helpful content system is now part of core updates. I’ll write more about how this impacted sites when the rollout completes in April.)

Other HCU predictions or analyses had been shared recently by SEOs like Glenn Gabe and Marie Haynes.

Glenn Gabe recently hypothesized about several directions in which the helpful content system could evolve: becoming less severe, becoming more granular, applying to more large-scale sites with pockets of unhelpful content, shifting intensity toward page experience signals (namely ads), or continuing as is. You can dig into his article for richer descriptions and history:

Marie Haynes shared her hypothesis in a recent newsletter (ep. 326) that the HCU will get stronger as a result of Google’s Gemini 1.5 AI model; more details on that model here:

Other factors at play?

The other thought was that maybe there are just so many dynamic adjustments in Google’s systems that we can’t tell what’s going on!

We got this cryptic post from Danny Sullivan (Google Search Liaison) in late January of 2024, but along with it came an interesting observation from Jason Barnard:

So often we hear “SERP” volatility and think of rankings, but it could be other factors at play causing those fluctuations less directly, like knowledge graph updates.

Let’s also keep local results in mind.

Last summer, a few days after volatility charts from Moz, Semrush, and other rank trackers were shared in SER on July 16th, there were reports of volatility in local packs from BrightLocal on July 18th.

Well, here’s a recent SER report on rankings volatility from February 24th:

Followed by BrightLocal reporting volatility in local results on February 27th:

On a related note, Barry Schwartz and Glenn Gabe discussed rankings updates in a February SEO for Paws Charity Live Stream.

*To support that cause, which helps cat and dog shelters in Ukraine, you can give through Anton Shulke.

The rest of this article is largely the way it was left after the second update in December. It offers good historical context for recent ranking volatility.

Expect another update to this post in April.

Enjoy!

Google Search ranking systems & changes (a little context)

Google Search releases several core updates to its ranking systems every year. Core updates are publicly announced by the search engine company (along with other significant ranking system updates) because they are broad changes with the potential to affect many websites. 

Tellingly, a Q&A document posted by Danny Sullivan (Google Search Liaison) in the Google Search Central blog during the November core (and reviews) update explained how the back-to-back October and November core updates were related to separate systems, along with giving context on how systems differ from updates:

“We have different systems that are considered core to our ranking process; this month’s core update involves an improvement to a different core system than last month. …

Ranking systems are what we use to generate search results. We use multiple ranking systems that do different things. … Updates are when we make an improvement to a ranking system.” [Highlights added.]

– Danny Sullivan, Google Search Central Blog, A Q&A on Google Search updates

What’s notable here are the vocabularies around rankings, systems, and updates.

Google Search has a “ranking process” with different “core systems.” In addition, there are other ranking systems, including ones with their own announced updates, like:

  • Spam detection systems
  • Reviews system
  • Helpful content system

The helpful content system also marked a vocabulary shift for Google from discussing ranking updates to ranking systems.

In a Search Off the Record podcast episode called Let’s talk ranking updates, which aired on August 22nd, the same day the August 2023 core update began rolling out, Danny Sullivan provided this context on rankings systems and updates:

“So one of the changes we did last year is we started talking about our ranking systems as opposed to ranking updates, and that was trying to clean up some legacy stuff that we had inherited.

People may recall things like, you know, the “Panda” update, when it would happen. And what it was, is, this was a system designed to provide more relevant results. But it was called an update because people were used to the time that, any time Google Search results would maybe shift because of a ranking shift or whatever, we called that an update. …

By the time we got to something like the helpful content update, as we called it when we launched it, it was going to get really confusing to say, “Well, now we’ve done the ‘helpful content update update’. …

So it was really this reset to say, “Look, we have these ranking systems, for example, the helpful content system, and periodically some of these systems get updated.” …

As a creator, none of this really should cause you to do anything different. … But, of course, if you’ve seen a change after one of these systems has been launched or an updated system has been launched, then that’s probably a sign that maybe you’re not as aligned as you should be with what these things have been looking for, what Google’s generally trying to look for. So rereview that advice, and maybe it’ll help you get aligned with those systems better.” [Highlights added.]

– Google Search Central, Search Off the Record, Episode 63: Let’s talk ranking updates (PDF Transcript)

Following Google Search’s guidance, in general, can thus help you align your content with its ranking systems, which may tamp down the rankings volatility your site experiences, at least during announced updates. But if your site is impacted by an update, understanding the associated ranking system may help you align yourself better with its criteria.

Spam updates

Google Search employs several spam detection systems, most notably SpamBrain. Google defines SpamBrain as “our AI-based spam-prevention system,” so it’s singular.

Google also says that when it makes a notable improvement to these spam systems, it gets reported as an update:

“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.”

– Google Search Central, Google Search spam updates and your site

The October 2023 spam update announcement didn’t mention SpamBrain, but rather plural “spam detection systems”:

“We’re releasing an update to our spam detection systems today that will improve our coverage in many languages and spam types.”

– Duy Nguyen, Google Search Central Blog, October 2023 Spam Update

Reviews updates

The November 2023 reviews update (which finished rolling out on December 7th) also marked a point, says Danny Sullivan in the Q&A article, “when we’ll no longer be giving periodic notifications of improvements to our reviews system, because they will be happening at a regular and ongoing pace.”

In other words, the reviews system is now rolling and updating in real-time. We also get a sense of this behavior, as well as how broad this system may be, from the ranking systems documentation:

“The reviews system works to ensure that people see reviews that share in-depth research, rather than thin content that simply summarizes a bunch of products, services or other things. The reviews system is improved at a regular and ongoing pace.

The reviews system is designed to evaluate articles, blog posts, pages or similar first-party standalone content written with the purpose of providing a recommendation, giving an opinion, or providing analysis. …

Reviews can be about any topic. …

The reviews system primarily evaluates review content on a page-level basis. However, for sites that have a substantial amount of review content, any content within a site might be evaluated by the system.” [Highlights added.]

– Google Search Central, Documentation: Google Search’s reviews system and your website

The Q&A document doesn’t say, however, if this system is now part of the “core” system.

Helpful content updates

As for the helpful content system, this one requires some nuance to understand:

“The helpful content system … generates a site-wide signal that we consider among many other signals for use in Google Search (which includes Discover). The system automatically identifies content that seems to have little value, low-added value or is otherwise not particularly helpful to people. …

Our classifier runs continuously, allowing it to monitor newly-launched sites and existing ones. As it determines that the unhelpful content hasn’t returned in the long-term, the classification will no longer apply.

Periodically, we refine how the classifier detects unhelpful content. When we do this in a notable way, we share this as a “helpful content update” on our Google Search ranking updates page. After such an update finishes rolling out, and if the refined classifier sees that content has improved, then the unhelpful classification from our previous classifier may no longer apply.” [Highlights added.]

– Google Search Central, Documentation: Google Search’s helpful content system and your website

In other words, the helpful content system is rolling, and if your site triggers an unhelpful content classifier, which happens automatically, then you can get it removed over the long term.

However, if the classifier is refined during a “helpful content update,” its new criteria could remove the unhelpful classification. Which is to say, you probably need to wait for an HCU, but not necessarily.

If that rings a bell, it’s because core update recoveries can work similarly.

Core updates

In Google Search’s core updates and your website, Google Search Central defines core updates as “significant, broad changes” to Google’s “search algorithms and systems.”

Notably, it doesn’t say, “core systems,” but just “systems.” Maybe I’m reading too much into that omission, but it is a difference in language from the Q&A document mentioned above.

But as with the helpful content system’s unhelpful content classifier and your site’s recovery from it, the case with core updates is that:

“Broad core updates tend to happen every few months. Content that was impacted in Search or Discover by one might not recover—assuming improvements have been made—until the next broad core update is released.” [Highlights added.]

– Google Search Central, Google Search’s core updates and your website, How long does it take to recover from a core update?

It’s also interesting that the verbiage changed to “broad” core updates when it only referred to “broad changes” prior.

Other Google rankings systems that SEOs may recognize

In its ranking systems guide, Google Search Central also mentions others with names SEOs will recognize but that aren’t associated with updates (at least announced ones).

These systems include:

  • BERT (“an AI system Google uses that allows us to understand how combinations of words express different meanings and intent”)
  • Freshness systems (“various “query deserves freshness” systems designed to show fresher content for queries where it would be expected.”)
  • Link analysis systems and PageRank (“various systems that understand how pages link to each other as a way to determine what pages are about and which might be most helpful in response to a query. Among these is PageRank, one of our core ranking systems used when Google first launched.”) (see the minimal PageRank sketch just after this list)
  • MUM (“an AI system capable of both understanding and generating language. It’s not currently used for general ranking in Search but rather for some specific applications”)
  • Neural matching (“an AI system that Google uses to understand representations of concepts in queries and pages and match them to one another.”)
  • Passage ranking system (“an AI system we use to identify individual sections or “passages” of a web page to better understand how relevant a page is to a search.”)
  • RankBrain (“an AI system that helps us understand how words are related to concepts. It means we can better return relevant content even if it doesn’t contain all the exact words used in a search”)
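
Since PageRank is the one system on that list with a public, well-documented algorithm, here’s a minimal power-iteration sketch in Python. The four-page link graph is invented purely for illustration:

```python
import numpy as np

# Toy link graph: adjacency[i][j] = 1 if page i links to page j.
adjacency = np.array([
    [0, 1, 1, 0],  # page 0 links to pages 1 and 2
    [0, 0, 1, 0],  # page 1 links to page 2
    [1, 0, 0, 1],  # page 2 links to pages 0 and 3
    [0, 0, 1, 0],  # page 3 links to page 2
], dtype=float)

n = len(adjacency)
damping = 0.85  # classic damping factor from the original PageRank paper

# Each page splits its "vote" evenly among the pages it links to.
out_degree = adjacency.sum(axis=1, keepdims=True)
transition = (adjacency / out_degree).T

# Power iteration: redistribute rank until the scores stabilize.
rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * transition @ rank

print(rank)  # page 2, with the most inbound links, scores highest
```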

Pretty interesting how many of them contain the words “AI system.” As we’ll see, that holds importance for Google’s emphasis on machine learning and deep learning models.

As Danny Sullivan also explains about that ranking systems guide in the podcast mentioned earlier:

“[E]very single ranking system we have is not listed. But a lot of really interesting ones are.” [Highlights added.]

– Google Search Central, Search Off the Record, Episode 63: Let’s talk ranking updates (PDF Transcript)

We’ll also learn why that’s significant below, such as in the context of core systems like Navboost.

Retired systems

Google Search also has historical systems that have been folded into its core systems.

In 2011, Google Search released Panda. However, Google says Panda “evolved and became part of our core ranking systems in 2015.”

The same happened in 2016 to Penguin, which was announced in 2012.

As noted above, the reviews system now runs continuously, but it hasn’t been said (that I saw) if it’s part of the “core ranking systems” or not.

Improvements … (question mark?)

We know the 2023 “hidden gems” improvement is part of the core ranking systems.

Except, I still don’t know technically if that’s what to call it.

Google Search Central had called hidden gems “work” prior to us finding out it had rolled out.

Barry Schwartz on SERoundtable described it as the “hidden gems ranking system or update,” so either a system or an update to one.

Then in SEL, Barry refers to it as ranking improvements (so plural), as well as an update that “is part of the Google core ranking system,” with system being singular, instead of “core ranking systems” (plural) like we saw Danny Sullivan write in the Q&A document above. But Barry was also quoting a Google Search senior director …

Confused yet?

Well, I’ve got more.

In an article on The Keyword, Google’s blog, in May of 2023, Google Search’s Lauren Clark wrote:

“In addition to making it easier to find authentic perspectives, we’re also improving how we rank results in Search overall, with a greater focus on content with unique expertise and experience. Last year, we launched the helpful content system to show more content made for people, and less content made to attract clicks. In the coming months, we’ll roll out an update to this system that more deeply understands content created from a personal or expert point of view, allowing us to rank more of this useful information on Search.

Helpful information can often live in unexpected or hard-to-find places: a comment in a forum thread, a post on a little-known blog, or an article with unique expertise on a topic. Our helpful content ranking system will soon show more of these “hidden gems” on Search, particularly when we think they’ll improve the results.” [Highlights added.]

– Lauren Clark, Google, The Keyword: Learn from others’ experiences with more perspectives on Search

It’s pretty clear from that passage that “hidden gems” would be an “update” to the “helpful content ranking system,” yet by November, it became an improvement(s), or maybe an update, to the core ranking system(s).

I’m going with this — hidden gems is an improvement to Google’s core ranking systems.

The reason is that “improvement” was how Hummingbird was described by Google Search: “This was a major improvement to our overall ranking systems made in August 2013. Our ranking systems have continued to evolve since then, just as they had been evolving before.” [Highlight added.]

Note the description of Hummingbird as a “major improvement,” while hidden gems was described as “improving how we rank results in Search overall.”

It’s fair to say, just as Hummingbird “set the stage for dramatic advances in search,” as Roger Montti describes in SEJ, maybe we can expect the same impact long-term from hidden gems, as well as the helpful content system, regardless of how they’re related or described. 😉

Bringing it all back home

So why do we need ALL of this detail …

True, counting ranking systems or signals is probably akin to counting stars — some burn out, others are concealed behind clouds from time to time, and ultimately, no one really cares. 🙂

At least, I don’t personally.

Knowing how many ranking signals Google Search uses wouldn’t do anything for my SEO work.

I almost think it’s better to go the other direction and surrender to the unknowable.

As Pandu Nayak, Vice President in Search at Google, wrote in a blog post in 2021 for The Keyword:

“New language models like MUM have enormous potential to transform our ability to understand language and information about the world. And while they may be powerful, they do not make our existing systems obsolete. Today, Google Search employs hundreds of algorithms and machine learning models, none of which are wholly reliant on any singular, large model. …

We look forward to making Search a better, more helpful product with improved information understanding from these advanced language models, and bringing these new capabilities to Search in a responsible way.” [Highlights added.]

– Pandu Nayak, Google, The Keyword: Responsibly applying AI models to Search

So it’s clear from this excerpt that Google Search has “existing systems” as well as “new language models” that contribute to “hundreds of algorithms.”

But what’s important to grasp is the relationship between them, and where it’s all headed for Google’s ability to rank search results …

Deep learning models

Danny Goodwin explains in a recent SEL analysis of Pandu Nayak’s 2023 testimony:

“Google uses core algorithms to reduce the number of matches for a query down to “several hundred” documents. Those core algorithms give the documents initial rankings or scores.

Navboost “is one of the important signals” that Google has, Nayak said. This “core system” is focused on web results and is one you won’t find on Google’s guide to ranking systems. It is also referred to as a memorization system. …

Navboost and Glue are two signals that help Google find and rank what ultimately appears on the SERP. …

Google “started using deep learning in 2015,” according to Nayak (the year RankBrain launched).

Once Google has a smaller set of documents, then the deep learning can be used to adjust document scores.

Some deep learning systems are also involved in the retrieval process (e.g., RankEmbed). Most of the retrieval process happens under the core system.

Will Google Search ever trust its deep learning systems entirely for ranking? Nayak said no.” [Highlights added.]

– Danny Goodwin, Search Engine Land: How Google Search and ranking works, according to Google’s Pandu Nayak

If you’re not familiar with deep learning, Google Cloud has a whole page about it:

“Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Artificial neural networks are inspired by the human brain, and they can be used to solve a wide variety of problems, including image recognition, natural language processing, and speech recognition. …

Deep learning works by using artificial neural networks to learn from data. Neural networks are made up of layers of interconnected nodes, and each node is responsible for learning a specific feature of the data. …

As the network learns, the weights on the connections between the nodes are adjusted so that the network can better classify the data. This process is called training, and it can be done using a variety of techniques, such as supervised learning, unsupervised learning, and reinforcement learning.

Once a neural network has been trained, it can be used to make predictions with new data it’s received. …

Both deep learning and machine learning are branches of artificial intelligence, but machine learning is a broader term that encompasses a variety of techniques, including deep learning. ML algorithms are typically trained on large datasets of labeled data, while DL algorithms are trained on massive datasets of unlabeled data.” [Highlights added.]

– Google Cloud, What is Deep Learning?
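
To make that training description concrete, here’s a minimal two-layer neural network in Python (NumPy only) that learns XOR, a classic toy problem a single linear layer can’t solve. It shows the forward pass and the weight adjustments Google Cloud describes; it is nothing like a production ranking model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled dataset: XOR inputs and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 nodes; the weights are the "connections"
# that training adjusts.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer learns features of the data.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: adjust connection weights to reduce error.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```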

The more I’ve delved into learning about Google Search’s ranking systems, the more interested I’ve become in deep learning in general.

Here’s another definition, along with a graphic of how deep learning fits within the realm of AI, taken from a more scientific context:

“Deep Learning (DL) serves as a subset within the expansive domain of Machine Learning, harnessing Neural Networks-similar to the neurons in the human brain-to replicate brain-like functionalities. DL algorithms delve into intricate patterns of information processing, mirroring the cognitive behavior observed in the human brain. This approach enables DL to discern and categorize information akin to the human brain’s pattern recognition. Notably, DL operates on more extensive datasets than traditional Machine Learning, and its predictive capabilities are autonomously administered by machines, emphasizing its capacity for sophisticated self-learning and decision-making processes.” [Highlights added.]

– Dinesh Elumalai, Sify.com: Distinguishing Deep Learning, Machine Learning, and Artificial Intelligence
Chart showing deep learning, neural networks, machine learning, and AI.
Image Credit: ResearchGate

From that same article, here’s another interesting comparative perspective about how DL compares to ML and AI generally — it also contains a word interesting for SEO, “ranking”:

  • Artificial Intelligence: “Increasing the likelihood of success is essentially the goal, not accuracy”
  • Machine Learning: “Without much concern for the success ratio, the goal is to increase accuracy.”
  • Deep Learning: “When trained with a vast amount of data, it achieves the greatest accuracy ranking.” [Highlight and bolding added.]

Google’s first use of deep learning systems in Search was in 2015 with RankBrain. The evolution from there of using AI ranking systems to deliver “helpful search results” was further explained in a 2022 article in Google’s The Keyword blog by Pandu Nayak himself:

“We’ve developed hundreds of algorithms over the years, like our early spelling system, to help deliver relevant search results. When we develop new AI systems, our legacy algorithms and systems don’t just get shelved away. In fact, Search runs on hundreds of algorithms and machine learning models, and we’re able to improve it when our systems — new and old — can play well together. …

When we launched RankBrain in 2015, it was the first deep learning system deployed in Search. … RankBrain helps us find information we weren’t able to before by more broadly understanding how words in a search relate to real-world concepts. …

Neural networks underpin many modern AI systems today. But it wasn’t until 2018, when we introduced neural matching to Search, that we could use them to better understand how queries relate to pages. Neural matching … looks at an entire query or page rather than just keywords, developing a better understanding of the underlying concepts represented in them. …

Launched in 2019, BERT was a huge step change in natural language understanding, helping us understand how combinations of words express different meanings and intents. Rather than simply searching for content that matches individual words, BERT comprehends how a combination of words expresses a complex idea. BERT understands words in a sequence and how they relate to each other, so it ensures we don’t drop important words from your query — no matter how small they are. …

A thousand times more powerful than BERT, MUM is capable of both understanding and generating language. … MUM is also multimodal, meaning it can understand information across multiple modalities such as text, images and more in the future.” [Highlights added.]

– Pandu Nayak, The Keyword, Google: How AI powers great search results

We also recently learned that Gemini — the hottest AI model on the block introduced on December 6th by Google — is already integrated into SGE, helping with latency but also “improvements in quality”:

In Google DeepMind’s technical report on Gemini, they explain more about its development and capabilities:

“Gemini models are trained on a dataset that is both multimodal and multilingual. Our pretraining dataset uses data from web documents, books, and code, and includes image, audio, and video data. We use the SentencePiece tokenizer (Kudo and Richardson, 2018) and find that training the tokenizer on a large sample of the entire training corpus improves the inferred vocabulary and subsequently improves model performance. …

We find that data quality is critical to a highly-performing model, and believe that many interesting questions remain around finding the optimal dataset distribution for pretraining. …

As discussed in the section on “Training Data”, we filter training data for high-risk content and to ensure all training data is sufficiently high quality. …

Gemini can also be combined with additional techniques such as search and tool-use to create powerful reasoning systems that can tackle more complex multi-step problems. One example of such a system is AlphaCode 2, a new state-of-the-art agent that excels at solving competitive programming problems (Leblond et al, 2023). AlphaCode 2 uses a specialized version of Gemini Pro – tuned on competitive programming data similar to the data used in Li et al. (2022) – to conduct a massive search over the space of possible programs. This is followed by a tailored filtering, clustering and reranking mechanism. Gemini Pro is fine-tuned both to be a coding model to generate proposal solution candidates, and to be a reward model that is leveraged to recognize and extract the most promising code candidates. …

The composition of powerful pretrained models with search and reasoning mechanisms is an exciting direction towards more general agents; another key ingredient is deep understanding across a range of modalities …

We have also observed that data quality is more important than quantity (Touvron et al., 2023; Zhou et al., 2023), especially for larger models. …

In the natural language domain, the performance gains from careful developments in data and model training at scale continue to deliver quality improvements, setting new state of the art in several benchmarks.” [Highlights added.]

– Google DeepMind, Gemini: A Family of Highly Capable Multimodal Models
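
Since the report names the SentencePiece tokenizer, here’s a minimal sketch of training and using one with the open-source sentencepiece library. The corpus.txt file and the vocabulary size are hypothetical stand-ins; Gemini’s actual tokenizer configuration isn’t public beyond what the report states:

```python
# pip install sentencepiece
import sentencepiece as spm

# Train a small tokenizer on a local plain-text corpus
# (assumes a file named corpus.txt; vocab size is illustrative).
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="toy", vocab_size=2000
)

sp = spm.SentencePieceProcessor(model_file="toy.model")

# Subword pieces let a model handle words it never saw whole.
print(sp.encode("helpful content update", out_type=str))
# e.g. ['▁help', 'ful', '▁content', '▁update'] (pieces vary by corpus)
```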

Deep learning seems like an important science for SEOs to grasp in order to understand the future of Google Search rankings.

The multi-modal and multilingual parts interest me, particularly because I’ve noticed instances where SGE ranked results in other languages for English queries:

But keep in mind the enduring value of traditional information retrieval processes and legacy ranking systems, and how it all may play together as new AI technologies evolve.

As Danny Goodwin’s SEL article also mentioned: “Will Google Search ever trust its deep learning systems entirely for ranking? Nayak said no.”

So why should we care about all of this?

Having this background knowledge helps us contextualize the rankings process — Google has “core ranking systems,” including Navboost (a memorization system), which narrow down the list of matches for a query, at which point its “deep learning systems” adjust the final document scores for ranking.
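
As a rough mental model of that two-stage flow, here’s a sketch in Python with stand-in scoring functions (Google’s actual core and deep learning scorers are, of course, not public):

```python
def core_score(query, doc):
    # Stage 1 stand-in: cheap keyword overlap, applied to everything.
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def deep_rerank_score(query, doc):
    # Stage 2 stand-in: pretend this is an expensive learned model
    # that only runs on the shortlisted candidates.
    return core_score(query, doc) + 0.1 * ("review" in doc.lower())

def rank(query, index, shortlist_size=300):
    # Stage 1: core systems narrow the index to a few hundred docs.
    candidates = sorted(index, key=lambda d: core_score(query, d),
                        reverse=True)[:shortlist_size]
    # Stage 2: deep learning systems adjust the final scores.
    return sorted(candidates, key=lambda d: deep_rerank_score(query, d),
                  reverse=True)

index = ["best toothpaste review", "toothpaste coupons", "dog food review"]
print(rank("best toothpaste", index))
```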

I also found it interesting that, as Google Cloud says, deep learning systems can be trained on “datasets of unlabeled data,” which sounds different from the labeled data produced by search quality raters. It reminded me of a quote from Eric Lehman, a former Google employee: “Huge amounts of user feedback can be largely replaced by unsupervised learning of raw text.”

Remember that BERT and MUM are both AI models as well:

Yet, the Gemini technical PDF also reminds us how the quality of the training data — “data quality is more important than quantity” — can influence the output.

If nothing else, the simple takeaway is that Google Search uses many ranking systems with different purposes, and evolution is constant.

Lastly, a word of caution (which I echoed in my helpful, people-first content guide) about focusing too much on Google’s ranking systems as opposed to your users:

Rather than trying to understand how Google’s ranking systems work in order to manipulate search results, focus instead on delivering what your users want and earning your search rankings.

To quote Danny Sullivan from the podcast episode again:

“I feel like more of our shift these days is less about “these are the specific things” and more of “these are the mindsets you should be following.”

For an ordinary person to say that they see the SEO, I don’t think they are necessarily seeing the SEO as much as they’re using that for a euphemism of, “This content really wasn’t designed for me, it was designed just to rank in a search engine.” And then, and of course, that’s not what we want people to do.

And that bigger picture really is, more than anything else, just put yourself in the shoes of someone who arrives at that content and what they’re going to be thinking about.” [Highlights added.]

– Google Search Central, Search Off the Record, Episode 63: Let’s talk ranking updates (PDF Transcript)

Search engine-first tactics that worked in the past, well, they likely have a shelf life in an era of increasingly sophisticated AI-driven ranking systems:

One constant about search results and the systems that rank and surface them is they change.

So appreciate the search engine’s ranking systems’ capabilities as they are today (compared to years earlier), optimize as equally (if not more) for your users’ benefit than the search engine’s, and stay ahead of the rankings curve.

Be like the old man in the Google sea. 🙂

Now with that context of how Google Search may rank results in mind, let’s dig into what all that summer volatility buzz was about …

How the SEO community interpreted the summer’s rankings volatility in 2023, and what became reality by that fall & winter

During the summer, many SEOs felt the volatility in search results could be related to Google testing changes for an update. What kind was the question.

Some suspected the volatility was a harbinger of the next core update — the last one, at that time, was the March 2023 core update. If we expect 3-4 core updates per year on average, we were certainly due at that point in July:

Update: Looks like that’s what happened! At least in part. The August 2023 core update began rolling out on August 22nd, just over 3 weeks after this was first published, and it ran till September 7th.

Not everyone felt like the volatility was consequential for rankings and traffic, though.

There were also words of caution about how it may be impacting lower-ranking results:

Or that it could stem from causes unrelated to rankings or traffic:

Dr. Pete Meyers of Moz also cautioned that rank volatility tools may be more sophisticated than they’re given credit for, to which Glenn Gabe suggested the ongoing volatility was due to a planned update to the helpful content system:

Update: Another correct hypothesis! Keep in mind that the first two helpful content updates in August and December of 2022 were pretty mild by comparison to the September 2023 helpful content update, which ran from September 14th to the 28th. (Its impacts also largely inspired my 11x content approach.)

Marie Haynes thought the summer’s rankings volatility may be a part of our new reality, that as Google’s machine learning systems are continuously learning, so too are the rankings always changing.

Update: I mean, it’s also a spot-on interpretation! Just check out the number of times “volatility” (also “unconfirmed”) was mentioned between late July and early December on SERoundtable’s Google algorithm update page:

Algorithm Updates on Search Engine Roundtable with volatility highlighted.

Lily Ray felt the volatility could be the result of unannounced updates to other ranking systems, most especially the reviews system.

Update: True, as well! Though it happened a little later on, the November 2023 reviews update did roll out from November 8th to December 7th — wow, that’s a long time. (The previous one was in April.) Also interesting, as we learned later, is that Google’s “hidden gems” improvement to the core ranking systems had been live for “a few months” prior to November — how many is anyone’s guess.

So, looks like the lesson here is that if you predict a Google Search ranking systems update, just wait long enough and it’ll happen eventually. 😉

No, but in all sincerity, it’s impressive to see SEOs make predictions backed by data (and their experience), then look back months later (when hindsight is always 20/20) and see how closely they align with what happened.

But is Google Search actually pushing more updates these days? Not necessarily.

As Danny Sullivan said in that podcast mentioned earlier:

“First of all, it’s not that we’re doing more updates to our ranking systems than ever before, but we’re hopefully communicating more about it, which is what everybody had said they wanted. So it’s like, before, it’d be like, “Did something happen?” It’s like, “Maybe.” And now it’s like, “Yes, we are telling you there was this change to our ranking systems. This is what it was, so you know.”

That also leads back to all the mystery updates that happen. It’s like, we do do updates all the time. But when we talk about them, those are the ones where really, like, you probably should pay attention to. We think they’re notable in some way.

Sometimes, as we all know, everybody starts talking about a ranking update and we’re like, we don’t even know. Like we’re all running around behind the scenes going, “Did we do anything?” And they’re like, “No, we didn’t do anything. We don’t know what’s going on.”” [Highlights added.]

– Google Search Central, Search Off the Record, Episode 63: Let’s talk ranking updates (PDF Transcript)

So, the volatility we’re seeing, it’s not necessarily from more updates, but perhaps more so from the nature of how AI-driven ranking systems operate — where machine learning (or deep learning) is always learning. Also, bear in mind that the volatility SEOs report, well, even Google doesn’t always know the cause. 😉

But remember, SEO KPIs are more than rankings

The number of ranking system updates and the levels of volatility we saw in 2023 show that monitoring keyword rankings probably has diminishing value in today’s SEO strategies.

Rankings refer to the average position where a webpage (or other result) appears in organic search results for a query.

Achieving higher rankings for target keywords has been an SEO KPI as old as time.

Except businesses don’t earn revenue from rankings, which is why SEOs also report on traffic and conversions.

As I explained in my helpful content guide regarding qualified clicks, the goal of SEO is to reach your audience in organic search results along their buyer’s journey and ultimately inspire them to convert (take a meaningful action for your business goals).

I personally have changed how I report to clients over the years. While I still primarily report on conversions and/or revenue today, I’ve also broadened my data sources and begun experimenting with different attribution models.

And when it comes to the SERPs themselves, I focus on the visibility of the brand, the awareness generated with target personas, and how organic search visibility plays into a complex matrix of touch-points along the audience’s buyer’s journey.
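
For anyone curious what experimenting with “different attribution models” can look like in practice, here’s a minimal sketch comparing last-click and linear attribution over one hypothetical buyer’s journey. Real analytics tools use more elaborate models, including data-driven ones:

```python
from collections import defaultdict

# One hypothetical buyer's journey, ending in a $100 conversion.
touchpoints = ["organic_search", "social", "organic_search", "email"]
revenue = 100.0

def last_click(path, value):
    # All credit goes to the final touchpoint before conversion.
    return {path[-1]: value}

def linear(path, value):
    # Credit is split evenly across every touchpoint.
    credit = defaultdict(float)
    for channel in path:
        credit[channel] += value / len(path)
    return dict(credit)

print(last_click(touchpoints, revenue))  # {'email': 100.0}
print(linear(touchpoints, revenue))
# {'organic_search': 50.0, 'social': 25.0, 'email': 25.0}
```

Under last-click, organic search gets zero credit for this journey; under linear, it gets half. Same data, very different story, which is why the model choice matters.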

This can also involve accounting for so-called “keyword cannibalization” when SERPs show diversified solutions for different (or fragmented) users’ search intents:

It’s also important to note just how often SERPs can change or appear differently to different users, and how that skews perceptions of rankings.

In 2021 alone, Google launched 4,366 changes and ran 11,553 experiments.

If you follow the SEO community on social media, you’ll often see people posting screenshots of SERP “tests” they’re in on Google Search.

I cover these in the SERP test sections on my Hamsterdam SEO news recaps. But if you want the freshest source, as well as to find what’s new on Google Search, follow Barry Schwartz’s SERoundtable Google page.

Some of Google’s SERP tests wither away, while others eventually roll out to all U.S. or global users.

Still others come and go, sometimes reappearing months or years later.

The larger point is this: given the ever-evolving nature of Google’s SERPs, along with the introduction of generative AI (SGE) and alternative surfaces of information, like the Perspectives filter, Discover feeds, or SGE while browsing, showing “good rankings” in a GSC chart, Looker Studio report, or third-party rank tracker may feel good or please clients momentarily. The ultimate goal, however, is to drive brand awareness, trust, and credibility, translating organic search visibility along your audience’s buyer’s journey into qualified clicks that lead to conversions and revenue.

Measuring all of that will be hard. That’s why explaining its complexity, and synthesizing its value, might be the best course to get buy-in for SEO strategies going forward.

Here’s how I might frame it …

Why “visibility” may be the new “rankings”

The ongoing volatility of Google organic search rankings today, likely related to the search engine’s machine learning systems constantly, well, learning, may be our new reality.

That means the average position of where a webpage ranks for a query may be a less helpful indicator of performance today because that number is based on much more dynamic SERPs and rankings than years ago.

A firsthand example of variable rankings

Around the time I first wrote this article, I got pulled into a project. There was a particular high-volume head-term keyword that a website owner cared about … very much, and it had dropped in average position from a previous 2-3 to 9-10+. Suffice it to say that traffic to the main page ranking for the keyword plummeted.

This keyword was for a medical product, so very YMYL, and also directly tied to conversions and revenue.

After hours of page updates, I began monitoring rankings closely.

When I searched the query, I often saw it ranking around positions 8-10 on both mobile and desktop Search. But I’d seen it get as high as position 4, for a few hours … then go back to 10.

When my parents searched the query from the opposite side of the country, they saw the page ranking in positions 3-4 on mobile and desktop, but also around 5-8.

Meanwhile, Semrush’s keyword overview showed the page at position 5.

With that much variation, how could we accurately say where the page is “ranking” based on its average position in GSC or a rank tracking tool, or where it appeared for manual searches? 
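
One pragmatic answer is to stop reporting a single number and summarize the distribution of observed positions instead. Here’s a minimal sketch using made-up observations like the ones above:

```python
import statistics

# Hypothetical positions observed for one query over a few days,
# across devices, locations, and tools.
observed_positions = [8, 10, 9, 4, 10, 3, 4, 5, 8, 5]

print("mean:", statistics.mean(observed_positions))      # 6.6
print("median:", statistics.median(observed_positions))  # 6.5
print("range:", min(observed_positions), "to", max(observed_positions))
```

Reporting “position 3 to 10, median 6.5” tells a stakeholder far more about a volatile SERP than a single averaged rank does.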

Well, it all worked out …

It got better from there.

The page landed at position 1 for the query and remained there for two months. (Pages on related topics we optimized improved overall as well in many keyword rankings and traffic.) Until the November 2023 core update, when the main page slid to position 2 for the query, overtaken by a result from a different type of website (not a competitor), and surrounded by more of the same, implying an intent shift.

But to the larger point, as a result of the brand’s cumulative visibility, October and November were historic months for sales of the product.

Generative AI answers escalate the trend toward visibility

Volatile rankings in current search results will not be the whole story. We must couple today’s rankings volatility with the expanding introduction of AI chatbots in search results, and outside of them.

Not only are we contending with Google’s SGE or Microsoft’s Copilot (Bing Chat), we also have ChatGPT’s ability to browse the web, and now functionalities like this from Google’s Gemini:

Eli Schwartz made some great points back in May when SGE testing first rolled out regarding concerns over keyword rankings, traffic, and a future focus on visibility: 

One question is, if rankings reports hold less value for telling the larger story, what will be the replacement?

For clients who don’t have the time or knowledge to interpret complex reports from different data sources, we may have to do that work for them (maybe using LLMs to synthesize key takeaways), which could balloon reporting time and eat into SEO budgets.

Alternatively, if we can frame the complexity of the Search landscape from the start, and get stakeholders to understand the holistic value of organic search visibility for contributing to overall revenue, then maybe positive search visibility itself can be a form of ROI …

A new frontier for SEO visibility and content

For many websites, a navigational query on Google Search today brings up their homepage with sitelinks, or maybe a related category, product, or service page, as the first organic result.

Increasingly for ecommerce retailers, that first organic result will be encircled by Merchant Center-driven results, both paid as well as free product listings.

On SGE, however, it’s likely the generative AI pulls a summary or perspective on the brand or its products, followed by data from Google’s shopping graph. SGE won’t always auto-generate for navigational queries, at least in my experience, but more and more it does, especially when there’s even a hint of transactional ambiguity in the search intent.

Here’s an example of a desktop SERP for [crest toothpaste], which in my mind is a query with navigational and transactional intent:

Google desktop search results for Crest toothpaste.

The initial search has an SGE preview, along with Filter by fields on the left, Things to know on the right, and shopping ads above it all (these only appeared on my second search, by the way), followed by a single category page from the brand’s website, and then a 4×2 group of free product listings.

Suffice it to say, the odds of that user’s click going to Crest’s website directly from that traditional search result, well, they’re pretty slim.

And if the SGE answer is extended by clicking Show more, the normal organic results are pushed below the fold about two screen lengths:

Extended Google SGE answer for Crest toothpaste.

Meanwhile, the retailers mentioned in those SGE product listings, they’re usually not the brand, but Walmart, Amazon.com, Target, and others.

Then clicking on one of those product results creates a product knowledge panel, with customer reviews (UGC), links to those same retailers, as well as an About this product section that links to a Crest website product page:

Product knowledge panel on Google desktop for Crest toothpaste.

But here’s the thing … if I’m Crest, I’m not mad at this.

I mean, it’s pretty clear that Crest shouldn’t count on a user searching [crest toothpaste] and then clicking on the site to make a purchase. But there’s also a tremendous amount of brand visibility throughout that SERP and SGE answer, largely with a positive sentiment and transactional-informational context.

By the time the user decides to click through on an organic result (assuming they do), they could be quite educated about the different types of Crest toothpaste products.

Instead of shopping on Crest’s website, they’ll just do so on Google itself. And by the time they do click (probably on a retailer), they’ll be ready to make a purchase — that’s a qualified click and that earns revenue.

Of course, it’s difficult to measure all of that as traditional SEO “keyword rankings,” and metrics like clicks, sessions from organic search, and CTR will likely appear lower, yet revenue will still be earned, and the user saves time and likely learns more about the brand.

The trick is understanding how that organic visibility leads to a final purchase, as the relationship is often indirect and thus requires considerations for different attribution models (as I suggested in this mini case study about Shopify organic search revenue).

The Google for Retail website mentions three goals that apply to organic search visibility by using features beyond your website:

  1. Build Online Presence: This involves getting discovered by local shoppers (Google Business Profile) and showcasing your products to online shoppers (Merchant Center).
  2. Drive Online Sales: This is largely in the context of paid (Performance Max) but does apply to free product listings.
  3. Drive Offline Sales: This also relates to paid channels, but can also be attributed to GBP products or free listings that inspire in-store visits and purchases.

Another element to this is Google’s Perspectives filter, which appeared on mobile results in May and started rolling out to Desktop in November.

If we return to that [crest toothpaste] SERP example, and click the Perspectives filter atop the search results, we’re presented with results from TikTok, Quora, YouTube, Instagram, Reddit, and more.

Again, this is not website content that would contribute to rankings or clicks, but these are organic results appearing on Google Search, and therefore they’re part of SEO.

Google desktop perspectives filter for Crest toothpaste.

As you can see, the universe of what “SEO content” means is much broader today than simply website content.

Back in July, Tory Gray published an awesome thread on X (then Twitter) about using UGC (namely reviews) to gain visibility in Google’s SGE. Marie Haynes commented that, just as product listings are powered by Google’s shopping graph, so too are data like customer reviews.

While so far those are examples of non-website content getting organic visibility, there are also opportunities to earn qualified clicks to website content without targeting particular queries.

We have some of the standard features like People also ask (and all its variations) and, increasingly, Things to know (as mentioned in the above SERP):

On mobile, there’s also Google Discover, a feed of suggested results on the Google app or homepage screen:

Google Discover mobile feed.

Google also introduced the ability to follow topics (including search queries), which can influence Discover feeds (and has improved mine, at least so far).

Google Explore, which Glenn Gabe has written about, is a feed of organic mobile results (in groups of three) from related queries that appears below the normal SERP and can take you to new individual results or a different SERP:

Google Explore on mobile for helpful content query.

Interestingly, the results for followed topics in Google Discover look remarkably similar to Google Explore:

There’s also SGE while browsing, which can show Explore more or People also view:

SGE while browsing.

And less likely to be clicked, though still available, is Google’s About this result, which leads to More about this page, and eventually related results under About the topic:

About the topic and related results on Google mobile.

We also have, of course, conversational results from LLMs and generative AI chatbots (including Bard, Copilot, and ChatGPT with web browsing), as well as SGE’s follow-ups:

Google SGE conversation about helpful content for SEO with follow-up answers

There are also web stories and new evolutions of SERP features:

Not to mention personalized SERP results, not just for location-specific content, but even based on preferences:

Of course, there’s also the risk of generative AI showing negative brand sentiments, not attributing sources, or even showing hallucinations and misinformation from dishonest sources — hopefully, after everything we’ve learned about Google’s ranking systems above, you know this result is a fantasy:

What does this all portend for creating content for SEO in the future?

Back in July, just after Google’s 2023 Q2 earnings call, Matt G. Southern pointed out something brilliant in an SEJ article: this switch toward measuring visibility instead of rankings for content (my words, not his) could benefit smaller sites as well, or it could present new challenges — the opportunity is all in the value of the content.

Speaking about SGE results specifically, Matt mentioned: 

“Moving to a more conversational and contextual search that synthesizes information could benefit smaller sites with authoritative, in-depth content on niche topics. … However, there are risks, too.

One challenge will be optimizing for semantic search rather than exact keywords.

Suppose Google doesn’t require precise keyword matching. In that case, it may be harder for smaller sites to rank for specific queries unless their overall content is robust enough for the algorithm to make contextual connections.” [Highlights added.]

– Matt G. Southern, Search Engine Journal: Google’s AI Innovations Drive Search & Ad Performance: Q2 2023 Insights
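
To picture the difference between exact-keyword and semantic matching, here’s a minimal sketch using the open-source sentence-transformers library. The model choice, query, and passages are all my own illustrative assumptions; this is an analogy for embedding-based retrieval in general, not a recreation of Google’s systems:

```python
# Illustrative sketch of semantic (embedding-based) matching vs. exact keywords.
# Model and texts are assumptions for demo purposes, not Google's actual stack.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "why did my site's traffic drop after the helpful content update"
passages = [
    "Diagnosing ranking declines following Google's HCU rollout",
    "Our traffic dropped when we updated our site's CSS",
]

# Encode the query and passages into dense vectors.
q_emb = model.encode(query, convert_to_tensor=True)
p_embs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity scores closeness in meaning rather than keyword overlap,
# so a passage can match well while sharing few exact terms with the query.
scores = util.cos_sim(q_emb, p_embs)[0].tolist()
for passage, s in sorted(zip(passages, scores), key=lambda x: -x[1]):
    print(f"{s:.2f}  {passage}")
```

That’s the gist of Matt’s point: content either carries enough meaning for contextual connections to be made, or it doesn’t.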

Marie Haynes also wrote an analysis of that earnings call, which in the key takeaways mentioned “Gemini” as well as how “Google sees big potential in multimodal search.”

Broadly, if we consider Google’s “hidden gems” ranking improvement, as well as its introduction of features like the Perspectives filter, Things to know, related results from About this result, and standbys like Discover, Explore, and People also ask, we quickly see how the concept of visibility — and a potentially more level yet dynamic playing field on Search — may influence how brands approach their SEO content and overall strategies.

We also have to consider personalization

With SERP complexity comes the value of SEO fundamentals

My opinion when I first wrote this article in July is the same today: the landscape of Google’s volatile search rankings, coupled with a shift in SEO KPIs from rankings to visibility and an expanded universe of SERP opportunities, creates a complex environment for doing SEO. Yet it doesn’t have to be a daunting one if we remember the fundamentals.

At the time I first wrote this, I explained how Google’s guidance for creating helpful, reliable, people-first content was important to keep in mind.

To be honest, reading this again now, several months later, I’m amazed by how much my opinions have stayed the same, only now far more impassioned. That’s partly because I’ve done a lot more research on these topics since then, but also because the trends I’ve seen on my own site and while working with clients have shown me the value of this line of thinking for people-first SEO.

Here is the original takeaway I had in July, followed by a link to my recent guide on this topic:

Google’s helpful, people-first content guidance begins:

“Google’s automated ranking systems are designed to present helpful, reliable information that’s primarily created to benefit people, not to gain search engine rankings, in the top Search results.”

– Google Search Central, Creating helpful, reliable, people-first content

It then introduces a list of questions for self-assessing your content’s quality and expertise.

This is followed by emphases on page experience, E-E-A-T, and keeping the quality rater guidelines in mind. 

Finally, it goes into the who, how, and why: who created the content, how was it created, and why was it created?

We know Google has said that using AI-generated or AI-assisted content is fine, as long as the content is high quality.

But then the question becomes, can you create helpful, reliable, people-first content with a ChatGPT prompt and a few keywords to target? Likely not.

If anything, the ongoing rankings volatility on Google Search tells us that SEO as a discipline is getting more complex, but in some ways also simpler.

Let’s take it back once more to the fundamentals of how Google ranks results for a query.

  1. The search engine starts with the meaning of the query: what type of information do the keywords in the search bar refer to; what is this searcher’s true intent?
  2. Next, it considers the relevance of the content: does this content satisfy the intent of the person searching, or would they need to click back to view other results or search again? (Think about SGE or the Gemini demo above, and how Google wants users to combine what would traditionally be multiple searches into a single query summarized by the AI response and leading to qualified clicks.)
  3. Then there’s the quality of the content: maybe the answer is relevant to the keyword searched, but is the information accurate and trustworthy; is it backed by real expertise or experience and uniquely valuable?
  4. Then there’s usability: is the page experience good; can the user find their answer quickly and easily on their device without security risks or annoyances from ads or interstitials?
  5. And lastly, there’s the context: the personalization factor; where is this person located, what is their search history, and would that information be relevant to improving their results?
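
If it helps to see those five steps side by side, here’s a toy scorer in Python. The factor names, weights, and numbers are entirely hypothetical: a back-of-the-napkin illustration of multiple signals blending into one ordering, not a model of Google’s actual ranking systems (which, as covered above, involve machine learning, not hand-set weights):

```python
# Toy scorer only: factor names and weights are hypothetical and do NOT
# reflect Google's actual systems. Step 1 (interpreting the query's meaning)
# is assumed to have already happened; the fields below map to steps 2-5.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # does the content satisfy the search intent? (0-1)
    quality: float     # accuracy, expertise, unique value (0-1)
    usability: float   # page experience: fast, secure, no intrusive ads (0-1)
    context: float     # personalization fit: location, history (0-1)

def score(page: Page, weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Blend the factors into one number; real systems are far more complex."""
    w_rel, w_qual, w_use, w_ctx = weights
    return (w_rel * page.relevance + w_qual * page.quality
            + w_use * page.usability + w_ctx * page.context)

candidates = [
    Page("example.com/deep-guide", relevance=0.9, quality=0.95, usability=0.8, context=0.5),
    Page("example.com/thin-page", relevance=0.9, quality=0.30, usability=0.9, context=0.5),
]

# Highest score first: the equally relevant but thin page loses on quality.
for p in sorted(candidates, key=score, reverse=True):
    print(f"{score(p):.2f}  {p.url}")
```

The point isn’t the math; it’s that a page can’t win on relevance alone when quality, usability, and context all feed the same outcome.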

Rankings are important in SEO because clicks are important, and clicks are important because conversions are important.

But in a world of increasingly dynamic rankings and new ways to achieve organic search visibility, it’s not the rankings we should focus on; it’s how to achieve visibility with the right audience. 

Ok, now that you have seen those past thoughts, feel free to check out my latest deep-dive into creating helpful, people-first content for 2024 (and beyond), where you’ll see how I’ve translated those opinions into actionable insights.

Related resources

If you’d like to learn more about this topic of SERP volatility, here are some related resources I’d recommend:

I’ve also summarized the highlights of this article in a video on YouTube:

“Bearing down”

As Google’s rankings volatility continues (into March of 2024), as generative AI and other updates to Search evolve, and as we encounter new AI models like Gemini, the focus of SEO strategies should be on the fundamentals — making content that is helpful, reliable, and people-first, backed by a good overall user experience on a website with solid technical foundations — like the old man in the Google sea.

Sure, measuring the impact of organic search visibility may get even more challenging over time, but in the dynamic mix of what search results are today and could be tomorrow, the goal is making sure your particular audience finds your result as the perfect click during their buyer’s journey to earn you revenue — today, tomorrow, and always.

I’ll revisit and update this article soon.

Until then, enjoy the vibes:

Thanks for reading. Happy optimizing! 🙂

Editorial history:

Created by Ethan Lazuk on:

Last updated:

Need a hand with SEO audits or content strategy?

I’m an independent strategist and consultant. Learn about my SEO services or contact me for more information!
