Hamsterdam Part 60: Weekly SEO & AI News Recap (5/27 to 6/2, 2024)
By Ethan Lazuk
Last updated:
A weekly look-back at SEO & AI news, tips, and other content shared on social media & beyond.

Opening notes, thoughts, and other musings:
- Welcome to another week of Hamsterdam!
- This was a busy week for SEO news, and me personally, hence this is coming a little later in the day than usual. 😉
- I had a bunch of opening notes planned to put here, but instead, I just moved them to the introduction below.
- I also added a lot of images for our vocabulary lessons, so I was selective on the information in the weekly recap to ensure this loads well on mobile.
- Without further ado, let’s get started with our weekly vocab and a history lesson, followed by an introduction and recap of the week’s news!
Feel free to jump ahead to the main news recap if you’re in a hurry. 🙂
And if you’d like these delivered every week, subscribe to my free newsletter here.
Marketing word of the week: “click-through rate”
Click-through rate (CTR) is the ratio of clicks to impressions for digital content:
CTR = clicks / impressions
In the context of SEO reporting, CTR measures how often organic search results or features get clicked in Google Search, Bing, or another search engine based on how often they appear.
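The formula above can be sketched in a few lines of Python (the numbers are made up for illustration):

```python
# Minimal CTR sketch: clicks divided by impressions.
# The example numbers are hypothetical.

def ctr(clicks: int, impressions: int) -> float:
    """Return click-through rate as a fraction (0.0 to 1.0)."""
    if impressions == 0:
        return 0.0  # avoid division by zero when a result never appeared
    return clicks / impressions

# Example: 120 clicks on 4,000 impressions
rate = ctr(120, 4000)
print(f"CTR: {rate:.2%}")  # 3.00%
```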
You can review your site’s CTR data in Google Search Console:

I recommend viewing CTR with filters applied (when applicable) for:
- Date
- Country
- Device
- Page
- Search appearance
- Query
You generally want to understand the CTR of a given page (URL) for a specific keyword (query) with a given audience (country and device) during a time period (date) for a type of search result (search appearance).
You can also look at groups of related queries or pages, or otherwise consolidate the data in a way that makes sense. What you don’t want to do is look at CTR too broadly, like site-wide.
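Grouping related queries or pages before computing CTR can be sketched in Python, using made-up rows shaped like a Search Console export:

```python
# Sketch: aggregating CTR per (page, query) from rows resembling a
# Google Search Console export. All data below is invented for illustration.
from collections import defaultdict

rows = [
    # (page, query, clicks, impressions)
    ("/helpful-content", "helpful content update", 80, 2000),
    ("/helpful-content", "helpful content update", 40, 1000),
    ("/helpful-content", "what is helpful content", 10, 1500),
]

# Sum clicks and impressions per (page, query) pair before dividing,
# so CTR reflects the consolidated totals rather than an average of ratios.
totals = defaultdict(lambda: [0, 0])
for page, query, clicks, impressions in rows:
    totals[(page, query)][0] += clicks
    totals[(page, query)][1] += impressions

for (page, query), (clicks, impressions) in totals.items():
    print(f"{page} | {query}: {clicks / impressions:.2%}")
```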
The reason is you’re looking for opportunities for improvement or indications of successful tactics, and context matters. How your organic result appears for one user (US on mobile) might differ from how it appears for another (Canada on desktop).
If you have a low CTR for a high-impression query, that might imply a page’s title link, snippet, or thumbnail image isn’t as optimized or appealing to users as it could be. Of course, these elements are largely automated by Google these days.
Other causes of low CTR could be when a page isn’t a good match for a user’s search intent, or the user is finding better alternatives in the SERP, like maybe from an AI Overview or featured snippet, other image, video, social, or shopping results, or a more popular brand.
There are many variables behind CTR data, so improvement strategies should depend on researching and understanding the context.
When I work with clients, improving the CTR of branded queries or high-value non-branded queries is often an early goal in a strategy.
This can involve adjusting on-page elements to influence title links, snippets, featured images, or rich results from structured data (like prices or review stars for products).
That said, it’s helpful to manually search for a query on different devices and with different profiles or settings (like for location) to get a more holistic picture.
For example, look at how the desktop and mobile presentations of these SERPs could impact the CTR to my post about helpful content.
Here it is above the fold and people also ask questions on desktop:

While on mobile, it’s below the fold and PAA:

CTR can apply in other organic search contexts as well, like Google Discover, people also view, or shopping results.
As for generative AI search experiences, the CTR of citations to organic web results in chatbots, assistants, or summaries, like Microsoft Copilot or Google’s AI Overviews, will be quite different from normal web results because it’s a wholly different context.
Fabrice Canel of Bing has talked about the “perfect click,” for example: qualified clicks that happen when a user goes through much of their search journey in an AI chatbot before finding a highly satisfactory result.
Google’s AI Overviews typically offer fewer links than normal search results, which can heighten the CTR of the links that do appear (similar to other SERP features, like sitelinks or knowledge panels). Sundar Pichai of Google has also said AI Overviews produce higher click rates. Hopefully, we’ll get data in GSC soon to verify CTR, clicks, and other AIO metrics. 😉
As a takeaway, improving the CTR of your important pages ranking for valuable queries in search results or appearing in other organic contexts can be a key part of a holistic SEO strategy.
AI word of the week: “classification model”
A classification model is a type of machine learning (ML) model that uses supervised learning to predict the predefined category or class to which input data belongs.

Classification models implement classifiers: the algorithms, rules, or procedures a model uses to assign input data to different classes or categories.
Supervised learning involves training models on labeled data, where each data point has a known class label. The goal is to learn patterns or decision boundaries. (This is different from clustering algorithms (unsupervised learning), which use unlabeled data and discover inherent groupings or structures based on similarity measures.)
A classification model might determine labels like the language of text, the type of animal in an image, or a positive or negative diagnosis for a medical condition.
There are manually coded classification models, such as rule-based classifiers that use a set of manually defined rules or criteria to determine the class label of an input.
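A rule-based classifier can be as simple as a handful of hand-written conditions. Here’s a toy Python sketch that labels the language of a text; the rules and word lists are invented for illustration (real language identification is far more robust):

```python
# Toy rule-based classifier: hand-written rules assign a language label
# to input text. The marker words below are arbitrary examples.

def classify_language(text: str) -> str:
    words = set(text.lower().split())
    if words & {"the", "and", "is"}:   # common English function words
        return "english"
    if words & {"le", "la", "et"}:     # common French function words
        return "french"
    return "unknown"                   # no rule matched

print(classify_language("the cat is here"))     # english
print(classify_language("le chat et la nuit"))  # french
```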

However, many classification models are automated.
Decision trees split data based on feature values to create a tree-like structure, where decisions at each node lead to predicted class labels of leaf nodes.

Support vector machines (SVMs) find a hyperplane that separates data points of different classes with the widest possible margin.

Naive Bayes classifiers are simple, fast classification models that assume each feature is independent of the other features given the class.

K-nearest neighbors (KNN) classifiers assign a new data point the majority class of its k nearest neighbors in the training data.
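The KNN idea can be sketched in plain Python, with toy 2D points and labels invented for illustration:

```python
# Toy k-nearest-neighbors classifier (no libraries): predict the majority
# class among the k training points closest to the query point.
from collections import Counter
import math

def knn_predict(train, point, k=3):
    """train: list of ((x, y), label) pairs; point: (x, y) to classify."""
    # Sort training points by Euclidean distance to the query point
    nearest = sorted(train, key=lambda item: math.dist(item[0], point))
    labels = [label for _, label in nearest[:k]]
    # Majority vote among the k nearest labels
    return Counter(labels).most_common(1)[0][0]

train = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
         ((8, 8), "b"), ((8, 9), "b"), ((9, 8), "b")]
print(knn_predict(train, (2, 2)))  # a
print(knn_predict(train, (8, 8)))  # b
```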

Logistic regression is typically used for binary classification, modeling the probability of a data point belonging to a particular class using a logistic function.
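The logistic (sigmoid) function at the heart of logistic regression squashes any real-valued score, such as a weighted sum of features, into a probability between 0 and 1. A quick Python illustration:

```python
# The logistic (sigmoid) function maps any real number to (0, 1),
# which is how logistic regression turns a score into a class probability.
import math

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

print(round(sigmoid(0), 2))   # 0.5 (maximally uncertain)
print(round(sigmoid(4), 2))   # 0.98 (high probability of the positive class)
print(round(sigmoid(-4), 2))  # 0.02 (low probability)
```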

There are also neural networks used for classification tasks.
Recurrent neural networks (RNNs) process sequential data, passing a hidden state from one step to the next to retain information from earlier inputs, and can be used for sentiment analysis of text, for example.

Convolutional neural networks (CNNs) are widely used for image and video data, detecting local patterns or features from objects or images.

Transformer-based models like BERT or RoBERTa can be fine-tuned for specific classification tasks using labeled data, like for sentiment or topic classification.

Graph neural networks (GNNs) can be used for classification when the data is structured as a graph, with nodes and edges representing entities and relationships. A classification layer then predicts the class of individual nodes or of the entire graph, which is useful for tasks like recommendation systems or fraud detection.

In short, classification models can encompass a wide range of ML models, including neural networks, and implement classifiers (algorithms) that predict a predefined label for input data, based on supervised learning from labeled training data.
This week in SEO history: “Bing is born” (2009)
On May 28th, 2009, Microsoft unveiled Bing.
A few days later on June 3rd, 2009, Bing was fully deployed worldwide, replacing Live Search (as you can see in the screenshot below), which itself was derived from Windows Live Search and the original MSN Search before it.

Microsoft described Bing as a “new Decision Engine and consumer brand” that would help users take a “first step in moving beyond search to help make faster, more informed decisions.”
Bing’s mission was to “build on the benefits of today’s search engines” but also “move beyond this experience with a new approach to user experience and intuitive tools to help customers make better decisions.”
It focused on “four key vertical areas,” which included:
- Making a purchase decision
- Planning a trip
- Researching a health condition
- Finding a local business
All that’s to say, Bing was intended to “help people more easily navigate through the information overload that has come to characterize many of today’s search experiences.”
We mentioned “qualified clicks” in our CTR definition earlier. This statistic Microsoft mentioned in 2009 is interesting in that context:
“Results from a custom comScore Inc. study across core search engines show that as many as 30 percent of searches are abandoned without a satisfactory result. The data also showed that approximately two-thirds of the remaining searches required a refinement or requery on the search results page.”
– Microsoft blog (2009)
Here’s another quote from Microsoft’s CEO at the time, Steve Ballmer:
“Today, search engines do a decent job of helping people navigate the Web and find information, but they don’t do a very good job of enabling people to use the information they find.
When we set out to build Bing, we grounded ourselves in a deep understanding of how people really want to use the Web. Bing is an important first step forward in our long-term effort to deliver innovations in search that enable people to find information quickly and use the information they’ve found to accomplish tasks and make smart decisions.”
– Steve Ballmer (2009)
It’s interesting how Bing was described as a “first step forward,” while the goal was putting information from the web to practice for accomplishing tasks or making decisions.
In its 10-year anniversary post for Bing in 2019, Microsoft Bing Blogs referenced innovations that started in 2017:
“From the beginning we knew search needed to become more than just a list of blue links. We continued to iterate and innovate across the search experience; in 2017, we recognized a similar unmet need – search should give you answers faster, be more comprehensive and allow everyone to engage more naturally. Search needed to become more intelligent.
We made a commitment to invest in search experiences that help people discover facts, uncover multiple perspectives, find better options, and see the bigger picture.
These advancements are enabled by Microsoft Research labs, deep neural networks, state of the art machine learning and passionate teams working to deliver the best search experience possible.”
– Microsoft Bing Blogs
The year 2017 was also notable for NLP developments in general because of the introduction of the transformer architecture and its attention mechanism, which became the basis for LLMs like the GPT models. Today, GPT-4o is under the hood of Copilot.
I wrote more about Bing’s deep learning-based search features and overall history in my article about the history of Microsoft’s search engines.
Happy 15th birthday Bing!

Speaking of search today …
Let’s get to our introduction this week, talking about the standards of research.
Introduction to week 60: “why I dropped out of grad school”

When I got my first SEO-related job for Haystak Digital Marketing in Fort Myers, Florida, in 2015, I was already enrolled in graduate school.
I got my undergrad degree in 2012, but then bounced around doing odd jobs for a few years.
I almost went to journalism school for post-grad at ASU, but I wasn’t feeling it.
Then I eventually enrolled at Johns Hopkins for government studies in 2014.
I completed all of my coursework and wrote a thesis. I don’t even remember what my thesis was about, but I do know it was like 160 pages.
One of my advisors was a hard-nosed professor who I’d taken a political polling (statistics) class with.
He gave me back my thesis and basically told me to start over.
I was mad.
I left school for a bit.
That’s when I got more interested in SEO.
I eventually returned to JHU and started a new thesis. I think I restarted it like two or three times, actually, only this time with a digital marketing focus.
Then it hit me.
I don’t care about government and politics. I’m trying to force it to fit my current interests.
So, I dropped out.
When I started in marketing, I considered myself an “academic.”
I don’t mean that in an elitist, tweed-jacket sense.
I was raised in rural Montana, a strange upbringing where we’d bump Tupac while driving past cow pastures.
But as an adult, I’d always planned to get a Ph.D. in the social sciences and be a researcher.
I ended up writing blog posts for websites about cars, plumbers, and HVAC systems in between.
Nothing against those enterprises, but the work felt inconsequential to me, at first.
Except, the more I got exposed to SEO through the writings of certain individuals who I followed on Twitter, the more I saw the light.
People need cars, plumbing, and AC, and I can help them and the companies who provide those services grow, all while satisfying my own intellectual curiosity.
SEO has been a win-win ever since.
But why did I tell you about my failed thesis?
This week’s big news was the Google API Content Warehouse.
I’ve really enjoyed reading people’s interpretations of it, and I spent a good amount of time going through the information myself.
As an SEO practitioner, it’s been intellectually stimulating to learn about attributes and go back and forth with ChatGPT over what they could mean.
As an SEO consultant, however, I’d urge caution.
When it comes to SEO, as in everything in life, context matters.
My old standards of research used to be based on peer review, defense of arguments, and evidence.
That’s a little harder to come by in SEO, for a variety of reasons.
“Things happen on the street. Proof is hard to come by.”
– Proposition Joe
What’s nice about having this small personal site is I can focus on what I think matters or what interests me, and there are no advisors to say, “You can’t publish that blog post, try again.”
But that cuts both ways.
I trust the integrity of the people included in this Hamsterdam recap. I’ve also seen “Google ranking factors” content on TikTok, and we all know where that leads.
I’ll continue to call myself a user-first SEO proponent.
Just as I consider myself a Google “truster.”
That simply means that I pay attention to the information and statements that come out of Google, and I place value in them.
Just as I do the opinions of accomplished SEO professionals in Hamsterdam recaps.
Does Google lie?
I don’t know. Does gas power a car?
No really, does it?
On the one hand, obviously yes, it does.
On the other hand, the engine powers the car, and the engine uses gas.
But also, gas is fumes. It really uses gasoline.
And some cars are electric.
So does gas power a car?
Depends on who you ask.
I wrote about this a bit in my Google AI Overviews post this week, but it’s kind of the semantic difference between “uses” and “uses directly.”
Nuance and context count for a lot, in all situations.
It’s like the difference between AI Overviews and The Onion telling us that geologists recommend eating a rock a day.
I also know from working in politics and government a little bit that interpersonal relationships and personal goals are human nature.
“I don’t know if anybody’s ever told you that half the time, this business comes down to ‘I don’t like that guy.’”
– Roger Sterling
I consider myself an SEO for life and love this profession. It’s been very good to me.
I’ve also gotten the most intellectual satisfaction I’ve had in years from learning more about neural networks, LLMs, and vector embeddings over the last few months.
I feel a kindred connection with these areas of research.
As they say, find your beach.
Then again, while not everything in SEO is a peer-reviewed certainty, I can tell you one thing that is …
Paying back student loans. 😉
Buckle up for a full week’s recap, and enjoy the vibes:
Thank you for supporting Hamsterdam and the cause of SEO & AI learning.
Missed last week? Don’t worry, I got you! Read Part 59 to catch up.
Other great sources of weekly SEO news:
- The SEO Weekly – Garret Sussman, iPullRank
- SEOFOMO – Aleyda Solis
- Weekly Video Recaps – Barry Schwartz, SER
- Weekly SEO News YouTube channel – Olga Zarr, Seosly
- Niche Surfer – Yoyao Hsueh
Now, time for our weekly review of SEO social posts, articles, & more …

Quick summary
- Rand Fishkin and Mike King wrote about the Google API Content Warehouse; others’ analyses followed; Google spokesperson cautioned about it
- Liz Reid explained how Google is improving AI Overviews; Barry Schwartz summarized the post in SEL
- My pick of the week is Professor Emily Bender’s AI Overviews newsletter
- My sneaky pick of the week is a piece about UX personalization with generative AI written by Marc Seefelder
- And much more!
Jump to a section of this week’s recap:
- News, Google updates, & SERP tests
- SEO tips & tidbits
- Fundamentals & resources
- Articles, videos & case studies
- Local SEO
- Technical SEO
- Content marketing
- Data analysis & reporting
- AI, LLMs, & machine learning
- TikTok section
- Humor
- Miscellaneous & general posts
- Older stuff that’s good!
Or keep scrolling to see it all.
Ok, time to step inside the white flags of Hamsterdam …

SEO news, Google updates, SERP tests, or key posts
Notable updates or news related to Google Search or related SEO topics.
SEO tips & tidbits
Actionable tips, cool tidbits, and other findings and observations that can be teaching moments.
Essential information, concepts, or resources to learn about SEO or AI.
Longer-form content pieces shared on social, in newsletters, and elsewhere.
What are attributes? Digging into the Google API docs – Marie Haynes

From what is helpful content to user journeys and beyond.
Local SEO
From Google Business Profiles or reviews and more!
Data analysis & reporting
Showing that what you’re doing is helping.
How to create and configure custom dimensions in GA4 – Gemma Fontané, SEL

AI, machine learning, & LLMs
News related to models, papers, and companies.
Why it matters: USER-LLM creates user embeddings from user interaction data that LLMs can use to dynamically personalize experiences. Think of it like custom instructions for ChatGPT but happening organically. I suspect this could play a role in the increasing personalization of search results, as well. For more, I wrote about USER-LLM in Hamsterdam Research this week.
Humor
Subjectively funny content.
General marketing & miscellaneous
This is for great content that isn’t necessarily SEO or marketing-specific. PPC, PR, dev, design, and social friends, check it out!
Transforming UX with Generative AI – Marc C. Seefelder, UX Collective

15 brand tone of voice examples to help you find yours – Nika Prpic, FileStage

Older stuff that’s good!
Not everything I find worth sharing is new as of this week, so these are gems I came across published in the past.
Great job making it to the end. You rock!
Want help with your SEO strategy?
I’m an independent SEO consultant based in Orlando, Florida, focusing on custom audits and strategies for brands. Don’t hesitate to reach out, or visit my about page for more information.
Let’s connect!
Hit me up anytime via text or call at 813-557-9745 or on social or email:
Cheers!