What Are Google’s Reliable Information Systems & How Might They Be Used in Discussions and Forums

By Ethan Lazuk



Let’s have a forum for discussion.

Willy Wonka strike that reverse it GIF.

Let’s talk about Discussions and forums.

Back on September 28th, 2022, Lauren Clark co-authored a blog post on The Keyword (Google’s blog) called “Bringing more voices to Search.”


This introduced Discussions and forums.

[Aside: Lauren also authored the blog post introducing the Perspectives filter eight months later, which was changed to the Forums filter six months after that, interestingly enough.]

In the September 2022 blog post, the authors write:

“Forums can be a useful place to find first-hand advice, and to learn from people who have experience with something you’re interested in. We’ve heard from you that you want to see more of this content in Search, so we’ve been exploring new ways to make it easier to find. Starting today, a new feature will appear when you search for something that might benefit from the diverse personal experiences found in online discussions.

The new feature, labeled ‘Discussions and forums,’ will include helpful content from a variety of popular forums and online discussions across the web.”

– The Keyword (2022)

There wasn’t a lot of hubbub about this SERP feature in subsequent months, as I remember it.

Perspectives seemed to be the bigger focus, at least for UGC, and of course SGE was getting attention, having been announced the same day as Perspectives (May 10th, 2023).

My recollection is that Reddit and Quora content (and social media content in general) began appearing more in regular search results during the rankings updates after the summer of 2023’s turbulence.

But Discussions and forums started to pick up as a topic of discussion (pun intended) in late 2023 and into early 2024, and especially lately (April 2024).

Let’s take a look at the context for the latest discussions around Discussions and forums and how Google Search’s Reliable information systems may play a role in its visibility for YMYL queries.

Spoiler alert summary: The hypothesis is that Google dampens its freshness signals and relies on consensus from authoritative sources to verify forum answers, and also likely expects them to appear for medical queries based on the Quality Rater Guidelines.

The context

Around November 27th, 2023, a full year after the initial blog post co-authored by Lauren Clark introduced Discussions and forums, Google Search Central introduced support for Discussion forum structured data:

“When forum sites add this markup, Google Search can better identify online discussions across the web and make use of this markup in features such as Discussions and Forums and Perspectives.”

– Google Search Central
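
For concreteness, here's a minimal, hypothetical sketch of what that markup could look like, built as a Python dict and serialized to JSON-LD. The thread title, usernames, and dates are invented, and the property names follow schema.org's DiscussionForumPosting type, so double-check Google Search Central's documentation for the exact required and recommended fields before relying on it.

```python
import json

# Hypothetical example thread. Property names follow schema.org's
# DiscussionForumPosting type, which Google's Discussion forum markup
# documentation builds on; verify fields against Google Search Central.
forum_thread = {
    "@context": "https://schema.org",
    "@type": "DiscussionForumPosting",
    "headline": "Has anyone dealt with iron deficiency on a vegan diet?",
    "author": {"@type": "Person", "name": "example_user"},
    "datePublished": "2023-06-14T09:30:00+00:00",
    "text": "Original post text goes here...",
    "comment": [
        {
            "@type": "Comment",
            "author": {"@type": "Person", "name": "another_user"},
            "datePublished": "2023-06-14T11:05:00+00:00",
            "text": "A reply with first-hand experience goes here...",
        }
    ],
}

# Emit the JSON-LD that would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(forum_thread, indent=2))
```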

More and more often from late 2023 onward (as I recall it), users (or at least search marketers) were observing Discussions and forums SERP features.

These features were even showing in product knowledge panels, first reported as a test in Search Engine Roundtable on November 14th, 2023.

A few months later on February 14th, 2024, Glenn Allsopp published a study of 10,000 product review queries and found the Discussions and forums feature was present for 77% of them.

Not long after that, on February 22nd, 2024, we learned that Google had made a deal with Reddit for access to Reddit's data. This was notable here because Reddit has been a common presence in Discussions and forums.

Most recently, though, a conversation about Discussions and forums was triggered by Lily Ray pointing out it was appearing for “weight loss,” a presumably YMYL medical query.

In his initial response to Lily Ray’s question (the one being quoted), Danny Sullivan (Search Liaison) references a November 15th, 2023 blog post on The Keyword titled, “New ways to find just what you need on Search.”

This post had several announcements, including Notes and the Perspectives filter coming to desktop (now the Forums filter), but also this sentence:

“As part of this work, we’ve also rolled out a series of ranking improvements to show more first-person perspectives in results, so it’s easier to find this content across Search.”

– The Keyword (2023)

In the SEO community, we associated this announcement with the completed rollout of the “hidden gems” ranking improvements and their incorporation into Google’s core ranking systems.

What’s interesting is that blog post makes no mention of Discussions and forums, only “first-person perspectives.”

Perhaps we can now draw a firmer connection between hidden gems and Discussions and forums.

But should the feature appear for YMYL health queries at all?

In a portion of his reply that’s cut off above, Danny raises an example that ties forums to the Experience side of E-E-A-T:

“Forums probably do have a role for people seeking information. People suffering illnesses, for example, might want to hear how people are coping generally outside of potential treatments. I’d expect we’ll keep looking at how to improve here.”

– Search Liaison on X

(That’ll become more interesting later, as we’ll see when looking at the Quality Rater Guidelines.)

As Aleyda Solis and Lily Ray point out, forums can also be rife with spammers and frauds:

Of course, medical sites aren’t always immune from SEO-first weirdness themselves:

But that’s what really seems to be at the heart of the matter here:

Users like Reddit, and they’re publicly tired of “SEO spam.” Yet, while we can largely trust Google’s systems to filter out most medical spam pages for YMYL queries and reward quality SEO work (i.e., helpful, reliable, people-first content), the question is whether Google can also be a good judge of Reddit content and forums generally for YMYL medical topics.

What’s interesting is that Danny links his reply above to the Reliable information systems section of Google’s guide to Search ranking systems.

Let’s look deeper at these systems and how they could relate to Discussions and forums.

Google’s reliable information systems

Here’s the full Reliable information systems section with its links intact (which we’ll review after):

“Multiple systems work in various ways to show the most reliable information possible, such as to help surface more authoritative pages and demote low-quality content and to elevate quality journalism. In cases where reliable information might be lacking, our systems automatically display content advisories about rapidly-changing topics or when our systems don’t have high confidence in the overall quality of the results available for the search. These provide tips on how to search in ways that might lead to more helpful results. Learn more about our approach to delivering high-quality information in Search.”

– Google Search Central
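
As a toy sketch of the two advisory conditions described there (this is my own simplification, not Google's implementation), the decision might be framed roughly like this:

```python
from statistics import mean

def should_show_content_advisory(result_quality_scores, topic_is_rapidly_changing,
                                 confidence_threshold=0.6):
    """Toy illustration (not Google's actual logic): show an advisory when the
    system lacks confidence in the overall quality of available results, or
    when the topic is rapidly changing."""
    overall_confidence = mean(result_quality_scores) if result_quality_scores else 0.0
    return topic_is_rapidly_changing or overall_confidence < confidence_threshold

# Example: sparse, low-quality results for a developing topic would trigger an advisory.
print(should_show_content_advisory([0.42, 0.55, 0.38], topic_is_rapidly_changing=True))  # True
```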

This section mentions pages, content, and quality journalism, but not forums or the like specifically.

Taking a step back, it’s interesting that the original Discussions and forums announcement from September 2022 on The Keyword was also co-authored by Itamir Snir of Google News:

“We’re also announcing a new way we’re helping to avoid language barriers when it comes to getting local perspectives on international news stories. … In early 2023, we’ll launch a new feature that will give people a simple way to find translated news coverage using machine translation.”

– The Keyword (2022)

Other than the mention of “local perspectives,” this seems like a translation feature. But it is interesting that Discussions and forums was packaged with a news item, while the Reliable information systems section also mentions news.

Let’s explore each of the linked documents from that section for more details.

First link

The first link in the section goes to a blog post on The Keyword from April 25th, 2017, called, “Our latest quality improvements for Search.” It was authored by Ben Gomes.


The post starts out by mentioning efforts to address search engine-first tactics and spammers:

“… our algorithms have always had to grapple with individuals or systems seeking to ‘game’ our systems in order to appear higher in search results—using low-quality ‘content farms,’ hidden text and other deceptive practices. We’ve tackled these problems, and others over the years, by making regular updates to our algorithms and introducing other features that prevent people from gaming the system.”

– The Keyword (2017)

The new challenge afoot is misinformation:

“… there are new ways that people try to game the system. The most high profile of these issues is the phenomenon of ‘fake news,’ where content on the web has contributed to the spread of blatantly misleading, low quality, offensive or downright false information.”

– The Keyword (2017)

Ben explains how the first step in solving the problem is updating the Search Quality Rater Guidelines. (Interestingly, the most recent QRG updates on March 5th, 2024 pertained to “factual inaccuracies.” We’ll discuss and explore this document more later, as well.)

The second step was ranking changes, namely surfacing more authoritative content and demoting lower-quality content.

Other initiatives included user feedback mechanisms and transparency about how search works.

Second link

The second link in the Reliable information systems section goes to a blog post on The Keyword from March 20th, 2018, called, “Elevating quality journalism on the open web.”


This is where it gets interesting, as we have our first mention of “forums” in the context of breaking news:

“During breaking news or crisis situations, stemming the tide of misinformation can be challenging. Speculation can outrun facts as legitimate news outlets on the ground are still investigating. At the same time, bad actors are publishing content on forums and social media with the intent to mislead and capture people’s attention as they rush to find trusted information online.”

– The Keyword (2018)

Maybe we can assume similar (or perhaps the same) systems used to identify trustworthy forum information during breaking news situations can apply to evergreen queries, including about weight loss.

It also seems the best answer to fighting misinformation is not rewarding freshness as much:

“To reduce the visibility of this type of content during crisis or breaking news events, we’ve improved our systems to put more emphasis on authoritative results over factors like freshness or relevancy.”

– The Keyword (2018)

I’d be curious to know if the Discussions and forums answers eligible for YMYL queries are more recent or older.

In my experience, these answers tend to be posted based on relevance more than freshness, and there has even been criticisms of some forum answers being older. So maybe there’s a connection?
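
Purely as an illustration of that hypothesis (the formula and weights below are my invention, not anything Google has published), dampening freshness relative to authoritativeness could be sketched like this:

```python
def blended_score(authority, relevance, freshness,
                  w_authority=0.6, w_relevance=0.3, w_freshness=0.1):
    """Toy weighted blend (a hypothesis, not a documented formula): for sensitive
    queries, the freshness weight is dampened so an older, authoritative forum
    answer can outrank a newer, thinner one."""
    return w_authority * authority + w_relevance * relevance + w_freshness * freshness

# An older but authoritative, on-topic thread vs. a brand-new, low-authority one.
older_thread = blended_score(authority=0.9, relevance=0.8, freshness=0.2)
newer_thread = blended_score(authority=0.3, relevance=0.8, freshness=1.0)
print(older_thread > newer_thread)  # True under these toy weights
```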

The blog post also says this ability to reduce the visibility of such content builds on the announcement from the previous year, which was the Ben Gomes blog post mentioned above.

The rest of the post discusses partnerships and training.

Third link

The third link in the Reliable information systems section goes to a blog post on The Keyword from August 11th, 2022, called, “New ways we’re helping you find high-quality information.” It was written by Pandu Nayak.


Again, this post makes a mention of both Search and News:

“We have deeply invested in both information quality and information literacy on Google Search and News, and today we have a few new developments about this important work.”

– The Keyword (2022)

Based on this explanation of how the systems work, it sounds to me like a neural network, but that’s only speculation based on Google “constantly refining these systems” and Nayak’s background in AI:

“We train our systems to identify and prioritize these signals of reliability. And we’re constantly refining these systems — we make thousands of improvements every year to help people get high-quality information quickly.”

– The Keyword (2022)

That’s the existing technology portion of the post.

The new announcement is then around using MUM to show higher-quality featured snippets.

However, it wouldn’t surprise me if the concept of “consensus” behind the featured snippet quality improvement were the same trust signal used for forum answers:

“By using our latest AI model, Multitask Unified Model (MUM), our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact. Our systems can check snippet callouts (the word or words called out above the featured snippet in a larger font) against other high-quality sources on the web, to see if there’s a general consensus for that callout, even if sources use different words or concepts to describe the same thing. We’ve found that this consensus-based technique has meaningfully improved the quality and helpfulness of featured snippet callouts.”

– The Keyword (2022)
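
To make the consensus idea a bit more tangible, here's a deliberately simplified Python sketch. The quote describes MUM understanding agreement even when sources use different words; the crude token-overlap proxy below obviously doesn't do that, it just illustrates the shape of a "check the callout against multiple high-quality sources" step. The example statements are hypothetical.

```python
def _tokens(text):
    return set(text.lower().split())

def has_consensus(candidate_callout, source_statements, agree_threshold=0.5, min_sources=2):
    """Simplified stand-in for the consensus check described above: count how
    many independent source statements roughly agree with the candidate callout,
    using token-overlap (Jaccard) similarity as a crude proxy for agreement."""
    candidate = _tokens(candidate_callout)
    agreeing = 0
    for statement in source_statements:
        other = _tokens(statement)
        jaccard = len(candidate & other) / len(candidate | other)
        if jaccard >= agree_threshold:
            agreeing += 1
    return agreeing >= min_sources

# Hypothetical example: two independent sources roughly agree with the callout.
callout = "vegans can meet iron needs with legumes leafy greens and vitamin c"
sources = [
    "vegans can meet iron needs with legumes and leafy greens plus vitamin c",
    "iron needs can be met by vegans with legumes leafy greens and vitamin c",
    "iron supplements are always required on a vegan diet",
]
print(has_consensus(callout, sources))  # True
```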

Other portions of the blog post speak to About this result, content advisories, and training.

Fourth link

The final link in the Reliable information systems section was a blog post on The Keyword from September 10th, 2022, called, “How Google delivers reliable information in Search.” It was authored by Danny Sullivan.


He explains how Google ensures quality information:

“But people often ask: What do you mean by quality, and how do you figure out how to ensure that the information people find on Google is reliable?

A simple way to think about it is that there are three key elements to our approach to information quality:

  • First, we fundamentally design our ranking systems to identify information that people are likely to find useful and reliable.
  • To complement those efforts, we also have developed a number of Search features that not only help you make sense of all the information you’re seeing online, but that also provide direct access to information from authorities—like health organizations or government entities.
  • Finally, we have policies for what can appear in Search features to make sure that we’re showing high quality and helpful content.”
– The Keyword (2022)

The next few paragraphs are pretty interesting. Some concepts reminded me of slides from the recent anti-trust trial discoveries, but I think the big takeaway is the role of quality raters.

“Updates to our language understanding systems certainly make Search results more relevant and improve the experience overall. But when it comes to high-quality, trustworthy information, even with our advanced information understanding capabilities, search engines like Google do not understand content the way humans do. We often can’t tell from the words or images alone if something is exaggerated, incorrect, low-quality or otherwise unhelpful.

Instead, search engines largely understand the quality of content through what are commonly called ‘signals.’ You can think of these as clues about the characteristics of a page that align with what humans might interpret as high quality or reliable. For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic.

We consider a variety of other quality signals, and to understand if our mixture of quality signals is working, we run a lot of tests. We have more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results according to how well they measure up against what we call E-A-T: Expertise, Authoritativeness and Trustworthiness.”

– The Keyword (2022)
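
As a toy illustration of the link-based clue mentioned in that quote (not an actual Google signal or formula), one could count only inbound links that come from pages whose own estimated quality clears a bar:

```python
def quality_link_signal(page, link_graph, page_quality, quality_threshold=0.7):
    """Toy illustration of the 'quality pages that link to a page' clue from the
    quote above (not a real Google signal): count only inbound links from pages
    whose own estimated quality clears a threshold."""
    inbound = [src for src, targets in link_graph.items() if page in targets]
    return sum(1 for src in inbound if page_quality.get(src, 0.0) >= quality_threshold)

# Hypothetical mini link graph: which pages link to which.
link_graph = {"health-org.example": {"forum-thread"}, "spam-blog.example": {"forum-thread"}}
page_quality = {"health-org.example": 0.95, "spam-blog.example": 0.1}
print(quality_link_signal("forum-thread", link_graph, page_quality))  # 1
```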

I do wonder whether Discussions and forums could be appearing more often, indirectly, based in part on quality raters’ tests verifying their value.

But that’s also a little tricky, because it’d be harder to verify the E-E-A-T of random people in a forum than a website.

On a similar point, the blog post continues:

“For topics where quality information is particularly important—like health, finance, civic information, and crisis situations—we place an even greater emphasis on factors related to expertise and trustworthiness. We’ve learned that sites that demonstrate authoritativeness and expertise on a topic are less likely to publish false or misleading information, so if we can build our systems to identify signals of those characteristics, we can continue to provide reliable information. The design of these systems is our greatest defense against low-quality content, including potential misinformation, and is work that we’ve been investing in for many years.”

– The Keyword (2022)

It’s hard to reconcile that Discussions and forums could be deemed helpful for YMYL queries per quality rater test findings (indirectly) with the possibility of unhelpful forum contributions being added later.

However, we should also bear in mind that this blog post from Danny predates the introduction of the second “E” for “Experience” by two months.

The next part of the post mentions making “information from authoritative organizations like local governments, health agencies and elections commissions available directly on Search.”

Then it talks about fact checks and guidelines for “general Search features, like knowledge panels, featured snippets and Autocomplete.”

How might Google’s reliable information systems verify forum content quality?

In summarizing these four links from the Reliable information systems section, it seems like the second and third are the most likely scenarios.

Freshness and consensus

The second link spoke about dampening freshness relative to authoritativeness as a safeguard against fake news and bad actors and even mentioned forums.

The third link spoke about consensus among high-quality sources, albeit in the context of featured snippets with no mention of forums.

Maybe we can hypothesize that Discussions and forums answers are restricted by freshness dampening and held to consensus from other sources.

Quality rater guidelines

The first and fourth links, if applicable, would suggest quality raters being involved, which is harder to relate to the context of YMYL and medical queries, at least on the surface.

However, Google’s Quality Rater Guidelines has a section called, “9.3 Ratings for Forums and Q&A Pages,” starting on page 78 (in the March 5th, 2024 version).

Google instructs quality raters to “Rate from the point of view of a user who visits the page, rather than a participant involved in the discussion.”

As for E-E-A-T and forums, the SQRG says:

  • “The E-E-A-T of a discussion among users can often be judged by the posts or comments themselves.
    • For some topics, Experience is the most important dimension of Trust. For other topics, assessing Expertise through the posts may be important. In some cases, the posters themselves will highlight either their own Experience or Expertise, or other people will comment on it.
    • Pages on YMYL topics require more attention to Trust and more care in the assessment of E-E-A-T.
  • Highest quality forum/Q&A pages have extremely satisfying conversations, including participation from users who have put a great deal of effort into their posts and have a wealth of Experience and/or Expertise on the topic. Such conversations can be very satisfying because of the depth of discussion, the unique insights, or the sharing of experiences that many would not have access to in their real-world community.”
– Search Quality Rater Guidelines (2024)

Of course, the same concerns we’ve been pointing out about forum quality are listed in the SQRG as well:

“Another challenge occurs when the discussion on the page drifts, becomes combative, or becomes dominated by misleading or spammy content. When rating, value the insightful, meaningful discussion that exists. If the page is a mix of high and low quality characteristics but has insightful, meaningful discussion, the Medium rating may be most appropriate as long as the page is not potentially harmful.”

– Search Quality Rater Guidelines (2024)

Here’s what else is interesting.

There are two medical examples of forum content given in the SQRG, at both ends of the spectrum.

The first is an example of the lowest quality:

Google Search Quality Evaluator Guidelines example of lowest quality forum content.

The next is an example of the highest:

Google Search Quality Evaluator Guidelines example of highest quality forum content.

In other words, Google implies in the Quality Rater Guidelines that forums are eligible for medical queries.

The example is also similar to what Danny referenced in his post on X.

Pay more attention to this topic?

Another interesting point: on March 5th, 2024, Google called out the Reliable information systems section via a link in its March 2024 core update announcement:

“Just as we use multiple systems to identify reliable information, we have enhanced our core ranking systems to show more helpful results using a variety of innovative signals and approaches.”

– Google Search Central (2024)

I don’t much remember this being referenced that often in past announcements or social media posts.

I could be wrong, but it does seem worth paying attention to in reference to forums.

Examples coming! (plus one sneak preview now)

Stay tuned for an updated version of this post with examples of Discussions and forums answers for different categories of YMYL queries.

But for a teaser, let’s look at one example of a medical query that’s personal to my lifestyle: “vegan iron deficiency.”

Here, Discussions and forums shows up after the fifth web result:

Google desktop SERP for vegan iron deficiency.

It contained two Reddit answers and one from Quora:

Discussions and forums results for vegan iron deficiency query.

As we can see, the Reddit answers are both at least a year old, and the Quora answer is 8 months old, which would be consistent with freshness being discounted.

What about consensus?

To make things easier, I’m just going to take a PDF version of the first Reddit forum page and ask Gemini 1.5 Pro via Google AI Studio to summarize the findings. Then I’ll verify them using Gemini Advanced with Google Search.
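
For anyone who'd rather script that step than use the AI Studio interface, here's a rough sketch of the same workflow with the google-generativeai Python SDK. The file name and prompt are my placeholders, and the File API calls assume a reasonably recent SDK version.

```python
import google.generativeai as genai

# Configure the SDK; in practice, load the key from an environment variable.
genai.configure(api_key="YOUR_API_KEY")

# Upload the saved PDF of the Reddit thread via the File API (placeholder filename).
thread_pdf = genai.upload_file("vegan-iron-deficiency-reddit-thread.pdf")

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    thread_pdf,
    "Summarize the main claims and advice in this forum thread about vegan iron "
    "deficiency, and flag anything that looks medically questionable.",
])
print(response.text)
```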

Based on that exercise, it seems like the forum was trustworthy at a glance:

Gemini summary of a Reddit thread about vegan iron deficiency.

Keep in mind, I did the lazy thing here using only AI chatbots for one forum and didn’t analyze all the answers closely.

But!

Stay tuned and we’ll look at more examples soon to see if any patterns emerge that further confirm if dampening freshness and rewarding medical consensus play a role.

Until then, enjoy the vibes:

Thanks for reading. Happy optimizing! 🙂


Need a hand with SEO audits or content strategy?

I’m an independent strategist and consultant. Learn about my SEO services or contact me for more information!
