
Hamsterdam Part 58: Weekly SEO & AI News Recap (5/13 to 5/19, 2024)

By Ethan Lazuk


A weekly look-back at SEO & AI news, tips, and other content shared on social media & beyond.

Hamsterdam Part 58 Weekly SEO Recap 5/13 to 5/19 with Sam Altman quote.
Source: Sam Altman

Opening notes:

  • Welcome to another week of Hamsterdam! (So much stuff this week, I had to remove like 40% of it for mobile loading …)
  • We hit no. 1 on Friday for “weekly SEO news” on Google. Pretty cool, right?! (Of course, we had dropped to page 2 a few days before that, so who knows?!)
Weekly SEO News Google Desktop SERP.

Want each week’s Hamsterdam recap delivered? Subscribe to the free newsletter! (It’s pretty much a link to this article, but it’ll be conveniently emailed to you.) 😉

*Feel free to jump to the news recap below, or continue reading for words of the week, “This week in SEO history,” plus an introduction and summary first!


Marketing word of the week: “above the fold”

Above the fold refers to the visible section of a webpage that loads in the viewport before a user scrolls down.

Above the fold vs below the fold.
Source: VWO

The term comes from print newspapers, which were printed on large sheets and folded for display at newsstands. Stories above the fold were seen first by passersby, so their headlines had to grab attention!

Dumb and Dumber Moon Landing Newspaper GIF.

The same applies to webpages today. (There’s a TikTok video on this topic later in the recap.)

You generally want to include your primary heading (typically an H1) above the fold, so users know what content to expect.

You might also place a CTA (call to action) above the fold if your visitors have clear intent to convert, whereas uncertain visitors might want supporting text, imagery, or navigational elements like jump links to learn more.

In Google’s Core Web Vitals (CWV), largest contentful paint (LCP) represents the render time of the largest content element in the viewport (above the fold).

PageSpeed Insights mobile loading screenshots.

While it’s a best practice to lazy load images below the fold (waiting to load them until they’re scrolled into the viewport), above-the-fold content should load as quickly as possible. (A good LCP is under 2.5 seconds.)
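As a quick illustration, here’s a minimal Python sketch (using BeautifulSoup, and assuming for simplicity that the first image in the markup is the above-the-fold hero) of adding the browser-native lazy-loading attribute to everything below it:

```python
# A minimal sketch of adding native lazy loading to below-the-fold images.
# Assumption: the first <img> is the above-the-fold hero (a simplification).
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

html = """
<main>
  <img src="hero.jpg" alt="Hero image">
  <img src="chart.png" alt="Supporting chart">
  <img src="team.jpg" alt="Team photo">
</main>
"""

soup = BeautifulSoup(html, "html.parser")
images = soup.find_all("img")

# Leave the hero alone so it can paint immediately (it may well be your
# LCP element); defer everything below the fold with loading="lazy".
for img in images[1:]:
    img["loading"] = "lazy"

print(soup.prettify())
```

In real templates you’d identify above-the-fold images more carefully, but the principle holds: never lazy load your likely LCP element.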

A satisfying above-the-fold experience is also more likely to improve metrics like dwell time, time on page, and engagement rate.

You can also measure depth of scrolling with heat maps, like in Microsoft Clarity, which shows an “average fold” dotted line.

Microsoft Clarity Heat Map.

If you monetize your website, beware of overly aggressive ads above the fold. Not only can these slow a page’s load time, but they can turn off users by diminishing the page experience.

You’ll also want to be conscious of how your title links correspond to your primary headings (H1s). If users expect one topic or angle from search results but encounter another on the page itself, they may bounce in frustration.

Overall, the better your above-the-fold experience, the more positive user interaction signals you’ll send back to search engines regarding the relevance and quality of your content, not to mention conversions.


AI word of the week: “activation function”

An activation function is a mathematical operation applied to a neuron’s output in a neural network. It helps the model learn nonlinear (complex) relationships between features (the individual characteristics or attributes of the input data that the model uses to make predictions) and a label (the target output or correct answer the model is trying to predict).

Neural networks are made up of an input layer, one or more hidden layers of interconnected neurons, and an output layer that the hidden layers feed forward into. The activation function is applied to the weighted sum of the inputs to each neuron in the hidden layers. Below is an example of a feed-forward neural network with a Sigmoid activation function.

Feed-forward neural network with sigmoid activation function.
Source: ResearchGate
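To make the mechanics concrete, here’s a toy forward pass in Python (all weights are random, made-up numbers purely for illustration) showing where the activation function sits, applied to the weighted sum of each neuron’s inputs:

```python
# A toy feed-forward pass with a Sigmoid activation, using made-up numbers.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes any input into (0, 1)

x = np.array([0.5, -1.2, 3.0])    # features (input layer)
W_hidden = np.random.randn(4, 3)  # weights: 4 hidden neurons x 3 inputs
b_hidden = np.zeros(4)            # hidden-layer biases
W_out = np.random.randn(1, 4)     # weights: 1 output neuron x 4 hidden
b_out = np.zeros(1)

hidden = sigmoid(W_hidden @ x + b_hidden)  # weighted sum -> activation
y_hat = sigmoid(W_out @ hidden + b_out)    # prediction for the label
print(hidden, y_hat)
```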

The concept of nonlinear relationships is important to grasp in machine learning. Many real-world phenomena exhibit complex relationships that linear representations (such as linear regression) can’t capture. This is what makes neural networks valuable for modeling these intricate patterns and dependencies — especially deep neural networks (DNNs) for tasks like image recognition and natural language processing (NLP) — and activation functions are key here.

Linear vs. Non-Linear relationships.
Source: O’Reilly

Common examples of activation functions include Sigmoid, tanh, and ReLU, shown on the left side below.

Activation Functions
Source: Shruti Jadon
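For reference, here are minimal NumPy versions of those three functions (a sketch for intuition, not a production implementation), evaluated at a few sample points to show their output ranges:

```python
# Minimal NumPy definitions of three common activation functions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # output in (0, 1)

def tanh(z):
    return np.tanh(z)                # output in (-1, 1)

def relu(z):
    return np.maximum(0.0, z)        # 0 for negatives, identity otherwise

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # ≈ [0.12, 0.5, 0.88]
print(tanh(z))     # ≈ [-0.96, 0.0, 0.96]
print(relu(z))     # [0.0, 0.0, 2.0]
```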

As a takeaway, activation functions are used within the hidden layers of neural networks and allow the models to capture intricate (nonlinear or complex) connections between input data (features) and the desired output (label).


This week in SEO history: “Line Mode Browser” (1991)

Line Mode Browser (which is formally called the Libwww Line Mode Browser) was introduced on May 14th, 1991 by a team that included Tim Berners-Lee (the creator of HTML, URI, HTTP, and the first browser, “WorldWideWeb”), Henrik Frystyk Nielsen, and Nicola Pellow.

At least, that’s the date according to the Web Design Museum. Other sources put the rollout in January 1992. Digging in, I think May 1991 was when there was a general release of WWW on central CERN machines.

Line Mode Browser was the second browser ever made for the World Wide Web, and the first with a cross-platform codebase (Nicola Pellow’s innovation). This meant it could be installed on different kinds of computers (not just the proprietary NeXT operating system that WorldWideWeb was made for).

To accomplish this cross-platform use, the browser displayed only text. A mouse couldn’t be used, so links were visited using keyboard inputs.

What’s awesome is there’s a simulator available for Line Mode Browser.

Below you can see numbers in brackets (like “[14]”), which are links to documents, called “references.” To visit the May history document in the example below, we’d type in “14” and hit RETURN. Then the page prints text on the screen line by line, like a teleprinter. It’s a fun, Matrix-like experience.

Line Mode Browser simulator screenshot.

Here’s a part SEOs might find interesting, as well.

According to the WWW Line Mode Browser quick guide (which is still online), there was a search function based on keywords:

“Some documents are indexes. These contain little text, but allow you to search for information with keywords. Type ‘find’ or ‘f’ (space) and the keywords. For example, ‘f sgml examples’ searches the index for items with keywords SGML and EXAMPLE. You can only use the ‘find’ command when it is present in the prompt. You can omit the ‘f’ if the first keyword doesn’t conflict with existing commands.”

– WWW Line Mode Browser quick guide

Pretty cool!

Speaking of developments in search …

Let’s get to our introduction this week, talking about AI, but not how you might think.


Introduction to week 58: “blank stare banker”

Blank stare cartoon dog banker.

You’ve probably heard a lot about Google I/O this week. Combined with other announcements, it might even seem like AI is taking over everything we talk about these days.

I have a story that shows that might not be the case … though I think it should be.

But first, let’s recap the week a bit.

Google I/O was on Tuesday (May 14th). Here’s a list of the top 100 announcements. There were others, as well (like Model Explorer).

Google introduced several search-related features. The main announcement was the rollout of AI Overviews (formerly SGE via Search Labs) to all U.S. users now and to 1 billion people globally this year.

Another update was video queries (video inputs via Google Lens with AI Overview responses), although this is an upcoming Search Labs feature. Personally, I see video inputs as more germane to Gemini (say that 5 times fast) than Search, but we shall see!

The sneaky big news for search, in my opinion, was AI-organized result pages, which are personalized to users. This is already live for all U.S. English searches. My expectation is it’s the truer future direction of Google Search.

If you’ve visited followed topics from Discover in Search and used the About this result feature, you’ve likely seen how widespread personalization already is. (That trend also goes back decades.)

While using generative AI to summarize search results is helpful for some uses (I reference them often, tbh), I see AI Overviews more as a Perplexity-style response that’s most pertinent to early adopters.

Meanwhile, personalizing an entire SERP with a variety of media and search journey categories, so users can “slice and dice” or “explore” (remember those terms from our 2009 Searchology history lesson on “Search Options”), feels more universally helpful and in line with Google’s bigger goal of eliminating friction points.

Personally, I didn’t see anything that changed my approach to SEO strategies, other than needing to update “SGE” mentions in my service pages to say “AI Overviews.” 😉

It was all pretty much in line with expectations, and I’m personally excited about an era of multimodal search that crosses multiple surfaces. (Attribution will get improved over time, I’m sure.)

But I do think the nature of SEO workflows is changing quickly.

The big news for consumers at I/O, I thought, was the upgrades to Gemini Advanced (like data analysis, 1.5 Pro, and a 1 million token context window) and the demoed “agentive” experiences, a carryover from what we saw at Cloud Next last month.

As OpenAI’s Sam Altman said in an interview (which you can find below in the TikTok section), asking Siri on iPhone to set an alarm is much easier than doing it manually. However, shopping on DoorDash or ordering an Uber is still something most would prefer to do themselves.

But digging through email to find a receipt? Yeah, you got that one, Gemini!

I also thought the big news overall was Gemini 1.5 Flash (which in my testing so far is super fast) and the expansion of 1.5 Pro’s context window to 2 million tokens. Rather than building a custom RAG pipeline at that point, just upload your documents and carry on.

But why did I call this introduction “blank stare banker”?

Well, for all the excitement this week at OpenAI and from Google I/O (and we also have Microsoft Build coming up!), I also got a reminder that kept things in perspective.

I was at Wells Fargo on Monday (during the GPT-4o announcement) to open a business account, when the banker (a talkative young person) told me, “You have the wrong role assigned on your state business form.”

It was something like that. I don’t recall exactly.

And I said, “Oh, you know what, I think I just asked ChatGPT which option to pick for that.”

They went quiet and stared blankly, and that’s when I realized this smart, capable person had no idea what ChatGPT was.

I wasn’t shocked, truth be told. I hear similar things from other professionals: they’re “holding off” before getting involved with AI.

This is just my opinion, but I think everyone needs to drop what they’re doing and get involved right now.

To what degree is up to you.

But there’s a learning curve with AI, especially the limits of its operations.

AI models can save us a ton of time, or they can create extra work — like explaining when they can’t do SEO. 😉

Learning that takes time.

Outside of using AI for SEO work efficiencies, I enjoy studying deep neural networks from an academic perspective to learn how the models work.

For other professionals, it might be beneficial to get familiar with the different companies creating models, and the types of products available.

As you’ll hear in a few videos below, some companies are skipping tools like Copilot (due to cost) and making their own custom ones with open-source models (like Meta’s Llama 3). Meanwhile, the rate of new model deployment is expected to be about every 4-6 months.

Just as SEOs advise clients on digital marketing, so too can we give suggestions for AI usage based on our more in-depth familiarity.

AI is already part of most aspects of society, with some of its underpinnings, like neural networks, going back to the 1960s. Yet it will soon become even more ubiquitous from a consumer and professional perspective.

I wouldn’t wait for anyone to show you the way. Instead, take the bull by the horns and go explore! Paying attention is all you need. 🙂

One place to start getting inspired is Google Labs. Another is YouTube, where the quality of DNN lectures and free tutorials for Python, PyTorch, etc., is refreshing.

You might be surprised just how quickly you can accomplish your goals, or even discover new ones. 😉

Buckle up for a full week’s recap, and enjoy the vibes:

Thank you for supporting Hamsterdam and the cause of SEO & AI learning.

Missed last week? Don’t worry, I got you! Read Part 57 to catch up.

Other great sources of weekly SEO news:


Now, time for our weekly review of SEO social posts, articles, & more …

The Big Lebowski is this your homework Larry scene.

Quick summary

  • OpenAI released GPT-4o (natively multimodal); Sam Altman called it “her,” which stirred excitement; mystery chatbot solved?
  • Google I/O happened; AI Overviews rolled out (U.S.); sneaky big news is AI-organized results pages; Gemini Advanced to get 1.5 Pro and 1 million token context window
  • Google launches web filter; John Mueller speaks about HCU-impacted sites
  • Perplexity’s head of search gives interview; CEO trolls Google; Microsoft does, too
  • And much more!

Jump to a section of this week’s recap:

Or keep scrolling to see it all.

Ok, time to step inside the white flags of Hamsterdam …

Hamsterdam scene from The Wire with Carver pointing at the white flags.

I/O news

Some stuff Google shared on social during I/O 2024.

You can watch the full I/O keynote on YouTube here.

SEO news, Google updates, & SERP tests

Notable updates or news related to Google Search or related SEO topics.

Related: Here’s an article from The Verge (yes, I’m mending fences, like a true search diplomat) that has some user comments that seem positive on this development.

Circle to Search may no longer be an Android exclusive, could come to Chrome on iOS – Ryan McNeal, Android Authority

Circle to Search
Excerpt: “The folks over at The Mac Observer spotted something interesting hiding in Chrome for iOS. There appears to be a new “Lens Circle to Search” flag that was quietly added to the app. Once the flag is enabled, the Circle to Search feature will be available within Google Lens on iOS. This means iPhone users could start circling and searching to their heart’s content.”

SEO tips & tidbits

Actionable tips, cool tidbits, and other findings and observations that can be teaching moments.

Related: Site-wide, page-level, sections, oh the intrigue! I updated my article on creating helpful, people-first content this week based on March’s developments. I didn’t realize it, but I predicted the helpful content system would be integrated into the core system (like Panda). Of course, I removed that “hypothetical” part now. (ICYMI, my 11x content article might be relevant, also!)

SEO (and AI) fundamentals & resources

Essential information, concepts, or resources to learn about SEO or AI.

Articles, videos, case studies & more

Longer-form content pieces shared on social, in newsletters, and elsewhere.

Google is redesigning its search engine — and it’s AI all the way down – David Pierce, The Verge

Google AI Overviews article on The Verge.
Excerpt: “That combination of the Knowledge Graph and AI — Google’s old search tool and its new one — is key for Reid and her team. Some things in search are a solved problem, like sports scores … Gemini’s job, in that case, is to make sure you get the score no matter how strangely you ask for it. “You can think about expanding the types of questions that would successfully trigger the scores,” she says, “but you still want that canonical sports data.” … Part of the impetus for creating the new search-specific Gemini model, Reid tells me, was to focus it on getting things right. “There’s a balance between creativity and factuality” with any language model, she says. “We’re really going to skew it toward the factuality side.” AI Overviews may not be fun or charming, but as a result, they might get things right more often. … But she’s also convinced and says early data shows that this new way of searching will actually lead to more clicks to the open web. Sure, it may undercut low-value content, she says, but “if you think about [links] as digging deeper, websites that do a great job of providing perspective or color or experience or expertise — people still want that.” She notes that young users in particular are always looking for a human perspective on their query and says it’s still Google’s job to give that to them.”
Excerpt: “With every algorithm and change, we move further away from the old days of tricking the search engines and closer to having to do real marketing. If you aren’t thinking about user needs, personas and intent, you’re already failing.
Too often, I meet with SEOs and businesses whose approach is backward. They start off saying, “I have this thing. Make it rank for this keyword.” That’s the wrong approach. A better approach is to start with the keyword, understand the user intent and what they would find useful – and then go build that.”
Excerpt: “Google’s documentation says that following their guidelines for ranking in the regular search is all you have to do for ranking in AI Overviews. … Obviously, keywords and synonyms in queries and documents play a role. But in my opinion they play an oversized role in SEO. There are many ways that a search engine can annotate a document in order to match a webpage to a topic, like what Googler Martin Splitt referred to as a centerpiece annotation. A centerpiece annotation is used by Google to label a webpage with what that webpage is about.”

Alexandr Yarats, Head of Search at Perplexity – Interview Series – Antoine Tardif, Unite.AI

Unite AI interview with Alexandr Yarats of Perplexity AI.
Excerpt: “We use LLMs everywhere, both for real-time and offline processing. LLMs allow us to focus on the most important and relevant parts of web pages. They go beyond anything before in maximizing the signal-to-noise ratio, which makes it much easier to tackle many things that were not tractable before by a small team. In general, this is perhaps the most important aspect of LLMs: they enable you to do sophisticated things with a very small team. … We are optimizing a completely different ranking metric than classical search engines. Our ranking objective is designed to natively combine the retrieval system and LLMs. This approach is quite different from that of classical search engines, which optimize the probability of a click or ad impression.”

Local SEO

What’s happening in your local neck of the woods; well, actually in local search.

Technical SEO

Everything from basics to advanced moves (and also tools).

Excerpt: “If you’re not sure how Google is treating one of your temporary redirects, paste the redirected URL into Search Console’s URL Inspection tool. If it shows the “URL is not on Google” warning, Google must be treating the redirect as permanent. If it is on Google, then Google’s treating it as temporary.” (This is a super in-depth article. Recommend bookmarking!)

Content marketing

From what is helpful content to user journeys and beyond.

Data analysis & reporting

Showing that what you’re doing is helping.

AI, machine learning, & LLMs

Last week’s “im-also-a-good-gpt2-chatbot” mystery is solved. (I didn’t see anyone else raise this point, though …)
Excerpt: “The 1-bit Transformer was first introduced by Kim et al. (2020) as a way to reduce the memory footprint and computational complexity of the original Transformer architecture. The key idea behind 1-bit Transformers is to quantize the weights and activations of the model to 1-bit values, i.e., -1 or 1. This quantization process not only reduces the memory requirements of the model but also enables the use of binary operations, which are significantly faster than floating-point operations. … The main advantage of 1-bit Transformers is their ability to achieve comparable performance to their full-precision counterparts while using significantly less memory and computational resources. The low memory requirements are a revolution in themselves.”

Why it matters: 1-bit transformers are a new type of neural network architecture that replaces 32-bit weights with 1-bit weights, making models smaller and more efficient. This leads to benefits like running on devices with less power and memory. These transformers could be used to develop more powerful chatbots or virtual assistants, or even new applications like real-time machine translation or on-device NLP for mobile devices.
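To make the quantization idea concrete, here’s a rough Python sketch of the core trick (a sign function plus one scalar scale per weight matrix). This illustrates the concept only, not the paper’s actual training recipe:

```python
# A rough sketch of 1-bit weights: quantize full-precision weights to -1/+1
# with a sign function, keeping one scalar scale per matrix so the layer's
# output magnitude stays roughly comparable.
import numpy as np

def quantize_1bit(W):
    scale = np.mean(np.abs(W))           # one scalar scale for the matrix
    W_bin = np.where(W >= 0, 1.0, -1.0)  # every weight becomes -1 or +1
    return W_bin, scale

W = np.random.randn(4, 8) * 0.02  # stand-in for full-precision weights
x = np.random.randn(8)

W_bin, scale = quantize_1bit(W)
y_full = W @ x                # full-precision matmul
y_1bit = scale * (W_bin @ x)  # binary matmul plus one scalar multiply

print(y_full)
print(y_1bit)  # rough approximation at 1/32nd of the weight memory
```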

Building on our commitment to delivering responsible AI – Lila Ibrahim & James Manyika, The Keyword (Google)

Google responsible AI commitment article.
Excerpt: “One new experimental tool we’ve built to make knowledge more accessible and digestible is called Illuminate. It uses Gemini 1.5 Pro’s long context capabilities to transform complex research papers into short audio dialogues. … We announced AlphaFold 3, an update to our revolutionary model that can now predict the structure and interactions of DNA, RNA and ligands in addition to proteins — helping transform our understanding of the biological world and drug discovery.”

Why it matters: AlphaFold 3 is BIG TIME. Also, the advancements in the education field, I think, will revolutionize opportunity, especially for people who like self-learning. 😉 Spread the word!

How ‘Chain of Thought’ Makes Transformers Smarter – Vineet Kumar, MarkTech Post

Chat of Thought MarkTech Post article.
Excerpt: “Essentially, they found that without the chain of thought, transformers are limited to efficiently performing only parallel computations, meaning they can solve problems that can be broken down into independent sub-tasks that can be computed simultaneously. However, many complex reasoning tasks require inherently serial computations, where one step follows from the previous step. And this is where the chain of thought helps transformers a lot. By generating step-by-step reasoning, the model can perform many more serial computations than it could without CoT. The researchers proved theoretically that while a basic transformer without CoT can only solve problems up to a certain complexity level, allowing a polynomial number of CoT steps makes transformers powerful enough to solve almost any computationally hard problem, at least from a theoretical perspective.”

Why it matters: The paper explains the theoretical reasons why chain of thought (CoT) is effective for improving the reasoning capabilities of LLMs. CoT involves prompt engineering to instruct LLMs to think step by step. It can be as simple as adding “think step by step” to the end of a prompt. This encourages the model to break a problem down into smaller sequential steps. Without CoT, transformers perform parallel computations, processing different pieces of information simultaneously, while struggling to reason sequentially, where one step depends on the outcome of another. As the article points out, CoT makes transformers powerful enough for any computationally hard problem, theoretically. Here’s a link to the paper. This was applied to “decoder-only transformers through the lens of expressiveness,” meaning generative tasks relevant to LLMs.
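Here’s a minimal sketch of what zero-shot CoT prompting looks like in practice. The ask_llm function below is a hypothetical stand-in for whatever chat-completion API you use; the point is only how the CoT instruction gets appended:

```python
# A minimal sketch of zero-shot chain-of-thought (CoT) prompting.
# `ask_llm` is a hypothetical placeholder, not a real library call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your LLM provider of choice.")

question = (
    "A store sells pencils in packs of 12 for $3. "
    "How much do 30 pencils cost?"
)

direct_prompt = question                                 # one-shot answer
cot_prompt = question + "\n\nLet's think step by step."  # the CoT trigger

# With the CoT suffix, the model is nudged to write out intermediate
# (serial) reasoning steps before committing to a final answer.
print(cot_prompt)
```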

This AI Paper by Microsoft and Tsinghua University Introduces YOCO: A Decoder-Decoder Architecture for Language Models – Nikhil, MarkTech Post

YOCO paper about Microsoft researchers.
Excerpt: “Microsoft Research and Tsinghua University researchers have introduced a novel architecture, You Only Cache Once (YOCO), for large language models. The YOCO architecture presents a unique decoder-decoder framework that diverges from traditional approaches by caching key-value pairs only once. This method significantly reduces the computational overhead and memory usage typically associated with repetitive caching in large language models. YOCO efficiently processes long sequences by leveraging precomputed global KV caches throughout the model’s operation, streamlining the attention mechanism and enhancing overall performance by employing a self-decoder and a cross-decoder.”

Why it matters: YOCO is highly efficient at processing long text sequences, as would be needed for document understanding, code generation, or conversational responses. In short, it’s a new take on a language model that combines the benefits of, say, GPT and BERT. Here’s a link to the paper. The researchers note how, “Experimental results demonstrate that YOCO achieves favorable performance compared to Transformer in various settings of scaling up model size and number of training tokens.” And given the 1 million token context window of Gemini, it’s notable how they said, “We also extend YOCO to 1M context length with near-perfect needle retrieval accuracy.”
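For intuition on why caching once matters at long context lengths, here’s some back-of-the-envelope Python. All dimensions are assumptions for illustration, not the paper’s actual configuration:

```python
# Back-of-the-envelope KV-cache memory: a standard decoder caches keys and
# values in every layer; a YOCO-style design caches them once and reuses
# them. All dimensions below are illustrative assumptions.
num_layers = 32
seq_len = 1_000_000   # 1M-token context
hidden_dim = 4096
bytes_per_value = 2   # fp16

kv_per_layer = 2 * seq_len * hidden_dim * bytes_per_value  # keys + values

standard_cache = num_layers * kv_per_layer  # cached in every layer
yoco_cache = kv_per_layer                   # cached once, reused everywhere

print(f"standard: {standard_cache / 1e9:.0f} GB")  # ~524 GB
print(f"yoco:     {yoco_cache / 1e9:.0f} GB")      # ~16 GB
```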

The future of Microsoft’s Copilot – CNBC

CNBC segment on Copilot.
I had to remove the video embed, but you can watch it at CNBC.com.

Why it matters: This video talks about Copilot cost limitations for businesses vs. creating custom tools with open-source models.

Excerpt: “The Gemini 1.5 Pro presented in this report is an update over the previous Gemini 1.5 Pro February version, and it outperforms its predecessor on most capabilities and benchmarks. All in all, the Gemini 1.5 series represents a generational leap in model performance and training efficiency. Gemini 1.5 Pro surpasses Gemini 1.0 Pro and 1.0 Ultra on a wide array of benchmarks while requiring significantly less compute to train. Similarly, Gemini 1.5 Flash performs uniformly better compared to 1.0 Pro and even performs at a similar level to 1.0 Ultra on several benchmarks.”

Why it matters: Here’s a link to the PDF. I had Gemini 1.5 Flash summarize it for us (it took under 10s and used 115k tokens, or about 12% of the 1 million available, or 1.2% of 10 million): “This paper introduces Gemini 1.5 Pro and Gemini 1.5 Flash, the next generation of multimodal language models capable of handling massive amounts of context (up to 10 million tokens), significantly exceeding the capabilities of current models like Claude 3.0 and GPT-4 Turbo. Notably, Gemini 1.5 models achieve near-perfect recall on long-context retrieval tasks across all modalities (text, video, audio) and demonstrate improved performance on various benchmarks, including long-document question answering and long-video question answering. These advancements in long-context capabilities do not come at the expense of core capabilities like math, science, reasoning, code, and multilingual understanding, with Gemini 1.5 Pro outperforming even its previous, more powerful iteration, Gemini 1.0 Ultra. While the paper highlights several real-world use cases, SEO professionals might be particularly interested in Gemini 1.5’s ability to process and analyze large amounts of data, potentially influencing the way SEO strategies are developed and executed.”

TikTok content

It’s a search engine, right?

@allinpodcastclips – Sam Altman on OpenAI creating a new phone

@bloombergbusiness – Never mind movies, television, and even TikTok: AI is going to lead to a whole “new form of content,” argues DreamWorks co-founder Jeffrey Katzenberg at the Qatar Economic Forum.

@radwebdesigns – After reviewing hundreds of award-winning agency and portfolio sites, this creator breaks down the best above-the-fold hero section in web design: a clear, concise header (around 10 words) that tells people what you do and who you do it for, plus a preview of your design work peeking through the bottom of the hero to entice visitors to scroll. Don’t tell people to scroll; show them what they’re scrolling to.

@jasmine_bina – Ethnographic researcher and brand strategist Peter Spear speaks in the “Talks at Concept Bureau” series about creating brand mythologies, including his favorite question to ask when doing user research.

@verge – With the help of Gemini Pro and other language models, Google believes Project Astra can finally build truly universal digital assistants that succeed where Alexa and Siri never could.

@dan..mbae – More about JavaScript

Humor

Subjectively funny content.

General marketing & miscellaneous

This is for great content that isn’t necessarily SEO or marketing-specific. PPC, PR, dev, design, and social friends, check it out!

Older stuff that’s good!

Not everything I find worth sharing is new as of this week, so these are gems I came across that were published in the past.

Great job making it to the end. You rock!

Want help with your SEO strategy?

I’m an independent SEO consultant based in Orlando, Florida, focusing on custom audits and strategies for brands. Don’t hesitate to reach out, or visit my about page for more information.

Let’s connect!

Hit me up anytime via text or call at 813-557-9745 or on social or email:

Cheers!


