
Four tools to check for title changes in the SERPs

On August 24, Google confirmed that it changed how it creates titles for search result listings. The confirmation came roughly a week after search professionals began noticing such changes — in the interim (and even after the confirmation), SEOs raised concerns about how these Google-altered titles may affect their traffic.

Unfortunately, title change information isn’t available in Google Search Console or Google Analytics. So, SEOs have turned to third-party tools to see whether their titles are being changed. Below is a list of tools you can use to check for title changes and instructions on how to do so.

Ahrefs. Title changes can be checked in Ahrefs, although it is a manual process. You can check for changes via historical SERPs in Site Explorer > Organic Keywords 2.0.

Image: Ahrefs.

Since this method shows a list of search results for a given keyword, toggling the “Target only” switch (as shown below), which limits the view to the snippet from your site, can help you get to the information you’re looking for a bit faster. You can then compare titles across different dates.

Image: Ahrefs.

Rank Ranger. The SEO Monitor tool from Rank Ranger is designed to monitor URLs and show you how they perform in Google Search, based on historical data. The data is displayed in a graph that shows ranking changes over time (shown below).

The top 20 URLs for the keyword “buy books,” over a 30-day period. The bold line represents the URL currently being tracked (in this case, Amazon.com). Image: Rank Ranger.

Below the chart is a list of all the changes to the page title and description in Google Search. This means if you or Google make any changes to your title or description, it’ll be displayed here with the date that the change occurred.

The list of changes for the keyword being tracked. Image: Rank Ranger.

This enables SEOs to cross-reference ranking changes with title changes, although Google has said that title changes do not affect rankings.

Semrush. It is possible to track title changes using Semrush, although the toolset provider does not have a specific feature to do so. For keywords you’ve been tracking in the Position Tracking tool, click on the SERP icon next to the keyword.

Image: Semrush.

That will pull the search results page for the date selected in the report, as shown below.

Image: Semrush.

If you suspect a title was changed, you can confirm this by changing the date in the report and repeating this process to compare titles. Note: you can only view this information for the period you were tracking those particular keywords.

SISTRIX. In the left-hand navigation, under SERPs > SERP-Snippets, there is a button to “Show title changes,” which takes you to this screen:

Image: SISTRIX.

The red text indicates words that have been dropped from the title and the green text indicates words that have been added.

Other tool providers. We also reached out to a number of other toolset providers. Screaming Frog and Sitebulb do not support this functionality. And, Moz and STAT did not immediately respond to our inquiries.

Why we care. Knowing when your titles are getting changed, and what they’re getting changed to, can be useful for analyzing any correlation the changes may have with your clickthrough rate. Together, these details may help you decide whether to adjust your titles, or if you’re seeing positive changes, they can tell you what may be resonating with your audience.
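If you want a do-it-yourself baseline to pair with these tools, the minimal sketch below (assuming Python with the requests and BeautifulSoup libraries installed; the URLs are placeholders) records the <title> each of your pages actually serves, with a date stamp. It won’t show you Google’s rewrites directly, but it gives you a dated record of your published titles to line up against what the tools above report from the SERPs, and against your clickthrough-rate data.

```python
# Snapshot the <title> your pages currently serve, with a date stamp,
# so later SERP-title changes can be lined up against CTR reports.
import csv
import datetime

import requests
from bs4 import BeautifulSoup

URLS = [  # placeholder URLs: swap in the pages you want to track
    "https://www.example.com/",
    "https://www.example.com/blog/seo-guide",
]

def fetch_title(url: str) -> str:
    """Return the text of the page's <title> tag, or '' if absent."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.title.get_text(strip=True) if soup.title else ""

def snapshot(urls, out_path="title_snapshots.csv"):
    """Append today's titles to a CSV you can diff over time."""
    today = datetime.date.today().isoformat()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in urls:
            writer.writerow([today, url, fetch_title(url)])

if __name__ == "__main__":
    snapshot(URLS)
```

Run it on a daily schedule (cron, for instance) and the CSV becomes a changelog of your own titles.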

The post Four tools to check for title changes in the SERPs appeared first on Search Engine Land.


Ask the expert: Demystifying AI and Machine Learning in search

The world of AI and Machine Learning has many layers and can be quite complex to learn. Many terms are in use, and unless you have a basic understanding of the landscape, it can be quite confusing. In this article, expert Eric Enge will introduce the basic concepts and try to demystify it all for you. This is also the first of a four-part article series covering many of the more interesting aspects of the AI landscape.

The other three articles in this series will be:

  • Introduction to Natural Language Processing
  • GPT-3: What It Is and How to Leverage It
  • Current Google AI Algorithms: RankBrain, BERT, MUM, and SMITH

Basic background on AI

There are so many different terms that it can be hard to sort out what they all mean. So let’s start with some definitions:

  • Artificial Intelligence – This refers to intelligence possessed/demonstrated by machines, as opposed to natural intelligence, which is what we see in humans and other animals.
  • Artificial General Intelligence (AGI) – This is a level of intelligence where machines are able to address any task that a human can. It does not exist yet, but many are striving to create it.
  • Machine Learning – This is a subset of AI that uses data and iterative testing to learn how to perform specific tasks.
  • Deep Learning – This is a subset of machine learning that leverages highly complex neural networks to solve more complex machine learning problems.
  • Natural Language Processing (NLP) – This is the field of AI focused specifically on processing and understanding language.
  • Neural Networks – This is one of the more popular types of machine learning algorithms, which attempt to model the way that neurons interact in the brain.

These are all closely related and it’s helpful to see how they all fit together:

In summary, artificial intelligence encompasses all of these concepts, deep learning is a subset of machine learning, and natural language processing uses a wide range of AI algorithms to better understand language.

Sample illustration of how a neural network works

There are many different types of machine learning algorithms. The most well-known of these are neural network algorithms, so to provide you with a little context, that’s what I’ll cover next.

Consider the problem of determining the salary for an employee. For example, what do we pay someone with 10 years of experience? To answer that question we can collect some data on what others are being paid and their years of experience, and that might look like this:

With data like this we can easily calculate what this particular employee should get paid by creating a line graph:

For this particular person, it suggests a salary of a little over $90,000 per year. However, we can all quickly recognize that this is not really a sufficient view as we also need to consider the nature of the job and the performance level of the employee. Introducing those two variables will lead us to a data chart more like this one:

It’s a much tougher problem to solve but one that machine learning can do relatively easily. Yet, we’re not really done with adding complexity to the factors that impact salaries, as where you are located also has a large impact.  For example, San Francisco Bay Area jobs in technology pay significantly more than the same jobs in many other parts of the country, in large part due to the large differences in the cost of living.


The basic approach that a neural network would use is to guess at the correct equation using the variables (job, years of experience, performance level), calculate the potential salary using that equation, and see how well the result matches our real-world data. This iterative process is how neural networks are tuned, and it is referred to as “gradient descent”. The simple English way to explain it would be to call it “successive approximation.”

The original salary data is what a neural network would use as “training data” so that it can know when it has built an algorithm that matches up with real-world experience. Let’s walk through a simple example starting with our original data set with just the years of experience and the salary data.

To keep our example simple, let’s assume that the neural network we’ll use for this understands that 0 years of experience equates to $45,000 in salary and that the basic form of the equation should be: Salary = Years of Service * X + $45,000. We need to work out the value of X to come up with the right equation to use. As a first step, the neural network might guess that the value of X is $1,500. In practice, these algorithms make these initial guesses randomly, but this will do for now. Here is what we get when we try a value of $1,500:

As we can see from the resulting data, the calculated values are too low. Neural networks are designed to compare the calculated values with the real values and provide that as feedback which can then be used to try a second guess at what the correct answer is.  For our illustration, let’s have $3,000 be our next guess as the correct value for X. Here is what we get this time:

As we can see, our results have improved, which is good! However, we still need to guess again because we’re not close enough to the right values. So, let’s try a guess of $6,000 this time:

Interestingly, we now see that our margin of error has increased slightly, but in the other direction: we’re now too high! Perhaps we need to adjust our equation back down a bit. Let’s try $4,500:

Now we see we’re quite close! We can keep trying additional values to see how much more we can improve the results. This brings into play another key consideration in machine learning: how precise we want our algorithm to be and when we stop iterating. But for purposes of our example here, we’re close enough, and hopefully you have an idea of how all this works.
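For the curious, here is a minimal Python sketch of that successive-approximation loop. The training pairs are invented for illustration (they are not from a real salary survey), but the update rule is the core idea: repeatedly nudge X against the error until the equation fits the data.

```python
# A toy version of the successive-approximation ("gradient descent") loop:
# fit X in  Salary = Years of Service * X + $45,000  by nudging X against
# the prediction error. The (years, salary) pairs are invented.
data = [(1, 48_000), (3, 59_000), (5, 67_500), (10, 91_000), (15, 112_000)]

BASE = 45_000        # known salary at 0 years of experience
x = 1_500            # initial guess for X (the walkthrough above starts here too)
learning_rate = 0.01

for step in range(200):
    # Gradient of the mean squared error with respect to X.
    grad = sum(2 * years * ((years * x + BASE) - salary)
               for years, salary in data) / len(data)
    x -= learning_rate * grad  # move X in the direction that shrinks the error

print(f"Fitted X is roughly ${x:,.0f} per year of experience")
```

With this particular made-up data, the loop settles near $4,500, much like the hand-tuned guesses above.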

Our example machine learning exercise had an extremely simple algorithm to build, as we only needed to derive an equation in this form: Salary = Years of Service * X + $45,000 (aka y = mx + b). However, if we were trying to calculate a true salary algorithm that takes into account all the factors that impact real-world salaries, we would need:

  • a much larger data set to use as our training data
  • to build a much more complex algorithm

You can see how machine learning models can rapidly become highly complex. Imagine the complexities when we’re dealing with something on the scale of natural language processing!

Other types of basic machine learning algorithms

The machine learning example shared above is an example of what we call “supervised machine learning.” We call it supervised because we provided a training data set that contained target output values and the algorithm was able to use that to produce an equation that would generate the same (or close to the same) output results. There is also a class of machine learning algorithms that perform “unsupervised machine learning.”

With this class of algorithms, we still provide an input data set but don’t provide examples of the output data. The machine learning algorithms need to review the data and find meaning within the data on their own. This may sound scarily like human intelligence, but no, we’re not quite there yet. Let’s illustrate with two examples of this type of machine learning in the world.

One example of unsupervised machine learning is Google News. Google has systems to discover articles getting the most traffic from hot new search queries that appear to be driven by new events. But how does it know that all the articles are on the same topic? While Google can do traditional relevance matching the way it does in regular search, in Google News this grouping is done by algorithms that help determine similarity between pieces of content.

As shown in the example image above, Google has successfully grouped numerous articles on the passage of the infrastructure bill on August 10, 2021. As you might expect, each article focused on describing the event and the bill itself likely has substantial similarities in content. Recognizing these similarities and grouping the articles accordingly is also an example of unsupervised machine learning in action.
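To make that idea concrete, here is a toy Python sketch of grouping by content similarity. The headlines are invented, and production systems use far richer representations than raw word counts, but the principle of finding structure without labeled outputs is the same.

```python
# Bare-bones unsupervised grouping: measure how similar articles are to
# each other using bag-of-words cosine similarity. Headlines are invented.
from collections import Counter
from math import sqrt

articles = [
    "senate passes infrastructure bill in bipartisan vote",
    "infrastructure bill passes senate after months of talks",
    "new smartphone launch draws long lines",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

vectors = [Counter(text.split()) for text in articles]
for i in range(len(articles)):
    for j in range(i + 1, len(articles)):
        print(f"similarity({i}, {j}) = {cosine(vectors[i], vectors[j]):.2f}")

# The two infrastructure headlines score far higher with each other than
# either does with the smartphone story, so a clustering step would put
# them in the same group, no labels required.
```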

Another interesting class of machine learning is what we call “recommender systems.”  We see this in the real world on e-commerce sites like Amazon, or on movie sites like Netflix. On Amazon, we may see “Frequently Bought Together” underneath a listing on a product page.  On other sites, this might be labeled something like “People who bought this also bought this.”

Movie sites like Netflix use similar systems to make movie recommendations to you. These might be based on specified preferences, movies you’ve rated, or your movie selection history. One popular approach to this is to compare the movies you’ve watched and rated highly with movies that have been watched and rated similarly by other users.

For example, if you’ve rated 4 action movies quite highly, and a different user (who we’ll call John) also rates action movies highly, the system might recommend to you other movies that John has watched but that you haven’t. This general approach is what is called “collaborative filtering” and is one of several approaches to building a recommender system.
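Here is a deliberately tiny Python sketch of that flow. The users, movies, and ratings are invented, and the similarity measure is heavily simplified, but it shows the collaborative-filtering shape: find the rater most like you, then recommend what they liked and you haven’t seen.

```python
# Toy collaborative filtering: recommend movies liked by the user whose
# ratings most resemble yours. All names and ratings are invented.
ratings = {
    "you":  {"Die Hard": 5, "Mad Max": 4, "Speed": 5, "John Wick": 4},
    "john": {"Die Hard": 5, "Mad Max": 5, "Speed": 4, "Heat": 5, "Ronin": 4},
    "mia":  {"The Notebook": 5, "Amelie": 4, "Die Hard": 1},
}

def similarity(a: dict, b: dict) -> float:
    """Score two users by how closely they rate the movies they share."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(5 - abs(a[m] - b[m]) for m in shared) / len(shared)

me = ratings["you"]
others = {user: r for user, r in ratings.items() if user != "you"}
best_match = max(others, key=lambda user: similarity(me, others[user]))
recommendations = [movie for movie, score in others[best_match].items()
                   if movie not in me and score >= 4]

print(best_match, recommendations)  # -> john ['Heat', 'Ronin']
```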

Note: Thanks to Chris Penn for reviewing this article and providing guidance.

The post Ask the expert: Demystifying AI and Machine Learning in search appeared first on Search Engine Land.


Search marketers should remember their power in the Google-SEO relationship

Google’s recent change in algorithms that choose which titles show up in SERPs has caused quite a stir in the SEO community. You’ve all probably seen the tweets and blogs and help forum replies, so I won’t rehash them all here. But the gist is that a few people could not care less and lots of people are upset with the changes.

It’s something our PPC counterparts have experienced for a while now — Google doing some version of automation overreach — taking away more of their controls and the data behind what’s working and what’s not. We’ve all adapted to (not provided) and we continue to adapt, so I’m sure this will be no different, but the principle is what’s catching a lot of SEOs off guard. 

What’s going on?

Google has essentially said that SEOs (or those attempting SEO) have not always used page titles as they should, which is why Google has been adjusting titles for a while now (since 2012). “Title tags can sometimes be very long or ‘stuffed’ with keywords because creators mistakenly think adding a bunch of words will increase the chances that a page will rank better,” according to the Search Central blog. Or, in the opposite case, the title tag hasn’t been optimized at all: “Home pages might simply be called ‘Home’. In other cases, all pages in a site might be called ‘Untitled’ or simply have the name of the site.” And so the change is “designed to produce more readable and accessible titles for pages.”

Presumptuousness aside, someone rightfully pointed out that content writers in highly regulated industries often have to go through legal and multiple approvals processes before content goes live. This process can include days, weeks, months of nitpicking over single words in titles and headers. Only for Google’s algorithm to decide that it can do whatever it wants. Google’s representative pointed out that these companies cannot be liable for content on a third-party site (Google’s). It’s not a one-to-one comparison, but the same industries often have to do the same tedious approvals process for ad copy (which is why DSAs are often a no-no in these niches) to cover their bases for the content that shows up solely in Google’s search results.

When I work with SEO clients, I often tell them that instead of focusing on Google’s goals (which many get caught up in), we need to be focusing on our customers’ goals. (You can check out my SMX Advanced keynote, which is essentially all about this — or read the high points here.) Google says it’s moving toward this automation to improve the searchers’ experience. But I think it’s important to note that Google is not improving the user experience because it’s some benevolent overlord that loves searchers. It’s doing it to keep searchers using Google and clicking ads. 

Either way, the message seems to be “Google knows best” when it comes to automating SERPs. In theory, Google has amassed tons of data across probably millions of searches to train their models on what searchers want when they type something into the search engine. However, as the pandemonium across the SEO community indicates, this isn’t always the case in practice.

Google’s history of half-baked ideas

Google has a history of shipping half-baked concepts and ideas. It might be part of the startup culture that fuels many tech companies: move fast, break things. These organizations ship a minimum viable product and iterate and improve while the technology is live. We’ve seen it before with multiple projects that Google has launched, done a mediocre job of promoting, and then gotten rid of when no one liked or used it.

I wrote about this a while back when they first launched GMB messaging. Their initial implementation was an example of poor UX and poorly thought out use cases. While GMB messaging may still be around, most SMBs and local businesses I know don’t use it because it’s a hassle and could also be a regulatory compliance issue for them.

The irony is not lost on us that Danny Sullivan thought it was an overstep on Google’s part when it affected a small business in 2013. The idea would be that the technology would hopefully evolve, right? Google’s SERP titles should be more intuitive, not word salads pulled from random parts of a page.

This title tag system change seems to be another one of those that maybe worked fine in a lab, but is not performing well in the wild. The intention was to help searchers better understand what a page or site is about from the title, but many examples we’ve seen have shown the exact opposite. 

Google and its advocates continue to claim that this is “not new” (does anyone else hear this phrase in Iago’s voice from Aladdin?), and they’re technically correct. The representatives and Google stans reiterate that the company never said it would use the title tags you wrote, which, given how terrible this first iteration is turning out to be in the SERPs, almost seems like a bully’s playground taunt to a kid who’s already down. 

Google is saying they’re making this large, sweeping change in titles because most people don’t know how to correctly indicate what a page is about. But SEOs are often skilled in doing extensive keyword and user research, so it seems like of all pages that should NOT be rewritten, it’s the ones we carefully investigated, planned, and optimized.

How far is too far?

I’m one of those people who doesn’t like it, but is often resigned to the whims of the half-baked stunts that Google does because, really, what choice do I have? Google owns their own SERP, but we, as SEOs, feel entitled to it because it’s our work being put up for aggregation. It’s like a group project where you do all the work, and the one person who sweeps in last minute to present to the class mucks it all up. YOU HAD ONE JOB! So while we can analyze the data and trends, we also need to make our feedback known.

SEOs’ relationship with Google has always been chicken and egg to me. The search engine would not exist if we didn’t willingly offer our content to it for indexing and retrieval (not to mention the participation of our PPC counterparts), and we wouldn’t be able to drive such traffic to our businesses without Google including our content in the search engine.

Why do marketers have such a contentious relationship with Google? To put it frankly, Google does what’s best for Google, and often that does not align with what’s best for search marketers. But we have to ask ourselves where is the line between content aggregator and content creator? I’m not saying that the individuals or teams at Google are inherently evil or even have bad intentions. They actually likely have the best aspirations for their products and services. But the momentum of the company as a whole feels perpetual at this point, which can feel like we practitioners have no input in matters.

We’ve seen Google slowly take over the SERP with their own properties or features that don’t need a click-through — YouTube, rich snippets, carousels, etc. While I don’t think Google will ever “rewrite” anything on our actual websites, changes like this make search marketers wonder what is the next step? And which of our critical KPIs will potentially fall victim to the search engine’s next big test?

When I interviewed for this position at Search Engine Land, someone asked me about my position on Google (I guess to determine if I was biased one way or another). I’m an SEO first and a journalist second, so my answer was essentially that Google exists because marketers make it so. 

To me, the situation is that Google has grown up beyond its original roots as a search engine and has evolved into a tech company and an advertising giant. It’s left the search marketers behind and is focused on itself, its revenues, its bottom line. And that’s what businesses are wont to do when they get to that size. The power dynamic is heavily weighted to Google’s side, and they know it. But the key is to remember that we’re not completely powerless in this relationship. Google’s search engine, as a business, relies on us (in both SEO and PPC) participating in its business model.

The post Search marketers should remember their power in the Google-SEO relationship appeared first on Search Engine Land.


SERP trends of the rich and featured: Top tactics for content resilience in a dynamic search landscape

These days, there is a lot going on at the top of the SERP. New features, different configurations, variations for devices and challenges for specific verticals pop up every day. Seasoned SEOs will tell you that this is ‘not new’, but the pace of change can sometimes pose a challenge for clients and webmasters alike. 

So how do you get ahead of the curve? How do you make your site better prepared for possible new SERP enhancements and make better use of what’s available now? I’ve outlined 4 potential strategies for SERP resilience in my session at SMX Advanced:

1. Prepare to share (more)

From Featured Snippets and Google for Jobs to Recipe Cards and Knowledge Graphs, Google is unceasing in its efforts to create more dynamic, user-pleasing SERPs. This is great for users because Google can serve lightning-fast results that are full of eye-catching information and easy to navigate even on the smallest mobile screen. And with the range of search services available for a query – Google Lens, Google Maps and Google Shopping, to name but a few – the granularity of Google’s ability to provide information at the most crucial point of need is immense. 

For SEOs, this means that it’s becoming increasingly rare to rank exclusively in the top spot of a given query. Even without ads, and even with the Featured Snippet, the top results can often include a mix of links, videos, and/or images from different domains. 

Have a look at this Featured Snippet for the query “What is a Featured Snippet”. 

Here, the main paragraph and blue link come from the Backlinko site, but there are four linked images in the carousel before you get to the text. And only two of them are from Backlinko’s page; the others are from pages ranking 9th and 2nd on the main SERP page. So, while some tools would report the paragraph snippet result as ranking “first”, from a user perspective the text result is the fifth clickable link.

And while this scenario is not new, it does illustrate something that we are seeing more regularly and in more complex configurations. 

For instance, in the query for 50 Books to Read Before You Die, Google is serving a host of results ‘From sources across the web’ in an accordion. Then within each drop-down is a carousel of results that includes web pages and bookmarked YouTube videos. 

That means that plain blue links aren’t visible until after a row of ads and then after 20+ links from the accordion carousel.

This presents both challenges and opportunities for SEOs.

Strategic challenges from mixed SERPs

For those who are looking to protect traffic, the challenge is to ensure that you are offering users a means of connecting with your content via multiple media formats and channels. Relying on a single content type (a written blog post without images, for instance) could leave your traffic vulnerable to changes in the SERP. So, a strategic approach to your most important SERPs should include a mix of written, video and/or image content. This will ensure that you are optimized for how users are searching, as well as what they are searching for. 

Strategic opportunities from mixed SERPs 

For sites looking to gain traffic from established rivals, top SERPs with multiple site links present an opportunity to gain precious ground by optimizing for search services that your rivals are ignoring. So as well as looking for keyword gaps, make sure your content plan is looking for gaps in media formats.

Use a good technical SEO framework

In both cases, the multi-media content you develop should be underpinned by sound technical infrastructure, like a good CDN, image sitemaps for unique images, structured data, and well-formatted on-page SEO.

2. Invest in knowledge hubs

In November 2020 and January 2021, Google tested Featured Snippet contextual links, which added reference links to other websites from within the Featured Snippets. Then in May 2021, a Google “bug” showed Featured Snippets that included links to further searches in Google.

While Google has yet to outline any specific plans to roll this out as standard, it has been known to test new SERP features on live results in the past. For instance, it was testing image carousels with Featured Snippets in 2018 before the wider rollout in 2020.

Not only that, but rivals at Bing are already using these techniques extensively. Their SERPs are bursting with contextual links pulling images, copy, and clickable links into the SERPs from Wikipedia. 

This suggests to me that contextual links are likely to become a Google thing in the near future. 

How might you be able to optimize for this possible feature development? In my humble opinion, it is worth spending some time investing in knowledge hub-style content. Hubs enable you to become a reference for users on your own site and across the wider web. While it is likely that much of the traffic for potential contextual links would go to reference sites like Wikipedia, it is also the case that not every niche term or topic will have a wiki page. So, if you start building now, you could be adding value for current users and for the future needs of Google’s bots. 

Example of a simple knowledge hub

A knowledge hub can be technically simple, or complex, but should be underpinned with good on-page SEO and unique content that is written in natural language. 

3. Stay ready so you don’t have to get ready (with structured data)

At the top of the SERP, plain blue links are becoming increasingly rare, and today your search results are likely to include a mix of links and information from Google-managed channels like:

  • Google for Jobs
  • Video snippets, predominantly from Google-owned YouTube
  • Structured Data-enabled Rich Results, such as recipe cards, and/or Google Ads 

These features are generated using Google APIs, YouTube, services like Google Ads, and also largely through Structured Data specifications. This serves Google well because they deliver the information with a more consistent user experience, which is particularly crucial given the constraints of a mobile-first web.
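To make the structured data piece concrete, here is a minimal sketch of what generating schema.org markup for a recipe page might look like. The recipe details and image URL are placeholders; Python is used here only to assemble and print the JSON-LD that would sit in the page’s <head>.

```python
# Assemble a minimal schema.org Recipe payload as JSON-LD, the markup
# behind rich results like recipe cards. All details are placeholders.
import json

recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Pumpkin Spice Latte",
    "author": {"@type": "Person", "name": "Jane Example"},
    "image": "https://www.example.com/images/pumpkin-spice-latte.jpg",
    "recipeIngredient": ["2 shots espresso", "1 cup milk", "2 tbsp pumpkin puree"],
    "recipeInstructions": [
        {"@type": "HowToStep", "text": "Steam the milk with the pumpkin puree."},
        {"@type": "HowToStep", "text": "Pour over the espresso and serve."},
    ],
}

# Embed the result in the page so crawlers can parse it.
print(f'<script type="application/ld+json">\n{json.dumps(recipe, indent=2)}\n</script>')
```

Google’s Rich Results Test is the sanity check to run once markup like this is live on the page.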

I bring this up in a discussion about SERP resilience because, as these new and shiny features are added, they take the place of plain blue links and, historically, they have been seen to replace Featured Snippets. 

For instance, we saw a significant drop in Featured Snippets in 2017 as Google-managed Knowledge Panels increased.

During this time, one of the most prominent Featured Snippet category types was for recipes. But Google soon found a more user-friendly way to display this content via mobile-friendly Rich Results.

Now, you might say that 2017 was a long time ago, but we saw similar activity in February of this year, when Moz reported that as the number of Featured Snippets temporarily dropped to historic lows, rich results for video rose at the same time.

And though many of the Featured Snippets returned, the phenomenon of SERPs neither being created nor destroyed, but simply changing form, is a regular occurrence. Even this summer, the prevalence of People Also Ask has been steadily declining as video results increase.

June/July 2021 SERP feature changes as reported by Semrush Sensor

This means that Google SERP developments can cause traffic disruption for pages that are optimized for a single type of search result. 

The TLDR of this is, don’t put all your eggs in one basket. 

If you have a page that performs well as a top-ranking link, Featured Snippet or other feature, don’t expect that to be the case forever, because the SERP itself could completely change.

  • Protect your traffic by optimizing your pages for relevant APIs and strategic structured data for your niche, alongside your on-page optimizations.
  • Gain traffic by identifying competitors who are not using structured data and targeting your efforts accordingly (a quick audit is sketched below).
  • Monitor your niche for changes to Rich Results and Google features, and plan accordingly. This will involve many of your regular tools, but also manually reviewing the SERP to understand new and emerging elements.
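As a starting point for that competitor audit, here is a rough Python sketch (assuming the requests and BeautifulSoup libraries; the URL is a placeholder) that fetches a page and lists any schema.org types declared in its JSON-LD blocks.

```python
# List the schema.org @type values declared on a page via JSON-LD.
# An empty result suggests a structured data gap you could exploit.
import json

import requests
from bs4 import BeautifulSoup

def structured_data_types(url: str) -> list:
    """Return the @type values found in a page's JSON-LD blocks."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue  # skip malformed blocks
        items = data if isinstance(data, list) else [data]
        found.extend(item.get("@type", "?") for item in items if isinstance(item, dict))
    return found

# Placeholder URL: point this at a competitor page you want to audit.
print(structured_data_types("https://competitor.example.com/recipes/lasagna"))
```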

4. Dig into core topics for passage ranking

Google’s commitment to natural language processing within its algorithms gained pace in the last 12 months when Google introduced Passage Ranking at autumn’s Search On 2020 and MUM at Google I/O in Spring 2021.

Passage Ranking is often confused with jump-to-text links, but Google has explained that it is intended to help the company understand content more intelligently. Specifically, it enables Google to find ‘needle in a haystack’ passages that answer queries more accurately, even if the page as a whole is not particularly well-formatted. 

The analogy that I often use is this: if we imagine that the SERP is a playlist of songs, then previously, the whole song would have to be strong to make it onto the list. Passage Ranking essentially says that if the rest of the song is so-so, but the guitar solo is really, really good, then it’s still worth adding that song to the playlist. 

On February 10, 2021, this update went live. Google said that it would affect 7% of searches, and SEOs had a lot of questions:

  • Will Passage Ranking affect what the SERPs look like? 
  • Will Passage Ranking affect what Featured Snippets look like? 
  • Will Passage Ranking affect Featured Snippets exclusively?

Speaking with Barry Schwartz, Google’s Search Liaison, Danny Sullivan, said the answer was no, no, and no. 

So why am I bringing this up in a discussion about SERPs?

Well, since Passage Ranking is now a contributing factor for ranking, and Featured Snippets are elevated from the top-ranking SERP results, in my opinion we are likely to see more variation in the kinds of pages that achieve Featured Snippet status. So alongside pages that follow all of the content formatting best practices to the letter, we are likely to see more pages that offer query-satisfying information in a less polished way. 

“The goal of this entire endeavor is to help pages that have great information kind of accidentally buried in bad content structure to potentially get seen by users that are looking for this piece of information” – Martin Splitt

Confronted with these results, SEOs who love an If X, Then Y approach may be perplexed, but my research has led me to believe that one of the contributing factors is user intent. 

Ranking shifts directly following the Passage Ranking update suggest that the content that was boosted sought to answer both the what and why behind the user queries. Case in point, a website that was traditionally optimized for the query different colors of ladybirds owned the Featured Snippet in January. 

This page is optimized using many of the established SEO techniques:

  • Literally optimized for the search query 
  • Includes significant formatting optimizations 
  • Covers keyword topic directly to answer What are the different colors of Ladybirds
  • Core Copy is around 500 words

But after the Passage Ranking update, the same query returned a page that was less literally optimized but provided better contextualization. This usurper showed a better understanding of why ladybirds were different colors and jumped from 5th to 1st position during February.

Reviewing the page itself, we see that, in contrast to the earlier snippet, this page:

  • Core Copy is over 1000 words
  • Includes limited formatting
  • Covers intent-based topic, in general, to answer Why

Other examples and other big movers during this time showed a similar correlation with intent-focused search results. 

Possible example of Passage Ranking impact on Investopedia
Possible example of Passage Ranking impact on National Zoo

In each case, it seems that Google is attempting to think ahead about user intent, replying to queries with less literal results to better satisfy the thought process behind them. Its machine learning tools now allow Google to better understand topics as well as keywords. 

So, what does this mean for SEOs?

Passage ranking looks like good news for long-form content

Well, where you have a genuine, unique perspective on a topic, Passage Ranking could be an incentive to create more thorough and in-depth content centered around users’ needs rather than search volume alone. 

  • Protect your traffic by optimizing your content for longtail keywords and intent.
  • Gain snippet traffic by creating intent-focused content. Answer the so what and don’t be afraid of detail.
  • Consider topics as well as keywords in content, navigation and customer journey. 

From a technical SEO perspective, top tactics include a solid internal link architecture and long-form content templates with tables of contents, as in the sketch below.
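As a sketch of that tactic, the snippet below (assuming BeautifulSoup; the HTML is a stand-in for a real long-form template) pulls the h2 headings out of a page and builds an anchor-linked table of contents.

```python
# Build a jump-link table of contents from a long-form page's h2 headings.
from bs4 import BeautifulSoup

html = """
<article>
  <h2>What is passage ranking?</h2><p>...</p>
  <h2>Why do ladybirds vary in color?</h2><p>...</p>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
toc_items = []
for heading in soup.find_all("h2"):
    text = heading.get_text(strip=True)
    slug = text.lower().replace("?", "").replace(" ", "-")
    heading["id"] = slug  # give the heading an anchor to jump to
    toc_items.append(f'<li><a href="#{slug}">{text}</a></li>')

toc = "<nav><ul>" + "".join(toc_items) + "</ul></nav>"
print(toc + str(soup))  # ToC first, then the article with anchored headings
```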

How can you build SEO resilience for a dynamic SERP?

The same way you dress for a pumpkin-spiced autumn day: with layers.

In this blog, I’ve discussed tactics for 

  • Optimizing content for mixed media Featured Snippet panel results
  • Creating knowledge hubs for potential contextual linking developments
  • Building structured data into your website before rich results arise
  • Using intent-focused, long-form content to potentially benefit from Passage Ranking

There is no single tactic that works in isolation. The SERP is so dynamic at the moment that aiming for, or banking on, a single part of the SERP is likely to leave you vulnerable to traffic disruption if/when things evolve. Think about how you can use these tactics to build upon and level up your existing SEO foundations. Change is the only constant; plan accordingly. 

The post SERP trends of the rich and featured: Top tactics for content resilience in a dynamic search landscape appeared first on Search Engine Land.


Top stories images aren’t showing in Google Search

Beginning early this morning, there have been numerous reports of images not loading in Google’s Top stories carousel (as seen below). The issue seems to affect search results globally and Google has confirmed that it’s a bug.

The Top stories carousel, where featured images are not currently loading. Some have also taken screenshots in which only one or two of the stories feature an image that loaded as usual.

A problem on Google’s end. “Yeah, it looks like an issue on our side,” Google’s John Mueller tweeted regarding the issue. The company is currently working to correct the bug, Danny Sullivan, the company’s public search liaison, has confirmed.

Why we care. A blurred featured image may negatively affect your clickthrough rate, so make sure to annotate your reports to reflect this oddity. Some professionals have taken screenshots showing the Top stories carousel with just one or two images that loaded successfully, which could mean fewer clicks for the stories that didn’t show a featured image. And, since the Top stories carousel is an important source of visibility and traffic for some publishers, this could also impact advertising revenue and other marketing opportunities that depend on getting a user onto your site if the bug goes unresolved for an extended period. Fortunately, Google is already aware of the issue — we’ll continue to provide updates as they come in.

The post Top stories images aren’t showing in Google Search appeared first on Search Engine Land.


There are new requirements to appear in Google Podcasts recommendations

Beginning on September 21, Google will enforce new requirements for podcasts to show in recommendations on the Google Podcasts platform, the company told podcast owners via email on Thursday. Podcasts that do not provide the required information can still appear in Google and Google Podcasts search results and users can still subscribe to them, they just won’t be eligible to be featured as a recommendation.

The new requirements. Starting on September 21, to be eligible to show as a recommendation, podcast RSS feeds must include:

  • A valid, crawlable image: This image must be accessible to Google (not blocked to Google’s crawler or require a login).
  • A show description: Include a user-friendly show description that accurately describes the show.
  • A valid owner email address: This email address is used to verify show ownership. You must have access to email sent to this address.
  • A link to a homepage for the show: Linking your podcast to a homepage will help the discovery and presentation of your podcast on Google surfaces.
  • The podcast author’s name: A name to show in Google Podcasts as the author of the podcast. This does not need to be the same as the owner.

More details can be found in the accompanying forum post.
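If you want a rough pre-flight check of your own feed before September 21, the sketch below (Python standard library only; the feed URL is a placeholder) looks for the required fields under the common itunes RSS namespace. Feeds can declare these fields with different tags, so treat it as a starting point rather than a validator.

```python
# Rough check that a podcast RSS feed carries the fields Google's new
# recommendation requirements call for. The feed URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

ITUNES = "{http://www.itunes.com/dtds/podcast-1.0.dtd}"
FEED_URL = "https://www.example.com/podcast/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    channel = ET.parse(response).getroot().find("channel")

checks = {
    "image": channel.find(f"{ITUNES}image") is not None
             or channel.find("image") is not None,
    "description": bool((channel.findtext("description") or "").strip()),
    "owner email": channel.find(f"{ITUNES}owner/{ITUNES}email") is not None,
    "homepage link": bool((channel.findtext("link") or "").strip()),
    "author": bool((channel.findtext(f"{ITUNES}author") or "").strip()),
}

for field, ok in checks.items():
    print(f"{'OK     ' if ok else 'MISSING'} {field}")
```

Note that this only confirms the tags are present; whether the image is actually crawlable by Google still needs a manual check.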

Why we care. Recommendations in Google Podcasts provide greater visibility, which can help the podcasts that are able to appear there attract more listeners. Following the new requirements will help to ensure that your podcasts are eligible for those free, highly visible placements.

In addition, recommendations in Google Podcasts are personalized, so there’s a higher likelihood that, if your podcast appears as a recommendation, it’ll be more relevant to a listener’s interests, which may benefit your marketing goals.

The post There are new requirements to appear in Google Podcasts recommendations appeared first on Search Engine Land.


Display & Video 360 gets new frequency and reach metrics

Google is adding a dedicated data visualization in Display & Video 360 (DV360) to show reach gains for each campaign that spans across channels and has a frequency goal set at the campaign level, the company announced Thursday. In addition, DV360 will also calculate the added reach advertisers get for each Programmatic Guaranteed deal using DV360’s frequency management solution.

DV360’s frequency management data visualization. Image: Google.

Why we care

Having access to real-time reach gains can help advertisers gauge their campaign performance and manage their programmatic campaigns across channels. This new data visualization may also enable advertisers to save time that might otherwise be spent experimenting to test the impact of their frequency management strategies across various media types.

And, the added reach data for Programmatic Guaranteed deals can help advertisers understand how those deals add to the incremental reach they get for their frequency management efforts. 

More on the announcement

  • DV360 uses log data to compare the reach obtained by a cross-channel campaign against the reach that an advertiser would have obtained with separate campaigns, each with a single channel and its own frequency goal.
  • The information in the data visualization can also be accessed at the advertiser or partner level by creating an offline report in the standard DV360 reporting.

The post Display & Video 360 gets new frequency and reach metrics appeared first on Search Engine Land.


Google passes on 2% “Regulatory Operating Cost” for ads served in India and Italy

Beginning on October 1, 2021, Google will include a 2% “Regulatory Operating Cost” surcharge to advertisers’ invoices for ads served in India and Italy, according to an email sent to Google advertisers on Tuesday. The surcharge applies to ads purchased through Google Ads and for YouTube placements purchased on a reservation basis.

A screenshot of the email sent to advertisers. The link at the bottom takes advertisers to Google Ads’ jurisdiction-specific surcharges page.

Why we care

Google was already passing on digital service taxes to advertisers for ads served in Austria, Turkey, the UK, France and Spain. Beginning in October, it will be doing the same for ads served in India and Italy.

Advertisers should be aware that these fees are charged in addition to their account budgets; for example, $1,000 of spend on ads served in India would accrue a $20 surcharge on top of the budgeted amount. As such, the surcharges won’t be reflected in the cost per conversion metrics in campaign reporting. Advertisers should take these factors into account when creating their budgets.

Additionally, as Greg Finn, partner at digital agency Cypress North, advised on Twitter when Google first announced that it was passing on this surcharge last year, applying the “People in or regularly in your targeted locations” setting can result in racking up more surcharges.

The post Google passes on 2% “Regulatory Operating Cost” for ads served in India and Italy appeared first on Search Engine Land.


Google publishes timelines for Privacy Sandbox proposals

On Friday, Google published a timeline reflecting the stages of development for various categories of Privacy Sandbox initiatives. 

The Privacy Sandbox timeline, as of July 23, 2021.

The timeline (shown above) divides initiatives into four categories (“fight spam and fraud on the web,” “show relevant content and ads,” “measure digital ads,” and “strengthen cross-site privacy boundaries”). The phases indicated on the timeline are as follows:

  • Discussion – The technologies and their prototypes are discussed in forums such as GitHub or W3C groups.
  • Testing – All technologies for the use case are available for developers to test and may be refined based on results.
  • Ready for adoption – Once the development process is complete, the successful technologies are ready to be used at scale. They will be launched in Chrome and ready for scaled use across the web.
  • Transition period: Stage 1 – APIs for each use case are available for adoption. Chrome will monitor adoption and feedback carefully before moving to the next stage.
  • Transition period: Stage 2 – Chrome will phase out support for third-party cookies over a three-month period finishing in late 2023.

Why we care

This timeline provides search marketers with a general idea of when various Privacy Sandbox initiatives should be ready for adoption. That can give marketers some indication as to whether the company will meet its new deadline (late 2023) to deprecate third-party cookies.

Transition period: Stage 1 (in which APIs for each use case are available for adoption) is currently forecast to begin in Q4 2022. Sometime after that, we should have a clearer picture of what advertising with Google looks like as third-party cookies are phased out.

More on the news

  • APIs shown on the timeline are based on Google’s current expectations and are subject to change. The timeline will be updated monthly.
  • Google expects Stage 1 of the transition period to last nine months. At some point during Stage 1, the company will announce a new timeline that decreases third-party cookies’ “Time to Live.”
  • The transition period will begin once APIs for all of the use cases are ready for scaled adoption. Chrome will announce the start of the transition on the timeline page and on the Keyword blog.

The post Google publishes timelines for Privacy Sandbox proposals appeared first on Search Engine Land.


Google’s three-strikes ad policy isn’t the problem, it’s policy application that worries advertisers

Over the last few years, the movement in favor of greater transparency and consumer protection has garnered more mainstream attention. In response, Google is making greater efforts to explain how its systems work. On the organic side, the search engine is now showing why it ranked a specific result. On the paid side, the company introduced a three-strikes pilot program earlier this week in order to prevent harmful ads from showing on its platform.

The three-strikes program will help to improve consumer safety on Google and advertisers largely agree with the policies, but the company’s record with incorrectly flagged ads has PPC professionals concerned that false positives may carry greater ramifications, ranging from more time spent communicating with Google Ads support representatives to account suspension for repeated violations that aren’t resolved in time.

Google Ads’ three-strikes pilot system

Google Ads’ new three-strikes program, which begins in September 2021, applies to violations of its Enabling Dishonest Behavior, Unapproved Substances and Dangerous Products or Services policies. “This includes ads promoting deceptive behavior or products such as the creation of false documents, hacking services, and spyware, as well as tobacco, drugs and weapons, among other types of content,” the company said its announcement. Though these ad types have long been prohibited, Google’s system to enforce these policies is new.

If an advertiser is found to be in violation of Google’s policies, they’ll receive a warning for the first infraction. After that, penalties become increasingly strict with each violation, leading up to account suspension after the third strike.

  • Warning. Trigger: first instance of ad content violating the Enabling Dishonest Behavior, Unapproved Substances or Dangerous Products or Services policies. Penalty: none beyond the removal of the relevant ads.
  • First strike. Trigger: violation of the same policy for which you’ve received a warning, within 90 days. Penalty: the account is placed on a temporary hold for three days, during which ads will not be eligible to run.
  • Second strike. Trigger: violation of the same policy for which you’ve received a first strike, within 90 days of that strike. Penalty: the account is placed on a temporary hold for seven days, during which ads will not be eligible to run. This serves as the last and final notice for the advertiser to avoid account suspension.
  • Third strike. Trigger: violation of the same policy for which you’ve received a second strike, within 90 days of that strike. Penalty: account suspension for repeat violation of Google’s policies.

Strikes expire after 90 days and Google has systems in place to prevent advertisers from circumventing its policies (by creating new accounts to bypass a suspension, for example). The company also plans to expand its three-strikes program after the initial pilot to include more policy types.

“The policies aren’t the issue, it’s the unequal and sometimes plain incorrect application of the policy”

The PPC professionals that spoke to us for this article were largely in favor of the three-strikes system. However, Google’s enforcement, which can be haphazard, has them concerned.

“I want to be clear that the policies aren’t the issue, it’s the unequal and sometimes plain incorrect application of the policy,” said Amalia Fowler, director of marketing at Snaptech Marketing, “It’s the fact that an account where I have previously appealed multiple times still gets flagged for the same reason . . . If I trusted the appeal process to be smooth or that repeat flags wouldn’t occur, I would not be as worried as I am.”

“Ludicrous disapproval[s].” One might expect flagged ads to be a fact of life in heavily regulated sectors such as healthcare, but stories of inappropriately flagged ads are quite common, even in sectors like CPG and event management.

“Two of my clients were disapproved for unapproved substances and dangerous products earlier this year,” said Amy Bishop, owner of Cultivative, “I distinctly remember because the clients and I got a good chuckle out of it, considering one of them is in the event management SaaS space and the other was in the CPG space.” “It was an equally ludicrous disapproval for both of their businesses,” she added, noting that both instances were successfully appealed.

“I’ve had ‘false flags’ come up specifically in these categories a handful of times over the past year (including display ads for a cybersecurity company being disapproved for promoting drug use!),” said Tim Jensen, campaign manager at Clix Marketing. While these examples are especially concerning for ads identified as violating Google’s three-strikes policy, it speaks to a larger issue that PPC professionals have been navigating for some time: “I have clients that receive erroneous disapprovals all the time,” Bishop said, caveating that the disapprovals aren’t all for categories addressed by the new policy.

The Google Ads team seems to be aware. “I assure you, there’s no drinking during ad reviews :),” Ginny Marvin, ads product liaison at Google, tweeted in response to a comment made in jest, “I’ve passed this feedback along to the teams.”

Google is no stranger to complaints from advertisers about inappropriately flagged ads, but the new consequences raise the stakes, and advertisers want assurances that the odds aren’t stacked against them. It’s also in Google’s best interest to improve its ad violation detection systems, since ads are the company’s main source of revenue and tying up Google Ads representatives with a deluge of support requests is an unsustainable proposition.

The potential impact on advertisers

Previously, falsely flagged ads might have been a minor frustration for marketers. But, under the new system, they could prove to be roadblocks to revenue.

For clients. “For my clients in particular, there are two or three that continuously get inappropriately flagged for substances and dangerous weapons, and we are constantly appealing,” Fowler said. “Google Ads is what drives their e-commerce presence, which makes up a very large portion of their overall business, so getting flagged incorrectly is one thing, but having it result in account suspension is another,” she added.

“I have clients that receive erroneous disapprovals all the time,” Bishop said, caveating that “They aren’t for the categories being addressed in the three-strike rule, so I doubt this will impact my clients but I can see where folks with clients in other industries, especially healthcare, might have concerns.”

Fortunately for advertisers, strikes expire after 90 days and they can appeal strikes they believe were inappropriately applied. Successful appeals aren’t counted towards the three-strike limit.

For agencies. “For the agency, this could entail additional time spent haggling with support/reps to ensure that ‘false flags’ don’t count against us,” Jensen said.

“If I write an ad, it gets disapproved, I appeal, and it gets rejected, how many more times am I likely to try with this short runway?” Fowler said, adding, “In cases where the policy is administered unfairly or poorly, or simply incorrectly, it stifles a company’s ability to advertise and adds a level of anxiety.”

More time spent “haggling” with Google Ads representatives may mean less time spent optimizing the actual campaign. And, while these false flags can stall progress, accounts that are prone to receiving a lot of them may end up getting suspended. The combination of these factors can negatively affect the agency-client relationship.

What advertisers can do to prepare for Google’s three-strikes system

In all likelihood, Google is working to improve its systems to minimize incorrectly flagged ads. However, the three-strikes program is set to begin in a matter of weeks, so it may be in advertisers’ best interests to tread carefully.

“Watch for violations that are flagged and be ready to appeal,” Jensen advised, “You can’t always predict when ads might be flagged, but just be extra mindful of display ad imagery and wording that might somehow be construed to fit these policies.”

“Understand that it isn’t just ad text that causes these violations,” Fowler said, recommending that advertisers check their extensions, destination URLs and their site as a whole. “If you have an account that historically has been flagged under these policies despite not violating them, make sure you are happy with your account setup and the number of ads. I would create ads, make sure they get approved, and then pause them for future testing so you don’t have to create them under this new policy,” she added.

Additionally, communicating the change with stakeholders and clients ahead of time can help you frame their expectations once the new policies come into effect. “In the future, we plan to expand the strikes system in phases to scope more of our policies in,” Google said in its announcement, which means that, eventually, more advertisers may potentially run afoul of the system, so it’s best to get ahead of it now instead of allowing that first warning or strike to be issued.

The post Google’s three-strikes ad policy isn’t the problem, it’s policy application that worries advertisers appeared first on Search Engine Land.
