Category: SEO

Microsoft Bing adds automobile and car search features

Microsoft Bing now has new car and automobile search features to help you find your next car, the company announced on its blog. You can search for used cars on Bing, or search for specific car makes and models, and Bing will provide a search box with filters to help you find the car you are looking for.

What it looks like. Here are some screenshots of the search box in Microsoft Bing for searches on Kia K5 and SUV for sale:

Also, there are links to the MSN Autos marketplace in the knowledge panels for these car-related searches, under the “learn more” and “shop used cars” links right above the car specifications.

Local. You can also browse local car inventory on Bing Local and Bing Maps. Microsoft said you “can now quickly and easily find vehicles for sale on Bing.com’s new Local Auto Dealership Storefront or using the browse-by-map feature.”

List your car. Microsoft also allows you to list your car on MSN Autos, where you fill out a listing form that posts your vehicle for sale. “In just a few simple steps, sellers enter the basics about the car for sale, upload photos, add a detailed description and finish the listing with any extra info that might make the car appealing to buyers. Your car is now for sale,” Microsoft said.

Why we care. Buying a car in this market is incredibly hard: there is very little inventory, and what is available is expensive and hard to get your hands on. Maybe this will make it easier for you to find the car you are looking for, and to sell your used car faster.


How Google uses artificial intelligence In Google Search

As Google continues to leverage more artificial intelligence and machine learning in Google Search, one may wonder how AI and machine learning help Google Search perform its daily tasks. Since 2015, when Google introduced its first AI in search, named RankBrain, Google has continued to deploy AI systems to better understand language and thus improve the search results it presents to searchers.

Several months ago, we sent Google a number of questions about how it uses AI in search, including RankBrain, neural matching, BERT and Google’s latest AI breakthrough, MUM. We’ve come away with a better understanding of when Google uses AI, which AI does what in Google Search, how these various AI algorithms may work together, how they have changed over the years and what, if anything, search marketers need to know about how Google uses AI in search.

We spoke with Danny Sullivan, the Public Liaison for Google Search, to help answer many of these questions. In short, RankBrain, neural matching and BERT are used in Google’s ranking system across many, if not most, queries, and they look at understanding the language of both the query and the content being ranked. MUM, however, is not currently used for ranking; it is only used to understand COVID-19 vaccine name variations and to power the related topics feature in video results.

It starts by writing content for humans

You hear it all the time from Google representatives and from many SEOs: write content for humans. In the early days of SEO, when the algorithms were simpler, many SEOs would craft content for each and every search engine (back then, there were dozens of different search engines). Now there is primarily Google, with a little bit of Bing and some ruffling from DuckDuckGo, but the algorithms are much more complex, and with machine learning and AI, they understand language more like a human would.

So the advice Google has given is to write for humans; you can’t optimize your site for BERT or any other AI. If you write content that humans understand, the algorithms and AI that search engines use will also understand it. In short, this article is not aimed at giving you SEO tips on how to optimize your site for any specific AI, but rather at communicating how Google uses AI in Google Search.

Overview of AI used in Google Search

RankBrain. It starts with RankBrain, Google’s first attempt at using AI in search, which dates back to 2015. Google told us RankBrain helps it understand how words are related to concepts and can take a broad query and better define how that query relates to real-world concepts. While it launched in 2015 and was initially used in 15% of queries, Google said that now, in 2022, it is widely used across many queries, in all languages and regions. RankBrain does specifically help Google rank search results and is part of the ranking algorithm.

  • Year Launched: 2015
  • Used For Ranking: Yes
  • Looks at the query and content language
  • Works for all languages
  • Very commonly used for many queries

Here is an example provided by Google of how RankBrain is used: if you search for “what’s the title of the consumer at the highest level of a food chain,” Google’s systems learn from seeing those words on various pages that the concept of a food chain may have to do with animals, and not human consumers. By understanding and matching these words to their related concepts, RankBrain helps Google understand that you’re looking for what’s commonly referred to as an “apex predator.”

Neural matching. Neural matching was the next AI Google deployed in search; it was released in 2018 and then expanded to the local search results in 2019. In fact, we have an article explaining the differences between RankBrain and neural matching. Google told us neural matching helps it understand how queries relate to pages by looking at the entire query or the content on the page and understanding it within the context of that page or query. Today, neural matching is used in many, if not most, queries, for all languages, in all regions, across most verticals of search. Neural matching does specifically help Google rank search results and is part of the ranking algorithm.

  • Year Launched: 2018
  • Used For Ranking: Yes
  • Looks at the query and content language
  • Works for all languages
  • Very commonly used for many queries

Here is an example provided by Google of how neural matching is used: say you search for “insights how to manage a green.” Google said, “if a friend asked you this, you’d probably be stumped. But with neural matching, we’re able to make sense of this quizzical search. By looking at the broader representations of concepts in the query — management, leadership, personality and more — neural matching can decipher that this searcher is looking for management tips based on a popular, color-based personality guide,” Google told us.

BERT. BERT (Bidirectional Encoder Representations from Transformers) arrived in 2019; it is a neural network-based technique for natural language processing pre-training. Google told us BERT helps it understand how combinations of words express different meanings and intents, including looking at the sequence of words on a page, so even seemingly unimportant words in your queries are accounted for. When BERT launched, it was used in 10% of all English queries; it soon expanded to more languages and to almost all English queries. Today it is used in most queries and is supported in all languages. BERT does specifically help Google rank search results and is part of the ranking algorithm.

  • Year Launched: 2019
  • Used For Ranking: Yes
  • Looks at the query and content language
  • Works for all languages, but Google said BERT “plays [a] critical role in almost every English query”
  • Very commonly used for many queries

Here is an example provided by Google of how BERT is used. “If you search for ‘can you get medicine for someone pharmacy,’ BERT helps us understand that you’re trying to figure out if you can pick up medicine for someone else. Before BERT, we took that short preposition for granted, mostly surfacing results about how to fill a prescription,” Google told us.

MUM. MUM (Multitask Unified Model) is Google’s most recent AI in search. MUM was introduced in 2021 and then expanded at the end of 2021 for more applications, with a lot of promising uses in the future. Google told us that MUM helps Google not just with understanding language but also with generating language, so it can be used to understand variations in new terms and languages. MUM is not used for any ranking purposes right now in Google Search, but it does support all languages and regions.

  • Year Launched: 2021
  • Used For Ranking: No
  • Not query- or language-specific
  • Works for all languages, but not used for ranking purposes today
  • Used for a limited number of purposes

Currently, MUM is used to improve searches for COVID-19 vaccine information, and Google said it is “looking forward to offering more intuitive ways to search using a combination of both text and images in Google Lens in the coming months.”

AI used together in search but may be specialized for search verticals

Danny Sullivan from Google also explained that while these are individual AI-based algorithms, they often work together to help with ranking and understanding the same query.

Google told us that all of these AI systems “are used to understand language including the query and potentially relevant results,” adding that “they are not designed to act in isolation to analyze just a query or a page.” Previously, it may have been assumed and understood that one AI system may have looked more at understanding the query and not the content on the page, but that is not the case, at least not in 2022.

Google also confirmed that in 2022 RankBrain, neural matching, and BERT are used globally, in all languages that Google Search operates in.

And when it comes to web search versus local search versus images, shopping and other verticals, Google explained that RankBrain, neural matching, and BERT are used for web search. Other modes or verticals of Google Search such as images or shopping mode use separate, specialized AI systems, according to Google.

What about core updates and AI

As explained above, Google uses RankBrain, neural matching and BERT in most queries you enter into Google Search, but Google also has core updates. The broad core updates that Google rolls out a few times per year are often noticed by site owners, publishers and SEOs more than the releases of these larger AI-based systems.

But Google said these all can work together with core updates. Google said these three (RankBrain, neural matching and BERT) are the larger AI systems it has, but there are many AI systems within search, and some within the core updates that Google rolls out.

Google told us they do have other machine learning systems in Google Search. “RankBrain, neural matching, and BERT are just some of our more powerful and prominent systems,” Google said. Google added, “there are other AI elements that can impact core updates that don’t pertain to those specific three AI systems.”


Functions for Core Web Vitals Tactics with Cloudflare’s HTMLRewriter

Our Guide to A/B Testing for Core Web Vitals explained a series of small steps, using two services and a browser extension, to write tests for frontend code tactics. Thirty years ago, we would copy a page’s raw source and run find-and-replace operations until we could manage a facsimile of the page in a web-enabled folder, to demonstrate the same kinds of recommendations.

We don’t have to do that anymore.

Setting up a reverse proxy and writing software for conducting SEO twenty years ago was limited to a small set of companies that built and hosted the infrastructure themselves. Cloudflare now provides us with a turnkey solution. You can get up and running using a free account. To change frontend code, use Cloudflare’s HTMLRewriter() JavaScript API.

The code is relatively easy to comprehend.
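To make the shape of a Worker concrete, here is a minimal sketch, assuming the module Worker syntax and a placeholder ORIGIN constant standing in for your own site (both are illustrative, not taken from the original guide):

const ORIGIN = 'https://example.com'; // placeholder for your origin host

export default {
  async fetch(request) {
    const url = new URL(request.url);
    // Fetch the original page from the origin server.
    const originResponse = await fetch(ORIGIN + url.pathname + url.search);

    // Pipe the response through HTMLRewriter. Handlers attach to
    // elements matched by CSS-style selectors as the HTML streams by.
    return new HTMLRewriter()
      .on('title', {
        element(el) {
          el.setInnerContent('My test title'); // rewrite the page title in flight
        },
      })
      .transform(originResponse);
  },
};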

With Core Web Vitals, it’s the immediacy, the perceived need and the rapidity with which you can cycle through varying tests that ultimately show value and really impress. The fundamental platform is available to you through the steps outlined in our guide. We’ll write functions for making commonplace changes so that you can begin testing real tactics straight away.

HTMLRewriter()

If you’ve been following along, you may know our script provides the option to preload an element that you can specify in a request parameter for LCP. We return a form when the value is missing, just to make it easy to add your reference. There is also a placeholder for something called importance, which we’ll be addressing as well. What’s important is to understand what we’re going to do.

The HTMLRewriter() API gives us the ability to use jQuery-style element selectors to attach to HTML elements in raw page source to run JavaScript from that foothold. You’ll be able to modify elements, a whole group of elements or even the base document in powerful ways. You can edit a page’s title, for example. In production, your edit becomes the title and is what gets indexed at Google and Bing.

One complication you will encounter is that you can only edit raw source, not a hydrated Document Object Model (DOM). One quick way to view raw source is with the browser’s built-in view-source functionality. With Firefox, view-source highlights validation errors in red, for example. Browsers quietly “fix” broken HTML as they build the DOM, but the underlying raw-source errors can usually be fixed with our Worker as well.

Working inside DevTools, the “Sources” tab provides access to raw source. Use preference settings to always “pretty print” source, which will format it so you can scan the code to look for optimizations. Another preference tip is a setting to bypass cache when DevTools is open. This workflow will help you as you go so your optimizations don’t result in reference errors.

Element Selectors

When you spot something you want to fix with HTMLRewriter(), you’re going to need to narrow changes and isolate the element to avoid altering more code than you intend. Use the most exclusive selector possible, which can be very easy when elements have unique IDs. Otherwise, find a tell-tale sign, such as a reference to a unique location in href or src attributes.

You will find the ability to use wildcards and vim-style anchor patterns (^ for starts-with, $ for ends-with, * for contains) to match attribute values. You can also supply more than one criterion, even for the same attribute name. Use your vim powers to narrow matches to single elements, or match a group of elements with broader expressions. Logic can then separate concerns between changes.

Example: matching dns-prefetch link elements whose href contains the wildcard “fonts.g”, in order to remove those for fonts.googleapis.com.

.on(`link[rel="dns-prefetch"][href*="fonts.g"]`, removeEl())

Example: two matches against the href attribute, narrowing it to a single file among many.

.on('link[href^="https://example.com/static/version"][href$="/print.css"]', unblockCSS())

The first example above uses a wildcard match where the string “fonts.g” can appear anywhere in the href attribute of link elements. It’s an example of a broad match that might attach to more than one link element for an appropriate action, like removing the element(s) that match, if any.

The second example from above shows how you can select a particular link element whose href starts with one string and ends with another, but which can have anything in between. This is useful for selecting a single element that is part of a build system where there may be a dynamically named versioning token directory for browser cache-busting.

Link elements

Link elements are multifaceted by virtue of their several attributes, so they can serve a number of purposes. Not to be confused with links (as in anchors), link elements are typically where you start looking for quick-hitting performance strategies. Some preload and preconnect link elements may actually be getting in the way, or may be entirely unnecessary.

You only get a maximum of six hosts to connect to simultaneously, so your first strategy will be to make the most of them. Try removing all priority hint link elements and test the result; if timings go the wrong way, add them back one at a time and test the real impact of each. You’re going to need to learn how to read the WebPageTest waterfall chart in depth.
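For instance, a first pass might strip every preconnect and dns-prefetch hint in one sweep, then reintroduce survivors one at a time. A sketch using the removeEl() helper defined in the next section (these chained calls would sit inside the HTMLRewriter() pipeline shown earlier):

  .on('link[rel="preconnect"]', removeEl())
  .on('link[rel="dns-prefetch"]', removeEl())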

Following this, tactics go to resource loading, which also involves link elements pretty heavily, but not exclusively. At this point, we want to look at scripts as well. The order in which resources load can affect things very negatively. Our testbed is perfect for trying various tactics gleaned from reading the waterfall chart. Keep the console drawer of DevTools open to check for errors as you work.

Removing elements

Removing elements is exceptionally simple to do. Once you’ve selected an element, or a group of them, the next field in an HTMLRewriter().on() statement is where you write a handler block. You can do this in place with curly braces, you can reference a named function, or you can build a new class instance for an object defined earlier, which, in this context, may be over-engineering.

When you encounter sample Worker code, you may see class initializers. All that’s really needed to remove an element is the following function. Anything done with a named class object can be done with a plain function (object) using less code, with fewer bugs and more readable syntax, and in a way that’s far more teachable. We’ll revisit class constructors when we delve into Durable Objects.

const removeEl = () => ({ element: (el) => { el.remove(); } });

In a nutshell, this one-liner names “el” as the reference to the matched element instance, and the handler calls the built-in remove() element method, which you will find detailed in the corresponding documentation. All HTMLRewriter() element methods are available for use with instances of your element matches. Removing elements is one of the simpler ones to comprehend.

Unblocking render blocking resources

Unblocking script elements is much easier than unblocking stylesheet resources. As luck would have it, we have boolean attributes for signaling the browser that we want to asynchronously load a script or defer it altogether (for when there is idle time). That’s ideal! Stylesheets, on the other hand, need a little “hack” to get them unblocked — they require some inline JavaScript.

Essentially, we turn a stylesheet link element reference into preload to unblock it. But that changes the nature of the link element to one where the style rules will not get applied. Preload downloads resources to store them in local cache, ready for when needed, but that’s it. DevTools warns you when a resource is preloaded and not used expediently — that’s when you know you can remove it!

Preloading and then using an onload attribute to run JavaScript to change it back from preload to stylesheet is the CSS “hack” to unblock what otherwise is a naturally render blocking resource. Using JavaScript’s this keyword allows you to change its properties, including the rel attribute (and the onload attribute itself). The pattern has a backfill for non-JavaScript sessions, as well.

Here is our unblockCSS() function which implements the strategy using ready-made element methods.

const unblockCSS = () => ({
  element: (el) => {
    el.removeAttribute('media');
    el.setAttribute('rel', 'preload');
    el.setAttribute('as', 'style');
    el.setAttribute('onload', "this.onload=null;this.rel='stylesheet';this.media='all'");
    el.after(`<noscript><link rel="stylesheet" href="${el.getAttribute("href")}"></noscript>`, { html: true });
  }
});

Select the link element stylesheet references that are render blocking and call this function on them. It allows the browser to begin downloading the stylesheet by preloading it. Once loaded, the rel attribute switches back to stylesheet and the CSS rules get immediately applied. If style problems occur after this change, then one or more sheets need to load in normal request order.

The function acts as a reusable code block. Toggle your element selections using HTMLRewriter() and test the difference unblocking CSS sheets one at a time, or in groups, depending on your approach. Utilize the tactic to achieve an overall strategy unblocking as much as you can. However, always remember to look for problems resulting from changes to CSS and Script resources.
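For example, a pass that unblocks two specific sheets might chain selectors like these (the file names here are hypothetical placeholders):

return new HTMLRewriter()
  .on('link[rel="stylesheet"][href$="/theme.css"]', unblockCSS())
  .on('link[rel="stylesheet"][href$="/fonts.css"]', unblockCSS())
  .transform(originResponse);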

Script priorities

The order in which you load styles can botch the design: unexpectedly fast-loading stylesheet rules will overwrite ones that load more sluggishly. You also have to watch the order in which scripts load, so that they get evaluated and are resident in memory by the time the document needs them. Reference errors can cascade into dozens or hundreds of script errors.

The best way to check for problems is to watch the console drawer and simulate slow network connections. This can exaggerate problems to the point where they become evident in DevTools. If script resources are processed by powerful CPUs and load at cable modem speeds or faster, it is possible you’ll miss a critical error. Simulated slow connections also space requests out nicely in the waterfall.

Here are our functions for changing, or adding, async and defer attributes.

const makeAsyncJS = () => ({
  element: (el) => {
    el.removeAttribute("defer");
    el.setAttribute("async", "async");
  }
});

const makeDeferJS = () => ({
  element: (el) => {
    el.removeAttribute("async");
    el.setAttribute("defer", "defer");
  }
});

If a script doesn’t originally have async or defer, it’s harmless to run the removeAttribute() element method first, and doing so makes the code block more reusable. You can safely disregard this if you’re working quickly on a one-off project where you might write this inline rather than calling a function you defined previously in the script.
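Attaching these follows the same pattern as before; the selectors here are hypothetical stand-ins for whatever your waterfall chart implicates:

  .on('script[src*="analytics"]', makeAsyncJS()) // independent code can load async
  .on('script[src*="carousel"]', makeDeferJS())  // below-the-fold UI can wait for the parser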

Alt attributes for SEO

As mentioned, our Guide to A/B Testing for Core Web Vitals was, by design, meant to leave us with a fully functioning edge computing testbed for demonstrating content in future SEO for Developers articles and events. During our SMX West event last year (2021), we demonstrated using Cloudflare Workers on a website to achieve Lighthouse fireworks (scoring 100 across all its tests).

There are lots of things that need to be in place to get the fireworks. One important aspect is that all images must have valid alt attributes. The test can detect when the text in an alt attribute is “nondescript,” or present but empty. You need words that depict what’s in the associated image. One way to do that might be to parse the file name from the src attribute.

Here is a handler that extracts text from img src attributes to power alt text from hyphenated filenames.

element: (el) => {
  const img_alt = el.getAttribute('alt');
  const img_src = el.getAttribute('src');
  if (!img_alt) {
    // A global regex replaces every hyphen, not just the first one.
    el.setAttribute('alt', img_src.replace(/-/g, ' '));
  }
}

In a nutshell, this looks for images with no alt attribute value. When the src attribute filename is hyphenated, it replaces the hyphens with spaces to formulate what may be a suitable value. This version won’t work for the majority of cases: it doesn’t strip the forward slashes, protocol or domain. It merely serves as a starting point.
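One possible refinement, sketched here as an illustration rather than taken from the original demo, is to isolate the file name before spacing out the hyphens, wrapping the handler as a reusable function like the others:

const fillMissingAlt = () => ({
  element: (el) => {
    const img_src = el.getAttribute('src') || '';
    if (!el.getAttribute('alt')) {
      // Keep only the file name, drop the extension, then space out hyphens.
      const name = img_src.split('/').pop().replace(/\.\w+$/, '');
      el.setAttribute('alt', name.replace(/-/g, ' '));
    }
  }
});

// Usage: .on('img', fillMissingAlt())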

Why we care

Having a testbed for trying out various Core Web Vitals Performance Optimization tactics is incredibly impressive to site owners. You should have this capability in your agency arsenal. A slight Google rankings boost with good scores is both measurable and largely achievable for most sites through tactics we will discuss and demonstrate. Tune in for a live performance March 8-9th.

SEO technicians have long recommended performance improvements for search engine ranking, and the benefit to rankings has never been clearer: Google literally defined the metrics and publishes information about their effect. We have Cloudflare Workers to implement edge SEO remedies, as demonstrated here with alt attributes for images. Our Cloudflare-powered reverse proxy testbed sets the stage for rich communication with developers.


Google Search Console snapshot in search results now can show domain properties

Back in 2018, Google began showing a snapshot of your Google Search Console data and analytics in the web search results for verified properties. Now, Google will also show this snapshot for domain property verification methods, not just the older verification methods.

The announcement. Google announced this on Twitter saying “we are happy to share that starting today Search Console in Search results feature will also support domain properties.”

What it looks like. Here is a screenshot of what this looks like in the search results:

What changed. When this snapshot card launched, it worked for all the verification methods Google Search Console initially supported. But when Google released domain properties the following year, the snapshot was not supported for that verification method. Now Google is supporting displaying the Search Console snapshot for domain properties.

How do you see it. This card can appear only if you are an owner or full user of the site in Google Search Console, and you are signed in with that Google account while searching for your site or queries it might rank for. You can turn this snapshot on or off as well; those instructions are in this help document.

Why we care. Having this snapshot come up can be useful for site owners who have Search Console access but might not check their Search Console data too often. This is a useful reminder to those site owners that there is a treasure trove of data and information in Search Console that they can look at or hire an SEO agency to look at on their behalf.


To disavow or not? Getting it right, 10 years later.

Google’s disavow links tool launched nearly a decade ago, on October 16, 2012. As we approach the tenth anniversary, webmasters still face confusion and disagreement about how to approach a link analysis and properly use backlink data when considering a disavow. A lot has changed since 2012!

Whether you’re disavowing as a preventative measure or a means to recover your rankings, we’ll review current-day approaches to take based on our experience disavowing links over the past decade.  

Let’s begin by answering who likely doesn’t need a disavow, and that’s most of you.  If you’ve stuck with natural link acquisition and SEO traffic is on the rise, a link disavow is unlikely to help.  This is especially true if your site already has a relatively small number of backlinks or is in a less competitive vertical.  Submitting a disavow can even hurt the rankings of otherwise healthy websites if the tool isn’t used wisely.

Consider analyzing your backlinks and submitting a disavow if:

  1. You have an “unnatural links” notice in Google Search Console and corresponding manual action.
  2. You know unnatural links were acquired for your website, either recently or at any time in the past. Even links from years ago can come back to bite you as Google continues mapping out artificial link networks.
  3. You’ve experienced unexplained traffic or ranking loss, particularly near the time of a known Google link-based update or core algorithm update. Similarly, traffic may be flat over long periods despite otherwise strong on-page SEO and content creation initiatives, and you suspect off-page factors may be the reason.
  4. You see a lot of new spammy links pointing to your website regularly and may be the target of a negative SEO attack.
  5. You don’t fully trust the algorithm and want to get a better understanding of your current link profile and level of risk.

Links from scrapers and other obvious spam are likely to get filtered out and ignored by Google, providing no value but also not counting against you. Nearly all websites have them, and you can usually ignore these yourself or include them in your disavow if you’re worried. But links from known link sellers and link networks can become a big problem. Frequent link-building tactics necessitating a link disavow include:

  • Buying guest blog posts or “sponsored content” without the appropriate link attributes.
  • Buying links with a guaranteed minimum level of “authority.” 
  • Buying links from a list of sites that have varying pricing for placement.
  • Obtaining keyword-rich anchor links pointing directly to SEO landing pages. 
  • Buying links at all, for that matter, especially from anyone offering pre-selected placements.

Compiling your backlinks & properly analyzing them

For an advanced SEO looking for the most comprehensive view of their link data, merging multiple datasets (Google Search Console, Ahrefs, Moz, Majestic, Semrush and so on) will paint the most complete picture of your backlink profile. For the rest of you, hiring a professional to help is the best path forward – a second reminder that disavowing can do more harm than good if you’re not fully confident in your approach. Should you choose to do it alone, downloading the links provided in Google Search Console will likely suffice, even if they’re only a small “sampling” of your overall link profile.

Once your link data is obtained, you’ll have to make some decisions on how to analyze your backlinks. Most webmasters take shortcuts, relying on software to tell them how “authoritative” or “toxic” a link might be. This is a quick but dangerous way to compile links for your disavow.  

Although convenient, we do not recommend relying on:

  1. Third-party link metrics from SEO software listing the “authority,” “trust,” or “rating” of your links. These scores better represent a site’s ability to rank itself than its ability to pass link equity (or harm) to you. None of the companies providing these metrics are Google, Google doesn’t use their data, their scoring is based on their unique & often limited crawls, their data and link values all vary from each other, and they generally don’t consider whether a website that links to you has disavowed any of its own links or has been penalized by Google for selling links. Ironically, many penalized sites will receive a high “authority,” “trust score,” or “rating” due to the quantity of their (spammy) backlinks, and these are certainly not sites you’d want a link from!
  2. Blindly pasting any software’s “toxic” or “spam” link list into your disavow. We’ve seen webmasters rely on this all too often, leading to further traffic loss. A third reminder: a disavow can do more harm than good if completed improperly.
  3. Making decisions based on a linking site’s traffic levels. A link can be natural and relevant, even from a town library, local nonprofit, or hobbyist website. These sites likely have low traffic levels since they traditionally don’t rank for large numbers of commercial phrases. However, links from them are still natural, freely given, and support your overall link profile. Don’t disavow these!

Instead, ask yourself:

  1. Does the site linking to you appear to be a good resource, put online to provide value to its audience? Is it maintained by someone who has subject-matter expertise or a strong interest in the topic at hand? Are they linking to you in a natural way, as an extension of their own content and compiled resources? If so, this is likely a great link to have and one you won’t have to worry about causing issues.
  2. What does the linking site’s own link neighborhood look like? Are its backlinks natural, or do they appear manipulated for SEO purposes? Are the external links throughout the website there to provide more information about the topic being discussed and consistent with the site’s theme? If the site’s internal & external links pass the smell test, you’re likely safe to exclude this link from your disavow file.
  3. Is the website linking to you filled with varying content and many unrelated external links? Is it a blog you’ve never heard of with articles about everything, always linking out to a commercial website within each article? Links from sites fitting this pattern are likely in a link network or database, can potentially be harmful to your SEO performance, and were a primary target of Google’s link spam update last summer. You’ll want to consider links from websites fitting this mold for your disavow, especially if they’ve never sent you any direct traffic via someone actually clicking on your link.
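Once you’ve settled on what to exclude, the disavow file itself is just a UTF-8 text file with one entry per line: full URLs for individual pages, a domain: prefix for entire sites, and # for comments. A hypothetical example:

# Paid placements identified January 2022
domain:spammy-link-network.example
domain:cheap-guest-posts.example
https://unrelated-blog.example/best-widgets-review/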

Preventative or reactionary analysis & disavow frequency

Like most SEO efforts, staying on top of your link profile is rarely a one & done initiative and more often resembles a game of cat & mouse, depending on the scenario. If your website and its traffic levels are healthy and growing, revisiting your backlink profile can be done on a less frequent basis. Semi-annually or yearly may be appropriate depending on your level of concern.

A preventative disavow may make sense in this situation: if troubles arise later, Google is months behind on reconsideration requests, and that’s not a situation you want to find yourself in. Always remember that links are really hard to get and are a primary part of Google’s ranking equation, so being conservative with a disavow here is usually the best approach.

On the other hand, webmasters may find it worthwhile to review their backlinks and update their disavow files more regularly if they’ve been affected by manual action or link-based updates in the past, or they suspect they are being targeted by a negative SEO campaign. More frequent revisions can help ensure you’re ahead of the algorithm when disassociating yourself with links that have the potential to cause issues in the near or long term.  

Final thoughts

From its early days a decade ago, Google’s disavow links tool has remained an often misunderstood part of its Search Console for webmasters. From initially being needed solely as a response to 2012’s “Penguin” algorithm rollout and as a way to resolve manual actions, its use cases have evolved for both preventative and reactionary scenarios. Likewise, the way webmasters review their links for a variety of purposes has changed over the past decade. 

Regardless of your need to visit the disavow tool, it’s important to keep in mind how earning natural, trusted links can be one of the biggest SEO growth drivers, directly contributing to traffic and ranking increases over time. Safe & effective link earning reduces risks in your backlink profile and helps avoid the need for disavowing at all. 


Google Search Console error reporting for Breadcrumbs and HowTo structured data changed

Google has made changes to the way it handles Breadcrumbs and HowTo structured data within Google Search Console’s reporting tools. Google said it “changed the way that it evaluates and reports errors in Breadcrumbs and HowTo structured data.”

The impact. This may impact the number of errors, issues and other metrics Google reports on within Google Search Console’s enhancement reports related to Breadcrumbs and HowTo structured data.

Reporting change only. This is a reporting change only; it does not impact the visibility of your rich results in Google Search.

What to do next. If you have Breadcrumbs and HowTo structured data, it is recommended that you check the reports in Google Search Console and address the revised errors and issues Google is now reporting. Google said, “you may see changes in the number of Breadcrumbs and HowTo entities and issues reported for your property, as well as a change in severity of some issues from errors to warnings.”
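For reference, a minimal valid BreadcrumbList looks like the following JSON-LD (the names and URLs are placeholders); comparing your markup against this shape can help narrow down what Search Console is now flagging:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [{
    "@type": "ListItem",
    "position": 1,
    "name": "Books",
    "item": "https://example.com/books"
  },{
    "@type": "ListItem",
    "position": 2,
    "name": "Science Fiction",
    "item": "https://example.com/books/sciencefiction"
  }]
}
</script>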

Why we care. Again, if you have Breadcrumbs and HowTo structured data on your site, you may now find new opportunities to resolve new errors or issues with your structured data. This may help you maintain your rich results in Google Search for those types of search result snippets.


Google merges its SafeSearch help information into a single new document

Google has published a new help document for SafeSearch that merges all of Google’s SafeSearch details into one larger help document. The new document explains how SafeSearch works and adds some troubleshooting guidance, but the overall SafeSearch advice has not changed.

What is SafeSearch. SafeSearch is Google’s adult content filter that aims to filter out explicit content from your results. Explicit results include sexually explicit content like pornography, violence, and gore, according to Google.

New document. The new document explains what SafeSearch is, how SafeSearch works, how to see if SafeSearch is filtering out your site’s content and how to optimize your site for SafeSearch. It also covers the metadata you can use with SafeSearch, as well as how to group your site’s explicit content into sections so Google can better understand it. Finally, there is a troubleshooting section at the bottom of the document.

Some tips from the document. Again, the guidance in the new document is not new; it is the guidance Google has been sharing for years. Google does say it uses machine learning “and a variety of signals to identify explicit content, including words on the hosting web page and in links.”

  • You can use a site: query with SafeSearch turned on to see if Google is filtering out all or some of your URLs.
  • You can use the meta rating tag to declare your content as adult; this includes both content="adult" and content="RTA-5042-1996-1400-1577-RTA" (see the example after this list).
  • Google recommends you group your explicit pages into sections of your site using a separate domain, subdomain or separate directory.
  • It can take Google two to three months to process adult pages properly; it is slow.
  • Even if you blur explicit images, Google may still decide the page is explicit if the image can be unblurred or leads to an unblurred image.
  • Medical nudity is still treated as explicit content
  • Explicit content is not eligible for rich snippets, featured snippets or video previews.
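As a quick reference, the rating meta tag from the second bullet goes in the page head; either form below marks a page as adult (this mirrors Google’s documented values):

<meta name="rating" content="adult">
<!-- or the Restricted To Adults (RTA) label: -->
<meta name="rating" content="RTA-5042-1996-1400-1577-RTA">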

Why we care. Sometimes sites can be labeled as explicit and filtered out by Google’s SafeSearch filter. It doesn’t happen often, but I see it come up from time to time, and when it does, it can be frustrating to deal with. This document helps you understand how the SafeSearch filter works and what you can do to keep all or parts of your site from being filtered by SafeSearch in an unintended way.


‘Untitled’ search results sending users to spam sites, Google ‘working on it’

Google search results have been showing an “Untitled” title tag for some sites over the past 3 days. People who click on those sites are being sent to spam sites, according to postings from users on Hacker News and Reddit.

‘Untitled’ Google results. Here’s what a Hacker News user posted:

“Over the last few days I’ve noticed several distinct Google results that are simply ‘Untitled’, that redirect to other sites that are definitely spam and possibly malware (I didn’t stay long enough to investigate). I’ve seen other examples of titles such as ‘Oh’ redirecting to the same spam sites. From the result preview below the title, the results otherwise seem somewhat relevant to the query, but most often end up loading a fake captcha page.“

nsilvestri on Hacker News

There is speculation in that thread that some of the reports of ‘Untitled’ results are due to compromised WordPress sites.

That thread referenced another Hacker News thread, which included additional evidence of the issue in a discussion about Google rewriting page titles:

“Something has to be fishy with this because I get tons of “Untitled” results now which directly lead to spam. This sucks big time because I usually got really good results since I search a lot coding related things and now I cannot use this account anymore for searching.”


5Qn8mNbc2FNCiVV on Hacker News

On Reddit, there is additional discussion of this issue. One user shared what the “Untitled” titles look like:

Google’s response. When alerted about the issue and thread via Twitter, Google Search Liaison Danny Sullivan tweeted: “We’re working on it.”

On Reddit, Sullivan added some additional context: “It’s not malware. It’s spam, something our systems normally would typically catch, so we’re checking on it to improve.”

He also added: “I can’t reproduce that myself, but it still helps understanding you’re seeing it happen on desktop and your phone. We’re looking into it.“

Why we care. Many have questioned the quality of Google’s search results in recent months (to be fair: some SEO professionals have been questioning the quality of Google’s search results for even longer than that!). But spam or malware sites in search results is bad for users, which is bad for Google. While this issue won’t cause most users to abandon Google (where are they going to go?), it’s stuff like this that gives SEO and search a bad name.


Messy SEO Part 6: Pillar pages and topic clusters

Messy SEO is a column covering the nitty-gritty, unpolished tasks involved in the auditing, planning, and optimization of websites, using MarTech’s new domain as a case study.


This installment of “Messy SEO” details my process of working with our marketing, content and development teams to further clean up the search engine results pages for MarTech. In Part 5, we discussed the fallout and improvements of our title changes and site structure issues.

RELATED: How to optimize your site for better findability

Identifying issues with our content and topics

Our MarTech website houses a lot of marketing industry content. In addition to the pieces we’ve published since its launch in May 2021, the domain has all of the content that was previously featured on Marketing Land and MarTech Today.

One would think that with so much industry-specific content, Google would have an easy time finding and serving up relevant results for searchers. Unfortunately, it seems like the search engine is having a difficult time identifying our main topics.

Many of the MarTech topics (shown below) that we cover are still getting little interaction in the SERPs.


Queries              | Clicks | Impressions | CTR   | Position
customer experience  | 5      | 4,651       | 0.10% | 28
email marketing      | 22     | 24,239      | 0.09% | 40.04
agile marketing      | 5      | 7,046       | 0.10% | 48.4
marketing automation | 11     | 66,534      | 0.02% | 53.93
crm                  | 0      | 10          | 0%    | 57.7

MarTech queries that are receiving little interaction.

After researching these keywords/topics and their related pages — taking note of the site structure issues we’d already identified — the problem we were experiencing became clear: We were missing pillar pages.

Understanding the importance of pillar pages

Content pillar pages are designed to be the go-to source for your site’s main topics. They cover subjects in-depth, linking to related pieces covering the same topic (known as topic clusters), which helps site users find all the information they’re searching for. They serve as the ideal landing pages, introducing readers to your site’s subject matter.

From a ranking perspective, pillar pages are gold. They have the potential to rank well for given topics and pass ranking signals to their related pages.

After our content analysis, our team quickly realized the MarTech site was missing these key pillar pages. We had plenty of content covering a slew of marketing and technology topics, but no central pages that gave in-depth overviews on any subject in particular.

Our top-ranking pages for the keywords shared above were largely evergreen “how to” articles. These are helpful resources for users, but don’t serve as good pillar pages.


Queries              | Top ranking page                                                                             | Position
customer experience  | https://martech.org/5-ways-marketers-can-improve-customer-experiences-with-personalization/ | 6.18
email marketing      | https://martech.org/how-to-leverage-intent-and-engagement-in-the-buying-cycle/              | 7
agile marketing      | https://martech.org/6-key-elements-of-a-great-agile-marketing-backlog/                      | 6
marketing automation | https://martech.org/martech-landscape-what-is-marketing-automation-software/                | 2
crm                  | https://martech.org/is-there-a-place-for-crms-in-a-cdp-world/                                | 3

Top pages ranking for MarTech topics.

The top-ranking page that came closest to the “pillar” style was our “Marketing Automation Landscape” article. It gave an overview of the topic, linked to related pages and was longer than an average piece of content on our site. So, seeing its potential, we added more in-depth content and links to other related pages.

We then analyzed the rest of these pages and mapped out a strategy for creating new pillar pages, consolidating similar pages into hubs, and updating old content.

Creating pillar pages that connect topic clusters

Developing pillar pages became our next big project for MarTech. Our team outlined the highest-ranking pages for the site’s main topics (as described above) and reviewed their structure. We also looked for pages that weren’t ranking well but had the potential to become pillar content.

We believe this was our missing puzzle piece. The issue wasn’t our lack of authoritative content; it was how we structured that content on this new MarTech domain, a conglomeration of content from two well-established marketing news sites.

We began creating new pillar pages (and modifying pages with pillar potential) that met the following conditions:

  • The content went in-depth on a relevant topic.
  • It contained at least 2,000 words.
  • It linked to at least five relevant pages.
  • It featured authoritative information on the topic, citing sources when necessary.

There’s no magic formula to crafting a high-ranking, engaging pillar page. We simply found these criteria helped us create content that meets users’ needs and establishes topical hierarchy.

Avoiding keyword cannibalization

While undergoing this process, our team is doing its best to avoid keyword cannibalization — the unfortunate scenario when multiple pages on your site are competing for the same keyword or topic. This scenario could end up harming our organic search performance.

To prevent this issue, we are creating pillar pages under the following guidelines:

  • Avoid long-tail keywords and topics (these are for sub-topic pages).
  • Review the site to see if any existing pages are competitors.
  • Add internal links from sub-topic pages to the main pillar page.
  • Consolidate pages that aren’t unique enough into pillar pages.

No guideline is foolproof; Google may still force these pillar pages to compete with similar content on our site. But we believe adding these content hubs to our site structure will help users and search engines find out what MarTech is all about.

Have you had difficulties ranking for your site’s main topics? How are you addressing the issue? Email me at cpatterson@thirddoormedia.com with the subject line “Messy SEO Part 6” to let me know.

More Messy SEO

Read more about our new MarTech domain’s SEO case study.


Building trust builds brands: SEO link-building strategies that work

If you are looking to outrank and outperform within organic search, link building is essential. However, there are many misconceptions about effective, safe link-building strategies and exactly how brands can harness links to propel their Google rankings, trust and authority in 2022.

Improve your strategy by joining 17-year SEO veteran Jon Lightfoot of StrategicSEOSolutions.com, who uncovers the most effective link-building strategies and the metrics that matter. Rise above the competition with creative, effective and sustainable link-building strategies.

To learn more, register today for “Building Trust, Builds Brands: SEO Link-Building Strategies That Work” presented by Strategic SEO Solutions.
