
Messy SEO Part 6: Pillar pages and topic clusters

Messy SEO is a column covering the nitty-gritty, unpolished tasks involved in the auditing, planning, and optimization of websites, using MarTech’s new domain as a case study.


This installment of “Messy SEO” details my process of working with our marketing, content and development teams to further clean up the search engine results pages for MarTech. In Part 5, we discussed the fallout from our title changes and site structure issues, and the improvements that followed.

RELATED: How to optimize your site for better findability

Identifying issues with our content and topics

Our MarTech website houses a lot of marketing industry content. In addition to the pieces we’ve published since its launch in May 2021, the domain has all of the content that was previously featured on Marketing Land and MarTech Today.

One would think that with so much industry-specific content, Google would have an easy time finding and serving up relevant results for searchers. Unfortunately, it seems like the search engine is having a difficult time identifying our main topics.

Many of the MarTech topics (shown below) that we cover are still getting little interaction in the SERPs.


Queries              | Clicks | Impressions | CTR   | Position
---------------------|--------|-------------|-------|---------
customer experience  | 5      | 4,651       | 0.10% | 28
email marketing      | 22     | 24,239      | 0.09% | 40.04
agile marketing      | 5      | 7,046       | 0.10% | 48.4
marketing automation | 11     | 66,534      | 0.02% | 53.93
crm                  | 0      | 10          | 0%    | 57.7

MarTech queries that are receiving little interaction.
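For context, the CTR column is simply clicks divided by impressions, expressed as a percentage. A quick sketch of the arithmetic (the helper function is ours, for illustration only, not a Search Console API):

```javascript
// CTR as reported above: clicks / impressions, as a percentage,
// rounded to two decimal places. Illustrative helper only.
function ctr(clicks, impressions) {
  if (impressions === 0) return '0%';
  return (clicks / impressions * 100).toFixed(2) + '%';
}

console.log(ctr(22, 24239)); // email marketing row → "0.09%"
console.log(ctr(11, 66534)); // marketing automation row → "0.02%"
```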

After researching these keywords/topics and their related pages — taking note of the site structure issues we’d already identified — the problem we were experiencing became clear: We were missing pillar pages.

Understanding the importance of pillar pages

Content pillar pages are designed to be the go-to source for your site’s main topics. They cover subjects in-depth, linking to related pieces covering the same topic (known as topic clusters), which helps site users find all the information they’re searching for. They serve as the ideal landing pages, introducing readers to your site’s subject matter.

From a ranking perspective, pillar pages are gold. They have the potential to rank well for given topics and pass ranking signals to their related pages.

After our content analysis, our team quickly realized the MarTech site was missing these key pillar pages. We had plenty of content covering a slew of marketing and technology topics, but no central pages that gave in-depth overviews on any subject in particular.

Our top-ranking pages for the keywords shared above were largely evergreen “how to” articles. These are helpful resources for users, but don’t serve as good pillar pages.


Queries              | Top-ranking page                                                                            | Position
---------------------|---------------------------------------------------------------------------------------------|---------
customer experience  | https://martech.org/5-ways-marketers-can-improve-customer-experiences-with-personalization/ | 6.18
email marketing      | https://martech.org/how-to-leverage-intent-and-engagement-in-the-buying-cycle/              | 7
agile marketing      | https://martech.org/6-key-elements-of-a-great-agile-marketing-backlog/                      | 6
marketing automation | https://martech.org/martech-landscape-what-is-marketing-automation-software/                | 2
crm                  | https://martech.org/is-there-a-place-for-crms-in-a-cdp-world/                               | 3

Top pages ranking for MarTech topics.

The top-ranking page that came closest to the “pillar” style was our “Marketing Automation Landscape” article. It gave an overview of the topic, linked to related pages and was longer than an average piece of content on our site. So, seeing its potential, we added more in-depth content and links to other related pages.

We then analyzed the rest of these pages and mapped out a strategy for creating new pillar pages, consolidating similar pages into hubs, and updating old content.

Creating pillar pages that connect topic clusters

Developing pillar pages became our next big project for MarTech. Our team outlined the highest-ranking pages for the site’s main topics (as described above) and reviewed their structure. We also looked for pages that weren’t ranking well but had the potential to become pillar content.

We believe this was our missing puzzle piece. The issue wasn’t our lack of authoritative content; it was how we structured that content on this new MarTech domain, a conglomeration of content from two well-established marketing news sites.

We began creating new pillar pages (and modifying pages with pillar potential) that met the following conditions:

  • The content went in-depth on a relevant topic.
  • It contained at least 2,000 words.
  • It linked to at least five relevant pages.
  • It featured authoritative information on the topic, citing sources when necessary.
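As a rough illustration, the word-count and internal-link criteria can be checked programmatically. This sketch is ours (the function name and the naive HTML stripping are assumptions, not a production audit tool):

```javascript
// Rough audit of a page's HTML against the pillar-page criteria above:
// at least 2,000 words and at least five internal links.
// The tag-stripping regex is deliberately naive; a real audit would
// use a proper HTML parser against the live site.
function auditPillarPage(html, siteHost) {
  const text = html.replace(/<[^>]+>/g, ' ');          // strip tags crudely
  const wordCount = (text.match(/\S+/g) || []).length;
  const links = html.match(/href="[^"]+"/g) || [];
  const internalLinks = links.filter(
    (l) => l.includes(siteHost) || l.includes('href="/')
  ).length;
  return {
    wordCount,
    internalLinks,
    meetsDepth: wordCount >= 2000,
    meetsLinking: internalLinks >= 5,
  };
}
```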

There’s no magic formula to crafting a high-ranking, engaging pillar page. We simply found these criteria helped us create content that meets users’ needs and establishes topical hierarchy.

Avoiding keyword cannibalization

While undergoing this process, our team is doing its best to avoid keyword cannibalization — the unfortunate scenario when multiple pages on your site are competing for the same keyword or topic. This scenario could end up harming our organic search performance.

To prevent this issue, we are creating pillar pages under the following guidelines:

  • Avoid long-tail keywords and topics (these are for sub-topic pages).
  • Review the site to see if any existing pages are competitors.
  • Add internal links from sub-topic pages to the main pillar page.
  • Consolidate pages that aren’t unique enough into pillar pages.

No guideline is foolproof; Google may still force these pillar pages to compete with similar content on our site. But we believe adding these content hubs to our site structure will help users and search engines find out what MarTech is all about.

Have you had difficulties ranking for your site’s main topics? How are you addressing the issue? Email me at cpatterson@thirddoormedia.com with the subject line “Messy SEO Part 6” to let me know.

More Messy SEO

Read more about our new MarTech domain’s SEO case study.

The post Messy SEO Part 6: Pillar pages and topic clusters appeared first on Search Engine Land.

Jason January 28, 2022

Evolving Core Web Vitals tactics using Cloudflare and WebpageTest

In our guide to Core Web Vitals tactics using Cloudflare and WebpageTest, we outlined basic requirements for using Cloudflare as a reverse proxy for testing tactical HTML changes with WebpageTest. Our version of the test is simplified from Patrick Meenan’s original concept, which uses HTMLRewriter() to select an element and modify code.

We’re going in-depth with this tutorial, but if you’re just looking for the Cloudflare Worker script, you can find it here.

Our first installment noted that the script won’t keep up with changes at Search Engine Land: the LCP was hard-coded, and we need it to interact with a dynamic page and its values. While WebpageTest has, at the time of publication, the most well-thought-out waterfall chart and more details than you can imagine, it isn’t the fastest way to get results.

Lighthouse from the Command Line

Running the Lighthouse CLI (command-line interface) with the --extra-headers options needed for the test allows us to simulate standard Core Web Vitals settings the way we did with WebpageTest. You’ll need to work from a terminal emulator.

The easiest way to install Lighthouse is with NPM (Node Package Manager). Once installed, run the following statement:

$ lighthouse https://sel.deckart.workers.dev \
    --extra-headers '{"x-host":"searchengineland.com","x-bypass-transform":"false"}' \
    --form-factor=mobile \
    --throttling.cpuSlowdownMultiplier=4 \
    --only-categories=performance \
    --view

(The JSON is wrapped in single quotes so the shell preserves the inner double quotes, and the backslashes continue the command across lines.)

The evolution of our Testbed

Our aim is to demonstrate an evolution from an original concept for a testbed to a project suitable for our future events and articles. The testbed should not be confined to running performance evaluations; that’s just where we’ll start. But it has to work fairly well across a range of websites, and that can prove pretty difficult. We’ll supply methods to help.

For example, sites often use relative paths to asset resources rather than absolute (with HTTP protocol and all). We’ll supply a block to match these so HTML will generally work. After applying this, when things still don’t work, switching troublesome references between the test and test subject hostnames often does the trick, even for CORS policy violations.

That’s where the beauty of Cloudflare’s HTMLRewriter() really shines. Site-wide assets are usually loaded as child elements of the page HEAD. With matching flexibility much like jQuery’s (even similar syntax), we can use CSS-style selectors to pick out child elements of HEAD when necessary. Let’s keep it simple and look for relative paths that start with “/” in src or href attributes:

return new HTMLRewriter()
  .on('link', {
    element: el => {
      const link_href = el.getAttribute('href');
      if (link_href && link_href.startsWith('/')) {
        el.setAttribute('href', 'https://' + host + link_href);
      }
    }
  })
  .on('script', {
    element: el => {
      const script_src = el.getAttribute('src');
      if (script_src && script_src.startsWith('/')) {
        el.setAttribute('src', 'https://' + host + script_src);
      }
    }
  })
  .on('img', {
    element: el => {
      const img_src = el.getAttribute('src');
      if (img_src && img_src.startsWith('/')) {
        el.setAttribute('src', 'https://' + host + img_src);
      }
    }
  })
  // `response` is the origin response fetched earlier in the Worker
  .transform(response);

We’re leveraging the power (and cost effectiveness) of Edge Computing to conduct seriously useful tests. Modify the x-host request header to load different sites in the testbed and open DevTools. Transformations may not be needed, but your mileage will vary. Frontend experience gives you a feel for it.

Some blocks will fail and require a little experimentation; commenting them in and out like switches may be all you need. For example, some asset references may be spelled protocol-relative, without the HTTP colon. You would need to write another conditional to check for paths where href or src starts with “//” and then modify the selected element value in the script. Try to end up with no console errors that the actual site doesn’t also have.
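Here is a sketch of that extra conditional in isolation (the standalone function form is ours; inside the Worker this logic would live in the HTMLRewriter handlers, and order matters: test for “//” before “/”):

```javascript
// Rewrite a href/src value for the testbed. Protocol-relative URLs
// ("//cdn...") just need a scheme; site-relative URLs ("/path") get
// pointed at the test subject. `host` would come from the x-host header.
function rewriteHref(href, host) {
  if (!href) return href;
  if (href.startsWith('//')) {
    return 'https:' + href;             // protocol-relative: add the scheme
  }
  if (href.startsWith('/')) {
    return 'https://' + host + href;    // site-relative: point at test subject
  }
  return href;                          // absolute or data: URLs left alone
}
```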

Lighthouse gives you LCP

It’s relatively easy to retrieve LCP references using Lighthouse, PageSpeed Insights or WebpageTest. Presuming the LCP element qualifies for preload (say, an image rather than a <div> or a <p>) and isn’t already being preloaded, you can provide our script the href value as a URL query parameter (or return HTML with a form) to test how preload changes a page’s LCP timing.

Most technical SEO practitioners are handy at modifying request query parameters to process different things in server-side programs, like Google search results. Using the same interface, our script takes the path you supply in the “lcp” parameter value and passes it to a function called addPreloadAfter(), which interpolates the preload HTML for the test.

async function handleRequest(request) {
  const { searchParams } = new URL(request.url);
  const lcpHref = searchParams.get("lcp");

  // `newResponse` is the origin response (fetched with the x-host logic
  // earlier in the Worker); a plain fetch stands in for it here
  const newResponse = await fetch(request);

  return new HTMLRewriter()
    .on('title', addPreloadAfter(lcpHref))
    .transform(newResponse);
}

The addPreloadAfter() function takes our “lcpHref” value from searchParams.get() and processes it as “href” to build HTML.

const addPreloadAfter = (href) => ({
  element: (el) => {
    el.after(`<link rel="preload" href="${href}" />`, { html: true });
  }
});

Notice the option “html: true”? This is a setting Cloudflare requires for safety when using Workers with HTMLRewriter() API methods that write raw HTML. You are going to want to learn its capabilities and constraints before coding your own tests.

Cloudflare’s KV

If we’re ever going to do anything remotely interesting, we need a way to store persistent data between script executions. Luckily, Cloudflare also offers a neat little data storage mechanism called KV that we can bind with our Workers to store a small data ‘value‘ field, accessible by its ‘key.’ It’s surprisingly easy to comprehend and implement. To demonstrate how to use it we’ll write a quick little hit counter.

// `KV` is the Namespace binding configured below; `host` comes from the x-host header
const counter = await KV.get("counter"); // KV values come back as strings

if (!host || counter > 1000) {
  return new Response('hit limit exceeded or x-host missing', {status: 403});
} else {
  // KV.put() expects a string value, so convert after incrementing
  await KV.put("counter", String(parseInt(counter, 10) + 1));
}

Find the KV navigation menu item under Workers.

Add a KV Namespace and counter variable with zero for a starting value

Once you’ve created a Namespace (“SEL” is used in the example above), use the KV dashboard UI to create your first Key (‘counter‘ in the above case) and assign a starting value. Once set up, navigate back to the Worker dashboard for the interface required to bind our new KV Namespace with Cloudflare Workers so they can access Keys and the associated stored Values.

Bind KV Namespaces to Workers

Choose the Worker you want to bind with and click its Settings menu to find the submenu for Variables (directly under General). Notice you can define environment variables, Durable Object Bindings (which we’ll explore in a future installment), and finally KV Namespace Bindings. Click Edit Variables and add the Variable you want to use in script.

In the following case, you can see our redundantly named ‘KV‘ variable that we’ll be using in the associated Worker script, the one we navigated from. Our use of ‘KV‘ was named for illustrative purposes. Select it from the dropdown, save it, and you’ll immediately be able to use your variable in the script. Create as many script and KV Namespace combinations as you like.

KV Namespace bindings.

The trick is remembering to bind a Variable you want used in the Worker. It’s so flexible that you can feel free to munge about and make a mess at first. You’ll probably be able to organize it into something cohesive at a later date, which is exactly what you want for being able to prototype applications or author Microservices for use in your applications.

Once you’ve gotten your KV service and starting values set up, navigate back to the Worker and open the built-in “Quick Edit.” Replace what’s there with this updated gist, which includes the hit counter, and everything else written about in this post. Click “Save and Deploy” and you should have the service up and running at your publicly available, Workers demo URL.

Why we care

Our original guide was meant to whet your appetite and get you excited to start learning more. To supply that, we have a free platform and code combination that is simple enough to understand on its own, coupled with a process that should be easy enough to follow to achieve a test result.

Standardizing website testing to demonstrate SEO to developers shouldn’t require understanding code: you can copy and paste the script into Cloudflare, follow the steps and test certain SEO tactics. Core Web Vitals tests are about as reliable a method as we’re going to get for improving RUM (Real User Monitoring) performance scores for a boost in rankings, given how metrics-dependent the page experience update is.

The post Evolving Core Web Vitals tactics using Cloudflare and WebpageTest appeared first on Search Engine Land.

Jason January 24, 2022

How AI can automate SEO tasks at scale

Artificial intelligence, machine learning and neural networks are major buzzwords in the SEO community today. Marketers have highlighted these technologies’ ability to automate time-consuming tasks at scale, which can lead to more successful campaigns. Yet many professionals often have trouble distinguishing between these concepts.

“Artificial intelligence is essentially the term that defines the whole space,” said Eric Enge, president of Pilot Holding and former principal at Perficient, in his presentation at SMX Next. “Machine learning is a subset of that [AI] set around specific algorithms.”

Natural language processing (NLP) is another system that’s been used for SEO tasks in recent years. It’s primarily focused on understanding the meanings behind human speech.

“NLP is about helping computers better understand language the way a human does, including the contextual nuances,” he said.

With so many developing technologies available, marketers would be wise to learn how they can be applied to their campaigns. Here are three ways AI and its branches can automate SEO tasks at scale.

AI can address customers’ long-tail needs

Enge pointed to a customer search engagement study from Bloomreach that found that 82% of B2C shoppers’ experience is spent searching and browsing. This leaves room for plenty of long-tail searches, which are more niche in nature and, consequently, often overlooked by marketers.

Bloomreach’s own AI tool focuses primarily on extracting insights from this phase of discovery, Enge explained. It can identify site content that’s both underutilized and matches customer long-tail searches.

“AI improves pages by presenting more related pages that currently aren’t being linked to,” he said, “Or even potentially create new pages to fill the holes of those long-tail needs to create a better customer experience.”

Source: Eric Enge and Bloomreach

Marketers can use AI systems to generate more relevant pages based on these long-tail interests. But, there are some caveats to be aware of.

“Just be careful not to create too many new pages,” Enge said. “There are certainly cases where too many pages can be a bad thing. But deployed properly, this can be very effective.”

AI can enable automated content creation

Enge shared some information about GPT-3, a popular AI language model, to demonstrate AI’s content creation capabilities. While impressive, he noted how a system like this can get out of control if there aren’t proper constraints.

“They [AI systems] currently don’t have any model of the real world,” he said. “They only have the data that they were trained on. They don’t have any perspective or context for anything, so they can make really bad mistakes, and when they write, they’re prone to bias.”

“The wonderful thing about the web is that it has all the world’s information on it — the terrible thing about the web is all the world’s disinformation is on it, too,” he added.

Despite these weaknesses, AI systems have a lot of promise. Continuous improvements in these technologies can help marketers scale content efforts to meet customer expectations.

GPT-3, in particular, has the ability to generate content in a variety of formats, allowing SEOs to focus more on optimization efforts.

“You can use it [GPT-3] to create new content,” Enge said. “You’re going to have to put in a lot of effort and bring a lot of expertise to the table to do it. It might be more cost-effective than writing from scratch, or it may not, depending on how good you are.”

AI can leverage deep learning to help establish topical authority

Having topical authority means your site is a perceived expert on a given subject. This is one of the factors many SEOs believe is vital for improving rankings, which is why so many have leveraged AI’s capabilities.

Enge pointed to seoClarity, which uses an AI tool called Content Fusion designed to help brands write with more authority, to highlight these deep learning capabilities: “The approach is to leverage deep learning to identify entities and words that help you establish authority in a topic,” Enge said. “It extracts intent, entities, terms and potentially related topics. Then they apply their machine learning models that are specific to your market space.”

deep learning content fusion pipeline
Source: Eric Enge and seoClarity

The deep learning capabilities offer marketers a clearer view of their brand’s area of expertise, which can then be used to further develop their web properties. Establishing an automated deep learning system can provide them with fresh data to help demonstrate E-A-T (Expertise, Authoritativeness, Trustworthiness).

Every AI integration will look different, but each one has the potential to streamline your SEO efforts through automation and machine learning.

“There’s an incredible amount of stuff happening out there with AI,” Enge said. “Some of it you can take into your own hands if you’re willing to do the programming; in other cases, you can use tools. It’s up to you.”

Watch the full SMX Next presentation here (free registration required).

The post How AI can automate SEO tasks at scale appeared first on Search Engine Land.

Jason January 21, 2022

Google adds new robots tag indexifembedded

Google has a new robots tag, indexifembedded, for pages whose content is embedded elsewhere. Google said with this new tag “you can tell Google you’d still like your content indexed when it’s embedded through iframes and similar HTML tags in other pages, even when the content page has the noindex tag.”

Why we care. If you embed content on your site and want to control indexing of the content on the page, now you have more control with this new indexifembedded robots tag. Give it a try and see if it helps you with any indexing issues you may have had with pages where you embed content.

Why a new tag. Google explained that sometimes publishers want the content on the page to be indexed and sometimes not, when they embed content. This new robots tag gives you more control over communicating those wishes to Google Search.

“The indexifembedded tag addresses a common issue that especially affects media publishers: while they may want their content indexed when it’s embedded on third-party pages, they don’t necessarily want their media pages indexed on their own,” Google said, “Because they don’t want the media pages indexed, they currently use a noindex tag in such pages. However, the noindex tag also prevents embedding the content in other pages during indexing.”

Noindex and indexifembedded. Google said this new indexifembedded tag works with the original noindex tag: “The new robots tag, indexifembedded, works in combination with the noindex tag only when the page with noindex is embedded into another page through an iframe or similar HTML tag, like object.”

The example Google gave was if podcast.host.example/playpage?podcast=12345 has both the noindex and indexifembedded tag, it means Google can embed the content hosted on that page in recipe.site.example/my-recipes.html during indexing.

Code examples. Here are code examples of how to implement it, the first is via normal meta robots tag and the second is via the x-robots implementation:

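The original post showed these as images. Reconstructed from Google’s robots meta tag documentation (a sketch; verify against the current docs), the meta tag version looks like this:

```html
<!-- Keep the page itself out of Google's index, but allow its content
     to be indexed when embedded in another page via iframe -->
<meta name="googlebot" content="noindex" />
<meta name="googlebot" content="indexifembedded" />
```

And the HTTP header equivalent:

```http
X-Robots-Tag: googlebot:noindex
X-Robots-Tag: googlebot:indexifembedded
```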

Other search engines. It seems Google is the only search engine to currently support this new robots meta tag.

Why use it? I asked John Mueller of Google why anyone would use this. I am still not sure I am convinced, but this is what he said:

The post Google adds new robots tag indexifembedded appeared first on Search Engine Land.

Jason January 21, 2022

Google updates product structured data for car review snippets

Google has added a note to the product structured data help documentation to explain how to specify car markup and still have Product review snippet feature eligibility.

What is new. The note says “currently Car is not supported automatically as a subtype of Product.” You “will need to include both Car and Product types if you would like to attach ratings to it and be eligible for the Search feature,” Google said.

Code example. Google then provided this example in JSON-LD:

Image: Google.
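The example was an image in the original. The key point from Google’s note is declaring both types on the same node; a hedged sketch in JSON-LD (the vehicle details here are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": ["Product", "Car"],
  "name": "2022 Example Sedan",
  "brand": { "@type": "Brand", "name": "Example Motors" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "89"
  }
}
```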

Why we care. If you are just using car schema or just using product schema on your car and automobile landing pages, you will want to make sure to use both. If you do not, any review snippets may not show up in the Google search results for your site and web pages.

The post Google updates product structured data for car review snippets appeared first on Search Engine Land.

Jason January 21, 2022

How analyzing search data can improve your decision-making

Much like a physical marketplace, the online search environment has both its successful businesses and those that fail to gain traction. Matt Colebourne, CEO of Searchmetrics, used the analogy of a “high street” (the main commercial or shopping area) to describe the current state of search marketing in his presentation at SMX Next.

“Just as you have the winners and the losers in the physical space, you have the same in the digital space; page two of Google or any search engine is fundamentally the ‘backstreet,’” he said. “That’s where a lot less audience is going to end up.”

Marketers have long used search data to optimize their content so it meets user needs. But many fail to apply those same insights to inform decisions that impact the long game.

“A lot of companies make the mistake of optimizing for growth way too soon,” Colebourne said. “They settled for their current product set and their question becomes, ‘How can we optimize sales of what we have?’ Whereas the questions they should be asking are, ‘What are the sales that we could have? How much of our target market do we have right now?’”

Each day Google processes over 3.5 billion searches, which provides marketers with a wealth of data. Here are three reasons why analyzing this search data improves marketers’ decision-making processes.

Search data shows where your growth is coming from

“Currently, about 15% of search terms that appear on Google every month are new,” said Colebourne, “So, that starts to give you an inkling of the pace of change that we have to deal with. We see trends come and go in months, and some cases even weeks. And as businesses, we have to respond.”

Many organizations focus too much energy on driving growth while neglecting to determine where that growth is coming from. And in this digital age, there’s a good chance much of it is coming from search. This data offers marketers valuable insights, especially those relating to their industry segment.

“You have to understand how your industry and category is structured and ask the right questions,” he said. “If, for example, you sell specialty sports shoes, it doesn’t make a lot of sense to compare yourselves with Nike or similar companies who have much much bigger coverage, but may not be leaders in certain segments.”

It helps address your most significant decision-making challenges

Data — specifically search data — should be part of any company’s core decision-making process. To show how often brands use it, Colebourne highlighted a survey from Alpha (now Feedback Loop) that asked 300 managers how they made decisions.

“The question they asked was, ‘How important or not important is it to you to use data to make decisions?’” said Colebourne. “And I think nobody is going to be surprised by the results — 91% think data-driven decision-making is important . . . But the corollary to this question was, ‘How frequently or infrequently do you use that?’”

Source: Matt Colebourne

The answer was just 58%.

Clearly, knowing search data is valuable isn’t enough to be successful — marketers need to use these insights from searchers to make better business decisions. Otherwise, they’re going to miss out on a good source of traffic insights.

“65% of all e-commerce sessions start with a Google search,” Colebourne said. “I would argue that makes it a good source for decision-making. It’s a massive sample set, completely up to date, and it’s continually refreshed.”

Search data gives you more consumer context

“That [search] data — sensibly managed and processed — can show you the target market and provide you with the consumer demand,” said Colebourne. “It can show you if the market is growing or contracting.”

chart showing what search data really means
Source: Matt Colebourne

Analyzing search data can give marketers a clearer view of their consumers, especially for those groups they haven’t reached yet. Reviewing what people are searching for, how often they’re searching and how your competitors are addressing the challenge can make decision-making that much easier.

But more than that, marketers must look at the marketplace as a whole, using search data to inform decision-making.

“We’re all very focused on keywords and rankings and all these good things that we know how to manage,” Colebourne said. “But what we need to do is step beyond that and not just look at what we have or what competitors have, but look at the totality of the market.”

“Let’s look at the input search data to understand what the real demand is and how big this market is,” he added.

Watch the full SMX Next presentation here (free registration required).

The post How analyzing search data can improve your decision-making appeared first on Search Engine Land.

Jason January 20, 2022

Google recipe markup now requires specific times, no more time ranges

Google has updated the recipe schema markup help documents to remove all references to time ranges for food prep, cook time and total time as a supported field type. Now you need to specify a single time; time ranges are no longer supported.

What changed. Google wrote that it “removed guidance about specifying a range for the cookTime, prepTime, and totalTime properties in the Recipe documentation. Currently, the only supported method is an exact time; time ranges aren’t supported. If you’re currently specifying a time range and you’d like Google to better understand your time values, we recommend updating that value in your structured data to a single value (for example, "cookTime": "PT30M").”

Old docs. The old documentation had references to using minimum and maximum time frames for the range of time it takes to prepare and cook the dish. Here is an old screenshot from the totalTime field showing the min and max ranges:

Now there are only references to using a singular and fixed time without any ranges.
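For illustration, a compliant Recipe block now uses single ISO 8601 durations (PT30M is 30 minutes); the recipe details below are hypothetical:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Example beef stew",
  "prepTime": "PT15M",
  "cookTime": "PT30M",
  "totalTime": "PT45M"
}
```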

Why we care. If you use recipe schema markup on your pages and have time ranges in that markup, you will want to adjust those ranges to use singular and fixed times. One would assume the Search Console reports will soon show errors for the use of ranges but you should jump on this and modify any uses of ranges in your markup.

The post Google recipe markup now requires specific times, no more time ranges appeared first on Search Engine Land.

Jason January 19, 2022

Google Search Console launches desktop page experience report

With the upcoming Google page experience update coming to desktop, today Google launched a new page experience report for desktop in Google Search Console. “To support the upcoming rollout of page experience ranking to desktop, Search Console now has a dedicated desktop section in its Page Experience report to help site owners understand Google’s ‘good page experience’ criteria,” Google wrote.

How to access. You can access the report by clicking here or by going to Google Search Console, and clicking on the Page Experience link under the experience tab.

What it looks like. Here is a screenshot of this report for one of my sites:

More details. Google first launched the page experience report in April 2021 before the launch of the page experience update. The new Google Page Experience report offers metrics, such as the percentage of URLs with good page experience and search impressions over time, enabling you to quickly evaluate performance. You can also drill into specific pages to see what improvements need to be made.

Why we care. You can use this report to make the necessary adjustments to the desktop versions of your pages before Google rolls out the desktop version of the page experience update. As a reminder, we do not expect there to be a huge ranking change due to this update, but it may impact sites more if their stories show in the top stories section, since a solid page experience score is required to show in the top stories carousel.

The post Google Search Console launches desktop page experience report appeared first on Search Engine Land.

Jason January 17, 2022

Shopify chat bug leads to titles with (1) in Google’s search results

Over the past couple of weeks, some Shopify site owners have complained that Google was showing a (1) in their page titles on the Google search results page. The issue turned out to be related to a chat feature activated on those Shopify sites. The chat feature has since been fixed, and the Google search results should soon no longer show (1) in those titles.

What it looked like. I found a screenshot of this happening to a site in the Shopify forums a couple of weeks ago. Here is that screenshot, showing the (1) at the beginning of the title in Google Search.

What it looks like now. The issue was resolved and Google recrawled and processed this specific URL, so the (1) is no longer there:

It will take time. If you still see a (1) before your title name in the Google Search results, give it more time. Google has to recrawl and reprocess all of the URLs that were impacted and that can take time. If you want to expedite it, you can use the URL inspection tool in Google Search Console and submit that URL to the index manually. But again, the issue will resolve itself over time.

Google’s statement. Google published a statement on this issue in the Google forums, explaining that the chat feature was dynamically injecting (1) into the title element of these pages, and Googlebot picked that up and indexed it. Google’s Caio Barros wrote:

Hello, all!

We have been receiving some reports of a “(1)” showing up in some titles in search results. Upon some investigation, our Product Experts noticed that this behavior happened to websites built in Shopify and were using a chat app. It looks like these sites used a chat-bot script which added a “(1)” to the page’s title element. Titles changed with JavaScript can still be picked up, and used as title links in Search.

However, it looks like that script has been fixed to no longer change the page’s title element, so as Googlebot reprocess pages, it will no longer see the “(1)” as a part of the pages’ title, and we can take that into account when generating title links in Search. Keep in mind that title links in Search aren’t always exactly the same as the title element of a page, so it’s not guaranteed that Google will drop that element immediately after reprocessing.

There’s no need to do anything special to have pages reprocessed. This should happen automatically over time. We crawl and reprocess pages at different rates, usually you’ll see important pages like a site’s homepage reprocessed fairly quickly, within a few days at most. Other pages may take longer to be reprocessed.

Thank you all for the reports!

Why we care. If you see (1) in your titles in the Google or Bing search results, it was likely due to this chat feature in Shopify. Again, the chat feature has been fixed, and the search engines will eventually recrawl and reprocess those titles and show them correctly. This was a widespread issue, but not a Google bug: it stemmed from a Shopify feature that had this unintended consequence in search.
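If you want to audit how many of your pages were affected, one quick sketch: scan your titles (however you collect them, e.g. from a crawl export) for the leading parenthesized unread count that the chat script injected. The sample titles below are hypothetical:

```python
import re

# Matches titles that start with a parenthesized unread count, e.g. "(1) My Page".
CHAT_PREFIX = re.compile(r"^\(\d+\)\s+")

def has_chat_prefix(title: str) -> bool:
    """Return True if a title carries the stray unread-count prefix."""
    return bool(CHAT_PREFIX.match(title))

# Hypothetical titles, e.g. exported from a site crawl:
titles = ["(1) Blue Widget - Example Store", "Red Widget - Example Store"]
flagged = [t for t in titles if has_chat_prefix(t)]
print(flagged)  # → ['(1) Blue Widget - Example Store']
```

Any flagged URLs are candidates for manual resubmission via the URL inspection tool if you would rather not wait for Google's recrawl.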

The post Shopify chat bug leads to titles with (1) in Google’s search results appeared first on Search Engine Land.

Jason January 14, 2022

How to build a long-term, search-first marketing strategy

“There are roughly three and a half billion Google searches made every day,” said Craig Dunham, CEO of enterprise SEO platform Deepcrawl, at our recent MarTech conference. “According to research from Moz, 84% of people use Google at least three times a day, and about half of all product searches start with Google. The way that consumers are engaging with brands is changing, and it’s doing so rapidly.”

He added, “Consumers begin their journey with the tool that many of us use hundreds of times a day. Thus, the connection to revenue becomes clear — it starts with search.”

Source: Craig Dunham and Scott Brinker

The concept of digital transformation has been around for years, but it has taken a whole new form in the wake of recent societal shifts. New technologies and the 2020 pandemic have led to a “greater focus on the need to drive optimal digital experiences for our customers,” said Dunham.

A brand’s website is often the first, and most lasting, impression customers will have of an organization. Here are some strategic actions Dunham recommends marketers take to ensure their online properties are optimized for the search-first age.

“The website is a shared responsibility and it requires proper strategic leadership,” Dunham said. “The first step is to take some time and educate yourself, your leadership, your board and your organization so they more broadly promote organic KPIs as business-wide objectives.”

“There’s great data out there on the impact of the efficiency of SEO as a low-cost acquisition channel,” he added.

Source: Craig Dunham

Aside from sharing communication from Google on the importance of search from a business perspective, marketers can look for case studies from reputable organizations to encourage search prioritization. This can help higher-ups start seeing organic traffic as a key business metric.

“I was in a meeting recently and I had a digital leader say to me that you know website performance should not be an SEO metric — it has to be a business metric,” he said.

Create a cross-functional search ops task force

“Much of the data and insight generated by SEOs and their tools today are rarely utilized to their full potential,” Dunham said. “This is in part due to SEO not being seen as a business priority. As a result, it’s been siloed — pulling in teams from across the organization breaks down those silos.”

The more team members are involved with search processes, the more they’ll see its impact. People from each department will have more opportunities to contribute to growing online visibility using their unique skillsets.

“We know that businesses that are able to implement these organizational-wide search operations systems and practices — connecting a range of perspectives and search activities that are happening — are going to be the ones that will have a competitive advantage,” said Dunham.

Apply SEO testing automation

More and more brands are turning to automation tools to streamline tasks. According to Dunham, these solutions can be used for search-related activities as well.

“Automation can be well-deployed within web development processes,” Dunham said. “Until recently, this technology didn’t exist.”

Brands now have access to a wide variety of automation tools to streamline SEO-related tasks. The key is to pick solutions that align with your organization’s goals and give you full control over their deployment: “There are additional risk mechanisms that can be put in place to ensure you don’t release bad code that will result in large traffic losses, ultimately driving down revenue across your critical web pages,” said Dunham.
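As one illustration of the idea (a sketch, not a specific tool Dunham named): an automated check that runs in a deployment pipeline and fails the build when a rendered page is missing basic SEO elements, catching bad releases before they cost traffic. It uses only Python's standard library, and the sample page is hypothetical:

```python
from html.parser import HTMLParser

class SEOCheck(HTMLParser):
    """Collect the elements a basic SEO smoke test cares about."""
    def __init__(self):
        super().__init__()
        self.titles = 0
        self.has_meta_description = False
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self.titles += 1
        if tag == "meta" and a.get("name") == "description" and a.get("content"):
            self.has_meta_description = True
        if tag == "meta" and a.get("name") == "robots" and "noindex" in (a.get("content") or ""):
            self.noindex = True

def check_page(html: str) -> list:
    """Return a list of problems; an empty list means the page passes."""
    checker = SEOCheck()
    checker.feed(html)
    problems = []
    if checker.titles != 1:
        problems.append(f"expected exactly one <title>, found {checker.titles}")
    if not checker.has_meta_description:
        problems.append("missing meta description")
    if checker.noindex:
        problems.append("page is set to noindex")
    return problems

# Hypothetical rendered page from a staging build:
page = "<html><head><title>Pricing</title><meta name='description' content='Plans'></head></html>"
print(check_page(page))  # → []
```

In a CI job, a non-empty problem list for any critical URL would fail the build, which is the kind of "risk mechanism" guarding against releasing bad code that Dunham describes.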

If brands can optimize their internal process, teams and tools around organic search, they’ll increase their chances of achieving long-term success in the search-first digital landscape.

The post How to build a long-term, search-first marketing strategy appeared first on Search Engine Land.

Jason January 13, 2022