On August 24, Google confirmed that it changed how it creates titles for search result listings. The confirmation came roughly a week after search professionals began noticing such changes — in the interim (and even after the confirmation), SEOs raised concerns about how these Google-altered titles may affect their traffic.
Unfortunately, title change information isn’t available in Google Search Console or Google Analytics. So, SEOs have turned to third-party tools to see whether their titles are being changed. Below is a list of tools you can use to check for title changes and instructions on how to do so.
Ahrefs. Title changes can be checked in Ahrefs, although it is a manual process. You can check for changes via historical SERPs in Site Explorer > Organic Keywords 2.0.
Since this method shows a list of search results for a given keyword, toggling the “Target only” switch (as shown below), which only shows the snippet from your site, can help you get to the information you’re looking for a bit faster. You can then compare titles by changing dates.
Rank Ranger. The SEO Monitor tool from Rank Ranger is designed to monitor URLs and show you how they perform in Google Search, based on historical data. The data is displayed in a graph that shows ranking changes over time (shown below).
Below the chart is a list of all the changes to the page title and description in Google Search. This means if you or Google make any changes to your title or description, it’ll be displayed here with the date that the change occurred.
Semrush. It is possible to track title changes using Semrush, although the toolset provider does not have a specific feature to do so. For keywords you’ve been tracking in the Position Tracking tool, click on the SERP icon next to the keyword.
That will pull the search results page for the date selected in the report, as shown below.
If you suspect a title was changed, you can confirm this by changing the date in the report and repeating this process to compare titles. Note: you can only view this information for the period you were tracking those particular keywords.
SISTRIX. In the left-hand navigation, under SERPs > SERP-Snippets, there is a button to “Show title changes,” which takes you to this screen:
The red text indicates words that have been dropped from the title and the green text indicates words that have been added.
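If your toolset of choice doesn’t surface these diffs, you can approximate the same added/dropped comparison yourself. Here’s a minimal sketch using Python’s standard-library difflib; the two example titles are hypothetical, and in practice you’d supply your stored title and the title currently shown in the SERP:

```python
from difflib import ndiff

def title_diff(old_title: str, new_title: str):
    """Return the words dropped from and added to a page title."""
    dropped, added = [], []
    for token in ndiff(old_title.split(), new_title.split()):
        if token.startswith("- "):      # word present only in the old title
            dropped.append(token[2:])
        elif token.startswith("+ "):    # word present only in the new title
            added.append(token[2:])
    return dropped, added

# Hypothetical example: your title tag vs. what Google displays.
dropped, added = title_diff(
    "Best Running Shoes 2021 | Example Store - Free Shipping",
    "Best Running Shoes | Example Store",
)
print("Dropped:", dropped)  # words removed from the displayed title
print("Added:", added)      # words introduced in the displayed title
```

Running this comparison on a schedule against your top pages would give you a lightweight change log even without a dedicated tool.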
Other tool providers. We also reached out to a number of other toolset providers. Screaming Frog and Sitebulb do not support this functionality, and Moz and STAT did not immediately respond to our inquiries.
Why we care. Knowing when your titles are getting changed, and what they’re getting changed to, can be useful for analyzing how the changes correlate with your clickthrough rate. Together, these details may help you decide whether to adjust your titles, or, if you’re seeing positive changes, they can tell you what may be resonating with your audience.
New technologies are impacting businesses in virtually every category, including the mortgage industry. By leveraging the latest technologies, mortgage professionals can gain a decisive edge over their competitors.
However, some loan officers may be hesitant to break away from tradition so that they can take on a new approach to mortgage processing. You may even have a few reservations yourself when it comes to adopting new trends in technology.
Once you see how these new technologies can benefit your business, though, you’ll likely be eager to make the switch.
How Can New Tech Give My Business an Edge?
Think about all of the time you waste distributing leads or processing documents manually. Wouldn’t your skills be better suited to more productive endeavors?
That’s why your business needs to leverage the latest mortgage technologies. By incorporating modern tech tools and CRM software, you can:
Reduce Costs and Maximize Profits
Automating various processes such as lead distribution or document processing can save a ton of time. In turn, this reduces your total operating costs. You’ll no longer have to designate employees to handle menial tasks. Instead, your staff can focus their energy on providing an exceptional customer experience.
Having the latest CRM software will also help you to maximize profits. You can reduce processing times and serve more customers. Greater earnings mean that you will have additional funds for advertising. This will allow you to overshadow the competition with strategic marketing practices.
Integrate Seamlessly
Top-tier CRM software integrates seamlessly with existing applications. This means that you can combine it with your existing lead provider and automate tedious processes. Your new software should be able to distribute leads and give you valuable feedback about team member performance.
Many kinds of CRM software also include convenient mobile apps. Staff and clients can use these apps to stay connected and to submit any required information. All parties will be able to stay in the loop from application to closing day.
Eliminate Paper Processing
By digitizing the mortgage processing experience, you can eliminate the need for almost all paper documents. No more stacks of client folders on desks or rooms full of filing cabinets! Instead, you can access all applications from your computer or mobile device.
Guarantee Accuracy of Data
Manually entering client data onto a 1003 form and other documents is time-consuming. Even worse, it leaves the door open for errors. This can cause a client’s application to be rejected, which can delay the mortgage process.
With the latest tech tools at your disposal, your team can automatically import client data. This saves time and guarantees accuracy for all essential documents. You can avoid costly delays and keep your clients on track for their projected closing dates.
Solutions from BNTouch
If you are ready to make the leap and modernize your business model, BNTouch can help. We provide clients with the latest software and tech tools so that they can streamline mortgage processing.
We are confident in our products and want you to have the chance to see them in action. If you are not yet convinced, consider booking a free demo. Once you have interacted with our software, you will be hooked. Contact us today to learn more!
Keyword research is one of the most fundamental practices of SEO. It provides valuable insight into your target audience’s questions and helps inform your content and marketing strategies.
For that reason, a well-orchestrated keyword research strategy can set you up for success. Are you looking to improve the way you’re conducting keyword research?
Join experts from Conductor as they deliver a crash course on keyword research tips, tricks and best practices.
Search Engine Land’s daily brief features daily insights, news, tips, and essential bits of wisdom for today’s search marketer. If you would like to read this before the rest of the internet does, sign up here to get it delivered to your inbox daily.
Good morning, Marketers, and a lot can happen in a year.
This past Sunday we had one-year photos taken to commemorate my daughter hitting that first birthday milestone next week. Getting the photo gallery yesterday morning sent me down memory lane thinking about how much has changed in 365 days. Babies grow and learn at such a rapid pace during their first years. A little potato human that couldn’t lift her head can now walk, communicate, and sleep through the night (mostly, thank goodness).
The same is true for us as search marketers. Think about where you were in your career a year ago. Probably stuck at home trying to weather a pandemic. But in the meantime, you may have started your own business, started a new job, learned new skills, executed a stupendous campaign and more.
As you’re prepping for Q4 of this year, keep that momentum going (or start it back up if you’ve felt stagnant recently). Plan your goals and create a blueprint to execute them. I remember reading a story about someone who wanted to go back to school in their 50s and they were worried that it was too late in life to “start over” and go to a four-year college.
The motivational part was this: Those years will pass by whether you work toward your goals or not. So you might as well get started on them now.
Carolyn Lyden, Director of Search Content
Ask the expert: Demystifying AI and machine learning in search
The world of AI and machine learning has many layers and can be quite complex to learn. Many terms are out there, and unless you have a basic understanding of the landscape, it can be quite confusing. In this article, expert Eric Enge will introduce the basic concepts and try to demystify it all for you.
Search marketers should remember their power in the Google-SEO relationship
Google has essentially said that SEOs (or those attempting SEO) have, for quite some time (since 2012), not always used page titles as intended. “Title tags can sometimes be very long or ‘stuffed’ with keywords because creators mistakenly think adding a bunch of words will increase the chances that a page will rank better,” according to the Search Central blog. Or, in the opposite case, the title tag hasn’t been optimized at all: “Home pages might simply be called ‘Home’. In other cases, all pages in a site might be called ‘Untitled’ or simply have the name of the site.” And so the change is “designed to produce more readable and accessible titles for pages.”
This title tag system change seems to be another one of those that maybe worked fine in a lab, but is not performing well in the wild. The intention was to help searchers better understand what a page or site is about from the title, but many examples we’ve seen have shown the exact opposite.
The power dynamic is heavily weighted to Google’s side, and they know it. But the key is to remember that we’re not completely powerless in this relationship. Google’s search engine, as a business, relies on us (in both SEO and PPC) participating in its business model.
Search Shorts: YouTube on misinformation, improving ROAS in Shopping and why it’s time to get responsive
YouTube outlines its approach to policing misinformation and the challenges in effective action. “When people now search for news or information, they get results optimized for quality, not for how sensational the content might be,” wrote Neal Mohan, chief product officer at YouTube.
How to improve Google Shopping Ads ROAS with Priority Bidding. “If you feel more comfortable with Search and Display PPC campaigns, manual is a safe bet as you dip your toes into Shopping,” wrote Susie Marino for WordStream.
Forget mobile-first or mobile-only — It’s time to get truly responsive. “If you’re thinking about your website in terms of the desktop site, welcome to the 2010s. If you’re thinking about it mobile-first, welcome to the 2020s. But it’s 2021. It’s time to think about your site the way Good Designers do; it’s time to get responsive,” said Jess Peck in her latest post.
What We’re Reading: Google’s local search trends: From saturation to depth of content and personalization
The focus of GMB has shifted in recent years from getting more businesses to sign up for the listing service to getting business owners or managers to add even more information about their companies on the platform.
“The new GMB mission is to have businesses provide as much relevant information for as many content areas as possible. Beyond basic contact info, these opportunities include photos, action links, secondary hours, attributes, service details, and several other features. The intent is to make GMB as replete with primary data as possible, so that any pertinent detail a consumer might need to know before choosing a local business is provided in-platform, without the need to click through to other sources,” wrote Damian Rollison for StreetFight.
The local trend matches Google’s overall direction in the search engine results pages: answering everything right there in the SERP. It also does this by personalizing the local results to what it believes is the searcher’s intent.
“The term that has arisen to describe the most prevalent type of local pack personalization is ‘justifications’ (this is apparently Google’s internal term for the feature). Justifications are snippets of content presented as part of the local pack — or, in some cases, as part of the larger business profile — in order to ‘justify’ the search result to the user. Justifications pull evidence from some less-visible part of GMB, from Google users, from the business website, or from local inventory feeds, and publish that evidence as part of the search result,” said Rollison.
So why should marketers care about this? “Personalization represents a broad range of opportunities for businesses to drive relevant traffic from search to store. Answers to questions, photos, website content, and much more can be optimized according to the products and services you most want to surface for in search.“
SANTA ANA, CA, USA, August 25, 2021 /EINPresswire.com/ — LenderHomePage today unveiled details of two new features to its premier mortgage point-of-sale platform, Loanzify POS. The newly released Spanish version digital 1003 and CreditConnect self-pay credit pull are breakthrough features designed to help mortgage professionals better serve their markets and exponentially expand revenue opportunities while saving costs.
First launched in 2019, Loanzify POS is a loan management platform that allows individual and enterprise-level originators to increase their production, accelerate the loan lifecycle, and provide a better consumer experience.
Latino homeownership in 2020 was 49%, up from 45% in 2014, according to a study by the National Association of Hispanic Real Estate Professionals (NAHREP). This four-percentage-point increase is even more notable when considering data from the Urban Institute, which forecasts that Latino consumers will comprise 70% of homeownership growth from 2020-2040 and will serve as the primary engine of the US real estate market.
In response to the exceptional homeownership growth rate that Latinos are experiencing as well as the demand from the industry, LenderHomePage unveiled a Spanish version of their already successful digital mortgage application, part of the Loanzify POS platform. This new release maintains all the features of the English version, including friendly interview-style phrasing, help prompts, and an intelligent UX design with configurable automation and branding capabilities.
Loanzify POS Spanish version digital 1003 provides originators the necessary tools to remain competitive in the market while enabling them to deliver an outstanding borrower experience to the often under-served Latino consumer.
CreditConnect Self-Pay Credit Check:
While digital loan intake and processing significantly reduce loan origination costs, the overall cost per loan still remains high at several thousand dollars per loan. One of the overhead expenses for every loan is the creditworthiness analysis conducted with a credit check. With an average price of $39 per inquiry, each analysis that does not result in a loan becomes an added cost, and in aggregate this can reduce an originator’s profits by thousands of dollars every year. If you have a few loan officers in your company, this expense grows even faster, especially in today’s environment of high demand for home financing, where many people are still attempting to refinance or purchase new homes.
Utilizing eCommerce functionality, Loanzify POS now integrates with payment processors, starting with PayPal, one of the most trusted transaction processors. This quickly and securely empowers the prospective borrower to self-pay and pull their own credit in real time during the application intake, effectively transferring the cost of running a credit report to the consumer. Not only does this cut potentially thousands of dollars in operational expenses, but it also helps separate the “tire-kickers” from the truly motivated mortgage consumers, curtailing costs accrued from dealing with unqualified borrowers.
“Our customers ask and we deliver. That has been our motto for years,” says Rocky Foroutan, CEO of LenderHomePage.com. “One of the most fulfilling parts of my job is when I get on the phone with a client and we co-invent a new feature for our platform,” Foroutan added. “Both of these features were direct requests by our clients and now are powerful tools added to our software. We are glad we were able to once again meet our clients’ expectations.”
The world of AI and machine learning has many layers and can be quite complex to learn. Many terms are out there, and unless you have a basic understanding of the landscape, it can be quite confusing. In this article, expert Eric Enge will introduce the basic concepts and try to demystify it all for you. This is also the first of a four-part article series covering many of the more interesting aspects of the AI landscape.
Current Google AI Algorithms: Rankbrain, BERT, MUM, and SMITH
Basic background on AI
There are so many different terms that it can be hard to sort out what they all mean. So let’s start with some definitions:
Artificial Intelligence – This refers to intelligence possessed/demonstrated by machines, as opposed to natural intelligence, which is what we see in humans and other animals.
Artificial General Intelligence (AGI) – This is a level of intelligence where machines are able to address any task that a human can. It does not exist yet, but many are striving to create it.
Machine Learning – This is a subset of AI that uses data and iterative testing to learn how to perform specific tasks.
Deep Learning – This is a subset of machine learning that leverages highly complex neural networks to solve more complex machine learning problems.
Natural Language Processing (NLP) – This is the field of AI focused specifically on processing and understanding language.
Neural Networks – This is one of the more popular types of machine learning algorithms which attempts to model the way that neurons interact in the brain.
These are all closely related and it’s helpful to see how they all fit together:
In summary, artificial intelligence encompasses all of these concepts, deep learning is a subset of machine learning, and natural language processing uses a wide range of AI algorithms to better understand language.
Sample illustration of how a neural network works
There are many different types of machine learning algorithms. The most well-known of these are neural network algorithms, so to provide you with a little context, that’s what I’ll cover next.
Consider the problem of determining the salary for an employee. For example, what do we pay someone with 10 years of experience? To answer that question we can collect some data on what others are being paid and their years of experience, and that might look like this:
With data like this we can easily calculate what this particular employee should get paid by creating a line graph:
For this particular person, it suggests a salary of a little over $90,000 per year. However, we can all quickly recognize that this is not really a sufficient view as we also need to consider the nature of the job and the performance level of the employee. Introducing those two variables will lead us to a data chart more like this one:
It’s a much tougher problem to solve but one that machine learning can do relatively easily. Yet, we’re not really done with adding complexity to the factors that impact salaries, as where you are located also has a large impact. For example, San Francisco Bay Area jobs in technology pay significantly more than the same jobs in many other parts of the country, in large part due to the large differences in the cost of living.
The basic approach that neural networks use is to guess at the correct equation using the variables (job, years of experience, performance level), calculate the potential salary using that equation, and see how well the results match our real-world data. This process of tuning the neural network is referred to as “gradient descent”. The simple English way to explain it would be to call it “successive approximation.”
The original salary data is what a neural network would use as “training data” so that it can know when it has built an algorithm that matches up with real-world experience. Let’s walk through a simple example starting with our original data set with just the years of experience and the salary data.
To keep our example simpler, let’s assume that the neural network that we’ll use for this understands that 0 years of experience equates to $45,000 in salary and that the basic form of the equation should be: Salary = Years of Service * X + $45,000. We need to work out the value of X to come up with the right equation to use. As a first step, the neural network might guess that the value of X is $1,500. In practice, these algorithms make these initial guesses randomly, but this will do for now. Here is what we get when we try a value of $1,500:
As we can see from the resulting data, the calculated values are too low. Neural networks are designed to compare the calculated values with the real values and provide that as feedback which can then be used to try a second guess at what the correct answer is. For our illustration, let’s have $3,000 be our next guess as the correct value for X. Here is what we get this time:
As we can see, our results have improved, which is good! However, we still need to guess again because we’re not close enough to the right values. So, let’s try a guess of $6,000 this time:
Interestingly, we now see that our margin of error has increased slightly, but we’re now too high! Perhaps we need to adjust our equation back down a bit. Let’s try $4,500:
Now we see we’re quite close! We can keep trying additional values to see how much more we can improve the results. This brings into play another key value in machine learning which is how precise we want our algorithm to be and when do we stop iterating. But for purposes of our example here we’re close enough and hopefully you have an idea of how all this works.
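The guess-and-check loop walked through above can be sketched in a few lines of code. This is a toy illustration of “successive approximation” under the article’s assumptions (a $45,000 base salary and the equation Salary = Years of Service * X + $45,000), not real gradient descent as implemented in ML libraries; the training data below is invented so that X = $4,500 fits it exactly:

```python
# Invented training data: (years of experience, actual salary).
training_data = [(2, 54_000), (5, 67_500), (10, 90_000), (15, 112_500)]

BASE = 45_000  # assumed salary at 0 years of experience

def mean_error(x: float) -> float:
    """Average signed error of (years * x + BASE) vs. the real salaries."""
    return sum((years * x + BASE) - actual
               for years, actual in training_data) / len(training_data)

# Successive approximation: nudge x opposite to the sign of the error,
# taking smaller steps as we home in on the answer.
x, step = 1_500.0, 1_000.0
for _ in range(100):
    err = mean_error(x)
    if abs(err) < 1.0:          # close enough; stop iterating
        break
    x += step if err < 0 else -step
    step *= 0.9

print(f"Estimated X: ${x:,.0f} per year of experience")
```

The loop over-guesses, under-guesses, and converges toward $4,500, mirroring the $1,500 / $3,000 / $6,000 / $4,500 sequence above.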
Our example machine learning exercise had an extremely simple algorithm to build, as we only needed to derive an equation in this form: Salary = Years of Service * X + $45,000 (aka y = mx + b). However, if we were trying to calculate a true salary algorithm that takes into account all the factors that impact salaries, we would need:
a much larger data set to use as our training data
to build a much more complex algorithm
You can see how machine learning models can rapidly become highly complex. Imagine the complexities when we’re dealing with something on the scale of natural language processing!
Other types of basic machine learning algorithms
The machine learning example shared above is an example of what we call “supervised machine learning.” We call it supervised because we provided a training data set that contained target output values and the algorithm was able to use that to produce an equation that would generate the same (or close to the same) output results. There is also a class of machine learning algorithms that perform “unsupervised machine learning.”
With this class of algorithms, we still provide an input data set but don’t provide examples of the output data. The machine learning algorithms need to review the data and find meaning within the data on their own. This may sound scarily like human intelligence, but no, we’re not quite there yet. Let’s illustrate with two examples of this type of machine learning in the world.
One example of unsupervised machine learning is Google News. Google has the systems to discover articles getting the most traffic from hot new search queries that appear to be driven by new events. But how does it know that all the articles are on the same topic? While it can do traditional relevance matching the way it does in regular search, in Google News this grouping is done by algorithms that determine similarity between pieces of content.
As shown in the example image above, Google has successfully grouped numerous articles on the passage of the infrastructure bill on August 10th, 2021. As you might expect, each article focused on describing the event and the bill itself likely has substantial similarities in content. Recognizing these similarities and grouping the articles accordingly is also an example of unsupervised machine learning in action.
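As a very rough sketch of similarity-based grouping, here’s a toy example that clusters headlines by word overlap (Jaccard similarity). Real news-clustering systems rely on far more sophisticated language models; the headlines and threshold below are invented for illustration:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def group_articles(headlines, threshold=0.3):
    """Greedily group headlines whose word overlap exceeds the threshold."""
    groups = []
    for h in headlines:
        for g in groups:
            if jaccard(h, g[0]) >= threshold:
                g.append(h)      # similar enough: join an existing group
                break
        else:
            groups.append([h])   # no match: start a new group
    return groups

# Invented headlines covering two different stories.
headlines = [
    "Senate passes infrastructure bill in bipartisan vote",
    "Infrastructure bill passes Senate after bipartisan push",
    "Tech stocks rally as markets hit record highs",
]
for group in group_articles(headlines):
    print(group)
```

The first two headlines share most of their words, so they end up in one group; the third lands in a group of its own.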
Another interesting class of machine learning is what we call “recommender systems.” We see this in the real world on e-commerce sites like Amazon, or on movie sites like Netflix. On Amazon, we may see “Frequently Bought Together” underneath a listing on a product page. On other sites, this might be labeled something like “People who bought this also bought this.”
Movie sites like Netflix use similar systems to make movie recommendations to you. These might be based on specified preferences, movies you’ve rated, or your movie selection history. One popular approach to this is to compare the movies you’ve watched and rated highly with movies that have been watched and rated similarly by other users.
For example, if you’ve rated 4 action movies quite highly, and a different user (who we’ll call John) also rates action movies highly, the system might recommend to you other movies that John has watched but that you haven’t. This general approach is what is called “collaborative filtering” and is one of several approaches to building a recommender system.
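Here’s a minimal sketch of that user-based collaborative filtering idea. The ratings, similarity measure, and cutoff values are invented for illustration; production recommender systems are considerably more elaborate:

```python
# Invented user ratings (1-5) for a handful of movies.
ratings = {
    "you":  {"Die Hard": 5, "Mad Max": 4, "John Wick": 5},
    "john": {"Die Hard": 5, "Mad Max": 5, "John Wick": 4, "Speed": 5, "Heat": 4},
    "sam":  {"The Notebook": 5, "Titanic": 4, "Die Hard": 1},
}

def similarity(a: dict, b: dict) -> float:
    """Average rating agreement over movies both users rated (0 to 1)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1 - abs(a[m] - b[m]) / 4 for m in shared) / len(shared)

def recommend(user: str, min_sim: float = 0.6):
    """Suggest movies rated >= 4 by similar users that `user` hasn't seen."""
    seen = set(ratings[user])
    suggestions = set()
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        if similarity(ratings[user], their_ratings) >= min_sim:
            suggestions |= {m for m, r in their_ratings.items()
                            if r >= 4 and m not in seen}
    return sorted(suggestions)

print(recommend("you"))
```

Because “john” rates action movies much like “you” does, his highly rated movies that you haven’t seen surface as recommendations, while “sam”, whose tastes diverge, contributes nothing.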
Note: Thanks to Chris Penn for reviewing this article and providing guidance.
Search Engine Land’s daily brief features daily insights, news, tips, and essential bits of wisdom for today’s search marketer. If you would like to read this before the rest of the internet does, sign up here to get it delivered to your inbox daily.
Good morning, Marketers, the title tag debacle has to improve, right?
Google has confirmed it changed how it goes about creating titles for search result listings, and you can read more about that below, but the explanation isn’t very cathartic for search professionals who meticulously craft their titles, factoring in things like keyword research and audience personas.
The company says that, in its tests, searchers actually prefer the new system. It’s hard to believe that people prefer outdated titles that refer to Joe Biden as the vice president, to point out a particularly egregious example. Google knows this — in fact, it has even set up a thread where you can upload examples and leave feedback. That’s a promising start, but in the meantime, click-through rates are suffering and users may be confused.
This is an interesting predicament because the vast majority of sites don’t have SEOs to optimize their titles, which means that the change may potentially benefit those sites and the users who are searching for that content. However, it may negate or even hurt the work that our industry is putting in. Is this a case of the greater good outweighing the needs of the few? Has your brand been impacted? What’s your outlook on this? Send your thoughts my way: gnguyen@thirddoormedia.com.
George Nguyen, Editor
Google confirms it changed how it creates titles for search result listings
After more than a week of bewilderment and, for some, outrage, Google confirmed that it changed how it creates titles for search result listings on Tuesday evening. Previously, Google used the searcher’s query when formulating the title of the search result snippets. Now, the company says it generally doesn’t do that anymore. Instead, Google’s new system is built to describe what the content is about, regardless of the query that was searched.
The new system makes use of text that users can visually see when they arrive on the page (e.g., content within H1 or other header tags, or which is made prominent through style treatments). HTML title tags are still used more than 80% of the time, though, the company said. Google designed this change to produce more readable and accessible titles, but, for many, it seems to be having adverse effects — just check out the replies to the Twitter announcement. Fluctuations in your click-through rate may be related to this change, so we suggest you make note of it in your reporting.
SERP trends of the rich and featured: Top tactics for content resilience in a dynamic search landscape
The search results change all the time — not just the order of the results, but the features and user interface as well. Over the years, we’ve seen blue links get crowded out in favor of featured snippets, then featured snippets became less prominent as knowledge panels proliferated, and so the cycle goes. So, how do you ensure that your content stays resilient and continues to attract the traffic that your business relies on as the SERP evolves?
At SMX Advanced, Crystal Carter, senior digital strategist at Optix Solutions, outlined the following potential strategies for greater SERP resilience:
Optimize content for mixed media featured snippet panel results. There are often multiple site link opportunities in featured snippets (from images and/or videos), so you might be able to attract clicks even if you don’t “own” the snippet.
Create knowledge hubs for potential contextual linking developments. Google often experiments with new features on live search results, as it did with image carousels with featured snippets. Late last year, it tested out contextual links, and for some niches, this could be a way to gain more visibility or further branding if contextual links receive a wider rollout.
Build structured data into your website before rich results arise. It doesn’t take that much effort to do, it informs search engines about the nature of your content and if search engines end up supporting the structured data with a rich result, you’ll be ready.
Use intent-focused long-form content to potentially benefit from passage ranking. This may help you extract extra search visibility from your detailed content.
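As a concrete illustration of the structured data tactic above, here’s a sketch that generates FAQPage markup (one common rich result type) with Python’s standard library. The questions and answers are placeholders; you’d embed the output in a <script type="application/ld+json"> tag with your page’s real content:

```python
import json

# Placeholder FAQ content; swap in your page's real questions and answers.
faq = [
    ("What is passage ranking?",
     "A Google system that can rank a specific passage of a page."),
    ("Do I need structured data for passage ranking?",
     "No, but structured data prepares your pages for other rich results."),
]

# schema.org FAQPage markup built from the question/answer pairs.
markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq
    ],
}

print(json.dumps(markup, indent=2))
```

Generating the markup from your content source keeps it in sync with the visible page, which search engines expect.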
On August 30, Instagram will deprecate support for the “swipe-up” link (in Instagram Stories) in favor of a “Link Sticker,” TechCrunch reported on Monday. The Link Sticker will behave much like the poll, question and location stickers in that creators will be able to resize it, select different styles and place it anywhere on their Stories.
Why we care. This change may enable greater engagement between audiences and creators. The swipe-up link only enabled users to swipe up (or they could bounce to the next Story), but Stories with Sticker Links behave like any other Story: users can react with an emoji or send a reply.
Creators that can access swipe-up links will have access to Sticker Links. For SMBs, that may mean that they still don’t meet the follower threshold, unfortunately. However, Instagram says it’s evaluating whether to expand this feature to more accounts down the road — I wouldn’t hold my breath for that, as more accessibility to Sticker Links may have safety implications, especially if bad actors use it to promote spam or misinformation.
You saw my Post where, now?
Google Posts can now appear on third-party sites. Google has posted a notice stating that “Your posts will appear on Google services across the web, like Maps and Search, and on third-party sites.” The company has yet to provide further detail on the nature of these sites, the implementation or if they’ll simply be embeddable by all. Thanks to Claire Carlile for bringing this to our attention.
What should you ask at the interview? There are a lot of jobs available right now, so you may have your pick. Jasmine Granton has published a great thread of questions you should ask to narrow down your prospects.
Nike’s robots.txt. If I told you what it said, that’d ruin the joke. Go see it for yourself at nike.com/robots.txt. Thanks to Britney Muller for sharing this one.
Working remotely? Here’s how to make friends with your coworkers
We’re now very, very deep into working during/around the pandemic. Over the course of the last year and a half, you (or some of your colleagues) may have switched jobs and, now that organized happy hours have sort of lost their appeal, you may be wondering how to build relationships with new coworkers.
To help us all be less socially awkward, WIRED’s Alan Henry offered the following tips:
Ask colleagues to hop on a video call just to chat about what they’re working on or their interests. You could do this over a coffee, drink or even ice cream. And, be willing to reschedule if you or they just aren’t up for it that day.
Make plans for after you complete a big project. The work is done and people are more likely to be in a celebratory mood.
Join in on the conversation. When someone shares photos of their pets, you can share some of yours too (also works for kids!). Share your sense of humor by commenting on jokes or sending memes.
Follow your coworkers on Twitter, Instagram or TikTok, if you’re comfortable interacting with them on those platforms.
“The fun never happens organically when you’re all stuck behind screens,” Henry wrote. “You have to make it happen, which means putting yourself out there and making yourself (and possibly someone else) slightly uncomfortable.”
Google’s recent change in algorithms that choose which titles show up in SERPs has caused quite a stir in the SEO community. You’ve all probably seen the tweets and blogs and help forum replies, so I won’t rehash them all here. But the gist is that a few people could not care less and lots of people are upset with the changes.
It’s something our PPC counterparts have experienced for a while now: Google engaging in some version of automation overreach, taking away more of their controls and the data behind what’s working and what’s not. We’ve all adapted to (not provided) and we continue to adapt, so I’m sure this will be no different, but the principle is what’s catching a lot of SEOs off guard.
What’s going on?
Google has essentially said that SEOs (or those attempting SEO) have not always used page titles as they should, and that it has been adjusting titles since 2012. “Title tags can sometimes be very long or ‘stuffed’ with keywords because creators mistakenly think adding a bunch of words will increase the chances that a page will rank better,” according to the Search Central blog. Or, in the opposite case, the title tag hasn’t been optimized at all: “Home pages might simply be called ‘Home’. In other cases, all pages in a site might be called ‘Untitled’ or simply have the name of the site.” And so the change is “designed to produce more readable and accessible titles for pages.”
Presumptuousness aside, someone rightly pointed out that content writers in highly regulated industries often have to go through legal and multiple approvals processes before content goes live. This process can include days, weeks or months of nitpicking over single words in titles and headers, only for Google’s algorithm to decide it can do whatever it wants. Google’s representative pointed out that these companies cannot be liable for content on a third-party site (Google’s). It’s not a one-to-one comparison, but the same industries often have to do the same tedious approvals process for ad copy (which is why DSAs are often a no-no in these niches) to cover their bases for the content that shows up solely in Google’s search results.
When I work with SEO clients, I often tell them that instead of focusing on Google’s goals (which many get caught up in), we need to be focusing on our customers’ goals. (You can check out my SMX Advanced keynote, which is essentially all about this — or read the high points here.) Google says it’s moving toward this automation to improve the searchers’ experience. But I think it’s important to note that Google is not improving the user experience because it’s some benevolent overlord that loves searchers. It’s doing it to keep searchers using Google and clicking ads.
Either way, the message seems to be “Google knows best” when it comes to automating SERPs. In theory, Google has amassed tons of data across probably millions of searches to train their models on what searchers want when they type something into the search engine. However, as the pandemonium across the SEO community indicates, this isn’t always the case in practice.
Google’s history of half-baked ideas
Google has a history of shipping half-baked concepts and ideas. It might be part of the startup culture that fuels many tech companies: move fast, break things. These organizations ship a minimum viable product and iterate and improve while the technology is live. We’ve seen it before with multiple projects that Google has launched, done a mediocre job of promoting, and then gotten rid of when no one liked or used them.
I wrote about this a while back when they first launched GMB messaging. Their initial implementation was an example of poor UX and poorly thought out use cases. While GMB messaging may still be around, most SMBs and local businesses I know don’t use it because it’s a hassle and could also be a regulatory compliance issue for them.
The irony is not lost on us that Danny Sullivan thought it was an overstep on Google’s part when it affected a small business in 2013. The idea would be that the technology would hopefully evolve, right? Google’s SERP titles should be more intuitive, not word salads pulled from random parts of a page.
This title tag system change seems to be another one of those that maybe worked fine in a lab, but is not performing well in the wild. The intention was to help searchers better understand what a page or site is about from the title, but many examples we’ve seen have shown the exact opposite.
Google and its advocates continue to claim that this is “not new” (does anyone else hear this phrase in Iago’s voice from Aladdin?), and they’re technically correct. The representatives and Google stans reiterate that the company never said they’d use the title tags you wrote, which given the scope of how terrible this first iteration is showing up to be in SERPs, almost seems like a bully’s playground taunt to a kid who’s already down.
Google is saying it’s making this large, sweeping change in titles because most people don’t know how to correctly indicate what a page is about. But SEOs are often skilled in extensive keyword and user research, so it seems like the pages that should NOT be rewritten are precisely the ones we carefully investigated, planned and optimized.
How far is too far?
I’m one of those people who doesn’t like it, but is often resigned to the whims of the half-baked stunts that Google does because, really, what choice do I have? Google owns their own SERP, but we, as SEOs, feel entitled to it because it’s our work being put up for aggregation. It’s like a group project where you do all the work, and the one person who sweeps in last minute to present to the class mucks it all up. YOU HAD ONE JOB! So while we can analyze the data and trends, we also need to make our feedback known.
SEOs’ relationship with Google has always been chicken and egg to me. The search engine would not exist if we didn’t willingly offer our content to it for indexing and retrieval (not to mention the participation of our PPC counterparts), and we wouldn’t be able to drive such traffic to our businesses without Google including our content in the search engine.
Why do marketers have such a contentious relationship with Google? To put it frankly, Google does what’s best for Google, and often that does not align with what’s best for search marketers. But we have to ask ourselves: where is the line between content aggregator and content creator? I’m not saying that the individuals or teams at Google are inherently evil or even have bad intentions. They likely have the best aspirations for their products and services. But the momentum of the company as a whole feels perpetual at this point, which can make it feel like we practitioners have no say in these matters.
We’ve seen Google slowly take over the SERP with its own properties and features that don’t need a click-through — YouTube, rich snippets, carousels, etc. While I don’t think Google will ever “rewrite” anything on our actual websites, changes like this make search marketers wonder what the next step is, and which of our critical KPIs will potentially fall victim to the search engine’s next big test.
When I interviewed for this position at Search Engine Land, someone asked me about my position on Google (I guess to determine if I was biased one way or another). I’m an SEO first and a journalist second, so my answer was essentially that Google exists because marketers make it so.
To me, the situation is that Google has grown up beyond its original roots as a search engine and has evolved into a tech company and an advertising giant. It’s left the search marketers behind and is focused on itself, its revenues, its bottom line. And that’s what businesses are wont to do when they get to that size. The power dynamic is heavily weighted to Google’s side, and they know it. But the key is to remember that we’re not completely powerless in this relationship. Google’s search engine, as a business, relies on us (in both SEO and PPC) participating in its business model.
SMX Next returns virtually on November 9-10, 2021, focusing on forward-thinking search marketing.
AI and machine learning have already become part of both paid and organic search performance. Commerce platforms are just as powerful as the traditional search engines for driving sales. And new ways to deliver content across search and social platforms are giving creative marketers more options for driving engagement.
SMX Next will explore next-generation strategies, equipping attendees with emerging SEO and PPC tactics as well as expert insights on the future of the search marketing profession.
Whether you’ve been speaking for years or are just dipping your toes into speaking, please consider submitting a session pitch. We are always looking for new speakers with diverse points of view.
The deadline for SMX Next pitches is September 24th!
Here are a few tips for submitting a compelling session proposal:
Present an original idea and/or unique session format.
Include details about what attendees will be able to do better or differently as a result of attending your session.
Include a case study or specific examples and explain how they can be applied in different types of organizations.
Be realistic about what you can present in the time allotted. You can’t cover everything about your topic. Going more in-depth on a narrow topic is often more valuable to the attendee.
Provide tangible takeaways and a plan of action.
Jump over to this page for more details on how to submit a session idea, or directly to this page to create your profile and submit a session pitch.
If you have questions, feel free to reach out to me directly at kbushman@thirddoormedia.com. I’m looking forward to reading your proposals!