Off with Your Head Terms: Leveraging Long-Tail Opportunity with Content
By Jeff Klein | April 8, 2015
Posted by Simon Penson
Running an agency comes with many privileges, including a first-hand look at large amounts of data on how clients’ sites behave in search, and especially how that behavior changes day-to-day and month-to-month.
While every niche is different and can have subtle nuances that frustrate even the most hardened SEOs or data analysts, there are undoubtedly trends that stick out every so often which are worthy of further investigation.
In the past year, the Zazzle Media team has been monitoring one in particular, and today’s post is designed to shed some light on it in hopes of creating a wider debate.
What is this trend, you ask? In simple terms, it’s what we see as a major shift in the way results are presented, and it’s resulting in more traffic for the long tail.
2014 growth
It’s a conclusion supported by a number of client growth stories over the last 12 months, all of which saw significant growth coming not from head terms, but from an increasing number of URLs earning organic search traffic.
The Searchmetrics visibility chart below is just one example: a brand in the finance space seeing year-over-year digital growth as a direct result of this phenomenon. They’ve even seen some head terms slip back by a couple of places while still growing overall.
To understand why this may be happening, we need a very quick crash course in how Google has evolved over the past two years.
Keyword matching
Google built its empire on a smart system, one that matched “documents” (webpages) to keywords by scanning and organizing those documents based upon keyword mentions.
It’s an approach that looks increasingly simplistic in a “big data” world.
The answer, it seems, is to focus more on the user intent behind that query and get at exactly what it is the searcher is actually looking for.
Hummingbird
The solution to that challenge is Hummingbird, Google’s new “engine” for sorting the results we see when we search.
In the same way that Caffeine, the previous search architecture, allowed the company to produce fresher results and roll out worldwide algorithm changes (such as Panda and Penguin) faster, Hummingbird is designed to do the same for personalized results.
And while we are only at the very beginning of that journey, from the data we have seen over the past year it seems to be crystallizing into more traffic for deeper pages.
Why is this happening? The answer lies in further analysis of what Google is trying to achieve.
Implicit vs. explicit
To better explain this change let’s look at how it is affecting a search for something obvious, like “coffee shop.”
Go back two or so years and a search for this may well have presented 10 blue links of the obvious chains and their location pages.
For the user, however, this isn’t useful—and the search giant knows it. Instead, they want to understand the user intent behind the query, or the “implicit query,” as Tom Anthony has previously explained on this blog.
What that means, in practice, is that a search for “coffee shop” will actually have context, and one of the reasons Google wants you signed in is so it can collect further signals from you to help understand that query in detail. That means things like your location, perhaps even your brand preferences, and so on.
Knowing these things allows the search to be personalized to your exact needs, throwing up the details of the closest Starbucks to your current location (if that is your favourite coffee).
If you then expand this trend out into billions of other searches you can see how deeper-level pages, or even articles, present a better, more refined option for Google.
Here we see how a result for something like “Hotels” may change if Google knows where you are, what you do for a living and therefore what kind of disposable income you have. The result may look completely different, for instance, if Google knows you are a company CEO who stays in nice hotels and has a big meeting the following day, thus requiring a quiet room so you can get some sleep.
Instead of the usual “best hotels in London” result we get something much more personalised and, critically, something more useful.
The new long-tail curve
What this appears to be doing is reshaping the traditional long-tail curve we all know so well. It is beginning to change shape along the lines of the chart below:
That’s a noteworthy shift. With another client of ours, we have seen a 135% increase in the number of pages receiving traffic from search, delivering a 98% increase in overall organic traffic because of it.
The primary factor behind this rise is the creation of the “right” content to take advantage of this changing marketplace. Getting that right requires an approach reminiscent of the way traditional marketing has worked for decades—before the web even existed.
In practice, that means understanding the audience you are attempting to capture and, in doing so, outlining the key questions they are asking every day.
This audience-centric marketing approach is something I have written about previously on this blog and others, as it is critical to understanding that “context” and what your customers or clients are actually looking for.
The way to do that? Dive into data, and also speak to those who may already be buying from or working with you.
Digging into available data
The first step of any marketing process is to collect and process any and all available information about your existing audience and those you may want to attract in the future.
This is a huge subject area—one I could easily spend the next 10,000 words writing about—but it has been covered brilliantly on the more traditional research side by sites like this and this.
The latter of those two links breaks this side of the research process into two critical elements you will need to master to ensure a thorough understanding of who you are “talking” to in search.
Quantitative research concentrates on the numbers: larger data sets and statistical information, as opposed to painting a rich picture of the likes and dislikes of your audience.
Qualitative research focuses on the words, painting in the “richness”: the way your customers speak and explain problems, likes, and dislikes. It’s more a study of human behavior than of stats.
This information can be combined with a plethora of other data sources from CRMs, email lists, and other customer insight pots, but where we are increasingly seeing more opportunity is in the social data arena.
Platforms such as Facebook can give all brands access to hugely valuable big-data insight about almost any audience you could possibly imagine.
What I’d like to do here is explain how to go about extracting that data to form rich pictures of those we are either already speaking to or the very people we want to attract.
There is also little doubt that the amount of insight you have into your audience is directly proportional to the success of your content, hence the importance of this research cycle.
Persona creation
Your data comes to life through the creation of personas, which are designed to put a human face on that data and group it into a small number of shared interest sets.
Again, the point of this post is not to explain how best to manage this process. Posts like this one and this one go over that in great detail—the point here is to go over what having them in place allows you to do.
We’ve also created a free persona template, which can help make the process of pulling them together much easier.
When you’ve got them created, you will soon realize that your personas each have very different needs from a content perspective.
To give you an example of that let’s look at these example profiles below:
Here we can see three very distinct segments of the audience, and immediately it is easy to see how each of them is looking for a different experience from your brand.
Take the “Maturing Spender” for example. In this fictional example for a banking brand we can see he not only has very different content needs but is actually “activated” by a different approach to the buying cycle too.
While the traditional buyer follows a process of awareness, research, evaluation, and purchase, a new kind of purchase behaviour is materializing, driven by social.
In this new world, consumers make more impulsive purchases, often triggered by social sharing. They’ll see something in their social feeds and are more likely to buy there and then (or at least within a few days), especially if there is a limited offer on.
Much of this is driven by our increasingly “disposable” culture that creates an accelerated buying process.
You can learn this and other data-driven insights from the personas, and we recommend using a good persona template, then adding further descriptive detail and “colour” to each one so that everyone understands who they are writing for.
It can also work well to align those characters to famous people, if possible, as doing so makes it much easier to scale understanding across whole organizations.
Having them in place and universally adopted allows you to do many things, including:
- Create focus on the customer
- Allow teams to make and defend decisions
- Create empathy with the audience
Ultimately, however, all of this is designed to ensure you have a better understanding of those you want to converse with, and in doing so you can map out the key questions they ask and understand their individual needs.
If you want to dig into this area more, I highly recommend Mike King’s 2014 post here on Moz for further background.
New keyword research – personas
Understanding the specific questions your audience is asking is where the real win can be found, and the next stage is to utilize the info gleaned from the persona process in the next phase: keyword research.
To do that, let’s walk through an example for our Happy Couple persona (the first from the graphic above) and see how things play out for this fictional banking brand.
The first step is to gather a list of tools to help unearth related keywords. Here are the ones we use:
1. SEMRush
2. Soovle
3. KeywordTool.io
4. Google Autocomplete
5. Forum searches
There are many more that can help, but it is very easy to complicate the process with data, so we like to limit that as much as possible and focus on where we can get the most benefit quickly.
Before we get into the data mining process, however, we begin with a group brainstorm to surface as many initial questions as possible.
To do this, we will gather four people for a quick 15-minute stand-up conversation around each persona. The aim is to gather five questions from which the main research phase can be constructed.
Some possibilities for our Happy Couple example may include:
- How much can I borrow for a mortgage?
- How do I buy a house?
- How large a deposit do I need to buy a house?
- What is the best regular savings account?
From here we can use this framework as a starting point for the keyword research and there is no better place to start than with our first tool.
SEMRush
For those unfamiliar with this tool it is designed to make it easier to accurately assess competitor and market opportunity by plugging into search data. In this example we will use it to highlight longer-tail keyword opportunity based upon the example questions we have just unearthed.
To uncover related keyword opportunity around the first question we type in something similar to the below:
This will highlight a number of phrases related to our question:
As you can see, this gives us a lot of ammunition from a content perspective to enable us to write about this critical subject consistently without repeating the same titles.
Each of those long-tail terms can be analyzed in more depth by clicking on it individually. That will generate a further list of even more specifically related terms.
Soovle
The next stage is to use this vastly underrated tool to further mine user search data. It allows you to gather regular search phrases from sites such as YouTube, Yahoo, Bing, Answers.com and Wikipedia in one place.
The result is something a little like the below. It may not be the prettiest but it can save a lot of time and effort as you can download the results in a single CSV.
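Once you have that CSV, a few lines of code can de-duplicate and group the suggestions by source engine before they go into your master keyword list. This is an illustrative sketch only—the column layout of a real Soovle export may differ, so the `(source, phrase)` row shape here is an assumption to adjust against the actual file.

```python
import csv
from collections import defaultdict

def group_suggestions(rows):
    """Group suggestion phrases by their source engine.

    Assumes each row is (source, phrase) -- the real Soovle export
    layout may differ, so adjust the column order to match your file.
    """
    grouped = defaultdict(set)
    for source, phrase in rows:
        # Normalize case/whitespace so duplicates collapse across engines
        grouped[source].add(phrase.strip().lower())
    return grouped

def load_export(path):
    """Read a Soovle-style CSV export into (source, phrase) rows."""
    with open(path, newline="", encoding="utf-8") as f:
        return [tuple(row[:2]) for row in csv.reader(f) if len(row) >= 2]

# Hypothetical example rows standing in for a downloaded export:
rows = [
    ("youtube", "How much can I borrow for a mortgage "),
    ("bing", "how much can i borrow for a mortgage"),
    ("wikipedia", "Mortgage"),
]
grouped = group_suggestions(rows)
```

The normalization step matters: the same question often appears with different capitalization across engines, and you want it counted once.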
Google Autocomplete / KeywordTool.io
There are several ways you can tap into Google’s Autocomplete data, and with an API in existence there are a number of tools making good use of it. My current favourite is KeywordTool.io, which has its own API and mashes together data from Google, YouTube, Bing, and the Apple App Store.
The real value is in how it spits out that data, as you are able to see suggestions by letter or number, creating hundreds of potential areas for content development. The App Store data is particularly useful, as you will often see greater refinement in search behavior here and as a result very specific ‘questions’ to answer.
A great example for this would be “how to prequalify yourself for a mortgage,” a phrase which would be very hard to surface using Google Autocomplete tools alone.
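The letter-by-letter expansion these tools rely on is simple to sketch yourself: append each letter and digit to a base term to create the seed queries that get fed into autocomplete. This is a minimal illustration of the technique, not the tools’ actual implementation.

```python
import string

def autocomplete_seeds(base):
    """Build the 'a'..'z' and '0'..'9' seed queries that autocomplete
    tools use to widen suggestion coverage for a base term."""
    suffixes = list(string.ascii_lowercase) + list(string.digits)
    return [f"{base} {s}" for s in suffixes]

seeds = autocomplete_seeds("mortgage")
# 36 variants: "mortgage a", "mortgage b", ... "mortgage 9"
```

Each seed is then submitted to the suggestion source, and the returned phrases are pooled and de-duplicated.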
Forum searches
Another fantastic area worthy of research focus is forums. We use these to ask our peers and topic experts questions, so spending some time understanding what is being asked within the key ones for your market can be very helpful.
One of the best ways of doing this is to perform a simple advanced Google search as outlined below:
“keyword” + “forum”
For our example we might type: “mortgage” + “forum”
This then presents us with more than 85,000 results, many of which will be questions that have been asked on this subject.
Examples include:
- First-time buyer’s mortgage guide
- Getting a Mortgage: Boost your Mortgage Chances
- Mortgage Arrears: What help is available?
- Are Fixed Rate Mortgages best?
As you can see, this also opens up a myriad of content opportunities.
Competitive research
Another way of laterally expanding your reach is to look at the content your best competitors are producing.
In this example we will look at two ways of doing that, firstly by analyzing top content and then by looking at what those competitors rank for that you don’t.
Most shared content
There are several tools that can give you a view of the most-shared content, but my personal favourites are Buzzsumo and the awesome new ahrefs Content Explorer.
Below, we see a search for “mortgages” using the tool, and we are presented with a list of content on that subject sorted by “most shared.” The result can be filtered by time frame, language, or even by specific domain inclusions or exclusions.
This data can be exported and titles extracted to be used as the basis of further keyword research around that specific topic area, or within a brainstorm.
For example, I might want to look at where the volume is from an organic search perspective for something like “mortgage paperwork.”
I can type this term into SEMRush and search through related phrases for long-tail opportunity on that specific area.
Competitor terms opportunity
A smart way of working out where you can gain further market share is to dive a little deeper into your key competitors and understand what they rank for and, critically, what you don’t.
To do this, we return to SEMRush and make use of a little-publicized but hugely useful tool within the suite: the Domain Comparison Tool.
It allows you to compare two domains and visualize the overlap they have from a keyword ranking perspective. For this example, we will choose to compare two UK banks – Lloyds and HSBC.
To do that simply type both domains into the tool as below:
Next, click on the chart button and you will be presented with two overlapping circles, representing the keywords that each domain ranks for. As we can see, both rank for a similar number of keywords (the overall number affects the size of the circles) with some overlap but there are keywords from both sides that could be exploited.
If we were working for HSBC, for instance, it is the blue portion of the chart we would be most interested in. We can download the full list of keywords both banks rank for, then sort for those HSBC doesn’t rank for.
You can see in the snapshot below that the data includes columns on where each site ranks for each keyword, so sorting is easy.
Once you have the raw data in spreadsheet format, sort by the “HSBC” column so the terms you don’t rank for sit at the top, then strip away the rest. This leaves you with the opportunity terms you can create content to cover, prioritized by search volume, or by topic area if certain sub-topics matter more within your wider plan.
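The same filter-and-sort step can be done in code once the export is loaded. The sketch below assumes illustrative dictionary keys (`keyword`, `volume`, `lloyds`, `hsbc` with `None` meaning no ranking); a real SEMRush export will need its column names mapped accordingly.

```python
def keyword_gaps(rows, us="hsbc", them="lloyds"):
    """Keep terms the competitor ranks for and we do not,
    sorted by search volume, highest first.

    Assumes each row is a dict with rank values or None when a
    domain does not rank -- adjust keys to match the real export.
    """
    gaps = [r for r in rows if r[them] is not None and r[us] is None]
    return sorted(gaps, key=lambda r: r["volume"], reverse=True)

# Hypothetical rows standing in for the downloaded comparison data:
rows = [
    {"keyword": "fixed rate mortgage", "volume": 9900, "lloyds": 4, "hsbc": None},
    {"keyword": "mortgage calculator", "volume": 60500, "lloyds": 2, "hsbc": 5},
    {"keyword": "first time buyer mortgage", "volume": 12100, "lloyds": 7, "hsbc": None},
]
gaps = keyword_gaps(rows)
```

Sorting by volume surfaces the biggest untapped opportunities first, which makes prioritizing the content calendar straightforward.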
Create the calendar
By this point in the process you should have hundreds, if not thousands of title ideas, and the next job is to ensure that you organise them in a way that makes sense for your audience and also for your brand.
Content flow
To do this properly requires not just a knowledge of your audience via extensive research, but also content strategy.
One of the biggest rules is something we call content flow. In a nutshell, it is the discipline of creating a content calendar that delivers variation over time in a way that keeps the audience engaged.
If you create the same content all of the time it can quickly become a turn-off, and so varying the type (video, image-led piece, infographics, etc.) and read time, or the amount of time you put into creating the piece, will produce that “flow.”
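One rough, illustrative way to sanity-check flow is to scan the planned calendar for long runs of the same content type. This is my own simplified take on the idea, not a formal definition of content flow:

```python
def flow_warnings(calendar, max_run=2):
    """Flag entries that extend a run of more than `max_run`
    consecutive pieces of the same content type."""
    warnings, run, prev = [], 0, None
    for i, (title, ctype) in enumerate(calendar):
        run = run + 1 if ctype == prev else 1
        if run > max_run:
            warnings.append((i, title, ctype))
        prev = ctype
    return warnings

# Hypothetical calendar of (title, content type) pairs:
calendar = [
    ("Mortgage guide", "article"),
    ("Deposit explainer", "article"),
    ("Savings tips", "article"),  # third article in a row gets flagged
    ("House-buying timeline", "infographic"),
]
```

A flagged entry is a prompt to swap in a different format (video, infographic, etc.) at that slot.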
This handy tool can help you sense check it as you go.
Clearly your “other” content requirements as part of your wider strategy will need to fit into this calendar, too. The vast majority of the output here will be article-focused, so it is critical to ensure that other elements of your strategy are also covered to round out your content output.
This free content strategy toolkit download gives you everything you need to get the rest of it right.
The result
This is a strategy we have followed for many of our search-focused clients over the last 18 months, and we have some great real-world case studies to prove that it works.
Below you can see how just one of those has played out in search visibility improvement terms over that period as proof of its effectiveness.
All of that growth correlates directly with a huge increase in the number of URLs receiving traffic from search, which is the key metric for measuring the effectiveness of this strategy.
In this example we saw a 15% monthly increase in the number of URLs receiving traffic from search, with organic traffic up 98% year-on-year despite head terms staying relatively static.
Give it a go for yourself as part of your wider strategy and see what it can do for your brand.