
    The Time It Takes to Write a Buffer Blog Post (And How We Spend Every Minute)

    Wednesday, April 15th, 2015

    In my experience, one of the best ways to write great content is to make time to write great content.

    I’m grateful that the team at Buffer emphasizes the blog as a means of helping others, spreading the word about Buffer, and sharing our learnings and improvements. This allows me to spend the time writing.

    And how do I spend that time?

    I’d love to show you.

    We publish four posts per week on the Buffer blog, each post at least 1,500 words (and typically over …


How to Keep Your Site Fast for Mobile-Friendly

    Monday, April 13th, 2015

    Posted by Zoompf

Cindy Krum recently published a must-read primer on the upcoming Mobile-Friendly changes, which I highly recommend checking out before proceeding. Got it? Good. With the mad rush to optimize mobile sites prior to April 21st, it can be very easy to sacrifice performance in the process. Lest we forget, Google has mentioned on multiple occasions that website performance is also a factor in search ranking, first in 2010 for desktop sites and again in 2013 for mobile sites.

    In this post I’m going to cover a few high-level best practices to keep in mind during your mobile site (re)design efforts. In addition, I suggest you also peruse Google’s excellent documentation on mobile-friendly websites.

    Measuring your mobile site performance

    The first step to improving your mobile performance is to measure where you’re starting. There are a number of excellent free and paid resources to do so, but two of my favorites are Google Chrome’s built-in Developer Tools and WebPageTest. For the sake of simplicity, I’ll be using Chrome Developer Tools in this article.

Not a developer? Don’t worry, using the Chrome tools is really easy:

    1. Open up Chrome (install if necessary)
    2. Hit the little “hamburger” menu (3 stacked lines) in the top-right corner
    3. Select More Tools, then Developer Tools

    You’ll see a nifty screen with lots of juicy info. Most importantly, at the top there’s a drop-down with many different mobile and tablet emulators. Pretty cool.

    Now, select a device of interest, say Apple iPhone 6. Enter your site address in the address bar, hit enter and voila! You’re now seeing your site rendered as an iPhone 6 would see it. Scroll down to the bottom to see some interesting performance stats like total page load time, size of the page, and the total number of requests. Hit the “Network” tab for a particularly helpful waterfall diagram view, as shown below:

[Image: Chrome Developer Tools Network waterfall for an iPhone 6 emulation]

    Now let’s get started…

    Optimize those images for mobile

According to the HTTP Archive, images on average account for over 60% of your total page content. Pretty intuitive: images rule the web. Go ahead and check your own page with Chrome Developer Tools and you’ll likely see similar numbers. When downloading over relatively slow mobile connection speeds, the impact of large images on your site performance can be even more severe.

While it’s always a best practice to optimize your site using lossless and lossy image optimization techniques, there’s another consideration for mobile: should you even be downloading that image to begin with? That big, beautiful 1600px wide “hero” image you use on your desktop site might be completely wasted on the smaller display of a phone or tablet, even if that tablet has a high-resolution or “retina” screen.

    The solution? Consider loading a smaller image just for your mobile users. Be careful, though; there’s a “right” and “wrong” way of doing this.

    Quick aside: for this example, and your mobile site in general, make sure you’re specifying the viewport meta tag in the head section of your page. Basically, this tells the mobile browser you have a responsive mobile site, and not to try to auto-scale a large desktop site down to mobile resolution (ugly!). Additionally if this tag is not present, you will get different results in your Chrome tests below.

    <meta name="viewport" content="width=device-width, initial-scale=1.0" />

    The “wrong” way

    Responsive design makes heavy use of CSS media queries to style your site differently at the smaller viewport sizes used by mobile devices, so an obvious approach to swap out your images might go something like this:

    <!-- DON'T DO THIS -->
    <style> 
        @media (min-width:376px) {
            .mobile_image {
                display: none;
            }
            .desktop_image {
                display: inline;
            }
        }
        @media (max-width:375px) {
            .mobile_image {
                display: inline;
            }
            .desktop_image {
                display: none;
            }
        }
    </style> 
    <img src="mobile.png" class="mobile_image" />
    <img src="desktop.png" class="desktop_image" />
    

    This code displays one image when the screen resolution is wide, and a different/smaller image when the resolution is smaller.

    This looks just fine on the rendered page, but there’s a big problem: both images get downloaded! To verify, load this sample in Chrome and you’ll see something like this:

[Image: Network waterfall showing both the mobile and desktop images being downloaded]

Well, that’s not good. In fact, it’s even worse: you’re wasting time and bandwidth downloading an image that won’t even be shown!

The “right” way

    Instead, consider using the background-image style on a DIV to achieve the same effect, for example:

    <!-- DO THIS -->
    <style>
        @media (min-width:376px) {
            .myimage {
                background-image: url("desktop.png");
                width: 700px;
                height: 550px;
            }
        }
        @media (max-width:375px) {
            .myimage {
                background-image: url("mobile.png");
                width: 350px;
                height: 130px;
            }
        }
    </style>
    <div class="myimage"></div>
    

    Loading in Chrome tools, you’ll now see this:

[Image: Network waterfall showing only the mobile image being downloaded]

    Only the mobile image was loaded… much better! Of course, there is one caveat: to use background-image with a DIV, you need to supply the image width and height in the CSS for that class. This can be cumbersome for a lot of images, or images that change size frequently, but if your “hero” images are relatively static in nature, strategic use of this technique could make a significant improvement to your mobile site performance.

    Takeaway: Where possible, use the CSS media queries and the background-image style to conditionally render mobile images. This may only make sense for your largest images.

    Consider ditching jQuery

What? Did you read that correctly? jQuery is THE library of choice for writing JavaScript; how can you live without it?

jQuery is indeed quite useful, but recall that one of its original design goals was to provide a consistent interface, matching the W3C recommended API, across wildly diverse browsers with different (and often broken) standards implementations. jQuery lets you avoid writing “if Internet Explorer do this, else do that” code.

BUT, jQuery’s unifying interface is much less necessary on mobile. Mobile is dominated by WebKit-derived browsers such as Safari and Chrome, so there are fewer issues to abstract away. And weighing in at a hefty 200 KB uncompressed, jQuery is still a significant library to download, even with liberal use of caching. Even after you compress and minify jQuery, you are dealing with around 30 KB.

But wait, you say: you still want the simplified JavaScript interface jQuery provides? It is pretty nice, so consider Zepto.js instead. While not as fully featured as jQuery, it weighs in at a mere 5 KB compressed, roughly six times smaller! Since Zepto is largely API-compatible with jQuery, you shouldn’t have to rewrite any code to use it. For most basic JavaScript sites, Zepto is more than sufficient.
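
Swapping it in can be as simple as a conditional script loader. The snippet below mirrors the pattern suggested in Zepto’s own documentation (a sketch; the file paths are placeholders): load Zepto on capable browsers and fall back to jQuery on old Internet Explorer, which Zepto doesn’t support.

    <script>
    // Load Zepto where supported; fall back to jQuery on old IE,
    // which lacks '__proto__' and is not supported by Zepto.
    document.write('<script src=' +
        ('__proto__' in {} ? 'zepto' : 'jquery') +
        '.min.js><\/script>');
    </script>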

Takeaway: Minimize the third-party libraries you include, and consider using Zepto.js as an alternative to jQuery if your JavaScript needs are basic.

    Review your caching settings

Smart web developers reduce the size of their resources to minimize page load times. Really smart web developers avoid the need to download those resources in the first place. This is where browser caching comes in. If your images, CSS, or JavaScript rarely change, consider caching them. This way your users only download a resource once, and the next time they hit your site it’s already sitting there on their local machine (or phone or tablet), just waiting to be used.

    Mobify has a nice primer on setting caching headers, and there are many great free tools that can test your caching settings including the super cool REDbot, WooRank, and our own Zoompf. If you’re running an Apache or nginx webserver, consider enabling mod_pagespeed to simplify your caching configuration. If you have a WordPress site, the W3 Total Cache plugin is excellent.
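
To make this concrete, here’s a minimal Apache mod_expires sketch (the content types and lifetimes are illustrative; tune them to how often your resources actually change):

    # Sketch: cache rarely changing static resources for 30 days
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 30 days"
        ExpiresByType image/jpeg "access plus 30 days"
        ExpiresByType text/css "access plus 30 days"
        ExpiresByType application/javascript "access plus 30 days"
    </IfModule>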

Takeaway: Caching is one of the most effective performance optimizations you can make, and it matters more than ever for mobile sites. Review your caching policies and apply caching to your large, infrequently changing libraries and images.

    Love animated GIFs? Your browser doesn’t!

Animated GIFs have seen quite the resurgence of late, but the format is dated and showing its age. Dating back almost 30 years, animated GIFs are bloated and cumbersome to download, especially when your animated GIF is a short movie clip. Consider using HTML5 video instead of an animated GIF. All modern browsers support it, and HTML5 videos are typically 10% or less the size of an equivalent animated GIF.
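
As a minimal sketch (file names are placeholders), the swap can look like the markup below. The autoplay, loop, and muted attributes approximate GIF behavior, and the inner img tag provides a fallback for browsers without HTML5 video support:

    <video autoplay loop muted>
        <!-- Browsers pick the first source they can play -->
        <source src="clip.webm" type="video/webm">
        <source src="clip.mp4" type="video/mp4">
        <!-- Fallback for browsers without HTML5 video support -->
        <img src="clip.gif" alt="animation" />
    </video>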

Another option is Imgur. When you upload animated GIFs to Imgur, it will automatically convert the animation into a format it calls GIFV. GIFV is essentially just an HTML5 video, but with a significantly optimized size. Imgur manages the hosting of your videos and serves the file as either GIFV or GIF depending on the capabilities of your users’ browser (although almost all modern browsers support HTML5 video).

Takeaway: Try to avoid animated GIFs for movie clips or complex animations. The modern video formats used by HTML5 video and GIFV offer significant performance boosts and reduced download times for your users.

    The future: HTTP/2

    The web is slowly evolving towards HTTP/2, and not a moment too soon. HTTP/1.1 is over 15 years old and showing signs of its age, especially when it comes to unreliable/intermittent connectivity in mobile devices. HTTP/2 already enjoys widespread browser and server support. While I wouldn’t recommend rushing into an HTTP/2 adoption for the April 21st Mobile-Friendly change, future support for this protocol should definitely be on your roadmap. You can read more about HTTP/2 and its future impact on SEO and web performance in my earlier post.

Takeaway: Put HTTP/2 adoption on your future roadmap; it’s coming!

    In closing

Building a responsive, mobile-friendly website is more than tweaking styles and tags to please the Google crawler. There are nuanced, mobile-specific considerations that, if ignored, can significantly slow down your mobile site and kill your user experience. Fortunately, there are numerous free tools to help you evaluate your mobile site performance, including WebPageTest, Chrome Developer Tools, Google PageSpeed Insights, and Zoompf’s Free Report. And of course, make sure to test with Google’s own mobile-friendly test tool.

    Now…go forth and start optimizing!


    Elements of Personalization & How to Perform Better in Personalized Search – Whiteboard Friday

    Friday, April 10th, 2015

    Posted by randfish

    From information about your location and device to searches you’ve performed in the past, Google now has a great deal of information it can use to personalize your search results. In today’s Whiteboard Friday, Rand explains to what extent they’re likely using that information and offers five ways in which you can improve your performance in personalized search.

    For reference, here’s a still of this week’s whiteboard.

[Image: Elements of Personalization whiteboard]


    Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about personalization, talking about the elements that can influence personalization as well as some of the tactical things that web marketers and SEOs specifically can do to help make their sites and their content more personalization-friendly.

    How personalization works

    So, what are we talking about when we’re talking about personalization? Well, Google is actually personalizing by a large number of things and probably even a few things I have not listed here that they have not been totally transparent or forthcoming about.

    Logged-in visitors

    The things that we know about include things like:

    • Location. Where is the searcher?
    • Device. What type of device and operating system is the searcher using?
• Browser. We have seen some browser-specific and operating-system-specific forms of searches.
• Search history. Things that you have searched for before, and potentially what you’ve clicked on in the results.
    • Your email calendar. So if you’re using Gmail and you’re using Google Calendar, Google will pull in things that they find on your calendar and data from your email and potentially show that to you inside of search results when you search for very particular things. For example, if you have an upcoming plane flight and you search for that flight number or search around that airline, they may show you, you have an upcoming flight tomorrow at 2:07 p.m. with Delta airlines.
    • Google+. A lot of folks are thinking of it as dead, but it’s not particularly dead, in fact no more so than the last year and a half or so. Google+ results will still appear at the bottom of your search results very frequently if you’re logged in and anyone in your Google+ stream that you follow has shared any link or any post in Google+ with the keywords that you’ve searched for. That’s a very broad matching still. Those results can appear higher if Google determines that there’s more relevancy behind that. You’ll also see Google+ data for people you’re connected to when you search for them, that kind of thing.
    • Visit history. If you have visited a domain while logged into an account many times in the past, I’m not exactly sure how many times or what sort of engagement they look at precisely, but they may bias those results higher. So they might say, “Gosh, you know, you really seem to like eBay when you do shopping. We’re going to show eBay’s results for you higher than we would normally show them in an incognito window or for someone who’s not logged in or someone who isn’t as big an eBay fan as you are.”
• Bookmarks. It’s unclear whether they’re using just the bookmarks from Google Chrome, or the personalization that carries over from Chrome instances, or the fact that bookmarks are also things that people visit frequently. There’s some discussion about what the overlap is there. Not too important for our purposes.

    Logged-out visitors

    If you are logged out, they still have a number of ways of personalizing, and you can still observe plenty of personalization. Your results may be very different from what you see in a totally new browser with no location applied to it, on a different device with different search and visit history.

    Now, remember when I say “Logged out,” I’m not talking about an incognito window. An incognito window would bias against showing anything based on search history or visit history. However, location and device appear to still remain intact. So a mobile device is going to get sometimes different results than a desktop device. Different locations will get different results than other locations. All that kind of stuff.

    Now you might ask, “Quantify this for me, Rand.” Like let’s say we took a sample set of 500 keywords and we ran them through personalized versus non-personalized kinds of searches. What’s the real delta in the results ordering and the difference of the results that we see?

Well, we actually did this. It’s almost 18 months old at this point, but Dr. Pete did this in late 2013. Using the MozCast data set, he compared crawlers, Google Webmaster Tools data, personalized (logged-in) results, and incognito results. You know what? The delta was very small for personalized versus incognito. I suspect that number’s probably gone up. For reference, 1.0 would be perfect correlation, and 0.977 is a very, very high correlation. So we were seeing really similar results for personalized versus incognito, at least 18 months ago.

I suspect that’s probably changed, and it’ll probably continue to change a little bit. However, I would also say that it probably won’t drop that low. I would not expect that you would ever find it lower than 0.8, maybe even 0.9, just because so much of search is intentional navigation, and so much of it cannot be personalized in truly intelligent ways. The results are the best results already. There’s not a whole lot of personalization that might be added in besides potentially showing your Google+ follows or something at the bottom, and things based on your visit history.

    Performing better in personalized search

    So let’s say you want to perform better in personalized search. You have a belief that, hey, a lot of people are getting personalized bias in my particular SERP sets. We’re very local focused, or we’re very biased by social kinds of data, or we’re seeing a lot of people are getting biased in their results to our competitors because of their search history and visit history. What are things that I need to think about?

    Get potential searchers to know and love your brand before the query

    The answer is you can perform better in personalized search in general, overall by thinking about things like getting potential searchers to know and love your brand and your domain before they ever make the query. It turns out that if you’ve gotten people to your site previously through other forms of navigation and through searches, you may very well find yourself higher up in people’s personalized results as a consequence of the fact that they visited you in the past. We don’t know all the metrics that go into that or what precisely Google uses, but we could surmise that there are probably some bars around engagement, visit history, how many times, how frequently in a certain time frame, all that kind of stuff that goes into that search and visit history.

    Likewise, if you can bias people here and rank higher, you may be getting more and more benefit. It can be a snowball effect. So if you keep showing up higher in their rankings, they keep clicking you, they keep finding information that’s useful, they don’t need to go back to the search results and click somebody else. You’re just going to keep ranking in more and more of their queries as they investigate things. For those of you who are full funnel types of content servers, you’re thinking about people as they’re doing research and educating themselves all the way down to the transaction level with their searches, this is a very exciting opportunity.

    Be visible in all the relevant locations for your business

    For location bias, you want to make sure that you are relevant in all the locations for your business or your service. A lot of times that means getting registered with Google Maps and Google+ local business for maps — I can’t remember what it’s called exactly. I think it’s Google+ Local for Business — and making sure that you are not only registered with those places but then also that your content is helping to serve the areas that you serve. Sometimes that can even mean a larger radius than what Google Maps might give you. You can rank well outside of your specific geographies with content that serves those regions, even if Google is not perfectly location connecting you via your address or your Maps registration, those kinds of things.

    Get those keyword targets dialed in

    Getting keyword targeting dialed in, this is important all the time. Where a lot of people fall down in this is they think, “Hey, I only need to worry about keyword targeting on the pages that are specifically intended to be search landing pages. I’m trying to get search traffic to these pages.” But personalization bias means that if you can get keyword targeting dialed in even on pages that are not necessarily search landing pages, Google might say, “Hey, this wouldn’t normally rank for someone, but because you’ve already earned that traffic, because that person is already biased to your brand, your domain, we’re going to surface that higher than we ordinarily would.” That is a powerful potential tool in your arsenal, hence it’s useful to think about keyword targeting on a page specific level even for pages that you might not think would earn search traffic normally.

    Share content on Google+ and connect with your potential customers

Google+ is still, in my opinion, a very valuable place to earn personalized traffic, for two reasons. One, of course, you can get people actually over to your site. You may be able to get potential traffic through Google+. You can appear in those search results right at the bottom for anyone who follows you or anyone who’s connected to you via email and other kinds of Google apps. You may have also noticed that when you email with someone, if they’re using Gmail and their Google+ account is connected, you see in the little right-hand corner there that they’ll show their last post, or their last few posts sometimes, on Google+. Again, also a powerful way to connect with folks and to share content as you’re emailing back and forth with them.

    For brands, that also shows up in search results sometimes. There’s the brand box on the right-hand side, kind of like Knowledge Graph, and it’ll show your last few posts from Google+. So again, more and more opportunities to be visible if you’re doing Google+.

    I am also going to surmise that, in the future, Google might do stuff with this around Twitter. They just finished re-inking that deal where Twitter gives their full fire hose access to Google and Google starts displaying more and more of that stuff in search results. So I think probably still valuable to think about how that connection might form. Definitely still valuable directly to do it in Google+ even if you’re not getting any traffic from Google+.

    Be multi-device friendly and usable

    Then the last one, of course, being multi-device friendly and usable. This is something where Moz has historically fallen down, and obviously we’re going to be fixing that in the months ahead. I actually hope we fix it after April 21st so we can see whether we really take a hit when they do that mobile thing. I think that would be a noble sacrifice, and then we can see how we perform thereafter and then fix it and see if we can get back in Google’s good graces after that.

    So given these tactics and some of this knowledge about how personalized search works, hopefully you can take advantage of personalized search and help inform your teams, your bosses, your clients about personalization and the potential impacts. Hopefully we’ll be redoing some of those studies, too, to be able to tell you, hey, how much more is personalization affecting SEO over the last 18 months and in the years ahead.

    All right, everyone. Thanks again for joining us, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

    Video transcription by Speechpad.com


    The Incredible Shrinking SERP – 2015 Edition

    Thursday, April 9th, 2015

    Posted by Dr-Pete

In the beginning, there were 10 results, and it was good. Then came expanded sitelinks and Google’s 7-result SERP. Around the middle of 2014, we started to hear reports of SERPs with odd numbers of organic results – 9, 8, 6, 5, and even 4 page-1 results. At first, these were sporadic and hard to replicate, but they quietly expanded. This is a recent 4-result SERP for “autism speaks”:

    By some counts, there are as many as 16 non-paid links on this page (not counting images), but by traditional SEO standards, there are only 4 true organic positions for which you can compete. So, what’s going on here? Is it just random, or is there a method to Google’s madness?

    It’s all in the news

For a couple of months, I just assumed these strange result counts were some kind of glitch. Then I noticed an unusual pattern. Last October, Google rolled out the “In The News” Update. This update expanded news results to many new sources, but it also seemed to change the pattern of when news results appear. This is 28 days of data from MozCast’s Feature Graph (10K queries):

    The presence of News results seemed to be cyclical, dipping early in the week and peaking later in the week. I don’t follow News results closely, so it was just a curiosity at first, until I saw another bit of data. This is the average page-1 result count for that same period:

    While the scale of the change was much smaller (please note that both graphs have a restricted Y-axis to make the effect more visible), the opposing shapes of the curves seemed like more than a coincidence. As News results increased, the average page-1 organic result count decreased.

    It’s a vertical, vertical world

    Spot-checking various SERPs, I was able to confirm this effect. If page 1 had a News box, then the organic result count would be decreased by one (to either 9 results or 6, depending on the starting point). Here’s a sample SERP (I’ve removed snippets to simplify the image) for “samsung galaxy tab”:

    This is a basic 10-result SERP, but when a News box comes into play, we’re only left with 9 organic results. This raised the question – were other verticals having a similar impact? Digging deeper, I found that, in addition to News results, Image results and In-depth Articles also occupied one organic position. Remember the example at the top of the post? It’s a brand query, resulting in a 7-result SERP, but it also has News results, Image results, and In-depth Articles. If we do the math: 7 – 1 – 1 – 1 = 4 results. It’s not random at all.

    In the interest of being more methodical, what if we looked at the average page-1 organic result across every combination of verticals in our data set? We’ll stick with a starting point of 10 results, to keep the data clean. Here’s a table with the average counts by vertical combination:

I’ve taken the average out to two decimal places just to be more transparent, but what we’re seeing here is little more than a tiny bit of measurement error. Generally speaking, each instance of a vertical result type (as a whole, not individual links within these verticals) costs a 10-result SERP one organic ranking position. It’s worth noting that SERPs with all 3 verticals are pretty rare, but when they occur, each of those 3 verticals costs one position and one opportunity for you to rank on page 1.

    It’s always something

    So, do the same rules apply to 7-result SERPs? Well, Google isn’t a big fan of making my life easy, so it turns out this gets a bit more complicated. When 7-result SERPs originally launched, our data showed that they almost always came with expanded sitelinks in the #1 organic position. By “expanded sitelinks”, I mean something like the following:

    Sitelinks usually appear for queries that either have a strong brand connotation or at least a dominant interpretation. While we typically use 6-packs of expanded sitelinks as an example, actual counts can vary from 1 to 6. Originally, the presence of any sitelinks yielded a 7-result SERP. Now, it’s gotten a bit more complicated, as shown by the table below:

    Since each row of sitelinks can contain up to 2 links, the general logic seems to be that 1 row of sitelinks equates to 1 additional organic result. If you have 3 rows of sitelinks, then Google will remove 3 organic results from page 1.

Google’s logic here seems to revolve around the actual display of information and the length of the page. As they add some elements, they’re going to subtract others. Since the physical display length of most elements can vary quite a bit, the rules right now are pretty simplistic, but the core logic seems to be based on constraining the total number of results displayed on page 1.

    It’s time to rethink organic

    All of this raises a difficult question – what is an organic result? As SEOs, we typically don’t think of vertical results as “organic” by our fairly narrow definition, but they’re much more organic than paid results or even Knowledge Graph. What’s more, Google is starting to blur the lines with verticals.

For example, in the past couple of weeks, Google has redesigned the look of In-depth Articles twice. You might think “So what? It’s just a design change,” but take a closer look. At the end of March, Google removed the “In-depth articles” header. Here’s an example of the new design (for the query “jobs”):

While the thumbnail images and horizontal dividers still set these results apart somewhat, Google’s intent seems to be to make them appear more organic. Keep in mind, too, that other organic results use thumbnails as well (including videos and recipes).

    Then, just a couple of weeks later (our systems detected this on the morning of April 8th), Google went much farther, removing the thumbnails and even the byline. Here’s part of a screenshot for “Putin”:

    Can you spot the true organic results here? They’re the first two – the rest of this screenshot is In-depth Articles. The only real clue, beside the count and source-code markers, is the horizontal divider on either end of the 3-pack. On mobile, even the dividers are gone, as every result is treated like a “card” (see below).

    As an SEO, I’m still inclined to call these results “vertical” for two reasons: (1) historical precedent, and (2) these results play by different ranking rules. I think reason #2 is the more important one – In-depth Articles are currently dominated by a core set of big publishers, and the algorithm differs quite a bit from regular, organic results.

    It’s only the beginning…

    You wanna get really crazy? Let’s look at an entire SERP for “polar” on an Android device (Moto G). This result also includes In-depth Articles (warning: scrolling ahead):

    Let’s do the math. For starters, it’s a branded result with expanded sitelinks, so we should have a 7-result page. Remember that those last 3 results are In-depth Articles, so we’ll subtract 1, leaving us with what should be 6 results. See the “app pack” in the middle? That’s an Android-specific vertical, and instead of counting the pack as just 1 result, Google is counting each link as a result. So, we’re only left with 3 traditional organic results on this SERP, despite it being packed with information.

    I strongly suspect this trend will continue, and it will probably expand. The definition of “organic” is blurring, and I think that all of these vertical results represent SEO opportunities that can’t be ignored. If we’re stuck in the mindset of only one “true” organic, then our opportunities are going to keep shrinking every day.


    Off with Your Head Terms: Leveraging Long-Tail Opportunity with Content

    Wednesday, April 8th, 2015

    Posted by SimonPenson

    Running an agency comes with many privileges, including a first-hand look at large amounts of data on how clients’ sites behave in search, and especially how that behavior changes day-to-day and month-to-month.

    While every niche is different and can have subtle nuances that frustrate even the most hardened SEOs or data analysts, there are undoubtedly trends that stick out every so often which are worthy of further investigation.

    In the past year, the Zazzle Media team has been monitoring one in particular, and today’s post is designed to shed some light on it in hopes of creating a wider debate.

    What is this trend, you ask? In simple terms, it’s what we see as a major shift in the way results are presented, and it’s resulting in more traffic for the long tail.

    2014 growth

    It’s a conclusion supported by a number of client growth stories throughout the last 12 months, all of whom have seen significant growth coming not from head terms, but from an increasing number of URLs gaining search traffic from organic.

The Searchmetrics visibility chart below is just one example of a brand in the finance space seeing digital growth year-over-year as a direct result of this phenomenon. They’ve even seen some head terms drop back by a couple of places while still seeing this overall growth.

    To understand why this may be happening we need to take a very quick crash course into how Google has evolved over the past two years.

    Keyword matching

    Google built its empire on a smart system; one which was able to match “documents” (webpages) to keywords by scanning and organizing those documents based upon keyword mentions.

It’s an approach that looks increasingly simplistic in a “big data” world.

    The answer, it seems, is to focus more on the user intent behind that query and get at exactly what it is the searcher is actually looking for.

    Hummingbird

    The solution to that challenge is Hummingbird, Google’s new “engine” for sorting the results we see when we search.

    In the same way that Caffeine, the former search architecture, allowed the company to produce fresher results and roll worldwide algorithm changes (such as Panda and Penguin) out faster, Hummingbird is designed to do the same for personalized results.

    And while we are only at the very beginning of that journey, from the data we have seen over the past year it seems to be crystallizing into more traffic for deeper pages.

    Why is this happening? The answer lies in further analysis of what Google is trying to achieve.

    Implicit vs. explicit

    To better explain this change let’s look at how it is affecting a search for something obvious, like “coffee shop.”

    Go back two or so years and a search for this may well have presented 10 blue links of the obvious chains and their location pages.

For the user, however, this isn’t useful—and the search giant knows it. Instead, they want to understand the user intent behind the query, or the “implicit query,” as previously explained by Tom Anthony on this blog.

    What that means, in practice, is that a search for “coffee shop” will actually have context, and one of the reasons for wanting you signed in is to allow the search engine to collect further signals from you to help understand that query in detail. That means things like your location, perhaps even your brand preferences, etc.

    Knowing these things allows the search to be personalized to your exact needs, throwing up the details of the closest Starbucks to your current location (if that is your favourite coffee).

    If you then expand this trend out into billions of other searches you can see how deeper-level pages, or even articles, present a better, more refined option for Google.

    Here we see how a result for something like “Hotels” may change if Google knows where you are, what you do for a living and therefore what kind of disposable income you have. The result may look completely different, for instance, if Google knows you are a company CEO who stays in nice hotels and has a big meeting the following day, thus requiring a quiet room so you can get some sleep.

    Instead of the usual “best hotels in London” result we get something much more personalised and, critically, something more useful.

    The new long-tail curve

    What this appears to be doing is reshaping the traditional long-tail curve we all know so well. It is beginning to change shape along the lines of the chart below:

    That’s a noteworthy shift. With another client of ours, we have seen a 135% increase in the number of pages receiving traffic from search, delivering a 98% increase in overall organic traffic because of it.

    The primary factor behind this rise is the creation of the “right” content to take advantage of this changing marketplace. Getting that right requires an approach reminiscent of the way traditional marketing has worked for decades—before the web even existed.

    In practice, that means understanding the audience you are attempting to capture and, in doing so, outlining the key questions they are asking every day.

    This audience-centric marketing approach is something I have written about previously on this blog and others, as it is critical to understanding that “context” and what your customers or clients are actually looking for.

    The way to do that? Dive into data, and also speak to those who may already be buying from or working with you.

    Digging into available data

    The first step of any marketing process is to collect and process any and all available information about your existing audience and those you may want to attract in the future.

This is a huge subject area—one I could easily spend the next 10,000 words writing about—but it has been covered brilliantly on the more traditional research side by sites like this and this.

The latter of those two links breaks this side of the research process into the two critical elements you will need to master to ensure you have a thorough understanding of who you are “talking” to in search.

• Quantitative research concentrates on the numbers. The focus is on larger data sets and statistical information, as opposed to painting a rich picture of the likes and dislikes of your audience.

• Qualitative research focuses on the words, painting in the “richness”: the way your customers speak and explain problems, likes, and dislikes. It’s more of a study of human behavior than of stats.

    This information can be combined with a plethora of other data sources from CRMs, email lists, and other customer insight pots, but where we are increasingly seeing more opportunity is in the social data arena.

    Platforms such as Facebook can give all brands access to hugely valuable big-data insight about almost any audience you could possibly imagine.

    What I’d like to do here is explain how to go about extracting that data to form rich pictures of those we are either already speaking to or the very people we want to attract.

    There is also little doubt that the amount of insight you have into your audience is directly proportional to the success of your content, hence the importance of this research cycle.

    Persona creation

    Your data comes to life through the creation of personas, which are designed to put a human face on that data and group it into a small number of shared interest sets.

Again, the point of this post is not to explain how to best manage this process. Posts like this one and this one go over that in great detail—the point here is to go over what having them in place allows you to do.

    We’ve also created a free persona template, which can help make the process of pulling them together much easier.

    When you’ve got them created, you will soon realize that your personas each have very different needs from a content perspective.

    To give you an example of that let’s look at these example profiles below:

    Here we can see three very distinct segments of the audience, and immediately it is easy to see how each of them is looking for a different experience from your brand.

    Take the “Maturing Spender” for example. In this fictional example for a banking brand we can see he not only has very different content needs but is actually “activated” by a different approach to the buying cycle too.

    While the traditional buyer will follow a process of awareness, research, evaluation and purchase, a new kind of purchase behaviour is materializing that’s driven by social.

    In this new world we are seeing consumers driven to more impulsive purchases that are often driven by social sharing. They’ll see something in their social feeds and are more likely to purchase there and then (or at least within a few days), especially if there is a limited offer on.

    Much of this is driven by our increasingly “disposable” culture that creates an accelerated buying process.

You can learn this and other data-driven insights from the personas, and we recommend using a good persona template, then adding further descriptive detail and “colour” to each one so that everyone understands whom it is they are writing for.

    It can also work well to align those characters to famous people, if possible, as doing so makes it much easier to scale understanding across whole organizations.

    Having them in place and universally adopted allows you to do many things, including:

    • Create focus on the customer
    • Allow teams to make and defend decisions
    • Create empathy with the audience

    Ultimately, however, all of this is designed to ensure you have a better understanding of those you want to converse with, and in doing so you can map out the key questions they ask and understand their individual needs.

If you want to dig into this area more then I highly recommend Mike King’s post from 2014 here on Moz for further background.

    New keyword research – personas

    Understanding the specific questions your audience is asking is where the real win can be found, and the next stage is to utilize the info gleaned from the persona process in the next phase: keyword research.

To do that, let’s walk through an example for our Happy Couple persona (the first from the graphic above), and see how things play out for this fictional banking brand.

    The first step is to gather a list of tools to help unearth related keywords. Here are the ones we use:

    There are many more that can help, but it is very easy to complicate the process with data, so we like to limit that as much as possible and focus on where we can get the most benefit quickly.

    Before we get into the data mining process, however, we begin with a group brainstorm to surface as many initial questions as possible.

    To do this, we will gather four people for a quick 15-minute stand-up conversation around each persona. The aim is to gather five questions from which the main research phase can be constructed.

    Some possibilities for our Happy Couple example may include:

    • How much can I borrow for a mortgage?
    • How do I buy a house?
    • How large a deposit do I need to buy a house?
    • What is the best regular savings account?

    From here we can use this framework as a starting point for the keyword research and there is no better place to start than with our first tool.

    SEMRush

For those unfamiliar with this tool, it is designed to make it easier to accurately assess competitor and market opportunity by plugging into search data. In this example we will use it to highlight longer-tail keyword opportunity based upon the example questions we have just unearthed.

    To uncover related keyword opportunity around the first question we type in something similar to the below:

    This will highlight a number of phrases related to our question:

    As you can see, this gives us a lot of ammunition from a content perspective to enable us to write about this critical subject consistently without repeating the same titles.

    Each of those long-tail terms can be analyzed ever deeper by clicking on them individually. That will generate a further list of even more specifically related terms.

    Soovle

    The next stage is to use this vastly underrated tool to further mine user search data. It allows you to gather regular search phrases from sites such as YouTube, Yahoo, Bing, Answers.com and Wikipedia in one place.

    The result is something a little like the below. It may not be the prettiest but it can save a lot of time and effort as you can download the results in a single CSV.

    Google Autocomplete / KeywordTool.io

There are several ways you can tap into Google’s Autocomplete data, and with an API in existence there are a number of tools making good use of it. My current favourite is KeywordTool.io, which actually has its own API, mashing data from Google, YouTube, Bing, and the Apple App Store.

    The real value is in how it spits out that data, as you are able to see suggestions by letter or number, creating hundreds of potential areas for content development. The App Store data is particularly useful, as you will often see greater refinement in search behavior here and as a result very specific ‘questions’ to answer.

    A great example for this would be “how to prequalify yourself for a mortgage,” a phrase which would be very hard to surface using Google Autocomplete tools alone.
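
If you’d rather pull suggestions yourself, Google also exposes an unofficial autocomplete endpoint that many of these tools build upon. The Node.js sketch below queries it for a seed phrase; the endpoint is undocumented and may change or be rate-limited, so treat this strictly as an illustration:

    // Sketch: query Google's unofficial autocomplete endpoint for suggestions.
    // The endpoint is undocumented and may change without notice.
    var https = require('https');

    var seed = 'how much can i borrow for a mortgage';
    var url = 'https://suggestqueries.google.com/complete/search?client=firefox&q=' +
        encodeURIComponent(seed);

    https.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var parsed = JSON.parse(body); // [query, [suggestion, ...]]
            console.log(parsed[1]);        // related long-tail phrases
        });
    }).on('error', console.error);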

    Forum searches

    Another fantastic area worthy of research focus is forums. We use these to ask our peers and topic experts questions, so spending some time understanding what is being asked within the key ones for your market can be very helpful.

    One of the best ways of doing this is to perform a simple advanced Google search as outlined below:

    “keyword” + “forum”

    For our example we might type:

    This then presents us with more than 85,000 results, many of which will be questions that have been asked on this subject.

    Examples include:

    • First-time buyer’s mortgage guide
    • Getting a Mortgage: Boost your Mortgage Chances
    • Mortgage Arrears: What help is available?
    • Are Fixed Rate Mortgages best?

    As you can see, this also opens up a myriad of content opportunities.

    Competitive research

    Another way of laterally expanding your reach is to look at the content your best competitors are producing.

    In this example we will look at two ways of doing that, firstly by analyzing top content and then by looking at what those competitors rank for that you don’t.

    Most shared content

There are several tools that can give you a view on the most-shared content, but my personal favourites are Buzzsumo and the awesome new ahrefs Content Explorer.

    Below, we see a search for “mortgages” using the tool, and we are presented with a list of content on that subject sorted by “most shared.” The result can be filtered by time frame, language, or even by specific domain inclusions or exclusions.

    This data can be exported and titles extracted to be used as the basis of further keyword research around that specific topic area, or within a brainstorm.

    For example, I might want to look at where the volume is from an organic search perspective for something like “mortgage paperwork.”

    I can type this term into SEMRush and search through related phrases for long-tail opportunity on that specific area.

    Competitor terms opportunity

    A smart way of working out where you can gain further market share is to dive a little deeper into your key competitors and understand what they rank for and, critically, what you don’t.

To do this, we return to SEMRush and make use of a little-publicized but hugely useful tool within the suite called Domain Comparison Tool.

    It allows you to compare two domains and visualize the overlap they have from a keyword ranking perspective. For this example, we will choose to compare two UK banks – Lloyds and HSBC.

    To do that simply type both domains into the tool as below:

    Next, click on the chart button and you will be presented with two overlapping circles, representing the keywords that each domain ranks for. As we can see, both rank for a similar number of keywords (the overall number affects the size of the circles) with some overlap but there are keywords from both sides that could be exploited.

If we were working for HSBC, for instance, it would be the blue portion of the chart we would be most interested in for this scenario. We can download a full list of keywords that both banks rank for, and then sort by those that HSBC don’t rank for.

    You can see in the snapshot below that the data includes columns on where each site ranks for each keyword, so sorting is easy.

Once you have the raw data in spreadsheet format, we would sort by the “HSBC” column so the terms at the top are those we don’t rank for, and then strip away the rest. This leaves you with the opportunity terms you can create content to cover, and these can be prioritized by search volume or topic area if there are specific sub-topics that are more important than others within your wider plan.
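
If the export is large, a short script can do the sorting and stripping for you. The Node.js sketch below is purely illustrative: the column names are assumptions, and the naive comma-splitting assumes no quoted fields (a real CSV parser is safer), so match it to your actual SEMRush export.

    // Sketch: surface keywords the competitor ranks for but "we" don't.
    // Column names are hypothetical; adjust to the real export headers.
    var fs = require('fs');

    var rows = fs.readFileSync('semrush_export.csv', 'utf8')
        .trim().split('\n')
        .map(function (line) { return line.split(','); }); // naive: no quoted commas

    var header = rows[0];
    var ourCol = header.indexOf('hsbc.co.uk');    // our ranking position column
    var volCol = header.indexOf('Search Volume'); // used to prioritize

    var opportunities = rows.slice(1)
        .filter(function (r) { return r[ourCol] === ''; }) // terms we don't rank for
        .sort(function (a, b) { return Number(b[volCol]) - Number(a[volCol]); });

    console.log(opportunities.slice(0, 50)); // top 50 opportunity terms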

    Create the calendar

By this point in the process you should have hundreds, if not thousands, of title ideas, and the next job is to ensure that you organise them in a way that makes sense for your audience and also for your brand.

    Content flow

Doing this properly requires not just knowledge of your audience via extensive research, but also a content strategy.

    One of the biggest rules is something we call content flow. In a nutshell, it is the discipline of creating a content calendar that delivers variation over time in a way that keeps the audience engaged.

    If you create the same content all of the time it can quickly become a turn-off, and so varying the type (video, image-led piece, infographics, etc.) and read time, or the amount of time you put into creating the piece, will produce that “flow.”

    This handy tool can help you sense check it as you go.

    Clearly your “other” content requirements as part of your wider strategy will need to fit into this strategy, too. The vast majority of the output here will be article-focused, and it is critical to ensure that other elements of your strategy are also covered to round out your content output.

    This
    free content strategy toolkit download gives you everything you need to ensure you get the rest of it right.

    The result

    This is a strategy we have followed for many of our search-focused clients over the last 18 months, and we have some great real-world case studies to prove that it works.

    Below you can see how just one of those has played out in search visibility improvement terms over that period as proof of its effectiveness.

    All of that growth directly correlates with a huge growth in the number of URLs receiving traffic from search and that is a key metric in measuring the effectiveness of this strategy.

    In this example we saw a 15% monthly increase in the number of URLs receiving traffic from search, with organic traffic up 98% year-on-year despite head terms staying relatively static.

    Give it a go for yourself as part of your wider strategy and see what it can do for your brand.
