Announcing Keyword Explorer: Moz’s New Keyword Research Tool

Posted by randfish

A year ago, in April of 2015, I pitched a project internally at Moz to design and launch a keyword research tool, one of the few areas of SEO we’ve never comprehensively tried to serve. The pitch took effort and cajoling (the actual, internal pitch deck is available here), but eventually received approval, with one big challenge… We had to do it with a team already dedicated to the maintenance and development of our rankings collections and research tools. This project wouldn’t get additional staffing — we had to find a way to build it with only the spare bandwidth of this crew.

Sure, we didn’t have the biggest team, or the ability to work on the project free from our other obligations, but we had grit. We had passion. We wanted to prove ourselves to our fellow Mozzers and to our customers. We had pride. And we desperately wanted to build something that wasn’t just “good enough,” but was truly great. Today, I think we’ve done that.

Meet our new keyword research tool, Keyword Explorer:

If you want to skip hearing about it and just try it out, head on over. You can run 2 free searches/day without even logging in, another 5 with a free community account, and if you’re a Pro subscriber, you’ve already got access. For those who want to learn more, read on!

The 5 big, unique features of Keyword Explorer

Keyword Explorer (which we’ve taken to calling “KWE” for short) has lots of unique features, metrics, and functionality, but the biggest ones are pretty obvious and, we believe, highly useful:

  1. KWE takes you all the way through the keyword research process — from discovering keyword ideas to getting metrics to building a list, filtering the keywords on it, and prioritizing which ones to target based on the numbers that matter.
  2. KWE features metrics essential to the SEO process — two you’re familiar with — Volume and Difficulty — and three that are less familiar: Opportunity, Importance, and Potential. Opportunity estimates the relative CTR of the organic web results on a SERP. Importance is a metric you can modify to indicate a keyword that’s more or less critical to your campaign/project. And Potential is a combination of all the metrics built to help you prioritize a keyword list.
  3. Our volume score is the first volume estimation metric we know of that goes beyond what AdWords reports. We do that using Russ Jones’ volume bucket methodology and adding in anonymized clickstream data from ~1 million real searchers in the US. From there, Russ has built a model that predicts the search volume range a keyword is likely to have with ~95% accuracy.
  4. Keyword suggestions inside KWE come from almost all the sources we saw SEOs accessing manually in their research processes — Keyword Planner data, Google Suggest, Related Searches, other keywords that the ranking pages also ranked for, topic-modeling ideas, and keywords found from our clickstream data. All of these are available in KWE’s suggestions.
  5. Import and export functionality are strongly supported. If you’ve already got a list of keywords and just want KWE’s metrics, you can easily upload that to us and we’ll fetch them for you. If you like the KWE process and metrics, but have more you want to do in Excel, we support easy, powerful, fast exports. KWE is built with power users in mind, so go ahead and take advantage of the tool’s functionality however works best with your processes.

These five are only some of the time-saving, value-adding features in the tool, but they are, I think, enough to make it worthwhile to give Keyword Explorer a serious look.

A visual walkthrough

As an experiment, I’ve created a visual, slide-by-slide walkthrough of the tool. If you’d rather *see* vs. read the details, this format might be for you:

The Power of Moz’s Keyword Explorer from Rand Fishkin

 

And, for those of you who prefer video, we made a short, 2 minute demo of the tool in that format, too:

 

Of course, there’s a ton of nuance and complexity in a product like this, and given Moz’s dedication to transparency, you can find all of that detail in the more thorough explanation below.

Keyword Explorer’s metrics

KWE’s metrics are among the biggest data-driven advances we’ve made here at Moz, and a ton of credit for that goes to Dr. Pete Meyers and Mr. Russ Jones. Together, these two have crafted something extraordinary — unique metrics that we’ve always needed for SEO-based keyword research, but never had before. Those include:

Keyword volume ranges

Nearly every keyword research tool available uses a single source for volume data: Google AdWords’ Keyword Planner. We all know from studying it that the number AdWords provides is considerably off from reality, and last year, Moz’s Russ Jones was able to quantify those discrepancies in his blog post: Keyword Planner’s Dirty Secrets.

Since we know that Google’s numbers don’t actually have precision, but do indicate a bucket, we realized we could create ranges for volume and be significantly more accurate, more of the time. But, that’s not all… We also have access to anonymized clickstream data here at Moz, purchased through a third-party (we do NOT collect or use any of our own user data via, for example, the MozBar), that we were able to employ in our new volume ranges.

Using sampling, trend data, and the number of searchers and searches for a given keyword from the clickstream, combined with AdWords’ volume data, we produced a volume range that, in our research, showed ~95% accuracy with the true impression counts Google AdWords would report for a keyword whose ad showed during a full month.
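The bucketing idea above can be sketched in a few lines. This is purely illustrative, not Moz's model: the bucket boundaries and labels below are invented, and the real system combines clickstream and AdWords signals before a keyword lands in a range.

```python
# Hypothetical sketch of bucketed volume ranges. Boundaries and labels
# are invented for illustration; Moz's actual buckets are not public.
VOLUME_BUCKETS = [
    (0, 0, "no data"),
    (1, 10, "1-10"),
    (11, 100, "11-100"),
    (101, 500, "101-500"),
    (501, 1_000, "501-1k"),
    (1_001, 10_000, "1.1k-10k"),
    (10_001, 100_000, "10.1k-100k"),
    (100_001, 1_000_000, "100.1k-1M"),
]

def volume_range(modeled_volume: int) -> str:
    """Map a modeled monthly search volume onto a human-readable bucket."""
    for low, high, label in VOLUME_BUCKETS:
        if low <= modeled_volume <= high:
            return label
    return "1M+"

print(volume_range(4200))  # → "1.1k-10k"
```

Reporting a range rather than a single number is the key design choice: it communicates the real precision of the underlying data instead of implying a false exactness.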

We’re pretty excited about this model and the data it produces, but we know it’s not perfect yet. As our clickstream data grows, and our algorithm for volume improves, you should see more and more accurate ranges in the tool for a growing number of keywords. Today, we have volume data on ~500mm (half a billion) English-language search queries. But, you’ll still see plenty of “no data” volume scores in the tool as we can access considerably more terms and phrases for keyword suggestions (more on suggestion sources below).

NOTE: KWE's volume data models the quantity of US searches for a given term/phrase (global English volume is usually 1.5–3X those numbers). Thus, while the tool can search any Google domain in any country, the volume numbers will always reflect US volume. In the future, we hope to add volume data for other geos as well.

An upgraded, more accurate Keyword Difficulty score

The old Keyword Difficulty tool was one of Moz’s most popular (it’s still around for another month or so, but will be retired soon in favor of Keyword Explorer). But, we knew it had a lot of flaws in its scoring system. For Keyword Explorer, we invested a lot of energy in upgrading the model. Dr. Pete, Dr. Matt Peters, myself, and Russ had 50+ reply email threads back and forth analyzing graphs, suggesting tweaks, and tuning the new score. Eventually, we came up with a Keyword Difficulty metric that:

  • Has far more variation than the old model — you’ll see way more scores in the 20s and 30s, as well as the 80s and 90s, than with the prior model, which put almost every keyword between 50–80.
  • Accounts for pages that haven’t yet been assigned a PA score by using the DA of the domain.
  • Employs a smarter, CTR-curve model to show when weaker pages are ranking higher and a page/site may not need as much link equity to rank.
  • Adjusts for a few domains (like Blogspot and WordPress) where DA is extremely high, but PA is often low and the inherited domain authority shouldn’t pass on as much weight to difficulty.
  • Concentrates on however many results appear on page 1, rather than the top 20 results.
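To make the bullets above concrete, here is a minimal sketch of a difficulty score in that spirit: average the authority of page-one results, weight each position by an assumed click-through-rate curve, and fall back to DA when a page has no PA yet. The CTR curve values and the scoring formula are assumptions for illustration, not Moz's actual algorithm.

```python
# Illustrative difficulty sketch (NOT Moz's actual model): a weighted
# average of page-one authority, using an assumed CTR curve so that
# higher-ranking pages count for more, with DA as a fallback when a
# page has no PA score yet.
CTR_CURVE = [0.30, 0.15, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.03, 0.02]

def keyword_difficulty(results):
    """results: page-one SERP as dicts with 'pa' (may be None) and 'da'."""
    weights = CTR_CURVE[: len(results)]  # only however many results page 1 has
    total_weight = sum(weights)
    score = 0.0
    for weight, result in zip(weights, results):
        authority = result["pa"] if result["pa"] is not None else result["da"]
        score += weight * authority
    return round(score / total_weight)

serp = [
    {"pa": 70, "da": 90},
    {"pa": None, "da": 60},  # no PA assigned yet: fall back to DA
    {"pa": 40, "da": 50},
]
print(keyword_difficulty(serp))  # → 62
```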

This new scoring model matches better with my own intuition, and I think you’ll find it vastly more useful than the old model.

As you can see from one of my lists above (for Haiku Deck, whose board I joined this year), the difficulty ranges are considerably higher than in the past, and more representative of how relatively hard it would be to rank in the organic results for each of the queries.

A true Click-Through Rate Opportunity score

When you look at Google’s results, it’s pretty clear that some keywords are worthy of pursuit in the organic web results, and some are not. To date, no keyword research tool we know of has attempted to accurately quantify that, but it’s a huge part of determining the right terms and phrases to target.

Once we had access to clickstream data, we realized we could accurately estimate the percent of clicks on a given search result based on the SERP features that appeared. For example, a classic, “ten-blue-links” style search result had 100% of click traffic going to organic results. Put a block of 4 AdWords ads above it, though, and that dropped by ~15%. Add a knowledge graph to the right-hand side and another ~10% of clicks are drawn away.

It would be crazy to treat the prioritization of keywords with loads of SERP features and little CTR on the organic results the same as a keyword with few SERP features and tons of organic CTR, so we created a metric that accurately estimates Click-Through-Rate (CTR), called “Opportunity.”
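One plausible way to sketch such a score: start from 100% of clicks going to organic results and subtract an estimated penalty per SERP feature. The penalty values below are assumptions, loosely based on the examples in the text (~15% for a 4-ad block, ~10% for a knowledge graph); Moz's real model is built from clickstream data, not a fixed lookup table.

```python
# Hedged sketch of an Opportunity-style score. Penalty values are
# illustrative assumptions, not Moz's measured numbers.
FEATURE_PENALTIES = {
    "ads_top_4": 15,       # ~15% of clicks drawn away by a 4-ad block
    "knowledge_graph": 10, # ~10% drawn away by a knowledge panel
    "instant_answer": 12,
    "news": 8,
    "images": 6,
}

def opportunity_score(features):
    """Estimate the % of clicks left for organic results on this SERP."""
    score = 100  # classic ten-blue-links baseline
    for feature in features:
        score -= FEATURE_PENALTIES.get(feature, 5)  # small default penalty
    return max(score, 0)

print(opportunity_score([]))                                # → 100
print(opportunity_score(["ads_top_4", "knowledge_graph"]))  # → 75
```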

The search above for “Keanu” has an instant answer, knowledge graph, news results, and images (further down). Hence, its Opportunity Score is a measly 37/100, which means our model estimates ~37% of clicks go to the organic results.

But, this search, for “best free powerpoint software” is one of those rare times Google is showing nothing but the classic 10 blue links. Hence, its Opportunity Score is 100/100.

If you’re prioritizing keywords to target, you need this data. Choosing keywords without it is like throwing darts with a blindfold on — someone’s gonna get hurt.

Importance scores you can modify

We asked a lot of SEOs about their keyword research process early in the design phases of Keyword Explorer and discovered pretty fast that almost everyone does the same thing. We put keyword suggestions from various sources into Excel, get metrics for all of them, and then assign some type of numeric representation to each keyword based on our intuition about how important it is to this particular campaign, or how well it will convert, or how much we know our client/boss/team desperately wants to rank for it.

That self-created score was then used to help weight the final decision for prioritizing which terms and phrases to target first. It makes sense. You have knowledge about keywords both subjective and objective that should influence the process. But it needs to do so in a consistent, numeric fashion that flows with the weighting of prioritization.

Hence, we’ve created a toggle-able “Importance” score in Keyword Explorer:

After you add keywords to a list, you’ll see the Importance score is, by default, set to 3/10. We chose this number to make it easy to increase a keyword’s importance by 3X and easy to bring it down to 1/3rd. As you modify the importance value, overall Keyword Potential (below) will change, and you can re-sort your list based on the inputs you’ve given.

For example, in my list above, I set “free slideshow software” to 2/10, because I know it won’t convert particularly well (the word “free” often does not). But, I also know that churches and religious organizations love Haiku Deck and find it hugely valuable, so I’ve bumped up the importance of “worship presentation software” to 9/10.

Keyword Potential

In order to prioritize keywords, you need a metric that combines all the others — volume, difficulty, opportunity, and importance — with a consistent, sensible algorithm that lets the best keywords rise to the top. In Keyword Explorer, that metric is “Potential.”

Sorting by Potential shows me keywords that have lots of search volume, relatively low difficulty, relatively high CTR opportunity, and uses my custom importance score to push the best keywords to the top. When you build a list in Keyword Explorer, this metric is invaluable for sorting the wheat from the chaff and identifying the terms and phrases with the most promise.
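Moz hasn't published the exact Potential formula, but one plausible combination looks like this: take a geometric mean of volume, ease (inverse difficulty), and opportunity, then scale by importance relative to its 3/10 default. Everything below is an assumption for illustration.

```python
# Illustrative only: one plausible way to blend the four metrics into a
# single Potential score. The weighting is an assumption, not Moz's formula.
def keyword_potential(volume, difficulty, opportunity, importance=3):
    """volume/difficulty/opportunity on 0-100; importance 1-10 (default 3)."""
    ease = 100 - difficulty
    base = (volume * ease * opportunity) ** (1 / 3)  # geometric mean, 0-100
    # Importance scales relative to the 3/10 default: 9/10 triples the
    # score, 1/10 cuts it to a third, mirroring the behavior described above.
    return min(round(base * importance / 3), 100)

# A high-volume, low-difficulty, high-CTR keyword at default importance:
print(keyword_potential(volume=80, difficulty=30, opportunity=90))  # → 80
```

A geometric mean is a natural choice here because a zero in any one metric (no volume, or no organic CTR opportunity) drags the whole score to zero, which is how you'd want prioritization to behave.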

Keyword research & the list building process

Keyword Explorer is built around the idea that, starting from a single keyword search, you can identify suggestions that match your campaign’s goals and include them in your list until you’ve got a robust, comprehensive set of queries to target.

List building is easy — just select the keywords you like from the suggestions page and use the list selector in the top right corner (it scrolls down as you do) to add your chosen keywords to a list, or create a new list:

Once you’ve added keywords to a list, you can go to the lists page to see and compare your sets of keywords:

Each individual list will show you the distribution of metrics and data about the keywords in it via these helpful graphs:

The graphs show distributions of each metric, as well as a chart of SERP features to help illustrate which types of results are most common in the SERPs for the keywords on your list:

For example, you can see in my Rock & Grunge band keywords, there are a lot of news results, videos, tweets, and a few star reviews, but no maps/local results, shopping ads, or sitelinks, which makes sense. Keyword Explorer uses country-level, non-personalized, non-geo-biased results, and so some SERPs won’t match perfectly to what you see in your local/logged-in results. In the future, we hope to enable even more granular location-based searches in the tool.

The lists themselves have a huge amount of flexibility. You can sort by any column, add, move, or delete in bulk, filter based on any metric, and export to CSV.

If your list gets stale, and you need to update the metrics and SERP features, it’s just a single click to re-gather all the data for every keyword on your list. I was particularly impressed with that feature; to me it’s one of the biggest time-savers in the application.

Keyword Explorer’s unique database of search terms & phrases

No keyword research tool would be complete without a massive database of search terms and phrases, and Keyword Explorer has just that. We started with a raw index of over 2 billion English keywords, then whittled that down to the ~500 million highest-quality ones we felt relatively confident had real volume (collapsing lots of odd suggestions we found via iterative crawls of AdWords, autosuggest, related searches, Wikipedia titles, topic modeling extractions, SERPscape — via our acquisition last year — and more).

Keyword Explorer’s suggestions corpus features six unique filters to get back ideas. We wanted to include all the types of keyword sources that SEOs normally have to visit many different tools to get, all in one place, to save time and frustration. You can see those filters at the top of the suggestions page:

The six filters are:

  1. Include a Mix of Sources
    • This is the default filter and will mix together results from all the others, as well as ideas crawled from Google Suggest (autocomplete) and Google’s Related Searches.
  2. Only Include Keywords With All of the Keyword Terms
    • This filter will show only suggestions that include all of the terms you’ve entered in the query. For example, if you entered “mustache wax” this filter would only show suggestions that contain both the word “mustache” and the word “wax.”
  3. Exclude Your Query Terms to Get Broader Ideas
    • This filter will show only suggestions that do not include your query terms. For example, if you entered “mustache wax,” suggestions might include “facial grooming products” or “beard oil” but nothing with either “mustache” or “wax.”
  4. Based on Closely Related Topics
    • This filter uses Moz’s topic modeling algorithm to extract terms and phrases we found on many web pages that also contained the query terms. For example, keywords like “hair gel” and “pomade” were found on many of the pages that had the words “mustache wax” and thus will appear in these suggestions.
  5. Based on Broadly Related Topics and Synonyms
    • This filter expands upon the topic modeling system above to include synonyms and more broadly related keywords for a more iterative extraction process and a wider set of keyword suggestions. If “Closely Related Topics” suggestions are too far afield for what you’re seeking, this filter often provides better results.
  6. Related to Keywords with Similar Results Pages
    • This filter looks at the pages that ranked highly for the query entered and then finds other search terms/phrases that also contained those pages. For example, many pages that ranked well for “mustache wax” also ranked well for searches like “beard care products” and “beard conditioner” and thus, those keywords would appear in this filter. We’re big fans of SEMRush here at Moz, and this filter type shows suggestions very similar to what you’d find using their competitive dataset.

Some of my favorite, unique suggestions come from the “closely related topics” filter, which uses that topic modeling algorithm and process. Until now, extracting topically related keywords required using something like Alchemy API or Stanford’s topic modeling software combined with a large content corpus, aka a royal pain in the butt. The KWE team, mostly thanks to Erin, built a suitably powerful English-language corpus, and you can see how well it works:

NOTE: Different filters will work better and worse on different types of keywords. For newly trending searches, topic modeling results are unlikely to be very good, and on longer tail searches, they’re not great either. But for head-of-demand-curve and single word concepts, topic modeling often shows really creative lexical relationships you wouldn’t find elsewhere.

SERPs Analysis

The final feature of Keyword Explorer I’ll cover here (there are lots of cool nooks and crannies I’ve left for you to find on your own) is the SERPs Analysis. We’ve broadened the ability of our SERP data to include all the features that often show up in Google’s results, so you’ll see a page much more representative of what’s actually in the keyword SERP:

Holy smack! There are only 3 — yes, THREE — organic results on page one for the query “Disneyland.” The rest is sitelinks, tweets, a knowledge graph, news listings, images — it’s madness. But, it’s also well-represented in our SERPs Analysis. And, as you can see, the Opportunity score of “7” effectively represents just how little room there is for organic CTR.

Over time, we’ll be adding and supporting even more features on this page, and trying to grab more of the metrics that matter, too (for example, after Twitter pulled their tweet counts, we had to remove those from the product and are working on a way to get them back).

Yes, you can buy KWE separately (or get it as part of Moz Pro)

Keyword Explorer is the first product in Moz Pro to be sold separately. It’s part of the efforts we’ve been making with tools like Moz Local, Followerwonk, and Moz Content to offer our software independently rather than forcing you to bundle if you’re only using one piece.

If you’re already a Moz Pro subscriber, you have access to Keyword Explorer right now! If you’re not a subscriber and want to try it out, you can run a few free queries per day (without list building functionality though). And, if you want to use Keyword Explorer on its own, you can buy it for $600/year or $1,800/year depending on your use.

The best part of Keyword Explorer — we’re going to build what you want

There’s lots to like in the new Keyword Explorer, but we also know it’s not complete. This is the first version, and it will certainly need upgrades and additions to reach its full potential. That’s why, in my opinion, the best part of Keyword Explorer is that, for the next 3–6 months, the team that built this product is keeping a big part of their bandwidth open to do nothing but make feature additions and upgrades that YOU need.

It was pretty amazing to have the team’s schedule for Q2 and Q3 of 2016 make the top priority “Keyword Explorer Upgrades & Iterations.” And, in order to take advantage of that bandwidth, we’d love to hear from you. We have dozens (maybe hundreds) of ideas internally of what we want to add next, but your feedback will be a huge part of that. Let us know through the comments below, by tweeting at me, or by sending an email to Rand at Moz.com.

A final note: I want to say a massive thanks to the Keyword Explorer team, who volunteered to take on much more than they bargained for when they agreed to work with me :-) Our fearless, overtime-investing, never-complaining engineers — Evan, Kenny, David, Erin, Tony, Jason, and Jim. One of the best designers I’ve ever worked with — Christine. Our amazingly on-top-of-everything product manager — Kiki. Our superhero-of-an-engineering-manager — Shawn. Our bug-catching SDETs — Uma and Gary. Our product marketing liaison — Brittani. And Russ & Dr. Pete, who helped with so many aspects of the product, metrics, and flow. You folks all took time away from your other projects and responsibilities to make this product a reality. Thank you.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Hacking Your Way to 5x Higher Organic Click-Through Rates (and Better Conversion Rates & Rankings, too)

Posted by larry.kim

[Estimated read time: 13 minutes]

Last month we discussed why organic CTR is kind of a big deal. I believe that click-through rate is tremendously valuable and that achieving above-average CTRs can lead to better rankings, particularly on long tail queries.

But even if you don’t believe click-through rate can impact rankings, optimizing for a higher CTR still means you’re optimizing toward the goal of attracting more clicks. More clicks means more traffic and higher conversion rates — because if you can make people more worked up about your product/solution, that carries through to conversions, leads, and sales.

All great, important things!

So what the heck — why isn’t every SEO obsessed with raising organic click-through rates the way I and many other PPC marketers are?

Image of a unicorn on a purple background. "Always be yourself. Unless you can be a unicorn. Then be a unicorn."

Why isn’t CTR optimization a bigger deal in organic search today?

For starters, it’s ridiculously hard to tell what your organic CTR is for a keyword. Thanks, Google.

In the Search Analytics section of the Search Console, Google only gives you a sampling of 1,000 queries. Because you only have access to a sample of keywords, you can’t look up the CTR for an arbitrary individual keyword.

It’s much easier to find out your CTR in paid search with AdWords. You can type in any word and find out what your CTR is for that word.

Another challenge preventing CTR from being a bigger deal today is Google Analytics, which hasn’t provided keywords to us for years. You can figure out the number of impressions and clicks for your top 1,000 pages, but the limited query data (1 percent of total) is a killer. It might be easy to see your CTR data, but it’s hard to know whether what you can see is good or not.

Also, many people just don’t realize how much leverage there is in increasing CTR. Donkey headlines (the bottom 10%) tend to do around 3x worse than average, whereas unicorn headlines (the top 10%) tend to do around 2x better than average. Converting donkeys into unicorns, then, could increase clicks to your site by as much as 5x.

And one final important point (and yet another reason to kill your donkeys!): low CTRs typically also lead to low conversion rates — this is true for both organic and paid search. You can easily test this out yourself by analyzing your own website data.

Search Query Data for Organic SEO

Conversion Rate vs. CTR for one of my customers.

Introducing Larry’s High CTR Pyramid Scheme

Let’s look at a graph that shows the click-through rate by rank for my 1,000 keywords obtained through Google Search Console:

CTR vs. Ranking

The blue curve shows the CTRs on average for any given spot for all keywords. But that’s an average. An average includes all the top performers (unicorns) as well as the worst performers (donkeys).

There is considerable variance here.

  • The top 10 percent (the unicorns) have CTRs that are more than double the average (~55 percent vs. ~27 percent in first position).
  • The bottom 10 percent (the donkeys) have organic CTRs roughly a third of the average (~8 percent vs. ~27 percent in first position).
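A quick sanity check of the uplift math, using the first-position CTR figures above (donkeys ~8%, average ~27%, unicorns ~55%):

```python
# Uplift arithmetic from the first-position CTR figures cited above.
donkey_ctr, average_ctr, unicorn_ctr = 0.08, 0.27, 0.55

print(round(unicorn_ctr / average_ctr, 1))  # unicorns vs. average → 2.0
print(round(average_ctr / donkey_ctr, 1))   # average vs. donkeys → 3.4
print(round(unicorn_ctr / donkey_ctr, 1))   # donkey-to-unicorn uplift → 6.9
```

That last number is where the "5x or even 6x" claim comes from: a donkey rewritten into a unicorn can multiply its clicks several times over at the same ranking position.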

This is such a great opportunity. But it’s hard to realize just how great your CTR can be.

You can increase clicks by as much as 5x or even 6x by identifying your crappiest keyword donkeys and rewriting their headlines into high-CTR unicorns, rather than relying on stale “optimized” title tag formulas like:

Main Keyword, Long-Tail Keyword Variation 1, Long-Tail Keyword Variation 2.

This is a title tag optimization formula from ancient times — we’re talking B.H. (Before Hummingbird). This is no longer necessary because Google is now much better at inferring query intent.

Welcome to the new world. To help you adapt, I’ve developed a repeatable SEO workflow to turn your donkeys into unicorns.

Behold! It’s Larry’s High CTR Pyramid Scheme! Here’s how it works.

Detecting your donkeys

Donkeys versus Unicorns: Image of a donkey and a unicorn.

This whole process starts by finding your underperforming content donkeys using another of my hacks — Larry’s Donkey Detection Algorithm. Download all of your query data from the Search Console or Google Analytics. Next, graph CTR vs. Average Position for the queries you rank for organically and add a trend line, like this:

Organic Search Query Data - CTR vs. Ranking

The red line here is your average click-through rate.
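The detection pass can be sketched as a few lines of Python. This is an assumed implementation of the workflow described (estimate the average CTR at each ranking position, then flag queries that fall well below that expectation), not Larry's exact spreadsheet method; the threshold is an invented parameter.

```python
# Sketch of a "donkey detection" pass over Search Console query data.
# Assumed workflow: flag queries whose CTR is far below the average CTR
# observed at their ranking position. The 0.5 threshold is an assumption.
from collections import defaultdict

def detect_donkeys(rows, threshold=0.5):
    """rows: list of (query, position, ctr) tuples."""
    by_position = defaultdict(list)
    for _, position, ctr in rows:
        by_position[round(position)].append(ctr)
    averages = {pos: sum(ctrs) / len(ctrs) for pos, ctrs in by_position.items()}
    return [query for query, position, ctr in rows
            if ctr < threshold * averages[round(position)]]

rows = [
    ("ppc tips", 1, 0.50),
    ("adwords guide", 1, 0.45),
    ("seo donkey", 1, 0.05),  # far below the position-1 average
]
print(detect_donkeys(rows))  # → ['seo donkey']
```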

You want to focus ONLY on the keywords at very bottom of your curve. You don’t want to turn any of your unicorns into donkeys. You only want to turn your donkeys into unicorns!

Now you can sort by secondary metrics, such as conversion rates, if that’s what you care most about. Which of those donkeys have the highest conversion rates? Focus on these first because when you’re able to turn that page into a traffic unicorn, it will also convert more!

If you care most about engagement, then you can filter by that metric. If you can improve the CTR of this page, then you can be reasonably confident that more people will engage with your content.

Your content is a diamond in the rough — or a great book with a terrible cover. Now is the time to polish your diamond and help it become exceptional.

Warning: Don’t go crazy reoptimizing your title

"I'm a unicorn": Screenshot of Ralph with an ice cream cone on his forehead from The Simpsons TV show.

Image courtesy of Fox

This is important. You shouldn’t change the title tag over and over every week because this will cause problems in your quest for a magical cure to your donkey blues.

For one, Google will think your title is being dynamically populated. For another, you’re just guessing, which is probably why you have this CTR issue.

Also, multiple changes will make it hard to get a good reading on why the CTR changed. Is it due to the title tag change or is it something else entirely (a SERP change, a competitor change, seasonality, etc.)? If you keep changing it, you won’t have enough statistically significant data to make a data-driven decision.

Additionally, your ranking position could change, which would also further screw up things.

Bottom line: Don’t just go and change titles willy-nilly.

We can make a unicorn — we have the technology!

"Be a unicorn in a sea of donkeys!" A pink unicorn among dozens of gray donkeys.

To improve your organic click-through rate, you’ll need to collect some data. You can do this by creating ads on Google AdWords for no more than $50.

You’re going to create an ad pointing to the page you’re reoptimizing using 10 different headlines. The reason you need 10 headlines is so you can discover your statistical unicorn, the headline with a CTR that stands above the rest in the top 10 percent.

Think of it like a lottery where the odds of winning are 1 in 10. Your odds of winning are much greater if you buy 10 lottery tickets instead of just one, right?

You can absolutely create more headlines; 10 is just the minimum. If you really want to do this well, writing 12, 13, or 14 headlines dramatically increases the odds that you’ll find a unicorn.
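Once the test has run, picking the winner is simple. The sketch below is an assumed analysis step, not an AdWords feature: it requires a minimum number of impressions before trusting a headline's CTR, as a crude stand-in for a proper statistical-significance check.

```python
# Illustrative winner-picking for a headline test. A real analysis should
# test statistical significance; requiring a minimum impression count is
# a crude assumed proxy used here for brevity.
def pick_unicorn(headline_stats, min_impressions=500):
    """headline_stats: dict of headline -> (impressions, clicks)."""
    eligible = {
        headline: clicks / impressions
        for headline, (impressions, clicks) in headline_stats.items()
        if impressions >= min_impressions
    }
    return max(eligible, key=eligible.get) if eligible else None

stats = {
    "How to Write a Book Fast": (1000, 20),
    "Your Book is Doomed: How I Write Way Faster Than You": (1000, 71),
    "X Ridiculously Easy Steps to Write Your Book Faster": (300, 30),  # too few impressions to trust
}
print(pick_unicorn(stats))
```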

Don’t half-ass your new headlines

"Old Man Yells At Cloud" newspaper headline; clip from The Simpsons TV show.

Image courtesy of Fox

I can’t stress this enough: You really have to try out different headlines. It can’t be the same headline, just with insignificant little changes (e.g., commas in different places, different punctuation, upper case vs. lower case).

Pop quiz: How many headlines do you count here?

  1. How to Write a Book Fast
  2. How to Write a Book FAST
  3. How to Write a Book…FAST
  4. How To Write A Book…Fast!
  5. How to write a book, fast.

Did you say 5?

WRONG!

No, the answer is 1.

These aren’t different headlines. They’re just different punctuation and capitalization.

You have to REALLY change the headlines.

Write your headlines using different personas. Who is the person speaking to the reader? Is it the bearer of bad news? The hero? The villain? The comedian? The feel-good friend?

Also, change the emotional trigger in your headlines. You can use emotional drivers like amusement, surprise, happiness, hope, or excitement:

Plutchik's Wheel of Emotions: The top 10 emotional drivers.

Source: Plutchik’s Wheel of Emotions

Other emotions include anger, disgust, affirmation, and fear. All four of these can become huge winners.

Vary your headlines. Get super creative!

What keywords should you choose?

Add the keywords that you were hoping to appear for when you created the content, along with keywords you’re currently ranking for using query data from Google Analytics. Set those keywords to the broad match keyword match type.

Broad match is the default keyword match type and reaches the widest audience. It makes your ad eligible to appear whenever a user’s search query includes any word in your key phrase, in any order, along with synonyms and close variants.

For example, if you use broad match on “luxury car,” your ad might be displayed if a user types “luxury cars,” “fast cars,” “luxury apartments,” or even “expensive vehicles,” which doesn’t include any of the terms in your keyword. Broad match will, in a way, act like RankBrain does — testing your headlines against a diverse set of queries, including related terms.

It’s a perfect keyword sample set.

10 awesome tips to help you write outstanding headlines

Ultimately, you want to think about three things when writing your ads: your target customer; the persona you want to use to speak to them; and what emotionally-charged words you can use to incite action.

Steve Rayson of BuzzSumo recently shared some great research on the five elements of viral headlines. Here’s what your headlines need to have:

  • Emotional Hook: This could be a certain emotional word or superlative — words like: amazing, unbelievable, shocking, disgusting, or inspiring.
  • Content Type: This tells the reader exactly what your content is — is your content images, quotes, pictures, or facts?
  • Topic: Think of this as your keyword — it could be something evergreen like “content marketing” or more news-oriented like a Google algorithm update or SERP test.
  • Format: This sets the expectation of the format your content will be in, whether it’s a listicle, quiz, ebook, or something else.
  • Promise Element: The reader benefit — tell the reader why your content will solve a problem, make them smarter or better at something, or that it provides vital information they need to know.

Here are five additional tips:

  • Choose your words wisely: Go either extremely positive (e.g., best, greatest, biggest) or negative (e.g., stop, avoid, don’t) with your headline word choices.
  • Be specific: Make it clear to the reader what your content is about.
  • Be unique: Show some personality. Create content that nobody else is doing (or improve on what others have already done). Dare to be different from your competitors.
  • Create a sense of urgency: What will the reader learn, lose, fail at, or miss out on if they don’t click right now?
  • Be useful: How does clicking on your content benefit the reader?

So let’s go back to our earlier headline example, How to Write a Book Fast. Based on this advice, what are some new headlines you could test? How about:

  • Write Your Book Fast: X Trusted Time-Saving Tips
  • X Surprising Tricks Nobody Told You About Writing Books Fast
  • How to Finish Writing Your Book 5x Faster
  • Write Fast Right Now: What Published Authors Don’t Want You to Know
  • X Ridiculously Easy Steps to Write Your Book Faster
  • What’s the Secret of Writing Great Books Fast?
  • X Inspiring Tips That Will Help You Write Your Book Faster
  • This Unusual Book Writing Technique Will Make You Write Faster
  • Your Book is Doomed: How I Write Way Faster Than You

Which one of these do you think would win our ad test? The answer may just surprise all of us.

How would you reoptimize this headline based on this advice? I’d love to see your ideas in the comments.

Where to run your ad

By now you may be saying, “Larry this is great, but I’m a little worried about how much this all will cost. Any suggestions to keep costs down?”

YES!

Since we’re just targeting English speakers, you can save money by taking advantage of countries with lower CPCs.

Heat map of average cost per click around the world.

Rather than running ads in New York City, where CPCs would likely be very expensive, maybe you could set up your ads to appear only in Canada (which has 29 percent lower CPCs on average than the U.S.) or in Ireland (which has 40 percent lower CPCs on average).
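As a rough illustration of the arithmetic (the U.S. CPC below is a made-up figure; only the percentage discounts come from the averages above):

```python
us_cpc = 2.00  # hypothetical average U.S. CPC, in dollars
canada_cpc = us_cpc * (1 - 0.29)   # Canada: ~29% lower on average
ireland_cpc = us_cpc * (1 - 0.40)  # Ireland: ~40% lower on average
print(f"Canada: ${canada_cpc:.2f}, Ireland: ${ireland_cpc:.2f}")
# Canada: $1.42, Ireland: $1.20
```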

Prepare your Unicorn Detector

Make sure to set your ads to rotate evenly. You want to ensure that all 10–14 of your ads have a chance to run.

Before analyzing your results, you’ll want at least 200 impressions per ad. This is the number of impressions Google AdWords uses before assigning a Quality Score, but more is better.

Also, you should bid to a specific position (e.g., bid to position 3, 4, or 5) using flexible bid strategies. That way you don’t have to compare CTRs where one ad had a CTR of 20% in position 1 but a 2% CTR in position 8.

Now you can analyze your results and see which headline had the best CTR. Pretty easy, huh?

Usually one of your 10 ads will be a unicorn. However, if all the CTRs turn out about the same (e.g., 2% vs. 2.1%), throw them all out and test more headlines.

"Looks like our unicorn is just a donkey with a plunger stuck to its face." Quote from Dr. Gregory House, House MD.

Your goal is to find an outlier, a headline that generates 2x, 3x, or 4x higher CTR than the rest.
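Once every ad has its 200+ impressions, spotting the outlier is simple division. A minimal sketch (the headline data here is made up for illustration):

```python
from statistics import median

def find_unicorns(ads, multiple=2.0):
    """Flag headlines whose CTR is at least `multiple` times the
    median CTR of the test group. `ads` maps each headline to a
    (clicks, impressions) tuple."""
    ctrs = {h: clicks / imps for h, (clicks, imps) in ads.items()}
    med = median(ctrs.values())
    return [h for h, ctr in ctrs.items() if ctr >= multiple * med]

# Hypothetical test data: (clicks, impressions) per headline
ads = {
    "Write Your Book Fast": (4, 200),    # 2.0% CTR
    "X Surprising Tricks": (5, 210),     # ~2.4% CTR
    "Your Book is Doomed": (21, 205),    # ~10.2% CTR — the unicorn
}
print(find_unicorns(ads))  # ['Your Book is Doomed']
```

Raising `multiple` to 3 or 4 makes the filter stricter, matching the 3x–4x outlier threshold described above.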

Did it work?

Now we’ve reached the end. We’ve identified the donkeys. We have a workflow for auditioning new possible headlines. And we’ve identified the winning headlines. Now what?

You just swap them out. Replace your donkey title with the winning unicorn headline from your PPC ad test, and put it live.

To determine whether you’ve succeeded, track the number of clicks to the page to ensure that your CTR has indeed increased.

This is a ridiculously easy, low-risk, high-return strategy with a high probability of success because the new headline is battle-tested and should do just as well organically.

Conclusion: Say no to low CTR

Abraham Lincoln riding a unicorn through outer space.

Guys, this is crazy. First of all, think about all the SEO tasks you have to do. None of that is easy. It’s all manual work.

Just take link building as one example. You’re hoping for other people to link to you to help you rank better. In the end it’s very much a hit-or-miss approach to SEO because you have no control over whether you actually get the link (or if it will even help).

Also, link building is more of an art, and one that some people just don’t have the skills to do properly. Plus, when done poorly, bad link building can kill your rankings.

Here, the workflow — my High CTR Pyramid Scheme — is entirely within your own control. This is more like on-page SEO, changing titles and text, but in a more methodical, data-driven way.

Optimizing for CTR is very leveraged. You can 5x your CTR if you’re successful in turning a donkey into a unicorn. There are bonus points, too: it should result in even better rankings, which should result in even more clicks. And your conversion rates will improve.

I personally believe that CTR is calculated both at a query/page and at the domain level (like domain and page authority in link building). Since we can’t have CTR data for every possible page/query, it makes sense to have something to fall back on. So by killing off your CTR donkeys, you’re improving your domain CTR score, which should help rankings of all the other pages on your site.

There’s a famous Abraham Lincoln quote: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”

Well, if I had one hour to spend on SEO, I would spend that one hour finding and fixing my donkey headlines, turning them into unicorn headlines. Hour for hour, I’m convinced you’ll see a really great return here.

Your odds of winning the organic CTR lottery are 1 in 10. So go buy 10 lottery tickets!

Are you optimizing for CTR? If not, why not?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from The Moz Blog http://tracking.feedpress.it/link/9375/3228603
via Auto Feed

The Start-to-Finish Guide to Optimizing Your WordPress Blog Posts [Plus a Checklist]

Posted by sergeystefoglo

WordPress is the most popular content management system (CMS) in the world. There’s a good chance you’ll need to optimize or work on a website that uses WordPress, if you haven’t already! Whether you’re a business owner, designer, developer, PPC expert, SEO consultant, or writer — getting familiar with WordPress is a smart move.

When I started out in SEO, I worked with local businesses that hired smaller firms to design or develop their sites. Naturally, most people gravitated towards WordPress as their CMS of choice: it was easy to customize, even easier to maintain, simple to use, and did the job well.

It wasn’t until I started working with websites that were using Joomla or Drupal that I began to appreciate the simplicity and flexibility that WordPress offers. Don’t get me wrong, Joomla and Drupal are both great, but they require a lot more setup and learning beforehand (especially if your goal is to optimize the site for organic search).

What this post is about

This post is going to walk through the process of uploading and optimizing a blog post using WordPress and Yoast SEO. I’ll go into detail on both of these topics and provide you with a downloadable checklist that you can give to your team or use yourself.

Before we get started

Yoast SEO

While it’s true that there are a variety of SEO plugins available for WordPress, I prefer Yoast SEO and will be referencing it as an essential plugin for this post. If you don’t currently have Yoast installed, you can visit their website to download it or simply search for “Yoast SEO” in WordPress and install it directly.

Pages and posts

WordPress has two basic sections for uploading content. There are pages (which are defined as landing pages on your website), and there are posts (which are essentially blog posts). One could argue that this article could be used as a guide to uploading and optimizing landing pages on WordPress, but I believe there’s a different approach for that and therefore will keep the focus of this article around posts.

Uploading your blog post

Before you get to optimizing your blog posts for organic search, you need to get them live on your site. If you’re familiar with how posting a blog works on WordPress, feel free to skip ahead to the optimization section of this article.

1. After logging into your site, hover over “Posts” and then click on “Add New.”

2. Copy and paste the title of your post where it says “Enter title here,” then paste the body text of your post in the section below (don’t copy over images yet).

Pro Tip: I personally write all of my blog posts in a separate program (like Word or Ulysses) and then copy over the text into WordPress when I’m ready to post it. You can definitely write your blog within WordPress and save it as a draft if you aren’t ready to publish it, but if you like having a local copy of your writing I’d recommend simply writing it in a different program.

Pro Tip: You can alternate between the “visual” and “text” editor here. If you’re familiar with HTML, I’d recommend “text,” as you can spot any potential errors in the code and have more control. If not, the “visual” editor works perfectly fine.

Pro Tip: If you have links in your post (which you should), double-check that they were added correctly. If not, you can add a link using the WYSIWYG editor. In general, try to have at least 3 relevant internal links in each of your posts. Don’t be afraid of adding external links, either! The important thing to remember is that if the reader will find it useful, it’s okay to add it.

3. If you have images, place your cursor where you want the image. Click on “Add Media” and select “Upload Files.” After choosing your preferred settings, click “insert into post” to add your image in your article.

Note: There are various settings and options for sizing and aligning images. Please see this write-up for a more detailed explanation of how images and featured images work in WordPress.

Pro Tip: It’s always a good idea to compress your images before uploading them so they don’t cause long load times. Here’s a great guide to compressing your images.

4. Scroll down a bit and you should see the “Categories” section on the right side of your screen. You don’t have to categorize your post (unless your site is organized by categories), but you can add one if you wish. If you do, WordPress will create category pages that pull in posts within that category. Here’s a great write-up on how WordPress utilizes category pages and what you should consider from an SEO perspective.

5. Under the “Categories” section, you’ll see the tags section. Similar to categories, you don’t have to use tags. In fact, I would argue that you should always noindex the tag pages auto-generated by WordPress, as they can often cause duplication issues. Nonetheless, you can add tags to your post here.

6. If you scroll down further you’ll see an “Author” section, where you can choose the author of your blog post.

7. Scroll back up and find the section that’s called “Publish.” Here you can choose “Preview” to make sure everything looks right in your post before optimizing/uploading it. If something doesn’t look the way you want it to, just edit that section.

8. If you want a snippet of your post to appear on your blog homepage instead of the entire thing, simply place your cursor where you want the break to be and click on the “Insert Read More tag” button. Read this post that explains the “Read More” tag and its function in WordPress.

This should get you to a point where you’re ready to optimize your blog — let’s focus on this next.

Optimizing your blog post

Getting down the foundational elements of uploading a blog post on WordPress is crucial, but we are marketers, aren’t we? This section breaks down what you (or your team) should be doing to optimize a post on WordPress as effectively as possible. My goal in creating the checklist at the bottom of this article is for you and your team to be able to reference it when uploading posts. Pretty soon it’ll become second nature!

1. Assuming you’re still on the “Edit Post” page, scroll down until you see a section titled “Yoast SEO.”

Pro Tip: If you don’t see this section, make sure you have the correct plugin installed. If you do and still don’t see this section, scroll up to the very top right of the screen and click on “Screen Options.” From here, make sure that “Wordpress SEO by Yoast” is checked.

2. Click on “Edit Snippet” in the Yoast SEO section. The “SEO title” box will be where you input your title tag.

Pro Tip: In general, you want to include your main keyword first followed by your brand name or website name. Also, make sure that you stay within 40–65 characters here.

3. You guessed it — the “Meta description” box is where you’ll input your meta description.

Pro Tip: Although not necessary, including your main keyword in the meta description can be a great idea if it flows well with your content. Google has explicitly mentioned that meta descriptions aren’t important to search engine rankings, but that doesn’t mean using a keyword won’t help users click on your post. Because of this, try to make your meta description as enticing as possible to a potential user. Why should they click on your blog post instead of the other options available in the SERP? Also, as a general rule, stay within 70–156 characters here.
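Those character ranges (40–65 for titles, 70–156 for meta descriptions) are easy to check programmatically before you paste anything into Yoast. A quick sketch:

```python
def check_snippet(title: str, meta_description: str) -> list:
    """Warn if the title or meta description falls outside the
    rough character ranges suggested above: 40-65 for titles,
    70-156 for meta descriptions."""
    warnings = []
    if not 40 <= len(title) <= 65:
        warnings.append(f"Title is {len(title)} chars (aim for 40-65)")
    if not 70 <= len(meta_description) <= 156:
        warnings.append(
            f"Meta description is {len(meta_description)} chars (aim for 70-156)")
    return warnings

# Hypothetical title and a deliberately too-short meta description
print(check_snippet(
    "Central Vacuum Systems: A Buyer's Guide | Example Brand",
    "Too short."))
# ["Meta description is 10 chars (aim for 70-156)"]
```

Keep in mind these are character counts, not pixel widths; Google truncates snippets by pixel width, so treat the ranges as guidance rather than hard limits.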

4. A new addition to Yoast SEO (although not WordPress), the “Slug” section allows you to edit the URL of your post. By default, WordPress will add the title of your post to the URL (which isn’t a bad way to go), but if you want to alter it this is where you can.

Pro Tip: There are “standard practice” tips for URL optimization that don’t necessarily affect your rankings, but solidify what your post is about to users and search engines. These standard practice tips include keeping your URL short, including a keyword if possible, and having the URL make obvious what the post is about. Here is a great write-up from Rand on URL optimization.

5. If you click on the gear icon tab within the Yoast SEO section, you’ll notice options for things like meta robots and the canonical URL. In most cases, these settings will already be set on a global scale; however, you can override your global settings for specific posts here.

6. If you click on the “Share” icon, you can override the default metadata (titles, images, etc.) that Facebook and Twitter will pull for your post. In general, you can leave these blank. However, if you have a good reason to override them (testing different images, optimizing for various target audiences, etc.) this is where you can.

7. We’ve covered a lot of important on-page elements so far, but one we haven’t covered is the <h1> tag. This tag is crucial for telling search engines what your page is about. In most cases, your title will automatically be an <h1> tag.

Pro Tip: I see a lot of sites that have multiple <h1> tags on a page, as well as many sites that have duplicate <h1> tags across the site. Oftentimes, the logo or phone number is wrapped in an <h1> tag. Double-check that you have exactly one <h1> tag on every page, and make sure these tags are all unique.
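A quick way to audit this is to count the <h1> tags in a page's rendered HTML. Here's a minimal sketch using Python's standard-library parser (the sample markup is made up for illustration):

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Collects the text of every <h1> on a page, so you can verify
    there is exactly one and that it's unique across the site."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1s = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.h1s.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.h1s[-1] += data

parser = H1Counter()
parser.feed("<h1>Logo</h1><h1>Central Vacuum Systems</h1>")
if len(parser.h1s) != 1:
    print(f"Warning: {len(parser.h1s)} <h1> tags found: {parser.h1s}")
```

Running the same check across all your pages and comparing the collected headings will also surface the duplicate-<h1> problem mentioned above.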

8. Adding alt tags to images is fairly simple with WordPress. There are various ways to do this, but it all comes down to whether you’re using the visual editor or the text editor.

Visual: Click on the image you want to add alt text to, and click on the “Edit” icon. Add your alt text in the “Alternative Text” field. Make sure to click on “Update” after.

Text: Simply add the alt="" snippet of code inside the image tag. It should look something like this:

<img src="http://www.domain.com/images/1" alt="keyword goes here">

In general, alt tags should describe the photo. So, if I was writing a blog post about central vacuum systems and I had an image of a man using a central vacuum system, the ideal alt tag would be “Man Using Central Vacuum System” or “Man Cleaning With Central Vacuum System.”
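If you want to audit existing posts for missing alt text, the same standard-library parser approach works for <img> tags. A small sketch (the sample markup is hypothetical):

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collects the src of every <img> that has no alt attribute,
    or an empty one."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "(no src)"))

    def handle_startendtag(self, tag, attrs):
        # Handle self-closing <img ... /> the same way
        self.handle_starttag(tag, attrs)

finder = MissingAltFinder()
finder.feed('<img src="/images/1" alt="Man Using Central Vacuum System">'
            '<img src="/images/2">')
print(finder.missing)  # ['/images/2']
```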

9. It’s important to take a look at your internal links within your post. Are they topically relevant? Try to include at least 3–4 links that point to your internal pages and don’t be scared to throw in good external links as well.

10. Does your post have a clear CTA? Oftentimes this can be a “Read more posts like this” callout or a “Sign up for our newsletter” button; however, it could also look like a “buy now” CTA for sites that write about products.

11. After following the above steps, take a second glance at everything before hitting “publish.” If you publish your post and realize that something doesn’t look right later on, just head back to the editor, make your changes, and click “update.”

Extras

Optimization checklist

As promised, please download and distribute this checklist as you please. My hope is that after going through it multiple times, posting and optimizing your blog posts on WordPress will come as second nature to you (or your team).

I want the checklist!

3 more essential WordPress plugins for marketers

  1. Broken Link Checker – An essential plugin that monitors all of the links on your site and regularly reports on any that are broken. Easily one of the simplest yet most helpful plugins out there.
  2. W3 Total Cache – This plugin helps increase the speed of your site by leveraging caching and minifying code. Highly recommended!
  3. Gravity Forms – While there are some decent options for contact form plugins on WordPress, Gravity Forms beats them all because of its customization options, continued plugin support, and add-ons.

If you’re interested, I wrote an all-around guide to using Yoast SEO on the Distilled blog earlier this year. Also, please visit the good people at Yoast, as their blog is full of great advice and tutorials.


from The Moz Blog http://tracking.feedpress.it/link/9375/3219311
via Auto Feed

8 Old School SEO Practices That Are No Longer Effective – Whiteboard Friday

Posted by randfish

[Estimated read time: 14 minutes]

Are you guilty of living in the past? Using methods that were once tried-and-true can be alluring, but it can also prove dangerous to your search strategy. In today’s Whiteboard Friday, Rand spells out eight old school SEO practices that you should ditch in favor of more effective and modern alternatives.

8 Old School SEO Practices That Are No Longer Effective Whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about some old school SEO practices that just don’t work anymore and things with which we should replace them.

Let’s start with the first one — keywords before clicks.

Look, I get the appeal here. The idea is that we’ve done a bunch of keyword research, now we’re doing keyword targeting, and we can see that it might be important to target multiple keywords on the same page. So FYI, “pipe smoking,” “tobacco smoking,” “very dangerous for your health,” not recommended by me or by Moz, but I thought it was a funny throwback keyword and so there you go. I do enjoy little implements even if I never use them.

So pipes, tobacco pipes, pipe smoking, wooden pipes, this is not going to draw anyone’s click. You might think, “But it’s good SEO, Rand. It’s good to have all my keywords in my title element. I know that’s an important part of SEO.” Not anymore. It really is not anymore an important . . . well, let’s put it this way. It’s an important part of SEO, which is subsumed by wanting to draw the clicks. The user is searching, they’re looking at the page, and what are they going to think when they see pipes tobacco, pipes, pipe smoking, wooden pipes? They have associations with that — spammy, sketchy, I don’t want to click it — and we know, as SEOs, that Google is using click signals to help documents rank over time and to help websites rank over time.

So if they’re judging this, you’re going to fall in the rankings, versus a title like “Art of Piping: Stunning Wooden Pipes for Every Price Range.” Now, I’m not just playing off the, “Yes, I am including some keywords in there. I have ‘wooden’ and ‘pipes.’ I have ‘art of piping,’ which is maybe my brand name.” But I’m worried more about drawing the click, which is why I’m making this part of my message of “for every price range.” I’m using the word “stunning” to draw people in. I’m saying, “Our collection is not the largest but the hand-selected best. You’ll find unique pipes available nowhere else and always free, fast shipping.”

I’m essentially trying to create a message, like I would for an AdWords ad, that is less focused on just having the raw keywords in there and more focused on drawing the click. This is a far more effective approach that we’ve seen over the last few years. It’s probably been a good six or seven years that this has been vastly superior to this other approach.

Second one, heavy use of anchor text on internal links.

This used to be a practice that could have positive impacts on rankings. But what we’ve seen lately, especially the last few years, is that Google has discounted this and has actually even punished it where they feel like it’s inappropriate or spammy, manipulative, overdone. We talked about this a little in our internal and external linking Whiteboard Friday a couple of weeks back.

In this case, my suggestion would be if the internal link is in the navigation, if it’s in the footer, if it’s in a sidebar, if it’s inside content, and it is relevant and well-written and it flows well, has high usability, you’re pretty safe. However, if it has low usability, if it looks sketchy or funny, if you’re making the font small so as to hide it because it’s really for search engines and not for searchers and users, now you’re in a sketchy place. You might count on being discounted, penalized, or hurt at some point by Google.

Number three, pages for every keyword variant.

This is an SEO tactic that many folks are still pursuing today and that had been effective for a very long time. So the idea was basically if I have any variation of a keyword, I want a single page to target that because keyword targeting is such a precise art and technical science that I want to have the maximum capacity to target each keyword individually, even if it’s only slightly different from another one. This still worked even up to four or five years ago, and in some cases, people were sacrificing usability because they saw it still worked.

Nowadays, Google has gotten so smart with upgrades like Hummingbird, obviously with RankBrain last year, that they’ve taken to a much more intent- and topic-matching model. So we don’t want to do something like have four different pages, like unique hand-carved pipes, hand-carved pipes, hand-carved tobacco pipes, and hand-carved tobacco smoking pipes. By the way, these are all real searches that you’ll find in Google Suggest or AdWords. But rather than taking all of these and having a separate page for each, I want one page targeting all of them. I might try and fit these keywords intelligently into the content, the headline, maybe the title, the meta description, those kinds of things. I’m sure I can find a good combination of these. But the intent for each of these searchers is the same, so I only want one page targeting them.

Number four — directories, paid links, etc.

Every single one of these link building, link acquisition techniques that I’m about to mention has either been directly penalized by Google or penalized as part of an update, or we’ve seen sites get hit hard for doing it. This is dangerous stuff, and you want to stay away from all of these at this point.

Directories — well, generic directories and SEO directories for sure. Article links, especially article blasts where you can push an article in and there’s no editorial review. Guest content — depending on the editorial practices, the bar might be a little different. Press releases — you saw Google penalize some press release websites. Well, it didn’t penalize the press release websites themselves. Google said, “You know what? Your links don’t count anymore, or we’re going to discount them. We’re not going to treat them the same.”

Comment links, for obvious reasons. Reciprocal link pages — those got penalized many years ago. Article spinners. Private link networks — if you see “private” and “network,” or even just “network,” you should generally run away. Private blog networks. Paid link networks. Fiverr or forum link buys.

You see these advertised on all sorts of SEO forums, especially the more aggressive, sketchy ones. A lot of folks are like, “Hey, for $99, we have this amazing package, and I’ll show you all the people whose rankings it’s increased, and they come from PageRank six” — never mind that PageRank is totally defunct. Or worse, they use Moz. They’ll say, “Domain authority 60-plus websites.” You know what? Moz is not perfect. Domain authority is not a perfect representation of the value you’re going to get from these things. Anyone who’s selling you links on a forum, you should be super skeptical of. That’s somewhat like someone coming up to your house and being like, “Hey, I got this Ferrari in the yard here. You want to buy this?” That’s my Jersey coming out.

Social link buys, anything like this, just say no people.

Number five, multiple microsites, separate domains, or separate domains with the same audience or topic target.

So this again used to be a very common SEO practice, where folks would say, “Hey, I’m going to split these up because I can get very micro targeted with my individual websites.” They were often keyword-rich domain names like woodenpipes.com, and I’ve got handmadepipes.net, and I’ve got pipesofmexico.co versus I just have artofpiping.com, not that “piping” is necessarily the right word. Then it includes all of the content from all of these. The benefit here is that this is going to gain domain authority much faster and much better, and in a far greater fashion than any of these will.

Let’s say that it was possible that there is no bias against the exact match domain names folks. We’re happy to link to them, and you had just as much success branding each of these and earning links to each of these, and doing content marketing on each of these as you did on this one. But you split up your efforts a third, a third, a third. Guess what would happen? These would rank about a third as well as all the content would on here, which means the content on handmadepipes.net is not benefitting from the links and content on woodenpipes.com, and that sucks. You want to combine your efforts into one domain if you possibly can. This is one of the reasons we also recommend against subdomains and microsites, because putting all of your efforts into one place has the best shot at earning you the most rankings for all of the content you create.

Number six, exact and partial keyword match domain names in general.

It’s the case that if I’m a consumer and I’m looking at domain names like woodenpipes.com, handmadepipes.net, uniquepipes.shop, hand-carved-pipes.co, the problem is that over time, over the last 15, 20 years of the Web, those types of domain names that don’t sound like real brands, that are not in our memories and don’t have positive associations with them, they’re going to draw clicks away from you and towards your competitors who sound more credible, more competent, and more branded. For that reason alone, you should avoid them.

It’s also the case that we’ve seen these types of domains do much more poorly with link earning, with content marketing, and with getting guest content accepted. People don’t trust them. The same is true for public relations and getting press mentions. The press doesn’t trust sites like these.

For those reasons, it’s just a barrier. Even if you thought, “Hey, there’s still keyword benefits to these,” which there is a little bit because the anchor text that comes with them, that points to the site always includes the words and phrases you’re going after. So there’s a little bit of benefit, but it’s far overwhelmed by the really frustrating speed bumps and roadblocks that you face when you have a domain like this.

Number seven — Using CPC or AdWords’ “Competition” to determine the difficulty of ranking in organic or non-paid results

A lot of folks, when they’re doing keyword research, for some reason still have this idea that using cost per click or AdWords as competition scores can help determine the difficulty of ranking in organic, non-paid results. This is totally wrong.

So see right here, I’ve got “hand-carved pipes” and “unique wooden pipes,” and they have an AdWords CPC respectively of $3.80 and $5.50, and they have AdWords competition of medium and medium. That is in no way correlated necessarily with how difficult they’ll be to rank for in the organic results. I could find, for example, that “unique wooden pipes” is actually easier or harder than “hand-carved pipes” to rank for in the organic SEO results. This really depends on: Who’s in the competition set? What types of links do they have and social mentions do they have? How robust is their content? How much are they exciting visitors and drawing them in and serving them well? That sort of stuff is really hard to calculate here.

I like the keyword difficulty score that Moz uses. Some other tools have their own versions. Doctor Pete, I think, did a wonderful job of putting together a keyword difficulty score that’s relatively comprehensive and well-thought through, uses a lot of the metrics about the domain and the page authority scores, and it compensates for a lot of other things, to look at a set of search results and say, “This is probably about how hard it’s going to be,” and whether it’s harder or easier than some other keyword.

Number eight — Unfocused, non-strategic “linkbait”

Last one, some folks are still engaging in this, I think because content strategy, content marketing, and content as a whole has become a very hot topic and a point of investment. Many SEOs still invest in what I call “nonstrategic and unfocused link bait.” The idea being if I can draw links to my website, it doesn’t really matter if the content doesn’t make people very happy or if it doesn’t match and gel well with what’s on my site. So you see a lot of these types of practices on sites that have nothing to do with it. Like, “Here are seven actors who one time wore too little clothing.” That’s an extreme example, but you get the idea if you ever look at the bottom ads for a lot of content stuff. It feels like pretty much all of them say that.

Versus on topic link bait or what I’d call high quality content that is likely to draw in links and attention, and create a positive branding association like, “Here’s the popularity of pipes, cigarettes, electronic cigarettes, and cigars in the U.S. from 1950 to today.” We’ve got the data over time and we’ve mapped that out. This is likely to earn a lot of links, press attention. People would check it out. They’d go, “Oh, when was it that electronic cigarettes started getting popular? Have pipes really fallen off? It feels like no one uses them anymore. I don’t see them in public. When was that? Why was that? Can I go over time and see that dataset?” It’s fundamentally interesting, and data journalism is, obviously, very hot right now.

So with these eight, hopefully you’ll be able to switch from some old school SEO techniques that don’t work so well to some new ways of thinking that will take your SEO results to a great place. And with that, we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from The Moz Blog http://tracking.feedpress.it/link/9375/3195778
via Auto Feed

The Local SEO Agency’s Complete Guide to Client Discovery and Onboarding

Posted by MiriamEllis

Why proper onboarding matters

Imagine getting three months in on a Local SEO contract before realizing that your client’s storefront is really his cousin’s garage. From which he runs two other “legit” businesses he never mentioned. Or that he neglected to mention the reviews he bought last year. Worse yet, he doesn’t even know that buying reviews is a bad thing.

The story is equally bad if you’re diligently working to build quality unique content around a Chicago client’s business in Wicker Park but then realize their address (and customer base) is actually in neighboring Avondale.

What you don’t know will hurt you. And your clients.

A hallmark of the professional Local SEO department or agency is its dedication to getting off on the right foot with a new client: getting their data beautifully documented for the whole team from the start. At various times throughout the life of the contract, your teammates and staff from complementary departments will need to access different aspects of a client’s core NAP, known challenges, company history, and goals.

Having this information clearly recorded in shareable media is the key to both organization and collaboration, as well as being the best preventative measure against costly data-oriented mistakes. Clear and consistent data play vital roles in Local SEO. Information must not only be gathered, but carefully verified with the client.

This article will offer you a working Client Discovery Questionnaire, an Initial Discovery Phone Call Script, and a useful Location Data Spreadsheet that will be easy for any customer to fill out and for you to then use to get those listings up to date. You’re about to take your client discovery process to awesome new heights!

Why agencies don’t always get onboarding right

Lack of a clearly delineated, step-by-step onboarding process increases the potential for human error. Your agency’s Local SEO manager may be having allergies on Monday and simply forget to ask your new client if they have more than one website, if they’ve ever purchased reviews, or if they have direct access to their Google My Business listings. Or they could have that information and forget to share it when they jump to a new agency.

The outcomes of disorganized onboarding can range from minor hassles to disastrous mistakes.

Minor hassles would include having to make a number of follow-up phone calls to fill in holes in a spreadsheet that could have been taken care of in a single outreach. It’s inconvenient for all teammates when they have to scramble for missing data that should have been available at the outset of the project.

Disastrous mistakes can stem from a failure to fully gauge the details and scope of a client’s holdings. Suddenly, a medium-sized project can take on gigantic proportions when the agency learns that the client actually has 10 mini-sites with duplicate content on them, or 10 duplicate GMB listings, or a series of call tracking numbers around the web.

It’s extremely disheartening to discover a mountain of work you didn’t realize would need to be undertaken, and the agency can end up having to put in extra uncompensated time or return to the client to renegotiate the contract. It also leads to client dissatisfaction.

Setting correct client expectations is completely dependent on being able to properly gauge the scope of a project, so that you can provide an appropriate timeline, quote, and projected benchmarks. In Local, that comes down to documenting core business information, identifying past and present problems, and understanding which client goals are achievable. With the right tools and effective communication, your agency will be making a very successful start to what you want to be a very successful project.

Professional client discovery made simple

There’s a lot you want to learn about a new client up front, but asking (and answering) all those questions right away can be grueling. Not to mention information fatigue, which can make your client give shorter and shorter answers when they feel like they’ve spent enough time already. Meanwhile your brain reaches max capacity and you can’t use all that valuable information because you can’t remember it.

To prevent such a disaster, we recommend dividing your Local SEO discovery process into a questionnaire to nail down the basics, a follow-up phone call to help you feel out some trickier issues, and a CSV to gather the location data. And we’ve created templates to get you started…

Client Discovery Questionnaire

Use our Local SEO Client Discovery Questionnaire to understand your client’s history, current organization, and what other consultants they might also be working with. We’ve annotated each question in the Google Doc template to help you understand what you can learn and potential pitfalls to look out for.

If you want to make collecting and preserving your clients’ answers extra easy, use Google Forms to turn that questionnaire into a form like this:

You can even personalize the graphic, questions, and workflow to suit your brand.

Client Discovery Phone Script

Once you’ve received your client’s completed questionnaire and have had time to process the responses and do any necessary due diligence (like using our Check Listings tool to check how aggregators currently display their information), it’s time to follow up on the phone. Use our annotated Local SEO Client Discovery Phone Script to get you started.

[Image: Local SEO Client Discovery Phone Script]

No form necessary this time, because you’ll be asking the client verbally. Be sure to pay attention to the client’s tone of voice as they answer and refer to the notes under each question to see what you might be in for.

Location Data CSV

Sometimes the hardest part of Local SEO is getting all the location info letter-perfect. Make that easier by having the client input all those details into your copy of the Location Data Spreadsheet.

[Image: Local SEO Location Data Spreadsheet]

Then use the File menu to download that document as a CSV.

You’ll want to proof this before uploading it to any data aggregators. If you’re working with Moz Local, the next step is an easy upload of your CSV. If you’re working with other services, you can always customize your data collection spreadsheet to meet their standards.

Keep up to date on any business moves or changes in hours by designing a data update form like this one from SEER and periodically reminding your client contact to use it.

Why mutual signals of commitment really matter

There are two sides to every successful client project: one half belongs to the agency and the other to the company it serves. The attention to detail your agency displays via clean, user-friendly forms and good phone sessions will signal your professionalism and commitment to doing quality work. At the same time, the willingness of the client to take the necessary time to fill out these documents and have these conversations signals their commitment to receiving value from their investment.

It’s not unusual for a new client to express some initial surprise when they realize how many questions you’re asking them to answer. Past experience may even have led them to expect half-hearted, sloppy work from other SEO agencies. But, what you want to see is a willingness on their part to share everything they can about their company with you so that you can do your best work.

Anecdotally, I’ve fully refunded the down payments of a few incoming clients who claimed they couldn’t take the time to fill out my forms, because I detected in their unwillingness a lack of genuine commitment to success. These companies have, fortunately, been the exception rather than the rule for me, and likely will be for your agency, too.

It’s my hope that, with the right forms and a commitment to having important conversations with incoming clients at the outset, the work you undertake will make your Local team heroes to both your agency and your clients!


Measuring Content: You’re Doing it Wrong

Posted by MatthewBarby

The traditional ways of measuring the success or failure of content are broken. We can’t just rely on metrics like the number of pageviews/visits or bounce rate to determine whether what we’re creating has performed well.

“The primary thing we look for with news is impact, not traffic,” says Jonah Peretti, Founder of BuzzFeed. One of the ways that BuzzFeed have mastered this is with the development of their proprietary analytics platform, POUND.

POUND enables BuzzFeed to predict the potential reach of a story based on its content, understand how effective specific promotions are based on the downstream sharing and traffic, and power A/B tests — and that’s just a few examples.

Just because you’ve managed to get more eyeballs onto your content doesn’t mean it’s actually achieved anything. If that were the case then I’d just take a few hundred dollars and buy some paid StumbleUpon traffic every time.

Yeah, I’d generate traffic, but it’s highly unlikely to result in me achieving some of my actual business goals. Not only that, but I’d have no real indication of whether my content was satisfying the needs of my visitors.

The scary thing is that the majority of content marketing campaigns are measured this way. I hear statements like “it’s too difficult to measure the performance of individual pieces of content” far too often. The reality is that it’s pretty easy to measure content marketing campaigns on a micro level — a lot of the time people don’t want to do it.

Engagement over entrances

Within any commercial content marketing campaign that you’re running, measurement should be business goal-centric. By that I mean that you should be determining the overall success of your campaign based on the achievement of core business goals.

If your primary business goal is to generate 300 leads each month from the content that you’re publishing, you’ll need to have a reporting mechanism in place to track this information.

On a more micro-level, you’ll want to be tracking and using engagement metrics to enable you to influence the achievement of your business goals. In my opinion, all content campaigns should have robust, engagement-driven reporting behind them.

Total Time Reading (TTR)

One metric that Medium uses, which I think adds a lot more value than pageviews, is “Total Time Reading (TTR).” This is a cumulative metric that quantifies the total number of minutes spent reading a piece of content. For example, if I had 10 visitors to one of my blog articles and they each stayed reading the article for 1 minute each, the total reading time would be 10 minutes.

“We measure every user interaction with every post. Most of this is done by periodically recording scroll positions. We pipe this data into our data warehouse, where offline processing aggregates the time spent reading (or our best guess of it): we infer when a reader started reading, when they paused, and when they stopped altogether. The methodology allows us to correct for periods of inactivity (such as having a post open in a different tab, walking the dog, or checking your phone).” (source)

The reason this is more powerful than pageviews alone is that it accounts for how engaged your readers are, giving a more accurate picture of how much a piece of content is actually being consumed. You could have an article with 1,000 pageviews that has a greater TTR than one with 10,000 pageviews.
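As a rough sketch of the methodology Medium describes, here’s one way active reading time could be inferred from recorded scroll-event timestamps. The `reading_minutes` helper and the 30-second idle cutoff are my own assumptions for illustration, not Medium’s actual implementation:

```python
def reading_minutes(scroll_events, idle_cutoff=30):
    """Estimate active reading time (in minutes) for one visit.

    scroll_events: sorted timestamps (in seconds) at which a scroll
    position was recorded. Gaps longer than idle_cutoff are treated as
    inactivity (a background tab, walking the dog) and excluded.
    """
    active_seconds = 0.0
    for prev, cur in zip(scroll_events, scroll_events[1:]):
        gap = cur - prev
        if gap <= idle_cutoff:
            active_seconds += gap
    return active_seconds / 60

# TTR for a post is then just the sum over all recorded visits:
visits = [[0, 10, 20, 500, 510], [0, 15, 30, 45]]  # hypothetical sessions
ttr = sum(reading_minutes(v) for v in visits)
```

In the first session above, the 480-second gap is discarded as inactivity, so only the genuinely active intervals count toward the total.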

Scroll depth & time on page

A related and simpler metric to acquire is the average time on page (available within Google Analytics). The average time spent on your webpage will give a general indication of how long your visitors are staying on the page. Combining this with ‘scroll depth’ (i.e. how far down the page has a visitor scrolled) will help paint a better picture of how ‘engaged’ your visitors are. You’ll be able to get the answer to the following:

“How much of this article are my visitors actually reading?”

“Is the length of my content putting visitors off?”

“Are my readers remaining on the page for a long time?”

Having the answers to these questions is really important when it comes to determining which types of content are resonating more with your visitors.

Social Lift

BuzzFeed’s “Social Lift” metric is a particularly good way of understanding the ‘virality’ of your content (you can see this when you publish a post to BuzzFeed). BuzzFeed calculates “Social Lift” as follows:

((Social Views)/(Seed Views)+1)

Social Views: Traffic that’s come from outside BuzzFeed; for example, referral traffic, email, social media, etc.

Seed Views: Owned traffic that’s come from within the BuzzFeed platform; e.g. from appearing in BuzzFeed’s newsfeed.

[Image: BuzzFeed’s Social Lift metric]

This is a great metric to use when you’re a platform publisher as it helps separate out traffic that’s coming from outside of the properties that you own, thus determining its “viral potential.”

There are ways to use this kind of approach within your own content marketing campaigns (without being a huge publisher platform) to help get a better idea of its “viral potential.”

One simple calculation can just involve the following:

((social shares)/(pageviews)+1)

This simple stat can be used to determine which content is likely to perform better on social media, and as a result it will enable you to prioritize certain content over others for paid social promotion. The higher the score, the higher its “viral potential.” This is exactly what BuzzFeed does to understand which pieces of content they should put more weight behind from a very early stage.

You can even take this to the next level by replacing pageviews with TTR to get a more representative view of engagement to sharing behavior.
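To make the simplified calculation concrete, here’s a minimal sketch; the `viral_potential` name and the share/pageview figures are hypothetical:

```python
def viral_potential(social_shares, pageviews):
    # ((social shares) / (pageviews)) + 1; higher scores suggest content
    # more likely to spread beyond the traffic you generated yourself.
    return social_shares / pageviews + 1

posts = {
    "post-a": (150, 1_000),    # (shares, pageviews), hypothetical
    "post-b": (300, 10_000),
}

# Rank content by score to prioritize paid social promotion:
ranked = sorted(posts, key=lambda p: viral_potential(*posts[p]), reverse=True)
```

Here post-a scores 1.15 against post-b’s 1.03, so despite having a tenth of the pageviews, it’s the better candidate for promotion.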

The bottom line

Alongside “viral potential” and TTR, you’ll want to know how your content is performing against your bottom line. For most businesses, that’s the main reason why they’re creating content.

This isn’t always easy and a lot of people get this wrong by looking for a silver bullet that doesn’t exist. Every sales process is different, but let’s look at the typical process that we have at HubSpot for our free CRM product:

  1. Visitor comes through to our blog content from organic search.
  2. Visitor clicks on a CTA within the blog post.
  3. Visitor downloads a gated offer in exchange for their email address and other data.
  4. Prospect goes into a nurturing workflow.
  5. Prospect goes through to a BOFU landing page and signs up to the CRM.
  6. Registered user activates and invites in members of their team.

This is a simple process, but it can still be tricky sometimes to get a dollar value on each piece of content we produce. To do this, you’ve got to understand what the value of a visitor is, and this is done by working backwards through the process.

The first question to answer is, “what’s the lifetime value (LTV) of an activated user?” In other words, “how much will this customer spend in their lifetime with us?”

For e-commerce businesses, you should be able to get this information by analyzing historical sales data: take the average order value and multiply it by the average number of orders an individual will place with you in their lifetime.

For the purposes of this example, let’s say each of our activated CRM users has an LTV of $100. It’s now time to work backwards from that figure (all the below figures are theoretical)…

Question 1: “What’s the conversion rate of new CRM activations from our email workflow(s)?”

Answer 1: “5%”

Question 2: “How many people download our gated offers after coming through to the blog content?”

Answer 2: “3%”

Knowing this would help me to start putting a monetary value against each visitor to the blog content, as well as each lead (someone that downloads a gated offer).

Let’s say we generate 500,000 visitors to our blog content each month. Using the average conversion rates from above, we’d convert 15,000 of those into email leads. From there we’d nurture 750 of them into activated CRM users. Multiply that by the LTV of a CRM user ($100) and we’ve got $75,000 (again, these figures are all just made up).

Using this final figure of $75,000, we could work backwards to understand the value of a single visitor to our blog content:

(($75,000)/(500,000))

Single Visitor Value: $0.15

We can do the same for email leads using the following calculation:

(($75,000)/(15,000))

Individual Lead Value: $5.00

Knowing these figures will help you determine the bottom-line value of each piece of content, as well as calculate a rough return on investment (ROI) figure.

Let’s say one of the blog posts we’re creating to encourage CRM signups generated 500 new email leads; we’d see a $2,500 return. We could then weigh that against the cost of producing the blog post (let’s say it takes 6 hours at $100 per hour, i.e. $600) to calculate an ROI figure of roughly 316%.

ROI in its simplest form is calculated as:

(((($return)-($investment))/($investment))*100)
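Putting the whole worked example together (all figures, as above, are made up):

```python
# 500,000 monthly blog visitors, 3% download a gated offer,
# 5% of those leads activate, and each activated user has a $100 LTV.
visitors = 500_000
lead_rate = 0.03
activation_rate = 0.05
ltv = 100

leads = visitors * lead_rate              # 15,000 email leads
activations = leads * activation_rate     # 750 activated CRM users
monthly_value = activations * ltv         # $75,000

visitor_value = monthly_value / visitors  # $0.15 per blog visitor
lead_value = monthly_value / leads        # $5.00 per email lead

# ROI for a single post: 500 leads generated, $600 production cost.
post_return = 500 * lead_value            # $2,500
post_cost = 600
roi = (post_return - post_cost) / post_cost * 100  # ~316.7%
```

Working backwards like this is what lets you attach a dollar value to any individual piece of lead-generating content.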

You don’t necessarily need to follow these figures religiously when it comes to content performance on a broader level, especially when you consider that some content just doesn’t have the primary goal of lead generation. That said, for the content that does have this goal, it makes sense to pay attention to this.

The link between engagement and ROI

So far I’ve talked about two very different forms of measurement:

  1. Engagement
  2. Return on investment

What you’ll want to avoid is actually thinking about these as isolated variables. Return on investment metrics (for example, lead conversion rate) are heavily influenced by engagement metrics, such as TTR.

The key is to understand exactly which engagement metrics have the greatest impact on your ROI. This way you can use engagement metrics to form the basis of your optimization tests in order to make the biggest impact on your bottom line.

Let’s take the following scenario that I faced within my own blog as an example…

The average length of the content across my website is around 5,000 words. Some of my content way surpasses 10,000 words in length, taking an estimated hour to read (my recent SEO tips guide is a perfect example of this). As a result, the bounce rate on my content is quite high, especially from mobile visitors.

Keeping people engaged within a 10,000-word article when they haven’t got a lot of time on their hands is a challenge. Needless to say, it makes it even more difficult to ensure my CTAs (aimed at newsletter subscriptions) stand out.

From some testing, I found that adding my CTAs closer to the top of my content was helping to improve conversion rates. The main issue I needed to tackle was how to keep people on the page for longer, even when they’re in a hurry.

To do this, I worked on the following solution: give visitors a concise summary of the blog post that takes under 30 seconds to read. Once they’ve read this, show them a CTA that will give them something to read in more detail in their own time.

All this involved was the addition of a “Summary” button at the top of my blog post that, when clicked, hides the content and displays a short summary with a custom CTA.

[Image: showing custom summaries with a CTA]

This has not only helped to reduce the number of people bouncing from my long-form content, but it also increased the number of subscribers generated from my content whilst improving user experience at the same time (which is pretty rare).

I thought more of you might find this a useful feature for your own websites, so I’ve packaged it up as a free WordPress plugin that you can download here.

Final thoughts

The above is just one way to impact the ROI of your content by improving engagement. My advice is to get a robust measurement process in place so that you can first identify opportunities, then run experiments to take advantage of them.

More than anything, I’d recommend that you take a step back and re-evaluate the way that you’re measuring your content campaigns to see if what you’re doing really aligns with the fundamental goals of your business. You can invest in endless tools that help you measure things better, but if the core metrics you’re tracking are wrong, then it’s all for nothing.


Can We Predict the Google Weather?

Posted by Dr-Pete

Four years ago, just weeks before the first Penguin update, the MozCast project started collecting its first real data. Detecting and interpreting Google algorithm updates has been both a far more difficult and far more rewarding challenge than I ever expected, and I’ve learned a lot along the way, but there’s one nagging question that I’ve never been able to answer with any satisfaction. Can we use past Google data to predict future updates?

Before any analysis, I’ve always been a fan of using my eyes. What does Google algorithm “weather” look like over a long time-period? Here’s a full year of MozCast temperatures:

Most of us know by now that Google isn’t a quiet machine that hums along until the occasional named update happens a few times a year. The algorithm is changing constantly and, even if it wasn’t, the web is changing constantly around it. Finding the signal in the noise is hard enough, but what does any peak or valley in this graph tell you about when the next peak might arrive? Very little, at first glance.

It’s worse than that, though

Even before we dive into the data, there’s a fundamental problem with trying to predict future algorithm updates. To understand it, let’s look at a different problem — predicting real-world weather. Predicting the weather in the real world is incredibly difficult and takes a massive amount of data to do well, but we know that that weather follows a set of natural laws. Ultimately, no matter how complex the problem is, there is a chain of causality between today’s weather and tomorrow’s and a pattern in the chaos.

The Google algorithm is built by people, driven by human motivations and politics, and is only constrained by the rules of what’s technologically possible. Granted, Google won’t replace the entire SERP with a picture of a cheese sandwich tomorrow, but they can update the algorithm at any time, for any reason. There are no natural laws that link tomorrow’s algorithm to today’s. History can tell us about Google’s motivations and we can make reasonable predictions about the algorithm’s future, but those future algorithm updates are not necessarily bound to any pattern or schedule.

What do we actually know?

If we trust Google’s public statements, we know that there are a lot of algorithm updates. The fact that only a handful get named is part of why we built MozCast in the first place. Back in 2011, Eric Schmidt testified before Congress, and his written testimony included the following data:

To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.

I’ve highlighted one phrase — “516 changes”. At a time when we believed Google made maybe a dozen updates per year, Schmidt revealed that it was closer to 10X/week. Now, we don’t know how Google defines “changes,” and many of these changes were undoubtedly small, but it’s clear that Google is constantly changing.

Google’s How Search Works page reveals that, in 2012, they made 665 “improvements” or “launches” based on an incredible 118,812 precision evaluations. In August of 2014, Amit Singhal stated on Google+ that they had made “more than 890 improvements to Google Search last year alone.” It’s unclear whether that referred to the preceding 12 months or calendar year 2013.

We don’t have a public number for the past couple of years, but it is incredibly unlikely that the rate of change has slowed. Google is making changes to search on the order of 2X/day.

Of course, anyone who has experience in software development realizes that Google didn’t evenly divide 890 improvements over the year and release one every 9 hours and 51 minutes. That would be impractical for many reasons. It’s very likely that releases are rolled out in chunks and are tied to some kind of internal process or schedule. That process or schedule may be irregular, but humans at Google have to approve, release, and audit every change.

In March of 2012, Google released a video of their weekly Search Quality meeting, which, at the time, they said occurred “almost every Thursday”. This video and other statements since reveal a systematic process within Google by which updates are reviewed and approved. It doesn’t take very advanced math to see that there are many more updates per year than there are weekly meetings.

Is there a weekly pattern?

Maybe we can’t predict the exact date of the next update, but is there any regularity to the pattern at all? Admittedly, it’s a bit hard to tell from the graph at the beginning of this post. Analyzing an irregular time series (where both the period between spikes and intensity of those spikes changes) takes some very hairy math, so I decided to start a little simpler.

I started by assuming that a regular pattern was present and looking for a way to remove some of the noise based on that assumption. The simplest analysis that yielded results involved taking a 3-day moving average and calculating the mean squared error (MSE). In other words, for every temperature (each temperature is a single day), take the mean of that day and the day on either side of it (a 3-day window) and square the difference between that day’s temperature and the 3-day mean. This exaggerates stand-alone peaks and smooths some of the noisier sequences, resulting in the following graph:
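As a sketch, the smoothing step described above amounts to something like the following; the `spike_signal` name and sample temperatures are mine, not MozCast internals:

```python
def spike_signal(temps):
    # For each day (excluding the endpoints), square the difference
    # between that day's temperature and its 3-day centered mean.
    signal = []
    for i in range(1, len(temps) - 1):
        window_mean = (temps[i - 1] + temps[i] + temps[i + 1]) / 3
        signal.append((temps[i] - window_mean) ** 2)
    return signal

# A lone spike stands out sharply against quiet neighbors:
spike_signal([60, 60, 90, 60, 60])  # [100.0, 400.0, 100.0]
```

Note how a single hot day produces a strong central value, while a flat run of identical temperatures would produce zeros, which is exactly the stand-alone-peak exaggeration described above.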

This post was inspired in part by February 2016, which showed an unusually high signal-to-noise ratio. So, let’s zoom in on just the last 90 days of the graph:

See peaks 2–6 (starting on January 21)? The space between them, respectively, is 6 days, 7 days, 7 days, and 8 days. Then, there’s a 2-week gap to the next, smaller spike (March 3) and another 8 days to the one after that. While this is hardly proof of a clear regular pattern, it’s hard to believe the weekly pacing is entirely a coincidence, given what we know about the algorithm update approval process.

This pattern is less clear in other months, and I’m not suggesting that a weekly update cycle is the whole picture. We know Google also does large data refreshes (including Penguin) and sometimes rolls updates out over multiple days (or even weeks). There’s a similar, although noisier, pattern in April 2015 (the first part of the 12-month MSE graph). It’s also interesting to note the activity levels around Christmas 2015:

Despite all of our conspiracy theories, there really did seem to be a 2015 Christmas lull in Google activity, lasting approximately 4 weeks, followed by a fairly large spike that may reflect some catch-up updates. Engineers go on vacation, too. Notice that the first January spike is followed by a roughly 2-week gap and then two 1-week gaps.

The most frequent day of the week for these spikes seems to be Wednesday, which is odd, if we believe there’s some connection to Google’s Thursday meetings. It’s possible that these approximately weekly cycles are related to naturally occurring mid-week search patterns, although we’d generally expect less pronounced peaks if change were related to something like mid-week traffic spikes or news volume.

Did we win Google yet?

I’ve written at length about why I think algorithm updates still matter, but, tactically speaking, I don’t believe we should try to plan our efforts around weekly updates. Many updates are very small, and even some that are large on average may not affect our employers or clients.

I view the Google weather as a bit like the unemployment rate. It’s interesting to know whether that rate is, say, 5% or 7%, but ultimately what matters to you is whether or not you have a job. Low or high unemployment is a useful economic indicator and may help you decide whether to risk finding a new job, but it doesn’t determine your fate. Likewise, measuring the temperature of the algorithm can teach us something about the system as a whole, but the temperature on any given day doesn’t decide your success or failure.

Ultimately, instead of trying to predict when an algorithm update will happen, we should focus on the motivations behind those updates and what they signal about Google’s intent. We don’t know exactly when the hammer will fall, but we can get out of the way in plenty of time if we’re paying attention.


How to Influence Branded Searches and Search Volumes to Earn Big Rewards – Whiteboard Friday

Posted by randfish

What have you been doing with branded searches? If the answer is “not much,” it may be time to shift your focus a bit. In today’s Whiteboard Friday, Rand explores the huge benefits of turning some of your unbranded searches into branded and offers some key tactical advice.

[Whiteboard image: How to Influence Branded Searches and Search Volumes to Earn Big Rewards]

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about how to influence branded search and get a load of benefit out of that. Some of these things that I’m going to talk about today are more theoretical. Like we think they work. We’ve experimented. We’ve seen some other folks experiment. We’re pretty sure. Then some of them are solid. We know that these things influence. Regardless, I think I can persuade you that trying to turn more of your unbranded search into branded search is a hugely positive thing. Generating more branded search in general is also hugely positive. Let me show you what I mean with some examples first.

Non-branded search

Non-branded search, these are essentially the search terms, the queries and phrases that we are all pursuing. We’re trying to rank for them. This is searchers who have not yet expressed a brand preference. They’re searching. Let’s say we’re talking to a chemist or a lab instructor at a school and they’re trying to put together all their materials for their lab. So they’re searching for things like test tubes and lab equipment and chemical safety goggles. They’re trying to figure out the best prices and the best products, the ones that’ll be the safest, the ones that’ll be best for their class. Those are unbranded. They have expressed no brand preference. They haven’t said, “Oh I want this kind and I know that.”

Branded search

Branded searches are more like, “Oh I know I want a Fisher test tube, Fisher Scientific.” Fisher test tubes is what I’m looking for, or lab equipment from Thermo. Thermo Scientific makes a bunch of lab equipment that you can buy prepackaged, kind of all together. Or chemical goggles, “I know I want the 3M variety.” 3M has, like, these awesome chemical goggles. They’re very safe, very good for this stuff.

These branded searches are, in many ways, preferable to the non-branded searches for the companies that own and control these brands. Here’s why.

A. Increase ease of ranking and conversion

Obviously, it is way, way easier for 3M to rank well for “3M chemical goggles” than to rank for just “chemical goggles.” You’re competing against far fewer folks. A lot of people won’t even use your brand name. But even with the people who do, maybe searching on Amazon.com instead, you’ll still get some benefit from that because they’re searching for your brand.

It also increases the propensity to convert, meaning that if someone performs that branded search, they’re more likely to actually buy that product. They’re generally speaking further down the funnel. They’ve sort of decided to at least investigate your brand, and now you have a chance to pitch them. They’re familiar. They know your brand name at least. That’s a real positive thing.

B. Affecting search suggest

The second thing that’s nice is you can affect search suggest, meaning that if lots of people, for example, started searching for “3M chemical goggles” instead of “chemical safety goggles” or “chemical goggles,” it would actually be the case that over time what you’d see Google do is in the dropdown box for “chemical safety goggles,” 3M, the word, would start to be associated with it. You’d see that in search suggest. It might be at the very bottom.

For example, if you do a search for “whiteboard,” today in Google, Whiteboard Friday is somewhere on that list, but it’s usually way down towards the bottom. In some geographies it’s probably not there at all. Over time if we get more and more people searching for Whiteboard Friday, it’ll move up in search suggest. So that means people will be more likely to perform that query. At least they’ll see it and say, “Oh that must be a brand,” or “I must have some association with that, or maybe I’m supposed to,” or “I want to investigate that, I’m curious about it.”

C. Improve rankings for non-branded queries

This is one of those speculative things. We believe that right now search volume for branded terms does have an impact on ranking for the non-branded version of the query.

We saw Google file some patents around this, but we also saw some tests in this direction that looked promising, basically saying that if . . . Let’s do Fisher for this one. Let’s say people start searching for Fisher test tubes a lot more. Google might say, “You know, I think Fisher is very relevant to the search query ‘test tubes.’ Let’s move Fisher up in the rankings for just the unbranded phrase ‘test tubes,’ because that volume is suggesting to us that this brand is more relevant to this query than maybe we initially presumed.” That’s huge as well. If you can drive up that search volume, now you can start to get benefit in the non-branded rankings.

D. Appear in “related searches” feature

You can appear in the related searches feature. Related searches is usually somewhere between the middle of the page and the very bottom of the page, most of the time at the very bottom of the search page. For those 10% to 20% of people who scroll all the way to the bottom before making a click selection or before deciding to change their query, those related searches are a powerful way to suggest, just like search suggest is, that they should search for your branded query instead of the non-branded term. The related searches, by the way, are also, we think, influenced by content, which I’ll talk about in a second.

E. Create an association between your brand and a keyphrase

Create an entity-style association. This is essentially the idea of co-occurring keywords. If Google is crawling the web and they see tons of documents, high-quality, trustworthy documents that contain the word “test tubes” that also contain the word “3M,” oftentimes in close proximity to the word “test tubes,” they’ll over time start to associate the word “test tubes” with the word “3M.” That can impact suggest. It can impact related. It can impact rankings. It has a bunch of positive potential impact. That can make you more relevant for all sorts of things around search that are just awesome.
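To make the co-occurrence idea concrete, here’s a toy sketch (my own illustration, not Google’s actual method) that counts how many documents mention a brand within a few words of an unbranded phrase:

```python
def cooccurrence_count(documents, brand, phrase, window=10):
    """Count documents where `brand` appears within `window` words
    of any occurrence of `phrase` (case-insensitive)."""
    brand = brand.lower()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = 0
    for doc in documents:
        words = doc.lower().split()
        brand_positions = [i for i, w in enumerate(words) if w == brand]
        # Positions where the full unbranded phrase starts
        phrase_positions = [
            i for i in range(len(words) - n + 1)
            if words[i:i + n] == phrase_words
        ]
        if any(abs(b - p) <= window
               for b in brand_positions for p in phrase_positions):
            hits += 1
    return hits

docs = [
    "Our lab stocks 3M chemical goggles for every student",
    "Chemical goggles should fit snugly over prescription glasses",
    "3M also makes respirators and adhesives",
]
print(cooccurrence_count(docs, "3M", "chemical goggles"))  # 1
```

The real systems are obviously far more sophisticated (entity resolution, document trust, proximity weighting), but the underlying intuition is the same: the more trusted documents pair your brand with a phrase, the stronger the association.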

F. Affect future searches and personalization

Then the last one, which is also cool and powerful, is that this can affect search personalization, meaning, for example, let’s say someone does a search for “3M chemical goggles.” They click on 3m.com. Maybe they buy them. Maybe they don’t. Next time they do a search, for example let’s say “chemical aprons,” well it turns out that Google already knows that person has visited 3M in the past. They might see that behavior and, because they’re logged into their account, they might show them 3M higher up in the rankings. They might show them 3M higher in the search suggest as they start typing. That personalization is another powerful way that you’re getting benefit from branded search.

There are all these benefits. We want to make this happen. How do we do it?

What are the tactics that an SEO can actually use?

It turns out SEOs, we’re going to have to work pretty cross-departmentally in our marketing teams to be able to make this happen because some of the best tactics require things that SEO doesn’t always own and control entirely. Sometimes you do, sometimes not.

The first one, if we can create curiosity and drive search volume via brand advertising, that’s an awesome way to go.

You’ve seen more and more of this. You have seen advertisements probably on television and YouTube ads. You’ve seen branded ads on display ads. You’ve probably heard things on the radio that say search for us, all that kind of stuff. All that classic media, everything from billboards to radio — I know I’m drawing televisions with rabbit ears still. There are probably no TVs in the US that still have rabbit ears. Magazines, print, whatever, billboards, all of that brand advertising can drive people to then be curious about the brand and to want to investigate them more. If you hear a lot about 3M goggles and the cool stuff they’re doing, well, you might be tempted to perform a search.

You can embed searches as well.

Be careful with this one. This can get spammy and manipulative and could get you into trouble. You can do it. If you do it in authentic white hat ways, you’ll probably be okay.

The idea is basically telling customers, “Hey, if you want to research us, learn more about 3M’s goggles, don’t just take our word for it. Search Google. Go find what people are saying, what reviews are saying about our product.” I think it was LG or Samsung that ran a big one of these, where they suggested people do a Google search, because it turned out their phone had been very, very highly rated by all the top folks who’d reviewed it. You can do that in email. You could do it over social networks. You could do it in content. You’re essentially driving people directly to the Google search result page. That could be an embedded link, or it simply could be a suggestion to search and check people out.

You can also use public relations and content marketing, especially guest contributions and content marketing.

You can use events and sponsorship, all of that stuff to essentially drive latent interest and curiosity, kind of like we did with brand advertising but in a little more organic fashion. If The New York Times writes a piece about you, if you speak at a conference . . . This is me wildly gesticulating at a conference. It looks like I’m very dangerously, precariously perched to fall into the crowd there. Guest contributions on a website, maybe something like a Fortune.com, which takes some guest posts, driving people to want to learn more about the brand or the product that you’ve mentioned.

Then finally, you can create those keyword associations that we talked about, the entity-style associations, through word proximity and co-occurrence in web documents.

I put just web documents here, but really it’s important, trustworthy web documents from sources that Google likes and trusts and indexes. That means looking at: Where are all the places potentially on the web that lab equipment is talked about or would be talked about maybe in the future? How do I influence those authors, those creators, those publications to potentially consider including my brand, Thermo Scientific, in their documents? Or how do I create content for places like these that include my brand and include the unbranded term “lab equipment?”

Bunch of tactics, bunch of great opportunities here. I’d love to hear from you folks about what you’ve done around influencing branded search and how you’ve seen it affect your SEO campaigns overall. I’ll look forward to catching up with you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Battleground Mobile: Why (& How) to Prepare for the Future

Posted by EricEnge

[Estimated read time: 12 minutes]

The world of mobile continues to explode. Major players like Google, Facebook, and Apple are investing massively in efforts to establish themselves as the dominant player in the new markets that are emerging as a result. These companies are betting in a big way on continuing changes in mobile usage and in user expectations for mobile devices, and that means you should be, too. It means you need to have a mobile-first mentality.

The investments by these companies are happening at many different levels. For example, Google has already made mobile friendliness a ranking factor, and intends to increase the strength of that signal in May 2016. But there’s much more to this story, so let’s dig in!

Continuing rise of mobile devices

Sure, you’ve heard it before, but the growth of installed mobile devices is probably happening even faster than you think:

According to this data, PCs, tablets, and smartphones will only be about 25% of the installed Internet-enabled devices by 2020. Just a few years ago, these represented two-thirds of the installed Internet devices. In the biz world, this is what we call a “disruptive change.”

Many of the new device types will probably have a fairly passive role in our lives (such as smart refrigerators, smart thermostats, and other Internet of Things devices). By this, I mean that I don’t expect them to become devices that we interact with heavily.

However, many classes of these devices will be ones that we’ll interact with substantially, such as smart TVs, Internet media devices, and wearables.

Here’s a recast view of that same chart focusing on just this class of devices:

Looking at this new chart, we see that PCs and tablets — the devices that have a fairly substantial keyboard available — still make up only about 40% of the installed devices.

Rise of voice search

So how will we interact with those devices? The primary way of doing that will be via voice.

In the recent keynote event I did with Gary Illyes, he indicated that the number of voice searches Google received in 2015 was double that of 2014, so they’re definitely seeing a steep rise in voice search volume.

The recent interview that Danny Sullivan did with Amit Singhal underscored this. The now-retired Head of Search Quality at Google (he retired as of February 29, 2016) spent a year living primarily on mobile phones. One of the interesting exchanges in the interview:

Danny Sullivan: Do you tend to type more or do you voice-search more?
Amit Singhal: I’m swiping and voice-searching far more than letter-typing.

At another point in the interview he also says: “I realized … that on mobile devices, that I wanted to act more.” This notion is backed up by something Gary Illyes said in our recent keynote event: “We get, I think, 30 times as many action queries by voice as by typing.”

This emergence of voice search is a big deal. As Singhal noted above, mobile usage drives much more voice search, and voice queries use natural language far more than typed searches do. This appears to be one of the major reasons behind Google developing and launching its RankBrain algorithm.

Who’s winning the mobile wars so far?

Recently, I watched a great video of a presentation by Chartbeat’s Tony Haile. In this video he shows some interesting data on content consumption, as well as the mobile market. One of the more fascinating charts is this one showing that Facebook utterly dominates consumption of major news events:

Note that this particular chart is for one single story on The Atlantic, entitled “What ISIS Really Wants,” but it’s a compelling chart just the same. In addition, Facebook has 678 million users (47% of all their users) that access their platform solely from mobile devices, and 934 million of their 1.44 billion users (65%) access Facebook from a mobile device every day.

Taking this a step further, you can see how Facebook’s dominance here plays out on a minute-by-minute basis, using (once again) the ISIS news story as an example:

In this view, you see Google leading the early surge, but once Facebook spikes, its volume quickly overwhelms that of Google. So in this view, it looks like Facebook is dominating major news cycles. In contrast, Google owns the lulls in the news cycles:

Another interesting note from the Haile presentation is that overall mobile traffic share is continuing to grow, and is pushing towards 60% and higher of all traffic. However, he notes that this is “not because it’s killing desktop, it’s because it’s outgrowing it.”

Haile also points out that there are 5 types of things that you can do with content. These are:

  1. Create
  2. Host
  3. Curate
  4. Distribute
  5. Monetize

Facebook has historically been used to curate and distribute content. With their new Facebook Instant Articles initiative, they are now taking on the hosting and monetization of content. I’ll discuss that more below.

So does this mean that Facebook is the runaway winner in mobile? No, since the charts above focused on the major news cycles; nonetheless, they show that Facebook has some strong advantages over Google that you might not have expected.

Mobile apps

Another thing that many underestimate is the growing importance of the apps market. comScore’s September 2015 Mobile App Report provides some compelling data to help you increase your understanding of where apps fit into the overall market.

First, let’s look at the share that apps represent of all digital media time:

Per this chart, usage in all 3 segments is growing (including desktop), but time in apps is growing at a far greater rate than any other segment. In addition, time spent in apps exceeds time on desktop and the mobile web combined. Note that not all app time is on smartphones, as tablets see high app usage as well, but smartphone app usage by itself represents 44% of all digital media time spent:

I gotta tell you, seeing that 44% number was a “wow” moment for me. Facebook and Google have both recognized the importance of this growing usage pattern. You can see this in the following chart of the top 25 installed apps by user count:

The top 6 apps, as well as 8 of the top 9 apps, are all provided by either Facebook or Google. Ever wonder why Google keeps Google Plus around? Might have something to do with that app coming in at position 18 among the most-installed apps. This makes G+ a huge potential source of data for Google.

Facebook has the clear lead here too, though, as it’s the number-one installed app, and it’s considered the number one app for 48% of those that have it installed:

One of the big problems with apps for most publishers is driving ongoing usage even after you get installed. According to Google, “only one quarter of installed apps are used daily while one quarter are left completely unused.”

One method that Google offers to help app publishers is app indexing. This will enable content within apps to show up in search results for related queries:

Google currently has 50 billion links within apps indexed, and “25% of searches on Android return deep links to apps for signed-in users. In addition to driving re-engagement, app indexing on Android will also surface install buttons for users who do not yet have your app installed. Since 1 in 4 apps are already being discovered through search, app indexing is a simple and free method for acquiring new users.” Here are some examples of app install buttons showing up in the SERPs:

As shown here, the query that led to this showing up in the SERP was the name of the business, Priceza. However, Google’s Mariya Moeva provided me with other examples of “app seeking queries” that might bring up such an install button:

  1. restaurant finder
  2. grocery shopping list
  3. breaking news app

The benefits of app indexing should be obvious, but Google shares many case studies here. One of these from AliExpress showed an 80–90% increase in search impressions, and a 30–40% increase for searches on Android for users that had the app installed.

This leads to bringing users back to your app, and that offers compelling value, as app-engaged users tend to be more loyal, place higher-dollar-value orders, and order more frequently. Part of the upside in terms of visibility results from the fact that app indexing is used by Google as a ranking signal, though the scope of that boost isn’t clear.

Driving initial installs, and then getting help to get users back to your app seems like a good thing!

Speed, speed, and more speed

You’ve heard this, too: that speed is paramount. But you might still have no idea how important it is. You may well have seen data like this:

Or you may have seen data from the Akamai travel site performance study, which showed:

  1. Three-second rule: 57% of online shoppers will wait three seconds or less before abandoning a site
  2. 65% of 18–24 year olds expect a site to load in two seconds or less

It would be tempting to look at all this data and then start setting specific goals as to how fast you need to be, but I want to discourage you from thinking about it that way. Instead, I’m going to give you a different goal:

Be faster than your competition

In today’s hyper-connected world, the real issue is that any time you offer some subpar aspect to what you do, the competitive alternative is only a click or two away. Understanding the implications of that, and applying it in all your online thinking is one of the most important things you can do.

Don’t just focus on being faster than they are today either, but make yourself faster than they will be in 6 months or a year from now.
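If you want a rough, do-it-yourself scoreboard, here’s a minimal sketch (my own example, with made-up domain names) that times full-page downloads for you and your competitors. It’s a crude single-sample measurement, not a substitute for a proper audit:

```python
import time
import urllib.request

def fetch_seconds(url, timeout=10):
    """Crudely time one full page download; repeat and average in practice."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

def fastest(timings):
    """Given {site: seconds}, return the site with the lowest load time."""
    return min(timings, key=timings.get)

# Hypothetical results; in practice, populate with fetch_seconds(url)
timings = {
    "yoursite.example": 2.8,
    "competitor-a.example": 1.9,
    "competitor-b.example": 3.4,
}
print(fastest(timings))  # competitor-a.example
```

Re-run this periodically: the goal isn’t a fixed number, it’s staying ahead of whoever else is a click away.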

For some basic help, you can check your pages in Google’s PageSpeed Insights tool. However, both Facebook and Google offer initiatives for dramatically speeding up your web pages, and that’s what I’ll explore next.

Facebook Instant Articles

Facebook Instant Articles officially launched on May 12th, 2015. The idea behind the program is to dramatically speed up performance of content on mobile devices. When the program initially launched, it was available only to some publishers, such as the NY Times, the Washington Post, Buzzfeed, Business Insider, NBC News, and Mic.

The benefits that the program offers are near-instant loading of content on mobile devices and the opportunity to have Facebook sell your ad space for you (though you can still sell it yourself if you want to). If Facebook sells the ads for you, the split has been reported as 70% to you and 30% kept by Facebook, though that does not appear to be a hard-and-fast number.

Instant Articles come with some neat visualization features too, such as rapid scrolling, zooming capabilities, and the ability to connect to maps functionality.

However, the platform is a proprietary one, with Facebook hosting the content. This will be scary to some. To try to ease those concerns, Facebook does enable publishers to sell their own ads if they prefer, without any need to pay Facebook a cut, and to include their own analytics on the Instant Articles.

As of April 12, 2016, this program will be opened up to all publishers. According to Peter Kafka of re/code: “When I asked reps there if that included one-person operations — that is, someone typing their own stuff on a Tumblr page or Medium page or whatever — they said yes, with a tiny bit of hesitation.”

Accelerated Mobile Pages

In October of 2015, Accelerated Mobile Pages (AMP) were announced. Like Facebook Instant Articles, the project’s goal is to load pages on mobile devices instantly. One of the fundamental differences is that AMP is an open-source project, with participants such as:

  1. Google
  2. Pinterest
  3. Twitter
  4. WordPress
  5. The Guardian

AMP relies on two basic principles to make it operate faster:

  1. The permitted HTML is very limited, with the basic goal being that all code is already pre-rendered, minimizing the need for server accesses when rendering a page.
  2. The pages can be cached by third parties. For example, Google already has the caching infrastructure in place, but companies such as Pinterest and Twitter can set up their own if they choose.

To make this work, you are only allowed to use fairly limited CSS, and the AMP-supplied JavaScript library. You can more or less forget about AJAX, or forms, for example.

There are also some hoops you need to jump through to implement analytics on these pages, run your ads, and deal with unsupported functionality, but workarounds do exist. For example, according to Paul Shapiro at SMX West, iframes are “the holy grail of unsupported functionality” for AMP:

Also, to implement analytics, you’ll need to use special tags. Paul Shapiro recommends the PageFrog plugin to help with that for both AMP and Facebook Instant Articles:

Expect the AMP platform to evolve rapidly, as there are many interested parties working on this and many of the current shortcomings will get better over time.

Developing an action plan

The cumulative weight of all these changes represents a significant disruptive event. These are the times when businesses can rapidly accelerate their growth, or lose the opportunity and get stuck playing catch-up. I’m pretty sure which of those two scenarios I prefer. The first steps, really, are to understand the specific opportunities for connecting with your customers over mobile, and to prioritize among them.

You should have a basic mobile-friendly site, as that’s already a ranking factor. But your thought process needs to go deeper than that. For example, start understanding what type of mobile experience your customers want to have. I’d urge you to put significant creative thinking into this question, as the best mobile experience might involve approaches that are quite different from what you do on your current website.

The strategic shift you need to make is to make your business mobile-first. Any time you think about adding something to your website, for example, stop coming up with the desktop design first and relying on responsive web design to handle mobile for you. Start thinking about your mobile experience first, and then consider the desktop variation second.

As you engage in that thought process, be willing to incorporate some of the specific elements I’ve discussed within this article. Here’s a summary of those items, and how they might fit in:

  1. Should you build an app? Yes, if you believe you can put enough value into an app to generate installs and bring users back to use it on an ongoing basis.

    • While I did not discuss this in this article, make sure to perform App Store Optimization to help generate more installs as well.
    • Implement Google’s app indexing. This may help you generate more installs, and should also help bring people back to your app on a regular basis.
  2. Should you implement Facebook Instant Articles? I’m a big fan of trying this out for your article-level content. It can’t hurt to have it load instantly within Facebook, particularly if you do any level of Facebook promotion. We’re planning to test it here at STC and see what it does for us.
  3. Should you implement AMP? I’m in the same camp on this one: you should try this, and we’re testing it here at STC.

As for the impact of natural language (voice) search, this just increases the emphasis on the quality of your content and your focus on natural language in that content, instead of obsessing over tweaking the content for search engines.

This list really just itemizes the tactical opportunities for you, and the biggest point is that you need to start operating from a mobile-first mindset at all levels in your online business efforts.


The 9 Most Common Local SEO Myths Dispelled

Posted by JoyHawkins

[Estimated read time: 7 minutes]

I regularly hear things in the Local SEO world that many people believe, but which are completely false. I wanted to put some of these myths to rest by listing out the top 9 Local SEO myths that I run into most frequently.

1. Deleting your listing in Google My Business actually removes the listing from Google.

Business owners will often question how they can get rid of duplicate listings on Google. One of the more common things people try is claiming the duplicate and then deleting it from the Google My Business Dashboard. When you go to delete a listing, you receive a scary message asking if you’re sure you want to do this:

The truth is, removing a listing from Google My Business (GMB) just makes the listing unverified. It still exists on Google Maps and will often still rank, provided you didn’t clear out all the categories/details before you deleted it. The only time you’d really want to delete a listing via GMB is if you no longer want to manage the listing.

Google confirms this in their help center article:

When you delete a local page, the corresponding listing will be unverified and you will no longer be able to manage it. Google may still retain business information from the page and may continue to show information about the business on Maps, Search, and other Google properties, including marking the business as permanently closed, moved, or open, depending on the information that’s known about the business.

2. Failure to claim your page means your business won’t rank anywhere.

I’m sure most of you have received those annoying phone calls that say: “Your business is not currently verified and will vanish on Google unless you claim it now!”

First of all, consider the authority of the people who are calling you. I can say with certainty they are not experts in this industry, or they wouldn’t resort to robo-calling to make sales.

The Moz Local Search Ranking Factors survey does list verifying your listing as #13 for making an impact on ranking in the 3-pack, but this is often because business owners add more data to the listing when they verify it. If they left the listing exactly how it was before verifying, the verification “status” would likely not impact the ranking much at all. We often see unverified pages outranking verified ones in really competitive markets.

3. “Professional/Practitioner” listings on Google are considered duplicates and can be removed.

Google often creates listings for the actual public-facing professionals in an office (lawyers, doctors, dentists, realtors, etc.), and the owner of the practice usually wants them to disappear. Google will get rid of the listing for the professional in two different cases:

a) The professional is not public-facing. Support staff, like hygienists or paralegals for example, don’t qualify for a listing and Google will remove them if they exist.

b) The business only has one public-facing individual. For example, if you have a law firm with only one lawyer, Google considers this to be a “Solo Practitioner” and will merge the listing for the professional with the listing for the office. Their guidelines state to “create a single page, named using the following format: [brand/company]: [practitioner name].”

In the case that the professional has left your office, you can have the listing marked as moved if the professional has retired or is no longer working in the industry. This will cause it to vanish from the search results, but it will still exist in Google’s back-end. If the professional has moved to a different company, you should have them claim the listing and update the address/phone number to list their new contact information.

4. Posting on G+ helps improve your ranking.

Phil Rozek explains this best: “It’s nearly impossible for people to see your Google+ posts unless they search for your business by name. Google doesn’t include a link to your ‘Plus’ page in the local pack. Google doesn’t even call it a ‘Plus’ page anymore. Do you still believe being active on Google+ is a local ranking factor?”

No, posting on G+ will not cause your ranking to skyrocket, despite what the Google My Business phone support team told you.

5. “Maps SEO” is something that can be effectively worked on separately from “Organic SEO.”

I often get small business owners calling me saying something along the lines of this: “Hey, Joy. I have an SEO company and they’re doing an awesome job with my site organically, but I don’t show up anywhere in the local pack. Can I hire you to do Google Maps optimization and have them do Organic SEO?”

My answer is, generally, no. “Maps Optimization” is not a thing that can be separated from organic. At Local U in Williamsburg, Mike Ramsey shared that 75% of ranking local listings also rank organically on the first page. The two are directly connected — a change that you make to your site can have a huge influence on where you rank locally.

If you’re a local business, it’s in your best interest to have an SEO company that understands Google Maps and how the 3-pack works. At the company I work for, we’ve always made it a goal to get the business ranked both organically and locally, since it’s almost impossible to get in the 3-pack without a strong organic ranking and a website with strong local signals.

6. Google employees are the highest authority on which ranking signals you should pay attention to.

Google employees are great; I love reading what they publish and the insight they provide. However, as David Mihm pointed out at Local U, those employees have absolutely no incentive to divulge any top-secret tips for getting your website to rank well. Here are some recent examples of advice given by Google employees that should be ignored:

  1. Duplicate listings will fix themselves over time.
  2. Posting on Google+ will help your ranking (advice given by phone support reps).
  3. If you want to rank well in the 3-pack, just alter your business description.

Instead of trusting this advice, I always suggest that people make sure what they’re doing matches up with what the pros are saying in big surveys and case studies.

7. Setting a huge service area means you’ll rank in all kinds of additional towns.

Google allows service-area businesses to set a radius around their business address to indicate how far they’re willing to travel to the customer. People often set this radius very large, believing it will help them rank in more towns. It doesn’t. You will still, most likely, only rank in the town you’re using for your business address.

8. When your business relocates, you want to mark the listing for the old location as closed.

The Google My Business and Google MapMaker rules don’t agree on this one. Anyone on the Google MapMaker side would tell a business to mark its listing as “closed” when it moves. But doing so gives the listing a big, ugly, red “permanently closed” label whenever anyone searches for your business name.

If your listing is verified through Google My Business, all you need to do is edit the address inside your dashboard when you move. If there’s an unverified duplicate listing that exists at your old address, you want to make sure you get it marked as “Moved.”

9. Google displays whatever is listed in your GMB dashboard.

Google gives business owners the ability to edit the information on their listing by verifying it via Google My Business. However, whatever data the owner inputs is just one of many sources Google draws information from. Google updates verified listings all the time by scraping data from the business website, incorporating edits made on Google Maps/MapMaker, and pulling from third-party data sources. In one recent case I’ve seen, Google repeatedly updated an owner-verified listing with incorrect business hours because it couldn’t properly read the hours listed on the business’s website.

Were you surprised by any of these Local SEO myths? Are there others you come across regularly? I’d love to hear about them, so please leave a comment!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
