All the latest SEO news on Paradi'SEO!

SEO news in French and English
English-language news around the clock
21 / 08 / 2018
18:00 SearchCap: Google Assistant finds good news, GSC updates, Google Image search traffic & more » Search Engine Land
Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

Please visit Search Engine Land for the full article.
17:40 A checklist: Important SEO points to cover in a content campaign » Search Engine Land
Contributor Paddy Moogan shares a checklist of key on-page and analytical items and how to SEO them so your content campaign runs smoothly and supports your ranking efforts. 

Please visit Search Engine Land for the full article.
16:57 Google Analytics shows how to find Google Image search traffic when Google Images changes the referral URL » Search Engine Land
Curious how Google Analytics will show your traffic coming from Google Image search after the pre-announced referrer URL change? Google has documented it for us in a blog post.

Please visit Search Engine Land for the full article.
13:37 Google Assistant will now find you ‘good news’ » Search Engine Land
‘OK, Google: Tell me something good’ triggers positive news stories from the Solutions Journalism Network.

Please visit Search Engine Land for the full article.
12:29 NEW On-Demand Crawl: Quick Insights for Sales, Prospecting, & Competitive Analysis » Moz Blog

Posted by Dr-Pete

In June of 2017, Moz launched our entirely rebuilt Site Crawl, helping you dive deep into crawl issues and technical SEO problems, fix those issues in your Moz Pro Campaigns (tracked websites), and monitor weekly for new issues. Many times, though, you need quick insights outside of a Campaign context, whether you're analyzing a prospect site before a sales call or trying to assess the competition.

For years, Moz had a lab tool called Crawl Test. The bad news is that Crawl Test never made it to prime-time and suffered from some neglect. The good news is that I'm happy to announce the full launch (as of August 2018) of On-Demand Crawl, an entirely new crawl tool built on the engine that powers Site Crawl, but with a UI designed around quick insights for prospecting and competitive analysis.

While you don’t need a Campaign to run a crawl, you do need to be logged into your Moz Pro subscription. If you don’t have a subscription, you can sign up for a free trial and give it a whirl.

How can you put On-Demand Crawl to work? Let's walk through a short example together.


All you need is a domain

Getting started is easy. From the "Moz Pro" menu, find "On-Demand Crawl" under "Research Tools":

Just enter a root domain or subdomain in the box at the top and click the blue button to kick off a crawl. While I don't want to pick on anyone, I've decided to use a real site. Our recent analysis of the August 1st Google update identified some sites that were hit hard, and I've picked one (lilluna.com) from that list.

Please note that Moz is not affiliated with Lil' Luna in any way. For the most part, it seems to be a decent site with reasonably good content. Let's pretend, just for this post, that you're looking to help this site out and determine if they'd be a good fit for your SEO services. You've got a call scheduled and need to spot-check for any major problems so that you can go into that call as informed as possible.

On-Demand Crawls aren't instantaneous (crawling is a big job), but they'll generally finish in anywhere from a few minutes to an hour. We know these are time-sensitive situations. You'll soon receive an email that looks like this:

The email includes the number of URLs crawled (On-Demand will currently crawl up to 3,000 URLs), the total issues found, and a summary table of crawl issues by category. Click on the [View Report] link to dive into the full crawl data.


Assess critical issues quickly

We've designed On-Demand Crawl to assist your own human intelligence. You'll see some basic stats at the top, but then immediately move into a graph of your top issues by count. The graph only displays issues that occur at least once on your site – you can click "See More" to show all of the issues that On-Demand Crawl tracks (the top two bars have been truncated)...

Issues are also color-coded by category. Some items are warnings, and whether they matter depends a lot on context. Other issues, like "Critical Errors" (in red), almost always demand attention. So, let's check out those 404 errors. Scroll down and you'll see a list of "Pages Crawled" with filters. You're going to select "4xx" in the "Status Codes" dropdown...

You can then pretty easily spot-check these URLs and find out that they do, in fact, seem to be returning 404 errors. Some appear to be legitimate content that has either internal or external links (or both). So, within a few minutes, you've already found something useful.

Let's look at those yellow "Meta Noindex" errors next. This is a tricky one, because you can't easily determine intent. An intentional Meta Noindex may be fine. An unintentional one (or hundreds of unintentional ones) could be blocking crawlers and causing serious harm. Here, you'll filter by issue type...

Like the top graph, issues appear in order of prevalence. You can also filter by all pages that have issues (any issues) or pages that have no issues. Here's a sample of what you get back (the full table also includes status code, issue count, and an option to view all issues)...

Notice the "?s=" common to all of these URLs. Clicking on a few, you can see that these are internal search pages. These URLs have no particular SEO value, and the Meta Noindex is likely intentional. Good technical SEO is also about avoiding false alarms because you lack internal knowledge of a site. On-Demand Crawl helps you semi-automate and summarize insights to put your human intelligence to work quickly.


Dive deeper with exports

Let's go back to those 404s. Ideally, you'd like to know where those URLs are showing up. We can't fit everything into one screen, but if you scroll up to the "All Issues" graph you'll see an "Export CSV" option...

The export will honor any filters set in the page list, so let's re-apply that "4xx" filter and pull the data. Your export should download almost immediately. The full export contains a wealth of information, but I've zeroed in on just what's critical for this particular case...

Now, you know not only what pages are missing, but exactly where they link from internally, and can easily pass along suggested fixes to the customer or prospect. Some of these turn out to be link-heavy pages that could probably benefit from some clean-up or updating (if newer recipes are a good fit).
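
If you'd rather work through the export in code than in a spreadsheet, a few lines of pandas can build the same 404-to-source mapping. This is a hypothetical sketch: the file name and column names ("Status Code", "Referrer") are stand-ins, so check them against the headers in your actual export.

```python
import pandas as pd

# Hypothetical file and column names; verify against your own export.
crawl = pd.read_csv("on-demand-crawl-export.csv")
broken = crawl[crawl["Status Code"].between(400, 499)]

# For each 4xx URL, list the internal pages that link to it, so suggested
# fixes can be handed straight to the client or prospect.
report = broken.groupby("URL")["Referrer"].apply(list)
print(report.head())
```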

Let's try another one. You've got 8 duplicate content errors. Potentially thin content could fit theories about the August 1st update, so this is worth digging into. If you filter by "Duplicate Content" issues, you'll see the following message...

The 8 duplicate issues actually represent 18 pages, and the table returns all 18 affected pages. In some cases, the duplicates will be obvious from the title and/or URL, but in this case there's a bit of mystery, so let's pull that export file. In this case, there's a column called "Duplicate Content Group," and sorting by it reveals something like the following (there's a lot more data in the original export file)...

I've renamed "Duplicate Content Group" to just "Group" and included the word count ("Words"), which could be useful for verifying true duplicates. Look at group #7 – it turns out that these "Weekly Menu Plan" pages are very image heavy and have a common block of text before any unique text. While not 100% duplicated, these otherwise valuable pages could easily look like thin content to Google and represent a broader problem.
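
The duplicate-group sort works the same way in code. Again, the column names are assumptions: the raw export presumably calls the column "Duplicate Content Group", which we rename here just as in the table above.

```python
# Continuing the hypothetical export from the 404 example above.
dupes = (crawl.dropna(subset=["Duplicate Content Group"])
              .rename(columns={"Duplicate Content Group": "Group",
                               "Word Count": "Words"}))

# Sorting by group puts likely duplicates side by side; comparing word
# counts helps verify true duplicates versus shared boilerplate blocks.
print(dupes.sort_values("Group")[["Group", "URL", "Words"]]
           .to_string(index=False))
```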


Real insights in real-time

Not counting the time spent writing the blog post, running this crawl and diving in took less than an hour, and even that small amount of time spent uncovered more potential issues than what I could cover in this post. In less than an hour, you can walk into a client meeting or sales call with in-depth knowledge of any domain.

Keep in mind that many of these features also exist in our Site Crawl tool. If you're looking for long-term, campaign insights, use Site Crawl (if you just need to update your data, use our "Recrawl" feature). If you're looking for quick, one-time insights, check out On-Demand Crawl. Standard Pro users currently get 5 On-Demand Crawls per month (with limits increasing at higher tiers).

Your On-Demand Crawls are currently stored for 90 days. When you re-enter the feature, you'll see a table of all of your recent crawls (the image below has been truncated):

Click on any row to go back to see the crawl data for that domain. If you get the sale and decide to move forward, congratulations! You can port that domain directly into a Moz campaign.

We hope you'll try On-Demand Crawl out and let us know what you think. We'd love to hear your case studies, whether it's sales, competitive analysis, or just trying to solve the mysteries of a Google update.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

08:32 New Google Search Console has added the links reports from the old interface » Search Engine Land
Google continues to port features from the old Google Search Console to the new beta Search Console.

Please visit Search Engine Land for the full article.
20 / 08 / 2018
20:00 Google marks 14 years as a public company » Search Engine Land
Today the search giant ranks as the world’s third most valuable company.

Please visit Search Engine Land for the full article.
18:00 SearchCap: Google Drive rebrands, visual and voice search, Amazon’s ad opportunities & more » Search Engine Land
Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

Please visit Search Engine Land for the full article.
16:58 [Reminder] Ramp Up Your Amazon Ad Game: 5 tips for success » Search Engine Land
Join us for this live webinar on Thursday, August 23, at 1:00 PM ET (10:00 AM PT).

Please visit Search Engine Land for the full article.
15:23 How Visual and Voice Search Are Revitalizing The Role of SEO » Search Engine Land
Contributor Jim Yu outlines how savvy marketers are using voice and visual search to engage more meaningfully with audiences at each stage of their purchase journey.

Please visit Search Engine Land for the full article.
12:21 Google Drive’s rebrand to Google One includes offers for hotels found in Search » Search Engine Land
Deals on hotels found in Google Search are spotlighted in the Benefits section of the new Google One app, with plans to add more promotions from Google properties.

Please visit Search Engine Land for the full article.
17 / 08 / 2018
23:23 What Do Dolphins Eat? Lessons from How Kids Search - Whiteboard Friday » Moz Blog

Posted by willcritchlow

Kids may search differently than adults, but there are some interesting insights from how they use Google that can help deepen our understanding of searchers in general. Comfort levels with particular search strategies, reading only the bold words, taking search suggestions and related searches as answers — there's a lot to dig into. In this week's slightly different-from-the-norm Whiteboard Friday, we welcome the fantastic Will Critchlow to share lessons from how kids search.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, everyone. I'm Will Critchlow, founder and CEO of Distilled, and this week's Whiteboard Friday is a little bit different. I want to talk about some surprising and interesting and a few funny facts that I learnt when I was reading some research that Google did about how kids search for information. So this isn't super actionable. This is not about tactics of improving your website particularly. But I think we get some insights — they were studying kids aged 7 to 11 — by looking at how kids interact. We can see some reflections or some ideas about how there might be some misconceptions out there about how adults search as well. So let's dive into it.

What do dolphins eat?

I've got this "What do dolphins eat?" because this was the first question that the researchers gave to the kids to say sit down in front of a search box, go. They tell this little anecdote, a little bit kind of soul-destroying, of this I think it was a seven-year-old child who starts typing dolphin, D-O-L-F, and then presses Enter, and it was like sadly there's no dolphins, which hopefully they found him some dolphins. But a lot of the kids succeeded at this task.

Different kinds of searchers

The researchers divided the ways that the kids approached it up into a bunch of different categories. They found that some kids were power searchers. Some are what they called "developing." They classified some as "distracted." But one that I found fascinating was what they called visual searchers. I think they found this more commonly among the younger kids who were perhaps a little bit less confident reading and writing. It turns out that, for almost any question you asked them, these kids would turn first to image search.

So for this particular question, they would go to image search, typically just type "dolphin" and then scroll and go looking for pictures of a dolphin eating something. Then they'd find a dolphin eating a fish, and they'd turn to the researcher and say "Look, dolphins eat fish." Which, when you think about it, I quite like in an era of fake news. This is the kids doing primary research. They're going direct to the primary source. But it's not something that I would have ever really considered, and I don't know if you would. But hopefully this kind of sparks some thought and some insights and discussions at your end. They found that there were some kids who pretty much always, no matter what you asked them, would always go and look for pictures.

Kids who were a bit more developed, a bit more confident in their reading and writing, would often fall into one of the other camps. They found a lot of kids were obviously distracted, and I think as adults this is something that we can relate to. Many of the kids were not really very interested in the task at hand. But this kind of path from distracted to developing to power searcher is an interesting journey that I think totally applies to grown-ups as well.

In practice: [wat do dolfin eat]

So I actually, after I read this paper, went and did some research on my kids. So my kids were in roughly this age range. When I was doing it, my daughter was eight and my son was five and a half. Both of them interestingly typed "wat do dolfin eat" pretty much like this. They both misspelled "what," and they both misspelled "dolphin." Google was fine with that. Obviously, these days this is plenty close enough to get the result you wanted. Both of them successfully answered the question pretty much, but both of them went straight to the OneBox. This is, again, probably unsurprising. You can guess this is probably how most people search.

"Oh, what's a cephalopod?" The path from distracted to developing

So there's a OneBox that comes up, and it's got a picture of a dolphin. So my daughter, a very confident reader, she loves reading, "wat do dolfin eat," she sat and she read the OneBox, and then she turned to me and she said, "It says they eat fish and herring. Oh, what's a cephalopod?" I think this was her going from distracted into developing probably. To start off with, she was just answering this question because I had asked her to. But then she saw a word that she didn't know, and suddenly she was curious. She had to kind of carefully type it because it's a slightly tricky word to spell. But she was off looking up what is a cephalopod, and you could see the engagement shift from "I'm typing this because Dad has asked me to and it's a bit interesting I guess" to "huh, I don't know what a cephalopod is, and now I'm doing my own research for my own reasons." So that was interesting.

"Dolphins eat fish, herring, killer whales": Reading the bold words

My son, as I said, typed something pretty similar, and he, at the point when he was doing this, was at the stage of certainly capable of reading, but generally would read out loud and a little bit halting. What was fascinating on this was he only read the bold words. He read it out loud, and he didn't read the OneBox. He just read the bold words. So he said to me, "Dolphins eat fish, herring, killer whales," because killer whales, for some reason, was bolded. I guess it was pivoting from talking about what dolphins eat to what killer whales eat, and he didn't read the context. This cracked him up. So he thought that was ridiculous, and isn't it funny that Google thinks that dolphins eat killer whales.

That is similar to some stuff that was in the original research, where there were a bunch of common misconceptions it turns out that kids have and I bet a bunch of adults have. Most adults probably don't think that the bold words in the OneBox are the list of the answer, but it does point to the problems with factual-based, truthy type queries where Google is being asked to be the arbiter of truth on some of this stuff. We won't get too deep into that.

Common misconceptions for kids when searching

1. Search suggestions are answers

Among the common misconceptions they found: some kids thought that the search suggestions (the drop-down as you start typing) were the answers, which is a bit problematic. I mean we've all seen kind of racist or hateful drop-downs in those search queries. But in this particular case, it was mainly just funny. It would end up with things like you start asking "what do dolphins eat," and "Do dolphins eat cats" was one of the search suggestions.

2. Related searches are answers

Similar with related searches, which, as we know, are not answers to the question. These are other questions. But kids in particular — I mean, I think this is true of all users — didn't necessarily read the directions on the page, didn't read that they were related searches, just saw these things that said "dolphin" a lot and started reading out those. So that was interesting.

How kids search complicated questions

The next bit of the research was much more complex. So they started with these easy questions, and they got into much harder kind of questions. One of them that they asked was this one, which is really quite hard. So the question was, "Can you find what day of the week the vice president's birthday will fall on next year?" This is a multifaceted, multipart question.

How do they handle complex, multi-step queries?

Most of the younger kids were pretty stumped on this question. Some did manage it. I think a lot of adults would fail at this. So if you just turn to Google, if you just typed this in or do a voice search, this is the kind of thing that Google is almost on the verge of being able to do. If you said something like, "When is the vice president's birthday," that's a question that Google might just be able to answer. But this kind of three-layered thing, what day of the week and next year, makes this actually a very hard query. So the kids had to first figure out that, to answer this, this wasn't a single query. They had to do multiple stages of research. When is the vice president's birthday? What day of the week is that date next year? Work through it like that.

I found with my kids, my eight-year-old daughter got stuck halfway through. She kind of realized that she wasn't going to get there in one step, but she couldn't quite structure the multiple levels needed to get there, and she started getting a bit distracted again. It was no longer about cephalopods, so she wasn't quite as interested.

Search volume will grow in new areas as Google's capabilities develop

This I think is a whole area that, as Google's capabilities develop to answer more complex queries and as we start to trust and learn that those kind of queries can be answered, what we see is that there is going to be increasing, growing search volume in new areas. So I'm going to link to a post I wrote about a presentation I gave about the next trillion searches. This is my hypothesis that essentially, very broad brush strokes, there are a trillion desktop searches a year. There are a trillion mobile searches a year. There's another trillion out there in searches that we don't do yet because they can't be answered well. I've got some data to back that up and some arguments why I think it's about that size. But I think this is kind of closely related to this kind of thing, where you see kids get stuck on these kind of queries.

Incidentally, I'd encourage you to go and try this. It's quite interesting, because as you work through trying to get the answer, you'll find search results that appear to give the answer. So, for example, I think there was an About.com page that actually purported to give the answer. It said, "What day of the week is the vice president's birthday on?" But it had been written a year before, and there was no date on the page. So actually it was wrong. It said Thursday. That was the answer in 2016 or 2017. So that just, again, points to the difference between primary research, the difference between answering a question and truth. I think there's a lot of kind of philosophical questions baked away in there.

Kids get comfortable with how they search – even if it's wrong

So we're going to wrap up with possibly my favorite anecdote of the user research that these guys did, which was that they said some of these kids, somewhere in this developing stage, get very attached to searching in one particular way. I guess this is kind of related to the visual search thing. They find something that works for them. It works once. They get comfortable with it, they're familiar with it, and they just do that for everything, whether it's appropriate or not. My favorite example was this one child who apparently looked for information about both dolphins and the vice president of the United States on the SpongeBob SquarePants website, which I mean maybe it works for dolphins, but I'm guessing there isn't an awful lot of VP information.

So anyway, I hope you've enjoyed this little adventure into how kids search and maybe some things that we can learn from it. Drop some anecdotes of your own in the comments. I'd love to hear your experiences and some of the funny things that you've learnt along the way. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

18:00 SearchCap: Old content in Google, CRO tools, missing mobile opportunities & more » Search Engine Land
Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

Please visit Search Engine Land for the full article.
17:46 15 questions to ask yourself before publishing a new landing page » Search Engine Land
Building landing pages? Contributor Jacob Baadsgaard has compiled a tried-and-true list of 15 questions to help guide you in creating and evaluating new landing pages.

Please visit Search Engine Land for the full article.
15:05 Do You Need Local Pages? - Whiteboard Friday » Moz Blog

Posted by Tom.Capper

Does it make sense for you to create local-specific pages on your website? Regardless of whether you own or market a local business, it may make sense to compete for space in the organic SERPs using local pages. Please give a warm welcome to our friend Tom Capper as he shares a 4-point process for determining whether local pages are something you should explore in this week's Whiteboard Friday!

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hello, Moz fans. Welcome to another Whiteboard Friday. I'm Tom Capper. I'm a consultant at Distilled, and today I'm going to be talking to you about whether you need local pages. Just to be clear right off the bat what I'm talking about, I'm not talking about local rankings as we normally think of them, the local map pack results that you see in search results, the Google Maps rankings, that kind of thing.

A 4-step process to deciding whether you need local pages

I'm talking about conventional, 10 blue links rankings but for local pages, and by local pages I mean pages from a national or international business that are location-specific. What are some examples of that? Maybe on Indeed.com they would have a page for jobs in Seattle. Indeed doesn't have a bricks-and-mortar premises in Seattle, but they do have a page that is about jobs in Seattle.

You might get a similar thing with flower delivery. You might get a similar thing with used cars, all sorts of different verticals. I think it can actually be quite a broadly applicable tactic. There's a four-step process I'm going to outline for you. The first step is actually not on the board. It's just doing some keyword research.

1. Know (or discover) your key transactional terms

I haven't done much on that here because hopefully you've already done that. You already know what your key transactional terms are. Whatever happens, you don't want to end up developing location pages for too many different keyword types, because it's going to bloat your site. You probably just need to pick one or two key transactional terms that you're going to make the local variants of. For this purpose, I'm going to talk through an SEO job board as an example.

2. Categorize your keywords as implicit, explicit, or near me and log their search volumes

We might have "SEO jobs" as our core head term. We then want to figure out what the implicit, explicit, and near me versions of that keyword are and what the different volumes are. In this case, the implicit version is probably just "SEO jobs." If you search for "SEO jobs" now, like if you open a new tab in your browser, you're probably going to find that a lot of local orientated results appear because that is an implicitly local term and actually an awful lot of terms are using local data to affect rankings now, which does affect how you should consider your rank tracking, but we'll get on to that later.

SEO jobs, maybe SEO vacancies, that kind of thing, those are all going to be going into your implicitly local terms bucket. The next bucket is your explicitly local terms. That's going to be things like SEO jobs in Seattle, SEO jobs in London, and so on. You're never going to get a complete coverage of different locations. Try to keep it simple.

You're just trying to get a rough idea here. Lastly you've got your near me or nearby terms, and it turns out that for SEO jobs not many people search SEO jobs near me or SEO jobs nearby. This is also going to vary a lot by vertical. I would imagine that if you're in food delivery or something like that, then that would be huge.

3. Examine the SERPs to see whether local-specific pages are ranking

Now we've categorized our keywords. We want to figure out what kind of results are going to do well for what kind of keywords, because obviously if local pages are the answer, then we might want to build some.

In this case, I'm looking at the SERP for "SEO jobs." This is imaginary. The rankings don't really look like this. But we've got SEO jobs in Seattle from Indeed. That's an example of a local page, because this is a national business with a location-specific page. Then we've got SEO jobs Glassdoor. That's a national page, because in this case they're not putting anything on this page that makes it location specific.

Then we've got SEO jobs Seattle Times. That's a local business. The Seattle Times only operates in Seattle. It probably has a bricks-and-mortar location. If you're going to be pulling a lot of data of this type, maybe from stats or something like that, obviously tracking from the locations that you're mentioning, then you're probably going to want to categorize these at scale rather than going through one at a time.

I've drawn up a little flowchart here that you could encapsulate in an Excel formula or something like that. If the location is mentioned in the URL and in the domain, then we know we've got a local business. Most of the time it's just a rule of thumb. If the location is mentioned in the URL but not mentioned in the domain, then we know we've got a local page and so on.
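
As a rough illustration, here's that flowchart as a minimal Python sketch rather than an Excel formula. The URLs are imaginary, like the example SERP above:

```python
def classify_result(url: str, domain: str, location: str) -> str:
    """Rule-of-thumb classifier from the flowchart: location in the
    domain -> local business; location in the URL path only -> local
    page; no location mentioned -> national page."""
    loc = location.lower()
    path = url.lower().replace(domain.lower(), "")
    if loc in domain.lower():
        return "local business"   # e.g. jobs.seattletimes.com
    if loc in path:
        return "local page"       # e.g. indeed.com/.../seattle
    return "national page"        # e.g. glassdoor.com/seo-jobs

results = [
    ("https://www.indeed.com/q-SEO-jobs-l-Seattle.html", "indeed.com"),
    ("https://www.glassdoor.com/Job/seo-jobs.htm", "glassdoor.com"),
    ("https://jobs.seattletimes.com/search/seo", "seattletimes.com"),
]
for url, domain in results:
    print(domain, "->", classify_result(url, domain, "seattle"))
```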

4. Compare & decide where to focus your efforts

You can just sort of categorize at scale all the different result types that we've got. Then we can start to fill out a chart like this using the rankings. What I'd recommend doing is finding a click-through rate curve that you are happy to use. You could go to somewhere like AdvancedWebRanking.com, download some example click-through rate curves.

Again, this doesn't have to be super precise. We're looking to get a proportionate directional indication of what would be useful here. I've got Implicit, Explicit, and Near Me keyword groups. I've got Local Business, Local Page, and National Page result types. Then I'm just figuring out what the visibility share of all these types is. In my particular example, it turns out that for explicit terms, it could be worth building some local pages.
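
To make the visibility-share step concrete, here's a small sketch. The CTR curve values are placeholders, so substitute whichever curve you downloaded:

```python
# Placeholder CTR curve for positions 1-10; swap in a real one, e.g.
# one downloaded from Advanced Web Ranking.
CTR_CURVE = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.025, 0.02, 0.02]

def visibility_share(serp):
    """serp[i] is the result type at position i + 1. Returns each
    type's share of the estimated total clicks on the page."""
    totals = {}
    for pos, result_type in enumerate(serp):
        totals[result_type] = totals.get(result_type, 0.0) + CTR_CURVE[pos]
    grand_total = sum(totals.values())
    return {t: round(v / grand_total, 3) for t, v in totals.items()}

# An imaginary "SEO jobs" SERP, categorized with the flowchart above
serp = ["local page", "local page", "local business", "national page",
        "local page", "national page", "local business", "national page",
        "national page", "national page"]
print(visibility_share(serp))
```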

That's all. I'd love to hear your thoughts in the comments. Thanks.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

12:10 CRO tools to help you boost your SEO efforts » Search Engine Land
Is your site getting a lot of traffic? If yes, great! But your work is not over. Contributor Stephanie LeVonne says it’s time to implement a conversion rate optimization campaign. Here are four tools to help.

Please visit Search Engine Land for the full article.
11:33 Old content still showing up in Google search results? Google might not find that content important » Search Engine Land
Google explains how crawl rate can be used as confirmation that a page is OK to remove from your website, and thus their index.

Please visit Search Engine Land for the full article.
11:00 Search in Pics: Google beach party, beer pong & blue pool table » Search Engine Land


Please visit Search Engine Land for the full article.
16 / 08 / 2018
18:05 SearchCap: It’s all about Google today–political ad transparency report, local packs, featured snippets launched & more » Search Engine Land
Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

Please visit Search Engine Land for the full article.
17:16 Google Posts added to local packs for some branded queries » Search Engine Land
Google Posts normally only show up as an option within a local knowledge panel, but here it is showing up on the local 3-pack.

Please visit Search Engine Land for the full article.
17:00 Live webinar with Scott Brinker: 5,000+ martech tools — where do you start? » Search Engine Land
Join us on Tuesday, August 21, 2018, at 1:00 PM EDT (10:00 AM PDT)

Please visit Search Engine Land for the full article.
15:37 Google launches new expandable featured snippets with more information » Search Engine Land
The new featured snippets provide aggregated access to additional sources about a search query.

Please visit Search Engine Land for the full article.
14:22 The Google expanded text ad CTR lift that never was » Search Engine Land
Contributor Andy Taylor shares his take on Google’s expanded text ads (ETAs), drawing comparisons to the new responsive search ad (RSA) format.

Please visit Search Engine Land for the full article.
14 / 08 / 2018
23:23 Ranking the 6 Most Accurate Keyword Difficulty Tools » Moz Blog

Posted by Jeff_Baker

In January of 2018, Brafton began a massive organic keyword targeting campaign that amounted to over 90,000 words of published blog content.

Did it work?

Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year, we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.

But we got a whole lot more than just traffic.

From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.

As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...

How well keyword research tools can predict where you will rank.

A little background

We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.

We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.

With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:

This image links to an example of a content brief Brafton delivers to writers.

Between mid-January and late May, we ended up writing 55 blog posts, each targeting its own unique keyword. 50 of those blog posts ended up ranking in the top 100 of Google results.

We then paused and took a snapshot of each URL’s Google ranking position for its target keyword and its corresponding organic difficulty scores from Moz, SEMrush, Ahrefs, SpyFu, and KW Finder. We also took the PPC competition scores from the Keyword Planner Tool.

Our intention was to draw statistical correlations between our keyword rankings and each tool’s organic difficulty score. With this data, we were able to report on how accurately each tool predicted where we would rank.

This study is uniquely scientific, in that each blog had one specific keyword target. We optimized the blog content specifically for that keyword. Therefore every post was created in a similar fashion.

Do keyword research tools actually work?

We use them every day, on faith. But has anyone ever actually asked, or better yet, measured how well keyword research tools report on the organic difficulty of a given keyword?

Today, we are doing just that. So let’s cut through the chit-chat and get to the results...

This image ranks each of the 6 keyword research tools, in order: Moz leads with 4.95 stars out of 5, followed by KW Finder, SEMrush, Ahrefs, SpyFu, and lastly Keyword Planner Tool.

While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).

As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.

Let’s dig in!

The Pearson Correlation Coefficient

Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.

In order to understand the relationship between two variables, our first step is to create a scatter plot chart.

Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

We start with a visual inspection of the data to determine if there is a linear relationship between the two variables. Ideally for each tool, you would expect to see the X variable (keyword ranking) increase proportionately with the Y variable (organic difficulty). Put simply, if the tool is working, the higher the keyword difficulty, the less likely you will rank in a top position, and vice-versa.

This chart is all fine and dandy, however, it’s not very scientific. This is where the Pearson Correlation Coefficient (PCC) comes into play.

The PCC measures the strength of a linear relationship between two variables. The output of the PCC is a score ranging from +1 to -1. A score greater than zero indicates a positive relationship; as one variable increases, the other increases as well. A score less than zero indicates a negative relationship; as one variable increases, the other decreases. The stronger the relationship between the two variables, the closer to +1 or -1 the PCC will be. Scores near zero indicate a weak or no relationship.
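
If you want to run the same test on your own rankings, SciPy computes both the PCC and the p-value reported throughout this study. The numbers below are made up purely for illustration:

```python
from scipy.stats import pearsonr

# Made-up data: one entry per blog post. x is the Google ranking position
# for the target keyword; y is the tool's keyword difficulty score.
rankings   = [3, 7, 12, 1, 25, 40, 9, 55, 18, 2]
difficulty = [31, 35, 38, 28, 44, 52, 33, 61, 41, 30]

r, p_value = pearsonr(rankings, difficulty)
print(f"PCC = {r:.3f}, p-value = {p_value:.4f}")
# Positive r: higher difficulty scores pair with worse (higher) positions.
# p < 0.05: the correlation is statistically significant.
```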

Phew. Still with me?

So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.

We will use the following table from statisticshowto.com to interpret the PCC score for each tool:

Coefficient Correlation R Score | Key
.70 or higher | Very strong positive relationship
.40 to +.69 | Strong positive relationship
.30 to +.39 | Moderate positive relationship
.20 to +.29 | Weak positive relationship
.01 to +.19 | No or negligible relationship
0 | No relationship [zero correlation]
-.01 to -.19 | No or negligible relationship
-.20 to -.29 | Weak negative relationship
-.30 to -.39 | Moderate negative relationship
-.40 to -.69 | Strong negative relationship
-.70 or higher | Very strong negative relationship

In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.

These scatter plots show three types of correlations: positive, negative, and no correlation. Positive correlations have data plots that move up and to the right. Negative correlations move down and to the right. No correlation has data that follows no linear pattern.

And here are some examples of charts with their corresponding PCC scores (r):

These scatter plots show what different PCC values look like visually. The tighter the grouping of data around the regression line, the higher the PCC value.

The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.

That was the tough part - you still with me? Great, now let’s look at each tool’s results.

Test 1: The Pearson Correlation Coefficient

Now that we've all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.

In order of performance:

#1: Moz

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

Revisiting Moz’s scatter plot, we observe a tight grouping of results relative to the regression line with few moderate outliers.

Moz Organic Difficulty Predictability

PCC: 0.412
P-val: .003 (P < 0.05)
Relationship: Strong
% Keywords Matched: 100.00%

Moz came in first with the highest PCC of .412. As an added bonus, Moz grabs data on keyword difficulty in real time, rather than from a fixed database. This means that you can get any keyword difficulty score for any keyword.

In other words, Moz was able to generate keyword difficulty scores for 100% of the 50 keywords studied.

#2: SpyFu

This image shows a scatter plot for SpyFu's keyword difficulty scores versus our keyword rankings. The plot is similar looking to Moz's, with a few larger outliers.

Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.

SpyFu Organic Difficulty Predictability

PCC: 0.405
P-val: .01 (P < 0.05)
Relationship: Strong
% Keywords Matched: 80.00%

SpyFu came in right under Moz with 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.

#3: SEMrush

This image shows a scatter plot for SEMrush's keyword difficulty scores versus our keyword rankings. The data has a significant amount of outliers relative to the regression line.

SEMrush would certainly benefit from a couple mulligans (a second chance to perform an action). The Correlation Coefficient is very sensitive to outliers, which pushed SEMrush’s score down to third (.364).

SEMrush Organic Difficulty Predictability

PCC: 0.364
P-val: .01 (P < 0.05)
Relationship: Moderate
% Keywords Matched: 92.00%

Further complicating the research process, only 46 of 50 keywords had keyword difficulty scores associated with them, and many of those had to be found through SEMrush’s “phrase match” feature individually, rather than through the difficulty tool.

This made the process of digging around for the data more laborious.

#4: KW Finder

This image shows a scatter plot for KW Finder's keyword difficulty scores versus our keyword rankings. The data also has a significant amount of outliers relative to the regression line.

KW Finder definitely could have benefitted from more than a few mulligans with numerous strong outliers, coming in right behind SEMrush with a score of .360.

KW Finder Organic Difficulty Predictability

PCC: 0.360
P-val: .01 (P < 0.05)
Relationship: Moderate
% Keywords Matched: 100.00%

Fortunately, the KW Finder tool had a 100% match rate without any trouble digging around for the data.

#5: Ahrefs

This image shows a scatter plot for Ahrefs' keyword difficulty scores versus our keyword rankings. The data shows tight clustering amongst low difficulty score keywords, and a wide distribution amongst higher difficulty scores.

Ahrefs comes in fifth by a large margin at .316, just barely clearing the threshold for a moderate relationship.

Ahrefs Organic Difficulty Predictability

PCC: 0.316
P-val: .03 (P < 0.05)
Relationship: Moderate
% Keywords Matched: 100%

On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.

#6: Google Keyword Planner Tool

This image shows a scatter plot for Google Keyword Planner Tool's keyword difficulty scores versus our keyword rankings. The data shows randomly distributed plots with no linear relationship.

Before you ask, yes, SEO companies still use the paid competition figures from Google’s Keyword Planner Tool (and other tools) to assess organic ranking potential. As you can see from the scatter plot, there is in fact no linear relationship between the two variables.

Google Keyword Planner Tool Organic Difficulty Predictability

PCC: 0.045
P-val: Statistically insignificant / no linear relationship
Relationship: Negligible/None
% Keywords Matched: 88.00%

SEO agencies still using KPT for organic research (you know who you are!) — let this serve as a warning: You need to evolve.

Test 1 summary

For scoring, we will use a ten-point scale and score every tool relative to the highest-scoring competitor. For example, if the second highest score is 98% of the highest score, the tool will receive a 9.8. As a reminder, here are the results from the PCC test:
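
That scaling rule fits in a couple of lines; running it on the Test 1 PCC values reproduces the score table below:

```python
def relative_scores(pcc_by_tool):
    """Score each tool on a 10-point scale relative to the best PCC."""
    best = max(pcc_by_tool.values())
    return {tool: round(10 * pcc / best, 1)
            for tool, pcc in pcc_by_tool.items()}

print(relative_scores({"Moz": 0.412, "SpyFu": 0.405, "SEMrush": 0.364,
                       "KW Finder": 0.360, "Ahrefs": 0.316, "KPT": 0.045}))
# {'Moz': 10.0, 'SpyFu': 9.8, 'SEMrush': 8.8, 'KW Finder': 8.7,
#  'Ahrefs': 7.7, 'KPT': 1.1}
```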

This bar chart shows the final PCC values for the first test, summarized.

And the resulting scores are as follows:

Tool | PCC Test
Moz | 10
SpyFu | 9.8
SEMrush | 8.8
KW Finder | 8.7
Ahrefs | 7.7
KPT | 1.1

Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).

Test 2: Adjusted Pearson Correlation Coefficient

Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool flat-out misses, we will remove each tool’s three most egregious outliers.

Here are the adjusted results for the handicap round:

Adjusted Scores (3 outliers removed)

Tool | PCC | Difference (+/-)
SpyFu | 0.527 | +0.122
SEMrush | 0.515 | +0.150
Moz | 0.514 | +0.101
Ahrefs | 0.478 | +0.162
KW Finder | 0.470 | +0.110
Keyword Planner Tool | 0.189 | +0.144

As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.

For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.

Here are the updated scores at the end of round two:

Tool | PCC Test | Adjusted PCC | Total
SpyFu | 9.8 | 10 | 19.8
Moz | 10 | 9.7 | 19.7
SEMrush | 8.8 | 9.8 | 18.6
KW Finder | 8.7 | 8.9 | 17.6
Ahrefs | 7.7 | 9.1 | 16.8
KPT | 1.1 | 3.6 | 4.7

SpyFu takes the lead! Now let’s jump into the final round of statistical tests.

Test 3: Resampling

Given that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.

Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"

He randomly selected 2 keywords from the list and their associated difficulty scores.

Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article written for a score of 30 ranks higher than the article written on 60? Then, he performed the same test 1,000 times.

He also threw out examples where the two randomly selected keywords shared the same rankings, or where data points were missing.
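
A minimal sketch of that resampling procedure might look like the following; it illustrates the method as described, not Russ's actual model:

```python
import random

def resample_accuracy(rankings, difficulties, trials=1000):
    """How often do a tool's difficulty scores order two randomly chosen
    keywords the same way their actual rankings do? Pairs with tied
    rankings are skipped, as are keywords with no difficulty score."""
    indices = [i for i, d in enumerate(difficulties) if d is not None]
    correct = total = 0
    for _ in range(trials):
        i, j = random.sample(indices, 2)
        if rankings[i] == rankings[j]:
            continue  # throw out shared rankings
        predicted = difficulties[i] < difficulties[j]  # lower KD should win
        actual = rankings[i] < rankings[j]             # lower position wins
        correct += predicted == actual
        total += 1
    return correct / total
```

Here was the outcome: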

Resampling

Tool | % Guessed correctly
Moz | 62.2%
Ahrefs | 61.2%
SEMrush | 60.3%
KW Finder | 58.9%
SpyFu | 54.3%
KPT | 45.9%

As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.

Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.

In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.

For example, Ahrefs guessed correctly 11.2 percentage points more often than a coin flip. That is 8.2% less than Moz, which scored 12.2 points better than a coin flip, giving Ahrefs a score of 9.2.
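
Expressed in code, with the best accuracy (Moz's 62.2%) as the ceiling:

```python
def resampling_score(accuracy, best=0.622):
    """Points above a coin flip (50%), scaled so the best tool gets 10."""
    return round(10 * (accuracy - 0.5) / (best - 0.5), 1)

print(resampling_score(0.612))  # Ahrefs -> 9.2
print(resampling_score(0.543))  # SpyFu  -> 3.5
```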

The updated scores are as follows:

Tool | PCC Test | Adjusted PCC | Resampling | Total
Moz | 10 | 9.7 | 10 | 29.7
SEMrush | 8.8 | 9.8 | 8.4 | 27
Ahrefs | 7.7 | 9.1 | 9.2 | 26
KW Finder | 8.7 | 8.9 | 7.3 | 24.9
SpyFu | 9.8 | 10 | 3.5 | 23.3
KPT | 1.1 | 3.6 | -4.0 | 0.7

So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed outstandingly in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.

Finally, we need to make some usability adjustments.

Usability Adjustment 1: Keyword Matching

A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.

To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:

  1. You have to use another tool to get the data, which devalues the entire point of using the original tool.
  2. You miss an opportunity to rank for a high-value keyword.

Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.
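
In code, the deduction is simply:

```python
def match_penalty(match_rate_pct):
    """One point per 10% of keywords missed, capped at 5 points."""
    return min(5.0, (100.0 - match_rate_pct) / 10.0)

print(match_penalty(92))  # SEMrush -> 0.8
print(match_penalty(80))  # SpyFu   -> 2.0
```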

One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.

The penalties are as follows:

Tool | Match Rate | Penalty
KW Finder | 100% | 0
Ahrefs | 100% | 0
Moz | 100% | 0
SEMrush | 92% | -0.8
Keyword Planner Tool | 88% | -1.2
SpyFu | 80% | -2

Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were not found in its keyword difficulty tool, but rather through manually digging through the phrase match tool. We will give them a pass, but with a stern warning!

Usability Adjustment 2: Reliability

I told you we would come back to this! Revisiting the second test in which we threw away the three strongest outliers that negatively impacted each tool’s score, we will now make adjustments.

In real life, there are no mulligans. In real life, each of those three blog posts that were thrown out represented a significant monetary and time investment. Therefore, when a tool has a major blunder, the result can be a total waste of time and resources.

For that reason, we will impose a slight penalty on those tools that benefited the most from their handicap.

We will use the level of PCC improvement to evaluate how much a tool benefitted from removing their outliers. In doing so, we will be rewarding the tools that were the most consistently reliable. As a reminder, the amounts each tool benefitted were as follows:

Tool | Difference (+/-)
Ahrefs | 0.162
SEMrush | 0.150
Keyword Planner Tool | 0.144
SpyFu | 0.122
KW Finder | 0.110
Moz | 0.101

In calculating the penalty, we scored each of the tools relative to the top performer, giving the top performer zero penalty and imposing penalties based on how much additional benefit the tools received over the most reliable tool, on a scale of 0–100%, with a maximum deduction of 5 points.

So if a tool received twice the benefit of the top-performing tool, it would have had a 100% benefit, receiving the maximum deduction of 5 points. If another tool received a 20% benefit over the most reliable tool, it would get a 1-point deduction. And so on.
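
Here's the same calculation as a short sketch; it reproduces the penalty column below:

```python
def reliability_penalty(benefit, best_benefit):
    """Extra benefit over the most reliable tool, scaled so 100% extra
    benefit equals the maximum 5-point deduction."""
    extra = (benefit - best_benefit) / best_benefit
    return round(min(5.0, 5.0 * extra), 1)

benefits = {"Ahrefs": 0.162, "SEMrush": 0.150,
            "Keyword Planner Tool": 0.144, "SpyFu": 0.122,
            "KW Finder": 0.110, "Moz": 0.101}
best = min(benefits.values())
for tool, b in benefits.items():
    print(tool, reliability_penalty(b, best))  # deduction in points
```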

Tool | % Benefit | Penalty
Ahrefs | 60% | -3
SEMrush | 48% | -2.4
Keyword Planner Tool | 42% | -2.1
SpyFu | 20% | -1
KW Finder | 8% | -0.4
Moz | - | 0

Results

All told, our penalties were fairly mild, with a slight shuffling in the middle tier. The final scores are as follows:

Tool | Total Score | Stars (5 max)
Moz | 29.7 | 4.95
KW Finder | 24.5 | 4.08
SEMrush | 23.8 | 3.97
Ahrefs | 23.0 | 3.83
SpyFu | 20.3 | 3.38
KPT | -2.6 | 0.00

Conclusion

Using any organic keyword difficulty tool will give you an advantage over not doing so. While none of the tools are a crystal ball, providing perfect predictability, they will certainly give you an edge. Further, if you record enough data on your own blogs’ performance, you will get a clearer picture of the keyword difficulty scores you should target in order to rank on the first page.

For example, we know the following about how we should target keywords with each tool:

Tool | Average KD, ranking ≤ 10 | Average KD, ranking ≥ 11
Moz | 33.3 | 37.0
SpyFu | 47.7 | 50.6
SEMrush | 60.3 | 64.5
KW Finder | 43.3 | 46.5
Ahrefs | 11.9 | 23.6

This is pretty powerful information! It’s either first page or bust, so we now know the threshold for each tool that we should set when selecting keywords.

Stay tuned, because we made a lot more correlations between word count, days live, total keywords ranking, and all kinds of other juicy stuff. Tune in again in early September for updates!

We hope you found this test useful, and feel free to reach out with any questions on our math!

Disclaimer: These results are estimates based on 50 ranking keywords from 50 blog posts and keyword research data pulled from a single moment in time. Search is a shifting landscape, and these results have certainly changed since the data was pulled. In other words, this is about as accurate as we can get from analyzing a moving target.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!