All the latest SEO news on Paradi'SEO!

SEO news in French and English
18 / 01 / 2019
13:30 Full Funnel Testing: SEO & CRO Together - Whiteboard Friday » Moz Blog

Posted by willcritchlow

Testing for only SEO or only CRO isn't always ideal. Some changes result in higher conversions and reduced site traffic, for instance, while others may rank more highly but convert less well. In today's Whiteboard Friday, we welcome Will Critchlow as he demonstrates a method of testing for both your top-of-funnel SEO changes and your conversion-focused CRO changes at once.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, everyone. Welcome to another Whiteboard Friday. My name is Will Critchlow, one of the founders at Distilled. If you've been following what I've been writing and talking about around the web recently, today's topic may not surprise you that much. I'm going to be talking about another kind of SEO testing.

Over at Distilled, we've been investing pretty heavily in building out our capability to do SEO tests and in particular built our optimization delivery network, which has let us do a new kind of SEO testing that hasn't been previously available to most of our clients. Recently we've been working on a new enhancement to this, which is full funnel testing, and that's what I want to talk about today.

So funnel testing is testing all the way through the funnel, from acquisition at the SEO end to conversion. So it's SEO testing plus CRO testing together. I'm going to write a little bit more about some of the motivation for this. But, in a nutshell, it boils down to this: it is perfectly possible (in fact, we've seen it in the wild) for a test to win in SEO terms and lose in CRO terms, or vice versa.

In other words, maybe you make a change and it converts better, but you lose organic search traffic. Or the other way around: it ranks better, but it converts less well. If you're only testing one, which is common (most organizations only test the conversion rate side of things), it's perfectly possible to have a winning test, roll it out, and do worse.

CRO testing

So let's step back a little bit. A little bit of a primer. Conversion rate optimization testing works in an A/B split kind of way. You can test on a single page, if you want to, or a site section. The way it works is you split your audience. So your audience is split. Some of your audience gets one version of the page, and the rest of the audience gets a different version.

Then you can compare the conversion rate among the group who got the control and the group who got the variant. That's very straightforward. Like I say, it can happen on a single page or across an entire site. SEO testing, a little bit newer. The way this works is you can't split the audience, because we care very much about the search engine spiders in this case. For the purposes of this consideration, there's essentially only one Googlebot. So you couldn't put Google in Class A or Class B here and expect to get anything meaningful.

SEO testing

So the way that we do an SEO test is we actually split the pages. To do this, you need a substantial site section. So imagine, for example, an e-commerce website with thousands of products. You might have a hypothesis of something that will help those product pages perform better. You take your hypothesis and you only apply it to some of the pages, and you leave some of the pages unchanged as a control.

Then, crucially, search engines and users see the same experience. There's no cloaking going on. There's no duplication of content. You simply change some pages and leave others unchanged. Then you apply statistical analysis to figure out whether these pages get significantly more organic search traffic than we'd have expected had we not made the change. So that's how an SEO test works.
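To make the page split concrete, here is a minimal sketch of one common way to assign pages to a control or variant group. The hash-based bucketing is an assumption for illustration, not Distilled's actual implementation; the point is that the assignment is deterministic per URL, so users and Googlebot always see the same treatment for a given page.

```python
import hashlib

def assign_bucket(url: str, variant_share: float = 0.5) -> str:
    """Deterministically assign a page URL to 'control' or 'variant'.

    Hashing the URL (instead of choosing randomly per request) means
    every request for a given page, whether from a user or from
    Googlebot, sees the same treatment.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    score = int(digest[:8], 16) / 0xFFFFFFFF  # map hash into [0, 1]
    return "variant" if score < variant_share else "control"

# Bucket a handful of hypothetical product pages.
pages = [f"/products/item-{i}" for i in range(6)]
buckets = {page: assign_bucket(page) for page in pages}
```

Because the bucket depends only on the URL, re-running the assignment later (or on another server) reproduces the exact same split, which is what makes the before/after traffic comparison valid.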

Now, as I said, the problem that we are trying to tackle here is it's really plausible, despite Google's best intentions to do what's right for users, it's perfectly plausible that you can have a test that ranks better but converts less well or vice versa. We've seen this with, for example, removing content from a page. Sometimes having a cleaner, simpler page can convert better. But maybe that was where the keywords were and maybe that was helping the page rank. So we're trying to avoid those kinds of situations.

Full funnel testing

That's where full funnel testing comes in. So I want to just run through how you run a full funnel test. What you do is you first of all set it up in the same way as an SEO test, because we're essentially starting with SEO at the top of the funnel. So it's set up exactly the same way.

Some pages are unchanged. Some pages get the hypothesis applied to them. As far as Google is concerned, that's the end of the story, because on any individual request to these pages, that's what we serve back. But the critically important thing here is my little character: a human user in a browser performs a search, "What do badgers eat?"

This was one of our silly examples that we came up with on one of our demo sites. The user lands on this page, and we then set a cookie. As this user navigates around the site, no matter where they go within this site section, they get the same treatment, either the control or the variant, across the entire site section. This is more like the conversion rate test.

Googlebot = stateless requests

So what I didn't show in this diagram is that if you were running this test across a site section, you would cookie this user and make sure they always saw the same treatment no matter where they navigated around the site. Because Googlebot makes stateless requests, in other words independent, one-off requests for each of these pages with no cookie set, Google sees the split.
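The serving logic described above can be sketched in a few lines. This is an illustrative assumption about how such a system might decide what to serve, not the actual optimization delivery network: a cookied visitor keeps their treatment, while a cookieless request (a crawler, or a first visit) falls through to the per-page bucket.

```python
def choose_treatment(cookie_value, page_bucket):
    """Decide which version of a page to serve.

    Googlebot sends no cookie, so each of its stateless requests falls
    through to the per-page bucket: it sees the page-level split (the
    SEO test). A human gets cookied on their first page view and then
    sees the same treatment across the whole site section (the CRO
    test).

    Returns (treatment_to_serve, cookie_to_set); cookie_to_set is None
    when an existing cookie should simply be kept.
    """
    if cookie_value is not None:
        return cookie_value, None       # returning user: honour cookie
    return page_bucket, page_bucket     # first visit or crawler

# A crawler hitting two differently bucketed pages sees the split...
assert choose_treatment(None, "variant")[0] == "variant"
assert choose_treatment(None, "control")[0] == "control"
# ...while a cookied human keeps their original treatment everywhere.
assert choose_treatment("variant", "control")[0] == "variant"
```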

Evaluate SEO test on entrances

Users get whatever their first page impression looks like. They then get that treatment applied across the entire site section. So what we can do then is we can evaluate independently the performance in search, evaluate that on entrances. So do we get significantly more entrances to the variant pages than we would have expected if we hadn't applied a hypothesis to them?

That tells us the uplift from an SEO perspective. So maybe we say, "Okay, this is plus 11% in organic traffic." Well, great. So in a vacuum, all else being equal, we'd love to roll out this test.

Evaluate conversion rate on users

But before we do that, what we can do now is we can evaluate the conversion rate, and we do that based on user metrics. So these users are cookied.

We can also set an analytics tag on them and say, "Okay, wherever they navigate around, how many of them end up converting?" Then we can evaluate the conversion rate based on whether they saw treatment A or treatment B. Because we're looking at conversion rate, the audience sizes don't have to be exactly the same; the statistical analysis can take care of that, and we can evaluate the conversion rate on a user-centric basis.
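One standard way to compare conversion rates between two groups of unequal size is a two-proportion z-test. The transcript doesn't specify which test Distilled uses, so treat this as a generic sketch with hypothetical numbers rather than their actual analysis.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, users_a, conv_b, users_b):
    """Two-sided two-proportion z-test on conversion counts.

    Unequal audience sizes are fine: the pooled standard error
    accounts for each group's own sample size.
    """
    p_a, p_b = conv_a / users_a, conv_b / users_b
    pooled = (conv_a + conv_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant converts at 4.75% vs control's 5.00%.
z, p = two_proportion_z(conv_a=500, users_a=10_000,
                        conv_b=475, users_b=10_000)
```

A negative z here means the variant converts worse; whether that outweighs an SEO uplift is the business judgment the post discusses next.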

So then maybe we see that it's -5% in conversion rate. We then need to evaluate: is this something we should roll out? If it's a win on both sides, then the answer is probably yes. If they're in different directions, then there are a couple of things we can do. Firstly, we can weigh the relative performance in each direction, taking care to remember that conversion rate applies across all channels. A relatively small drop in conversion rate can therefore be a really big deal compared to even a sizeable uplift in organic traffic, because the conversion rate applies to all channels, not just your organic search channel.

But suppose that it's a small net positive or a small net negative. We might get to the point that it's a net positive and roll it out. Either way, we might then ask, "What can we take from this? What can we actually learn?" So back to our example of the content. We might say, "You know what? Users like this cleaner version of the page with apparently less content on it. The search engines are clearly relying on that content to understand what this page is about. How do we get the best of both worlds?"

Well, that might be a question of a redesign, moving the layout of the page around a little bit, keeping the content on there, but maybe not putting it front and center to the user as they land right at the beginning. We can test those different things, run sequential tests, try and take the best of the SEO tests and the best of the CRO tests and get it working together and crucially avoid those situations where you think you've got a win, because your conversion rate is up, but you actually are about to crater your organic search performance.

We think that the more data-driven we get, and the more accountable SEO testing makes us, the more important it's going to be to join these dots and make sure we're getting true uplifts on a net basis when we combine them. So I hope that's been useful to some of you. Thank you for joining me on this week's Whiteboard Friday. I'm Will Critchlow from Distilled.

Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

17 / 01 / 2019
17:00 SearchCap: Bing & Yahoo partner, Google News SEO & link ranking study » Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing
Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

Please visit Search Engine Land for the full article.
16:04 Google shares tips for success in Google News search results » Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing
Get reacquainted with best practices for getting your content in Google News.

13:50 Get proven SEO & SEM tactics – attend SMX in San Jose in 2 weeks » Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing
25+ sessions, 40 search marketing experts, thoughtful networking, and more!

13:16 How to Implement a National Tracking Strategy » Moz Blog

Posted by TheMozTeam

Google is all about serving up results based on your precise location, which means there’s no such thing as a “national” SERP anymore. So, if you wanted to get an accurate representation of how you’re performing nationally, you’d have to track every single street corner across the country.

Not only is this not feasible, it’s also a headache — and the kind of nightmare that keeps your accounting team up at night. Because we’re in the business of making things easy, we devised a happier (and cost-efficient) alternative.

Follow along and learn how to set up a statistically robust national tracking strategy in STAT, no matter your business or budget. And while we’re at it, we’ll also show you how to calculate your national ranking average.

Let’s pretend we’re a large athletic retailer. We have 30 stores across the US, a healthy online presence, and the powers-that-be have approved extra SEO spend — money for 20,000 additional keywords is burning a hole in our pocket. Ready to get started?

Step 1: Pick the cities that matter most to your business

Google cares a lot about location and so should you. Tracking a country-level SERP isn’t going to cut it anymore — you need to be hyper-local if you want to nab results.

The first step to getting more granular is deciding which cities you want to track in — and there are lots of ways to do this: The top performers? Ones that could use a boost? Best and worst of the cyber world as well as the physical world?

When it comes time for you to choose, nobody knows your business, your data, or your strategy better than you do — ain’t nothing to it but to do it.

A quick note for all our e-commerce peeps: we know it feels strange to pick a physical place when your business lives entirely online. For this, simply go with the locations that your goods and wares are distributed to most often.

Even though we’re a retail powerhouse, our SEO resources won’t allow us to manage all 30 physical locations — plus our online hotspots — across the US, so we'll cut that number in half. And because we’re not a real business and we aren’t privy to sales data, we'll pick at random.

From east to west, we now have a solid list of 15 US cities, primed, polished, and poised for our next step: surfacing the top performing keywords.

Step 2: Uncover your money-maker keywords

Because not all keywords are created equal, we need to determine which of the 4,465 keywords that we’re already tracking are going to be spread across the country and which are going to stay behind. In other words, we want the keywords that bring home the proverbial bacon.

Typically, we would use some combination of search volume, impressions, clicks, conversion rates, etc., from sources like STAT, Google Search Console, and Google Analytics to distinguish between the money-makers and the non-money-makers. But again, we’re a make-believe business and we don’t have access to this insight, so we’re going to stick with search volume.

A right-click anywhere in the site-level keywords table will let us export our current keyword set from STAT. We’ll then order everything from highest search volume to lowest search volume. If you have eyeballs on more of that sweet, sweet insight for your business, order your keywords from most to least money-maker.

Because we don’t want to get too crazy with our list, we’ll cap it at a nice and manageable 1,500 keywords.

Step 3: Determine the number of times each keyword should be tracked

We may have narrowed our cities down to 15, but our keywords need to be tracked plenty more times than that — and at a far more local level.

True facts: A “national” (or market-level) SERP isn’t a true SERP and neither is a city-wide SERP. The closer you can get to a searcher standing on a street corner, the better, and the more of those locations you can track, the more searchers’ SERPs you’ll sample.

We’re going to get real nitty-gritty and go as granular as ZIP code. Addresses and geo coordinates work just as well though, so if it’s a matter of one over the other, do what the Disney princesses do and follow your heart.

The ultimate goal here is to track our top performing keywords in more locations than our poor performing ones, so we need to know the number of ZIP codes each keyword will require. To figure this out, we gotta dust off the old desktop calculator and get our math on.

First, we’ll calculate the total amount of search volume that all of our keywords generate. Then, we’ll find the percentage of said total that each keyword is responsible for.

For example, our keyword [yeezy shoes] drew 165,000 searches out of a total 28.6 million, making up 0.62 percent of our traffic.

A quick reminder: Every time a query is tracked in a distinct location, it’s considered a unique keyword. This means that the above percentages also double as the amount of budgeted keywords (and therefore locations) that we’ll award to each of our queries. In (hopefully) less confusing terms, a keyword that drives 0.62 percent of our traffic gets to use 0.62 percent of our 20,000 budgeted keywords, which in turn equals the number of ZIP codes we can track in. Phew.

But! Because search volume is, to quote our resident data analyst, “an exponential distribution,” (which in everyone else-speak means “gets crazy large”) it’s likely going to produce some unreasonably big numbers. So, while [yeezy shoes] only requires 124 ZIP codes, a keyword with much higher search volume, like [real madrid], might need over 1,000, which is patently bonkers (and statistical overkill).

To temper this, we highly recommend that you take the log of the search volume — it’ll keep things relative and relational. If you’re working through all of this in Excel, simply type =log(A2) where A2 is the cell containing the search volume. Because we're extra fancy, we'll multiply that by four to linearly scale things, so =log(A2)*4.

So, still running with our Yeezy example, our keyword goes from driving 0.62 percent of our traffic to 0.13 percent. That then becomes the percent of budgeted keywords: 0.0013 × 20,000 = tracking [yeezy shoes] in 26 ZIP codes across our 15 cities.
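The whole allocation pipeline (raw share vs. the log-damped `=log(A2)*4` version) can be reproduced outside Excel. The search volumes below are hypothetical stand-ins, since we don't have the post's full keyword set; the mechanics match the steps described above.

```python
from math import log10

budget = 20_000           # total budgeted keyword/location slots
volumes = {               # hypothetical search volumes for illustration
    "yeezy shoes": 165_000,
    "real madrid": 2_000_000,
    "running socks": 8_000,
}

# Raw shares: high-volume terms swallow most of the budget.
total = sum(volumes.values())
raw = {kw: v / total * budget for kw, v in volumes.items()}

# Log-damped shares (the post's =log(A2)*4 trick): take log10 of the
# volume, scale by 4, then hand out the budget proportionally.
scores = {kw: log10(v) * 4 for kw, v in volumes.items()}
score_total = sum(scores.values())
zips_per_kw = {kw: round(s / score_total * budget)
               for kw, s in scores.items()}
```

With the damped scores, a monster keyword like [real madrid] gives up a big slice of its allocation to the long tail, which is exactly the "statistical overkill" the post is trying to avoid.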

We then found a list of every ZIP code in each of our cities to dole them out to.

The end. Sort of. At this point, like us, you may be looking at keywords that need to be spread across 176 different ZIP codes and wondering how you're going to choose which ZIP codes — so let our magic spreadsheet take the wheel. Add all your locations to it and it'll pick at random.

Of course, because we want our keywords to get equal distribution, we attached a weighted metric to our ZIP codes. We took our most searched keyword, [adidas], found its Google Trends score in every city, and then divided it by the number of ZIP codes in those cities. For example, if [adidas] received a score of 71 in Yonkers and there are 10 ZIP codes in the city, Yonkers would get a weight of 7.1.
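The weighting and random draw described above might look like the sketch below. The Trends scores and ZIP lists are hypothetical placeholders (the post's "magic spreadsheet" isn't reproduced here), but the weight formula matches the example: Yonkers at a score of 71 across 10 ZIP codes gives each ZIP a weight of 7.1.

```python
import random

# Hypothetical Trends scores and ZIP code lists for two of our cities.
cities = {
    "Yonkers": {"trends": 71,
                "zips": ["10701", "10702", "10703", "10704", "10705",
                         "10706", "10707", "10708", "10709", "10710"]},
    "Austin":  {"trends": 88,
                "zips": ["78701", "78702", "78703", "78704"]},
}

def zip_pool(cities):
    """Weight each ZIP by its city's Trends score divided by the city's
    ZIP count (e.g. Yonkers: 71 / 10 ZIPs = 7.1 per ZIP)."""
    pool = []
    for info in cities.values():
        weight = info["trends"] / len(info["zips"])
        pool.extend((z, weight) for z in info["zips"])
    return pool

def pick_zips(pool, k, seed=42):
    """Draw k distinct ZIPs at random, honouring the weights."""
    rng = random.Random(seed)
    pool = list(pool)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        cumulative = 0.0
        for i, (z, w) in enumerate(pool):
            cumulative += w
            if r <= cumulative:
                chosen.append(pool.pop(i)[0])
                break
    return chosen
```

Sampling without replacement keeps every assigned ZIP distinct, while the per-city weights stop a city with many ZIP codes from soaking up more than its share of a keyword's locations.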

We'll then add everything we have so far — ZIP codes, ZIP code weights, keywords, keyword weights, plus a few extras — to our spreadsheet and watch it randomly assign the appropriate amount of keywords to the appropriate amount of locations.

And that’s it! If you’ve been following along, you’ve successfully divvied up 20,000 keywords in order to create a statistically robust national tracking strategy!

Curious how we’ll find our national ranking average? Read on, readers.

Step 4: Segment, segment, segment!

20,000 extra keywords makes for a whole lotta new data to keep track of, so being super smart with our segmentation is going to help us make sense of all our findings. We’ll do this by organizing our keywords into meaningful categories before we plug everything back into STAT.

Obviously, you are free to sort how you please, but we recommend at least tagging your keywords by their city and product category (so [yeezy shoes] might get tagged “Austin” and “shoes”). You can do all of this in our keyword upload template or while you're in our magic spreadsheet.

Once you’ve added a tag or two to each keyword, stuff those puppies into STAT. When everything’s snug as a bug, group all your city tags into one data view and all your product category tags into another.

Step 5: Calculate your national ranking average

Now that all of our keywords are loaded and tracking in STAT, it’s time to tackle those ranking averages. To do that, we’ll simply pop on over to the Dashboard tab from either of our two data views.

A quick glimpse of the Average Ranking module in the Daily Snapshot gives us, well, our average rank, and because these data views contain every keyword that we’re tracking across the country, we’re also looking at the national average for our keyword set. Easy-peasy.

To see how each tag is performing within those data views, a quick jump to the Tags tab breaks everything down and lets us compare the performance of a segment against the group as a whole.

So, if our national average rank is 29.7 but our Austin keywords have managed an average rank of 27.2, then we might look to them for inspiration as our other cities aren't doing quite as well — our keywords in Yonkers have an average rank of 35.2, much worse than the national average.

Similarly, if our clothes keywords are faring infinitely worse than our other product categories, we may want to revamp our content strategy to even things out.

Go get your national tracking on

Any business — yes, even an e-commerce business — can leverage a national tracking strategy. You just need to pick the right keywords and locations.

Once you have access to your sampled population, you’ll be able to hone in on opportunities, up your ROI, and bring more traffic across your welcome mat (physical or digital).

Got a question you’re dying to ask us about the STAT product? Reach out to clientsuccess@getSTAT.com. Want a detailed walkthrough of STAT? Say hello (don’t be shy) and request a demo.

