SEO experiments can help us gain a better understanding of how search engines operate, and the information gained can be used for competitive advantage. However, in order to make valid and reliable findings which can be applied to ‘real world’ projects, proper methodology and reporting should be used.
This post looks at overall issues of validity in SEO experiments and methods to reduce confounding variables.
Do SEO Experiments Work? – TL;DR
There is some debate over whether SEO experiments allow us to make valid assumptions. This video and this post by Michael Martinez suggest that it is difficult for most experiments to effectively isolate variables and produce valid results.
It is certainly true that removing all extraneous variables is a challenge, however, with proper methodology and reporting, it should be possible to decrease their influence and increase the transparency of results. Having proper reporting is particularly important as it allows replication and peer review of experiments.
The difficulty of reducing confounding variables is highlighted in this Google Plus post by Pedro Dias. Pedro is an ex-Googler and has fantastic insight into issues that others are not aware of. In his post he talks about issues of experimental bias (the type of query used) and internal validity (the effects of geolocation, personalisation of results and the data centres hit).
Effective SEO experiments should be able to recognise these factors and take steps to mitigate their impact on results. These should also be included in reporting.
The sentiment I take away from both of these experienced SEOs is that experiments can be useful; however, they are all too often done badly, with the results frequently over-generalised. The solution, then, is to improve the methodology and reporting procedure whilst recognising the limitations of any findings.
So how can we improve reporting and methodology?
Experimental Reporting – The Scientific Method
In the sciences there has been a long history of structured reporting. The ‘Scientific Method’ allows for greatly improved transparency and makes it possible for other researchers to conduct the same experiment in order to validate results.
Reporting should follow the structure:
- Formulation of the Question
Discussing established knowledge, previous work in the field and identifying gaps in research.
- Stating a Hypothesis
Stating a null and an alternative hypothesis which can later be accepted or rejected.
Example Null Hypothesis: Changing title tags will have no impact on ranking
Example Alternative Hypothesis: Changing title tags will result in a positive change in ranking
- Prediction of Results
Predicting the logical consequences of the hypothesis and suggesting a measurable outcome against which it can be tested.
This is fairly simple in SEO experiments as results are generally observable, eg: change in ranking.
- Testing

This is where the control variables are manipulated and the predicted outcome of the hypothesis measured.
- Analysis of Results
Looking into the results and identifying whether the null hypothesis can be rejected and the alternative hypothesis accepted. This section would also discuss the validity of the experiment, acknowledge any limitations, and propose further steps for research.
Note: Obviously, this method of reporting is not always attractive to the readership of a blog, however, it does provide the opportunity for two forms of reporting. The complete, complex report and a simplified blog post. In my eyes this is a big bonus to any SEO looking to create good content.
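As a sketch of the analysis step above, assuming we have paired before/after rankings for a sample of pages, a simple sign test can indicate whether the null hypothesis (title-tag changes have no impact on ranking) can be rejected. The rank data and the 0.05 threshold here are illustrative, not from a real experiment.

```python
from math import comb

def sign_test_p_value(improved, worsened):
    """Two-sided sign test: the probability of a split at least this
    extreme if ranking changes were pure chance (p = 0.5)."""
    n = improved + worsened  # ties are discarded
    k = max(improved, worsened)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * p)

# Hypothetical rank positions before and after a title-tag change
before = [12, 18, 7, 25, 14, 30, 9, 21, 16, 11]
after  = [10, 15, 7, 22, 12, 28, 8, 23, 13, 9]

improved = sum(b > a for b, a in zip(before, after))  # lower rank = better
worsened = sum(b < a for b, a in zip(before, after))

p = sign_test_p_value(improved, worsened)
print(f"{improved} improved, {worsened} worsened, p = {p:.3f}")
if p < 0.05:
    print("Reject the null hypothesis: the change had a measurable effect")
```

A stricter analysis would use a test that accounts for the size of each rank movement (such as a Wilcoxon signed-rank test), but the sign test keeps the accept/reject logic easy to see.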
Experimental Methodology

Baseline Comparison: This method takes measurements before, during and after testing.
A measure is taken, then the control variable is altered and a second measure taken; the control variable is then returned to its original state and a third measure taken.
This format can be problematic as the long term impact of the variable is not always known. This experimental format would also require a large sample size in order to reduce the impact of confounding variables.
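The before/during/after pattern can be sketched as phase-by-phase measurements for a set of test pages; the rank data and phase names below are hypothetical.

```python
from statistics import mean

# Rank measurements for a set of test pages at each phase of a
# baseline-comparison experiment (hypothetical data; lower = better).
measurements = {
    "baseline": [14, 22, 9, 31, 17],   # before the change
    "treatment": [11, 18, 9, 26, 15],  # while the variable is altered
    "reverted": [13, 21, 10, 30, 17],  # after reverting the change
}

for phase, ranks in measurements.items():
    print(f"{phase:>9}: mean rank {mean(ranks):.1f}")

# If mean rank improves during treatment and regresses once the
# variable is reverted, that strengthens the case that the variable
# (and not a confounder) caused the movement.
effect = mean(measurements["baseline"]) - mean(measurements["treatment"])
rebound = mean(measurements["reverted"]) - mean(measurements["treatment"])
print(f"improvement during treatment: {effect:.1f} positions")
print(f"regression after revert: {rebound:.1f} positions")
```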
Controlled A/B testing: This method uses a control and an experimental sample. The control should be the same as the test sample in all ways except for the change in variables. For simple SEO experiments this method can be useful as it can yield faster results.
Like other formats, this methodology can be susceptible to confounding variables. These can be reduced to some extent by increasing the sample size and aggregating results.
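A minimal sketch of aggregating a controlled A/B test, assuming hypothetical rank-change figures for a control group (no change made) and an experimental group:

```python
from statistics import mean, stdev

# Rank change per page over the test period (negative = moved up;
# hypothetical data). The control group received no change.
control = [0, -1, 1, 0, 2, -1, 0, 1, -1, 0]
experiment = [-3, -1, -4, 0, -2, -5, -1, -2, -3, -1]

# Aggregating over many pages dampens the noise any single page
# contributes, which is how confounders are reduced here.
diff = mean(experiment) - mean(control)
print(f"control mean change:    {mean(control):+.1f} (sd {stdev(control):.1f})")
print(f"experiment mean change: {mean(experiment):+.1f} (sd {stdev(experiment):.1f})")
print(f"average effect of the variable: {diff:+.1f} positions")
```

Comparing the experimental group against the control, rather than against its own past, is what separates the ranking effect of the variable from background movement affecting all pages.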
Eliminating Confounding Variables
Confounding variables are variables other than the one being tested which influence the outcome, reducing the validity of the experiment.
The following methods can help reduce or eliminate the effects of confounding variables:
- Increased sample size
Increasing the sample size reduces the likelihood that the measured outcome occurred by chance.
- Alternating order
Removes experimental bias from order effects. Eg: The order of links in the source code may have an impact on the flow of link value. If using larger sample sizes it could be possible to reduce this effect by alternating the order of links to the test pages.
- Measurements from multiple locations
Location can have an impact on results. Taking measurements from multiple locations (around the world) helps control for this. The alternative is to exclude other locations from any conclusions drawn from the findings.
- Depersonalization of results
Google offers personalised results at browser level. It is possible to depersonalise results by entering incognito mode in Chrome (or using a fresh install of another browser) and adding ‘&pws=0’ to the URL. Eg: https://www.google.com/search?q=example+query&pws=0
There is undeniably controversy about the effectiveness of SEO experiments. In their defence, a recent article by SEO.com makes the point well that, although they are often imperfect, they can still provide us with insight.
I agree with this sentiment. The fact that results may not always be 100% valid doesn’t mean we should give up on them altogether. There are many examples of imperfect research and experimentation outside of SEO, but no one would suggest we stop those.
Finally, I would argue that the only thing which can increase the validity of SEO experiments is to conduct more of them, with improved methodology and reporting.