I attended a webinar on best practices for A/B testing. Let’s face it, math is not my area of expertise and I am very skeptical of statistics. Having been in marketing as long as I have, I hold the opinion that you can find a statistic to support almost any claim you want.

Case in point: is this shade of blue better than that shade of blue for my website? If you analyze the question, you realize it is a matter of opinion. There is no empirically correct answer. However, you can test the daylights out of it (yes, someone is making millions off people asking this exact question) and get a statistically “correct” answer. In reality, that answer is the most-liked shade of blue; it is not the “best” blue for your website, and it will not drive more people to your site.

So, when is statistical testing, and A/B testing in particular, worthwhile? In two cases: to find an absolute answer, such as how many jellybeans are in a jar, or to test a theory. A math professor demonstrates the first case. Every year he asks his students to guess how many jellybeans are in a jar. The individual answers span a vast range, but their average lands remarkably close to the actual number, often less than one percent off.
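The jellybean effect is easy to see in a quick simulation. This is just a sketch with made-up numbers: the jar count and the spread of guesses are my assumptions, not figures from the webinar. Individual guesses are wildly off, but the errors roughly cancel out in the average.

```python
import random

random.seed(42)

TRUE_COUNT = 1000  # hypothetical number of jellybeans in the jar

# Simulate 500 noisy guesses: each person is off by as much as 70%
# in either direction, but the errors are unbiased on average.
guesses = [TRUE_COUNT * random.uniform(0.3, 1.7) for _ in range(500)]

average = sum(guesses) / len(guesses)
error_pct = abs(average - TRUE_COUNT) / TRUE_COUNT * 100
print(f"worst guess: {max(guesses):.0f}, best case of the crowd "
      f"average: {average:.0f} ({error_pct:.1f}% off)")
```

Any single guess can be hundreds of beans wrong, yet the crowd average typically lands within a couple percent, which is the whole point of the professor's demonstration.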

A/B testing and statistical testing are also great for testing a theory, for instance, determining whether there is demand in the marketplace for a product. Your theory, as the inventor, is that the answer is yes. A/B testing gets abused when people try to use it to determine which headline is best. That is a matter of opinion, and frankly, the margin of error swallows any real difference between subtle variations.
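To put a number on that last claim, here is a rough sketch of a standard two-proportion z-test. The conversion rates and traffic figures are hypothetical, chosen only to show that a subtle headline difference drowns in the margin of error at realistic sample sizes.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical headline test: 5.0% vs 5.1% conversion, 2,000 visitors each.
z = two_proportion_z(100, 2000, 102, 2000)

# Two-sided p-value from the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.2f}")  # nowhere near the usual 0.05 cutoff
```

With numbers like these the test cannot distinguish the headlines at all, which is why the webinar's advice to chase huge effects rather than subtle tweaks makes sense.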

The lessons I took from this webinar are:

- Crowds are great at finding objective truth
- Test your theory
- Statistical testing kills creativity and greatness
- Be bold, go for huge effects, not subtle changes
- Google tested 41 shades of blue before they determined which one to use for the Gmail send button