“If you want to maximize your online revenue, A/B testing is a must.”

For a B2C company, that statement probably sounds obvious. Consumer-facing ecommerce businesses see large amounts of traffic every day, and expect to turn that traffic into direct website sales – preferably with no human contact whatsoever. Since the website itself is the main sales channel, the results of regular testing are clearly visible on the bottom line.

For B2B companies, things are a little different.

For one, the B2B customer journey is much longer, and often includes a significant offline sales component. There are often multiple decision makers involved, and they tend to do more thorough background research before selecting a solution. This makes it very hard to attribute a successful sale to any single cause.

Another difference is that most B2B websites see far less traffic than B2C sites, since they target specialized markets with fewer total customers. To make things more difficult, they also have to appeal to multiple buyer types with very distinct needs. This requires them to segment their website content, so as to provide a different set of information for each buyer persona. The end result is that already-low traffic numbers are divided into several distinct “pathways”, reducing the available sample size even further.

Because of these distinctions, B2B companies have to approach their A/B testing differently from their B2C counterparts. In particular, they need to aim for substantial changes that will move the needle in a big way. Amazon might be perfectly happy with a conversion increase of 2-3%, but when you have to deal with fuzzy attribution and small sample sizes, minor improvements like that simply won’t cut it.

To understand this better, let’s take a deeper look at how A/B testing works.


Why Focus On Big Changes?

Fundamentally, A/B testing is a statistically-based experimental framework. And like all statistical experiments, the reliability of the results depends upon a large enough sample size.

What is “large enough” depends on three factors:

  • Base Conversion Rate
  • Improvement in Conversion Rate
  • Confidence Level

As a general rule, A/B testers aim for a confidence level of 95%, which means there’s only a 5% probability of seeing the observed difference when no real difference exists. The thing to realize is that the larger the improvement in conversion rate, the lower the likelihood of it being due to chance. That means you don’t need as large a sample size to be confident in the results.
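To see what that 5% “fluke” probability means in practice, here is a minimal simulation (illustrative numbers, not from the article): it runs many A/A tests, where both variants are identical, and counts how often a two-proportion z-test at 95% confidence declares a “significant” difference anyway. Roughly 5% of these null tests come out as false positives.

```python
# Simulating A/A tests (identical variants) to show that a 95%
# confidence threshold lets roughly 5% of null results look
# "significant" purely by chance. All numbers are illustrative.
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)

def fake_aa_test(p=0.02, n=2000):
    """One A/A test: both arms share the same true conversion rate p."""
    a = sum(random.random() < p for _ in range(n))
    b = sum(random.random() < p for _ in range(n))
    pooled = (a + b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return False
    z = abs(a / n - b / n) / se
    return z > NormalDist().inv_cdf(0.975)  # "significant" at 95%

flukes = sum(fake_aa_test() for _ in range(1000)) / 1000
print(f"False positive rate: {flukes:.1%}")  # should land near 5%
```

Cutting the threshold to 99% confidence would reduce these false alarms, but at the cost of needing even more traffic per test.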

This isn’t a small difference, either. According to CRO expert Michal Pařízek, a 10% improvement on a 2% base conversion would require a sample size of 39,488 per variant, while a 50% improvement on a 2% base would require a sample size of 1,871 per variant. That is a 21x difference in sample size!
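The same dynamic can be sketched with the standard normal-approximation formula for a two-proportion test. The power level (assumed 80% here) and one- vs. two-sided choice aren’t stated in the article, so these numbers won’t reproduce the quoted figures exactly, but the roughly 21x gap between detecting a 10% lift and a 50% lift on the same 2% base shows up all the same:

```python
# Rough per-variant sample size for a two-proportion A/B test,
# using the normal approximation. Assumes 95% confidence (two-sided)
# and 80% power; the article doesn't state the power level used.
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.80):
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 95% two-sided -> ~1.96
    z_beta = NormalDist().inv_cdf(power)           # 80% power -> ~0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

small_lift = sample_size_per_variant(0.02, 0.10)  # 10% lift on a 2% base
big_lift = sample_size_per_variant(0.02, 0.50)    # 50% lift on a 2% base
print(small_lift, big_lift, round(small_lift / big_lift, 1))
```

Whatever the exact assumptions, the takeaway holds: detecting a small lift costs vastly more traffic than detecting a large one.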

Because of this dynamic, B2B companies shouldn’t bother with small tweaks like changing button colors or adding a single word to a headline. Instead, they should focus on big, radical changes that can have a major impact on base conversions. This will usually involve an entirely different design for a web page, like the following two landing pages for Iron Mountain:

In the above A/B test, version B increased total form submissions by a whopping 107%. With an improvement like that, you really don’t need all that much traffic to achieve statistical significance.


Start With Personalization

One of the most important “big changes” to test is buyer personalization.

While most B2B websites should include some degree of personalization for different buyer personas, the optimal configuration will differ from market to market. You should test various levels of customization, and try segmenting your website in different ways, such as by end user, industry or business function.

The results of effective buyer personalization can be quite remarkable. For example, enterprise software provider Citrix tested a personalized homepage for their three largest verticals: healthcare, education and finance. Half of their customers saw a personalized banner with copy, imagery and a call-to-action that was specific to these verticals. The other half saw a standard, “one-size-fits-all” version.

This single test generated significant improvements across a wide range of metrics:

  • Bounce rate dropped by 7%
  • Homepage banners saw a 30% increase in clickthrough rate
  • 10% increase in total pageviews per session
  • 4% increase in average session duration

Note that the last two are improvements in site-wide engagement, which means that the benefit of personalizing the homepage banner actually extended to other pages as well.

As these results show, personalization matters a lot in B2B. So if you haven’t done so already, personalized homepages are definitely something you’ll want to test.


Test Micro-Conversions

While big transformations can be very effective at “moving the needle”, they do come with one important drawback. Since you’re making multiple changes at the same time, it’s not possible to determine exactly which changes were responsible for the improved results.

That’s where micro-conversions come in.

A micro-conversion is a desired action that occurs before the main conversion goal of your website (otherwise known as a macro-conversion). For instance, your website’s goal might be for the customer to fill in a form to request a quote. But before getting to that step, they would usually have taken several other actions, such as downloading a white paper, watching a case study video, or asking questions through an online chat. Each of those events is a micro-conversion – an action that moves the customer closer to requesting a quote.

The nice thing about micro-conversions is that they occur earlier in the buyer journey, so you’ll have a much larger pool of people to work with. If only 1 in 20 visitors who download a white paper ends up requesting a quote, then measuring white paper downloads instead of quote requests gives you 20x the available users. Also, since you’re only testing for a single, low-commitment action, it’s possible to see significant percentage improvements with just one tweak. These two factors combined allow you to conduct much more specific A/B tests.

Here’s an example of a micro-conversion from B2B broadcast platform Ustream:

The addition of the word “broadcast” led to a 12% increase in button clicks for broadcasting new sessions. Due to the substantial improvement, Ustream only needed to expose the test to 12,000 visitors before achieving statistical significance.
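Checking whether a micro-conversion lift like this is statistically significant comes down to a standard two-proportion z-test. The click counts below are hypothetical, chosen only to roughly match the article’s figures (about 12,000 total visitors and a ~12% relative lift); they are not Ustream’s actual data.

```python
# Two-proportion z-test sketch for a micro-conversion experiment.
# The click counts are made up for illustration; the article only
# reports a 12% relative lift across ~12,000 visitors.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical: 6,000 visitors per variant, 25% vs 28% click rate (~12% lift)
z, p = two_proportion_z_test(1500, 6000, 1680, 6000)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # p well below the 0.05 threshold
```

With a lift that size and that much traffic, the result clears the 95% confidence bar comfortably, which is exactly why high-volume micro-conversions make such good test targets.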


Increase B2B Revenue With Scientific A/B Testing

Done properly, A/B testing can bring a higher level of clarity to your B2B marketing efforts. While B2B companies don’t have the luxury of working with large consumer audiences, A/B testing can still work very well for them, given a few important modifications. Over time, understanding and implementing reliable, scientific A/B tests will put you many steps ahead of the competition.