
In 2020, Split-Test Video is no longer an option…


Fabrice Courdesses
Published on 15 March 2020



Targeting performance KPIs implies an attitude and, above all, an adaptive approach: it quickly seemed insufficient to us to launch video campaigns blindly, relying on a single video “creative”. You might as well toss a coin in the air and hope it lands on the winning side…

It’s commonly accepted practice to split-test the various banners you’re using in your display campaigns. Why shouldn’t it be the same for video ad campaigns?

The origins of our inspiration

From the very beginning, we drew much of our inspiration from an American video campaign, the first ever cited as a best case by YouTube, dating back to 2009, an eternity ago. YouTube US was only 4 years old!

That was the Orabrush company’s video, which catapulted its business to success in less than a year, thanks to a superbly orchestrated YouTube campaign.


We had the opportunity to meet and work with the US teams behind this campaign, as well as their successors, such as PooPourri or SquattyPotty.

It quickly became clear that these business successes were largely built on pre-campaign engineering, based on multidimensional testing of ad variations, and that distribution had become just as important as production.

The Video Split-Test: campaign performance indicator

When it comes to video advertising, brands have to invest much more money in their creatives than for a banner package.

As a result, production practices unfortunately mirror the TV model and are limited to delivering a single video.

To generate better performance for our clients, we quickly asked the production studios, following the model of the best American cases, to deliver several ad variations of the same video, varying in particular the introduction, the call-to-action, the duration, the sequencing, the soundtrack, the on-screen overlays, the catchphrases, the tone, etc.

It may seem like a little more work for the studios, but if these variants are anticipated from the writing phase ➔ their production cost is marginal ➔ the campaigns are more effective ➔ and the clients are more satisfied.

To illustrate and measure the immediate impact of these pre-campaign video Split-Tests, we calculate a Worst-to-Best ratio.

Based on a given KPI (often conversions, but it can also be post-click visits, add-to-basket events, clicks, views…), we measure the performance gap between the worst and the best video variant.
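As a concrete (and purely hypothetical) illustration of that calculation, the sketch below computes a Worst-to-Best ratio from per-variant results; the variant names and figures are invented for the example, not taken from our campaigns.

```python
# Minimal sketch of the Worst-to-Best ratio, assuming hypothetical
# per-variant results (variant names and figures are illustrative).
variant_results = {
    "intro_A_cta_shop": {"spend": 500.0, "conversions": 12},
    "intro_B_cta_shop": {"spend": 500.0, "conversions": 31},
    "intro_A_cta_demo": {"spend": 500.0, "conversions": 7},
}

def kpi(stats):
    """Conversions per euro spent; any KPI (views, clicks, ROAS) works the same way."""
    return stats["conversions"] / stats["spend"]

scores = {name: kpi(stats) for name, stats in variant_results.items()}
worst = min(scores.values())
best = max(scores.values())

worst_to_best = best / worst                     # with equal spend: 31/7 ≈ 4.4x
discrepancy_pct = (best - worst) / worst * 100   # ≈ 343% gap

print(f"Worst-to-Best ratio: {worst_to_best:.1f}x ({discrepancy_pct:.0f}% gap)")
```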

After more than a dozen campaigns carried out under these conditions, we have observed discrepancies ranging from 70% to more than 600%!

This means that by launching a campaign with a single (blind) video, a brand would have seen its performance significantly impacted: the worst video could have generated a ROAS (Return On Ad Spend) of 1, for example, and the best one of 10.

The impact of Split-Test video in the success of a campaign is such that it’s clearly no longer an option.

How does the video Split-Test work?

Initially, we organized our Split-Test video campaigns “by hand,” meaning labor-intensive work: setting up a bunch of Google Ads campaigns, each with a different variant, so that we could compare them and determine which variant impacted the campaign results the most.

So in 2016, for our first campaign for the Balinea brand, we set up 60 different campaigns, combining the following variables (a sketch of the combinatorics follows the list):

> multiple different videos (intro, length, ending).
> multiple CTAs.
> multiple landing pages.
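To give a sense of the combinatorics behind those 60 campaigns, here is an illustrative Python sketch that enumerates every combination of a few hypothetical variant dimensions (the actual Balinea variants are not reproduced here):

```python
# Illustrative sketch of why the number of sub-campaigns grows so fast:
# every combination of variant dimensions becomes its own campaign.
# The variant values below are hypothetical placeholders.
from itertools import product

videos = ["intro_short", "intro_long", "alt_ending"]      # 3 video cuts
ctas = ["Book now", "Discover the offer"]                  # 2 calls-to-action
landing_pages = ["/home", "/offer", "/booking"]            # 3 landing pages

sub_campaigns = [
    {"video": v, "cta": c, "landing_page": lp}
    for v, c, lp in product(videos, ctas, landing_pages)
]

print(len(sub_campaigns))  # 3 x 2 x 3 = 18 combinations to set up by hand
```

Add a few more dimensions (devices, audiences…) and the count multiplies again, which is exactly what pushed our manual setups from 60 to 150 sub-campaigns.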

When we saw the impact of these tests on the success and scale of the campaign (+40% on sales), we inevitably sought to do better (and therefore more) for the next campaign.

For the second run, 100 sub-campaigns were set up, with an even greater impact, and even more so with 150… as we integrated more and more variables:

> different videos
> CTAs
> landing pages
> devices
> audiences

… until the manual approach showed its limitations:

In terms of time spent: remember that Google Ads and YouTube are two different platforms, and Google Ads was not originally designed to manage video campaigns. We ended up spending whole days setting up our tests manually.

In terms of statistical reliability: the budget allocated to these tests was limited, leaving little data for each sub-campaign and making it essential to go beyond a manual approach.
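To picture the reliability problem, here is a generic statistical sketch (our illustration, not the exact test we used): a two-proportion z-test shows how the same relative lift between two variants can be inconclusive at small volumes and obvious at larger ones.

```python
# Minimal sketch (an assumption for illustration, not the method we used) of why
# small test budgets hurt reliability: with few impressions per sub-campaign,
# even a large-looking gap between two variants may not be significant.
from math import sqrt

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Absolute two-proportion z-statistic for the conversion rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return abs(p_a - p_b) / se

# Hypothetical figures: variant A converts at 2%, variant B at 3%.
print(z_two_proportions(20, 1_000, 30, 1_000))      # ≈ 1.4 -> below 1.96, inconclusive
print(z_two_proportions(200, 10_000, 300, 10_000))  # ≈ 4.5 -> clearly significant
```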

We therefore conceived and developed VideoRunRun, the first technology that maximizes video ad performance through Split-Tests on YouTube.

Today, with just a few clicks, VideoRunRun allows the simultaneous launch of several thousand sub-campaigns integrating all possible variants.

Within a few days, our algorithm is able to identify the best-performing sub-campaigns on the brand’s KPI objectives, and then launch the scaling phase on the best-performing variants.
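For readers who want to picture the mechanics, the following deliberately naive Python sketch shows a test-then-scale step: rank sub-campaigns on the target KPI, then move the scaling budget to the winners. All names and numbers are hypothetical, and this is only an illustration of the general shape, not VideoRunRun’s actual algorithm.

```python
# Highly simplified sketch of a test-then-scale loop, assuming hypothetical
# sub-campaign results; it only illustrates the general shape of the approach.

def select_winners(results, kpi_key="conversions", keep_top=0.1):
    """Rank sub-campaigns by cost per KPI event and keep the best fraction."""
    ranked = sorted(
        (r for r in results if r[kpi_key] > 0),
        key=lambda r: r["spend"] / r[kpi_key],  # lower cost per conversion is better
    )
    n_keep = max(1, int(len(ranked) * keep_top))
    return ranked[:n_keep]

def scale_budget(winners, total_budget):
    """Spread the scaling budget evenly across the winning sub-campaigns."""
    share = total_budget / len(winners)
    return {w["name"]: share for w in winners}

# Hypothetical test-phase results for three sub-campaigns.
test_results = [
    {"name": "video_A_cta_1", "spend": 100.0, "conversions": 4},
    {"name": "video_B_cta_1", "spend": 100.0, "conversions": 11},
    {"name": "video_B_cta_2", "spend": 100.0, "conversions": 7},
]

winners = select_winners(test_results, keep_top=0.34)
print(scale_budget(winners, total_budget=5_000.0))  # {'video_B_cta_1': 5000.0}
```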

VideoRunRun’s ambition for brands

Support digital brands and e-commerce players in the field of video performance… by predicting the ROI of their video campaigns during the testing phase and then optimizing KPI acquisition during the resulting scaling phases.

➔ Significantly increase the ROAS of retail chains and advertisers who invest heavily on YouTube and don’t always know which KPIs to measure… and thereby how to optimize them.

Accompany creative studios and agencies in their transformation from audiovisual to digital.

Our ambition is to establish a new standard, a new way of understanding digital and social video, by putting it (also) at the service of performance, thanks to Split-Testing.

Branding and performance are complementary, so why choose?

