How a Simple A/B Test Can Dramatically Improve Ad Monetization

Posted on March 31, 2021 by Tina Tsang

Tina Tsang, Director, Customer Success, APAC

There’s no getting away from it – programmatic advertising is complex, and players across the industry face constant pressure to adopt new technology. Trying to assess the value of that new technology should be, but rarely is, a scientific process.

This is especially true when it comes to publishers adopting header bidding technology. Header bidding boosts monetization potential for publishers by allowing them to send ad requests to all demand partners simultaneously, with all bids competing in one unified auction. To do this, it’s common practice for publishers to select a wrapper solution from a vendor. A wrapper is a piece of code embedded into the header of a website, into which multiple demand sources can be plugged. A wrapper makes it easier for publishers to add more demand sources – and more demand sources competing in an auction means increased yield, helping publishers ensure they’re getting the highest price for every single impression.
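To make the mechanics concrete, here is a minimal sketch of the unified-auction idea behind a header-bidding wrapper: fan requests out to every demand partner in parallel, then let all bids compete and take the highest. All names, partners, and prices here are illustrative assumptions, not any real vendor’s API.

```typescript
// Illustrative sketch only – not a real wrapper implementation.
interface Bid {
  partner: string;
  cpm: number; // bid price in USD per thousand impressions
}

// Simulated demand partners; a real wrapper would make network calls.
async function requestBid(partner: string, adUnit: string): Promise<Bid> {
  const simulatedCpms: Record<string, number> = { alpha: 1.2, beta: 2.5, gamma: 1.8 };
  return { partner, cpm: simulatedCpms[partner] ?? 0 };
}

// The wrapper's core job: send requests to all partners simultaneously,
// then resolve one unified auction in which the highest bid wins.
async function runUnifiedAuction(partners: string[], adUnit: string): Promise<Bid> {
  const bids = await Promise.all(partners.map((p) => requestBid(p, adUnit)));
  return bids.reduce((best, b) => (b.cpm > best.cpm ? b : best));
}

runUnifiedAuction(["alpha", "beta", "gamma"], "top-banner").then((winner) => {
  console.log(`Winning bid: ${winner.partner} at $${winner.cpm} CPM`);
});
```

The key property is the `Promise.all` fan-out: every demand source gets a chance to bid on every impression at the same time, rather than being called one after another in a waterfall.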

All too often, time-pressed publisher technical teams rely on an RFI or RFP rather than robust testing of competing wrappers. Technology is selected based only on the information provided by vendors. But something as simple as an A/B test can yield dramatic improvements in monetization.

A/B testing abounds on the buy side – most commonly, advertisers use it to test the effectiveness of creative assets. Pitting everything from display ads to email subject lines against an alternative, it’s a fast and efficient way to learn and improve on the fly. Proper A/B testing, with data-informed decision-making, can provide publishers with real guideposts for how to drive increased revenue potential.

It did for online media brand 9GAG.

As an organization that believes in researching, hypothesizing, testing and then deciding, 9GAG found it natural to run A/B tests when selecting a new wrapper solution to improve programmatic monetization.

They wanted to understand whether another wrapper solution might drive better results than their incumbent wrapper. So they selected PubMatic’s Prebid-based OpenWrap and one other industry-leading Prebid-based wrapper, and put all three to the test – using a robust and randomized A/B testing methodology.
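A randomized wrapper test like this typically works by bucketing traffic so each visitor is consistently served one of the competing wrappers, keeping the variants comparable. The sketch below shows one common way to do that with a stable hash of a session ID; the bucketing scheme and names are assumptions for illustration, not 9GAG’s actual setup.

```typescript
// Illustrative traffic-splitting sketch for a three-way wrapper test.
const WRAPPERS = ["incumbent", "wrapperA", "wrapperB"] as const;

// Hash a stable session id so each visitor always gets the same wrapper,
// while traffic splits roughly evenly across the three variants.
function bucketFor(sessionId: string): (typeof WRAPPERS)[number] {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return WRAPPERS[hash % WRAPPERS.length];
}

// Same session id -> same variant, so revenue per variant can be
// attributed cleanly and compared at the end of the test.
console.log(bucketFor("session-abc123"));
```

Deterministic assignment matters: if a visitor bounced between wrappers mid-session, per-variant revenue numbers would be muddied and the comparison would lose its rigor.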

The results came as a surprise. 9GAG found that, despite all three wrappers being Prebid-based and running the same demand partners, for the same ad units, in the same markets, there were big differences in the monetization they delivered.

PubMatic’s OpenWrap dramatically outperformed both competing wrappers – driving a 36% increase in net revenue.

The publisher had hypothesized that another wrapper would drive the best results – so had they made this decision based solely on an RFP, they would have missed out on significant revenue.

RFIs and RFPs are great for initial research, but without objective testing it’s impossible to account for all the variables that affect real-life performance. So if you’re thinking about trying a new wrapper solution, don’t just rely on the information provided by vendors; test the options for yourself – don’t leave money on the table.

Find out more by reading the full case study here.

Originally published in B&T Magazine