How To Make Your Digital Signage More Effective, With A/B Testing

July 10, 2018 by guest author, Kenneth Brinkmann

Guest Post: Debbie DeWitt, Visix

In order to truly use your organizational digital signage to its fullest potential, you need to have a system of continuous assessment in place – what works, what doesn’t and how can less effective messages be improved to get people to follow your calls-to-action? One way to peek under the hood is to conduct analytical experiments with content, running two different versions of the same message to see which one performs better.


This is known as a Conversion Rate Optimization (CRO) test. A “conversion” is when someone does what you want them to do after being exposed to your communications. In a marketing email, the goal could be to get the reader to visit the company’s website. On the website, maybe the goal is to get people to fill out a request-more-information form. These are just examples – the goal of the communications could be literally anything.

There are two common ways to go about this that can also apply to digital signage: multivariate testing, where several elements are changed at once, and split testing (A/B testing), where a single element is changed at a time.

Multivariate testing might have some advantages for high-traffic websites, but for something like digital signage messages, where the content is limited to a few well-chosen words and images, split testing, or A/B testing, makes much more sense. It yields reliable data that can be acted upon immediately, and lets you create a database of effective design and content information that can be used to create better messages in the future.

Set Your Conversion Goal
First off, you need a conversion goal – what do you want people seeing that particular digital signage message to do? This is why it’s crucial to always have a call-to-action (CTA) for each message – something the viewer can do (preferably immediately) that lets you know that the message worked.

If the message is promoting, say, a new training package or module, your conversion goal might be for people to sign up or get more information on the web. This might mean including a short URL or QR code in the message, so people can use their phones to go right to the webpage. The link should lead to a dedicated landing page that is only used by people who follow that specific URL or QR code. That way you can see exactly how many people interacted with your digital signage message to access the webpage. If you also send out an organization-wide email promoting the new training, include a link to an identical landing page with its own unique URL, so you can see how many people responded to the email versus the digital sign. So a good CTA also gives you built-in information about the effectiveness of your various communication channels.
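As an illustration (not tied to any particular signage platform), here is a minimal Python sketch of one common way to build separately tagged landing-page links for the sign and the email, assuming your analytics tool can report on standard UTM-style query parameters. The domain, page path and campaign name are placeholders, not anything from a real deployment:

```python
from urllib.parse import urlencode

# Hypothetical landing page for the training sign-up (placeholder domain).
BASE_URL = "https://example.com/training-signup"

def tagged_url(source: str, campaign: str = "new-training") -> str:
    """Build a landing-page URL tagged with UTM-style parameters so an
    analytics tool can attribute each visit to a specific channel."""
    params = {"utm_source": source, "utm_campaign": campaign}
    return f"{BASE_URL}?{urlencode(params)}"

# One URL for the digital sign, a different one for the all-staff email.
signage_url = tagged_url("digital-signage")
email_url = tagged_url("email")
print(signage_url)
print(email_url)
```

The signage URL would typically be shortened or turned into a QR code before it goes on screen; the point is simply that each channel gets its own distinguishable link.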

But maybe you’re seeing only a very small uptick in people signing up, despite having a message in the digital signage playlist that you think looks pretty good. So then, why isn’t it working better? This is where an A/B test comes in. Start tweaking elements and seeing how the altered message performs against the original one.

Form A Hypothesis
A/B means that you change a single variable at a time (the original version is A and the version with one altered variable is B). First you need to come up with a hypothesis, then change the variable and test to see if your hypothesis was correct.

In the example above, let’s say the message had a short URL as a means to sign up for the training package on the web. Your hypothesis might go something like this: “People aren’t using the short URL. Maybe they think it’s easy to remember and they’ll type it into their browser when they get back to their desk, but then they forget it, or get distracted by something else and never get around to it. Maybe another way to get to the webpage would be more effective.” You think of a QR code – these might get a more immediate response because they contain information that can only be deciphered by an app; there’s no way to “remember” a QR code – you have to use it right then and there. So, you create a new message that is identical in every aspect to the first one, but instead of a short URL, you use a QR code.
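If you want to generate the QR code yourself rather than through your signage software, a quick sketch like the following works, assuming the third-party Python qrcode package (installed with pip install qrcode[pil]); the landing-page address is a placeholder:

```python
# Requires the third-party "qrcode" package plus Pillow:
#   pip install qrcode[pil]
import qrcode

# Point the QR code at its own tagged landing-page URL (hypothetical address),
# so scans can be counted separately from typed-in short URLs.
landing_url = "https://example.com/training-signup?utm_source=signage-qr"

img = qrcode.make(landing_url)        # returns a PIL image of the code
img.save("training_signup_qr.png")    # drop this image into the B version of the message
```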

Run The Test
Now you run your A/B test. The short URL message is the control (A), and the QR code message is the challenger (B). Your dependent variable, which is the primary metric to determine which one is more effective, is the number of people who go to a dedicated landing page on the web after seeing one of the messages.

You need to split your sample groups fairly randomly and equally. Let’s say that the original message (which is now the control message) ran four times an hour in high traffic areas between 11am and 3pm, Monday-Friday. Change none of those variables (number of times displayed, locations, or when the message is shown) – just put message A on half the time (twice an hour) and message B on the other half. The total number of times either A or B is shown is the same as before – four times an hour for four hours, or 16 times a day, but each one only gets displayed eight times. And you want to randomize when A or B is being shown – don’t just alternate them or you may accidentally test times of the day instead of a short URL vs. a QR code. They should also display in a different order the following day, so you have a good random spread across the five days (meaning that each version will display eight times a day for five days, or 40 times in a week). You have to test both A and B at the same time or you won’t really know if your hypothesis was correct.
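Whether you randomize by hand or inside your signage scheduler, the idea is the same. As a rough sketch (not tied to any particular signage software), here is how you might generate one week’s randomized rotation in Python, using the 11am–3pm, four-plays-per-hour example above:

```python
import random

# Four plays per hour, 11am-3pm, five weekdays: 16 slots per day.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
HOURS = [11, 12, 13, 14]      # start hour of each display hour
SLOTS_PER_HOUR = 4

schedule = {}
for day in DAYS:
    # Exactly half of the day's 16 slots go to A and half to B, in random
    # order, so neither version is tied to a particular time of day.
    assignments = ["A"] * 8 + ["B"] * 8
    random.shuffle(assignments)
    slots = [(hour, slot) for hour in HOURS for slot in range(SLOTS_PER_HOUR)]
    schedule[day] = dict(zip(slots, assignments))

# Example: which version plays in the first slot of the 1pm hour on Wednesday?
print(schedule["Wed"][(13, 0)])
```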

Figure out how long you need to run the test to evaluate your hypothesis. Will just one week do it, or should you run it for two weeks? What about a whole month? You need to make sure that a large enough audience was exposed to both messages to give you a good sample size. When A/B testing is done with emails, the general rule of thumb is that a group of 1,000 people gives you enough to work with.
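If you want something firmer than a rule of thumb, the standard two-proportion sample-size formula gives a ballpark figure for how many people need to see each version. This sketch uses SciPy, and the 2% and 4% conversion rates are made up purely for illustration:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p_a: float, p_b: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate viewers needed per variant to detect a difference
    between conversion rates p_a and p_b (normal-approximation formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# e.g. hoping to lift conversions from 2% (short URL) to 4% (QR code)
print(sample_size_per_group(0.02, 0.04))
```

Small expected differences push the required audience up quickly, which is exactly why a one-week test is sometimes not enough.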

Measure Results
Now you need to see if there are changes in your dependent variable – if you see more people going to the landing page. But remember – you need to be able to track visits that originated from the URL separately from those that originated from the QR code to measure which was more effective.

You might be able to track this with your QR application, or tools like Google Analytics. If not, you’ll need to create two landing pages. They should look identical, but the actual URL of each one needs to be slightly different. (For example, skynet.com/training/URL and skynet.com/training/QR.)
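If all you have is a plain web-server access log, even a few lines of Python can tally visits to each dedicated landing page. The log file name and paths below simply follow the hypothetical example above:

```python
from collections import Counter

# Count hits to each dedicated landing page in a plain access log
# (hypothetical file name and paths, matching the example above).
PATHS = ("/training/URL", "/training/QR")

counts = Counter()
with open("access.log") as log:
    for line in log:
        for path in PATHS:
            if path in line:
                counts[path] += 1

print(counts)   # hits per landing page
```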

Once you’ve run the A/B test for the length of time you think you need, look at the data for your landing pages. Did the QR code get more traffic than the short URL? If yes, then your hypothesis is probably correct, and you should run only the B version of your message from now on because it’s been shown to be the more effective communication. However, if there was about the same amount of traffic to both landing pages, then your hypothesis was wrong, and this A/B test gets marked as inconclusive. Then it’s back to the drawing board, trying to figure out why people aren’t going to where you want them to go.
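If you also know (or can estimate) roughly how many people were exposed to each version, a simple two-proportion test helps answer the “is this difference real or just noise?” question. This sketch uses the statsmodels library, and the counts are made up for illustration only:

```python
# Requires statsmodels:  pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical tallies after the test period (made-up numbers):
# conversions = landing-page visits, exposed = estimated viewers per version.
conversions = [38, 61]        # [A: short URL, B: QR code]
exposed = [1000, 1000]

stat, p_value = proportions_ztest(conversions, exposed)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value (commonly below 0.05) suggests the difference between
# A and B is unlikely to be random noise.
```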

Variables to Test
This is just one specific example, but the number of things you can fine-tune is huge. It might be the colors, the images, or the wording of a message that needs to be changed and tested. Maybe it’s the layout and design of the whole screen. Maybe adding video would get more people to follow the CTA. Maybe the CTA itself needs to change. And yes, sometimes the thing being promoted just isn’t that interesting to large numbers of people, or only appeals to a niche audience.

In a perfect setup, you would be running A/B tests all the time with many different messages. This is what a program of continuous assessment and improvement entails. In a given playlist of messages being displayed on your digital signs, at least a few should be going through some kind of A/B testing at any given time. This can yield an amazing amount of data about what does and does not appeal to your audience, not only for individual messages, but for your digital signage as a whole.

A/B Testing Checklist
So, when conducting A/B testing on your digital signage messages, follow this checklist:

  1. Choose a message that isn’t performing well in terms of response to your CTA.
  2. Define your conversion goal – your dependent variable.
  3. Come up with a hypothesis (or several – but only test one at a time).
  4. Choose a single variable to test.
  5. Create a control (A – the original message) and a challenger (B – the same message with one element changed).
  6. Split your sample groups equally and randomly.
  7. Run the A/B test long enough to get a good-sized sample (generally, at least 1,000 people exposed to either message variant).
  8. Run both A and B versions during the same time period.
  9. After the test, examine your goal metric (that dependent variable).
  10. Determine how significant your findings are.
  11. Take action based on the results – either use the more effective version, or test another variable.
  12. Plan the next A/B test.

A phrase from the world of sales is ABC – Always Be Closing. It’s tempting to co-opt that for digital signage into Always Be Correcting. But that seems to suggest that A/B testing should only happen when there’s a problem. It might be better to say ABO – Always Be Optimizing, or ABI – Always Be Improving. Because that’s what this is really all about – using this simple method to fine-tune your messages to your specific audience to get the best response possible and keep them engaged. Digital signage is a dynamic communications medium, and dynamic means constant progress, constant change, constant improvement.
