News | Facebook Marketing | Paid Media

Facebook Releases Testing Guide to Help Marketers Perform Better Experiments

By Iris Hearn

Mar 12, 2019

A good marketing strategy largely stems from testing, analyzing, and adapting based on results.

This is especially relevant to social media marketing, where the best strategies are based on an understanding of audience behavior, when people are active on the platform, and what type of content will “jump out” and inspire click-throughs, even when they’re not actively searching for it.

Conducting tests on social media is essential to cracking this code, and can help you discover new pathways to success. When done correctly, tests can also provide insights to help you understand your target audience better.

However, the nature of testing means that some tactics won’t work out the way we intend them to.

When experiments don’t pan out the way we had predicted, too many marketers conclude that the campaign was a failure.

This mindset is counterproductive for two reasons:

  1. It can scare marketers into playing it too safe and avoiding the risks that can yield bigger results and better insights.

  2. It puts the focus on pivoting too quickly, completely missing the opportunity to dig into the lessons you did learn about your audience from the campaign.

Of course, you always want to see success and improvements in performance, but when that doesn’t happen, there are just as many lessons to be learned as there are from a successful campaign.

In fact, without learning these lessons, it's unlikely you’ll ever see the full potential of what your business can do on social media.

Facebook, one of the top platforms when it comes to both paid social ads and organic social business engagement, explains:

“Making mistakes is part of the process, and actually necessary for companies to make progress. The adoption of incrementality should encompass a change in mindset and culture to embrace responsible learning. This includes managing expectations that things can, and eventually will, go wrong during an experiment, but that when they do, it is ok because you are learning along the way.”

Facebook sees countless experiments conducted on its platform on any given day, and has used that data to determine that some errors in the testing process occur far more frequently than others:

“In fact, many errors are fairly commonplace and predictable—and there are proven tactics that can be employed to give the best response and salvage as much valuable learning as possible."

To help marketers and business leaders understand this, Facebook has released a 16-page guide titled “Prepare for the Unexpected: A Guide to Testing and Learning with Incrementality Measurement” focused on the testing process for its platform.

In this guide, the team at Facebook highlights six of the most common things that tend to go wrong during experiments, how they happen, and what to do in each scenario.

Below, I’ve summarized their main findings - but I highly encourage all marketers and business leaders to read the full PDF here for a more comprehensive understanding.  

Facebook’s 6 Common Errors in Paid Ad Campaigns

After analyzing many campaigns, Facebook boiled it down to the six errors it sees businesses make most frequently when they’re unhappy with paid Facebook ad campaign performance.

These include:

  • Data lacks statistical power: Whether your audience is too big or too small, sometimes your initial test simply doesn’t have enough data to reach statistical significance. (A quick sample-size sanity check is sketched just after this list.)

  • Outliers in the test group skewing results: No matter how hard we try to lock down our target group, there’s always the risk of outliers skewing results. For example, if one test reaches a COO who decides to buy 100 licenses of your software, the test will show positive metrics, but it’s very unlikely the same result will be replicated for a larger audience. (The outlier illustration after this list shows how dramatic this can be.)

  • Variables in the test are not isolated enough: Because tests rely on variables to draw cause-and-effect conclusions, poorly isolated variables can lead marketing teams to inaccurate conclusions about performance.

  • Cross-pollination between test and control groups: In scientific studies, the experimenter has methods of keeping groups separate. Unfortunately, in the paid ads world, we don’t always have that advantage. Professionals talk to one another, someone in a control group could actually be making purchasing decisions for someone in a test group, or your tracking cookies might sit on a shared computer, any of which can skew results.

  • Tests can have effects beyond the initial user interaction: These are referred to as “second-order effects”. Your ad might influence the user to do something; it just might not be exactly what you’re expecting (e.g., a campaign aiming to generate free trials might actually produce paid subscriptions, requests to talk to sales, or content downloads, but those aren’t being tracked). In this case, your campaign did boost engagement, but the results might be inaccurately depicted without digging deeper.

  • Marketers can’t track everything they need to: Too often, the ad dashboard is treated as the “single point of truth” for ad performance, and this just isn’t true. As much as we try to lock everything down, it’s not always possible. People can make cash or otherwise untraceable purchases, switch devices or browsers so the behavior isn’t attributed, or sign up with a different email than they previously used. You can’t always solve for this, but it’s important to keep your bottom line in mind and not underestimate how your efforts are connected, even when you can’t see it in the backend.
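
To make the first of these errors more concrete, here is a minimal sketch, in Python, of the kind of back-of-the-envelope sample-size check that can flag an underpowered test before it launches. It uses the standard two-proportion z-test approximation rather than anything from Facebook’s guide, and the baseline conversion rate, expected lift, significance level, and power are purely illustrative assumptions.

```python
# Rough sanity check for "data lacks statistical power": estimate how many
# users each arm of a test needs before a lift of a given size is detectable.
# All numbers are hypothetical; this is the textbook two-proportion z-test
# approximation, not a formula from Facebook's guide.
from statistics import NormalDist

def required_sample_per_arm(baseline_rate: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# A 2% baseline conversion rate and a hoped-for 10% relative lift would need
# roughly 80,000 users in each arm -- far more than many "quick tests" reach.
print(required_sample_per_arm(0.02, 0.10))
```

If the audience you can realistically reach is an order of magnitude smaller than the number a check like this returns, the test is unlikely to produce a statistically meaningful answer no matter how well it’s executed.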

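The outlier problem is just as easy to see with a few lines of arithmetic. The revenue-per-user figures below are entirely made up; the single oversized purchase plays the role of the “COO buys 100 licenses” scenario described above.

```python
# Hypothetical revenue-per-user figures for a tiny test cell: the 4,900 value
# stands in for the one outsized purchase that skews the whole comparison.
test_cell    = [0, 0, 49, 0, 0, 99, 0, 0, 0, 4900]
control_cell = [0, 49, 0, 0, 0, 0, 99, 0, 0, 0]

def mean(values):
    return sum(values) / len(values)

print(f"Test mean revenue/user:    {mean(test_cell):7.2f}")     # 504.80 -- looks like a big win
print(f"Control mean revenue/user: {mean(control_cell):7.2f}")  #  14.80

# Dropping (or capping) the single outlier tells a very different story.
trimmed = [v for v in test_cell if v < 1000]
print(f"Test mean without outlier: {mean(trimmed):7.2f}")       #  16.44
```

Neither sketch replaces the fuller incrementality methodology Facebook describes in the guide, but both show how a few minutes of arithmetic can catch these errors before real budget is spent.
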
Why Did Facebook Release this Guide?

Recently, we’ve been seeing more and more social media platforms roll out tools to help marketers have success with their campaigns.

Why?

Paid platforms are competing against one another for business. While they can’t control how receptive a user will be to ads shown to them, they can offer tips to help marketers run better campaigns.

Simply put, platforms like Facebook have actionable data that can help marketers make more informed decisions and gain better ROI through their paid ad efforts, and they’re putting it to use. The more brands that report success with a particular paid social outlet, the more profitable that platform will become.

This is definitely a trend that will benefit marketers and will enable us to run better, more comprehensive campaigns down the line.

Key Takeaways for Marketers

With the release of this new testing guide, Facebook is making the point that a test failing to perform as expected doesn't mean that Facebook advertising isn’t effective.

Additionally, they’re saying that with every test, there is always the risk that it won’t perform how you think it will.

This is true not only for Facebook ads - or even paid ads in general - but for virtually everything we do in digital marketing.

You can have the most informed data sets, the most well-defined buyer personas, or the most strategically laid out workflow, and still not generate the desired results.

When this happens, it’s vital to analyze what went wrong through a new lens. Maybe your campaign was perfectly positioned, but it simply reached the wrong people. Maybe a previous test included people who were going to buy anyway, regardless of which email drip they received. Maybe the people filling out your form were just collecting data for academic research rather than looking to buy.

The truth is, sometimes you’ll never get those answers. But it's important to realize that no matter how much data you have, there will always be pieces missing - so don’t take the results from one experiment as a single point of truth for your audience.

Instead, think of it as just that: the findings from one test. The theories it suggests can be further supported by additional tests, or disproven by contrary results from a future campaign.

Above all, make sure you’re considering the six common error sources Facebook has identified in this guide before launching your campaign. If nothing else, you’ll be prepared for any bumps that arise, and be able to move forward proactively rather than reactively.
