40: Sustainable growth marketing experimentation

Experimentation lives at the center of growth marketing and it’s one of the best ways to explain how marketing combines art and science.

Too much of today’s marketing is about attribution and data and reporting. That’s part of experimentation, obviously: tracking lift on certain metrics. But the art side is really the idea-generation part of experimentation: trying things that not a lot of other folks are doing, going against the grain, trying crazy ideas. Isn’t that what marketing is all about?

Today’s main takeaway is:

The most important part of designing experiments isn’t to have a single metric in mind or a rock-solid hypothesis. It’s to create a knowledge base of insights from past experiments that everyone on your team can learn from. That’s what we’re calling sustainable experimentation.

Sangram Vajre talks about 3 kinds of superpowers in marketing leaders:

The doer: they make sure the world is running today in the best way possible. They get stuff done. People count on them to be operational.

The driver: they can push projects through and help secure buy-in from internal – and sometimes external – stakeholders.

The dreamer: they are forward-thinkers who can help shake things up and come up with new suggestions. They spend time imagining the world we want to live in, the future. They have a bunch of ideas, but not always the ability to focus and move them along.

I’m wholeheartedly a dreamer. I spend my time digesting information, taking notes on cool ideas, and keeping a swipe file of things to try.

I don’t see growth marketers as scientists experimenting in a lab… I think of us as early adopters.

We’ve talked about channel fatigue before: marketers eventually ruin every new strategy, and everything ends up with diminishing returns.

That’s why experimenting with new channels and new ideas is so, so important.

How to design an experiment

  • Goal/objective
  • Assumptions, supporting data
  • Hypothesis
  • Implementation
  • Reporting

First things first, what’s your goal? When designing an experiment, I prefer having a single primary metric in mind while still monitoring secondary metrics. For example, here’s an objective:

Double the conversion rate of free trials to paid in the first 30 days from 2% to 4%.
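To make that concrete, here’s a rough sketch of what an experiment brief could look like as a structured record. This is just an illustration in Python – the field names are made up, not a prescribed format – but it maps one-to-one onto the checklist above.

```python
from dataclasses import dataclass, field

# A rough sketch of an experiment brief as a structured record.
# Field names are invented for illustration; adapt them to your own process.
@dataclass
class ExperimentBrief:
    goal: str                   # the objective, with a single primary metric
    primary_metric: str         # what success is measured on
    baseline: float             # where the metric sits today
    target: float               # where we want it to be
    assumptions: list[str] = field(default_factory=list)  # supporting data and context
    hypothesis: str = ""        # written once the assumptions are laid out
    learnings: list[str] = field(default_factory=list)    # filled in after reporting

# The free-trial example from above, captured as a brief.
trial_conversion = ExperimentBrief(
    goal="Double free trial to paid conversion in the first 30 days",
    primary_metric="free_trial_to_paid_30d",
    baseline=0.02,
    target=0.04,
)
```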

Next up is the hypothesis, starting with the assumptions behind it.

Assumptions that back up your hypotheses

Before throwing your hypothesis out there, it’s important to give it as much context and supporting data as possible.

For our free trial conversion rate objective, for example, it’s important to have a complete understanding of user needs.

In the free trial part of the funnel, users are still in the discover and try phase of their experience with your product.

So in your hypothesis doc you can share supporting data that shows free trial users are more likely to convert to paid if they have successfully experienced a series of key moments of delight.

Hypothesis example

Free trial signups who are segmented by activity and receive a trigger-based onboarding series – specific to what they’ve completed in the product – are more likely to achieve a series of moments of delight, and are thus more likely to convert to paid than users who receive the current onboarding series.
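As a loose illustration of the difference between the two series (the event and email names here are invented), trigger-based onboarding picks the next email from what a user has actually done in the product, while the current series only looks at how many days have passed:

```python
# Invented event names, purely to illustrate trigger-based vs time-based onboarding.
DELIGHT_MOMENTS = ["created_first_project", "invited_teammate", "published_report"]

NEXT_EMAIL_FOR = {
    "created_first_project": "email_invite_your_team",
    "invited_teammate": "email_publish_first_report",
    "published_report": "email_upgrade_to_paid",
}

def next_trigger_email(completed_events: set[str]) -> str:
    """Pick the next onboarding email based on the last delight moment reached."""
    for moment in reversed(DELIGHT_MOMENTS):
        if moment in completed_events:
            return NEXT_EMAIL_FOR[moment]
    return "email_getting_started"  # nothing completed yet

def next_time_based_email(days_since_signup: int) -> str:
    """The current series, by contrast, only looks at elapsed time."""
    return ["email_day_1", "email_day_3", "email_day_7"][min(days_since_signup // 3, 2)]
```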

Implementation

Each experiment should have a dependent variable (conversion rate of free trials to paid), and an independent variable (the onboarding email series).

I also encourage folks to take a cohort approach to implementation: split the audience in half into a control group and a test group. The control group continues to receive the current onboarding email series, and the test group is part of the experiment.

In our example, we would split all signups into a test group and a control group. The control group receives the current time-based emails, the test group receives the new trigger-based emails, and we compare conversion rates while monitoring other metrics like product behaviours.
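Mechanically, the split and the comparison can be as simple as the sketch below: a deterministic 50/50 assignment plus a two-proportion z-test on the conversion rates. The numbers are placeholders based on the 2% → 4% example, and the experiment name is made up.

```python
import hashlib
from math import erf, sqrt

def assign_group(user_id: str, experiment: str = "onboarding-triggers") -> str:
    """Deterministically split signups 50/50 into control and test."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """z statistic and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Placeholder numbers: 1,000 signups per group, 2% vs 4% conversion.
z, p = two_proportion_z(conv_a=20, n_a=1000, conv_b=40, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```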

Sharing insights across your team

I’ve used VWO in the past and love some of their collaboration features.

You can take observations from past experiments, or data you’ve uncovered in other tools, and log them in VWO. Eventually you could have a mini filterable database of observations that folks on your team can prioritize or sift through.

These observations lead to hypotheses, which are also a unique object in VWO. Before launching an experiment, you first create a hypothesis, then link it to your experiment.

After your experiment has run its course, the last object in VWO is Learnings, or insights. This builds a knowledge base of learnings from the tests you’ve run, so everyone is in the know.

Ideas in your company come from and get stored in all kinds of different places: Word docs, project management backlogs, emails… VWO adds a bit of structure to everything that touches experimentation.
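You don’t need VWO to get the benefit of that structure, though. A homegrown version can be as simple as linked records – the shape below is my own sketch, not VWO’s actual data model:

```python
from dataclasses import dataclass, field

# A homegrown sketch of the observation -> hypothesis -> experiment -> learning chain.
# Not VWO's data model, just an illustration of the structure it adds.
@dataclass
class Observation:
    source: str        # where the insight came from (past test, analytics tool, support ticket)
    note: str

@dataclass
class Hypothesis:
    statement: str
    observations: list[Observation] = field(default_factory=list)

@dataclass
class Learning:
    experiment: str    # which experiment produced it
    hypothesis: Hypothesis
    outcome: str       # what the team should remember next time

knowledge_base: list[Learning] = []  # the filterable list everyone can sift through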

✌️


Intro music by Wowa via Unminus
Cover art created with help via Undraw




