How do you increase engagement and take your digital program to the next level? TESTING.

Running tests is an incredibly powerful way to build a segmented, data-driven program. No matter how many educated guesses we make, we can never know what is best for our particular fundraising and advocacy programs until we test.

Ready to embark on your quest? These steps and free tools will guide you through the process.


Step 1: Set Your Goals

It's time to think big… what do you want from your test?

  • Higher response rates?
  • Increased new donor pool?
  • Sustainer program growth?

Identifying areas for improvement in your program is often the best way to prioritize your testing goals. Maybe your open rate has been gradually declining, or your average gift is below the benchmark for your sector. Connect your goals to your underperforming metrics.


Step 2: Research

Now that you've identified your goal(s), it's time to start collecting additional data and ideas. Look at email and website performance to spot usability issues: where does the process break down? Look to other nonprofits' programs for testing inspiration.

Keep up with industry trends by reading blogs and articles (like this one!), attending webinars, going to conferences, and connecting with other organizations, both within and outside your sector.


Step 3: Hypothesize

Now that you've pinned down your overall goal and compared your data against industry benchmarks, it's time to get serious about this test by brainstorming a hypothesis.

The purpose of a hypothesis is to make sure you're thinking critically about how best to connect your goal to the test itself. If your clickthrough rate is down and you've seen examples of highlighted calls to action or more prominent buttons, perhaps you want to test one of those changes in your email design.

If your response rate is low, maybe you want to test removing donation form fields or limiting text on your donation page.

To keep your test scientific, change only one variable at a time. For example, if you're testing an email subject line, both emails should launch at the same time with identical content, so the subject line is the only difference.


Step 4: Set Up the Test

Use our Sample Size Calculator tool to figure out how large your test audiences need to be to produce a statistically significant result.

First, pick your confidence level. We suggest choosing 90% or higher to keep the risk low that any difference you see is just due to chance.

Second, enter the response rate you anticipate for your test, then the response rate you anticipate for your control. We'll use that information to recommend the sample size necessary for a statistically significant result.
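
Curious what's under the hood? Calculators like this typically use the standard two-proportion sample size formula. Here's a minimal Python sketch; the two-sided test and the 80% power level are our assumptions, not something the calculator specifies:

```python
# A rough sketch of two-proportion sample-size math, NOT the
# calculator's actual internals. Assumes a two-sided test at the
# chosen confidence level and 80% statistical power.
from scipy.stats import norm

def sample_size_per_group(p_control, p_test, confidence=0.90, power=0.80):
    """Approximate contacts needed in EACH group to detect the gap
    between two anticipated response rates."""
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value
    z_beta = norm.ppf(power)
    p_bar = (p_control + p_test) / 2
    term = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
            + z_beta * (p_control * (1 - p_control)
                        + p_test * (1 - p_test)) ** 0.5)
    return int(term ** 2 / (p_test - p_control) ** 2) + 1

# e.g. your control converts at 1.0% and you hope the test hits 1.5%
print(sample_size_per_group(0.010, 0.015))  # roughly 6,100 per group
```

Notice that small anticipated differences demand very large audiences, which is one reason bold, clearly different test versions are easier to read.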

Third-party software from Optimizely, Facebook or Google Optimize can randomize your testing audience for you. Some CRMs will also randomize automatically when setting up a test.

If you don't have these tools at your disposal, randomizing by hand is simple. First, export your list to Excel and create a new column. Fill that column with random numbers by entering =RAND() in each row. Sort all of your data by that column, then split the list in half. Import each half into a different group or segment in your email platform.
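
If you'd rather skip the spreadsheet, the same random split takes only a few lines of Python. The file and column contents here are placeholders; substitute whatever your CRM actually exports:

```python
# Randomly split an exported contact list into two equal groups.
# "email_list.csv" is a placeholder for your CRM export.
import pandas as pd

contacts = pd.read_csv("email_list.csv")

# Shuffle all rows; a fixed seed makes the split reproducible.
shuffled = contacts.sample(frac=1, random_state=42)

# Cut the shuffled list in half and save each group for import.
midpoint = len(shuffled) // 2
shuffled.iloc[:midpoint].to_csv("group_a.csv", index=False)
shuffled.iloc[midpoint:].to_csv("group_b.csv", index=False)
```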

If you plan to look at test results by segment or track specific behavior (like button clicks), make sure you set everything up to do this ahead of time. Perhaps this means different email segments in your CRM, or different tracking codes on your donation page URL, as in the sketch below.
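
For instance, here's one way to generate distinct tracking URLs per version. The base URL and the "src" parameter are hypothetical placeholders; use whatever source-code field your donation platform actually recognizes:

```python
# Build one tracking URL per test version so gifts can be
# attributed later. URL and parameter name are hypothetical.
from urllib.parse import urlencode

BASE_URL = "https://example.org/donate"

for version in ("control", "test"):
    query = urlencode({"src": f"email_subject_test_{version}"})
    print(f"{BASE_URL}?{query}")
```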


Step 5: Analyze

Once you've gathered your testing data, you can use our Statistical Calculator to see whether you've reached a significant result.

To use the calculator, enter the number of contacts and the number of gifts (or whatever response you're measuring) for both the control and test versions.

Statistical significance measures how likely it is that the results of a test are real and repeatable, not just due to chance. A confidence level of 92% means that if there were truly no difference between your versions, you'd see a gap this large only about 8% of the time.
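
Under the hood, a calculator like this typically runs a two-proportion z-test. Here's a sketch using statsmodels, with made-up gift counts purely for illustration:

```python
# Significance check for an A/B test, sketched with a standard
# two-proportion z-test. All numbers below are invented examples.
from statsmodels.stats.proportion import proportions_ztest

gifts = [120, 95]          # gifts from the test and control groups
contacts = [10_000, 10_000]  # contacts mailed in each group

z_stat, p_value = proportions_ztest(gifts, contacts)  # two-sided by default
print(f"p-value: {p_value:.3f}")

# A p-value of 0.10 or less corresponds to 90%+ confidence.
if p_value <= 0.10:
    print("Statistically significant at the 90% confidence level.")
else:
    print("Not significant; consider a larger sample or a bolder change.")
```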

Now that you have your statistically significant results, you can apply them to your program. The test quest is an ongoing one: repeat this process to make iterative changes that optimize your donors' journeys.