Testing experiences
Unless offers multiple ways to test the success of your experiences: split tests, testing multiple variations, continuous validation with a control group, and page goals.
You can create an experience (on-site or component), adjust its settings as usual, click Save, and move on to the Testing tab. Alternatively, you can make your changes in the editor first and then go to the Testing tab; that way, any variations you add later will also include the changes you made to the first variation.
Once on the Testing tab, you will find the following options (a sketch of how such traffic splits typically work follows the list):
- A/B test against original: Use this mode to rapidly compare your experience with a control group, using a 50% traffic split between the new experience and the original website.
- Test multiple variations (A/B/n): Use this mode to compare multiple versions of an experience. Visitors will be distributed evenly across variations.
- Continuous validation: Use this mode for successful experiences. Using a control group of 10%, you can still check whether the success rate changes over time.
- No control group: Use this mode for successful experiences that you want to be shown to 100% of your visitors. Only use this if you are certain that the success of the experience will not fluctuate over time.
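Unless assigns visitors to these buckets for you, but if you are curious how such splits typically work, here is a minimal sketch of deterministic, hash-based bucketing. The visitor ID, function names, and split shares are illustrative assumptions, not Unless's actual implementation; the key idea is that the same visitor always lands in the same bucket, so returning visitors keep seeing the same variation.

```typescript
// Minimal sketch of deterministic traffic splitting (illustrative only,
// not Unless's actual implementation). A visitor ID is hashed to a number
// in [0, 1); that number decides which bucket the visitor falls into.

type Bucket = "control" | string;

// Simple string hash mapped to the range [0, 1).
function hashToUnit(visitorId: string): number {
  let h = 0;
  for (const ch of visitorId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // unsigned 32-bit hash
  }
  return h / 2 ** 32;
}

// Assign a visitor given a control share (e.g. 0.5 for an A/B test,
// 0.1 for continuous validation) and the variations that share the rest.
function assignBucket(
  visitorId: string,
  controlShare: number,
  variations: string[]
): Bucket {
  const x = hashToUnit(visitorId);
  if (x < controlShare) return "control";
  // Spread the remaining traffic evenly across the variations.
  const slice = (1 - controlShare) / variations.length;
  const index = Math.min(
    Math.floor((x - controlShare) / slice),
    variations.length - 1
  );
  return variations[index];
}

// 50/50 A/B test against the original:
assignBucket("visitor-123", 0.5, ["variation A"]);
// Continuous validation with a 10% control group:
assignBucket("visitor-123", 0.1, ["variation A"]);
```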
We generally recommend starting with one of the first two options, since the even traffic split gets each variation to statistical significance faster. Once an experience has proven itself, you can continue with a small control group (10%) for continuous validation, or even with no control group if you are certain of its benefit.
If you choose "Test multiple variations (A/B/n)", you will need to add one or more variations via "Add a variation". When adding a new variation, a pop-up will appear where you can either add a control variation or create a new, named variation. These variations will then be listed on the Testing tab.
You can now open the editor by choosing one of the variations from the dropdown. Note that the control variation will not show up in the dropdown, since it cannot be edited. Once in the editor, make your changes to the variation, then go back and repeat the steps for the next variation. Remember to save and publish your changes each time!
If you selected any of the other options, you don't need to add a variation; you can open the editor directly, make your changes, save, and publish.
Deleting a variation is also possible, but keep in mind that it cannot be undone. After deletion, all visitors will be redistributed across the remaining variations. Insights for deleted variations will still be available.
Setting a page goal
In addition to the testing options mentioned above, we also offer the option to set up a page-specific goal. Let's say you made changes to a page and want to see whether they result in a CTA button being clicked more often: you can set up a page goal for exactly that!
In Unless, most of the results you see via the Insights tab are about the overall user experience, often in relation to a site-wide goal. A page goal, on the other hand, allows you to get more specific by setting a goal for a single experience.
You start by creating a new on-site experience; once you are in the editor, you will see the option to Set up page goal on the right side, as marked in the image below.
After making the change(s) you wanted, you need to select an element on the page to track as the goal of your experience. For this, you can use the target button: click it once, hover over the element you'd like to select, and click again. That's it! Don't forget to save, create a new version, publish, and start your experience.
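Conceptually, a page goal boils down to counting clicks on the element you selected; the editor wires this up for you. The sketch below only illustrates that idea, using a hypothetical `recordPageGoalConversion` helper and an assumed CSS selector rather than Unless's actual tracking code.

```typescript
// Illustrative sketch of click-goal tracking (hypothetical helper and
// selector, not Unless's actual tracking code). The idea: listen for
// clicks anywhere on the page and record a conversion whenever the click
// lands on (or inside) the element chosen as the page goal.

const PAGE_GOAL_SELECTOR = "#cta-button"; // assumed selector for the chosen element

function recordPageGoalConversion(experienceId: string): void {
  // In reality this would send an event to an analytics backend;
  // here we just log it.
  console.log(`conversion recorded for experience ${experienceId}`);
}

document.addEventListener("click", (event) => {
  const target = event.target as Element | null;
  if (target?.closest(PAGE_GOAL_SELECTOR)) {
    recordPageGoalConversion("my-onsite-experience");
  }
});
```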
To see the results of your experience(s) with a page goal, go to the Experiences tab of the Insights page and scroll down to Experiences with page goal. Here you will see the percentage of visitors in the control group, the number of participants (visitors who have seen the experience), and the total number of conversions. You can also click Details to get more information.
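To interpret these numbers, you can compare the conversion rate of participants against that of the control group. The figures below are made up, and Unless reports its own statistics under Details; this is just a sketch of a standard two-proportion comparison.

```typescript
// Sketch of comparing conversion rates between control group and
// participants with a two-proportion z-test (made-up numbers; Unless
// shows its own statistics under Details).

function conversionRate(conversions: number, visitors: number): number {
  return conversions / visitors;
}

// z-score for the difference between two observed conversion rates.
function twoProportionZ(
  convA: number, visitorsA: number,
  convB: number, visitorsB: number
): number {
  const pA = conversionRate(convA, visitorsA);
  const pB = conversionRate(convB, visitorsB);
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se;
}

// Example: 2,000 control visitors with 100 conversions (5%) versus
// 2,000 participants with 130 conversions (6.5%).
const z = twoProportionZ(100, 2000, 130, 2000);
// |z| > 1.96 corresponds to roughly 95% confidence that the difference is real.
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "not significant");
```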
Why experiments are tricky
Generally, testing your experiences is a good idea. However, running proper experiments requires a lot of traffic, and the more experiences you test, the harder it gets to reach statistical significance. Also, the longer you run a test, the higher the risk of results being polluted by external factors. Lastly, experiences influence each other, so with every additional experience it gets harder to pinpoint what caused a dip or uplift in your goals.
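To get a feel for how much traffic "a lot" means, you can use a standard sample-size approximation for comparing two conversion rates. The baseline rate, uplift, confidence, and power values below are illustrative assumptions, not Unless defaults.

```typescript
// Rough sample-size estimate per variation for detecting an uplift in
// conversion rate at 95% confidence and 80% power (illustrative numbers).

function sampleSizePerVariation(baselineRate: number, upliftRate: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance =
    baselineRate * (1 - baselineRate) + upliftRate * (1 - upliftRate);
  const n =
    ((zAlpha + zBeta) ** 2 * variance) / (baselineRate - upliftRate) ** 2;
  return Math.ceil(n);
}

// Detecting a lift from 5% to 6% conversion needs roughly this many
// visitors per variation, and every extra variation adds to the total.
console.log(sampleSizePerVariation(0.05, 0.06)); // roughly 8,000 per variation
```

With several variations plus a control, the total traffic requirement multiplies accordingly, which is one more reason to keep the number of simultaneous tests small.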