A/B testing is a randomized experiment in which two flow variants, A and B, run at the same time to determine which version better improves your business metrics. Use this feature when you want to test a hypothesis about a flow setup that you think will optimize your payment results.
An example of a hypothesis: I think Payment Service Provider X will have higher authorization rates and lower costs than Payment Service Provider Y.
Even if you think your hypothesis is obvious, we highly encourage you to test it out before making significant changes in your flows.
The transactions of a flow under an A/B Testing are randomly sampled between variants A and B of the experiment. Some transactions are allocated to a control group that acts as a sanity check, letting you evaluate whether the sampling is biased.
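Conceptually, the random sampling described above works like a weighted draw per transaction. The sketch below illustrates the idea in Python; the group names and split percentages are illustrative assumptions, not the platform's actual configuration or internals:

```python
import random
from collections import Counter

# Illustrative split: the real percentages are set when you configure
# the experiment (this is an assumed example, not a platform default).
GROUPS = ["A", "B", "control"]
WEIGHTS = [0.45, 0.45, 0.10]

def assign_group(rng=random):
    """Randomly assign an incoming transaction to variant A,
    variant B, or the control group."""
    return rng.choices(GROUPS, weights=WEIGHTS, k=1)[0]

# Simulate allocating 10,000 transactions
counts = Counter(assign_group() for _ in range(10_000))
print(counts)  # roughly 45% A, 45% B, 10% control
```

Because each transaction is assigned independently at random, the control group should behave like the overall population; a control group that diverges noticeably from both variants suggests the sampling is biased.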
You need at least one active flow to create an A/B Testing. You then define the hypothesis you want to test in that flow, the percentage of the total volume you want to allocate to the experiment, and the variant B settings that reflect your hypothesis. You can add, remove, edit, and reorder all templates of your experiment flow.
Finally, publish your A/B Testing to start running the test on actual transactions and evaluate the results. You can monitor the performance of all your A/B Testings with Flow Metrics. The metrics for the experiment are displayed alongside those of your regular flows in the Flows section of your Console.
Be careful not to jump to a conclusion too quickly. As with any A/B test, you need statistical significance for conclusive results, so it may take some time before your experiment has enough transactions. Once your experiment reaches statistical significance, you can promote variant B to process all the transactions of that flow, or keep using variant A.