On the SaveLocal team, we believe in analytics, and we believe in the lean development philosophy. This post covers one of our lessons learned about feature testing and creating an effective hypothesis.
As part of our lean practices, we often release limited-scope features into our product. These features are typically limited in total size and to a specific customer segment; for example, 200 total deals scheduled by existing customers, with deal prices under $100.
In the past, when analyzing the results of A/B tests, we could easily pick a winner. If A converts at 50% and B converts at 60%, it’s easy to answer the question, “Which converts better?” With feature testing, however, the answer is not always as straightforward.
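For a straight A/B comparison like that, the analysis can be as small as a two-proportion significance check. Below is a minimal sketch in Python; the visitor counts and conversion numbers are made up for illustration, not taken from one of our tests.

```python
from math import sqrt

# Hypothetical results: visitors and conversions for each variant
a_visitors, a_conversions = 1000, 500   # variant A: 50% conversion
b_visitors, b_conversions = 1000, 600   # variant B: 60% conversion

p_a = a_conversions / a_visitors
p_b = b_conversions / b_visitors

# Pooled proportion and standard error for a two-proportion z-test
p_pool = (a_conversions + b_conversions) / (a_visitors + b_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 roughly corresponds to p < 0.05 for a two-sided test
```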
Take, for instance, the payment feature for SaveLocal. When we talked to merchants, we found that the #1 reason they decided not to schedule a deal with SaveLocal was the way potential customers would pay for their deals. So we A/B tested an option in which merchants could have a check mailed to them at the end of their deal rather than using a third-party online payment option. In addition, the consumer-facing checkout page would use a standard credit card checkout instead of a third-party hosted pay page.
We developed the new option to get paid by check and released it to our test segment. Merchants started selecting the new option, and we were collecting data points in our analytics tool. Instead of building out automation for mailing merchants their checks, we handled some of the logistics manually, with buy-in from our finance and accounting teams. After all, this was a limited-run test.
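The tracking itself was simple: tag each funnel event with the payment option so the data can be segmented later. The sketch below is illustrative only; the event names and the `track` stub are assumptions, not our actual analytics client or schema.

```python
# Stub for the analytics client; in practice this would send the event
# to the analytics tool rather than print it.
def track(event, properties):
    print(event, properties)

def payment_option_selected(merchant_id, option):
    # option is "third_party" or "check" (hypothetical values)
    track("payment_option_selected",
          {"merchant_id": merchant_id, "payment_option": option})

def deal_scheduled(merchant_id, deal_id, option):
    track("deal_scheduled",
          {"merchant_id": merchant_id, "deal_id": deal_id,
           "payment_option": option})

# Example: one merchant moving through the funnel with the check option
payment_option_selected(merchant_id=42, option="check")
deal_scheduled(merchant_id=42, deal_id=7, option="check")
```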
One day, our product manager asked, “So which one is better?” It was a seemingly simple question, but when we went to aggregate the data, the answer was not clear. We had collected enough data to know the funnel conversion for the third-party online payment option versus the check payment option, but the question left a lot of room for interpretation. Did we want to know which option converted better overall? Did we want to know which deals performed better on the consumer end?
When developing the feature story, we had neglected to think about how we would report on the results.
After discussing what we actually wanted to measure, and aggregating our analytics and database data, we determined that our overall scheduling conversion funnel had improved significantly, and that consumer checkout conversion was also higher.
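Had we defined the report when we scoped the story, the aggregation itself would have been small. Here is a hedged sketch of the kind of per-option funnel rollup involved; the event shapes mirror the hypothetical tracking calls above, not our real schema, and the sample events are fabricated.

```python
from collections import defaultdict

# Hypothetical raw events; in practice these would come from an analytics
# export or a database query.
events = [
    {"event": "payment_option_selected", "merchant_id": 1, "payment_option": "check"},
    {"event": "deal_scheduled",          "merchant_id": 1, "payment_option": "check"},
    {"event": "payment_option_selected", "merchant_id": 2, "payment_option": "third_party"},
]

started, finished = defaultdict(set), defaultdict(set)
for e in events:
    option = e["payment_option"]
    if e["event"] == "payment_option_selected":
        started[option].add(e["merchant_id"])
    elif e["event"] == "deal_scheduled":
        finished[option].add(e["merchant_id"])

for option, merchants in started.items():
    rate = len(finished[option]) / len(merchants)
    print(f"{option}: {rate:.0%} of merchants who selected it scheduled a deal")
```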
From this experience, we learned that when scoping a feature test, it is critical to define up front the report that will be used to analyze the results. Not only does this tell the developers how the data will need to be captured and reported (analytics events, database queries, or both), it also forces product management to think critically about what they want to measure and how they are going to make their decisions.
What other lessons learned have you had with analyzing and reporting on your analytics data?