My goal with this blog

I write about relevant changes in the way that people use the web and how startups are built to provide services and products for this ever-changing, wonderful thing we still know as "the web." As a former entrepreneur turned early-stage investor, my greatest hope is for this to be useful to other folks who are like me, so that they can avoid some of the mistakes I've made.

A/B testing is no panacea

My partner Josh had a great blog post this weekend that went straight at one of my recent pet peeves with the world of Internet startups: the almost religious adherence to a particular methodology for developing your product. In this case, he takes on A/B testing, wondering whether relying heavily on this type of iterative optimization can ever come close to achieving what product vision can.

These days a lot of folks seem to like the idea of "12 simple recipes for a successful Internet startup." The reality is that there is no such certainty, and certainly not in any one easily repeatable process. For every Zynga that has supposedly risen via A/B tests and constant tweaking, there is a Facebook that was nearly burned at the stake for introducing a controversial feature that would never have resulted from iterative development but that later went on to redefine the entire core experience (anyone remember the outcry around the News Feed's introduction?).

One of the commenters on Josh's post points to the classic "local minima" problem with iterative optimization: getting stuck at a less-than-ideal outcome because your testing approach doesn't accommodate large, discontinuous jumps in how UI, messaging, and calls to action are presented. And while it's true that good testing hygiene can help overcome some of this, the reality is that in any relatively complex app, the interdependencies between the different parts of a flow can quickly cause a combinatorial explosion of things to test, absent a coherent product vision.
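To make the local-minima point concrete, here is a minimal sketch in Python (a toy model, nothing anyone actually ran): a made-up conversion function where two page elements only pay off in combination, so the one-element-at-a-time A/B approach settles at an inferior combination, while exhaustive search over every combination finds the jump. All element names and rates below are invented for illustration.

```python
from itertools import product

# Hypothetical page elements and their variants; names are illustrative only.
VARIANTS = {"headline": ["A", "B"], "cta": ["A", "B"], "layout": ["A", "B"]}

def conversion_rate(combo):
    """Toy conversion model (entirely made up) with an interaction:
    headline B and layout B each test worse alone, but win big together."""
    rate = 0.050
    if combo["headline"] == "B":
        rate -= 0.004
    if combo["layout"] == "B":
        rate -= 0.003
    if combo["cta"] == "B":
        rate += 0.001
    if combo["headline"] == "B" and combo["layout"] == "B":
        rate += 0.015  # the discontinuous jump single-element tests never reach
    return rate

def greedy_ab(start):
    """Sequential one-element-at-a-time A/B testing: keep a change only
    if it wins its test in isolation."""
    current = dict(start)
    improved = True
    while improved:
        improved = False
        for element, options in VARIANTS.items():
            for option in options:
                candidate = {**current, element: option}
                if conversion_rate(candidate) > conversion_rate(current):
                    current, improved = candidate, True
    return current

start = {element: options[0] for element, options in VARIANTS.items()}
local = greedy_ab(start)
best = max((dict(zip(VARIANTS, combo)) for combo in product(*VARIANTS.values())),
           key=conversion_rate)

print(f"greedy result: {local} -> {conversion_rate(local):.3f}")
print(f"global best:   {best} -> {conversion_rate(best):.3f}")
```

And exhaustive search is exactly what the combinatorial explosion rules out: ten elements with three variants each already means 3^10, nearly 60,000, combinations to test.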

For instance, at my last company, Tabblo, we had broken the entire lifecycle of a user of our application into four phases, an acronym we affectionately referred to as M.E.E.M. (Marketing, Engagement, Experience, Merchandising). Each phase contained a set of key actions we hoped to guide our users through and had a set of associated metrics, owned sometimes by the same folks and sometimes by different people.
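Purely for illustration, here is one way a funnel like that might be written down; the key actions, metrics, and owners below are my own invented placeholders, not Tabblo's actual ones.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    key_actions: list  # actions we want to guide users through in this phase
    metrics: list      # metrics this phase is judged by
    owner: str         # who owns these numbers

# Illustrative only: the specific actions, metrics, and owners are made up.
MEEM = [
    Phase("Marketing",     ["visit landing page"],      ["signup rate"],        "growth"),
    Phase("Engagement",    ["create a first tabblo"],   ["activation rate"],    "product"),
    Phase("Experience",    ["share with friends"],      ["weekly active rate"], "product"),
    Phase("Merchandising", ["order a printed product"], ["conversion to paid"], "commerce"),
]

for phase in MEEM:
    print(f"{phase.name:>13}: actions={phase.key_actions} "
          f"metrics={phase.metrics} owner={phase.owner}")
```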

Had we put all of our eggs in the iterative-testing basket, I think the overall experience would have greatly suffered, eventually becoming a set of disjointed gates held together by little more than suspension of disbelief and blind hope that it all still made sense. Even with judicious use of testing, our A/B results, when considered in the context of cohorts who sometimes took months to move from one phase to the next, often caused some fantastically heated arguments among the product owners.

To me, A/B tests and other iterative product development practices are like good writing fundamentals: just because you know to use active verbs, avoid adjectives, and keep sentences short doesn't mean you are going to start turning out Hemingway novels. For that you need great inspiration and tons of hard work, and even then, there is no sure thing.