Sunday, May 28, 2006

A Learned Skeptic

I confess that I have not read Jeffrey Pfeffer’s new book, but it is nearing "The Read Zone."

An excerpt from an interview with Pfeffer:

You make a case for running a lot of little experiments. You give examples of a few internet companies doing it, which is easy at some level, because of all the metrics they can run. But I think some people think, "God, run experiments in my company? I didn't do so well in science in high school. Scientific method is beyond me." Do you think there's any possibility that that's what prevents people from really looking at evidence for why they're doing something?


JP: I think it could be one reason. But I also think there's a tendency in companies to believe that if it's worth doing, we ought to do it for everybody everywhere, all the time, and roll it out in a big Program with a capital "P." The mentality is, "If we're not convinced it's going to work, we might as well not do it anywhere." So you can see in these companies the endless debate, "Should we do A, or should we do B, or should we do C?" When the obvious thing to do is try A, B, and C in different places or at different times, and see which one works best.

Think about it, if medicine was practiced this way, you'd have people sitting around, having endless debates about whether some drug in theory ought to work or not, as opposed to doing trials. Look at the way airplanes are designed. You obviously start with theory and evidence about physics and engineering, but you also design, you build prototypes, or you now build prototypes on the computer. You put them through various exercises and you try different things. This is how architects now design buildings.
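Pfeffer's "try A, B, and C in different places or at different times, and see which one works best" is, in software terms, just an A/B/C test. Here is a minimal sketch of what "seeing which one works best" can look like once the metrics come in, using made-up visitor and conversion numbers and a simple two-proportion z-test; the variant names and figures are purely illustrative, not from Pfeffer or the interview.

```python
import math

# Hypothetical results from running variants A, B, and C in different
# regions over the same two-week window (made-up numbers for illustration).
results = {
    "A": {"visitors": 5000, "conversions": 250},   # ~5.0%
    "B": {"visitors": 5200, "conversions": 312},   # ~6.0%
    "C": {"visitors": 4900, "conversions": 245},   # ~5.0%
}

def conversion_rate(r):
    return r["conversions"] / r["visitors"]

def two_proportion_z(a, b):
    """Z-statistic for the difference between two observed conversion rates."""
    p1, n1 = conversion_rate(a), a["visitors"]
    p2, n2 = conversion_rate(b), b["visitors"]
    pooled = (a["conversions"] + b["conversions"]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Rank the variants by observed conversion rate and compare the top two.
ranked = sorted(results, key=lambda k: conversion_rate(results[k]), reverse=True)
best, runner_up = ranked[0], ranked[1]

z = two_proportion_z(results[best], results[runner_up])
print(f"Best variant: {best} at {conversion_rate(results[best]):.1%}")
print(f"z vs. {runner_up}: {z:.2f}  (roughly |z| > 1.96 suggests a real difference)")
```

The point is less the statistics than the habit Pfeffer is describing: run the variants, collect the numbers, and let the comparison settle the debate instead of arguing A versus B versus C in the abstract.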
