Getting Real About “Experiments” and Learning
Part one of a three-part essay on facilitating group learning.
Last year, I went to Cincinnati to visit my sister and her family. My older nephew, Elliott, who was eight at the time, asked if I could help him with his science experiment. He was supposed to pick a project, develop a hypothesis, and run some experiments to prove or disprove it.
Elliott explained to me that earlier that year, he had participated in a pinewood derby and had lost. He wanted to figure out how to make a car that would go faster. I asked him, “What do you think would make the car go faster?”
He responded, “Making it heavier.” That seemed like an eminently reasonable hypothesis, especially coming from an eight-year-old. I helped him define the parameters of an experiment, and he constructed a car out of Legos and built a ramp from a hodgepodge of race-track parts to run his tests.
In theory, mass has nothing to do with the speed of the car. All objects accelerate under gravity at the same constant rate, so a heavier car should go down the ramp at the same speed as a lighter one.
Except that’s not true either. Wind resistance counteracts the effects of gravity, and because the same drag force decelerates a lighter car more than a heavier one, the lighter car might actually go slower. Then again, the car’s shape might matter more than its mass: better aerodynamics reduce the drag in the first place. Then there’s the issue of both the friction and alignment of the wheels. And so forth.
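To see why mass drops out of the idealized case but sneaks back in once air resistance is involved, here is a toy simulation. This is not Elliott’s setup; the ramp length, angle, and drag coefficient are made-up numbers, and the drag model (quadratic in velocity) is a simplification.

```python
import math

def descent_time(mass, length=2.0, angle_deg=30.0, drag_coeff=0.0, dt=1e-4):
    """Euler-integrate a car rolling down a frictionless ramp with optional
    quadratic air drag; returns the time to travel `length` meters."""
    g = 9.81
    a_gravity = g * math.sin(math.radians(angle_deg))  # same for any mass
    v, x, t = 0.0, 0.0, 0.0
    while x < length:
        # Drag deceleration scales as 1/mass: lighter cars lose more speed.
        a = a_gravity - (drag_coeff / mass) * v * v
        v += a * dt
        x += v * dt
        t += dt
    return t

# Without drag, descent time is independent of mass.
t_light = descent_time(mass=0.05)
t_heavy = descent_time(mass=0.15)

# With drag, the heavier car decelerates less and arrives sooner.
t_light_drag = descent_time(mass=0.05, drag_coeff=0.01)
t_heavy_drag = descent_time(mass=0.15, drag_coeff=0.01)
```

In the frictionless case the mass cancels out entirely, which is why the textbook answer says weight shouldn’t matter; the drag term is what reintroduces it.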
Wading through all of these variables would require a carefully calibrated measurement system. Suffice it to say, Elliott’s system was not very precise. When he ran his experiment initially, the lighter car went faster than the heavier car. He dutifully proceeded to conclude that weight had the opposite effect on speed than he had theorized. I suggested that he try the experiment again just to be sure. This time, the cars took the same amount of time.
He was thoroughly confused. So was I, but for different reasons. How was I supposed to explain to him all the possible reasons for his results without delving into the intricacies of physics and engineering?
It was a ridiculous thing to expect an eight-year-old to figure out, but it would have been fair to ask of a high schooler. Science is clean that way. You can set up experiments and controls, you can meticulously account for variables, and you can repeat and replicate your experiments to build confidence in your results.
This is not the case with people.
It has become in vogue in the business world to frame knowledge work around experiments and learning. This is the essence of the Lean Startup idea, but it’s not limited to lean. I’ve been as guilty of this as anyone, and I’ve been doing it for a long time now.
But what exactly does it mean to frame people-work this way? Unlike science, you do not have laboratory conditions where you can set up replicable experiments with controls. Sure, you can come up with hypotheses, but your conditions are constantly changing, and there’s usually no way to set up a control in real life.
How can you fairly draw any conclusions from your results? What are you even measuring? The realm of trying to assess “impact” or “effectiveness” or (to get very meta about it) “learning” tends to devolve into a magical kingdom of hand-waving.
The reality is that experimentation without some level of discipline and intentionality is just throwing spaghetti against the wall. The worse reality is that — even with all the discipline in the world — you may not be able to draw any reasonable, useful conclusions from your experiments. If your ultimate goal is learning, you need more than discipline and intentionality. You need humility.
In The Signal and the Noise, data analysis wunderkind Nate Silver points out how bad humans tend to be at forecasting anything reasonably complex — be it political elections or the economy. There are way too many variables, and we have way too many cognitive biases. However, we are remarkably good at predicting certain things — the weather, for example. Why can we predict the weather with a high degree of certainty but not things like the economy?
There are lots of reasons, but one of the simplest is accountability. Simply put, meteorologists are held accountable for their predictions; economists are not. Meteorologists are incentivized to improve their forecasts, whereas economists generally are not.
What does this mean for groups that are working on anything complex and are trying to learn?
First, be intentional, but hold it lightly. Know what it is you’re trying to learn or understand, and be open to something else happening entirely. Measure something. Be thoughtful about what you measure and why.
Second, be accountable. Track your learning progress. Review and build on previous results. Be transparent about how you’re doing. Don’t use “experiments” as a proxy for doing whatever you want regardless of outcome.
Third, be humble. Despite your best efforts, you may not be able to conclude anything from your experiments. Or you might draw “convincing” conclusions, validate them again and again, and only then discover that you are totally, entirely wrong.
See also parts two, “Documenting Is Not Learning,” and three, “The Key to Effective Learning? Soap Bubbles!”
I love this idea of being intentional and holding it lightly, as in don’t get too attached to the questions or the answers. In complex social systems, it also makes rigor more attractive.
Thanks, Sujatha! It’s a very core principle in my work, and it’s hard in practice to do consistently.
Can you explain what you mean by making “rigor more attractive”?
I totally agree with the complexity involved in learning scenarios. Especially group learning. And networks? So much VUCA (volatility, uncertainty, complexity, ambiguity) in large datasets that what must be solved or resolved is always in a state of formula-defying change. Thank you for your observations, Sujatha. Combining rigor and humility gets to the heart of group learning, I think. This is why I prefer the phrase “Hold on tightly, let go lightly.” The holding tightly is the rigor, the letting go lightly is the humility.
Fancy the impacts of incentivizing accurate economic forecasting!