Getting Real About “Experiments” and Learning

Elliott's Science Project

Part one of a three-part essay on facilitating group learning.

Last year, I went to Cincinnati to visit my sister and her family. My older nephew, Elliott, who was eight at the time, asked if I could help him with his science experiment. He was supposed to pick a project, develop a hypothesis, and run some experiments to prove or disprove it.

Elliott explained to me that earlier that year, he had participated in a pinewood derby and had lost. He wanted to figure out how to make a car that would go faster. I asked him, “What do you think would make the car go faster?”

He responded, “Making it heavier.” That seemed like an eminently reasonable hypothesis, especially coming from an eight-year-old. I helped him define the parameters of an experiment, and he constructed a car out of Legos and a ramp using a hodgepodge of race track parts to run his tests.

In theory, mass has nothing to do with the speed of the car. The only thing that matters is the acceleration due to gravity, which is the same for every object regardless of its mass. A heavier car should go down the ramp at the same speed as a lighter one.

Except that’s not true either. Air resistance counteracts the effects of gravity, and because the same drag force slows a lighter car more than a heavier one, a lighter car might actually end up slower. Then again, the aerodynamics of the car might have a bigger effect on its speed than its mass does. Then there’s the friction and alignment of the wheels. And so forth.
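For the physics-minded, here is a minimal sketch of why the answer isn’t obvious, assuming the track is a straight incline at angle θ and ignoring wheel friction:

```latex
% Idealized ramp (no drag, no friction): Newton's second law along the incline.
% Mass cancels, so a heavy car and a light car reach the bottom at the same speed.
m a = m g \sin\theta \quad\Longrightarrow\quad a = g \sin\theta

% With aerodynamic drag F_d (determined mostly by shape and speed, not mass),
% the same drag force slows a lighter car more, so mass matters after all.
a = g \sin\theta - \frac{F_d}{m}
```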

Wading through all of these variables would require a carefully calibrated measurement system. Suffice it to say, Elliott’s system was not very precise. When he ran his experiment initially, the lighter car went faster than the heavier car. He dutifully concluded that weight had the opposite effect on speed from the one he had hypothesized. I suggested that he try the experiment again just to be sure. This time, the cars took the same amount of time.

He was thoroughly confused. So was I, but for different reasons. How was I supposed to explain to him all the possible reasons for his results without delving into the intricacies of physics and engineering?

It was a ridiculous thing to expect an eight-year-old to figure out, but it would have been fair to ask of a high schooler. Science is clean that way. You can set up experiments and controls, you can meticulously account for variables, and you can repeat and replicate your experiments to build confidence in your results.

This is not the case with people.

It has become in vogue in the business world to frame knowledge work around experiments and learning. This is the essence of the Lean Startup idea, but it’s not limited to lean. I’ve been as guilty of this as anyone, and I’ve been doing it for a long time now.

But what exactly does it mean to frame people-work this way? Unlike science, you do not have laboratory conditions where you can set up replicable experiments with controls. Sure, you can come up with hypotheses, but your conditions are constantly changing, and there’s usually no way to set up a control in real life.

How can you fairly draw any conclusions from your results? What are you even measuring? The realm of trying to assess “impact” or “effectiveness” or (to get very meta about it) “learning” tends to devolve into a magical kingdom of hand-waving.

The reality is that experimentation without some level of discipline and intentionality is just throwing spaghetti against the wall. The worse reality is that — even with all the discipline in the world — you may not be able to draw any reasonable, useful conclusions from your experiments. If your ultimate goal is learning, you need more than discipline and intentionality. You need humility.

In The Signal and the Noise, data analysis wunderkind Nate Silver points out how bad humans tend to be at forecasting anything reasonably complex — be it political elections or the economy. There are way too many variables, and we have way too many cognitive biases. However, we are remarkably good at predicting certain things — the weather, for example. Why can we predict the weather with a high degree of certainty but not things like the economy?

There are lots of reasons, but one of the simplest is accountability. Simply put, meteorologists are held accountable for their predictions; economists are not. Meteorologists are incentivized to improve their forecasts, whereas economists generally are not.

What does this mean for groups that are working on anything complex and are trying to learn?

First, be intentional, but hold it lightly. Know what it is you’re trying to learn or understand, and be open to something else happening entirely. Measure something. Be thoughtful about what you measure and why.

Second, be accountable. Track your learning progress. Review and build on previous results. Be transparent about how you’re doing. Don’t use “experiments” as a proxy for doing whatever you want regardless of outcome.

Third, be humble. Despite your best efforts, you may not be able to conclude anything from your experiments. Or you might draw “convincing” conclusions, validate them again and again, and only then discover that you are totally, entirely wrong.

See also parts two, “Documenting Is Not Learning,” and three, “The Key to Effective Learning? Soap Bubbles!”

Maximizing Collective Intelligence Means Giving Up Control

Ant City

Today marks the 45th anniversary of the Mother of All Demos, where technologies such as the mouse and hypertext were unveiled for the first time. I wanted to mark this occasion by writing about collective intelligence, which was the driving motivation of the mouse’s inventor (and my mentor), Doug Engelbart, who passed away this past July.

Doug was an avid churchgoer, but he didn’t go because he believed in God. He went because he loved the music.

He had no problem discussing his beliefs with anyone. He once told me a story about a conversation he had struck up with a man at church, who kept mentioning “God’s will.” Doug asked him, “Would you say — when it comes to intelligence — that God is to man as man is to ants?”

“At least,” the man responded.

“Do you think that ants are capable of understanding man’s will?”

“No.”

“Then what makes you think that you’re capable of understanding God’s will?”

While Doug is best known for what he invented — the mouse, hypertext, outlining, windowing interfaces, and so on — the underlying motivation for his work was to figure out how to augment collective intelligence. I’m pleased that this idea has become a central theme in today’s conversations about collaboration, community, collective impact, and tackling wicked problems.

However, I’m also troubled that many seem not to grasp the point that Doug made in his theological discussion. If a group is behaving collectively smarter than any individual, then it — by definition — is behaving in a way that is beyond any individual’s capability. If that’s the case, then traditional notions of command-and-control do not apply. The paradigm of really smart people thinking really hard, coming up with the “right” solution, then exerting control over other individuals in order to implement that solution is faulty.

Maximizing collective intelligence means giving up individual control. It also often means giving up on trying to understand why things work.

Ants are a great example of this. Anthills are the result of collective behavior, not the machinations of some hyperintelligent ant.

In the early 1980s, a political scientist named Robert Axelrod organized a tournament in which he invited people to submit computer programs to play the Iterated Prisoner’s Dilemma, a twist on the classic game-theory experiment in which the same two prisoners play the game over and over again.

In the original game, the prisoners will never see each other again, so there is no cost to screwing over the other person. This changes in the Iterated Prisoner’s Dilemma: the prospect of future rounds creates an incentive to cooperate. Axelrod was using the game as a way to understand the nature of cooperation more deeply.

As it turned out, one algorithm completely destroyed the competition at Axelrod’s tournament: Tit for Tat. Tit for Tat followed three basic rules (see the sketch after this list):

  • Trust by default.
  • Reciprocate: do unto others as they do unto you.
  • Forgive easily.
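To make those rules concrete, here is a minimal Python sketch of Tit for Tat playing an iterated match. This is my own illustration, not Axelrod’s tournament code; the payoff values are the standard ones from the literature, and the always-defect opponent is just a hypothetical stand-in.

```python
# A minimal sketch (not Axelrod's original code) of Tit for Tat
# in an Iterated Prisoner's Dilemma, using the standard payoff values.

COOPERATE, DEFECT = "C", "D"

# (my move, their move) -> my payoff
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # reward for mutual cooperation
    (COOPERATE, DEFECT): 0,     # sucker's payoff
    (DEFECT, COOPERATE): 5,     # temptation to defect
    (DEFECT, DEFECT): 1,        # punishment for mutual defection
}

def tit_for_tat(my_history, their_history):
    """Trust by default, then simply repeat the opponent's last move."""
    if not their_history:
        return COOPERATE       # rule 1: trust by default
    return their_history[-1]   # rules 2 and 3: reciprocate, and forgive
                               # as soon as the opponent cooperates again

def always_defect(my_history, their_history):
    """A hypothetical opponent that never cooperates."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated match and return each player's total score."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (199, 204): loses narrowly, never badly
print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation throughout
```

Notice that Tit for Tat never beats any single opponent head to head; it wins tournaments by cooperating its way to consistently high scores against everyone, which is exactly the kind of collective result that resists individual cleverness.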

Axelrod was intrigued by the simplicity of Tit for Tat and by how easily it had trounced its competition. He decided to organize a follow-up tournament, expecting that someone would find a way to improve on Tit for Tat. Even though everyone was gunning for the previous tournament’s winner, Tit for Tat again won handily. It was a clear example of how a set of simple rules could produce collectively intelligent behavior that is highly resistant to the best individual efforts to understand and outsmart it.

There are lots of other great examples of this. Prediction markets consistently outperform punditry at forecasting everything from elections to finance. Nate Silver’s perfect forecast of the 2012 presidential election (not a prediction market, but similar in spirit) was a recent case in point. Similarly, there have been several attempts to build a service that outperforms Wikipedia by “correcting” its flaws. All have echoed the approaches people took to try to beat Tit for Tat. All have failed.

The desires to understand and to control are fundamentally human. It’s not easy to rein those instincts in. Unfortunately, if we’re to figure out ways to maximize our collective intelligence, we must find a balance between doing what we do best and letting go. It’s very hard, but it’s necessary.

Remembering Doug today, I’m struck — as I often am — by how the solution to this dilemma may be found in his stories. While he was agnostic, he was still spiritual. Spirituality and faith are about believing in things we can’t know. Spirituality is a big part of what it means to be human. Maybe we need to embrace spirituality a little bit more in how we do our work.

Miss you, Doug.

Artwork by Amy Wu.