This is a Work-In-Progress. Feel free to edit!
Doug Engelbart is the reason I'm in the business I'm in. He was my hero and mentor, and he set me on this life course of improving collaboration because of his intellect and his friendship.
We first met in 1998, and I started working with him in 2000. I spent two intense years learning from and working with him, and was inspired to set down this career path as a result of that time. In 2006, he got an NSF grant for his HyperScope project, which he asked me to lead.
I've compiled some recommended readings on Doug's ideas. I've tried to encapsulate some of the more concrete things I learned from Doug below.
The best place to comment on this page is this Loomio thread.
- 1 Lessons Learned
- 1.1 Doing good as a life's pursuit is an acceptable career choice
- 1.2 Just because it's obvious, doesn't mean you're doing it
- 1.3 Everybody is people
- 1.4 Urgent necessity of collaboration
- 1.5 Think big, then think bigger
- 1.6 The biggest contributor to collective intelligence and high-performance is the ability to learn and adapt
- 1.7 "High-performance collaboration"
- 1.8 The bicycle as a metaphor for performance
- 1.9 Get better at getting better
- 1.10 Role of artifacts and "knowledge products"
- 1.11 Shared language and understanding is critical
- 1.12 Maintaining perspective
Doing good as a life's pursuit is an acceptable career choice
As crazy as it might sound, I never really understood this before meeting Doug in my early 20s. As it turns out, it was the permission I was seeking at the time to pursue what I considered meaningful. I am so grateful to have had a clear, guiding purpose since 2000 as a result of my time with Doug. It has made a world of difference in navigating a tremendous amount of difficulty and uncertainty over the years.
Just because it's obvious, doesn't mean you're doing it
Most of Doug's ideas are deceptively obvious. It's what makes them compelling the first time you hear them, but it also makes them easy to take for granted when it comes to execution. In my early years with him, I would constantly find myself saying, "Yeah, yeah, of course that's the case," then realize afterward that, even though I understood what he was saying intellectually, I wasn't actually doing it.
Doug always used to say that he was handicapped, like a color-blind person. He didn't see the world the way others saw it, but the handicap left his other senses heightened. I think that was very true. Everyone is easily capable of seeing what he saw, but the rest of us are often distracted by other noise.
Everybody is people
Doug treated everybody he met exactly the same — with curiosity and kindness. He had zero traces of pretension or arrogance, and he was unconcerned with traditional notions of status. He knew he could learn something from anyone, and he did. His lab in the 1950s and 1960s (the Augmentation Research Center) was 50 percent women, which is practically unheard of even a half century later. He didn't beat his chest about stuff like that, because it seemed totally obvious to him.
This people-centrism was a defining characteristic not only of who he was as a person but of his work as well. Doug's advocacy for augmenting humanity is one of those obvious-sounding things I described above, but it was loudly ridiculed by the computing world in the 1950s and 1960s, which was investing most of its energy in artificial intelligence (i.e. replicating humanity).
- Even though the world and the field of computing started appreciating Doug's emphasis on humanity in his later life, I don't think we as a society have fully grasped this. Heidegger said that the essence of modern technology is to get humans to forget their humanity, but he also said that this process of forgetting might also teach us to become more aware of it. I think that Doug and his work was a major manifestation of this latter notion. —Eekim (talk) 18:29, 2 January 2017 (UTC)
Urgent necessity of collaboration
The whole framing of my mission (i.e. "faster than 20") is lifted from his observation that problems are scaling faster than our ability to solve them. He observed this in 1950, and it acted as his North Star from that point forward, which he followed with clarity and consistency. We're not going to get there individually. We need to figure out how to get smarter collectively.
Think big, then think bigger
My all-time favorite Doug slides are what I like to refer to as his "whale" slides. They depict so clearly how we hold ourselves back from truly thinking big. Thinking big is what enabled Doug to do big things.
Doug first demonstrated the mouse, graphical user interfaces, outliners, hypertext, networked computing, etc. in his 1968 Mother of All Demos. Most of these technologies did not see mass acceptance until 20-30 years later. On the one hand, you could look at this and say that Doug was 20-30 years ahead of his time. On the other hand, you could also attribute this to the clarity and urgency of his North Star. It's one thing to see this coming; it's another to act with urgency so far in advance. (In California, we've all known for decades that "The Big One" is coming. How many of us act with urgency in terms of our individual preparation? How effectively have we collectively prepared?)
- In some ways, this speaks to the tremendous flaw in how we traditionally do visioning and scenario thinking. What good is it to think about these things if we don't actually live into them? On the other hand, it's critical to have the proper amount of perspective and compassion about this. Even if we're living into these things, they are so challenging and overwhelming, and so dependent on so many things, that failure is a certainty. The goal, ultimately, is to reduce our failure rate and to iterate fast enough to realize the benefits.
- Even if we are successful in doing that in practice, it's tremendously difficult to recognize that we are doing so, because of cognitive bias. Great baseball hitters fail to get a hit 7 out of 10 times; terrible hitters fail 8 out of 10 times. Do you have the self-awareness and structures in place to help you see and navigate the difference? —Eekim (talk) 18:29, 2 January 2017 (UTC)
The biggest contributor to collective intelligence and high-performance is the ability to learn and adapt
"High-performance collaboration"
I first picked up this term from Doug. "High-performance" is a critical qualifier, because it speaks to the bigger goal. Good enough is not good enough. We must constantly get better.
- Collaboration is not a binary concept. Don't treat it as one. It's about quality of execution. —Eekim (talk) 18:35, 2 January 2017 (UTC)
The bicycle as a metaphor for performance
Doug loved to bike, and he often used bicycle metaphors to articulate his thinking. He's probably most famous for talking about bicycles as expert-oriented tools (versus user-friendly tools). I wrote in my tribute to him:
- "Doug believed we needed tools that would require mastery to yield results, tools like the bicycle, which he loved. A bike is not novice-friendly. It takes time to learn, and the learning process can be frustrating. But once you learn how to ride a bike, you are capable of doing new and powerful things that weren’t possible before. There’s a reason we don’t settle for riding tricycles."
However, tools themselves are not enough. Good tools lead to new possibilities, which require developing new human capacities (e.g. riding a bike). Those new capacities in turn open up possibilities for new tools that take our performance to the next level. He called this process "coevolution."
My all-time favorite Doug story about bicycles captures the spirit of coevolution and the difficulty of achieving it. I wrote in my tribute:
- "As a kid, he and his brother used to challenge neighborhood kids to see who could perform the most difficult tricks. Doug had a trick that always worked. He would challenge the other kids to ride their bikes with their arms crossed.
- "What was so hard about this? Riding straight with your arms crossed was easy. The only tough part was turning. If you wanted to turn right, you’d have to move your left arm. If you wanted to turn left, you’d have to move your right arm. In other words, you simply had to do the opposite of what you normally had to do.
- "Two rules easily grasped, yet none of the kids could ever do it without falling off their bikes. Why? Because learning, in order to be applied, needs to be embodied. We need to build that habit and, sometimes, that means changing old habits.
- "This is hard, but it’s not impossible. That was the other key lesson of this story. Doug could do the trick, not because he was smarter or more physically gifted than the other kids, but because he had trained his body to do it through lots and lots of practice."
Get better at getting better
In order to scale our ability to solve our most complex problems, we need to improve at improving. Note this is not quite the same thing as continuous improvement (which is also critical). What is the craft of improving? Can we break down what makes us really good at improvement, and focus on those things?
Doug identified this as "bootstrapping," because he felt this was the most powerful place to leverage our investments. Improving at improvement would help everyone get better at improving, which would lead to exponential improvement.
- Doug's vision for how we would get better at improving was via "Networked Improvement Communities" (NICs), communities of practice devoted to improving at improvement. People would often ask him to point to examples of NICs, and he would generally say that none existed, although he was trying to start one. (He named his attempt the "Bootstrap Alliance.") I don't think this was quite correct. I think Peter Senge's concept of a Learning Community was essentially equivalent to a NIC, although there were architectural and subtle conceptual differences.
- In many ways, discovering and making some of these connections was a central motivation for going down this path. I realized that the universe of people trying to get better at collaboration was itself siloed. My starting premise was that there were already groups who were practicing high-performance collaboration as well as folks who were spending a lot of time thinking about what made high-performance collaboration possible. This mirrored my personal experience, starting with open source software development communities and leading to my time with Doug. This premise helped lead me to other practitioners and thinkers, folks like Gail and Matt Taylor. —Eekim (talk) 18:29, 2 January 2017 (UTC)
Role of artifacts and "knowledge products"
High-performance groups maintain what Doug called a "Dynamic Knowledge Repository" (DKR). (Doug used lots of acronyms.) DKRs consist of artifacts and knowledge products: external manifestations of the things we know.
The process of managing and generating these artifacts and knowledge products is as important as the tools we use to capture them. Continuous synthesis is an important practice that not only supports learning (for the people doing the synthesis) but also produces artifacts that can be shared and can catalyze collective shifts.
- What makes online collaboration unique is that engagement happens through the artifacts themselves. —Eekim (talk) 18:37, 2 January 2017 (UTC)
DKRs don't have to be digital, although this is where Doug invested the majority of his thinking.
- I think the focus on digital architectures for DKRs has been net negative, because it's distracted people from thinking about the process of managing and generating artifacts and knowledge products, which is where most people have very weak muscles. Yes, we should be looking at ways to improve our digital architectures, but those will not magically solve all of our problems.
- All that said, I spent a lot of my early days with Doug and afterward (with Chris Dent) thinking about improving digital architectures. Specifically, I was interested in the importance of fine-grained addressability (e.g. Purple Numbers) and standardizing these architectures. It still frustrates me that these concepts are not more broadly understood, much less implemented. —Eekim (talk) 18:29, 2 January 2017 (UTC)
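To make the idea of fine-grained addressability concrete: Purple Numbers attach a stable, linkable anchor to every paragraph (not just every page), so that a link keeps pointing at the same paragraph even after the document is edited. The sketch below is my own minimal illustration of that property in Python, not Doug's or Chris Dent's actual implementation (which worked over HTML documents); the function name and the choice to key identity on paragraph text are assumptions made for brevity.

```python
import itertools

def add_purple_numbers(paragraphs, existing_ids=None):
    """Attach a stable anchor ("purple number") to each paragraph.

    Paragraphs seen in a previous rendering keep their old ID; only
    genuinely new paragraphs get a fresh number. This is what lets
    fine-grained links survive edits to the surrounding document.
    """
    existing_ids = dict(existing_ids or {})  # paragraph text -> numeric id
    # Mint new IDs starting just past the highest one already in use.
    counter = itertools.count(max(existing_ids.values(), default=0) + 1)
    result = []
    for text in paragraphs:
        pid = existing_ids.get(text) or next(counter)
        existing_ids[text] = pid
        result.append((f"p{pid}", text))  # (anchor, paragraph) pairs
    return result

# First rendering: paragraphs get anchors p1 and p2.
first = add_purple_numbers(["Doug valued language.", "DKRs hold shared knowledge."])

# Re-render after inserting a paragraph in the middle: the two old
# paragraphs keep anchors p1 and p2; only the new one gets p3.
ids = {text: int(anchor[1:]) for anchor, text in first}
second = add_purple_numbers(
    ["Doug valued language.", "A new paragraph.", "DKRs hold shared knowledge."],
    existing_ids=ids,
)
```

Keying identity on exact paragraph text is a deliberate simplification: in this sketch an edited paragraph would be treated as new and lose its anchor, which is precisely the hard problem real fine-grained addressing schemes have to solve.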
Shared language and understanding is critical
Doug really valued language. He cited Whorf a lot in his foundational paper on augmenting human intellect. He not only made up a lot of his own acronyms (as I noted earlier), but he used language in a precise, specific way. This made it hard for him to communicate his ideas and to trust that others understood him. It took me three years (including several months of concentrated study) before I finally started understanding his language well enough to start working with him. I think this process of deep listening and studying was a key reason why Doug invited me to become such a close collaborator, and it really underscored for me the importance of developing shared language and shared understanding (which was helped considerably by Jeff Conklin).
Maintaining perspective
Doug was depressed for most of the time I knew him. He truly saw himself as a failure. Of course this was ridiculous, but it really spoke to the fallacy of taking on the burdens of humanity's most complex problems yourself. There's an aspect of this that I admired and that resonated strongly with me: his focus and passion for the bigger picture and how he held himself accountable to it. And yet the net outcome of this was unhealthy. It's something I think about often, and it's something I struggle to balance within myself. I talk about this in my rubber band blog post.