Improving Our Ability to Improve: A Call for Investment in a New Future
Douglas C. Engelbart
The Bootstrap Alliance
April 23, 2002 (AUGMENT,133320,)


Doug's Keynote Address, presented at the World Library Summit, April 23 - 26, 2002, Singapore
A 2nd Edition was published for the IBM Co-Evolution Symposium, September 24, 2003. See (Biblio-32) for details.


Summary. In the past fifty years we have seen enormous growth in computing capability – computing is everywhere and has impacted nearly everything. In this talk, Dr. Douglas Engelbart, who pioneered much of what we now take for granted as interactive computing, examines the forces that have shaped this growth. He argues that our criteria for investment in innovation are, in fact, short-sighted and focused on the wrong things. He proposes, instead, investment in an improvement infrastructure that can result in sustained, radical innovation capable of changing computing and expanding the kinds of problems that we can address through computing. In this talk, Dr. Engelbart describes both the processes that we need to put in place and the capabilities that we must support in order to stimulate this higher rate of innovation. The talk closes with a call to action for this World Library Summit audience, since this is a group that has both a stake in innovation and the ability to shape its direction.

Good news and bad news

The development of new computing technologies over the past fifty years – in hardware and software – has brought about stunningly important changes in the way we work and in the way we solve problems.
I need to get this assertion out in front of you early in this talk, because most of the rest of what I have to say might cause you to think that I have lost track of this progress or that I don't appreciate it. So, let me get it said – we have made enormous strides since the early 1950s, when I first began thinking seriously about ways to use computers to address important social problems. It has truly been a remarkable fifty years.
At my first job at NACA, the forerunner of NASA, right out of engineering school, there was no vision at all of electronic computers. In fact, the term "computers" referred to the room full of women sitting at desks using desk calculators to process the wind tunnel data. This was in the late '40s. After I'd earned my doctorate at UC Berkeley and worked there as an acting assistant professor, I applied to Stanford to help develop and teach computer design courses. They told me, "Since computing is only a service activity, we don't contemplate ever having courses in computer design be part of our academic program." When I was pursuing Federal funding for IT projects at SRI in the early '60s, there was some real question about whether there would be the programming talent in Palo Alto, California, to work on computing applications of this complexity. This certainly inhibited our chances of getting research support.
Later in my research, when I thought about using computers to manipulate symbols and language, rather than to do calculations on numbers, most people thought I was really pretty far gone. I was even advised to stay away from jobs in academia. It seemed clear to everyone else at the time that nobody would ever take seriously the idea of using computers in direct, immediate interaction with people. The idea of interactive computing – well, it seemed simply ludicrous to most sensible people.
So, we have made truly tremendous progress. We are able to solve problems, ranging from weather forecasting to disaster relief to creating new materials to even cracking the very genetic code that makes us human – problems that we could not have even contemplated without the availability of cheap, widely available, highly functional computers and software. It has been a marvelous 50 years to be in this business.
But that is not what I am going to talk to you about. Not out of lack of appreciation – even a sense of wonder – for what computer technologists have developed, but because I can see that we are not yet really making good progress toward realizing the really substantial payoff that is possible. That payoff will come when we make better use of computers to bring communities of people together and to augment the very human skills that people bring to bear on difficult problems.
In this talk I want to talk to you about that big payoff, think a bit with you about what is getting in the way of our making better progress, and enlist you in an effort to redirect our focus. This audience is made up of the kinds of people who really can change the focus so that we can set at least part of our development efforts on the right course.
The rewards of focusing on the right course are great. I hope to show you that they can be yours.

The vision: The payoff

Before talking and thinking with you about why we keep heading off in the wrong direction, I need to quickly sketch out what I see as the goal – the way to get the significant payoff from using computers to augment what people can do. This vision of success has not changed much for me over fifty years – it has gotten more precise and detailed – but it is pointed at the same potential that I saw in the early 1950s (Ref. 1). It is based on a very simple idea, which is that when problems are really difficult and complex – problems like addressing hunger, containing terrorism, or helping an economy grow more quickly – the solutions come from the insights and capabilities of people working together. So, it is not the computer, working alone, that produces a solution. It is the combination of people, augmented by computers.
The key word here is "augment." The reason I was interested in interactive computing, even before we knew what that might mean, arose from this conviction that we would be able to solve really difficult problems only through using computers to extend the capability of people to collect information, create knowledge, manipulate and share it, and then to put that knowledge to work. Just as the tractor extends the human's ability to work the earth, and planes extend our ability to move, so does the computer extend our ability to process and use knowledge. And that knowledge production is a group activity, not an individual one. Computers most radically and usefully extend our capabilities when they extend our ability to collaborate to solve problems beyond the compass of any single human mind.
I have found, over the years, that this idea of "augmenting" capability needs clarification for many people. It is useful to contrast "augmentation" with "automation." Automation is what most people have in mind when they think about using computers. Automation is what is going on when we use computers to figure up and print telephone bills or to keep track of bank charges. It is also what is going on when we think about "artificial intelligence." Even though printing up phone bills and AI seem very different, they both share this assumption that the computer stands over there, apart from the human, doing its thing. That is not, in my mind, how we use computers to solve tough problems.  We have the opportunity to harness their unique capabilities to provide us new and more effective ways to use our mind and senses – so that the computer truly becomes a way of extending our capabilities.
The shovel is a tool, and so is a bulldozer. Neither works on its own, "automating" the task of digging. But both tools augment our ability to dig. And the one that provides the greatest augmentation, not surprisingly, takes the most training and experience in order to use it really effectively.
For fifty years I have been working to build the computing equivalents of bulldozers – with the added constraint that, in serious knowledge work, it is almost always through the collaborative work of a team, rather than a lone operator, that we make progress.
Now, there is a lot more to say about this vision than that – and I will speak to some of it later in this talk. But, in starting our consideration of how to change course in order to get a bigger payoff from our investment in computing, this focus on augmenting the ability of groups to solve problems is the right starting point.

Evidence of trouble

Because we are so accustomed to thinking in terms of the enormous progress and change surrounding computing, it is important to take at least a few moments to look at the evidence that, when it comes to these broader, social and group-centered dimensions of computing, the picture looks quite different.
Difficulty in doing important collaborative work. As one example, my organization, the Bootstrap Alliance, works in loose collaboration with a number of other organizations to help them develop better ways to improve their ability to learn and to use knowledge – in short, we work with organizations to help them improve their ability to improve.
One organization that we work with is the Global Disaster Information Network – or "GDIN" – which is, itself, a consortium of regional and local disaster response organizations. Organizations that respond to disasters are tremendous examples of organizations that must learn to adapt and use new information quickly. Disasters are, by their nature, unplanned and surprising. Responding requires rapid access to weather information, geographical and mapping information, information about local resources, local communications, the availability of outside resources and organizations - sometimes even about the location of buried mines and unexploded munitions. And, because disaster response involves so many people, from many organizations and jurisdictions, it is critically important to share all of this information about resources and capabilities, as well as information about response status, planned next steps, and so on.
Computers and, in particular, the Internet, clearly play a key role in the efforts to coordinate such disaster response and to improve the ability to improve over the lifecycle of a disaster response effort. But what is striking, as GDIN grapples with these issues, is how difficult it is to harness all the wonderful capability of the systems that we have today in GDIN's effort to improve its ability to improve disaster response. It turns out that it is simply very difficult to share information across systems – where "sharing" means both the ability to find the right information, when it is needed, as well as the ability to use it across systems.
Even harder is the ability to use the computer networks to monitor and reflect status. Anyone that regularly uses e-mail can readily imagine how the chaotic flow of messages between the different people and organizations during a disaster falls far short of creating the information framework that is required for an effectively coordinated response. Make no mistake about it, GDIN and its member disaster response organizations find computers to be very useful – but it is even more striking how the capabilities offered by today's personal productivity and publishing systems are mismatched to the needs of these organizations as they work to coordinate effective response flexibly and quickly.
Difficulties with knowledge governance. As another example of our still relatively primitive ability to deal with information exchange among groups, consider the chaotic and increasingly frightening direction of new laws regarding knowledge governance – most notably reflected in laws regarding copyright. Because it is generally technically advanced, one might think that my country, the United States, would be representative of leading edge capability to deal with knowledge governance and knowledge sharing. But, instead, we are passing increasingly draconian laws to protect the economic value of copies of information. In the US, we are even  contemplating laws that would require hardware manufacturers to take steps to encrypt and protect copies (Ref. 2).
We are doing this while entering a digital era in which the marginal cost of a copy is zero – at a time when the very meaning and significance of the notion of "copy" has changed. It is as if we are trying to erect dikes, using laws, to keep the future from flooding in.
The immediate effect of all this is to enable a dramatic shift in control to the owners of information, away from the users of information (Ref. 3) – a strategy that will almost certainly fail in the long run and that has confusing and probably damaging economic consequences in the short run.
The most modest conclusion that one might draw from watching the U.S. attempt to deal with knowledge governance in a digital age is that the legislators have a weak understanding of the issues and are responding to the enormous political power of the companies with a vested interest in old ways of using information. Looking somewhat more deeply, it seems quite clear that we are ill-prepared to come to terms with an environment in which the social value of knowledge emerges from collaborative use of it. The entire idea of value emerging from sharing, collaboration, and use of knowledge – as opposed to treating knowledge as a scarce resource that should be owned and protected – is anathema to the 20th-century knowledge owners, who are fighting hard to protect their turf.

Structural roots of the problem

One possible response to my examples is to say, "Doug, be patient. These are new problems and hard problems and it takes time to solve them. We will have better tools and better laws over time. Just wait."
An offhand response might be that I have been trying to be patient for fifty years. But a much more important, meaningful response is that patience has nothing to do with it. These problems are not due to lack of time, but are instead due to structural factors that create a systematic bias against the improvement of what I call "Collective IQ."
The good news is that, if we can see and understand the bias, we have the opportunity to change it. If we can see how some of the basic assumptions that we bring to the development of computing technologies lead us away from improvement in our ability to solve problems collectively, we can reexamine those assumptions and chart a different course.
Oxymoron: "Market Intelligence." One of the strongly held beliefs within the United States is that the best way to choose between competing technologies and options for investment is to "let the market decide." In my country we share a mystical, almost religious kind of faith in the efficacy of this approach, growing from Adam Smith's idea of an "invisible hand" controlling markets and turning selfish interest into general good. The "market" assumes the dimensions of a faceless, impersonal deity, punishing economically inefficient solutions and rewarding the economically fit. We believe in the wisdom of the market and believe that it represents a collective intelligence that surpasses the understanding of us poor mortal players in the market's great plan.
One of the nice things about getting outside the U.S. – giving a talk here, in Singapore, for example – is that it is a little easier to see what an odd belief this is. It is one of the strange quirks of the U.S. culture.
In any case, it is quite clear that whatever it is that the market "knows," its knowledge is fundamentally conservative in that it only values what is available today. Markets are, in particular, notoriously poor judges of value for things that are not currently being bought and sold. In other words, markets do a bad job at assessing the value of innovation when that innovation is so new that it will actually rearrange the structure of the markets.
This is well understood by people doing market research. Decades ago, when Hewlett Packard was first coming up with the idea of a desktop laser printer – before anyone had experience with such devices and before there was even software available for desktop publishing – market studies of the potential use and penetration for desktop laser printing came up with a very strange answer: people simply did not yet have enough experience with the devices to be able to understand their value. The same thing happened to companies, ten to fifteen years ago, when they did market studies about the potential value and use of digital cameras.
Perhaps the best study of this systematic and very basic conflict between markets and certain kinds of innovation is Clayton Christensen's classic and very valuable book, The Innovator's Dilemma (Ref. 4). Probably most of you are familiar with Christensen's thesis (if you haven't read the book, you should), but, briefly stated, it is that one kind of innovation – Christensen calls it "continuous innovation" – emerges when companies do a good job of staying close to their customers and, in general, "listening to the market." This is the kind of innovation that produces better versions of the kinds of products that are already in the market. If we were all riding tricycles, continuous innovation would lead to more efficient, more comfortable, and perhaps more affordable tricycles.
But it would never, ever produce a bicycle. To do that, you need a different kind of innovation – one that usually, at the outset, results in products that do not make sense to the existing market and that it therefore cannot value. Christensen calls this "discontinuous innovation."
Discontinuous innovation is much riskier, in that it is much less predictable, than continuous innovation. It disrupts markets. It threatens the positions of market leaders because, as leaders, they need to "listen" to the existing market and existing customers and keep building improved versions of the old technology, rather than take advantage of the new innovation. It is this power to create great change that makes discontinuous innovation so valuable over the long run. It is how we step outside the existing paradigm to create something that is really new.
In the past fifty years of computing history, the one really striking example of discontinuous innovation – the kind where the market's "intelligence" approached an IQ of zero – was the early generation of World Wide Web software, and in particular the Mosaic web browser. There were, as the Web first emerged, numerous companies selling highly functional electronic page viewers – viewers that could jump around in electronic books, follow different kinds of hyperlinks, display vector graphics, and do many other things that early web browsers could not do. The companies in this early electronic publishing business were actually able to sell these "electronic readers" for as much as US $50 a "seat" – meaning that, when selling electronic document viewers to big companies with many users, this was big business.
Then, along came the Web and Mosaic – a free Web browser that was much less functional than these proprietary offerings. But it was free! And, more important, it could do something else that these other viewers could not do – it provided access to information anywhere in the world on the Web. As a result, over the next few years, everything changed. We actually did get closer to the goal of computers assisting with collaborative work.
But the key point of the story is that, at first, the "market intelligence" saw no value in web browsers at all. In fact, the market leader, Microsoft, initially started off in the direction of building its own proprietary viewer and network – because that is what market intelligence suggested would work. Fortunately for Microsoft's shareholders, Bill Gates realized that he was facing a discontinuity, and threw the company into a sudden and aggressive campaign to change course.
Despite the Web, despite the example of Mosaic, despite all the work that Christensen has done to teach us about discontinuous innovation, most companies still act as if they believe that the market is intelligent - and, to be sure, this approach really does often work, in the short term. So we are saddled with a systematic, built-in bias against thinking outside the box. And that bias gets in the way of solving hard problems, such as building high performance tools that help groups of people collaborate more effectively.
In a little bit, I will explain how we can overcome such systematic bias and open the doors to the very substantial rewards from continued, productive discontinuous innovation. There is huge opportunity here - and it is an opportunity that will be most available to emerging economies rather than to the incumbents. But, before turning to solutions, I need to tell you about another dimension of systematic bias that is getting in the way of our making important progress in finding new ways to use computers.
The seductive, destructive appeal of "ease of use." A second powerful, systematic bias that leads computing technology development away from grappling with serious issues of collaboration – the kind of thing, for example, that would really make a difference to disaster response organizations – is the belief that "ease of use" somehow equates to better products.
Going back to my tricycle/bicycle analogy, it is clear that for an unskilled user, the tricycle is much easier to use. But, as we know, the payoff from investing in learning to ride on two wheels is enormous.
We seem to lose sight of this very basic distinction between "ease of use" and "performance" when we evaluate computing systems. For example, just a few weeks ago, in early March, I was invited to participate in a set of discussions, held at IBM's Almaden Labs, that looked at new research and technology associated with knowledge management and retrieval. One thing that was clearly evident in these presentations was that the first source of bias – the tendency to look solely to the invisible hand and intelligence of the market for guidance – was in full gear. Most of the presenters were looking to build a better tricycle, following the market to the next stage of continuous innovation, rather than stepping outside the box to consider something really new.
But there was another bias, even in the more innovative work – and that bias had to do with deciding to set aside technology and user interactions that were "too difficult" for users to learn. I was particularly disappointed to learn, for example, that one of the principal websites offering knowledge retrieval on the web had concluded that a number of potentially more powerful searching tools should not be offered because user testing discovered that they were not easy to use.
Here in Singapore, I see a lot of people wind surfing. I am sure that there are beginner boards and sails, just as in kayaking there are beamy, forgiving boats that are good for beginners, and in tennis there are powerful racquets that make it easy for beginners to wallop the ball even with a short swing. But someone who wants real performance in wind surfing, to have control in difficult conditions, does not want a beginning board. Someone who wants a responsive kayak, that will perform well in following seas and surf, does not want a beginner's boat. A serious tennis player with a powerful swing does not want a beginner's racquet.
Why do we assume that, in computing, ease of use – particularly ease of use by people with little training – is desirable for anyone other than a beginner?  Sure, I understand that the big money for a company making surfboards, tennis racquets, skis, golf clubs, and what-have-you is always in the low end of the market, serving the weekend amateur. And surely the same thing is true in computing. That is not surprising. What is surprising is that, in serious discussions with serious computer/human factors experts, who are presumably trying to address hard problems of knowledge use and collaboration, ease of use keeps emerging as a key design consideration.
Doesn't anyone ever aspire to serious amateur or pro status in knowledge work?

Restoring balance

I need to remind you of what I said at the beginning of this talk: we have made huge strides forward in computing. It is a wonderful thing to have a large, mass market for equipment that has brought the cost of computing hardware and software down to the point where truly staggering computing capability is available for a few thousand – even a few hundred – dollars. It has been a marvelous fifty years. But I want to alert you to two very important facts:
These facts are critical for institutions and individuals who are interested in improving our ability to improve. I am pretty sure that this includes everyone in this audience. The important realization – and the message of this talk – is that these institutions and individuals can take big steps forward simply by systematically addressing the biases that are pushing innovation toward lowest common denominator solutions and toward simple continuation down roads that we already understand. This does not mean that we should stop building easy to use applications that represent continuous innovation. What it does mean is that we also need to find ways to address the harder problems and to stimulate more discontinuous innovation.
This focus on new, discontinuous innovation is particularly important for the majority of people and nations in the world who are building emerging economies. It is the developing nations that  have the most to gain from developing new ways to share knowledge and to stimulate improvement.

Moving from "invisible hand" to strategy

The good news is that it is possible to build an infrastructure that supports discontinuous innovation. There is no need at all to depend on mystical, invisible hands and the oracular pronouncements hidden within the marketplace. The alternative is conscious investment in an improvement infrastructure to support new, discontinuous innovation (Ref. 5).
This is something that individual organizations can do – it is also something that local governments, nations, and regional alliances of nations can do. All that is necessary is an understanding of how to structure that conscious investment.
ABCs of improvement infrastructure. The key to developing an effective improvement infrastructure is the realization that, within any organization, there is a division of attention between the part of the organization that is concerned with the organization's primary activity - I will call this the "A" activity – and the part of the organization concerned with improving the capability to perform this A-level function. I refer to these improvement efforts as "B" activities. The two different levels of activity are illustrated in Figure 1.
Figure 1. Infrastructure fundamentals: A and B Activities (Ref. 1, Ref. 5)

The investment made in B activities is recaptured, along with an aggressive internal rate of return, through improved productivity in the A activity. If investments in R&D, IT infrastructure, and other dimensions of the B activity are effective, the rate of return for a dollar invested in the B activity will be higher than for a dollar invested in the A activity.

Clearly, there are limits to how far a company can pursue an investment and growth strategy based on type B activities – at some point the marginal returns for new investment begin to fall off. This leads to a question: How can we maximize the return from investment in B activities, maximizing the improvement that they enable?

Put another way, we are asking how we improve our ability to improve. This question suggests that we really need to think in terms of yet another level of activity – I call it the "C" activity – that focuses specifically on the matter of accelerating the rate of improvement. Figure 2 shows what I mean.

Figure 2. Introducing "C" level activity to improve the ability to improve (Ref. 1)

Clearly, investment in type C activities is potentially highly leveraged. The right investments here will be multiplied into gains in B-level productivity – in the ability to improve – which will be multiplied again into gains in the productivity of the organization's primary activity. It is a way of getting a kind of compound return on investment in innovation.
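To make the compounding concrete, here is a minimal numerical sketch in Python. The improvement rates and the ten-year horizon are purely illustrative assumptions, not figures from this talk: B activities are assumed to raise A-level output by some yearly rate, and C activities are assumed to raise that improvement rate itself.

```python
# Illustrative sketch only: the rates below are assumed numbers, not figures
# from the talk. The point is the compounding structure of the A/B/C model:
# C-level work raises the rate at which B-level work improves A-level output.

def a_output_over(years, a_base=100.0, b_gain=0.05, c_boost=0.0):
    """Yearly A-level output when B activities improve A by `b_gain` per year,
    and C activities raise that improvement rate by `c_boost` each year."""
    output, rate = a_base, b_gain
    history = []
    for _ in range(years):
        output *= (1 + rate)   # B activities improve the primary (A) activity
        rate *= (1 + c_boost)  # C activities improve the ability to improve
        history.append(output)
    return history

b_only = a_output_over(10)                  # steady improvement from B alone
b_and_c = a_output_over(10, c_boost=0.20)   # C also accelerates B each year

print(f"Year 10 output, B only:  {b_only[-1]:.1f}")
print(f"Year 10 output, B and C: {b_and_c[-1]:.1f}")
```

Under these assumed numbers, the run that includes C-level investment pulls steadily ahead of the run with B-level investment alone – which is the compound return being described.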

The highly leveraged nature of investment in type C activities makes this kind of investment in innovation particularly appropriate for governments, public service institutions such as libraries, and broad consortia of different companies and agencies across an entire industry. The reason for this is not only that a small investment here can make a big difference – though that certainly is an important consideration – but also that the investment in C activities is typically pre-competitive. It is investment that can be shared even among competitors in an industry because it is, essentially, investment in creating a better playing field. Perhaps the classic recent example of such investment in the U.S. is the relatively small investment that the Department of Defense made in what eventually became the Internet.

Another example, looking to the private sector, is the investment that companies made in improving product and process quality as they joined in the quality movement. What was particularly important about this investment was that, when it came to ISO 9000 compliance and other quality programs and measures, companies – even competing companies – joined together in industry consortia to develop benchmarks and standards. They even shared knowledge about quality programs. What emerged from this collaborative activity at the C level was significant gain for individual companies at the B and A levels. When you are operating at the C level, collaboration can produce much larger returns than competition.

Investing wisely in improvement

Let's keep our bigger goal in mind: we want to correct the current biases – over-reliance on market forces and the related obsession with ease of use – that get in the way of developing better computing tools. We want to do this so that we can use computers to augment the capabilities of entire groups of people as they share knowledge and work together on truly difficult problems. The proposal that I am placing on the table is to correct those biases by making relatively small, but highly leveraged investments in efforts to improve our ability to improve – in what I have called type C activities.
The proposal is attractive not only for quantitative reasons – because it can produce a lot of change with a relatively small investment – but also for qualitative reasons:  This kind of investment is best able to support disruptive innovation – the kind of innovation that is necessary to embrace a new, knowledge-centered society. The acceleration in movement away from economic systems based on manufacturing and toward systems based on knowledge needs to be reflected in accelerated change in our ways of working with each other. This is the kind of change that we can embrace by focusing on type C activity and on improvement of our ability to improve.
Given all of that, what do we need to do?  If, say, Singapore wants to use this kind of investment as a focus for its development activity, where does it concentrate its attention?  If the organizations participating in this World Library Summit want to support and stimulate this kind of investment, where do they begin?
The answer to such questions has two different, but complementary dimensions. The first dimension has to do with process:  How do you operate and set expectations in a way that is consistent with productive type C activity?  The second dimension has to do with actual tools and techniques.