(A slightly modified version of this paper
appeared in IEEE Expert, December, 1990.)

Common Knowledge or Superior Ignorance?

Christopher Locke
Robotics Institute
Carnegie Mellon University
26 August 1990

This is a highly polemical critique of the "Cyc" system being constructed by Douglas Lenat and his colleagues at MCC in Texas, purportedly to capture "common sense knowledge." It caused a fair amount of furor when it was published, and was responded to in print (also in IEEE Expert) by both Lenat himself and Yuval Lirov of AT&T Bell Laboratories. The latter said in part:

The 'denial' approach described by Lenat in his reply to Locke is based on in-depth technical rationalizing about knowledge representation and reasoning techniques. Lenat avoids discussing the global issues and focuses on correcting the image of Cyc projected by Locke.... Lenat... disregards Locke's main criticism, which is of the arbitrary empowerment of a select group of people to maintain the expert super-system.


"[Douglas Lenat insists] that Cyc will change the way we live and work. Schools will use Cyc to provide one-on-one tutoring to students, he says.... Cyc will make scientific discoveries, apply justice, and even counsel unhappy couples. As Cyc continues to grow, he predicts, pieces of it will be stored in computers around the world, its contents made available through phone lines and radio waves. Cyc's intelligence, he claims, will flow like electricity through a gigantic, ubiquitous knowledge grid."

David H. Freedman
"Common Sense and the Computer"
Discover, August, 1990

" 'I do not like experts,' he said. 'They are our jailers.... Experts are addicts. They solve nothing. They are servants of whatever system hires them. They perpetuate it. When we are tortured, we shall be tortured by experts. When we are hanged, experts will hang us.... When the world is destroyed, it will be destroyed not by its madmen but by the sanity of its experts and the superior ignorance of its bureaucrats.' "

John le Carré
The Russia House
Knopf, 1989

One of the quotes above is from a work of fiction, the other from an objective report. The trick is to tell which is which.

The passage from le Carré describes the attitudes of a dissident Soviet scientist. Multiplied several million-fold, similar dangerously democratic attitudes about the abused relationship between knowledge and power have recently transfigured the political topography of the entire planet. The character is speaking about the covert custodians and executors of the prevailing ideological canon, making little distinction among the British, American and Soviet intelligence communities. The character despises and fears these "experts" because they voluntarily police the "correct" interpretation of collective significance, i.e., the State's view of what makes common sense.

The quote from Discover refers to another "intelligence community," which should be familiar to most readers. The passage expresses the enthusiastic projections of Douglas Lenat, team leader of MCC's Cyc "common sense" knowledge base project. One of the stated goals of this project is to make expert systems more intelligent by supplying them with a very large base of "human consensus reality knowledge" for interpreting information conveyed in natural language [see Note 1]. He is speaking of a time in the future, whose temporal distance from our own could be debated.

If le Carré's reference to being hanged by experts seems a bit extreme, consider Lenat's hope that expert systems will "apply justice." That suggestion alone should prompt serious questions about where this technology may be leading. In the field of law, to what exactly does "consensus reality" refer? The U.S. Constitution is a rather short, succinct document, yet hundreds of thousands of pages have been written in search of its supposed "common sense," that is, an interpretation on which all "reasonable men" would agree.

For instance, one legal issue that reasonable men have agreed on in the past is today vehemently contested by millions of reasonable women. This is not an isolated example of political conflict over the interpretation of textually transmitted cultural knowledge. It is not idle media curiosity that fills the front pages of our newspapers with speculation as to the interpretive philosophy and practice of David Souter, current nominee for the Supreme Court.

If we could determine what constitutes common sense in the law, we could perhaps dispense with our cumbersome, expensive and harrowingly inefficient system of adjudication. But any candidate replacement had better be capable of performing at a high level of intelligence indeed. If Cyc were somehow nominated to the Supreme Court (and it would almost have to be, if its "applications of justice" were not all to be overturned), imagine those Senate hearings! Would opposing Senators be branded as hopelessly reactionary Luddites? As the institution of Justice represents a critical element of the balance of powers in our constitutional democracy, one would hope not.

If we add to such legal concerns the many passionately disputed issues surrounding education, scientific discovery and human relations -- to name only those touched on in the quote -- it seems not only legitimate, but imperative, to question deeply the assumptions underlying the notion of automating the interpretation of linguistically mediated "common sense." The establishment of something approaching a "consensus reality" is the greatest single challenge facing any truly pluralistic society. Our commitment to -- or abandonment of -- such a society may be inextricably linked to the information systems we construct in the current decade.

However, if "common sense" is an optimistic, even dangerous, fiction, what then is to be done? Other than Cyc, is there another way to deal with the flood of unstructured textual information threatening to drown us? Imagine for a moment another scenario, also involving the law. Imagine that all our core legal documents were accessible online, along with multiple opinions and counter-opinions regarding them. Imagine that judges, lawyers, even mere mortals like ourselves, could refer to these texts and form our own ideas as to their relevance and impact on court cases, political decisions, and the freedoms or constraints under which we might conduct our future lives. Imagine moreover, that -- at least for legal professionals -- the opportunity existed to annotate and cross-reference these online documents, offering further views, corroborating evidence, exceptions, mitigating circumstances, agreement and dissent. Imagine, in short, a community of electronically networked practitioners with open access to a dynamically evolving cultural knowledge base, the knowledge in which was represented in that community's own natural language.

In fact, much of this system is in place today. But of course, there are problems. Needed classification schemes are often arbitrary for the same reason that Cyc's will always be: there is no globally shared ontology relating to disputed legal issues -- or even to foundational legal concepts (which is why disputes arise in the first place). Moreover, such textually embedded knowledge has not been "operationalized." This ugly buzzword emphasizes an assumption widely held among information scientists: that online information does not constitute bona fide knowledge unless it forms the basis for automated decision-making. These two problems are intrinsically related: if we can't agree on definitive knowledge categories and relationships, how can we possibly automate the making of knowledgeable decisions?

One way out of this dilemma is to assume that the knowledge in a system is "common knowledge" precisely because it has been put into the system by "knowledgeable experts." (This is either tautology or paradox, depending on whether expert knowledge is thought to subsume common knowledge, or the two are seen as constituting opposing epistemological poles.) The politically charged correlate of this assumption is that experts are those who have the necessary power to define or affect the contents of such systems. If the representation of this knowledge depends on a formal language which most of the community of practitioners cannot interpret (e.g., Lisp, Prolog, C), then the larger community will have been effectively disenfranchised.
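The disenfranchisement is easy to illustrate. Below is a toy sketch -- Python standing in for Lisp or Prolog, with every predicate and the single inference rule invented for this example -- of how a piece of "common sense" is typically operationalized. Every category (what counts as a Person, what "isa" means) is fixed in advance by whoever writes the code, not by the community whose sense it claims to represent.

    # A toy formal knowledge base: assertions are (subject, predicate, object) triples.
    # The vocabulary -- "isa", "Person", "Mortal" -- is the knowledge engineer's
    # choice; a practitioner who disputes a category has no way to say so here.
    facts = {
        ("Socrates", "isa", "Person"),
    }

    rules = [
        # If X isa Person, conclude X isa Mortal. No one outside the lab voted on this.
        lambda kb: {(s, "isa", "Mortal")
                    for (s, p, o) in kb if p == "isa" and o == "Person"},
    ]

    def forward_chain(kb, rules):
        """Apply every rule until no new assertions appear (naive forward chaining)."""
        kb = set(kb)
        while True:
            new = set().union(*(rule(kb) for rule in rules)) - kb
            if not new:
                return kb
            kb |= new

    print(sorted(forward_chain(facts, rules)))
    # Contrast the natural-language original -- "everyone dies eventually" -- which
    # a reader may question, qualify or reinterpret. The triples above cannot be.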

When only knowledge engineers can define what constitutes legitimate (i.e., "operational") knowledge, the perceived and accepted constitution of a domain can change overnight -- and not necessarily for the better. Only half tongue-in-cheek, we could say that if English (or Spanish or Chinese) is outlawed, only outlaws will speak English (or Spanish or Chinese). In other words, our supposedly knowledge-based systems could very easily end up "speaking a language" unable to adequately reflect and convey our own ambiguous, often conflicting, but deeply human concerns.

While this point of view may be hotly debated in AI circles, it is often not debated at all outside them. Most people who are not computer scientists or software engineers assume -- if they think about the question at all -- that they do not possess sufficient knowledge on which to base an opinion. While this assumption is in many respects correct, the alacrity with which these non-professionals abdicate any role in a debate which will certainly affect their lives is somewhat frightening. The cause for alarm here is the attribution of near-omniscience and a sort of universal compassion to technology. As experts and professionals, do we accept these godlike attributes projected upon us by the "ignorant folk" as our just due? Or do they place on us a heightened social and ethical obligation to take greater care in the exercise of our professional knowledge and its attendant power?

These are not simply issues for the future. Historically, expert systems have been effective in capturing corporate knowledge (disregarding for a moment the problematic semantics of the word). However, once captured, these knowledge resources have then been, in effect, locked up in the corporate vault (read "knowledge base"), never again to be seen by 99% of the people who were to make use of them.

Also, the knowledge on which many expert systems are based is more the opinion of a single, and sometimes questionable, "expert" than widely accepted fact. That is to say, it is not "common" knowledge at all. This problem has long been acknowledged by AI system designers and theoreticians. Knowledge is not static -- "it depends..." as we say. Statistical information can be considered fact because there is a truly common foundation of well-articulated axioms on which it is based. No such universal conventions underpin the kind of knowledge which is conceived and transmitted via language. Though the Cyc project at MCC is attempting to create just such an axiom base, it may be deceptively generous to say that the challenge is enormous.

The closer we look at linguistic sense, the less we find that is unequivocally common. What is "true" for one community of practitioners may simply not hold for another. The interpretation of semantic import often varies from community to community -- even from individual to individual -- depending on such impossible-to-quantify variables as culture (ethnic and corporate), class (social and economic), gender (biological and socially imprinted) and ideological inclination (philosophical/religious and political), as well as on previous exposure, experience, and education. Refusing to acknowledge this profuse human pluralism simply because it frustrates our neatly logical formal schemata is to define "common sense" as that mythical commodity which fits a poverty-stricken lowest-common-denominator metric of meaning.

"Operationalizing" this attenuated knowledge in some non-human-readable representation then locks most people out of these so-called "knowledge systems" along with the diverse riches they could have imparted to them. The much-discussed "knowledge acquisition bottleneck" should come as no surprise when the core design philosophy of these systems throws up nearly insurmountable barriers to those who might, otherwise, be able to contribute relevant and valuable knowledge.

In the past, the high costs associated with expert system development were often justified in terms of the number of jobs that could be taken off the payroll. Had these systems performed well enough, this return on investment might have made expert systems far more popular in the corporate world than they are, in fact, today. However, these systems have traditionally been plagued by brittleness, blindness and shallow knowledge. Lenat actually provides an excellent critique of expert systems along these lines -- for these are precisely the problems he predicts Cyc will solve [see Note 1]. Soon, Lenat tells us, Cyc will be able to read all by itself.

This sounds less like a working hypothesis than the apotheosis of hype, especially as there is no "human consensus reality knowledge" whatsoever about what it means "to read" -- an extremely high-level interpretive function around which debate currently rages in nearly every aspect of the humanities [see Note 2]. Whenever Cyc itself can understand, argue, and thereby effectively refute this harsh assessment, I'll be happy to eat my words. In the meantime, what we desperately need is more people who can read all by themselves.

Illiteracy is a plague far worse than dumb computers, and the costs -- human and economic -- are staggering. Perhaps this seems too either-or. Why not both: pursue natural language understanding systems and remedial reading programs for human beings? Consider a counter-question: How many apprenticeship and training programs were sacrificed in the past decade to pay for the promise of "intelligent systems" that could (it was fondly dreamed) displace unneeded workers? If we lend credence to the ultimate promise of Cyc, user-interface problems may come to an end, for the simple reason that, "after the revolution," there will be no further need of interfaces and the messy, ambiguous "wet-ware" they imply. It boggles the mind to imagine: a totally self-sufficient global information system. But if we've been willing to suspend disbelief thus far, why not?

One reason is that this vision is totally dehumanizing in the most common, garden-variety sense. If that's too metaphysical for some, here's another: because it's incredibly bad business. Whatever people may be, they are not stupid. Despite much corporate rhetoric, workers at every level know in what esteem they are actually held, and many intuitively sense how quickly they would be terminated if an adequate technological replacement were available. How many hand-wringing polemics on America's failing productivity and competitiveness take this particular item of knowledge into account? If people are treated like low-grade automata, how can they be expected, at the same time, to pull together for the common good? Perhaps the "common good" exists in the same alternate universe with "common sense."

There is a cure for this illness. Perhaps not simple, painless or logically elegant, but a cure nonetheless. And it is not -- as some may have suspected from the tone of this -- that we chuck our computers into the nearest ocean. Rather, it is that we use our incredible technology to empower people. How? By opening up the closed-vault knowledge base and delivering unrestricted access to human-readable (i.e., natural-language) corporate and cultural information resources through even the lowliest interface; by letting people interpret and "debug" the meaning and value of that information as it applies to their own work, hopes, dreams and necessarily ambiguous beliefs about "consensus reality"; and finally, by giving them the means to contribute what they have learned from hard experience, to upload their understanding, perceptions, reservations, new ideas and plural perspectives into an organization that is actually listening, because it is their own [See Note 3].

Won't this have serious political ramifications for institutions? You can bet on it. The naive hope for a painless, magic-bullet cure must be debunked before we can finally challenge the disease at its root. The real problem is rudimentary: people don't really care, because we haven't cared much about them. Though the described solution may seem politically painful, it is also the only quarter from which genuinely renewed productivity is likely to come. There is unimagined energy in our human resources. Although they are just as endangered today as our natural ones, tapping them involves no technological pipe dreams. We already have the tools to humanize our information systems and the institutions they were initially intended to serve. In the process, we can reinvent more than just the corporation; we can reinvent our culture and ourselves.

Shoshana Zuboff has called this process of opening up the institutional knowledge base "informating," which she contrasts with the more popular -- and autocratic -- notion of automating [see Note 4]. It also has another name, a word with no clear commonly accepted sense that has nonetheless swept Eastern Europe like a forest fire in the past 12 months. That word is, of course, "democracy."

NOTES

  1. Douglas B. Lenat, et al., "Cyc: Toward Programs with Common Sense," Communications of the ACM, Vol. 33, No. 8, August, 1990. See also Douglas B. Lenat and R.V. Guha, Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project, Addison-Wesley, 1990.
  2. See, for instance, Robert Scholes' Protocols of Reading, Yale University Press, 1989. For the other side of the coin, see Clifford Geertz' Works and Lives: The Anthropologist as Author, Stanford University Press, 1988.
  3. For an idea of the scope and value of such institution-wide discovery, see Peter Senge, The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, 1990. Senge heads the System Dynamics group at MIT's Sloan School. For another perspective on his exciting work in this area, see John Briggs and F. David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper & Row, 1989.
  4. Shoshana Zuboff, In the Age of the Smart Machine: The Future of Work and Power, Basic Books, 1988. See, especially, the chapter titled "What Was Managerial Authority?" (emphasis added). This book is an excellent treatment of how information systems are overturning long-cherished assumptions about power in the workplace. The author belongs to a radical group that calls itself Harvard Business School.