This book is a collection of essays that grew out of a forum on the ideas presented in Ray Kurzweil’s [[The Age of Spiritual Machines]]. Several scientists and academics present their arguments against the theses of Kurzweil’s book, and following the critics’ essays are Kurzweil’s responses to each in turn, in which he adeptly counters their arguments and tears some of them to complete shreds. One of the latter is made by [http://en.wikipedia.org/wiki/John_Searle John Searle], whose famous “Chinese Room” argument we discussed in an [[A.I.]] class at [[UCLA]] years ago; it didn’t leave me quite satisfied back then, either.
: John Searle is very well known for his development of a thought experiment called the “Chinese Room” argument. He set out to prove that human thought is not simply computation. His main premise is that a computational process cannot, in itself, have an “understanding” of events and processes.
In more detail, from the [http://en.wikipedia.org/wiki/Chinese_Room Chinese Room] Wikipedia article:
: Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
: Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn’t, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don’t understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don’t understand what they’re ‘saying’, just as he doesn’t.
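To make the setup concrete, here is a toy sketch of the room as a program (my own illustration, not anything from the book): a lookup table plays the rule book, and a trivial loop plays Searle. The table entries are invented placeholders; a real rule book capable of passing the Turing Test would be unimaginably larger, which is exactly where Kurzweil’s retort below gets its leverage.

```python
# Toy "Chinese Room": the operator mechanically maps input symbols to
# output symbols using a rule book, understanding neither.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # invented sample rules; a real book
    "今天天气怎么样？": "今天天气很好。",      # would have to cover any possible input
}

def operator(symbols: str) -> str:
    """Searle's role: look up the incoming symbols, copy out the answer."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "sorry, I don't understand"

print(operator("你好吗？"))  # -> 我很好，谢谢。
```

The operator function is deliberately trivial: all the apparent competence lives in the table it consults.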
Kurzweil’s retort, in an article whose title begins “Locked In His Chinese Room…”, is essentially that in order to pass the Turing Test, the “rule book” has to be so extraordinarily complex that it is the rule book itself which necessarily understands Chinese in the fullest sense of the word, just as well as a Chinese-speaking person does. The part Searle plays sitting inside the computer, consulting the rule book, is merely that of one of the senses. He’s an eye, let’s say. Does the eye have an understanding of Chinese? Of course not. But the rule book is entirely equivalent to a Chinese-speaking human in its range of responses (such that it can pass the Turing Test), and so it would need to be at least as intelligent as a human. It follows that the rule book, being that complex, certainly does have what we would term “understanding”.
Another class of questions (phrased as counter-arguments) that Kurzweil answers: could we simulate human intelligence in a computer to the degree that such a computer could pass a Turing Test but //not// be conscious? Such a simulation would be intelligent by virtue of its responses, but if it isn’t really thinking, just computing the answer to the question “how //would// a human respond?”, could we really say that it has understanding? How could it have consciousness? Even if we’re simulating a human brain itself, the simulation might run via a different method: not by simulating neurons in a massively parallel neural net with the structure of a human mind, but via a single-instruction-at-a-time computer program that evaluates the functional result of the same neural net. How could that have consciousness?
In answer to this one, he points out that it doesn’t make sense to implement a simulation this way; we’re better off simulating the actual neural-net structure of the brain and observing its outputs. In fact, human responses vary so much that a simulation of what the brain //would do// would have to be orders of magnitude more complex than the brain itself, and to give adequate results it would essentially have to contain a model of the brain in some form. And that brain, in its “some form”, we would come to accept as conscious.
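One premise in the question above is worth examining directly: that a serial program “evaluating the functional result of the same neural net” is a different beast from the parallel net. A quick sketch (mine, with made-up weights) shows that the one-operation-at-a-time loop and the all-neurons-at-once formulation compute exactly the same outputs, since serial evaluation is just a scheduling choice for the same function.

```python
import math

# Tiny one-layer "neural net" with invented weights, purely for illustration.
WEIGHTS = [[0.5, -0.2, 0.1],
           [0.3,  0.8, -0.5]]   # 2 neurons, 3 inputs each
INPUTS = [1.0, 0.5, -1.0]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Massively parallel" view: every neuron fires at once (conceptually;
# Python still runs this serially, but the math is the parallel formulation).
parallel = [sigmoid(sum(w * x for w, x in zip(row, INPUTS))) for row in WEIGHTS]

# Single-instruction-at-a-time view: one scalar multiply-add per step.
serial = []
for row in WEIGHTS:
    total = 0.0
    for w, x in zip(row, INPUTS):
        total += w * x          # one operation at a time
    serial.append(sigmoid(total))

assert parallel == serial        # same function, different evaluation order
print(parallel)
```

The assert passes because both versions apply the same operations in the same order; only the framing differs, which is what makes the “but it runs one instruction at a time” objection so slippery.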
Very interesting read. Read [[The Age of Spiritual Machines]], then work through this one.