Over the summer of 1956, an enterprising group of scientists, engineers, and mathematicians held a workshop at Dartmouth College to explore the idea of a new area of scientific research: artificial intelligence. The proposal for the workshop, penned a year earlier and now famous in the field, reads:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
One of the group’s luminaries, the economist Herbert Simon, prophesied some years later that “machines will be capable, within twenty years, of doing any work that a man can do.” And another, the computer scientist Marvin Minsky, declared that “within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm — the problem of creating ‘artificial intelligence’ will be substantially solved.”
At the birth of the discipline of AI we find these two core ideas. First, all aspects of human learning and intelligence — including language, problem-solving, and abstract thought — can in principle be rigorously codified and reproduced by machines. Second, given the accelerating pace of technological progress, the development of truly intelligent machines is inevitable; it’s only a question of time.
The prophecies have not come true within a generation, nor within three. Instead, each generation has had its own prophets of the imminent arrival of human-level AI — and each its skeptics. The latest broadside comes from computer scientist and AI entrepreneur Erik J. Larson in the provocative new book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.
The book is many things: a fascinating history of AI’s roots in the work of Alan Turing and his contemporaries, a powerful and wide-ranging exposé of the limitations of today’s statistics-based “deep learning,” a constructive account of what aspects of the human mind AI still cannot capture, and a warning about the harm that AI hype is inflicting on science and the culture of innovation.
Yet while the critique of AI hype points us in the right direction, it is not radical enough. For Larson is fixated on intelligence’s logical aspects. And while he insightfully argues that even human logic goes beyond what today’s “deep learning” systems can do, in defending the human in this way he misses the broader picture. Our intelligent human capacities to identify meaning and practical significance in the realm of human action, to respond flexibly to social context and cultural nuance, to read literature and tell stories with understanding, to converse and reason with others, and to decide what to do and how to live — all this goes far beyond the domain of formal logic. To make sense of these capacities, one must look to something else.