To Understand The Future of AI, Study Its Past – Forbes

Dr. Claude Shannon, one of the pioneers of the field of artificial intelligence, with an electronic mouse designed to navigate its way around a maze after only one 'training' run. May 10, 1952 at Bell Laboratories. (Photo by Keystone/Getty Images)

A schism lies at the heart of the field of artificial intelligence. Since its inception, the field has been defined by an intellectual tug-of-war between two opposing philosophies: connectionism and symbolism. These two camps have deeply divergent visions as to how to "solve" intelligence, with differing research agendas and sometimes bitter relations.

Today, connectionism dominates the world of AI. The emergence of deep learning, which is a quintessentially connectionist technique, has driven the worldwide explosion in AI activity and funding over the past decade. Deep learning's recent accomplishments have been nothing short of astonishing. Yet as deep learning spreads, its limitations are becoming increasingly evident.

If AI is to reach its full potential going forward, a reconciliation between connectionism and symbolism is essential. Thankfully, in both academic and commercial settings, research efforts that fuse these two traditionally opposed approaches are beginning to emerge. Such synthesis may well represent the future of artificial intelligence.

Symbolic approaches to AI seek to build systems that behave intelligently through the manipulation of symbols that map directly to concepts: words and numbers, for instance. Connectionist approaches, meanwhile, represent information and simulate intelligence via massive networks of interconnected processing units (commonly referred to as neural networks), rather than explicitly with symbols.
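A toy sketch helps make the contrast concrete. The rule and the miniature "neuron" below are invented purely for illustration: the symbolic function encodes an explicit, human-readable rule over named concepts, while the connectionist function produces its answer from numeric weights that carry no individual meaning.

```python
import math
import random

# Symbolic: knowledge is written as explicit rules over named concepts.
# A human can read exactly why the system answers as it does.
def symbolic_is_bird(animal: dict) -> bool:
    return bool(animal.get("has_feathers")) and bool(animal.get("lays_eggs"))

# Connectionist: knowledge lives in numeric weights, normally learned from data.
# The "reason" for an output is spread across the weights, which carry no
# individually human-readable meaning.
def tiny_neuron(features, weights, bias):
    activation = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashes to (0, 1)

weights = [random.uniform(-1, 1) for _ in range(3)]  # stand-in for learned values
print(symbolic_is_bird({"has_feathers": True, "lays_eggs": True}))  # True, and we can say why
print(tiny_neuron([1.0, 0.0, 1.0], weights, bias=0.1))              # a number, but why?
```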

In many respects, connectionism and symbolism represent each other's yin and yang: each approach has core strengths which for the other are important weaknesses. Neural networks develop flexible, bottom-up intuition based on the data they are fed. Their millions of interconnected "neurons" allow them to be highly sensitive to gradations and ambiguities in input; their plasticity allows them to learn in response to new information.

But because they are not explicitly programmed by humans, neural networks are "black boxes": it is generally not possible to pinpoint, in terms that are meaningful to humans, why they make the decisions that they do. This lack of explainability is a fundamental impediment to the widespread use of connectionist methods in high-stakes real-world environments.

Symbolic systems do not have this problem. Because these systems operate with high-level symbols to which discrete meanings are attached, their logic and inner workings are human-readable. The tradeoff is that symbolic systems are more static and brittle. Their performance tends to break down when confronted with situations that they have not been explicitly programmed to handle. The real world is complex and heterogeneous, full of fuzzily defined concepts and novel situations. Symbolic AI is ill-suited to grapple with this complexity.

At its inception, the field of artificial intelligence was dominated by symbolism. As a serious academic discipline, artificial intelligence traces its roots to the summer of 1956, when a small group of academics (including future AI icons like Claude Shannon, Marvin Minsky and John McCarthy) organized a two-month research workshop on the topic at Dartmouth College. As is evident in the group's original research proposal from that summer, these AI pioneers' conception of intelligence was oriented around symbolic theories and methods.

Throughout the 1960s and into the 1970s, symbolic approaches to AI predominated. Famous early AI projects like Eliza and SHRDLU are illustrative examples. These programs were designed to interact with humans using natural language (within carefully prescribed parameters). For instance, SHRDLU could successfully respond to human queries like "Is there a large block behind a pyramid?" or "What does the box contain?"

At the same time that symbolic AI research was showing early signs of promise, nascent efforts to explore connectionist paths to AI were shut down in dramatic fashion. In 1969, in response to early research on artificial neural networks, leading AI scholars Marvin Minsky and Seymour Papert published a landmark book called Perceptrons. The book set forth mathematical proofs that seemed to establish that neural networks were not capable of executing certain basic mathematical functions.

Perceptrons' impact was sweeping: the AI research community took the analysis as authoritative evidence that connectionist methods were an unproductive path forward in AI. As a consequence, neural networks all but disappeared from the AI research agenda for over a decade.

Yet despite its early momentum, it would soon become clear that symbolic AI had profound shortcomings of its own.

Symbolic AI reached its mainstream zenith in the early 1980s with the proliferation of what were called expert systems: computer programs that, using extensive if-then logic, sought to codify the knowledge and decision-making of human experts in particular domains. These systems generated tremendous expectations and hype: startups like Teknowledge and Intellicorp raised millions and Fortune 500 companies invested billions in attempts to commercialize the technology.
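The basic mechanics of an expert system can be sketched in a few lines. The rules and the car-repair domain below are invented for illustration; real systems of the era encoded thousands of such rules, but the if-then structure was the same.

```python
# A minimal, invented sketch of expert-system-style if-then logic.
# Each rule pairs a set of conditions with a conclusion.
rules = [
    ({"engine_cranks": False, "battery_ok": False}, "replace battery"),
    ({"engine_cranks": False, "battery_ok": True}, "check starter motor"),
    ({"engine_cranks": True, "fuel_ok": False}, "refuel"),
]

def diagnose(facts: dict) -> list:
    """Fire every rule whose conditions all match the known facts."""
    return [
        conclusion
        for conditions, conclusion in rules
        if all(facts.get(key) == value for key, value in conditions.items())
    ]

print(diagnose({"engine_cranks": False, "battery_ok": False}))  # ['replace battery']
print(diagnose({"engine_cranks": True, "fuel_ok": True}))       # [] -- outside its rules,
                                                                # the system has nothing to say
```

The second query hints at the brittleness discussed below: facts that fall outside the hand-written rules simply produce no answer.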

Expert systems failed spectacularly to deliver on these expectations, due to the shortcomings noted above: their brittleness, inflexibility and inability to learn. By 1987 the market for expert systems had all but collapsed. An "AI winter" set in that would stretch into the new century.

Amid the ashes of the discredited symbolic AI paradigm, a revival of connectionist methods began to take shape in the late 1980s, a revival that has reached full bloom in the present day. In 1986 Geoffrey Hinton, with David Rumelhart and Ronald Williams, published a landmark paper on backpropagation, a method for training neural networks that has become the foundation for modern deep learning. As early as 1989, Yann LeCun had built neural networks using backpropagation that could reliably read handwritten zip codes for the U.S. Postal Service.
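In miniature, backpropagation amounts to measuring a network's error and nudging each weight in the direction that reduces it. The single "neuron," toy data and learning rate below are assumptions made purely for illustration.

```python
import math

# A one-neuron illustration of training by backpropagation (toy values only;
# real networks stack many layers and millions of weights).
w, b = 0.5, 0.0                  # initial weight and bias (arbitrary)
lr = 0.5                         # learning rate, an assumed hyperparameter
data = [(0.0, 0.0), (1.0, 1.0)]  # teach the neuron to echo its input

def forward(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid neuron

for _ in range(5000):
    for x, target in data:
        y = forward(x)
        # Backpropagate: gradient of the squared error through the sigmoid,
        # then adjust each parameter against its share of the error.
        grad = (y - target) * y * (1.0 - y)
        w -= lr * grad * x
        b -= lr * grad

print(round(forward(0.0), 2), round(forward(1.0), 2))  # outputs move toward 0 and 1
```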

But these early neural networks were impractical to train and could not scale. Through the 1990s and into the 2000s, Hinton, LeCun and other connectionist pioneers persisted in their work on neural networks in relative obscurity. In just the past decade, a confluence of technology developments (exponentially increased computing capabilities, larger data sets, and new types of microprocessors) has supercharged these connectionist methods first devised in the 1980s. These forces have catapulted neural networks out of the research lab to the center of the global economy.

Yet for all of its successes, deep learning has meaningful shortcomings. Connectionism is at heart a correlative methodology: it recognizes patterns in historical data and makes predictions accordingly, nothing more. Neural networks do not develop semantic models about their environment; they cannot reason or think abstractly; they do not have any meaningful understanding of their inputs and outputs. Because neural networks' inner workings are not semantically grounded, they are inscrutable to humans.

Importantly, these failings map directly onto symbolic AI's defining strengths: symbolic systems are human-readable and logic-based.

Recognizing the promise of a hybrid approach, AI researchers around the world have begun to pursue research efforts that represent a reconciliation of connectionist and symbolic methods.

DARPA

To take one example, in 2017 DARPA launched a program called Explainable Artificial Intelligence (XAI). XAI is providing funding to 13 research teams across the country to develop new AI methods that are more interpretable than traditional neural networks.

Some of these research teams are focused on incorporating symbolic elements into the architecture of neural networks. Other teams are going further still, developing purely symbolic AI methods.

Autonomous vehicles

Another example of the merits of a dual connectionist/symbolic approach comes from the development of autonomous vehicles.

A few years ago, it was not uncommon for AV researchers to speak of pursuing a purely connectionist approach to vehicle autonomy: developing an "end-to-end" neural network that would take raw sensor data as input and generate vehicle controls as output, with everything in between left to the opaque workings of the model.

As of 2016, prominent AV developers like Nvidia and Drive.ai were building end-to-end deep learning solutions. Yet as research efforts have progressed, consensus has developed across the industry that connectionist-only methods are not workable for the commercial deployment of AVs.

The reason is simple: for an activity as ubiquitous and safety-critical as driving, it is not practicable to use AI systems whose actions cannot be closely scrutinized and explained. Regulators across the country have made clear that an AV system's inability to account for its own decisions is a non-starter.

Today, the dominant (perhaps the exclusive) technological approach among AV programs is to combine neural networks with symbolic features in order to increase model transparency.

Most often, this is achieved by breaking the overall AV cognition pipeline into modules: e.g., perception, prediction, planning, actuation. Within a given module, neural networks are deployed in targeted ways. But layered on top of these individual modules is a symbolic framework that integrates the various components and validates the system's overall output.
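In rough terms, such an architecture might look like the sketch below. The module boundaries, stubbed outputs and the single safety rule are hypothetical; the point is that the neural pieces sit inside a symbolic framework whose final decision can be read and audited.

```python
# Hypothetical sketch of a modular AV pipeline: neural modules wrapped in a
# symbolic framework that validates the overall output.

def perceive(sensor_frame):
    """Neural-network module (stubbed): raw sensor data -> detected objects."""
    return [{"type": "pedestrian", "distance_m": 12.0}]

def predict(objects):
    """Neural-network module (stubbed): objects -> time until each crosses our path."""
    return [{"object": obj, "time_to_path_s": 1.5} for obj in objects]

def plan(predictions):
    """Planning module (stubbed): propose a maneuver."""
    return {"maneuver": "continue", "speed_mps": 10.0}

def symbolic_safety_check(proposed, predictions):
    """Symbolic layer: explicit, human-readable rules that can override the plan."""
    for p in predictions:
        if p["time_to_path_s"] < 2.0 and proposed["maneuver"] != "brake":
            return {"maneuver": "brake", "speed_mps": 0.0}, "override: object about to enter path"
    return proposed, "plan accepted"

objects = perceive(sensor_frame=None)
predictions = predict(objects)
final_plan, reason = symbolic_safety_check(plan(predictions), predictions)
print(final_plan, "|", reason)  # the reason is auditable in plain language
```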

Academia

Finally, at leading academic institutions around the world, researchers are pioneering cutting-edge hybrid AI models to capitalize on the complementary strengths of the two paradigms. Notable examples include a 2018 research effort at DeepMind and a 2019 program led by Josh Tenenbaum at MIT.

In a fitting summary, NYU professor Brenden Lake said of the MIT research: "Neural pattern recognition allows the system to see, while symbolic programs allow the system to reason. Together, the approach goes beyond what current deep learning systems can do."

Taking a step back, we would do well to remember that the human mind, that original source of intelligence that has inspired the entire AI enterprise, is at once deeply connectionist and deeply symbolic.

Anatomically, thoughts and memories are not discretely represented but rather distributed in parallel across the brain's billions of interconnected neurons. At the same time, human intelligence is characterized at the level of consciousness by the ability to express and manipulate independently meaningful symbols. As philosopher Charles Sanders Peirce put it, "We think only in signs."

Any conception of human intelligence that lacked either a robust connectionist or a robust symbolic dimension would be woefully incomplete. The same may prove to be true of machine intelligence. As dazzling as the connectionist-driven advances in AI have been over the past decade, they may be but a prelude to what becomes possible when the discipline more fully harmonizes connectionism and symbolism.
