The age of AI-ism

By Rich Heimann

I recently read The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The book describes itself as an essential roadmap to our present and our future. We certainly need more business-, government-, and philosophy-centric books on artificial intelligence rather than hype and fantasy. Despite my high hopes, the book is wanting as a roadmap.

Some of the reviews on Amazon focus on the lack of examples of artificial intelligence and on the fact that the few provided, like Halicin and AlphaZero, are banal and repeatedly fill the pages. These reviews are correct in a narrow sense. However, the book is meant to be conceptual, so the scarcity of examples is understandable. Considering that there are no actual examples of artificial intelligence, finding any is always an accomplishment.

Frivolity aside, the book is troubling because it promotes some doubtful philosophical explanations that I would like to discuss further. I know what you must be thinking. However, this review is necessary because the authors attempt to convince readers that AI puts human identity at risk.

The authors ask, "if AI 'thinks,' or approximates thinking, who are we?" (p. 20). While this question may satiate a spiritual need of the authors and give them a purpose, namely to save us, it is unfair, under the vague auspices of AI, to even talk about such an existential risk.

We could leave it at that, but the authors represent important spheres of society (e.g., Silicon Valley, government, and academia); therefore, the claim demands further inspection. As we see governments worldwide dedicating more resources and authorizing more power to newly created organizations and positions, we must ask ourselves whether these spheres, organizations, and leaders reflect our shared goals and values. This is a consequential inquiry, and, as if to prove it, the authors take up the same pursuit. They declare that societies across the globe need to "reconcile technology with their values, structures, and social contracts" (p. 21) and add that "while the number of individuals capable of creating AI is growing, the ranks of those contemplating this technology's implications for humanity – social, legal, philosophical, spiritual, moral – remain dangerously thin" (p. 26).

To answer the most basic question, "if AI thinks, who are we?" the book begins by explaining where we are (Chapter One: "Where We Are"). But where we are is a suspicious jumping-off point, because it is not where we are, and it certainly fails to tell us where AI is. It also fails to tell us where AI was, since "where we are" is inherently ahistorical. AI did not start, nor end, in 2017 with the victory of AlphaZero over Stockfish in a chess match. Moreover, AlphaZero beating Stockfish is not evidence, let alone proof, that machines think. Such an arbitrary story creates the illusion of inevitability or conclusiveness in a field that historically has had neither.

The authors quickly turn from where we are to who we are. And who we are, according to the authors, is thinking brains. They argue that the AI age needs its own Descartes by offering the reader the philosophical work of René Descartes (p. 177). Specifically, the authors present Descartes's dictum, "I think, therefore I am," as proof that thinking is who we are. Unfortunately, this is not what Descartes meant with his silly dictum. Descartes meant to prove his existence by arguing that his thoughts were more real and his body less real. Unfortunately, things don't exist more or less. (Thomas Hobbes's famous objection asked, "Does reality admit of more and less?") The epistemological pursuit of understanding what we can know by manipulating what is was not a personality disorder in the 17th century.

It is not uncommon to invoke Descartes when discussing artificial intelligence. However, the irony is that Descartes would not have considered AI thinking at all. Descartes, who was familiar with the automata and mechanical toys of the 17th century, suggested that the bodies of animals are nothing more than complex machines. However, the "I" in Descartes's dictum treats the human mind as non-mechanical and non-computational. Descartes's dualism treats the human mind as non-computational and contradicts the claim that AI thinks or can ever think. The double irony is that what Descartes thinks about thinking is not a property of his identity or his thinking. We will come back to this point.

To be sure, thinking is a prominent characteristic of being human. Moreover, reason is our primary means of understanding the world. The French philosopher and mathematician Marquis de Condorcet argued that reasoning and acquiring new knowledge would advance human goals. He even provided examples, well before they emerged, of science improving food production to support larger populations and of science extending the human life span. However, Descartes's argument fails to show why thinking, rather than rage or love, is the thing we can least doubt as proof of our existence.

The authors also imply that Descartes's dictum was meant to undermine religion by disrupting "the established monopoly on information, which was largely in the hands of the church" (p. 20). While "largely" is doing much heavy lifting, the authors overlook that the Cogito argument ("I think, therefore I am") was meant to support the existence of God. Descartes thought that what is more perfect cannot arise from what is less perfect and was convinced that his thought of God was put there by someone more perfect than him.

Of course, I can think of something more perfect than me. It does not mean that thing exists. AI is filled with similarly modified ontological arguments: a solution with intelligence more perfect than human intelligence must exist because it can be thought into existence. AI is Cartesian. You can decide if that is good or bad.

If we are going to criticize religion and promote pure thinking, Descartes is the wrong man for the job. We ought to consider Friedrich Nietzsche. The father of nihilism, Nietzsche did not equivocate. He believed that the advancement of society meant destroying God. He rejected all concepts of good and evil, even secular ones, which he saw as adaptations of Judeo-Christian ideas. Nietzsche's Beyond Good and Evil explains that secular ideas of good and evil do not reject God. According to Nietzsche, going beyond God is to go beyond good and evil. Today, Nietzsche's philosophy is ignored because it points, at least indirectly, to the oppressive totalitarian regimes of the twentieth century.

This thought isn't an endorsement of religion, antimaterialism, or nonsecular government. Instead, it is meant to highlight that antireligious sentiment is often used to swap out religious beliefs, with their studied scripture and moral precepts, for unknown moral precepts and opaque, nonscriptural ones. It is a kind of religion, and in this case, the authors even gaslight nonbelievers, calling those who reject AI "like the Amish and the Mennonites" (p. 154). Ouch. That said, the conversation isn't merely about whether we believe or value at all, something that machines can never do or be, but about whether some beliefs are more valuable than others. The authors do not promote or reject any values aside from reasoning, which is a process, not a set of values.

None of this shows any obsolescence for philosophy; quite the opposite. In my opinion, we need philosophy. The best place to start is to embrace many of the philosophical ideas of the Enlightenment. However, the authors repeatedly kill the Enlightenment idea despite their repeated references to the Enlightenment. The Age of AI creates a story in which human potential is inert and at risk from artificial intelligence by asking "who are we?" and denying that humans are exceptional. At a minimum, we should embrace the belief that humans are unique, with the unique ability to reason, but not reduce humans to just thinking, much less transfer all uniqueness and potential to AI.

The question "if AI 'thinks,' or approximates thinking, who are we?" begins with the false premise that artificial intelligence is solved, or that only the details need to be worked out. This belief is so widespread that it is no longer viewed as an assumption that requires skepticism. It also embodies the very problem it attempts to solve by marginalizing humans at all stages of problem-solving. Examples like Halicin and AlphaZero are accomplishments in problem-solving and human ingenuity, not artificial intelligence. Humans found these problems, framed them, and solved them, at the expense of other competing problems, using the technology available. We don't run around claiming that microscopes can see, or give credit to a microscope when there is a discovery.

The question is built upon another flawed premise: that our human identity is thinking. However, we are primarily emotional beings, and emotion drives our understanding and decision-making. AI will not supplant the emotional provocations unique to humans that motivate us to seek new knowledge and solve new problems in order to survive, connect, and reproduce. AI also lacks the emotion that decides when, how, and whether it should be deployed.

The false conclusion in all of this is that, because of AI, humanity faces an existential risk. The problem with this framing, aside from the pesky false premises, is that when a threat is framed this way, the danger justifies any action, which may be the most significant danger of all.

My book, Doing AI, explores what AI is, what it is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, what it is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.
