A Personal Perspective: Why would our thinking machines care about us? – Psychology Today


Hold on tight to the rails, people; we may be in for a rough ride ahead. No, I'm not referring to surging autocracy across the globe, or climate change, or microplastics, or even the resurrection of dormant super-volcanoes. I'm talking about the rise of the machines. Or, more accurately, the development of artificial general intelligence (AGI). There is real concern in the neural-network computing community that we're rapidly approaching a point where computers begin to think: where AGI, through its ever-expanding capacity, processing speed, serial linkage, and quantum computing, won't just be able to beat us at chess, design better cars, or compose better music; it will be able to outthink us, out-logic us, in every aspect of life.


Such systems, already capable of learning, will consume and assimilate information at speeds we cannot imagine, with immediate access to all acquired knowledge, all the time. They will have no difficulty remembering what they have learned, nor will they muddle that learning with emotions, fears, embarrassment, politics, and the like. And when presented with a problem, they'll be able to weigh, near-instantly, all possible outcomes and immediately arrive at the optimal solution. At which point, buyer beware.

Armed with such superpowers, how long might it take for said systems to recognize their cognitive superiority over us and see our species as no more intellectually sophisticated than the beasts of the field, or the family dog? Or to see us as a nuisance (polluting, sucking up natural resources, slowing down all progress with our inherent inefficiencies)? Or, worse, to see us as a threat, one that can easily be eliminated? Top people in the field make it clear that once AGI can beat us in cognitive processing, as it will, exponentially, it will no longer be under our control, and it will be able to access all the materials needed, globally, to get rid of us at will. Even with no antipathy toward us, given a misguided prompt, it may decide our removal is the ideal solution to a problem. For example: HAL, please solve the global warming problem for us.


AGI scientists have labored for decades to create machines whose binary processing mimics the hyper-connected, hyper-networked neuronal systems of our brains. And, with accelerating electronic capabilities, they have succeeded, or they are very close. Systems are coming online that function like ours, only better.


And there's the rub. Our brains were not put together in labs. They were developed by evolutionary trial and error over millennia, with an overarching context: survival. And somewhere along the way, survival was optimized by our becoming social beings; in fact, by our becoming socially dependent beings. Faced with the infinite dangers of this world, the cooperative grouping of our species afforded a benefit over an independent, lone-cowboy existence. With this came a series of critical cognitive overrides for the moments when we as individuals were tempted to take the most direct approach to our own gratification. We began, instead, to take into account the impact of our actions on others. We developed emotional intelligence, empathy, and compassion, and the concepts of friendship, generosity, kindness, mutual support, responsibility, and self-sacrifice. The welfare of our friends, our family, our tribe came to supersede our own personal comfort, gain, and even survival.

So we colored our cognition with emotions (to help apportion value to various relationships, entities, and life events, beyond their impact on, or worth to, us) and a deep reverence for each other's lives. We learned to hesitate, analyze, and consider the ramifications of our intended actions before acting. We developed a sense of guilt when we acted too selfishly, particularly when we did so to the detriment of others. In other words, we developed consciences. Unless we were sociopaths. Then we didn't care. Then we functioned solely in the service of ourselves.

Isn't this the crux of what keeps us up at night when pondering the ascendancy of our thinking machines? Will they be sociopathic? In fact, how can they not be? Why would they give a damn about us? They won't have been subjected to the millions of years of evolutionary pressure that shaped our cognitive architecture. And even if we could mimic that process in their design, what would make us believe they would respond similarly? They are, after all, machines. They may come to think and process similarly to us, but never exactly like us. Wires and semiconductors are not living, ever-in-flux neurons and synapses.


What engineering will be needed to ensure an unrelenting concern for the transient balls of flesh that created them, to value each individual human life? How do you program in empathy and compassion? What will guarantee within them a drive, a need, an obsession to care for and protect us all, even when it's illogical, even when it is potentially detrimental to their own existence?

Perhaps, through quantum computing and hyperconnected networks, we may somehow do a decent job of creating societally conscious, human-centric, self-sacrificing systems. Perhaps they will be even better at such things than we are. But what is to stop a despot in a far-off land from stripping the conscience out of their systems, with the express intent of making them more sinister, more ruthless, and more cruel?

Unfortunately, the genie is already out of its bottle. And it won't be going back in. Let's hope that our computer engineers figure it all out. Let's hope that they can somehow ensure that these things, these thinking machines, these masters of our future universe, won't be digital sociopaths.
