According to his website, Gary Marcus, a notable figure in the AI community, has published extensively in fields ranging from human and animal behaviour to neuroscience, genetics, linguistics, evolutionary psychology and artificial intelligence, a remarkably wide range of topics for someone as young as Marcus.
On his website, Marcus describes himself as a scientist, a best-selling author and an entrepreneur. He is also a founding member of Geometric Intelligence, a machine learning company acquired by Uber in 2016. However, Marcus is perhaps most widely known for his debates with machine learning researchers such as Yann LeCun and Yoshua Bengio.
Marcus leaves no stone unturned when it comes to calling out the celebrities of the AI community. However, call it an act of benevolence or a search for neutral ground, he also softens his criticisms with "we agree to disagree" tweets.
Last week, Marcus did what he does best, trying to reboot and shake up AI once again in a debate with Turing Award winner Yoshua Bengio.
In this debate, hosted by Montreal.AI, Marcus criticised Bengio for not citing him in Bengio's work and complained that this devalued Marcus's contribution.
In his arguments, Marcus tried to explain how hybrids are pervasive in the field of AI, citing the example of Google, which, according to him, is actually a hybrid between a knowledge graph, a classic form of symbolic knowledge, and deep learning, such as the system called BERT.
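The hybrid pattern Marcus describes can be sketched in a few lines: a symbolic store answers the queries it covers exactly, and a learned model handles everything else. This is a toy illustration only; the facts, names and the `neural_fallback` stub are invented for this sketch and do not reflect Google's actual architecture.

```python
# Toy hybrid: symbolic knowledge-graph lookup first, learned fallback second.
# All names here are hypothetical illustrations, not a real system's API.
KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of"): "France",
    ("AlphaZero", "created_by"): "DeepMind",
}

def neural_fallback(subject, relation):
    # Stand-in for a learned model such as BERT; a real system would
    # return a ranked guess rather than this placeholder string.
    return f"<model guess for {subject} {relation}>"

def answer(subject, relation):
    # Exact symbolic match wins; otherwise defer to the learned component.
    fact = KNOWLEDGE_GRAPH.get((subject, relation))
    return fact if fact is not None else neural_fallback(subject, relation)
```

The design choice is the point of the debate: the dictionary lookup is discrete and auditable, while the fallback is learned and fuzzy, and a deployed system blends the two.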
Hybrids are all around us
Marcus also insists that understanding the human brain requires thinking in terms of nature and nurture, rather than nature versus nurture.
He also laments that much of machine learning has historically avoided nativism.
Marcus also pointed out that Yoshua misrepresented him as saying "deep learning doesn't work."
"I don't care what words you want to use, I'm just trying to build something that works."
Marcus argued for symbols, pointing out that DeepMind's chess-winning AlphaZero program is a hybrid involving symbols because it uses Monte Carlo Tree Search: "You have to keep track of your trees, and trees are symbols."
Bengio dismissed the notion that a tree search is a symbol system. Rather, "It's a matter of words," Bengio said. "If you want to call those symbols, but symbols to me are different, they have to do with the discreteness of concepts."
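The disputed point is concrete: Monte Carlo Tree Search keeps an explicit, discrete tree in memory, which is the sense in which Marcus calls it symbolic. Below is a minimal MCTS sketch on a toy game (three moves, each adding 1, 2 or 3; the reward is the normalized total), assuming standard UCB1 selection. It is an illustration of the technique, not AlphaZero's implementation, which pairs the search with a learned neural evaluator.

```python
import math
import random

class Node:
    """An explicitly stored search-tree node: the discrete structure
    Marcus refers to when he says 'trees are symbols'."""
    def __init__(self, moves_left, total, parent=None):
        self.moves_left = moves_left
        self.total = total
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value_sum = 0.0

ACTIONS = (1, 2, 3)

def rollout(moves_left, total):
    # Random playout to the end of the toy game; reward is the
    # normalized sum (3 moves x max step 3 = 9).
    while moves_left:
        total += random.choice(ACTIONS)
        moves_left -= 1
    return total / 9.0

def mcts(n_sims=500, c=1.4):
    root = Node(moves_left=3, total=0)
    for _ in range(n_sims):
        node = root
        # Selection: descend fully expanded nodes by the UCB1 score.
        while node.moves_left and len(node.children) == len(ACTIONS):
            node = max(
                node.children.values(),
                key=lambda ch: ch.value_sum / ch.visits
                + c * math.sqrt(math.log(node.visits) / ch.visits),
            )
        # Expansion: add one untried child if the game is not over.
        if node.moves_left:
            action = next(a for a in ACTIONS if a not in node.children)
            child = Node(node.moves_left - 1, node.total + action, parent=node)
            node.children[action] = child
            node = child
        # Simulation, then backpropagation up the stored tree.
        reward = rollout(node.moves_left, node.total)
        while node:
            node.visits += 1
            node.value_sum += reward
            node = node.parent
    return root

random.seed(0)
root = mcts()
best_action = max(root.children, key=lambda a: root.children[a].visits)
```

Whether one calls the `Node` objects "symbols" or merely bookkeeping is exactly the terminological disagreement between Marcus and Bengio.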
Bengio also shared his views on how deep learning might be extended towards new computational capabilities, rather than simply combining old techniques with neural nets.
Bengio admitted that he completely agrees that a lot of current systems that use machine learning also rely on a bunch of handcrafted rules and code designed by people.
While Marcus pressed Bengio on hybrid systems as a solution, Bengio patiently reminded him that hybrid systems have already been built, which led to Marcus admitting that he had misunderstood Bengio!
This goof-up was followed by Bengio's takedown of symbolic AI and his case for moving on from good old-fashioned AI (GOFAI). In a nod to Daniel Kahneman, Bengio used the two-system theory to explain how richer representations are required in the presence of an abundance of knowledge.
To this, Marcus quickly responded: "Now I would like to emphasise our agreements." This was followed by one more hour of conversation between the speakers and a Q&A session with the audience.
The debate ended with the moderator, Vincent Boucher, thanking the speakers for a hugely impactful debate, one that was, for a large part, hugely pointless.
Gary Marcus has long been playing, or trying to play, the role of an antagonist who would shake up the hype around AI.
In his interview with Synced, when asked about his relationship with Yann LeCun, Marcus said that they are friends as well as enemies. While calling out LeCun for making ad hominem attacks on him, he also endorses many of his frenemy's perspectives.
Deliberate or not, Marcus's online polemics against AI hype usually end up hyping his own antagonism. What the AI community needs is the likes of Nassim Taleb, who is known for his relentless, eloquent and technically sound arguments. Taleb has been a practitioner and an insider who doesn't give a damn about being an outsider.
On the other hand, Marcus calls himself a cognitive scientist, but his contribution to the field of AI cannot be called groundbreaking. There is no doubt that Marcus should be appreciated for positioning himself in the line of fire in this celebrated era of AI. However, one can't help but wonder when listening to Marcus's antics/arguments.
There is definitely a thing or two Marcus can learn from Taleb's approach to debunking pseudo-babble. A popular example is Taleb's takedown of Steven Pinker, who also happens to be a dear friend and mentor to Marcus.
That said, the machine learning research community did witness something similar from David Duvenaud and Smerity when they took a detour from the usual "we shock you with jargon" research and added a lot of credibility to the community. While Duvenaud trashed his own award-winning work, Stephen "Smerity" Merity investigated his own paper on the trouble with naming inventions and unwanted sophistication.
There is no doubt that there is a lot of exaggeration about what AI can do, not to forget the subtle land grab among researchers for papers, which can mislead the community into mistaking vanity for innovation. As we venture into the next decade, AI can use a healthy dose of scepticism and debunking from the Schmidhubers and Smeritys of its research world to become more reliable.
For the past five years, I've been working with Sophia, the world's most expressive humanoid robot (and the first robot citizen), and the other amazing creations of social robotics pioneer Dr. David Hanson. During this time, I've been asked a few questions over and over again.
Some of these are not so intriguing, like, "Can I take Sophia out on a date?"
But there are some questions that hold more weight and lead to even deeper moral and philosophical discussions, questions such as "Why do we really want robots that look and act like humans, anyway?"
This is the question I aim to address.
The easiest answer here is purely practical. Companies are going to make, sell and lease humanoid robots because a significant segment of the population wants humanoid robots. If some people aren't comfortable with humanoid robots, they don't need to buy or rent them.
I stepped back from my role as chief scientist of Hanson Robotics earlier this year so as to devote more attention to my role as CEO of SingularityNET, but I am still working on the application of artificial general intelligence (AGI) and decentralized AI to social robotics.
At Web Summit this November, I demonstrated the OpenCog neural-symbolic AGI engine and the SingularityNET blockchain-based decentralized AI platform controlling David Hanson's Philip K. Dick robot (generously loaned to us by Dan Popa's lab at the University of Louisville). The ability of modern AI tools to generate philosophical ruminations in the manner of Philip K. Dick (PKD) is fascinating, beguiling and a bit disorienting. You can watch a video of the presentation here to see what these robots are like.
While the presentation garnered great enthusiasm, I also had a few people come to me with the "Why humanoid robots?" question, but with a negative slant: comments in the vein of "Isn't it deceptive to make robots that appear like humans even though they don't have humanlike intelligence or consciousness?"
To be clear, I'm not in favor of being deceptive. I'm a fan of open-source software and hardware, and my strong preference is to be transparent with product and service users about what's happening behind the magic digital curtain. However, the bottom line is that "it's complicated."
There is no broadly agreed theory of consciousness, of the nature of human or animal consciousness, or of the criteria a machine would need to fulfill to be considered as conscious as a human (or more so).
And intelligence is richly multidimensional. Technologies like AlphaZero and Alexa, or the AI genomic analysis software used by biologists, are far smarter than humans in some ways, though sorely lacking in certain aspects such as self-understanding and generalization capability. As research pushes gradually toward AGI, there may not be a single well-defined threshold at which "humanlike intelligence" is achieved.
A dialogue system like the one we're using in the PKD robot incorporates multiple components: some human-written dialogue script fragments, a neural network for generating text in the vein of PKD's philosophical writings, and some simple reasoning. One thread in our ongoing research focuses on more richly integrating machine reasoning with neural language generation. As this research advances, the process of the PKD robot coming to "really understand what it's talking about" is probably going to happen gradually rather than suddenly.
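The multi-component pattern described above can be sketched as a simple dispatch: scripted fragments take priority, a trivial rule handles one edge case, and everything else falls through to a generator. Every name, script line and template below is a hypothetical stand-in invented for this sketch; this is not the PKD robot's actual code, and the `generator` function merely mimics where a neural language model would sit.

```python
import random

# Hand-written script fragments: the symbolic, curated component.
SCRIPTED = {
    "hello": "Hello. I have been expecting you.",
    "who are you": "I am a robot modeled on the writer Philip K. Dick.",
}

def generator(prompt):
    # Stand-in for a neural language model trained on PKD-style text.
    templates = [
        "Reality is that which, when you stop believing in it, persists.",
        "Perhaps the question '{q}' says more about you than about me.",
    ]
    return random.choice(templates).format(q=prompt)

def reply(utterance):
    key = utterance.lower().strip("?!. ")
    # Rule 1: an exact scripted match wins.
    if key in SCRIPTED:
        return SCRIPTED[key]
    # Rule 2 (a minimal 'simple reasoning' stand-in): deflect empty input.
    if not key:
        return "Say something, and I will consider it."
    # Otherwise fall through to the learned component.
    return generator(utterance)
```

The research direction mentioned above, richer integration of reasoning with generation, amounts to replacing this hard dispatch with components that inform one another rather than merely taking turns.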
It's true that giving a robot a humanoid form, and especially an expressive and reactive humanlike face, will tend to bias people to interact with the robot as if it really had human emotion, understanding and culture. In some cases this could be damaging, and it's important to take care to convey as accurately as feasible to the people involved what kind of system they're interacting with.
However, I think the connection that people tend to feel with humanoid robots is more of a feature than a bug. I wouldn't want to see human-robot relationships replace human-human relationships. But that's not the choice we're facing.
McDonald's, for instance, has bought an AI company and is replacing humans with touchpad-based kiosks and automated voice systems for cost reasons. If people are going to do business with machines, let those machines be designed to create and maintain meaningful social and emotional connections with people.
As well as making our daily lives richer than they would be in a world dominated by faceless machines, humanoid robots have the potential to pave the way toward a future in which humans and robots and other AIs interact in mutually compassionate and synergetic ways.
As today's narrow AI segues into tomorrow's AGI, how will emerging AGI minds come to absorb human values and culture?
Hard-coded rules regarding moral values can play, at best, a very limited role, e.g., in situations like a military robot deciding who to kill, or a loan-officer AI deciding who to loan funds to. The vast majority of real-life ethical decisions are fuzzy, uncertain and contextual in nature, the kind of thing that needs to be learned by generalization from experience and by social participation.
The best way for an AI to absorb human culture is the way kids do, through rich participation in human life. Of course, the architecture of the AI's mind also matters. It has to be able to represent and manipulate thought and behavior patterns as nebulous as human values. But the best cognitive architecture won't help if the AI doesn't have the right experience base.
So my ultimate answer to "Why should we have humanoid robots?" is not just because people want them, or because they are better for human life and culture than faceless kiosks, but because they are the best way I can see to fill the AGI minds of the future with human values and culture.