Category Archives: AlphaZero
Singapore's Central Bank Partners With Google to Explore AI for Internal Use – 24/7 Wall St.
The central bank of Singapore is partnering with Google's cloud unit to explore potential internal use cases in the burgeoning artificial intelligence (AI) space and advance the development of innovative technologies in the Asian hub, Nikkei reported on Wednesday.
The Monetary Authority of Singapore (MAS), the central banking authority of the Southeast Asian city-state, is teaming up with Google's cloud division to develop AI projects for internal use and equip its employees with deep AI skill sets.
According to Nikkei's report, the MAS will collaborate with Google Cloud on generative AI initiatives to facilitate the use of internal applications in a manner grounded in responsible AI practices. The move comes after large language models (LLMs) and other generative AI products like OpenAI's ChatGPT took the world by storm with their ability to produce impressive text and media content.
The partnership also represents part of the MAS's broader strategy to streamline the development of state-of-the-art technologies in Singapore.
"MAS has been committed to leveraging technology and innovation to their fullest potential. This collaboration allows us to explore potential use cases in our functions and operations that could harness generative AI while prioritizing information security as well as data and AI model governance," said Vincent Loy, an assistant managing director for technology at MAS.
The MAS did not specify how Google's AI technology would be implemented. However, it explained that the partnership would establish a framework for identifying potential use cases, conducting technical pilots, and co-creating solutions for the central bank's digital services.
Google, one of the biggest tech companies in the world, has been developing artificial intelligence (AI) products and services for several years, pouring significant capital into its research and development. In 2014, the company acquired DeepMind Technologies, a leading AI startup known for its breakthroughs in deep reinforcement learning.
Over recent years, the tech behemoth has made significant progress in AI by developing advanced models like AlphaGo, AlphaZero, and BERT. Moreover, to support AI research and innovation, the company has also launched several AI-related tools and frameworks, such as TensorFlow an open-source machine learning and AI library widely adopted by researchers and developers worldwide.
Most recently, Google released Bard, a conversational chatbot launched just a few months after ChatGPT, whose success left many tech giants, including Microsoft and Baidu, scrambling to focus on AI.
This article originally appeared on The Tokenist
Meta AI Boss: current AI methods will never lead to true intelligence – Gizchina.com
Meta is one of the leading companies in AI development globally. However, the company appears not to have confidence in current AI methods. According to Yann LeCun, chief AI scientist at Meta, improvement is needed before true intelligence is possible. LeCun claims that the most current AI methods will never lead to true intelligence, and he is skeptical of many of today's most successful deep learning methods.
The Turing Award winner said that the pursuits of his peers are necessary, but not sufficient. These include research on large language models such as the Transformer-based GPT-3. As LeCun describes it, Transformer proponents believe: "We tokenize everything and train giant models to make discrete predictions, and that's where AI stands out."
"They're not wrong. In that sense, this could be an important part of future intelligent systems, but I think it's missing the necessary parts," explained LeCun. LeCun perfected the use of convolutional neural networks, which have been incredibly productive in deep learning projects.
LeCun also sees flaws and limitations in many other highly successful areas of the discipline. Reinforcement learning is never enough, he insists. Researchers like DeepMind's David Silver, who developed the AlphaZero program that mastered chess and Go, focus on very action-oriented programs, LeCun observes; he claims that most of our learning is done not by taking actual actions, but by observation.
LeCun, 62, has a strong sense of urgency about confronting the dead ends he believes many may be heading toward, and he will try to steer his field in the direction he thinks it should go. "We've seen a lot of claims about what we should be doing to push AI to human-level intelligence. I think some of those ideas are wrong," LeCun said. "Our intelligent machines aren't even at the level of cat intelligence. So why don't we start there?"
LeCun believes that not only academia but also the AI industry needs profound reflection. Self-driving car groups, such as startups like Wayve, think they can learn just about anything by throwing data at large neural networks, which seems a little too optimistic, he said.
"You know, I think it's entirely possible for us to have Level 5 autonomous vehicles without common sense, but you have to work on the design," LeCun said. He believes that such over-engineered self-driving technology will, like all the computer vision programs made obsolete by deep learning, prove fragile. "At the end of the day, there will be a more satisfying and possibly better solution that involves systems that better understand how the world works," he said.
LeCun hopes to prompt a rethinking of the fundamental concepts of AI: "You have to take a step back and say, okay, we built the ladder, but we want to go to the moon, and this ladder can't possibly get us there. I would say it's like making a rocket: I can't tell you the details of how we make a rocket, but I can give the basics."
According to LeCun, AI systems need to be able to reason, and the process he advocates is minimizing certain underlying (latent) variables, which enables the system to plan and reason. Furthermore, LeCun argues that the probabilistic framework should be abandoned, because it is difficult to apply when we want to capture dependencies between high-dimensional continuous variables. He also advocates forgoing generative models; otherwise, the system will devote too many resources to predicting things that are hard to predict and will ultimately consume too many resources.
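The "reasoning by minimizing latent variables" idea can be illustrated, very loosely, with a toy energy function. This sketch is my own illustration, not LeCun's actual proposal: an energy E(x, z) = ||x − Wz||² is minimized over the latent z by gradient descent instead of by probabilistic sampling, and the matrix W and the observation are made up for the example.

```python
import numpy as np

# Toy illustration (not LeCun's actual architecture): reasoning as
# minimizing an energy E(x, z) = ||x - W z||^2 over a latent variable z,
# by gradient descent rather than probabilistic inference.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))          # hypothetical decoder from latent z to observation x
x = W @ np.array([1.0, -2.0, 0.5])   # an observation generated by a known latent

def energy(x, z, W):
    """Squared reconstruction error: low energy means z 'explains' x well."""
    r = x - W @ z
    return float(r @ r)

z = np.zeros(3)                      # uninformed initial latent guess
lr = 0.01
for _ in range(2000):
    grad = -2.0 * W.T @ (x - W @ z)  # dE/dz
    z -= lr * grad                   # descend the energy with respect to z

print(energy(x, z, W))               # close to zero: the inferred latent explains x
```

In LeCun's framing, planning would then amount to searching over such latent variables with a learned world model; the gradient loop above is only the simplest possible stand-in.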
In a recent interview with business technology outlet ZDNet, LeCun revealed some details from a paper he wrote exploring the future of AI, in which he laid out his research direction for the next ten years. Transformer advocates, currently exemplified by GPT-3, believe that as long as everything is tokenized and huge models are trained to make discrete predictions, AI will somehow emerge. LeCun sees this as only one component of future intelligent systems, not the key to them.
Even reinforcement learning cannot solve the problem, he explained: although such systems are good chess players, they are still only programs focused on actions. LeCun adds that many people claim to be advancing AI in some way, but that these ideas mislead us. He further believes that the common sense of current intelligent machines is not even as good as a cat's; this, he believes, is the root of AI's slow development. The current AI methods have serious flaws.
As a result, LeCun confessed that he had given up on studying the use of generative networks to predict the next frame of a video from the current frame.
"It was a complete failure," he adds.
LeCun attributed the failure to the probability-based models that limited him. At the same time, he pushed back against those who treat probability theory as the only framework for explaining machine learning; in practice, a world model built entirely on probability is difficult to achieve. He has not yet solved this underlying problem, but he hopes to rethink it by drawing an analogy.
It is worth mentioning that LeCun talked bluntly about his critics in the interview. He specifically took a jab at Gary Marcus, a professor at New York University who he claims has never made any contribution to AI.
I'm Bin Yu, the head of the Yu Group at Berkeley, which consists of 15-20 students and postdocs from Statistics and EECS. I was formally trained as a statistician, but my research interests and achievements extend beyond the realm of statistics. Together with my group, my work has leveraged new computational developments to solve important scientific problems by combining novel statistical machine learning approaches with the domain expertise of my many collaborators in neuroscience, genomics and precision medicine. We also develop relevant theory to understand random forests and deep learning for insight into and guidance for practice.
We have developed the PCS framework for veridical data science (or responsible, reliable, and transparent data analysis and decision-making). PCS stands for predictability, computability and stability, and it unifies, streamlines, and expands on ideas and best practices of machine learning and statistics.
In order to augment empirical evidence for decision-making, we are investigating statistical machine learning methods/algorithms (and associated statistical inference problems) such as dictionary learning, non-negative matrix factorization (NMF), EM and deep learning (CNNs and LSTMs), and heterogeneous effect estimation in randomized experiments (X-learner). Our recent algorithms include staNMF for unsupervised learning, iterative Random Forests (iRF) and signed iRF (s-iRF) for discovering predictive and stable high-order interactions in supervised learning, contextual decomposition (CD) and aggregated contextual decomposition (ACD) for interpretation of Deep Neural Networks (DNNs).
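As a rough illustration of the base technique behind staNMF, here is plain NMF with Lee-Seung multiplicative updates. The data, rank, and iteration count below are arbitrary, and the stability-based model selection that distinguishes staNMF (and everything specific to the group's other algorithms) is omitted.

```python
import numpy as np

# Minimal NMF sketch (Lee & Seung multiplicative updates), shown only to
# illustrate the base factorization X ≈ W H with nonnegative factors.
# staNMF additionally selects the rank k by a stability criterion, which
# this toy omits; all numbers here are made up.

rng = np.random.default_rng(0)
X = rng.random((20, 10))            # hypothetical nonnegative data matrix
k = 4                               # rank chosen arbitrarily for the demo
W = rng.random((20, k)) + 0.1
H = rng.random((k, 10)) + 0.1
eps = 1e-9                          # guards against division by zero

err0 = np.linalg.norm(X - W @ H)    # initial Frobenius reconstruction error
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H; nonnegative ratios keep H >= 0
    W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W likewise
err = np.linalg.norm(X - W @ H)     # error after fitting

print(err0, err)                    # the reconstruction error decreases
```

The multiplicative form is a common choice because it preserves nonnegativity without an explicit projection step.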
Stability expanded, in reality. Harvard Data Science Review (HDSR), 2020.
Data science process: one culture. JASA, 2020.
Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist, Nature Medicine, 2020.
Veridical data science (PCS framework), PNAS, 2020 (QnAs with Bin Yu)
Breiman Lecture (video) at NeurIPS "Veridical data Science" (PCS framework and iRF), 2019; updated slides, 2020
Definitions, methods and applications in interpretable machine learning, PNAS, 2019
Data wisdom for data science (blog), 2015
IMS Presidential Address "Let us own data science", IMS Bulletin, 2014
Stability, Bernoulli, 2013
Embracing statistical challenges in the IT age, Technometrics, 2007
Honorary Doctorate, University of Lausanne (UNIL) (Faculty of Business and Economics), June 4, 2021 (Interview of Bin Yu by journalist Nathalie Randin, with an introduction by Dean Jean-Philippe Bonardi of UNIL in French (English translation))
CDSS news on our PCS framework: "A better framework for more robust, trustworthy data science", Oct. 2020
UC Berkeley to lead $10M NSF/Simons Foundation program to investigate theoretical underpinnings of deep learning, Aug. 25, 2020
Curating COVID-19 data repository and forecasting county-level death counts in the US, 2020
Interviewed by PBS Nova about AlphaZero, 2018
Mapping a cell's destiny, 2016
Seeking Data Wisdom, 2015
Member, National Academy of Sciences, 2014
Fellow, American Academy of Arts and Sciences, 2013
One of the 50 best inventions of 2011 by Time Magazine, 2011
The Economist Article, 2011
ScienceMatters @ Berkeley. Dealing with Cloudy Data, 2004
The age of AI-ism – TechTalks
By Rich Heimann
I recently read The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The book describes itself as "an essential roadmap to our present and our future." We certainly need more business-, government-, and philosophy-centric books on artificial intelligence rather than hype and fantasy. Despite high hopes, though, the book falls short of its promise as a roadmap.
Some of the reviews on Amazon focus on the lack of examples of artificial intelligence and the fact that the few provided, like Halicin and AlphaZero, are banal and repeatedly fill up the pages. These reviews are correct in a narrow sense. However, the book is meant to be conceptual, so the scarcity of examples is understandable. Considering that there are no actual examples of artificial intelligence, finding any is always an accomplishment.
Frivolity aside, the book is troubling because it promotes some doubtful philosophical explanations that I would like to discuss further. I know what you must be thinking. However, this review is necessary because the authors attempt to convince readers that AI puts human identity at risk.
The authors ask, "if AI thinks, or approximates thinking, who are we?" (p. 20). While this statement may satiate a spiritual need of the authors and provide them a purpose to save us, it is unfair, under the vague auspices of AI, to even talk about such an existential risk.
We could leave it at that, but the authors represent important spheres of society (e.g., Silicon Valley, government, and academia); therefore, the claim demands further inspection. As we see governments worldwide dedicating more resources and authorizing more power to newly created organizations and positions, we must ask ourselves if these spheres, organizations, and leaders reflect our shared goals and values. This is a consequential inquiry, and to prove it, the authors pursue the same one. They declare that societies across the globe need to "reconcile technology with their values, structures, and social contracts" (p. 21) and add that while the number of individuals capable of creating AI is growing, "the ranks of those contemplating this technology's implications for humanity (social, legal, philosophical, spiritual, moral) remain dangerously thin" (p. 26).
To answer the most basic question, "if AI thinks, who are we?" the book begins by explaining where we are (Chapter One: "Where We Are"). But where we are is a suspicious jumping-off point, because it is not where we are, and it certainly fails to tell us where AI is. It also fails to tell us where AI was, since "where we are" is inherently ahistorical. AI did not start, nor end, in 2017 with the victory of AlphaZero over Stockfish in a chess match. Moreover, AlphaZero beating Stockfish is not evidence, let alone proof, that machines think. Such an arbitrary story creates the illusion of inevitability or conclusiveness in a field that historically has had neither.
The authors quickly turn from where we are to who we are. And who we are, according to the authors, is thinking brains. They argue that the AI age needs its own Descartes by offering the reader the philosophical work of René Descartes (p. 177). Specifically, the authors present Descartes' dictum, "I think, therefore I am," as proof that thinking is who we are. Unfortunately, this is not what Descartes meant with his silly dictum. Descartes meant to prove his existence by arguing that his thoughts were more real and his body less real. Unfortunately, things don't exist more or less. (Thomas Hobbes' famous objection asked, "Does reality admit of more and less?") The epistemological pursuit of understanding what we can know by manipulating what is was not a personality disorder in the 17th century.
It is not uncommon to invoke Descartes when discussing artificial intelligence. However, the irony is that Descartes would not have considered AI thinking at all. Descartes, who was familiar with the automata and mechanical toys of the 17th century, suggested that the bodies of animals are nothing more than complex machines. However, the "I" in Descartes' dictum treats the human mind as non-mechanical and non-computational. Descartes' dualism treats the human mind as non-computational and contradicts the claim that AI does, or can ever, think. The double irony is that what Descartes thinks about thinking is not a property of his identity or his thinking. We will come back to this point.
To be sure, thinking is a prominent characteristic of being human. Moreover, reason is our primary means of understanding the world. The French philosopher and mathematician Marquis de Condorcet argued that reasoning and acquiring new knowledge would advance human goals. He even provided examples of science impacting food production to better support larger populations, and of science extending the human life span, well before they emerged. However, Descartes' argument fails to show why thinking, and not rage or love, is the thing one can least doubt about one's existence.
The authors also imply that Descartes' dictum was meant to undermine religion by "disrupting the established monopoly on information, which was largely in the hands of the church" (p. 20). While "largely" is doing much heavy lifting, the authors overlook that the Cogito argument ("I think, therefore I am") was meant to support the existence of God. Descartes thought what is more perfect cannot arise from what is less perfect and was convinced that his thought of God was put there by someone more perfect than him.
Of course, I can think of something more perfect than me. It does not mean that thing exists. AI is filled with similarly modified ontological arguments: a solution with intelligence more perfect than human intelligence must exist because it can be thought into existence. AI is Cartesian. You can decide if that is good or bad.
If we are going to criticize religion and promote pure thinking, Descartes is the wrong man for the job. We ought to consider Friedrich Nietzsche. The father of nihilism did not equivocate: he believed that the advancement of society meant destroying God. He rejected all concepts of good and evil, even secular ones, which he saw as adaptations of Judeo-Christian ideas. Nietzsche's Beyond Good and Evil explains that secular ideas of good and evil do not reject God. According to Nietzsche, going beyond God is to go beyond good and evil. Today, Nietzsche's philosophy is ignored because it points, at least indirectly, to the oppressive totalitarian regimes of the twentieth century.
This is not an endorsement of religion, antimaterialism, or nonsecular government. Instead, it is meant to highlight that antireligious sentiment is often used to swap out religious beliefs, with their studied scripture and moral precepts, for unknown moral precepts and opaque nonscriptural ones. It is a kind of religion, and in this case the authors even gaslight nonbelievers, likening those who reject AI to the Amish and the Mennonites (p. 154). Ouch. That said, the point is not merely that we believe or value at all (something machines can never do or be), but that some beliefs are more valuable than others. The authors do not promote or reject any values aside from reasoning, which is a process, not a set of values.
None of this shows any obsolescence of philosophy; quite the opposite. In my opinion, we need philosophy, and the best place to start is to embrace many of the philosophical ideas of the Enlightenment. However, the authors repeatedly kill the Enlightenment ideal despite repeated references to the Enlightenment. The Age of AI creates a story in which human potential is inert and at risk from artificial intelligence, by asking "who are we?" and denying that humans are exceptional. At a minimum, we should embrace the belief that humans are unique, with a unique ability to reason, and not reduce humans to mere thinking, much less transfer all uniqueness and potential to AI.
The question "if AI thinks, or approximates thinking, who are we?" begins with the false premise that artificial intelligence is solved, or that only the details need to be worked out. This belief is so widespread that it is no longer viewed as an assumption requiring skepticism. It also embodies the very problem it attempts to solve by marginalizing humans at all stages of problem-solving. Examples like Halicin and AlphaZero are accomplishments of problem-solving and human ingenuity, not artificial intelligence. Humans found these problems, framed them, and solved them, at the expense of other competing problems, using the technology available. We don't run around claiming that microscopes can see, or give credit to a microscope when there is a discovery.
The question is built upon another flawed premise: that our human identity is thinking. However, we are primarily emotional creatures, and emotion drives our understanding and decision-making. AI will not supplant the emotional provocations unique to humans that motivate us to seek new knowledge and solve new problems in order to survive, connect, and reproduce. AI also lacks the emotion that decides when, how, and whether it should be deployed.
The false conclusion in all of this is that, because of AI, humanity faces an existential risk. The problem with this framing, aside from the pesky false premises, is that when a threat is framed this way, the danger justifies any action, which may be the most significant danger of all.
My book, Doing AI, explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.
About the author
Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.
Attempt to compare different types of intelligence falls a bit short – Ars Technica
"What makes machines, animals, and people smart?" asks the subtitle of Paul Thagard's new book, Bots and Beasts. Not "Are computers smarter than humans?" or "Will computers ever be smarter than humans?" or even "Are computers and animals conscious, sentient, or self-aware?" (whatever any of that might mean). And that's unfortunate, because most people are probably more concerned with questions like those.
Thagard is a philosopher and cognitive scientist, and he has written many books about the brain, the mind, and society. In this one, he defines what intelligence is and delineates the 12 features and eight mechanisms that he thinks comprise it, so that he can compare the intelligences of these three very different types of beings.
He starts with a riff on the Aristotelian conception of virtue ethics. Whereas there a good person is defined as someone who possesses certain virtues, in Thagard's case a smart person is defined as someone who epitomizes certain ways of thinking. Confucius, Mahatma Gandhi, and Angela Merkel excelled at social innovation; Thomas Edison and George Washington Carver excelled at technological innovation; he lists Beethoven, Georgia O'Keeffe, Jane Austen, and Ray Charles as some of his favorite artistic geniuses; and Charles Darwin and Marie Curie serve as his paragons of scientific discovery.
Next he chooses six smart computers and six smart animals and grades them on how they measure up to people on these different features and mechanisms of intelligence. The computers are IBM Watson, DeepMind AlphaZero, self-driving cars, Alexa, Google Translate, and recommender algorithms; the animals are bees, octopuses, ravens, dogs, dolphins, and chimps.
All fare pretty abysmally on his report card. Animals as a class do better, but computers are evolving much more quickly. The upshot of his argument is that while some computers can beat the best humans at Jeopardy, Go, chess, debate, some medical diagnoses, and navigation, they are not smarter than humans because they have a low EQ. Or they may be smarter than some humans at some things, but they are not smarter than humanity with its diverse range of specializations.
Animals, on the other hand, can use their bodies to act upon the world and perceive that world, often better than people, but can't reason. It's almost as if humans were animals with computing devices in our heads.
After the grading, the book becomes pretty wide-ranging, with each chapter tackling a big topic that could be better handled in its own book (and often has been). "Human Advantages" and "When Did Minds Begin" got better treatment in Darwin's Unfinished Symphony; "The Morality of Bots and Beasts" and "Ethics of AI" have been better covered in countless works of fiction, like I, Robot; Blade Runner; and Mary Doria Russell's The Sparrow, to mention a very few. These works not only raise the same ideas, they do so in a more nuanced, thought-provoking, and much more interesting way.
Thagard lists his features and mechanisms of intelligence, the specific characteristics that give advantages to humans, and the principles that should dictate the future development of AI, and that's pretty much all of his arguments. This book has a lot of lists. Like, a lot. They make his points straightforward and methodical, but also so, so boring to read.
He doesn't claim that computers can't or will never have emotions; he just concludes that they probably won't, because why would anyone ever want to make computers with emotions? So for now our spot at the pinnacle of intelligence seems safe. But if we ever meet up with a C-3PO (human-cyborg relations) or a Murderbot, we might be in trouble.
Future Prospects of Data Science with Growing Technologies – Analytics Insight
Data science, in simple words, means the study of data. It entails developing methods of recording, storing, and analyzing data to successfully extract useful information. Data science brings together and makes use of several statistical procedures, covering data modeling, data transformations, machine learning, and statistical operations including descriptive and inferential statistics. Statistics is the primary asset for every data scientist.
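The distinction between descriptive and inferential statistics mentioned above can be sketched with Python's standard library alone; the sample values below are invented for illustration.

```python
import statistics
from math import sqrt

# Made-up sample data for illustration only.
sample = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]

# Descriptive statistics: summarize the data you actually have.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)       # sample standard deviation

# Inferential statistics: reason beyond the sample, e.g. a rough 95%
# confidence interval for the population mean (normal approximation).
half_width = 1.96 * stdev / sqrt(len(sample))
ci = (mean - half_width, mean + half_width)

print(mean, stdev, ci)
```

The descriptive step only restates the sample; the confidence interval is the inferential step, because it makes a claim about the unseen population.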
With cryptocurrency, the biggest innovation of the time, the demand for controlling data online has become a crucial challenge. Data science puts forward various techniques to identify groups of people and provide them with the best possible protection from fraud.
However, the application of data science is not confined to one field; rather, its applications are disseminated across various sectors.
Healthcare sector- The biggest application of data science is in healthcare. The availability of large datasets of patients can be used to build data science approaches that identify diseases at a very early stage. Healthcare is one of the biggest sectors offering opportunities for professionals who can combine their medical expertise with data science and provide immediate help to suffering patients.
Arms and Weapons- Data science can help in building automated solutions that identify an attack at a very early stage. Beyond that, data science can help in constructing automated weapons smart enough to identify when to fire and when not to.
Banking and Finance- In the banking and finance sector, data science can be used to manage money effectively, investing in the right places based on data science predictions for the best results.
Other than the above sectors, data science is also applied in the automobile industry, for example in self-driving cars and fixed-destination cabs, as well as in power and energy, where data science can predict the maximum safe potential and help in building AI bots that can easily handle enormous power sources.
The implementation of data science cannot be ignored, as it is already in action today. When you look for something on Myntra or Flipkart and then get similar recommendations, or see similar advertisements for whatever you have searched on the internet, that is data science at work. The whole world is operated by data science: for every single search on Google, a data science process is activated.
The future of data science is growing. According to cloud vendor Domo, even accounting for the Earth's entire population, the average person was expected to generate 1.7 megabytes of data per second by the end of 2020.
An overarching motif today and moving ahead: big data is assured an authoritative role in the future. Data will shape modern health care, finance, business management, marketing, government, energy, and manufacturing. The scale of big data is truly staggering, as it has already entwined itself in fundamental aspects of business as well as personal life.
Since tech is a prime concern for almost all businesses, there is a high possibility of growth in data science jobs.
Artificial intelligence is the most impactful technology that data scientists will run into. Today AI is already refining business operations and promises to be a major trend in the near future. The applications of AI in today's world have driven the adoption of related approaches such as machine learning and deep learning, and these will lead the way as the future of data science. Machine learning is the ability of statistical models to develop capabilities and improve their performance over time in the absence of programmed instructions. This principle can be seen in the chess machine developed by Google's DeepMind unit: AlphaZero. AlphaZero improves on its computerized chess-playing peers without explicit instructions, learning from its own moves to reach the desired outcome.
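The "improving from its own moves without programmed instructions" idea can be reduced to a toy far simpler than AlphaZero (which combines self-play tree search with deep networks): an epsilon-greedy agent that learns which of three moves wins most often purely from observed rewards. The win rates below are invented for the demonstration.

```python
import random

# Toy epsilon-greedy learner: no rules are hand-coded; the agent only
# observes rewards from its own moves and updates its value estimates.
# This is NOT AlphaZero, just the simplest possible learning-from-outcomes loop.

random.seed(0)
true_win_rates = [0.2, 0.5, 0.8]   # hidden quality of each move (unknown to the agent)
values = [0.0, 0.0, 0.0]           # agent's running estimate per move
counts = [0, 0, 0]
epsilon = 0.1                      # fraction of moves spent exploring

for _ in range(5000):
    if random.random() < epsilon:
        move = random.randrange(3)          # explore a random move
    else:
        move = values.index(max(values))    # exploit the best-known move
    reward = 1.0 if random.random() < true_win_rates[move] else 0.0
    counts[move] += 1
    values[move] += (reward - values[move]) / counts[move]  # incremental mean

print(values.index(max(values)))   # the agent has discovered the strongest move
```

After a few thousand trials, the estimate for the best move dominates even though the agent was never told which move was best.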
As a greater number of businesses are merging with AI and data-based technologies at a high rate there is a need for a greater number of data scientists to help guide the initiatives.
Data science is a leviathan pool of multiple data operations that include statistics and machine learning. Machine learning algorithms are very much dependent on data; therefore, machine learning is the primary contributor to the future of data science. In particular, data science covers areas like data integration, distributed architecture, automated machine learning, data visualization, dashboards and BI, data engineering, deployment in production mode, and automated, data-driven decisions.
While IT-focused jobs have been all the rage over the last two decades, the sector's rate of growth has been projected at about 13% by the Bureau of Labor Statistics, which is still higher than the average rate of growth for all other sectors. Data science, however, has seen explosive growth of over 650% since 2012, based on an analysis done on LinkedIn. The role of data scientist has become one of the most in-demand jobs, ranking second only to machine learning engineer, a job adjacent to data scientist.
In the coming years, data scientists will be able to take on business-critical areas as well as complex challenges, enabling businesses to make exponential leaps. Companies currently face a huge shortage of data scientists, but this is set to change.
The rest is here:
Future Prospects of Data Science with Growing Technologies - Analytics Insight
Towards Broad Artificial Intelligence (AI) & The Edge in 2021 – BBN Times
Artificial intelligence (AI) has quickened its progress in 2021.
A new administration is in place in the US and the talk is about a major push for Green Technology and the need to stimulate next-generation infrastructure, including AI and 5G, to generate economic recovery, with David Knight forecasting that 5G has the potential to drive GDP growth of 40% or more by 2030. The Biden administration has stated that it will boost spending on emerging technologies, including AI and 5G, to $300Bn over a four-year period.
On the other side of the Atlantic Ocean, the EU has announced a Green Deal and will also need to consider European AI policy to develop the next generation of companies that will drive economic growth and employment. It may well be that the EU and US (alongside Canada and other allies) will seek ways to work together on issues such as 5G policy and infrastructure development. The UK will be hosting COP 26 and has also signalled its ambitions for AI and 5G development.
The world needs to find a way to successfully end the Covid-19 pandemic and in the post pandemic world move into a phase of economic growth with job creation. An opportunity exists for a new era of highly skilled jobs with sustainable economic development built around next generation technologies.
AI and 5G: GDP and jobs growth potential plus scope to reduce GHG emissions (source for numbers PWC / Microsoft, Accenture)
The image above sets out the scope for large reductions in emissions of GHGs whilst allowing for economic growth.
GDP and jobs growth will be very high on the post-pandemic agendas of governments around the world. At the same time, the economies that truly prosper and grow rapidly in this decade will be those that adopt Industry 4.0 technology, which in turn will drive a shift away from the era of heavy fossil fuel consumption towards a digital world powered by renewable energy, with transportation that is largely electric or, over time, hydrogen based.
2021 will mark the continued acceleration of Digital Transformation across the economy.
Firms will be increasingly "analytics driven" (it needs to be stressed that analytics, rather than data, is the key term). Data is the fuel that needs to be processed; analytics give organisations the ability to derive actionable insights.
Source for image above Lean BI
The following image demonstrates how AI-enabled machine-to-machine communication at the Edge could work:
In the image above, machine-to-machine communication broadcasts across the network that a person has been detected stepping onto the road, so that even a car without line of sight is aware of their presence.
It is important to note that AI alongside 5G networks will be at the heart of this transition to the world of Industry 4.0.
5G will play an important role: 5G networks are not only substantially faster than 4G networks, they also enable significant reductions in latency, allowing near real-time analytics and responses. In addition, they provide far greater connection capacity, facilitating massive machine-to-machine communication for IoT devices on the Edge of the network (closer to where the data is created, on the device).
The image below sets out the speed advantage of 5G networks relative to 4G.
Source for image above Thales Group
However, as noted above 5G has many more advantages over 4G than speed alone as shown in the image below:
Source for image above Thales Group
The growth in Edge Computing will reduce the amount of data sent back and forth between devices and a remote cloud server, making the system more efficient.
Source for image above Thales Group
The economic benefits of 5G are set out below:
$13.2 Trillion dollars of global economic output
22.3 Million new jobs created
$2.1 Trillion dollars in GDP growth
Towards AI at the Edge (AIIoT)
To date, AI has been most pervasive and effective for the Social Media and Ecommerce giants, whose large digital data sets give them an advantage and where edge cases don't matter so much in terms of their consequences. No fatalities, injuries or material damages arise from an incorrect recommendation for a video, a post, or an item of clothing, other than a bad user experience.
However, when we seek to scale AI into the real world, edge cases and interpretability matter. Issues such as causality and explainability become key in areas such as autonomous vehicles and robots and also in healthcare.
Equally, data privacy and security really matter. On the one hand, as noted above, data is the fuel for Machine Learning models. On the other hand, in areas such as healthcare much of that data is siloed and decentralised, and protected by strict privacy rules such as HIPAA in the US and GDPR in Europe. The same issue arises in Finance and Insurance, where data privacy and regulation are of significant importance to the operations of financial services firms.
This is an area where Federated Learning with Differential Privacy could play a big role in scaling Machine Learning across areas such as healthcare and financial services.
Source for image above NVIDIA What is Federated Learning?
It is also an area where the US and Europe could work together to enable collaborative learning and help scale Machine Learning while providing data security and privacy for end users (patients). Healthcare systems around the world are at breaking point due to the strains of the Covid-19 pandemic; augmenting healthcare workers with AI, while ensuring that patient data security is maintained, will be key to reducing that strain and delivering better outcomes for patients.
Source for Image above TensorFlow Federated
For more on Federated Learning see: Federated Learning an Introduction.
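The core idea can be sketched in a few lines of federated averaging: each client trains locally and only a noised update leaves the device. Everything below is illustrative, the one-weight model, the client data, and the noise level are invented for the example, and real systems (such as the TensorFlow Federated and NVIDIA work referenced above) add gradient clipping, formal privacy accounting, and secure aggregation.

```python
import random

# Toy model: a single weight w fitted to y = 2x by local SGD on each client.
def client_update(w, data, lr=0.05):
    for x, y in data:                        # raw data never leaves the client
        grad = 2 * (w * x - y) * x           # gradient of squared error
        w -= lr * grad
    return w

def federated_round(w_global, clients, noise_std=0.01):
    deltas = []
    for data in clients:
        w_local = client_update(w_global, data)
        delta = w_local - w_global
        delta += random.gauss(0, noise_std)  # DP-style noise on the shared update
        deltas.append(delta)
    # The server sees only noised deltas, never the underlying records
    return w_global + sum(deltas) / len(deltas)

random.seed(0)
# Three clients, each holding private samples of y = 2x
clients = [[(1, 2), (2, 4)], [(2, 4), (3, 6)], [(1, 2), (3, 6)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))   # converges near the true weight 2.0
```

The pattern matters more than the toy model: the aggregation step is the only point of contact between data silos, which is what makes the approach attractive for HIPAA- and GDPR-constrained sectors.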
In relation to AI, we will need to move away from the giant models and techniques that were predominant in the last decade towards neural compression (pruning), which in turn will enable models to operate more efficiently on the Edge, help preserve the battery life of devices, and reduce carbon footprint through lower energy consumption.
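The simplest form of the pruning referred to here is magnitude pruning: zero out the smallest weights. This is a minimal sketch of the idea only; production frameworks prune iteratively and fine-tune afterwards to recover accuracy.

```python
def prune(weights, sparsity=0.5):
    """Zero the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)                  # how many weights to drop
    # Indices of the k smallest-magnitude weights
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.1]
print(prune(w))   # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The zeroed weights can then be stored and multiplied sparsely, which is where the energy and memory savings on Edge devices come from.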
Furthermore, we won't only require Deep Learning models that can run inference on the Edge, but also models that can continue to learn on the Edge, on the fly, from smaller data sets, responding dynamically to their environments. This will be key to enabling effective autonomous systems such as autonomous vehicles (cars, drones) and robots.
Solving for these challenges will be key to enabling AI to scale beyond Social Media and Ecommerce across the sectors of the economy.
It is no surprise that the most powerful AI companies of recent years tend to come from the Ecommerce and social media sectors.
Furthermore, the images below from Valuewalk show how ByteDance (owner of TikTok) is the world's most valuable Unicorn and an AI company.
Source for image above Valuewalk, Tipalti, The Most Valuable Unicorn in the World 2020
Venture capitalists and angel investors should also understand that, in many sectors, scaling an AI startup depends on access to usable data and on meeting customers' usability requirements (which may include some or all of transparency, causality, explainability, model size, and ethics).
The number of connected devices and the volume of data are forecast to grow dramatically as digital technology continues to expand its reach. For example, the image below shows a Statista forecast of 75 billion internet-connected devices by 2025, an average of over nine per person on the planet!
Data will grow, but an increasing amount of it will be decentralised, dispersed around the Edge.
Source for image above IDC
In fact, IDC forecasts that "the global datasphere will grow from 45 zettabytes in 2019 to 175 by 2025. Nearly 30% of the world's data will need real-time processing. ... Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90 ZB of data in 2025."
Illustration of the AI IoT across the Edge
Source for infographic images below: Iman Ghosh VisualCapitalist.com
In the past decade, key Machine Learning tools such as XGBoost, LightGBM (Light Gradient Boosting Machine) and CatBoost emerged (approximately 2015 to 2017), and these tools will continue to be popular with Data Scientists for powerful insights from structured data using supervised learning. No doubt we will see continued enhancements in Machine Learning tools over the next few years.
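What these libraries share under the hood is gradient boosting: sequentially fitting weak learners to the residuals of the ensemble so far. The sketch below reduces the idea to one-feature decision stumps on a toy regression; the real libraries add regularisation, histogram-based splitting, and much more, so this is a conceptual illustration only.

```python
def fit_stump(xs, residuals):
    """Best single-threshold split minimising squared error on the residuals."""
    best = None
    for t in xs:                                       # candidate thresholds
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0    # leaf predictions
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - (lv if x <= t else rv)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def boost(xs, ys, rounds=20, lr=0.3):
    """Each round fits a stump to what the ensemble still gets wrong."""
    pred, stumps = [0.0] * len(xs), []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]                  # a step function
model = boost(xs, ys)
print([round(model(x), 1) for x in xs])  # -> [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
```

Each stump is a weak model; it is the sequential focus on residuals that makes the ensemble strong, which is the property XGBoost, LightGBM and CatBoost all exploit on structured data.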
In relation to areas such as Natural Language Processing (NLP), Computer Vision and Drug Discovery, Deep Learning will continue to be the effective tool. However, it is submitted that the techniques will increasingly move towards the following:
Transformers (including in Computer Vision);
Neuro-Symbolic AI (hybrid AI that combines Deep Learning with Symbolic Logic);
Neuroevolutionary approaches (hybrids that combine Deep Learning with Evolutionary Algorithms);
Some or all of the above combined with Deep Reinforcement Learning.
This will lead to an era of Broad AI, as AI starts to move beyond narrow AI (performing just one task) towards multitasking, though not at the level where AI can match the human brain (AGI).
My own work is focused on the above hybrid approaches for Broad AI. As we seek to scale AI across the economy beyond Social Media and Ecommerce, these approaches will be key to enabling true Digital Transformation across traditional sectors and to moving into the era of Industry 4.0.
Source for Image above David Cox, IBM Watson
The MIT-IBM Watson AI Lab defines Broad AI and the types of AI as follows:
"Narrow AI is the ability to perform specific tasks at a super-human rate within various categories, from chess, Jeopardy!, and Go, to voice assistance, debate, language translation, and image classification."
"Broad AI is next. We're just entering this frontier, but when it's fully realized, it will feature AI systems that use and integrate multimodal data streams, learn more efficiently and flexibly, and traverse multiple tasks and domains. Broad AI will have powerful implications for business and society."
"Finally, General AI is essentially what science fiction has long imagined: AI systems capable of complex reasoning and full autonomy. Some scientists estimate that General AI could be possible sometime around 2050, which is really little more than guesswork. Others say it will never be possible. For now, we're focused on leading the next generation of Broad AI technologies for the betterment of business and society."
I would add Artificial Super Intelligence (or Super AI) to the list above, as this is a type of AI that often gains much attention in Hollywood movies and television series.
Whether one views 2021 as the first year of a decade or not, 2021 will mark a year for reset across the economy and hopefully one whereby we start to move beyond the Covid pandemic to a post pandemic world.
California will remain a leading area for AI development with the presence of Stanford, UC Berkeley, Caltech, UCLA, and the University of San Diego. However, other centres for AI will continue to grow around the US and the world, for example Boston, Austin, Toronto, London, Edinburgh, Oxford, Cambridge, Tel Aviv, Dubai, Abu Dhabi, Singapore, Berlin, Paris, Barcelona, Madrid, Lisbon, Sao Paulo, Tallinn, Bucharest, Kyiv / Kharkiv, Moscow and of course across China (many other cities could be cited too). AI will become a pervasive technology that is increasingly present in the devices we interact with every day (including our mobile phones), not just when we enter our social media accounts or shop online.
It will also mark a reset for AI to be increasingly on the Edge and across the "real-world" sectors of the economy with the emergence of Broad AI to take over from Narrow AI as we move across the decade.
Smaller models will be more desirable / more useful
GPT-3 is an exciting development in AI and shows the potential of Transformer models. However, in the future, small will be beautiful and crucial. The human brain does not require the server capacity of GPT-3 and uses far less energy. For AI to scale across the Edge, we'll need powerful models that are energy efficient and optimised to work on small devices. For example, Mao et al. set out LadaBERT: lightweight adaptation of BERT (a large Transformer language model) through hybrid model compression.
The authors note "...a major blocking issue of applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency of user requests, raising the necessity of model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviours of BERT."
"However, the training procedure of knowledge distillation is expensive itself as it requires sufficient training data to imitate the teacher model."
"In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude."
Reducing training overheads and avoiding unsatisfactory latency of user requests will also be a key objective of Deep Learning development and evolution over the course of 2021 and beyond.
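One of the compression ingredients LadaBERT combines, matrix factorisation, can be shown in miniature: a weight matrix W (m x n) is replaced by two thin factors A (m x k) and B (k x n), cutting parameters from m*n to k*(m + n). The matrix below is contrived to be exactly rank 1 so a single pass recovers it; real models apply truncated SVD to full-rank weights and accept some approximation error, so treat this purely as an illustration of the parameter saving.

```python
def factorize_rank1(W):
    """Recover a column A and row B with W[i][j] == A[i] * B[j].
    Assumes W is exactly rank 1 with a nonzero leading entry."""
    B = W[0][:]                          # first row is (a multiple of) B
    A = [row[0] / B[0] for row in W]     # per-row scale factors
    return A, B

def reconstruct(A, B):
    return [[a * b for b in B] for a in A]

W = [[2, 4, 6], [3, 6, 9], [5, 10, 15]]  # rank-1: every row scales the first
A, B = factorize_rank1(W)
assert reconstruct(A, B) == W            # exact here, approximate in practice
print(len(A) + len(B), "parameters instead of", len(W) * len(W[0]))  # 6 vs 9
```

On a 3x3 toy matrix the saving is modest, but for a 768x768 Transformer weight matrix at rank 64 the same arithmetic drops roughly 590k parameters to about 98k.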
My Vision of Connectionism: Connecting one human to another (we're all human beings), connecting AI with AI, and AI with humans all at the level of the mind.
When I adopted the @DeepLearn007 handle on Twitter many years ago, I was inspired by the notion of connectionism, and the image that I selected for the account illustrates how two human beings could connect at the level of the brain, and how the exchange of information, in effect ideas, drives innovation and the development of humanity. In the virtual world, much of that occurs at the level of data and the analytical insights we gain from that data through the application of AI (Machine Learning and Deep Learning) to generate responses.
I remain a connectionist, albeit an open-minded one. I believe that Deep Neural Networks will remain very important and the cornerstone of AI development. But just as Deep Reinforcement Learning combined Reinforcement Learning with Deep Learning to very powerful effect, resulting in the likes of AlphaGo, AlphaZero, and MuZero, so too will hybrid AI that combines Deep Learning with Symbolic and Evolutionary approaches lead to exciting new product developments and enable Deep Learning to scale beyond the Social Media and Ecommerce sectors, into areas where the likes of medics and financial services staff want causal inference and explainability in order to trust AI decision making. For example, Microsoft Research states that "understanding causality is widely seen as a key deficiency of current AI methods, and a necessary precursor for building more human-like machine intelligence."
Furthermore, for autonomous vehicles to truly take off, we'll need model explainability for situations where things have gone wrong, in order to understand what happened and how we may reduce the probability of the same outcome in the future.
The next generation of AI will move towards the era of Broad AI, and the adventure begins in 2021 as we move towards the Edge and towards a better world beyond the scars and challenges of 2020. The journey may require scaled-up 5G networks around the world to truly transform the broader economy, and that may only really start to happen at the end of the year and beyond, but the direction of the pathway is clear.
The exciting potential for healthcare, smart industry, smart cities, smart living, education, and every other sector of the economy means that a new wave of businesses will emerge that we cannot even imagine today.
Perhaps a good point to conclude is the forecast from Ovum and Intel for the impact of 5G on the media sector (of course AI will play a big role alongside 5G in developing new hyper-personalised services and products, and the two have a symbiotic relationship).
Source for the image above:Intel Study Finds 5G will Drive $1.3 Trillion in New Revenues in Media and Entertainment Industry by 2028
See more here:
Towards Broad Artificial Intelligence (AI) & The Edge in 2021 - BBN Times
AI’s Carbon Footprint Issue Is Too Big To Be Ignored – Analytics India Magazine
A lot has been said about the capabilities of artificial intelligence, from humanoid robots and self-driving cars to speech recognition. However, one aspect of AI that often doesn't get spoken about is its carbon footprint. AI systems consume a lot of power and, as a result, generate large volumes of carbon emissions that harm the environment and further accelerate climate change.
It is interesting to note the duality of AI in terms of its effect on the environment. On the one hand, it helps devise solutions that can reduce the effects of climate and ecological change, including smart grid design, low-emission infrastructure, and climate change prediction.
But, on the other hand, AI has a significant carbon footprint that is hard to ignore.
For instance, in a 2019 study, a research team from the University of Massachusetts analysed several natural language processing training models, converting the energy they consumed into carbon emissions and electricity cost. The study found that training an AI language-processing system generates an astounding 1,400 pounds (635 kg) of emissions, a figure that can reach up to 78,000 pounds (over 35,000 kg) depending on the scale of the AI experiment and the source of power used. That is equivalent to 125 round-trip flights between New York and Beijing.
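The arithmetic behind such estimates is simple in outline: energy drawn (kWh) times a grid carbon-intensity factor. The sketch below is a back-of-the-envelope illustration only; the grid intensity, PUE, and hardware figures are assumptions chosen for the example, not the study's actual inputs.

```python
LBS_CO2_PER_KWH = 0.954   # assumed rough US-average grid carbon intensity
LBS_PER_KG = 2.20462

def training_emissions_lbs(gpu_power_kw, hours, n_gpus, pue=1.58):
    """Estimate CO2 from a training run.
    PUE (power usage effectiveness) scales IT power up for
    datacenter overhead such as cooling; 1.58 is an assumed value."""
    kwh = gpu_power_kw * n_gpus * hours * pue
    return kwh * LBS_CO2_PER_KWH

# Hypothetical run: 8 GPUs drawing 0.3 kW each, training for 80 hours
lbs = training_emissions_lbs(0.3, 80, 8)
print(round(lbs), "lbs =", round(lbs / LBS_PER_KG), "kg of CO2")
```

Scaling the same formula to thousands of GPU-hours and repeated hyperparameter searches is how the headline numbers above arise.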
Notably, at the centre of the whole Timnit Gebru-Google controversy is a study titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? This paper, co-authored by Gebru, asked whether AI language models are becoming too big and whether tech companies are doing enough to reduce the resulting risks. Apart from shining a light on how such models perpetuate abusive language, hate speech, stereotypes, and other microaggressions towards specific communities, the paper also spoke of AI's carbon footprint and how it disproportionately affects marginalised communities, much more than any other group of people.
The paper pointed out that the resources required to build and sustain such large models benefit only large corporations and wealthy organisations, while the resulting repercussions of climate change are borne by marginalised communities. "It is past time for researchers to prioritise energy efficiency and cost to reduce negative environmental impact and inequitable access to resources," the paper said.
This OpenAI graph also shows how, since 2012, the amount of computing power used to train some of the largest models, such as AlphaZero, has been increasing exponentially with a 3.4-month doubling time, much faster than Moore's Law's two-year doubling period.
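The gap between those two doubling times compounds dramatically. A quick sanity check of the claim (the six-year window below is an assumed round figure, roughly matching OpenAI's 2012 onwards analysis):

```python
def growth_factor(months, doubling_months):
    """Total growth after `months` given a fixed doubling time."""
    return 2 ** (months / doubling_months)

years = 6
ai_compute = growth_factor(years * 12, 3.4)   # 3.4-month doubling
moores_law = growth_factor(years * 12, 24)    # two-year doubling
print(f"AI training compute: ~{ai_compute:,.0f}x  vs  Moore's Law: {moores_law:.0f}x")
```

Over the same six years, a two-year doubling yields 8x while a 3.4-month doubling yields millions-fold growth, which is why hardware efficiency gains alone cannot absorb the trend.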
To address this issue, in September 2019, employees of tech giants such as Amazon, Google, Microsoft, Facebook, and Twitter joined the worldwide march against climate change and demanded that their employers commit to reducing emissions to zero by 2030. This would require them to cut contracts with fossil fuel companies and stop the exploitation of climate refugees. In a strongly worded demand that called out tech's dirty role in climate change, the coalition wrote that the tech industry has a massive carbon footprint, often obscured behind jargon like cloud computing or bitcoin mining, along with depictions of code and automation as abstract and immaterial.
Considering the growing conversation around climate change, a movement called Green AI was also started by the Allen Institute for Artificial Intelligence through its research. Its paper proposed undertaking AI research that yields the desired results without increasing computational cost, and in some cases even reducing it. According to the authors, the goal should be to make AI greener and more inclusive, as opposed to the Red AI that currently dominates the field: research practices that use massive computational power to obtain state-of-the-art results in accuracy and efficiency.
In a 2019 paper, co-founders of AI Now Institute, Roel Dobbe and Meredith Whittaker, gave seven recommendations that could help draft a tech-aware climate policy and a climate-aware tech policy. They included:
There is a lot to be done in recognising, understanding, and acting on the implications of AI's carbon footprint. Ideally, the bigger tech companies would take the first step, with others following.
See the original post here:
AI's Carbon Footprint Issue Is Too Big To Be Ignored - Analytics India Magazine
AI Could Save the World, If It Doesn't Ruin the Environment First – PCMag Portugal
When Mohammad Haft-Javaherian, a student at the Massachusetts Institute of Technology, attended MIT's Green AI Hackathon in January, it was out of curiosity to learn about the capabilities of a new supercomputer cluster being showcased at the event. But what he had planned as a one-hour exploration of a cool new server drew him into a three-day competition to create energy-efficient artificial-intelligence programs.
The experience resulted in a revelation for Haft-Javaherian, who researches the use of AI in healthcare: "The clusters I use every day to build models with the goal of improving healthcare have carbon footprints," Haft-Javaherian says.
The processors used in the development of artificial intelligence algorithms consume a lot of electricity. And in the past few years, as AI usage has grown, its energy consumption and carbon emissions have become an environmental concern.
"I changed my plan and stayed for the whole hackathon to work on my project with a different objective: to improve my models in terms of energy consumption and efficiency," says Haft-Javaherian, who walked away with a $1,000 prize from the hackathon. He now considers carbon emissions an important factor when developing new AI systems.
But unlike Haft-Javaherian, many developers and researchers overlook or remain oblivious to the environmental costs of their AI projects. In the age of cloud-computing services, developers can rent online servers with dozens of CPUs and strong graphics processors (GPUs) in a matter of minutes and quickly develop powerful artificial intelligence models. As their computational needs rise, they can add more processors and GPUs with a few clicks (as long as they can foot the bill), not knowing that every added processor contributes to the pollution of our green planet.
The recent surge in AI's power consumption is largely caused by the rise in popularity of deep learning, a branch of artificial-intelligence algorithms that depends on processing vast amounts of data. "Modern machine-learning algorithms use deep neural networks, which are very large mathematical models with hundreds of millions, or even billions, of parameters," says Kate Saenko, associate professor at the Department of Computer Science at Boston University and director of the Computer Vision and Learning Group.
These many parameters enable neural networks to solve complicated problems such as classifying images, recognizing faces and voices, and generating coherent and convincing text. But before they can perform these tasks with optimal accuracy, neural networks need to undergo training, which involves tuning their parameters by performing complicated calculations on huge numbers of examples.
"To make matters worse, the network does not learn immediately after seeing the training examples once; it must be shown examples many times before its parameters become good enough to achieve optimal accuracy," Saenko says.
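Saenko's point about repeated passes can be shown with the smallest possible training loop: a single weight fitted to y = 3x by gradient descent. The model and learning rate are invented for illustration; the principle, that one pass over the data is nowhere near enough, is what drives training's energy bill.

```python
data = [(x, 3 * x) for x in (1, 2, 3)]   # samples of the target y = 3x

def run_epochs(n, lr=0.02):
    """Train a single weight for n full passes (epochs) over the data."""
    w = 0.0
    for _ in range(n):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

print(round(run_epochs(1), 2), "after 1 epoch (far from the true weight 3)")
print(round(run_epochs(100), 2), "after 100 epochs")
```

After one epoch the weight is still far from 3; only repeated exposure closes the gap. Multiply that by billions of parameters and terabytes of examples and the electricity figures below follow.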
All this computation requires a lot of electricity. According to a study by researchers at the University of Massachusetts, Amherst, the electricity consumed during the training of a transformer, a type of deep-learning algorithm, can emit more than 626,000 pounds of carbon dioxide, nearly five times the emissions of an average American car. Another study found that AlphaZero, Google's Go- and chess-playing AI system, generated 192,000 pounds of CO2 during training.
To be fair, not all AI systems are this costly. Transformers are used in a fraction of deep-learning models, mostly in advanced natural-language processing systems such as OpenAI's GPT-2 and BERT, which was recently integrated into Google's search engine. And few AI labs have the financial resources to develop and train expensive AI models such as AlphaZero.
Also, after a deep-learning model is trained, using it requires much less power. "For a trained network to make predictions, it needs to look at the input data only once, and it is only one example rather than a whole large database. So inference is much cheaper to do computationally," Saenko says.
Many deep-learning models can be deployed on smaller devices after being trained on large servers. Many applications of edge AI now run on mobile devices, drones, laptops, and IoT (Internet of Things) devices. But even small deep-learning models consume a lot of energy compared with other software. And given the expansion of deep-learning applications, the cumulative costs of the compute resources being allocated to training neural networks are developing into a problem.
"We're only starting to appreciate how energy-intensive current AI techniques are. If you consider how rapidly AI is growing, you can see that we're heading in an unsustainable direction," says John Cohn, a research scientist with IBM who co-led the Green AI hackathon at MIT.
According to one estimate, by 2030, more than 6 percent of the world's energy may be consumed by data centers. "I don't think it will come to that, though I do think exercises like our hackathon show how creative developers can be when given feedback about the choices they're making. Their solutions will be far more efficient," Cohn says.
"CPUs, GPUs, and cloud servers were not designed for AI work. They have been repurposed for it and, as a result, are less efficient than processors that were designed specifically for AI work," says Andrew Feldman, CEO and cofounder of Cerebras Systems. He compares the use of heavy-duty generic processors for AI to using an 18-wheel truck to take the kids to soccer practice.
Cerebras is one of a handful of companies that are creating specialized hardware for AI algorithms. Last year, it came out of stealth with the release of the CS-1, a huge processor with 1.2 trillion transistors, 18 gigabytes of on-chip memory, and 400,000 processing cores. Effectively, this allows the CS-1, the largest computer chip ever made, to house an entire deep learning model without the need to communicate with other components.
"When building a chip, it is important to note that communication on-chip is fast and low-power, while communication across chips is slow and very power-hungry," Feldman says. "By building a very large chip, Cerebras keeps the computation and the communication on a single chip, dramatically reducing overall power consumed. GPUs, on the other hand, cluster many chips together through complex switches. This requires frequent communication off-chip, through switches and back to other chips. This process is slow, inefficient, and very power-hungry."
The CS-1 uses a tenth of the power and space of a rack of GPUs that would provide the equivalent computation power.
Satori, the new supercomputer that IBM built for MIT and showcased at the Green AI hackathon, has also been designed to perform energy-efficient AI training, and was recently rated one of the world's greenest supercomputers. "Satori is equipped to give energy/carbon feedback to users, which makes it an excellent laboratory for improving the carbon footprint of both AI hardware and software," says IBM's Cohn.
Cohn also believes that the energy sources used to power AI hardware are just as important. Satori is now housed at the Massachusetts Green High Performance Computing Center (MGHPCC), which is powered almost exclusively by renewable energy.
"We recently calculated the cost of a high workload on Satori at MGHPCC compared to the average supercomputer at a data center using the average mix of energy sources. The results are astounding: One year of running the load on Satori would release as much carbon into the air as is stored in about five fully-grown maple trees. Running the same load on the 'average' machine would release the carbon equivalent of about 280 maple trees," Cohn says.
Yannis Paschalidis, the Director of Boston University's Center for Information and Systems Engineering, proposes a better integration of data centers and energy grids, which he describes as demand-response models. The idea is to coordinate with the grid to reduce or increase consumption on demand, depending on electricity supply and demand. "This helps utilities better manage the grid and integrate more renewables into the production mix," Paschalidis says.
For instance, when renewable energy supplies such as solar and wind power are scarce, data centers can be instructed to reduce consumption by slowing down computation jobs and putting low-priority AI tasks on pause. And when there's an abundance of renewable energy, the data centers can increase consumption by speeding up computations.
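The demand-response logic Paschalidis describes reduces, at its core, to a scheduling rule. The sketch below is a deliberately simplified illustration; the job names, priority scheme, and supply threshold are all invented for the example, and a real controller would negotiate with the grid operator continuously.

```python
def schedule(renewable_supply_mw, jobs, threshold_mw=50):
    """Jobs to run this hour: high-priority always; low-priority (e.g.
    batch retraining) only when renewable supply exceeds the threshold."""
    if renewable_supply_mw >= threshold_mw:
        return jobs                                   # plenty of green power
    return [j for j in jobs if j["priority"] == "high"]

jobs = [{"name": "inference-api", "priority": "high"},
        {"name": "model-retrain", "priority": "low"}]

print([j["name"] for j in schedule(30, jobs)])   # scarce supply: retraining paused
print([j["name"] for j in schedule(80, jobs)])   # abundant supply: both jobs run
```

Because training jobs are typically checkpointed and restartable, they are natural candidates for the pausable, low-priority class.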
The smart integration of power grids and AI data centers, Paschalidis says, will help manage the intermittency of renewable energy sources while also reducing the need to have too much stand-by capacity in dormant electricity plants.
Scientists and researchers are looking for ways to create AI systems that don't need huge amounts of data during training. After all, the human brain, which AI scientists try to replicate, uses a fraction of the data and power that current AI systems use.
During this year's AAAI Conference, Yann LeCun, a deep-learning pioneer, discussed self-supervised learning: deep-learning systems that can learn with much less data. Others, including cognitive scientist Gary Marcus, believe that the way forward is hybrid artificial intelligence, a combination of neural networks and the more classic rule-based approach to AI. Hybrid AI systems have proven to be more data- and energy-efficient than pure neural-network-based systems.
"It's clear that the human brain doesn't require large amounts of labeled data. We can generalize from relatively few examples and figure out the world using common sense. Thus, 'semi-supervised' or 'unsupervised' learning requires far less data and computation, which leads to both faster computation and less energy use," Cohn says.
Read the original post:
AI Could Save the World, If It Doesnt Ruin the Environment First - PCMag Portugal
Marcus vs Bengio AI Debate: Gary Marcus Is The Villain We Never Needed – Analytics India Magazine
According to his website, Gary Marcus, a notable figure in the AI community, has published extensively in fields ranging from human and animal behaviour to neuroscience, genetics, linguistics, evolutionary psychology and artificial intelligence, a remarkably broad range of topics for a man as young as Marcus.
On his website, Marcus calls himself a scientist, a best-selling author and an entrepreneur. He is also a founding member of Geometric Intelligence, a machine learning company acquired by Uber in 2016. However, Marcus is most widely known for his debates with machine learning researchers like Yann LeCun and Yoshua Bengio.
Marcus leaves no stone unturned in calling out the celebrities of the AI community.
However, whether as an act of benevolence or an attempt to find neutral ground, he also softens his criticisms with his "we agree to disagree" tweets.
Last week, Marcus did what he does best, trying to reboot and shake up AI once again as he debated Turing Award winner Yoshua Bengio.
In this debate, hosted by Montreal.AI, Marcus criticized Bengio in his speech for not citing him in Bengio's work and complained that this would devalue Marcus' contribution.
In his arguments, Marcus tried to explain how hybrids are pervasive in the field of AI by citing the example of Google, which, according to him, is actually a hybrid between a knowledge graph, a classic piece of symbolic knowledge, and a deep learning system like BERT.
Hybrids are all around us
Marcus also insists on thinking in terms of "nature and nurture" rather than "nature versus nurture" when it comes to understanding the human brain.
He also laments how much of machine learning has historically avoided nativism.
Marcus also pointed out that Bengio misrepresented him as saying that deep learning doesn't work.
"I don't care what words you want to use, I'm just trying to build something that works."
Marcus argued for symbols, pointing out that DeepMind's chess-winning AlphaZero program is a hybrid involving symbols because it uses Monte Carlo Tree Search: "You have to keep track of your trees, and trees are symbols."
Bengio dismissed the notion that a tree search is a symbol system. Rather, "it's a matter of words," Bengio said. "You can call those symbols if you want, but symbols to me are different; they have to do with the discreteness of concepts."
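Marcus' point about "keeping track of your trees" can be made concrete with a minimal sketch of the node bookkeeping an MCTS-style search maintains: discrete move labels, visit counts and value sums, explicitly stored structure even when a neural network supplies the evaluations. The class, move labels and statistics below are illustrative assumptions, not AlphaZero's actual code.

```python
import math

class Node:
    """One node of a Monte Carlo Tree Search tree."""
    def __init__(self, move=None):
        self.move = move        # a discrete, symbol-like action label
        self.visits = 0
        self.value_sum = 0.0
        self.children = []

    def ucb_score(self, parent_visits, c=1.4):
        # Standard UCB1 rule used for child selection in MCTS.
        if self.visits == 0:
            return float("inf")
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(parent_visits) / self.visits)
        return exploit + explore

# A tiny, hand-filled tree after a few simulated rollouts.
root = Node()
root.children = [Node("e2e4"), Node("d2d4")]
root.visits = 10
root.children[0].visits, root.children[0].value_sum = 6, 3.9
root.children[1].visits, root.children[1].value_sum = 4, 2.4

# Selection walks this explicit structure, which is the bookkeeping
# Marcus calls symbolic.
best = max(root.children, key=lambda n: n.ucb_score(root.visits))
print(best.move)  # "d2d4" (less visited, so exploration boosts its score)
```

Whether such bookkeeping counts as a "symbol system" is exactly what the two debaters disagreed on.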
Bengio also shared his views on how deep learning might be extended to new computational capabilities, rather than taking the old techniques and combining them with neural nets.
Bengio admitted that he completely agrees that many current systems that use machine learning also rely on a bunch of handcrafted rules and code designed by people.
While Marcus pressed Bengio on hybrid systems as a solution, Bengio patiently reminded him that hybrid systems have already been built, which led to Marcus admitting that he had misunderstood Bengio!
This goof-up was followed by Bengio's takedown of symbolic AI and why there is a need to move on from good old-fashioned AI (GOFAI). In a nod to Daniel Kahneman, Bengio took the two-system theory to explain how richer representations are required in the presence of an abundance of knowledge.
To this, Marcus quickly responded by saying, "Now I would like to emphasise our agreements." This was followed by one more hour of conversation between the speakers and a Q&A session with the audience.
The debate ended with the moderator, Vincent Boucher, thanking the speakers for a hugely impactful debate, one that was hugely pointless for a large part of its duration.
Gary Marcus has long been playing, or trying to play, the role of an antagonist who would shake up the hype around AI.
In his interview with Synced, when asked about his relationship with Yann LeCun, Marcus said that the two are friends as well as enemies. While calling out LeCun for making ad hominem attacks on him, he also approves of many of his frenemy's perspectives.
Deliberate or not, Marcus' online polemics to bring down the hype around AI usually end up hyping up his own antagonism. What the AI community needs is the likes of Nassim Taleb, who is known for his relentless, eloquent and technically sound arguments. Taleb has been a practitioner and an insider who doesn't give a damn about being an outsider.
On the other hand, Marcus calls himself a cognitive scientist; however, his contribution to the field of AI cannot be called groundbreaking. There is no doubt that Marcus should be appreciated for positioning himself in the line of fire in this celebrated era of AI. However, one can't help but wonder when one listens to Marcus' antics and arguments.
There is definitely a thing or two Marcus can learn from Taleb's approach to debunking pseudo-babble. A very popular example is Taleb's takedown of Steven Pinker, who also happens to be a dear friend and mentor to Marcus.
That said, the machine learning research community did witness something similar from David Duvenaud and Smerity, when they took a detour from the usual "we shock you with jargon" research and added a lot of credibility to the research community. While Duvenaud trashed his own award-winning work, Stephen "Smerity" Merity revisited his own paper, investigating the trouble with naming inventions and unwanted sophistication.
There is no doubt that there is a lot of exaggeration about what AI can do. Not to forget the subtle land grab among researchers for papers, which can mislead the community into mistaking vanity for innovation. As we venture into the next decade, AI can use a healthy dose of scepticism and debunking from the Schmidhubers and Smeritys of its research world to become more reliable.