Category Archives: Alphazero

Creative Machines: The Future is Now with Arthur Miller – CUNY Graduate Center

Abstract

This talk by Arthur Miller will focus on creative machines. Machines running programs like AlphaZero, DeepDream, and GANs, and radical new developments like ChatGPT, GPT-4, and DALL-E 2, have shown creativity, opening up new vistas in AI-created art, music, and literature. In the future will there be machines with consciousness and emotions, machines capable of falling in love? This talk explores all of this and more.

Arthur I. Miller is Emeritus Professor of History and Philosophy of Science at University College London. He is a CCNY Physics graduate and was awarded a Ph.D. from MIT. He is the author of a groundbreaking theory of creativity which applies to both humans and machines.

This event is sponsored by Science & the Arts, The CUNY Academy for the Humanities and Sciences, The Belle Zeller Scholarship Trust Fund, Art Science Connect, and the CUNY Graduate Center's Interdisciplinary Concentration in Cognitive Science.

For more information contact Brian Schwartz, bschwartzcuny@gmail.com.

Review our Building Entry Policy for in-person events.


Read the rest here:
Creative Machines: The Future is Now with Arthur Miller - CUNY Graduate Center

The Race for AGI: Approaches of Big Tech Giants – Fagen wasanni

Big tech companies like OpenAI, Google DeepMind, Meta (formerly Facebook), and Tesla are all on a quest to achieve Artificial General Intelligence (AGI). While their visions for AGI differ in some aspects, they are all determined to build a safer, more beneficial form of AI.

OpenAI's mission statement encapsulates its goal of ensuring that AGI benefits all of humanity. Sam Altman, CEO of OpenAI, believes that AGI may not have a physical body and that it should contribute to the advancement of scientific knowledge. He sees AI as a tool that amplifies human capabilities and participates in a human feedback loop.

OpenAI's key focus has been on transformer models, such as the GPT series. These models, trained on large datasets, have been instrumental in OpenAI's pursuit of AGI. Their transformer models extend beyond text generation and include text-to-image and voice-to-text models. OpenAI is continually expanding the capabilities of the GPT paradigm, although the exact path to AGI remains uncertain.

Google DeepMind, on the other hand, places its bets on reinforcement learning. Demis Hassabis, CEO of DeepMind, believes that AGI is just a few years away and that maximizing total reward through reinforcement learning can lead to true intelligence. DeepMind has developed models like AlphaFold and AlphaZero, which have showcased the potential of this approach.
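
The reward-maximization idea behind this bet can be illustrated with a minimal tabular Q-learning sketch. The tiny chain environment, states, and rewards below are invented for illustration only; AlphaZero itself combines self-play search with deep networks rather than a lookup table.

```python
# Minimal illustration of learning by maximizing total reward (tabular Q-learning).
# The tiny chain environment below is hypothetical; it is not DeepMind code.
import random

N_STATES, ACTIONS = 5, [0, 1]          # states 0..4; action 0 = left, 1 = right
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1  # discount, learning rate, exploration

def step(state, action):
    """Move along the chain; reaching the last state yields reward 1."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    # Break ties randomly so the untrained agent still explores the chain.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(300):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state, reward, done = step(state, action)
        # Update the action-value estimate towards reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print({s: greedy(s) for s in range(N_STATES)})  # learned policy: move right
```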

Meta's Yann LeCun disagrees with the effectiveness of supervised and reinforcement learning for achieving AGI, citing their limitations in reasoning with commonsense knowledge. He champions self-supervised learning, which does not rely on labeled data for training. Meta has dedicated significant research efforts to self-supervised learning and has seen promising results in language understanding models.
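
As a rough sketch of the self-supervised idea (the supervision signal is derived from the data itself rather than from human labels), consider hiding part of the input and predicting it from its context. The toy corpus and counting model below are invented for illustration and are far simpler than the systems Meta researches.

```python
# Toy self-supervised objective: hide a token and predict it from its neighbours.
# No human-provided labels are used; the "label" is the masked token itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()  # hypothetical data

# Build a (left, right) context -> masked-token frequency table.
model = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    context = (corpus[i - 1], corpus[i + 1])
    model[context][corpus[i]] += 1          # the masked word supervises itself

def predict_masked(left, right):
    """Guess the hidden word between two context words."""
    candidates = model.get((left, right))
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_masked("the", "sat"))  # likely "cat" or "dog", learned without labels
```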

Elon Musk's Tesla aims to build AGI that can comprehend the universe. Musk believes that a physical form may be essential for AGI, as seen through his investments in robotics. Tesla's Optimus robot, powered by a self-driving computer, is a step towards that vision.

Both Google and OpenAI have incorporated multimodality functions into their models, allowing for the processing of textual descriptions associated with images. These companies are also exploring research avenues like causality, which could have a significant impact on achieving AGI.

While the leaders in big tech have different interpretations of AGI and superintelligence, their approaches reflect a shared ambition to develop AGI that benefits humanity. The race for AGI is still ongoing, and the path to its realization remains a combination of innovation, research, and exploration.

Continued here:
The Race for AGI: Approaches of Big Tech Giants - Fagen wasanni

Relevance of Software Developers in the Era of Prompt Engineering – Analytics India Magazine

Developers today are a disturbed lot, with auto code-generation platforms like GitHub Copilot, CodeWhisperer, or even ChatGPT and Bard threatening to make their job redundant in the coming months. Writing code is no longer the Herculean task it used to be, as just about anyone can now build new applications and tools by giving prompts.

Many developers are even being asked to learn prompt engineering in the name of upskilling and to boost productivity. So, what does one do now? Move on and focus on optimising LLMs.

LLMs require heavy compute. Recently, OpenAI CEO Sam Altman said that he was worried about the lack of GPUs for powering OpenAI's models. This shows that optimising LLMs is the need of the hour, and this is where research is increasingly shifting.

Take a look at the open source community coming up with models like Falcon-7B that perform on a par with, or even better than, GPT-based models, even on a single device. They require much less computation, improving the efficiency and performance of models. Contributing to the open source ecosystem is what developers need to focus on, since even Google and OpenAI admit that they cannot compete with what the community offers.

Building and improving the efficiency of models comes down to making better algorithms for them to run on. Recently, DeepMind released AlphaDev, a reinforcement learning agent built on AlphaZero that discovered sorting algorithms faster than existing human-written ones. This is a major breakthrough in reducing the computation requirements of AI systems through better sorting algorithms.
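
To give a sense of the kind of routine involved, below is a fixed compare-and-swap network for three elements, the sort of small, branch-free building block AlphaDev optimized at the instruction level. This sketch is illustrative only; it is not the discovered code, and AlphaDev's gains came from shaving instructions off such routines at the assembly level.

```python
# A fixed 3-element sorting network: three compare-and-swap steps, no loops.
# Illustrative only; not DeepMind's discovered routine.
def sort3(a, b, c):
    if a > b: a, b = b, a   # compare-and-swap positions 0 and 1
    if b > c: b, c = c, b   # compare-and-swap positions 1 and 2
    if a > b: a, b = b, a   # compare-and-swap positions 0 and 1 again
    return a, b, c

assert sort3(3, 1, 2) == (1, 2, 3)
assert sort3(2, 3, 1) == (1, 2, 3)
```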

Replit is another great example of a company boosting the hard-core developer community. Apart from banking on the phone-based developer ecosystem and building a version of Replit optimised for it, Replit came to the rescue of developers by starting Replit Bounties.

On Replit, non-developers can put up a bug or a problem as a bounty, and developers can solve these problems and earn Cycles, Replit's virtual currency, which can be a great source of income.

Apart from earning money, there are a lot of problems that need solving. Since LLMs are compute heavy, they are also carbon intensive. Alok Lall, sustainability head at Microsoft India, told AIM, "When we look at reducing emissions, it is very easy to look at infrastructure and get more efficient hardware like servers, heating, ventilation, and cooling systems, but addressing and understanding the main ingredient, the code, is the most important."

This is why Microsoft partnered with Thoughtworks, GitHub, and Accenture to build the Green Software Foundation in 2021, to make coding and software development sustainable. It shows that making models more sustainable through more efficient algorithms is of utmost importance for a lot of companies and developers.

Even if generative AI, or more specifically LLM-based models, turns out to be a bubble that bursts, there is a lot of space that requires research and development by developers. For example, DeepMind's AlphaFold for predicting protein structures is one of the crucial fields that needs more exploration.

Banking on this, Soutick Saha, a bioinformatics developer at Wolfram, recently developed the ProteinVisualisation paclet, a tool that makes biomolecular structures available for everyone to build further on. He described how he has worked with six programming languages in the last 12 years, and was able to develop this by learning the Wolfram language in just five months.

In India, the rise of open source semiconductor technology like RISC-V for designing chips has driven more startups into the chip design industry. A lot of RISC-V startups are increasingly getting funded in India.

Then there is quantum computing. NVIDIA opened the floor for research in quantum computing by replicating CUDA's success and building QODA (Quantum Optimised Device Architecture). The open source platform is built for integrating GPUs and CPUs in one system, so developers, not prompt engineers, can dive into the field.

Similarly, Quantum Brilliance, a company focused on developing miniaturised, room-temperature quantum computing solutions, open-sourced its Qristal SDK. This will further allow developers to innovate quantum algorithms for quantum accelerators. The SDK includes C++ and Python APIs, with support for NVIDIA CUDA for creating quantum-enhanced designs.

No-code platforms typically excel at creating simple or straightforward applications. However, when it comes to building complex systems with intricate business logic, integrations, and scalability requirements, hardcore developers are still essential. They should focus on architecting and building robust, scalable, and efficient systems that require advanced technical knowledge.

There is a lot more to do, and we are just getting started. Now more than ever, developers should quit sulking and complaining about prompt engineers. Instead of rolling their eyes at these auto code-generation platforms, developers can leverage their creativity and adaptability to solve complex problems and build architectures, while letting these platforms do the laborious task of writing code.

Read the original post:
Relevance of Software Developers in the Era of Prompt Engineering - Analytics India Magazine

Singapores Central Bank Partners With Google to Explore AI for Internal Use – 24/7 Wall St.

The Central Bank of Singapore is partnering with Google's cloud unit to explore potential internal use cases of the burgeoning artificial intelligence (AI) space and advance the development of innovative technologies in the Asian hub, Nikkei reported on Wednesday.

The Monetary Authority of Singapore (MAS), the central banking authority of the Southeast Asian city-state, is teaming up with Google's cloud division to develop AI projects for internal use and equip its employees with deep AI skillsets.

According to Nikkei's report, the MAS will collaborate with Google Cloud on generative AI initiatives to facilitate the use of internal applications in a manner that is grounded in responsible AI practices. The move comes after large language models (LLMs) and other generative AI products like OpenAI's ChatGPT took the world by storm with their abilities to produce impressive text and media content.

It also represents part of the MAS's broader strategy to streamline the development of state-of-the-art technologies in Singapore.

"MAS has been committed to leveraging technology and innovation to their fullest potential. This collaboration allows us to explore potential use cases in our functions and operations that could harness generative AI while prioritizing information security as well as data and AI model governance," said Vincent Loy, an assistant managing director for technology at MAS.

The MAS did not specify how it would implement Google's AI technology. However, it explained that the partnership would establish a framework for determining potential use cases, conducting technical pilots, and co-creating solutions for the central bank's digital services.

Google, one of the biggest tech companies in the world, has been developing artificial intelligence (AI) products and services for several years, pouring significant capital into its research and development. In 2014, the company acquired DeepMind Technologies, a leading AI startup known for its breakthroughs in deep reinforcement learning.

Over recent years, the tech behemoth has made significant progress in AI by developing advanced models like AlphaGo, AlphaZero, and BERT. Moreover, to support AI research and innovation, the company has also launched several AI-related tools and frameworks, such as TensorFlow, an open-source machine learning and AI library widely adopted by researchers and developers worldwide.

Most recently, Google released Google Bard, a conversational chatbot launched just a few months after ChatGPT, whose success left many tech giants scrambling to focus on AI, including Microsoft and Baidu.

This article originally appeared on The Tokenist

See the original post here:
Singapores Central Bank Partners With Google to Explore AI for Internal Use - 24/7 Wall St.

Meta AI Boss: current AI methods will never lead to true intelligence – Gizchina.com

Meta is one of the leading companies in AI development globally. However, its chief AI scientist, Yann LeCun, appears to lack confidence in current AI methods, claiming that most of them will never lead to true intelligence. He is skeptical of many of the most successful approaches in deep learning today.

The Turing Award winner said that the pursuit of his peers is necessary, but not enough. These pursuits include research on large language models such as the Transformer-based GPT-3. As LeCun describes it, Transformer proponents believe: "We tokenize everything and train giant models to make discrete predictions, and that's where AI stands out."

"They're not wrong. In that sense, this could be an important part of future intelligent systems, but I think it's missing the necessary parts," explained LeCun. LeCun perfected the use of convolutional neural networks, which have been incredibly productive in deep learning projects.

LeCun also sees flaws and limitations in many other highly successful areas of the discipline. Reinforcement learning is never enough, he insists. Researchers like DeepMind's David Silver, who developed the AlphaZero program and mastered chess and Go, have focused on very action-oriented programs, LeCun observed. He claims that most of our learning is done not by taking actual action, but by observation.

LeCun, 62, has a strong sense of urgency to confront the dead ends he believes many may be heading toward, and he will try to steer his field in the direction he thinks it should go. "We've seen a lot of claims about what we should be doing to push AI to human-level intelligence. I think some of those ideas are wrong," LeCun said. "Our intelligent machines aren't even at the level of cat intelligence. So why do we not start here?"

LeCun believes that not only academia but also the AI industry needs profound reflection. Self-driving car groups, such as startups like Wayve, think they can learn just about anything by throwing data at large neural networks, which seems a little too optimistic, he said.

"You know, I think it's entirely possible for us to have Level 5 autonomous vehicles without common sense, but you have to work on the design," LeCun said. He believes that this over-engineered self-driving technology will, like all the computer vision programs that deep learning made obsolete, become fragile. At the end of the day, there will be a more satisfying and possibly better solution that involves systems that better understand how the world works, he said.

LeCun hopes to prompt a rethinking of the fundamental concepts about AI, saying: "You have to take a step back and say, Okay, we built the ladder, but we want to go to the moon, and this ladder can't possibly get us there. I would say it's like making a rocket: I can't tell you the details of how we make a rocket, but I can give the basics."

According to LeCun, AI systems need to be able to reason, and the process he advocates is to minimize certain underlying variables, which enables the system to plan and reason. Furthermore, LeCun argues that the probabilistic framework should be abandoned, because it is difficult to apply when we want to do things like capture dependencies between high-dimensional continuous variables. LeCun also advocates forgoing generative models; otherwise, the system will have to devote too many resources to predicting things that are hard to predict and will ultimately consume too many resources.

In a recent interview with the business technology outlet ZDNet, LeCun discussed a paper he wrote exploring the future of AI, in which he lays out his research direction for the next ten years. Advocates of Transformers such as GPT-3 believe that as long as everything is tokenized and huge models are trained to make discrete predictions, AI will somehow emerge. But he believes this is only one component of future intelligent systems, not a key necessary part.

Even reinforcement learning cannot solve the above problem, he explained: although such systems are good chess players, they are still only programs that focus on actions. LeCun also adds that many people claim to be advancing AI in some way, but these ideas mislead us. He further believes that the common sense of current intelligent machines is not even as good as that of a cat, which in his view is the root of AI's slow progress: the methods have serious flaws.

As a result, LeCun confessed that he had given up on using generative networks to predict the next frame of a video from the current frame.

"It was a complete failure," he adds.

LeCun attributed the failure to the models based on probability theory that limited him, and he denounced the belief in probability theory as a kind of superstition. Its adherents believe that probability theory is the only framework for explaining machine learning, but in fact a world model built entirely on probability is difficult to achieve. He has not yet been able to solve this underlying problem very well, but LeCun hopes to rethink it and find the right analogy.

It is worth mentioning that LeCun talked bluntly about his critics in the interview. He specifically took a jab at Gary Marcus, a professor at New York University who, he claims, has never made any contribution to AI.

Continue reading here:
Meta AI Boss: current AI methods will never lead to true intelligence - Gizchina.com

Bin Yu

I'm Bin Yu, the head of the Yu Group at Berkeley, which consists of 15-20 students and postdocs from Statistics and EECS. I was formally trained as a statistician, but my research interests and achievements extend beyond the realm of statistics. Together with my group, my work has leveraged new computational developments to solve important scientific problems by combining novel statistical machine learning approaches with the domain expertise of my many collaborators in neuroscience, genomics and precision medicine. We also develop relevant theory to understand random forests and deep learning for insight into and guidance for practice.

We have developed the PCS framework for veridical data science (or responsible, reliable, and transparent data analysis and decision-making). PCS stands for predictability, computability and stability, and it unifies, streamlines, and expands on ideas and best practices of machine learning and statistics.

In order to augment empirical evidence for decision-making, we are investigating statistical machine learning methods/algorithms (and associated statistical inference problems) such as dictionary learning, non-negative matrix factorization (NMF), EM and deep learning (CNNs and LSTMs), and heterogeneous effect estimation in randomized experiments (X-learner). Our recent algorithms include staNMF for unsupervised learning, iterative Random Forests (iRF) and signed iRF (s-iRF) for discovering predictive and stable high-order interactions in supervised learning, contextual decomposition (CD) and aggregated contextual decomposition (ACD) for interpretation of Deep Neural Networks (DNNs).
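
As a small, generic illustration of one of the methods named above, the sketch below runs plain non-negative matrix factorization with scikit-learn on random data. It is not the group's staNMF code; the matrix, component count, and initialization are placeholders chosen only to show the API shape.

```python
# Plain NMF with scikit-learn: factor a non-negative matrix X into W (samples x parts)
# and H (parts x features). Illustrative only; this is not the staNMF implementation.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 20))                    # hypothetical non-negative data matrix

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                   # per-sample loadings on the 5 parts
H = model.components_                        # the 5 learned parts (dictionary)

reconstruction = W @ H
print(np.linalg.norm(X - reconstruction))    # reconstruction error of the factorization
```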

Stability expanded, in reality. Harvard Data Science Review (HDSR), 2020.

Data science process: one culture. JASA, 2020.

Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist, Nature Medicine, 2020.

Veridical data science (PCS framework), PNAS, 2020 (QnAs with Bin Yu)

Breiman Lecture (video) at NeurIPS "Veridical data Science" (PCS framework and iRF), 2019; updated slides, 2020

Definitions, methods and applications in interpretable machine learning, PNAS, 2019

Data wisdom for data science (blog), 2015

IMS Presidential Address "Let us own data science", IMS Bulletin, 2014

Stability, Bernoulli, 2013

Embracing statistical challenges in the IT age, Technometrics, 2007

Honorary Doctorate, University of Lausanne (UNIL) (Faculty of Business and Economics), June 4, 2021 (Interview of Bin Yu by journalist Nathalie Randin, with an introduction by Dean Jean-Philippe Bonardi of UNIL in French (English translation))

CDSS news on our PCS framework: "A better framework for more robust, trustworthy data science", Oct. 2020

UC Berkeley to lead $10M NSF/Simons Foundation program to investigate theoretical underpinnings of deep learning, Aug. 25, 2020

Curating COVID-19 data repository and forecasting county-level death counts in the US, 2020

Interviewed by PBS Nova about AlphaZero, 2018

Mapping a cell's destiny, 2016

Seeking Data Wisdom, 2015

Member, National Academy of Sciences, 2014

Fellow, American Academy of Arts and Sciences, 2013

One of the 50 best inventions of 2011 by Time Magazine, 2011

The Economist Article, 2011

ScienceMatters @ Berkeley. Dealing with Cloudy Data, 2004

See the original post:
Bin Yu

The age of AI-ism – TechTalks

By Rich Heimann

I recently read The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The book describes itself as an essential roadmap to our present and our future. We certainly need more business-, government-, and philosophy-centric books on artificial intelligence rather than hype and fantasy. Despite high hopes, the book falls short of its promise as a roadmap.

Some of the reviews on Amazon focused on the lack of examples of artificial intelligence and the fact that the few provided, like Halicin and AlphaZero, are banal and repeatedly fill up the pages. These reviews are correct in a narrow sense. However, the book is meant to be conceptual, so the scarcity of examples is understandable. Considering that there are no actual examples of artificial intelligence, finding any is always an accomplishment.

Frivolity aside, the book is troubling because it promotes some doubtful philosophical explanations that I would like to discuss further. I know what you must be thinking. However, this review is necessary because the authors attempt to convince readers that AI puts human identity at risk.

The authors ask, "if AI thinks, or approximates thinking, who are we?" (p. 20). While this question may satisfy a spiritual need of the authors and give them a purpose to save us, it is unfair, under the vague auspices of AI, even to talk about such an existential risk.

We could leave it at that, but the authors represent important spheres of society (e.g., Silicon Valley, government, and academia); therefore, the claim demands further inspection. As we see governments worldwide dedicating more resources and authorizing more power to newly created organizations and positions, we must ask ourselves if these spheres, organizations, and leaders reflect our shared goals and values. This is a consequential inquiry, and to prove it, the authors pursue the same question. They declare that societies across the globe need to reconcile technology with their values, structures, and social contracts (p. 21) and add that while the number of individuals capable of creating AI is growing, "the ranks of those contemplating this technology's implications for humanity (social, legal, philosophical, spiritual, moral) remain dangerously thin." (p. 26)

To answer the most basic question, "if AI thinks, who are we?", the book begins by explaining where we are (Chapter One: Where We Are). But "where we are" is a suspicious jumping-off point because it is not where we are, and it certainly fails to tell us where AI is. It also fails to tell us where AI was, as "where we are" is inherently ahistorical. AI did not start, nor end, in 2017 with the victory of AlphaZero over Stockfish in a chess match. Moreover, AlphaZero beating Stockfish is not evidence, let alone proof, that machines think. Such an arbitrary story creates the illusion of inevitability or conclusiveness in a field that historically has had neither.

The authors quickly turn from where we are to who we are. And who we are, according to the authors, are thinking brains. They argue that the AI age needs its own Descartes by offering the reader the philosophical work of René Descartes (p. 177). Specifically, the authors present Descartes' dictum, "I think, therefore I am," as proof that thinking is who we are. Unfortunately, this is not what Descartes meant with his silly dictum. Descartes meant to prove his existence by arguing that his thoughts were more real and his body less real. Unfortunately, things don't exist more or less. (Thomas Hobbes' famous objection asked, does reality admit of more and less?) The epistemological pursuit of understanding what we can know by manipulating what is, was not a personality disorder in the 17th century.

It is not uncommon to invoke Descartes when discussing artificial intelligence. However, the irony is that Descartes would not have considered AI thinking at all. Descartes, who was familiar with the automata and mechanical toys of the 17th century, suggested that the bodies of animals are nothing more than complex machines. However, the I in Descartes' dictum treats the human mind as non-mechanical and non-computational. Descartes's dualism treats the human mind as non-computational and contradicts the idea that AI is, or can ever be, thinking. The double irony is that what Descartes thinks about thinking is not a property of his identity or his thinking. We will come back to this point.

To be sure, thinking is a prominent characteristic of being human. Moreover, reason is our primary means of understanding the world. The French philosopher and mathematician Marquis de Condorcet argued that reasoning and acquiring new knowledge would advance human goals. He even provided examples of science impacting food production to better support larger populations and science extending the human life span, well before they emerged. However, Descartes's argument fails to show why thinking, and not rage or love, is as valid a basis for doubting one's existence the least.

The authors also imply that Descartes's dictum was meant to undermine religion by disrupting the established monopoly on information, which was "largely" in the hands of the church (p. 20). While "largely" is doing much heavy lifting, the authors overlook that the Cogito argument ("I think, therefore I am") was meant to support the existence of God. Descartes thought what is more perfect cannot arise from what is less perfect and was convinced that his thought of God was put there by someone more perfect than him.

Of course, I can think of something more perfect than me. That does not mean such a thing exists. AI is filled with similarly modified ontological arguments. A solution with intelligence more perfect than human intelligence must exist because it can be thought into existence. AI is Cartesian. You can decide if that is good or bad.

If we are going to criticize religion and promote pure thinking, Descartes is the wrong man for the job. We ought to consider Friedrich Nietzsche. The father of nihilism, Nietzsche did not equivocate. He believed that the advancement of society meant destroying God. He rejected all concepts of good and evil, even secular ones, which he saw as adaptations of Judeo-Christian ideas. Nietzsche's Beyond Good and Evil explains that secular ideas of good and evil do not reject God. According to Nietzsche, going beyond God is to go beyond good and evil. Today, Nietzsche's philosophy is ignored because it points, at least indirectly, to the oppressive totalitarian regimes of the twentieth century.

This isn't an endorsement of religion, antimaterialism, or nonsecular government. Instead, this explanation is meant to highlight that antireligious sentiment is often used to swap out religious beliefs, with their studied scripture and moral precepts, for unknown moral precepts and opaque non-scripture. It is a kind of religion, and in this case, the authors even gaslight nonbelievers, likening those who reject AI to the Amish and the Mennonites (p. 154). Ouch. That said, this conversation isn't merely about whether we believe or value at all, something that machines can never do or be, but about the fact that some beliefs are more valuable than others. The authors do not promote or reject any values aside from reasoning, which is a process, not a set of values.

None of this shows any obsolescence for philosophy; quite the opposite. In my opinion, we need philosophy. The best place to start is to embrace many of the philosophical ideas of the Enlightenment. However, the authors repeatedly kill the Enlightenment idea despite repeated references to the Enlightenment. The Age of AI creates a story in which human potential is inert and at risk from artificial intelligence by asking "who are we?" and denying that humans are exceptional. At a minimum, we should embrace the belief that humans are unique, with the unique ability to reason, but not reduce humans to just thinking, much less transfer all uniqueness and potential to AI.

The question, "if AI thinks, or approximates thinking, who are we?", begins with the false premise that artificial intelligence is solved, or that only the details need to be worked out. This belief is so widespread that it is no longer viewed as an assumption that requires skepticism. It also represents the very problem it attempts to solve by marginalizing humans at all stages of problem-solving. Examples like Halicin and AlphaZero are accomplishments in problem-solving and human ingenuity, not artificial intelligence. Humans found these problems, framed them, and solved them at the expense of other competing problems using the technology available. We don't run around and claim that microscopes can see, or give credit to a microscope when there is a discovery.

The question is built upon another flawed premise: that our human identity is thinking. However, we are primarily emotional, and emotion drives our understanding and decision-making. AI will not supplant the emotional provocations unique to humans that motivate us to seek new knowledge and solve new problems to survive, connect, and reproduce. AI also lacks the emotion that decides when, how, and whether it should be deployed.

The false conclusion in all of this is that because of AI, humanity faces an existential risk. The problem with this framing, aside from the pesky false premises, is that when a threat is framed in this way, the danger justifies any action, which may be the most significant danger of all.

My book, Doing AI, explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.

Go here to see the original:
The age of AI-ism - TechTalks

Attempt to compare different types of intelligence falls a bit short – Ars Technica

"What makes machines, animals, and people smart?" asks the subtitle of Paul Thagard's new book, Bots and Beasts. Not "Are computers smarter than humans?" or "Will computers ever be smarter than humans?" or even "Are computers and animals conscious, sentient, or self-aware?" (whatever any of that might mean). And that's unfortunate, because most people are probably more concerned with questions like those.

Thagard is a philosopher and cognitive scientist, and he has written many books about the brain, the mind, and society. In this one, he defines what intelligence is and delineates the 12 features and 8 mechanisms that he thinks it is built from, which allows him to compare the intelligences of these three very different types of beings.

He starts with a riff on the Aristotelian conception of virtue ethics. Whereas in that case a good person is defined as someone who possesses certain virtues, in Thagard's case a smart person is defined as someone who epitomizes certain ways of thinking. Confucius, Mahatma Gandhi, and Angela Merkel excelled at social innovation; Thomas Edison and George Washington Carver excelled at technological innovation; he lists Beethoven, Georgia O'Keeffe, Jane Austen, and Ray Charles as some of his favorite artistic geniuses; and Charles Darwin and Marie Curie serve as his paragons of scientific discoverers. Each of these people epitomizes different aspects of human intelligence, including creativity, emotion, problem solving, and using analogies.

Next he chooses six smart computers and six smart animals and grades them on how they measure up to people on these different features and mechanisms of intelligence. The computers are IBM Watson, DeepMind AlphaZero, self-driving cars, Alexa, Google Translate, and recommender algorithms; the animals are bees, octopuses, ravens, dogs, dolphins, and chimps.

All fare pretty abysmally on his report card. Animals as a class do better, but computers are evolving much more quickly. The upshot of his argument is that while some computers can beat the best humans at Jeopardy, Go, chess, debate, some medical diagnoses, and navigation, they are not smarter than humans because they have a low EQ. Or they may be smarter than some humans at some things, but they are not smarter than humanity with its diverse range of specializations.

Animals, on the other hand, can use their bodies to act upon the world and perceive that world, often better than people, but can't reason. It's almost as if humans were animals with computing devices in our heads.

After the grading, the book becomes pretty wide ranging, with each chapter tackling a big topic that could be better handled in its own book (and often has been). "Human Advantages" and "When Did Minds Begin" got better treatment in Darwin's Unfinished Symphony; "The Morality of Bots and Beasts" and "Ethics of AI" have been better covered in countless works of fiction, like I, Robot; Blade Runner; and Mary Doria Russell's The Sparrow, to mention a very few. These works not only raise the same ideas, they do so in a more nuanced, thought-provoking, and much more interesting way.

Thagard lists his features and mechanisms of intelligence, the specific characteristics that give advantages to humans, and the principles that should dictate the future development of AI, and that's pretty much all of his arguments. This book has a lot of lists. Like a lot. It makes his points straightforward and methodical, but also so, so boring to read.

He doesn't claim that computers can't or will never have emotions; he just concludes that they probably won't, because why would anyone ever want to make computers with emotions? So for now our spot at the pinnacle of intelligence seems safe. But if we ever meet up with a C-3PO (human cyborg relations) or a Murderbot, we might be in trouble.

Read this article:
Attempt to compare different types of intelligence falls a bit short - Ars Technica

Future Prospects of Data Science with Growing Technologies – Analytics Insight

Data science, in simple words, means the study of data. It entails developing methods of recording, storing, and analyzing data to successfully bring out useful information. Data science puts together and makes use of several statistical procedures, covering data modeling, data transformations, machine learning, and statistical operations including descriptive and inferential statistics. For all data scientists, statistics is the primary asset.

With cryptocurrency, one of the biggest innovations of the time, controlling data online has become a crucial challenge. Data science puts forward various techniques to identify groups of people and provide them with the best possible protection from fraudulent activities.

However, data science is not confined to one field; its applications are spread across various sectors.

Healthcare sector- The biggest application of data science is in healthcare. The accessibility of large datasets of patients can be used to build a data science approach to identify diseases at a very early stage. Healthcare is one of the biggest sectors providing opportunities for professionals who can combine their medical expertise with data science and provide immediate help to suffering patients.

Arms and Weapons- Data science can help in building automated solutions to identify an attack at a very early stage. It can also help in constructing automated weapons that are smart enough to identify when to fire and when not to.

Banking and Finance- In the banking and finance sector, data science can be used to manage money effectively, investing in the right places based on predictions for the best results.

Beyond the above sectors, data science is also applied in the automobile industry, in areas like self-driving cars and fixed-destination cabs, as well as in power and energy, where it can predict safe operating limits and help build AI bots that can handle enormous power sources.

The implementation of data science cannot be ignored, as it is already in action. When you look for something on Myntra or Flipkart and then get similar recommendations, or see advertisements for whatever you have searched on the internet, that is data science at work. The whole world is operated by data science; every single Google search activates a data science process.

The future of data science is growing. According to cloud vendor Domo, even accounting for the Earth's entire population, the average person was expected to generate 1.7 megabytes of data per second by the end of 2020.

An overarching motif today and moving ahead, big data is assured to play an authoritative role in the future. Data will shape modern health care, finance, business management, marketing, government, energy, and manufacturing. The scale of big data is truly staggering, as it has already entwined itself in fundamental aspects of business as well as personal life.

Since tech is a prime concern for almost all businesses, data science jobs are highly likely to grow.

Artificial intelligence is the most impactful technology that data scientists will run into. Today AI is already refining business operations and promises to be a major trend in the near future. The applications of AI in today's world have driven the adoption of related approaches such as machine learning and deep learning, and these will lead the way as the future of data science. Machine learning is the ability of statistical models to improve their performance over time without explicitly programmed instructions. This principle can be seen in AlphaZero, the chess machine developed by Google's DeepMind unit. AlphaZero improves on its computerized chess-playing peers without explicit instructions, learning from its own moves to reach the most desired outcome.

As more businesses adopt AI and data-based technologies at a high rate, a greater number of data scientists are needed to help guide these initiatives.

Data science is a leviathan pool of multiple data operations that include statistics and machine learning. Machine learning algorithms are very much dependent on data; therefore, machine learning is the primary contributor to the future of data science. In particular, data science covers areas like data integration, distributed architecture, automated machine learning, data visualisation, dashboards and BI, data engineering, deployment in production mode, and automated, data-driven decisions.

While IT-focused jobs have been all the rage over the last two decades, the rate of growth in the sector has been projected by the Bureau of Labor Statistics to be about 13%, still higher than the average rate of growth for all other sectors. However, data science has seen explosive growth of over 650% since 2012, based on an analysis done on LinkedIn. The role of data scientist has become one of the most in-demand jobs, ranking second only to machine learning engineer, a job adjacent to data scientist.

In the upcoming years, data scientists will be able to take on business-critical areas as well as complex challenges, enabling businesses to make exponential leaps in the future. Companies at present face a huge shortage of data scientists, but this is set to change.


The rest is here:
Future Prospects of Data Science with Growing Technologies - Analytics Insight

Towards Broad Artificial Intelligence (AI) & The Edge in 2021 – BBN Times

Artificial intelligence (AI) has quickened its progress in 2021.

A new administration is in place in the US, and the talk is about a major push for Green Technology and the need to stimulate next-generation infrastructure, including AI and 5G, to generate economic recovery, with David Knight forecasting that 5G has the potential to drive GDP growth of 40% or more by 2030. The Biden administration has stated that it will boost spending on emerging technologies, including AI and 5G, to $300Bn over a four-year period.

On the other side of the Atlantic Ocean, the EU has announced a Green Deal and also needs to consider European AI policy to develop next-generation companies that will drive economic growth and employment. It may well be that the EU and US (alongside Canada and other allies) will seek ways to work together on issues such as 5G policy and infrastructure development. The UK will be hosting COP 26 and has also made noises about AI and 5G development.

The world needs to find a way to successfully end the Covid-19 pandemic and in the post pandemic world move into a phase of economic growth with job creation. An opportunity exists for a new era of highly skilled jobs with sustainable economic development built around next generation technologies.

AI and 5G: GDP and jobs growth potential plus scope to reduce GHG emissions (source for numbers PWC / Microsoft, Accenture)

The image above sets out the scope for large reductions in emissions of GHGs whilst allowing for economic growth.

GDP and jobs growth will be very high on the post-pandemic agendas of governments around the world. At the same time, the economies that truly prosper and grow rapidly in this decade will be those that adopt Industry 4.0 technology, which in turn will lead to a shift away from the era of heavy fossil fuel consumption towards a digital world that may be powered by renewable energy and with transportation that is either heavily electric or, over time, hydrogen based.

2021 will mark the continued acceleration of Digital Transformation across the economy.

Firms will be increasingly "analytics driven" (it needs to be stressed that analytics driven, rather than data driven, is the key term). Data is the fuel that needs to be processed; analytics provide the ability for organisations to generate actionable insights.

Source for image above Lean BI

How Machine to Machine communication at the Edge, enabled by AI, could work may be demonstrated by the following image:

In the image above, Machine to Machine communication allows a broadcast across the network that a person has been detected stepping onto the road, so that even the car that does not have line of sight of the person is aware of their presence.

It is important to note that AI alongside 5G networks will be at the heart of this transition to the world of Industry 4.0.

5G will play an important role, as 5G networks are not only substantially faster than 4G networks, but they also enable significant reductions in latency, in turn allowing for near real-time analytics and responses, and they provide far greater connection capacity, thereby facilitating massive machine to machine communication for IoT devices on the Edge of the network (closer to where the data is created, on the device).

The image below sets out the speed advantage of 5G networks relative to 4G.

Source for image above Thales Group

However, as noted above 5G has many more advantages over 4G than speed alone as shown in the image below:

Source for image above Thales Group

The growth in Edge Computing will reduce the amount of data being sent backwards and forwards to a remote cloud server and thereby make the system more efficient.

Source for image above Thales Group

The economic benefits of 5G are set out below:

$13.2 trillion of global economic output

22.3 million new jobs created

$2.1 trillion in GDP growth

Towards AI at the Edge (AIIoT)

To date AI has been most pervasive and effective in the areas of the Social Media and Ecommerce giants, whose large digital data sets give them an advantage and where edge cases don't matter so much in terms of their consequences. No fatalities, injuries or material damages arise from an incorrect recommendation for a video, a post, or an item of clothing, other than a bad user experience.

However, when we seek to scale AI into the real world, edge cases and interpretability matter. Issues such as causality and explainability become key in areas such as autonomous vehicles and robots and also in healthcare.

Equally, data privacy and security really matter. On the one hand, as noted above, data is the fuel for Machine Learning models. On the other hand, in areas such as healthcare much of that data is often siloed and decentralised, and also protected by strict privacy rules in the likes of the US (HIPAA) and Europe (GDPR). It is also an issue in areas such as Finance and Insurance, where data privacy and regulation are of significant importance to the operations of financial services firms.

This is an area where Federated Learning with Differential Privacy could play a big role in scaling Machine Learning across areas such as healthcare and financial services.

Source for image above NVIDIA What is Federated Learning?

It is also an area where the US and Europe could work together to enable collaborative learning and help scale Machine Learning while providing data security and privacy for end users (patients). The healthcare sector around the world is at breaking point due to the strains of the Covid-19 pandemic, and augmenting our healthcare workers with AI, whilst ensuring that patient data security is maintained, will be key to transforming our healthcare systems, reducing the strain on them and delivering better outcomes for the patient.

Source for Image above TensorFlow Federated

For more on Federated Learning see: Federated Learning an Introduction.
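
To make the federated idea concrete, here is a minimal federated averaging sketch in plain NumPy: each client fits a model on data that never leaves it, and only the fitted parameters are averaged centrally. This is a toy under invented data, not the NVIDIA or TensorFlow Federated implementation, and it omits the calibrated noise that a differentially private variant would add to the shared updates.

```python
# Toy federated averaging for a linear model: raw data stays on each client,
# only locally fitted parameters are shared and averaged by the server.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])                       # hypothetical ground truth

def make_client_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)   # private local dataset
    return X, y

clients = [make_client_data() for _ in range(5)]

def local_fit(X, y):
    """Each client solves its own least-squares problem locally."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Server step: average the parameter vectors (a differentially private variant
# would add noise to each update before averaging).
local_weights = [local_fit(X, y) for X, y in clients]
global_w = np.mean(local_weights, axis=0)
print(global_w)   # close to true_w, learned without pooling any raw data
```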

In relation to AI, we will need to move away from the giant models and techniques that were predominant in the last decade towards neural compression (pruning), which in turn will enable models to operate more efficiently on the Edge, help preserve the battery life of devices, and reduce carbon footprint through reduced energy consumption.
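
A hedged sketch of what magnitude pruning looks like in practice, using PyTorch's built-in pruning utility on a small invented layer; real Edge deployments would typically combine this with quantization and sparse kernels to actually realise the speed and energy savings.

```python
# Magnitude (L1) pruning of a single linear layer with PyTorch's pruning utility.
# Toy layer for illustration; pruning alone only zeroes weights -- realising
# savings on the Edge also needs sparse/quantized execution.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)                               # hypothetical layer
prune.l1_unstructured(layer, name="weight", amount=0.8)   # zero the 80% smallest weights

sparsity = float((layer.weight == 0).float().mean())
print(f"weight sparsity: {sparsity:.2%}")                 # ~80% of weights are now zero

prune.remove(layer, "weight")                             # make the pruning permanent
```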

Furthermore, we won't only require Deep Learning models that may run inference on the Edge, but also models that may continue to learn on the Edge, on the fly, from smaller data sets and respond dynamically to their environments. This will be key to enabling effective autonomous systems such as autonomous vehicles (cars, drones) and also robots.

Solving for these challenges will be key to enabling AI to scale beyond Social Media and Ecommerce across the sectors of the economy.

It is no surprise that the most powerful AI companies today, and over the last few years, tend to be from the Ecommerce and social media sectors.

Furthermore, the images below from Valuewalk show how ByteDance (owner of TikTok) is the world's most valuable Unicorn and an AI company.

Source for image above Valuewalk, Tipalti, The Most Valuable Unicorn in the World 2020

Venture capitalists and angel investors should also try to understand that, in order to scale AI startup ventures, access to usable data and meeting the requirements of their customers in terms of usability (which may include some or all of transparency, causality, explainability, model size, and ethics) are key for many sectors.

The number of connected devices and the volume of data are forecast to grow dramatically as Digital Technology continues to expand its reach. For example, the image below shows a forecast from Statista for 75 Billion internet connected devices by 2025, an average of over 9 per person on the planet!

Data will grow, but an increasing amount of it will be decentralised, dispersed around the Edge.

Source for image above IDC

In fact IDC forecast that "the global datasphere will grow from 45 zettabytes in 2019 to 175 by 2025. Nearly 30% of the world's data will need real-time processing. ... Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90 ZB of data in 2025."

Illustration of the AI IoT across the Edge

Source for infographic images below: Iman Ghosh, VisualCapitalist.com

In the past decade key Machine Learning tools such as XGBoost, LightGBM (Light Gradient Boosting Machine) and CatBoost emerged (approximately 2015 to 2017), and these tools will continue to be popular with Data Scientists for powerful insights from structured data using supervised learning. No doubt we will see continued enhancements in Machine Learning tools over the next few years.
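
For reference, a typical structured-data, supervised-learning workflow with one of these libraries looks like the short sketch below (XGBoost shown; the dataset is synthetic and the hyperparameters are placeholders, not a recommendation).

```python
# A typical structured-data workflow with a gradient boosting library (XGBoost here).
# Synthetic data and placeholder hyperparameters, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)                 # supervised learning on tabular features

preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
```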

In relation to areas such as Natural Language Processing (NLP), Computer Vision and Drug Discovery efforts, Deep Learning will continue to be the effective tool. However, it is submitted that increasingly the techniques will move towards the following:

Transformers (including in Computer Vision);

Neuro-Symbolic AI (hybrid AI that combines Deep Learning with symbolic logic);

Neuroevolutionary approaches (hybrid approaches that combine Deep Learning with evolutionary algorithms);

Some or all of the above combined with Deep Reinforcement Learning.

This will lead to an era of Broad AI as AI starts to move beyond narrow AI (performing just one task) and starts working with multitasking, but not at the level where AI can match the human brain (AGI).

My own work is focused on the above hybrid approaches for Broad AI. As we seek ways to scale AI across the economy beyond Social Media and Ecommerce, the above will be key to enabling true Digital Transformation with AI across traditional sectors of the economy and to moving into the era of Industry 4.0.

Source for Image above David Cox, IBM Watson

The MIT-IBM Watson AI Lab defines Broad AI and the types of AI as follows:

"Narrow AIis the ability to perform specific tasks at a super-human rate within various categories, from chess, Jeopardy!, and Go, to voice assistance, debate, language translation, and image classification."

"Broad AIis next. Were just entering this frontier, but when its fully realized, it will feature AI systems that use and integrate multimodal data streams, learn more efficiently and flexibly, and traverse multiple tasks and domains. Broad AI will have powerful implications for business and society."

"Finally,General AIis essentially what science fiction has long imagined: AI systems capable of complex reasoning and full autonomy. Some scientists estimate that General AI could be possible sometime around 2050 which is really little more than guesswork. Others say it will never be possible. For now, were focused on leading the next generation of Broad AI technologies for the betterment of business and society."

I would add Artificial Super Intelligence (or Super AI) to the list above, as this is a type of AI that often gains much attention in Hollywood movies and television series.

In Summary

Whether one views 2021 as the first year of a decade or not, 2021 will mark a year for reset across the economy and hopefully one whereby we start to move beyond the Covid pandemic to a post pandemic world.

California will remain a leading area for AI development with the presence of Stanford, UC Berkeley, Caltech, UCLA, and the University of San Diego. However, other centres for AI will continue to grow around the US and the world, for example Boston, Austin, Toronto, London, Edinburgh, Oxford, Cambridge, Tel Aviv, Dubai, Abu Dhabi, Singapore, Berlin, Paris, Barcelona, Madrid, Lisbon, Sao Paulo, Tallinn, Bucharest, Kyiv / Kharkiv, Moscow, and of course across China (many other examples of cities could be cited too). AI will become a pervasive technology that is increasingly in the devices (including our mobile phones) that we interact with every day, and not just when we enter our social media accounts or go online to shop.

It will also mark a reset for AI to be increasingly on the Edge and across the "real-world" sectors of the economy with the emergence of Broad AI to take over from Narrow AI as we move across the decade.

Smaller models will be more desirable / more useful

GPT-3 is an exciting development in AI and shows the potential of Transformer models; however, in the future small will be beautiful and crucial. The human brain does not require the amount of server capacity of GPT-3 and uses far less energy. For AI to scale across the Edge we'll need powerful models that are energy efficient and optimised to work on small devices. For example, Mao et al. set out LadaBERT: lightweight adaptation of BERT (a large Transformer language model) through hybrid model compression.

The authors note "...a major blocking issue of applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency of user requests, raising the necessity of model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviours of BERT."

"However, the training procedure of knowledge distillation is expensive itself as it requires sufficient training data to imitate the teacher model."

"In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude."

Reducing training overheads and avoiding unsatisfactory latency of user requests will also be a key objective of Deep Learning development and evolution over the course of 2021 and beyond.

My Vision of Connectionism: Connecting one human to another (we're all human beings), connecting AI with AI, and AI with humans all at the level of the mind.

When I adopted the @DeepLearn007 handle on Twitter many years ago, I was inspired by the notion of connectionism, and the image that I selected for the account illustrates how two human beings could connect at the level of the brain and how the exchange of information, in effect ideas, drives innovation and the development of humanity. In the virtual world much of that occurs at the level of data and the analytical insights that we gain from that data through the application of AI (Machine Learning and Deep Learning) to generate responses.

I remain a connectionist, albeit an open-minded one. I believe that Deep Neural Networks will remain very important and the cornerstone of AI development. But just as Deep Reinforcement Learning combined Reinforcement Learning with Deep Learning to very powerful effect, resulting in the likes of AlphaGo, AlphaZero, and MuZero, so too developing hybrid AI that combines Deep Learning with symbolic and evolutionary approaches will lead to exciting new product developments and enable Deep Learning to scale beyond the Social Media and Ecommerce sectors, where the likes of medics and financial services staff want causal inference and explainability for trust in AI decision making. For example, Microsoft Research state that "understanding causality is widely seen as a key deficiency of current AI methods, and a necessary precursor for building more human-like machine intelligence."

Furthermore, in order for autonomous vehicles to truly take off, we'll need model explainability for situations where things have gone wrong, in order to understand what happened and how we may reduce the probability of the same outcome in the future.

The next generation of AI will move in the direction of the era of Broad AI, and the adventure will be here in 2021 as we move towards the Edge and towards a better world beyond the scars and challenges of 2020. The journey may require scaled-up 5G networks around the world to really transform the broader economy, and that may only really start to happen at the end of the year and beyond, but the direction of the pathway is clear.

The exciting potential for healthcare, smart industry, smart cities, smart living, education, and every other sector of the economy will mean that a new generation of businesses will emerge that we cannot even imagine today.

Perhaps a good point to conclude is with the forecast from Ovum and Intel for the impact of 5G on the media sector (of course AI will play a big role alongside 5G in developing new hyper-personalised services and products, and the two have a symbiotic relationship).

Source for the image above: Intel Study Finds 5G will Drive $1.3 Trillion in New Revenues in Media and Entertainment Industry by 2028

See more here:
Towards Broad Artificial Intelligence (AI) & The Edge in 2021 - BBN Times