Category Archives: AlphaGo

Artificial intelligence could help fund managers monetise data but will conservatism hold back the industry? – HedgeWeek

Technological advances are shaping the way asset management firms operate as they look for ways to introduce artificial intelligence applications to monetise data and improve automation from the front to the back office.

Back in 2016, SEI wrote a white paper entitled "The Upside of Disruption: Why the Future of Asset Management Depends on Innovation", in which it highlighted five trends shaping innovation: Watsonisation, Googlisation, Amazonisation, Uberisation and Twitterisation.

Witnessing the exponential changes occurring within and outside of the asset management industry as they relate to artificial intelligence, data management, platforms, social media and the like, SEI, in collaboration with ANZU Research, has updated these themes in its new series, "The Exponential Pull of Innovation: asset management and the upside of disruption".

With regard to the first trend, Watsonisation, a lot has changed in terms of the power, sophistication and scale of artificial intelligence applications being used within asset management.

As SEI's new Watsonisation 2.0 white paper, the first of five papers in the series being released over the coming months, points out, successfully harnessing technology in a complex and heavily regulated industry like ours is not easy. With new technologies and business models making change a constant, the financial services industry is being reorganised, re-engineered and reinvented before our eyes. There are now dedicated AI hedge fund managers such as Aidyia Holdings, Cerebellum Capital and Numerai, all of whom are pushing the envelope when it comes to harnessing the power of AI in their trading models.

According to a report by Cerulli, AI-driven hedge funds produced cumulative returns of 34 per cent over a three-year period from 2016 to 2019, compared to 12 per cent for the global hedge fund industry. Moreover, Cerulli's research shows that European AI-led active equity funds grew at a faster rate than other active equity funds from January to April this year.

That trend will likely continue as asset managers tap into the myriad possibilities afforded by AI. As SEI notes, portfolio management teams are tapping into AI's predictive capabilities by working alongside quantitative specialists with the skills needed to train AI systems on large data sets.

Large managers such as Balyasny Asset Management are now actively embracing a quantamental strategy to mine alternative data sets and evolve their investment capabilities. To do this, they are hiring sector analysts: people with sector expertise and superior skills in programming languages such as Python. The aim is for these analysts to act as a conduit between Balyasny's quantitative and fundamental analysts.

SEI argues that asset management is perfectly suited for the widespread adoption of AI.

They write: "Data is its lifeblood, and there is an abundance of historic and real-time data from a huge variety of sources (both public and private/internal). Traditional sources of structured data are always useful but ripe for more automated analytics."

Julien Messias is the co-founder of Quantology Capital Management, a Paris-based asset manager that focuses on behavioural analysis, using systematic processes and quantitative tools to generate alpha. The aim is to apply a scientific methodology based on collective intelligence.

"Our only conviction is with the processes we've created rather than any personal beliefs on how we think the markets will perform. Although it is not possible to be 100 per cent systematic, we aim to be as systematic as possible in respect of how we run the investment strategy," says Messias.

Messias says the predictive capabilities of AI have been evolving over the last decade, "but we have really noticed an acceleration over the last three or four years". It's not as straightforward as the report would seem to suggest, though. "At least 50 per cent of an analyst's time is spent cleansing data. If you want to avoid the Garbage In, Garbage Out scenario, you have to look carefully at the quality of data being used, no matter how sophisticated the AI is.

"It's not the most interesting job for a quant manager but it is definitely the most important one."
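The "garbage in, garbage out" point can be made concrete with a minimal, hypothetical sketch (this is not Quantology's actual pipeline; the function and the sample rows are invented for illustration) of the kind of sanity filtering that consumes so much of a quant analyst's time:

```python
def clean_prices(rows):
    """Keep only (date, price) rows with a plausible, non-duplicate price.

    Illustrative only: real cleansing pipelines handle far more cases
    (corporate actions, stale quotes, vendor mismatches, and so on).
    """
    seen_dates = set()
    cleaned = []
    for date, price in rows:
        if price is None or price <= 0:   # missing or impossible value
            continue
        if date in seen_dates:            # duplicate record from the feed
            continue
        seen_dates.add(date)
        cleaned.append((date, price))
    return cleaned

raw = [
    ("2020-01-01", 10.0),
    ("2020-01-01", 10.0),   # duplicate
    ("2020-01-02", None),   # missing
    ("2020-01-03", -5.0),   # impossible
    ("2020-01-04", 12.5),
]
prices = clean_prices(raw)  # keeps only the first and last rows
```

Unglamorous rules like these, applied before any model ever sees the data, are exactly the work Messias describes.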

One of the hurdles to overcome in asset management, particularly at large blue-chip names with decades of investment pedigree, is the inherent conservatism that comes with capital preservation. Large institutions may be seduced by the transformative properties of AI technology, but trying to convince the CFO or executive board that more should be done to embrace new technology can be a hard sell. And as SEI rightly points out, any information advantage gained can quickly evaporate, particularly in an environment populated by a growing number of AIs.

"We notice an increase in the use of alternative data to generate sentiment signals," says Messias, "but if you look at the performance of some hedge funds that claim to be fully AI, or who have incorporated AI into their investment models, it is not convincing. I have heard some large quant managers have had a tough year in 2020.

"The whole concept of AI in investment management has become very popular today and has become a marketing tool for some managers. Some managers don't fully understand how to use AI, however; they just claim to use it to sell their fund and make it sound attractive to investors.

"When it comes to applying AI, it is compulsory for us to understand exactly how each algorithm works."

This raises an interesting point in respect of future innovation in asset management. For fund managers to put their best foot forward, they will need to develop their own proprietary tools and processes to optimise the use of AI, and in so doing avoid the risk of jumping on the bandwagon and lacking credibility. Investors, take note: if a manager claims to be running AI tools, get them to explain exactly how and why they work.

Messias explains that at Quantology they create their own databases and that the aim is to make the investment strategy as autonomous as possible.

"Every day we run an automatic batch process. We flash the market, during which all of the algorithms run in order to gather data, which we store in our proprietary system. One example of the data sets we collect is earnings transcripts, when company management teams release guidance and so on."

"For the last four years we've been collecting these transcripts and have built a deep database of rich textual data. Our algorithms apply various NLP techniques to elicit an understanding of the transcript data, based on key words," says Messias.

He points out, however, that training algorithms to analyse textual data is not as easy as analysing quantitative data.

"As of today, the algorithms that are dedicated to that task are not efficient enough for us to exploit the data. In two or three years' time, however, we think there will be a lot of improvements, and the value will not be placed on the algorithms, per se, but on the data," he suggests.
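As a deliberately simple illustration of keyword-based transcript scoring in the spirit Messias describes (the function, the keyword lists and the scoring scheme here are invented for this sketch, not Quantology's):

```python
# Invented keyword lists for illustration; production systems use far
# richer lexicons, weighting and negation handling.
POSITIVE = {"beat", "record", "growth", "raise", "strong"}
NEGATIVE = {"miss", "decline", "headwind", "cut", "weak"}

def transcript_score(text):
    """Net positive-minus-negative keyword count per 100 words."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * (pos - neg) / max(len(words), 1)

sample = "Management expects strong growth and will raise guidance despite one headwind."
score = transcript_score(sample)   # 3 positive hits, 1 negative, 11 words
```

Even a signal this crude shows why data quality dominates: a single mis-transcribed word shifts the score, regardless of how the downstream model is built.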

Investment research is a key area of AI application for asset managers to consider as they seek to evolve over the coming years. Human beings are dogged by multiple behavioural biases that cloud our judgment and often lead to confirmation bias, especially when developing an investment thesis; it's the classic case of looking for data to fit the theory, rather than acknowledging when the theory is wrong.

AI systems suffer no such foibles. They are, as SEI's white paper explains, "better able to illuminate variables, probabilistically predict outcomes and suggest a sensible course of action".

Messias explains that at Quantology they run numerous trading algorithms that seek to exploit investment opportunities based on two primary pillars. The first is the behavioural biases that exist in the market: "We think our algorithms can detect these biases better than a human being can," states Messias.

The second pillar is collective intelligence; that is, the collective wisdom of the crowd.

"We have no idea where the market will go; this is not our job," asserts Messias. "Our job is to deliver alpha. The way markets react is always the right way. The market is the best example of collective intelligence; that's what our algorithms seek to better understand and translate into trading signals."

One of the truly exciting aspects of fund management over the next few years will be seeing how AI systems evolve, as their machine learning capabilities enable them to become even smarter at detecting micro patterns in the markets.

Google's AlphaGo became the first computer program to defeat a professional Go player without handicaps in 2015, and went on to defeat the number-one ranked player in the world. As SEI observes, "Analysts of AlphaGo's play, for example, noted that it played with a unique style that set it apart from human players, taking a relatively conservative approach punctuated with odd moves." This underscores the real power of AI: it is not just faster and more accurate; it is inclined to do things differently.

Logic would suggest that such novel, innovative moves (i.e. trades) could also become a more prominent feature of systematic fund management. Indeed, it is already happening.

Messias refers to Quantology's algorithms building a strong signal for Tesla when the stock rallied last September, after the company released its earnings report.

"The model sent us a signal that a human being would not have created based on a traditional fundamental way of thinking," he says.

Will we see more hedge funds launching with AI acting as the portfolio manager?

"I think that is the way investment management will eventually evolve. Newer firms are likely to test innovations and techniques, and if AI shows they can become more competitive than human-based trading then, yes, I think the future of investment will be more technology orientated," concludes Messias.

To read the SEI paper, click here for the US version, and here for the UK version.

Original post:
Artificial intelligence could help fund managers monetise data but will conservatism hold back the industry? - HedgeWeek

The term ‘ethical AI’ is finally starting to mean something – Report Door

Earlier this year, the independent research organisation of which I am the Director, the London-based Ada Lovelace Institute, hosted a panel at the world's largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title referenced both a tongue-in-cheek effort at self-promotion and a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles.

And we were not alone. 2020 has seen the emergence of a new wave of ethical AI, one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.

Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol, prompting serious consideration of the likelihood that general AI was within reach. And algorithmically curated chaos on the world's duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year: Brexit and Trump's election.

In a panic over how to understand and prevent the harm that was so clearly to follow, policymakers and tech developers turned to philosophers and ethicists to develop codes and standards. These often recycled a subset of the same concepts and rarely moved beyond high-level guidance or contained specificity of the kind needed to speak to individual use cases and applications.

This first wave of the movement focused on ethics over law, neglected questions related to systemic injustice and control of infrastructures, and was unwilling to deal with what Michael Veale, Lecturer in Digital Rights and Regulation at University College London, calls the question of "problem framing": early ethical AI debates usually took as a given that AI will be helpful in solving problems. These shortcomings left the movement open to the critique that it had been co-opted by the big tech companies as a means of evading greater regulatory intervention, and those who believed big tech companies were controlling the discourse around ethical AI saw the movement as "ethics washing". The flow of money from big tech into codification initiatives, civil society, and academia advocating for an ethics-based approach only underscored the legitimacy of these critiques.

At the same time, a second wave of ethical AI was emerging. It sought to promote the use of technical interventions to address ethical harms, particularly those related to fairness, bias and non-discrimination. The domain of "fair-ML" was born out of an admirable objective on the part of computer scientists to bake fairness metrics or hard constraints into AI models to moderate their outputs.

This focus on technical mechanisms for addressing questions of fairness, bias, and discrimination addressed the clear concerns about how AI and algorithmic systems were inaccurately and unfairly treating people of color or ethnic minorities. Two specific cases contributed important evidence to this argument. The first was the Gender Shades study, which established that facial recognition software deployed by Microsoft and IBM returned higher rates of false positives and false negatives for the faces of women and people of color. The second was the 2016 ProPublica investigation into the COMPAS sentencing algorithmic tool, which found that Black defendants were far more likely than White defendants to be incorrectly judged to be at a higher risk of recidivism, while White defendants were more likely than Black defendants to be incorrectly flagged as low risk.

Second-wave ethical AI narrowed in on these questions of bias and fairness, and explored technical interventions to solve them. In doing so, however, it may have skewed and narrowed the discourse, moving it away from the root causes of bias and even exacerbating the position of people of color and ethnic minorities. As Julia Powles, Director of the Minderoo Tech and Policy Lab at the University of Western Australia, argued, alleviating the problems with dataset representativeness merely co-opts designers in perfecting vast instruments of surveillance and classification. When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Some also saw the fair-ML discourse as a form of co-option of socially conscious computer scientists by big tech companies. By framing ethical problems as narrow issues of fairness and accuracy, companies could equate expanded data collection with investing in ethical AI.

The efforts of tech companies to champion fairness-related codes illustrate this point: in January 2018, Microsoft published its ethical principles for AI, starting with fairness; in May 2018, Facebook announced a tool to search for bias called Fairness Flow; and in September 2018, IBM announced a tool called AI Fairness 360, designed to check for unwanted bias in datasets and machine learning models.

What was missing from second-wave ethical AI was an acknowledgement that technical systems are, in fact, sociotechnical systems: they cannot be understood outside of the social context in which they are deployed, and they cannot be optimised for societally beneficial and acceptable outcomes through technical tweaks alone. As Ruha Benjamin, Associate Professor of African American Studies at Princeton University, argued in her seminal text, Race After Technology: Abolitionist Tools for the New Jim Code, "the road to inequity is paved with technical fixes". The narrow focus on technical fairness is insufficient to help us grapple with all of the complex tradeoffs, opportunities, and risks of an AI-driven future; it confines us to thinking only about whether something works, but doesn't permit us to ask whether it should work. That is, it supports an approach that asks, "What can we do?" rather than "What should we do?"

On the eve of the new decade, MIT Technology Review's Karen Hao published an article entitled "In 2020, let's stop AI ethics-washing and actually do something". Weeks later, the AI ethics community ushered in 2020 clustered in conference rooms in Barcelona for the annual ACM Fairness, Accountability and Transparency conference. Among the many papers that had tongues wagging was one written by Elettra Bietti, Kennedy Sinclair Scholar Affiliate at the Berkman Klein Center for Internet and Society, which called for a move beyond the "ethics-washing" and "ethics-bashing" that had come to dominate the discipline. Those two pieces heralded a cascade of interventions that saw the community reorienting around a new way of talking about ethical AI, one defined by justice: social justice, racial justice, economic justice, and environmental justice. It has seen some eschew the term "ethical AI" in favor of "just AI".

As the wild and unpredicted events of 2020 have unfurled, third-wave ethical AI has begun to take hold alongside them, strengthened by the immense reckoning that the Black Lives Matter movement has catalysed. Third-wave ethical AI is less conceptual than first-wave ethical AI, and is interested in understanding applications and use cases. It is much more concerned with power, alive to vested interests, and preoccupied with structural issues, including the importance of decolonising AI. An article published in Nature in July 2020 by Pratyusha Kalluri, founder of the Radical AI Network, epitomises the approach, arguing that "when the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful". What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people.

What has this meant in practice? We have seen courts begin to grapple with, and political and private sector players admit to, the real power and potential of algorithmic systems. In the UK alone, the Court of Appeal found the use by police of facial recognition systems unlawful and called for a new legal framework; a government department ceased its use of AI for visa application sorting; the West Midlands police ethics advisory committee argued for the discontinuation of a violence-prediction tool; and high school students across the country protested after tens of thousands of school leavers had their marks downgraded by an algorithmic system used by the education regulator, Ofqual. New Zealand published an Algorithm Charter, and France's Etalab, a government task force for open data, data policy, and open government, has been working to map the algorithmic systems in use across public sector entities and to provide guidance.

The shift in gaze of ethical AI studies away from the technical towards the socio-technical has brought more issues into view, such as the anti-competitive practices of big tech companies, platform labor practices, parity in negotiating power in public sector procurement of predictive analytics, and the climate impact of training AI models. It has seen the Overton window contract in terms of what is reputationally acceptable from tech companies; after years of campaigning by researchers like Joy Buolamwini and Timnit Gebru, companies such as Amazon and IBM have finally adopted voluntary moratoria on their sales of facial recognition technology.

The COVID crisis has been instrumental, surfacing technical advancements that have helped to fix the power imbalances that exacerbate the risks of AI and algorithmic systems. The availability of the Google/Apple decentralised protocol for enabling exposure notification prevented dozens of governments from launching invasive digital contact tracing apps. At the same time, governments' response to the pandemic has inevitably catalysed new risks, as public health surveillance has segued into population surveillance, facial recognition systems have been enhanced to work around masks, and the threat of future pandemics is leveraged to justify social media analysis. The UK's attempt to operationalise a weak Ethics Advisory Board to oversee its failed attempt at launching a centralised contact-tracing app was the death knell for toothless ethical figureheads.

Research institutes, activists, and campaigners united by the third-wave approach to ethical AI continue to work to address these risks, with a focus on practical tools for accountability (we at the Ada Lovelace Institute, and others such as AI Now, are working on developing audit and assessment tools for AI; and the Omidyar Network has published its Ethical Explorer toolkit for developers and product managers), litigation, protest and campaigning for moratoria, and bans.

Researchers are interrogating what justice means in data-driven societies, and institutes such as Data & Society, the Data Justice Lab at Cardiff University, the JUST DATA Lab at Princeton, and the Global Data Justice project at the Tilburg Institute for Law, Technology, and Society in the Netherlands are churning out some of the most novel thinking. The Minderoo Foundation has just launched its new "future says" initiative with a $3.5 million grant, with aims to tackle lawlessness, empower workers, and reimagine the tech sector. The initiative will build on the critical contribution of tech workers themselves to the third wave of ethical AI, from AI Now co-founder Meredith Whittaker's organizing work at Google before her departure last year, to walkouts and strikes by Amazon logistics workers and Uber and Lyft drivers.

But the approach of third-wave ethical AI is by no means accepted across the tech sector yet, as evidenced by the recent acrimonious exchange between AI researchers Yann LeCun and Timnit Gebru about whether the harms of AI should be reduced to a focus on bias. Gebru not only reasserted well established arguments against a narrow focus on dataset bias but also made the case for a more inclusive community of AI scholarship.

Mobilized by social pressure, the boundaries of acceptability are shifting fast, and not a moment too soon. But even those of us within the ethical AI community have a long way to go. A case in point: although we'd programmed diverse speakers across the event, the Ethics Panel to End All Ethics Panels we hosted earlier this year failed to include a person of color, an omission for which we were rightly criticized and hugely regretful. It was a reminder that as long as the domain of AI ethics continues to platform certain types of research approaches, practitioners, and ethical perspectives to the exclusion of others, real change will elude us. Ethical AI cannot be defined solely from the position of European and North American actors; we need to work concertedly to surface other perspectives, other ways of thinking about these issues, if we truly want to find a way to make data and AI work for people and societies across the world.

Carly Kind is a human rights lawyer, a privacy and data protection expert, and Director of the Ada Lovelace Institute.

Visit link:
The term 'ethical AI' is finally starting to mean something - Report Door

This A.I. makes up gibberish words and definitions that sound astonishingly real – Digital Trends

A sesquipedalian is a person who overuses uncommon words like "lameen" (a bishop's letter expressing a fault or reprimand) or "salvestate" (to transport car seats to the dining room) just for the sake of it. Sesquipedalian is real. The other two words aren't, but they totally should be. They're the invention of a new website called This Word Does Not Exist. Powered by machine learning, it conjures up entirely new words never before seen or used, and even generates a halfway convincing definition for them. It's all kinds of brilliant.

"In February, I quit my job as an engineering director at Instagram after spending seven intense years building their ranking algorithms, like the non-chronological feed," Thomas Dimson, creator of This Word Does Not Exist, told Digital Trends. "A friend and I were trying to brainstorm names for a company we could start together in the A.I. space. After [coming up with] some lame ones, I decided it was more appropriate to let A.I. name a company about A.I."

Then, as Dimson tells it, a global pandemic happened, and he found himself at home with lots of time on his hands to play around with his name-making algorithm. "Eventually I stumbled upon the Mac dictionary as a potential training set and [started] generating arbitrary words instead of just company names," he said.

If you've ever joked that someone who uses complex words in their daily life must have swallowed a dictionary, that's pretty much exactly what This Word Does Not Exist has done. The algorithm was trained from a dictionary file Dimson structured according to different parts of speech, definition, and example usage. The model refines OpenAI's controversial GPT-2 text generator, the much-hyped algorithm once called "too dangerous" to release to the public. Dimson's twist on it assigns probabilities to potential words based on which letters are likely to follow one another, until the word looks like a reasonably convincing dictionary entry. As a final step, it checks that the generated word isn't a real one by looking it up in the original training set.
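The generate-and-filter loop described above can be sketched in miniature. The real site fine-tunes GPT-2; here a toy bigram letter model stands in (all names and the tiny vocabulary below are invented for this sketch), but the final step, checking the candidate against the training vocabulary, is the same idea:

```python
import random
from collections import defaultdict

def build_bigram_model(words):
    """For each character, collect the characters observed to follow it."""
    model = defaultdict(list)
    for w in words:
        padded = "^" + w + "$"              # ^ marks start, $ marks end
        for a, b in zip(padded, padded[1:]):
            model[a].append(b)
    return model

def sample_word(model, rng, max_len=12):
    """Walk the model letter by letter until an end marker or max_len."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

def invent_word(model, vocabulary, rng, attempts=10_000):
    """Keep sampling until the candidate is absent from the training set."""
    for _ in range(attempts):
        w = sample_word(model, rng)
        if w and w not in vocabulary:
            return w
    raise RuntimeError("failed to invent a new word")

vocab = {"cat", "car", "cart", "care", "core", "bore", "bone"}
model = build_bigram_model(vocab)
new_word = invent_word(model, vocab, random.Random(42))
```

A bigram model only knows which letter tends to follow which; GPT-2 conditions on far longer contexts, which is why its inventions come with plausible definitions rather than just plausible spellings.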

This Word Does Not Exist is just the latest in a series of [Insert object] Does Not Exist creations. Others range from non-existent Airbnb listings to fake people to computer-generated memes which nonetheless capture the oddball humor of real ones.

"People have a nervous curiosity toward what makes us human," Dimson said. "By looking at these machine-produced demos, we are better able to understand ourselves. I'm reminded of the fascination with Deep Blue beating Kasparov in 1996 or AlphaGo beating Lee Sedol in 2016."

See the original post:
This A.I. makes up gibberish words and definitions that sound astonishingly real - Digital Trends


The New ABC’s: Artificial Intelligence, Blockchain And How Each Complements The Other – Technology – United States – Mondaq News Alerts


The terms "revolution" and "disruption" in the context of technological innovation are probably bandied about a bit more liberally than they should be. Technological revolution and disruption imply upheaval and systemic reevaluations of the way that humans interact with industry and even each other. Actual technological advancement, however, moves at a much slower pace and tends to augment our current processes rather than to outright displace them. Oftentimes, we fail to realize the ubiquity of legacy systems in our everyday lives, sometimes to our own detriment.

Consider the keyboard. The QWERTY layout of keys is standard for English keyboards across the world. Even though the layout remains a mainstay of modern office setups, its origins trace back to the mass popularization of a typewriter manufactured and sold by E. Remington & Sons in 1874.1 Urban legend has it that the layout was designed to keep typists from jamming typing mechanisms, yet the reality reveals otherwise: the layout was actually designed to assist those transcribing messages from Morse code.2 Once typists took to the format, the keyboard as we know it today was embraced as a global standard, even as the use of Morse code declined.3 Like QWERTY, our familiarity and comfort with legacy systems has contributed to their rise. These systems are varied in their scope, and they touch everything: healthcare, supply chains, our financial systems and even the way we interact at a human level. However, their use and value may be tested sooner than we realize.

Artificial intelligence (AI) and blockchain technology (blockchain) are two novel innovations that offer the opportunity for us to move beyond our legacy systems and streamline enterprise management and compliance in ways previously unimaginable. However, their potential is often clouded by their "buzzword" status, with bad actors taking advantage of the hype. When one cuts through the haze, it becomes clear that these two technologies hold significant transformative potential. While these new innovations can certainly function on their own, AI and blockchain also complement one another in such ways that their combination offers business solutions, not only the ability to build upon legacy enterprise systems but also the power to eventually upend them in favor of next-level solutions. Getting to that point, however, takes time and is not without cost. While humans are generally quick to embrace technological change, our regulatory frameworks take longer to adapt. The need to address this constraint is pressing: real market solutions for these technologies have started to come online, while regulatory opaqueness hurdles abound. As innovators seek to exploit the convergence of AI and blockchain innovations, they must pay careful attention to overcome both the technical and regulatory hurdles that accompany them. Do so successfully, and the rewards promise to be bountiful.

First, a bit of taxonomy is in order.

AI in a Nutshell:

Artificial intelligence is "the capability of a machine to imitate intelligent human behavior," such as learning, understanding language, solving problems, planning and identifying objects.4 More practically speaking, however, today's AI is actually mostly limited to simple tasks of the "if X, then Y" variety. It is through supervised learning that AI is "trained," and this process requires an enormous amount of data. For example, IBM's question-answering supercomputer Watson was able to beat Jeopardy! champions Brad Rutter and Ken Jennings in 2011 because Watson had been coded to understand simple questions by being fed countless iterations and had access to vast knowledge in the form of digital data. Likewise, Google DeepMind's AlphaGo defeated the Go champion Lee Sedol in 2016 because AlphaGo had undergone countless instances of Go scenarios and collected them as data. As such, most implementations of AI involve simple tasks, assuming that relevant information is readily accessible. In light of this, Andrew Ng, the Stanford roboticist, noted that "[i]f a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."5

Moreover, a significant portion of AI currently in use or being developed is based on "machine learning." Machine learning is a method by which AI adapts its algorithms and models based on exposure to new data, thereby allowing AI to "learn" without being programmed to perform specific tasks. Developing high-performance machine learning-based AI therefore requires substantial amounts of data. Data high in both quality and quantity will lead to better AI, since an AI instance indiscriminately accepts all data provided to it and can refine and improve its algorithms only to the extent of the provided data. For example, an AI that visually distinguishes Labradors from other breeds of dogs will become better at its job the more it is exposed to clear and accurate pictures of Labradors.
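
The training loop described above can be sketched in a few lines of plain Python. This is a hypothetical toy (a perceptron on an invented, linearly separable data set), not any production system: the program is never told the labeling rule; it infers one by nudging its weights each time a labeled example proves it wrong, so more clean examples mean a better rule.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust a linear rule (weights + bias) on each labeled example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Invented toy data: label is 1 when the two coordinates sum to more than 1
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.3), (0.9, 0.8)]
labels = [0, 0, 0, 1, 0, 1]
w, b = train_perceptron(samples, labels)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in samples]
print(preds)  # [0, 0, 0, 1, 0, 1]: the learned rule matches the labels
```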

It is in these data amalgamations that AI does its job best. Scanning and analyzing vast subsets of data is something that a computer can do far more rapidly than a human. However, AI is not perfect, and many of the pitfalls that AI is prone to are often the result of the difficulty in conveying how humans process information in contrast to machines. One example of this phenomenon that has dogged the technology has been AI's penchant for "hallucinations." An AI algorithm "hallucinates" when the input is interpreted by the machine into something that seems implausible to a human looking at the same thing.6 Case in point: AI has interpreted an image of a turtle as that of a gun, or a rifle as a helicopter.7 This occurs because machines are hypersensitive to, and interpret, the tiniest of pixel patterns that we humans do not process. Because of the complexity of this analysis, developers are only now beginning to understand such AI phenomena.

When one moves beyond pictures of guns and turtles, however, AI's shortfalls can become far less innocuous. AI learning is based on inputted data, yet much of this data reflects the inherent shortfalls and behaviors of everyday individuals. As such, without proper correction for bias and other human assumptions, AI can, for example, perpetuate racial stereotypes and racial profiling.8 Therefore, proper care for what goes into the system and who gets access to the outputs must be employed for the ethical use of AI. But therein lies an additional problem: who has access to enough data to really take full advantage of and develop robust AI?

Not surprisingly, because large companies are better able to collect and manage increasingly larger amounts of data than individuals or smaller entities, such companies have remained better positioned to develop complex AI. In response to this tilted landscape, various private and public organizations, including the U.S. Department of Justice's Bureau of Justice Statistics, Google Scholar and the International Monetary Fund, have launched open-source initiatives to make publicly available vast amounts of data that such organizations have collected over many years.

Blockchain in a Nutshell:

Blockchain technology as we know it today came onto the scene in early 2009 with the rise of Bitcoin, perhaps the most famous application of the technology. Fundamentally, blockchain is a data structure that makes it possible to create a tamper-proof, distributed, peer-to-peer system of ledgers containing immutable, time-stamped and cryptographically connected blocks of data. In practice, this means that data can be written only once onto a ledger, which is then read-only for every user. However, many of the most utilized blockchain protocols, for example the Bitcoin or Ethereum networks, maintain and update their distributed ledgers in a decentralized manner, which stands in contrast to traditional networks reliant on a trusted, centralized data repository.9 In structuring the network in this way, these blockchain mechanisms function to remove the need for a trusted third party to handle and store transaction data. Instead, data are distributed so that every user has access to the same information at the same time. In order to update a ledger's distributed information, the network employs pre-defined consensus mechanisms and military-grade cryptography to prevent malicious actors from going back and retroactively editing or tampering with previously recorded information. In most cases, networks are open source, maintained by a dedicated community and made accessible to any connected device that can validate transactions on a ledger; such a device is referred to as a node.
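
The "cryptographically connected blocks" described above can be illustrated with a toy hash chain in Python, using the standard hashlib library. This is a deliberately minimal sketch: real networks add consensus mechanisms, digital signatures and peer-to-peer distribution on top of this basic structure.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of the previous block."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})

def is_valid(chain):
    """A chain is valid only if every block references its predecessor's hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with recorded history
print(is_valid(chain))                   # False: the edit breaks the chain
```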

Nevertheless, the decentralizing feature of blockchain comes with significant resource and processing drawbacks. Many blockchain-enabled platforms run very slowly and have interoperability and scalability problems. Moreover, these networks use massive amounts of energy. For example, the Bitcoin network requires the expenditure of about 50 terawatt-hours per year, equivalent to the energy needs of the entire country of Singapore.10 To ameliorate these problems, several market participants have developed enterprise blockchains with permissioned networks. While many of them may be open source, the networks are led by known entities that determine who may verify transactions on that blockchain, and, therefore, the required consensus mechanisms can be much more energy efficient.

Not unlike AI, a blockchain can also be coded with certain automated processes to augment its recordkeeping abilities, and, arguably, it is these types of processes that contributed to blockchain's rise. That rise, some may say, began with the introduction of the Ethereum network and its engineering around "smart contracts," a term used to describe computer code that automatically executes all or part of an agreement and is stored on a blockchain-enabled platform. Smart contracts are neither "contracts" in the sense of legally binding agreements nor "smart" in employing applications of AI. Rather, they consist of coded automated parameters responsive to what is recorded on a blockchain. For example, if the parties in a blockchain network have indicated, by initiating a transaction, that certain parameters have been met, the code will execute the step or steps triggered by those coded parameters. The input parameters and the execution steps for smart contracts need to be specific: the digital equivalent of "if X, then Y" statements. In other words, when required conditions have been met, a particular specified outcome occurs; in the same way that a vending machine sells a can of soda once change has been deposited, smart contracts allow title to digital assets to be transferred upon the occurrence of certain events. Nevertheless, the tasks that smart contracts are currently capable of performing are fairly rudimentary. As developers figure out how to expand their networks, integrate them with enterprise-level technologies and develop more responsive smart contracts, there is every reason to believe that smart contracts and their decentralized applications (dApps) will see increased adoption.
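
The vending-machine analogy maps directly onto code. The sketch below is a hypothetical model in Python, invented for illustration (real smart contracts are deployed to a blockchain virtual machine, not run as ordinary scripts): once the coded payment parameter is met, title transfers automatically, with no intermediary deciding anything.

```python
class EscrowContract:
    """Toy 'if X, then Y' contract: pay the price, receive the asset."""

    def __init__(self, seller, asset, price):
        self.owner = seller
        self.asset = asset
        self.price = price

    def pay(self, buyer, amount):
        # The transfer executes automatically once the parameter is met
        if amount >= self.price:
            self.owner = buyer
            return f"{self.asset} transferred to {buyer}"
        return "payment insufficient; no transfer"

contract = EscrowContract("alice", asset="token-42", price=10)
print(contract.pay("bob", 5))    # payment insufficient; no transfer
print(contract.pay("bob", 10))   # token-42 transferred to bob
print(contract.owner)            # bob
```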

AI and blockchain technology may appear to be diametric opposites. AI is an active technology: it analyzes what is around it and formulates solutions based on the history of what it has been exposed to. By contrast, blockchain is data agnostic with respect to what is written into it; the technology is largely passive. It is primarily in that distinction that we find synergy, for each technology augments the strengths and tempers the weaknesses of the other. For example, AI technology requires access to big data sets in order to learn and improve, yet many of the sources of these data sets are hidden in proprietary silos. With blockchain, stakeholders are empowered to contribute data to an openly available and distributed network with immutability of data as a core feature. With a potentially larger pool of data to work from, the machine learning mechanisms of a widely distributed, blockchain-enabled and AI-powered solution could improve far faster than those of a private-data AI counterpart. These technologies on their own are more limited. Blockchain technology, in and of itself, is not capable of evaluating the accuracy of the data written into its immutable network: garbage in, garbage out. AI can, however, act as a learned gatekeeper for what information may come on and off the network and from whom. Indeed, the interplay between these diverse capabilities will likely lead to improvements across a broad array of industries, each with unique challenges that the two technologies together may overcome.

Footnotes

1 See Rachel Metz, Why We Can't Quit the QWERTY Keyboard, MIT Technology Review (Oct. 13, 2018), available at: https://www.technologyreview.com/s/611620/why-we-cant-quit-the-qwerty-keyboard/.

2 Alexis Madrigal, The Lies You've Been Told About the Origin of the QWERTY Keyboard, The Atlantic (May 3, 2013), available at: https://www.theatlantic.com/technology/archive/2013/05/the-lies-youve-been-told-about-the-origin-of-the-qwerty-keyboard/275537/.

3 See Metz, supra note 1.

4 See Artificial Intelligence, Merriam-Webster's Online Dictionary, Merriam-Webster (last accessed Mar. 27, 2019), available at: https://www.merriam-webster.com/dictionary/artificial%20intelligence.

5 See Andrew Ng, What Artificial Intelligence Can and Can't Do Right Now, Harvard Business Review (Nov. 9, 2016), available at: https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now.

6 Louise Matsakis, Artificial Intelligence May Not Hallucinate After All, Wired (May 8, 2019), available at: https://www.wired.com/story/adversarial-examples-ai-may-not-hallucinate/.

7 Id.

8 Jerry Kaplan, Opinion: Why Your AI Might Be Racist, Washington Post (Dec. 17, 2018), available at: https://www.washingtonpost.com/opinions/2018/12/17/why-your-ai-might-be-racist/?noredirect=on&utm_term=.568983d5e3ec.

9 See Shaanan Cohney, David A. Hoffman, Jeremy Sklaroff and David A. Wishnick, Coin-Operated Capitalism, Penn. Inst. for L. & Econ. (No. 18-37) (Jul. 17, 2018) at 12, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3215345.

10 See Bitcoin Energy Consumption Index (last accessed May 13, 2019), available at: https://digiconomist.net/bitcoin-energy-consumption.

Keywords: Artificial Intelligence + Robotics, Blockchain, Fintech

Mofo Tech Blog - A blog dedicated to information, trend-spotting & analysis for science & tech-based companies

Morrison & Foerster LLP. All rights reserved

More:
The New ABC's: Artificial Intelligence, Blockchain And How Each Complements The Other - Technology - United States - Mondaq News Alerts

The professionals who predict the future for a living – MIT Technology Review

Inez Fung

Professor of atmospheric science, University of California, Berkeley

Prediction for 2030: We'll light up the world safely

I've spoken to people who want climate model information, but they're not really sure what they're asking me for. So I say to them, "Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?"

I joined Jim Hansen's group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we've done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.

Still, there are things we're missing. We still don't have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: looking at the cloud is still not totally utilized. The other is that there used to be no way to get regional precipitation patterns through history, and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have got over half a million years of precipitation records all over Asia.

I don't see us reducing fossil fuels by 2030. I don't see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I'm looking forward to the growth in alternative energy going to parts of the world that have no electricity. That's important because it's education, health, everything associated with a Western standard of living. That's where I'm putting my hopes.

Anne Lise Kjaer

Futurist, Kjaer Global, London

Prediction for 2030: Adults will learn to grasp new ideas

As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.

When it comes to the future, you have two choices. You can sit back and think "It's not happening to me" and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.

A lot of companies come to us and think they want to hear about the future, but really it's just an exercise for them: let's just tick that box, do a report, and put it on our bookshelf.

So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation: how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?

What's next? Obviously with technology we can educate much better than we could in the past. But it's a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?

Philip Tetlock

Coauthor of Superforecasting and professor, University of Pennsylvania

Prediction for 2030: We'll get better at being uncertain

At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it's usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction, but from our point of view, it's more useful to break it down and to say: if we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn't happen: driverless Ubers weren't picking people up for fares in any major American city at the end of 2017. Watson didn't defeat the world's best oncologists in a medical diagnosis tournament. So I don't think we're on a fast track toward the singularity, put it that way.

Forecasts have the potential to be either self-fulfilling or self-negating; Y2K was arguably a self-negating forecast. But it's possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., how likely is X conditional on our doing this or doing that?

What I've seen over the last 10 years, and it's a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there's a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.

Keith Chen

Associate professor of economics, UCLA

Prediction for 2030: We'll be more, and less, private

When I worked on Uber's surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times, like New Year's, when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It's like trying to predict the weather. Yes, the amount of weather data that we collect today (temperature, wind speed, barometric pressure, humidity) is 10,000 times greater than what we were collecting 20 years ago. But we still can't predict the weather 10,000 times further out than we could back then. And social movements, even in a very specific setting, such as where riders want to go at any given point in time, are, if anything, even more chaotic than weather systems.

These days what I'm doing is a little bit more like forensic economics. We look to see what we can find and predict from people's movement patterns. We're just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?

I think the next big social tipping point is people actually starting to really care about their privacy. It'll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don't quite know how to reconcile the two.

Annalee Newitz

Science fiction and nonfiction author, San Francisco

Prediction for 2030: We're going to see a lot more humble technology

Every era has its own ideas about the future. Go back to the 1950s and you'll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.

Science fiction writers can't actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can't say what's definitely going to happen, is offer a range of scenarios informed by history.

There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people, not just science fiction writers but people who are working on machine learning, believe that relatively soon we're going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.

It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.

I'm not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.

We're going to have to develop much better technologies around disaster relief and emergency response, because we'll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don't mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.

Finale Doshi-Velez

Associate professor of computer science, Harvard

Prediction for 2030: Humans and machines will make decisions together

In my lab, we're trying to answer questions like "How might this patient respond to this antidepressant?" or "How might this patient respond to this vasopressor?" So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.

Some of it might be relevant to making predictions about their illnesses, some not, and we don't know which is which. That's why we ask for the large data set with everything.

There's been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.

We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.

I'm excited about combining humans and AI to make predictions. Let's say your AI is right only 70% of the time, and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
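
Doshi-Velez's 70/70 example can be made concrete with a little probability arithmetic. Under the simplifying (and hypothetical) assumption that the AI and the human err independently, trusting only the cases where the two agree is already noticeably more reliable than either alone:

```python
p_ai, p_human = 0.7, 0.7
both_right = p_ai * p_human                  # 0.49
both_wrong = (1 - p_ai) * (1 - p_human)      # 0.09
agree = both_right + both_wrong              # they agree 58% of the time
accuracy_when_agreeing = both_right / agree
print(round(agree, 2))                       # 0.58
print(round(accuracy_when_agreeing, 3))      # 0.845, better than 0.7
```

The hard part she alludes to is the remaining 42% of cases where the two disagree, which this arithmetic leaves unresolved.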

All these predictive models were built and deployed, and people didn't think enough about potential biases. I'm hopeful that we're going to have a future where these human-machine teams are making decisions that are better than either alone.

Abdoulaye Banire Diallo

Professor, director of the bioinformatics lab, University of Quebec at Montreal

Prediction for 2030: Machine-based forecasting will be regulated

When a farmer in Quebec decides whether or not to inseminate a cow, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I'm involved in projects that add a layer of genetic and genomic data to help forecasting, to help decision makers like the farmer have a full picture when they're thinking about replacing cows, improving management, resilience, and animal welfare.

With the emergence of machine learning and AI, what we're showing is that we can help tackle problems in a way that hasn't been done before. We are adapting it to the dairy sector, where we've shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas, such as plant health, we have only achieved 10% or 20% of our capacity to improve certain models.

Until now, AI and machine learning have been associated with domain expertise; they're not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me in trying to make those techniques more explainable, more transparent, and more auditable.

Go here to read the rest:
The professionals who predict the future for a living - MIT Technology Review

2020: The Year Of Peak New-Car? Disruption Is Fast Approaching – InsideEVs

In January 2020, Tony Seba gave the keynote speech at the North Carolina Department of Transportation 2020 Summit.

He titled his speech "The Future of Transportation."

It is a very good speech. If you have an hour to watch it, I recommend doing so. Or you can just read my summary and get the key points he made. This summary is not a transcript; I have quoted some of what he said, but mostly I've organized and paraphrased his words.

Tony begins by talking in general about technology disruption, starting with the adoption of automobiles. He showed a picture of the 1900 Easter parade in New York City: the road was filled with horse-drawn carriages and one automobile. He then showed a picture taken in New York City in 1913: the road was filled with automobiles and one horse-drawn carriage.

He gives the following definition:

"A disruption is essentially when there is a convergence of technologies that make a product or service possible, and that product, in turn, can help create new markets ... and at the same time destroy or radically transform existing industries."

Tony explains a few case studies of companies that have experienced disruptions. These disruptions were often glossed over by analysts. He cites AT&T and Nokia.

He asks, why do smart organizations fail to anticipate or lead in technology disruptions? He talks a little about his disruption framework, which he has created to examine disruptions. He asks, "can we anticipate, can we forecast, more or less, disruptions to come?"

Tony takes a dive into technology cost curves. As production expands, the costs of technologies come down. Technologies are not adopted linearly but are always adopted in an S curve manner.
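
The S curve Seba invokes is the logistic function. A quick sketch with invented parameters (a tipping point at 2025, with adoption going from about 10% to about 90% in roughly a decade) shows why a linear extrapolation from the early, flat years badly underestimates what follows:

```python
import math

def adoption(year, midpoint=2025, steepness=0.45):
    """Logistic S curve: slow start, rapid middle, saturation."""
    return 1 / (1 + math.exp(-steepness * (year - midpoint)))

for year in range(2018, 2033, 2):
    print(year, f"{adoption(year):.0%}")
# Adoption sits below about 10% before the tipping point, then climbs
# to roughly 90% within about a decade: nothing like a straight line.
```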

He reflects on the speed of automobile adoption. It went from 0% to 95% nationally in 20 years, and from the tipping point to 80% in just 10 years. All the while, the US concurrently built the oil industry (distribution infrastructure) and a national road infrastructure, and fought WWI.

Tony touts that technology S curves are getting steeper. Adoption is happening faster and faster.

Analysts' projections are very often linear. Analysts often fail to consider that adoption is about systems: technology improvements working together. He cites forecasts made by the EIA (Energy Information Administration) as an example; the EIA has consistently failed with many of its projections.

Tony talks about technology convergence, a set of technologies that come together at the same time. He says disruptions happen from the outside. It's very rare that a company disrupts itself.

He briefly discusses ride-hailing (Uber and Lyft). The smartphone made ride-hailing possible, and in just eight years ride-hailing went from 0% to 20% of vehicle miles driven in San Francisco. He predicts that in 2020 we will see peak new-car sales globally, with car ownership beginning to decline thereafter.

Tony talks about the concept of "Market Trauma." Mainstream analysts say that EVs are "only 2% - 3% of the market; how much damage can they do?" They say "it's going to take 10 - 20 years before this takes over." But a technology can disrupt the economics of an incumbent well before it has 10 - 20% of the market.

Small changes can have a swift, dramatic impact on existing industries. In 2014, Tony wrote a book called "Clean Disruption of Energy & Transportation." The book focuses on batteries, electric vehicles, autonomous vehicles, on-demand transportation and solar.

In 2014, he made a predictive cost curve for lithium-ion batteries, predicting that they would cost $100/kWh by 2023. His cost curve has actually proven to be a little conservative. Tony gives battery storage as an example of "Market Trauma."

One example is the Tesla battery bank in Australia. The Tesla battery holds only 2% of the market's capacity (the ancillary services market) and yet has taken 55% market share. It has pushed down wholesale prices by 90%, and incumbent revenues have fallen by 90%. Natural gas peaker plants are being stranded. He also cites GE's mistaken choice to invest heavily in natural gas generation as another example of market trauma.

Tony transitions to talking about EVs. He talks about the gas savings EVs enjoy: EVs are much cheaper to operate and up to ten times cheaper to maintain. He shows a clip of the Rivian truck doing tank turns as an example of EVs being a better product.

EVs have a much longer life span, up to 500,000 miles, as much as 2.5 times longer than internal-combustion vehicles. This is of particular interest to fleet operators, and it makes total sense for fleet managers to go full EV.

He shows his cost curve for EVs. He predicts in his curve that basic 200 mile EVs will cost as little as $12,000 by 2025.

Tony predicts that next year is the EV tipping point, "for purely economic reasons." He says "it won't make any sense to buy a gas car." He predicts that every new car after 2025 will be electric. Tony cites Amazon's order of 100,000 delivery vans from Rivian, "for purely economic reasons."

Tony goes on to talk about autonomous technology. He features Waymo's autonomous ride-hailing. More than four dozen companies are investing in autonomous technology; no one is waiting around to create it. He says, "Think of EVs as computers on wheels." He predicts that only two autonomous-technology companies will survive.

Autonomous vehicles are safer than human drivers. Prediction: by 2030, we are going to be talking about taking away driver's licenses from humans. Insurance costs for human drivers will go up.

Tony brings up computing power. How quickly is the supercomputing cost curve improving? In the year 2000, the largest supercomputer on earth (at Sandia National Labs) cost 46 million dollars and could do 1 teraflops. The Apple iPhone released in 2019 can do 5 teraflops and costs about $600.
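
Those two data points imply an enormous drop in cost per unit of compute; a quick check of the arithmetic:

```python
cost_per_tflops_2000 = 46_000_000 / 1  # Sandia supercomputer: ~$46M per teraflops
cost_per_tflops_2019 = 600 / 5         # 2019 iPhone: ~$120 per teraflops
improvement = cost_per_tflops_2000 / cost_per_tflops_2019
print(f"{improvement:,.0f}x cheaper")  # roughly 383,333x in 19 years
```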

The improvements in AI are doubly exponential. Tony cites AlphaGo learning to play Go and beating the world champion, and then the next-generation AI learning to beat the previous generation within days rather than months or years.

The real big disruption is in the convergence of electric vehicles, ride-hailing and autonomous vehicles. This convergence will create Transportation as a Service (TaaS). "Everyone is going electric." Didi (the Chinese equivalent of Uber) expects to have 1 million electric vehicles on the road by 2020.

Tony predicts that when autonomous technology goes live (is approved), consumers will face the choice of buying a vehicle or using TaaS. TaaS will be up to ten times less expensive than vehicle ownership; Tony predicts that eventually TaaS will cost less than 18 cents per mile, and that by 2030, 95% of all vehicle miles driven will be driven by TaaS fleets.

By 2030, vehicle ownership will be 60% fleets and 40% personal; however, most of the miles will be driven by TaaS fleets. People will save, on average, $5,600 a year. The total US vehicle fleet will shrink by 70%: there will be fewer cars, and those fewer cars will do most of the driving miles.

By 2030 TaaS will save the economy 1 trillion dollars per year, and US disposable income will increase by 1 trillion dollars per year. The cost of travel will be only 5 to 10 cents per mile. This will have economic, social, health, labor and other implications.

Tony predicts that oil demand will peak this year or next (2021). After this, the price of oil will eventually fall to $25 per barrel.

He talks about parking lots: 80% of parking will become obsolete. With a drastically reduced need for parking lots, that space can be repurposed for other uses.

Tony says this disruption is not just about transportation; everything is changing. Now is the time to imagine what type of city we want in 10 years. He says it is as if we are in 1900: we are on the cusp of the deepest, fastest, most consequential disruption in 100 years, and perhaps ever.

From the comments on the video, Tony has been saying these things for a while. It is of note, though, how close he has come to the mark.

I question the 10 cents per mile for ride-hailing and the 70% reduction in automobile ownership. For ride-hailing to be that cheap, electricity would have to be very cheap and the cost of the autonomous vehicles extremely low. If an autonomous vehicle were available for $20,000 and could go 1 million miles with minimal maintenance, the amortized cost could be 2-3 cents per mile. Add electricity, at least 4 cents a mile, plus the ride-hailing service's cut, and the total is going to be over 10 cents a mile.
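That back-of-the-envelope estimate can be written out explicitly. Every input here is an assumption from the paragraph above, and the 40% platform take is my own placeholder for the ride-hailing service's cut:

```python
# Sketch of the cost-per-mile skepticism above; all inputs are assumptions.
vehicle_price = 20_000            # hypothetical million-mile autonomous EV, $
vehicle_life_miles = 1_000_000
maintenance_per_mile = 0.005      # "minimal maintenance", assumed
electricity_per_mile = 0.04       # $/mile, per the text

amortized = vehicle_price / vehicle_life_miles   # $0.02/mile
operating = amortized + maintenance_per_mile + electricity_per_mile  # ~$0.065
platform_take = 0.40              # assumed ride-hailing service margin
fare_per_mile = operating / (1 - platform_take)
print(f"operating: ${operating:.3f}/mile, fare: ${fare_per_mile:.3f}/mile")
```

Even with these generous inputs, the fare lands just above the 10-cents-per-mile mark, which is the point of the objection.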

Besides, if I could buy a million-mile EV for $20,000, why wouldn't I? It could be the last car I'd ever have to buy. The low-cost EV that makes the cheap ride-hailing possible also makes cheap automobile ownership possible, a bit of a paradox (sounds like the topic for another article). I guess we have to wait only a few years to see if Tony Seba is correct.

The Future of Transportation, Tony Seba Keynote Speaker at the 2020 NC DOT Summit http://www.youtube.com/watch?v=y916mxoio0E

View original post here:
2020: The Year Of Peak New-Car? Disruption Is Fast Approaching - InsideEVs

Why The Race For AI Dominance Is More Global Than You Think – Forbes


When people hear about the race for Artificial Intelligence (AI) dominance, they often think that the main competition is between the US and China. After all, the US and China have most of the largest and most well-funded AI companies on the planet, and the pace of funding, company growth, and adoption doesn't seem to be slowing anytime soon. However, if you look closely, you'll see that many other countries have a stake in the AI race, and indeed, some countries have AI efforts, funding, technologies, and intellectual property that make them serious contenders in the jostling for AI dominance. In fact, according to a recent report from analyst firm Cognilytica, France, Israel, the United Kingdom, and the United States are all equally strong when it comes to AI, with China, Canada, Germany, Japan, and South Korea equally close in their AI strategic strength. (Disclosure: I'm a principal analyst with Cognilytica.)

The Current Leaders in AI Funding and Dominance: US and China

AI startups are raising more money than ever. AI-focused companies raised $12 billion in 2017 alone, more than doubling venture funding over the previous year. Most of this funding is concentrated in US and Chinese companies, but the source of those funds is much more international. Softbank, based in Japan, has amassed a $100 billion investment fund with many international investors, including Saudi Arabia's sovereign investment fund and other global sources of capital. While US companies have put up significant investment rounds with the power of Silicon Valley's VC funds, China now has the most valuable AI startup, SenseTime, which raised over $1.2 billion with a rumored additional $1 billion raise on the way.

However, what makes AI as a technology sector different from previous major waves of investment is that AI is seen as a strategic technology by many governments. In 2017 China released a three-step program outlining its goal to become a world leader in AI by 2030. The government aims to make the AI industry worth about $150 billion and is pushing for greater use of AI in a number of areas such as the military and smart cities. Furthermore, the Chinese government has made big bets, including a planned $2.1 billion AI-focused technology research park. And in 2019 The Beijing AI Principles were released by a multistakeholder coalition including the Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology of the Chinese Academy of Sciences, and an AI industrial league involving firms like Baidu, Alibaba and Tencent.

In addition, the Chinese technology ecosystem has developed into a powerhouse in its own right. China has many multi-billion-dollar tech giants, including Alibaba, Baidu, Tencent, and Huawei Technologies, each heavily investing in AI. Chinese companies also work more closely with the Chinese government, and laws in China are the most relaxed with regard to customer privacy and the use of AI technologies such as facial recognition on citizens. China's government has already embraced facial recognition technology and quickly adopted it in everyday use. In most other countries, such as the US, privacy concerns prevent pervasive use of facial recognition technology, but such concerns or impediments to adoption don't exist in China.

The story of technology company creation and funding in the United States is already well known. Silicon Valley is both a region and a shorthand for the entire tech industry, showing how dominant the US has been for the past several decades in technology creation and adoption. Venture capital as an industry was invented and perfected in the US, and the result has been the creation of such enduring tech giants as Amazon, Apple, Facebook, Microsoft, Google, IBM and thousands of other technology firms big and small. Collectively, trillions of dollars have been invested in these firms by private and public sector investors to create the technology industry as we know it today. Certainly, none of that is going away anytime soon.

In addition, the US has an extremely well-developed and highly skilled labor pool, with academic powerhouses and research institutions that continue to push the boundaries of what is possible with AI. What is notable is that even in the US, the dominance of Silicon Valley as a specific San Francisco Bay Area region is starting to slip. The New York City region has produced many large AI-focused technology firms; research in the Boston area centers on MIT and Harvard; Pittsburgh has Carnegie Mellon; the Washington, DC metro area has its legions of government-focused contractors and development shops; Southern California has an emerging tech ecosystem; Seattle is home to Amazon and Microsoft; and many more locations in the US are loosening the hold that Northern California has on the technology industry with respect to AI. And just outside the US, Canadian firms from Toronto, Montreal, and Vancouver are further eroding the dominance of Silicon Valley with respect to AI.

In 2018 the United States issued an Executive Order from the President naming AI the second-highest R&D priority, after the security of the American people, for the fiscal year 2020. Additionally, the U.S. Department of Defense announced it will invest up to $2 billion over the next five years towards the advancement of AI. In 2019 the United States launched the American AI Initiative, a strategy aimed at focusing federal government resources, and the federal government also launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. Once potentially seen as lackluster in comparison to China and other countries, the US government has made AI a real priority in recent years to keep up.

Countries With Significant Stakes in AI

As mentioned above, what makes the AI industry unique is that it is not actually a new thing, but rather has evolved over decades, even prior to the development of the modern digital computer. As a result, many technology developments, investments, and intellectual property exist outside the US and China. Countries that have been involved with AI since the early days are realizing the strategic nature of AI and doubling down on their efforts to retain a stake in the global AI share and maintain their relevance and importance.

Japan

Japan has long been a leader in the AI industry, in particular in its development and adoption of robotics. Japanese firms introduced concepts such as the 3 Ds (Ks) of robotics that we discussed in our research on cobots. Not only is their technology research excellence on par with anywhere in the world, but they also have the funding to back it up. As mentioned earlier, Japan-based Softbank is an investor powerhouse unrivaled in the venture capital industry.

Japan's government released its Artificial Intelligence Technology Strategy in March 2017. This strategy includes an Industrialization Roadmap and organizes the development of AI into three phases: the utilization and application of AI through 2020, the public's use of AI from 2025-2030, and lastly an ecosystem built by connecting multiple domains. The country's strategy focuses on R&D for AI, collaboration between industry, government, and academia to advance AI research, and addressing areas related to productivity, welfare and mobility.

However, it is important to note that while Japan continues to exhibit dominance in robotics and other AI fields, and has its Softbank powerhouse, many of the firms that Softbank invests in are not Japan-based, so much of that investment does not stay focused on Japan's own AI industry. In addition, while technology development there is advanced and rapidly progressing, and while Japan is known as a country that embraces technology, many Japanese companies have not been quick to embrace AI, and its use is largely limited to the financial sector and concentrated in the manufacturing industry. The country is also facing significant demographic pressure: an aging population is causing a shortage of available workers. On the one hand, the adoption of AI and robotic technologies is seen as a solution to the labor shortage and aging demographics; on the other hand, the lack of workforce will cause strategic problems for the creation of AI-dominant companies.

South Korea

South Korea's government is a significant investor in and strong supporter of local technology development, and AI is certainly no exception. The government recently announced plans to spend $2 billion by 2022 to strengthen its AI R&D capability, including creating at least six new AI schools by 2020, with plans to educate more than 5,000 new high-quality engineers in Korea in response to a shortage of AI engineers. The government also plans to fund large-scale AI projects related to medicine, national defense, and public safety, as well as starting an AI R&D challenge similar to those developed by the US Defense Advanced Research Projects Agency (DARPA). The government will also invest to support the creation and development of AI startups and businesses. This support includes the creation of an AI-oriented start-up incubator to support emerging AI businesses and funding for the creation of an AI semiconductor by 2029.

South Korea is home to many large tech companies such as Samsung, LG, and Hyundai, among others, and is known for its automotive, electronics, and semiconductor industries as well as its use of industrial robotics technology. It also famously hosted the match in which DeepMind's AlphaGo defeated Go world champion Lee Sedol (a Korean native). Clearly, you can't count South Korea out of any race for AI dominance. The only things significantly lacking are a well-developed venture capital ecosystem and a large number of startups. South Korea's AI efforts are almost entirely concentrated in the activities of the major technology incumbents and government activities.

United Kingdom

The United Kingdom is a clear leader in AI, and the government is financially supporting AI initiatives. In November 2017, the UK government announced £68 million of funding for research into AI and robotics projects aimed at improving safety in extreme environments, as well as four new research hubs to be created to help develop robotic technology to improve safety in off-shore wind and nuclear energy. It has a goal of about $1.3 billion in AI investment from both public and private funds over the coming years. As part of this plan, Global Brain, a Japan-based venture capital firm, plans to invest about $48 million in AI-focused UK-based tech startups as well as open a European headquarters in the United Kingdom. Canadian venture capital firm Chrysalix also plans to open a European headquarters in the UK as well as invest over $100 million in UK-based startups that specialize in AI and robotics. The University of Cambridge is installing a $13 million supercomputer and will give UK businesses access to it to help with AI-related projects.

The UK is of course also the home of Alan Turing, renowned forefather of computing, an early proponent of AI, and namesake of the Turing Test. The UK can also claim (in not such a great light) to be one of the precipitating factors of the first AI Winter, when the Lighthill Report was released in 1973, leading to significant declines in AI investment. As such, the UK has in the past exerted significant influence, positive and negative, on worldwide AI spending and adoption. To avoid future problems, the UK is looking to position itself as a world leader in ethical AI standards. The UK sees this as an opportunity to position itself as an AI leader through ethical AI, helping to create standards used by all. It knows it can't compete on AI funding and development with countries like the US and China but thinks it has a shot by taking an ethical-standards approach and leveraging its early status as a leader in AI development.

France

France's President Emmanuel Macron released a national strategy for artificial intelligence in early 2018. The country announced that over the next five years it will invest more than €1.5 billion in AI-related research and support for emerging startups in a bid to compete with the US, China, and others for AI dominance. The French strategy puts an emphasis on four specific target areas of AI: health, transportation (such as driverless cars), the environment, and defense/security. Some notable AI researchers and data scientists were educated in France, such as Facebook's head of AI, Yann LeCun. France wants to keep that talent in France instead of seeing it move to overseas companies.

Many companies, such as Samsung, Fujitsu, DeepMind, IBM and Microsoft, have announced plans to open offices in France for AI research. The French administration also wants to share new data sets with the public, making them easy to access and to build AI services upon. The caveat to receiving public funds is that research projects or companies financed with public money will have to share their data. Many European Union (EU) officials have expressed dismay at the way Facebook, Google, Microsoft, Amazon, and others have hoarded user data, and Macron and his administration are concerned about the black box of AI data and decision-making. France is also focused on addressing the ethical concerns around AI and on trying to create unbiased data sets, which is part of the reason for the open algorithms and data sets. While France's efforts are significant, they pale in total money and resources next to the efforts of other nations.

Germany

Germany is an industrial powerhouse, has long been known for great engineering capabilities, and Berlin is currently Europe's top AI talent hub. According to Atomico's 2017 State of European Tech report, Germany is most likely to become a leader in areas such as autonomous vehicles, robotics and quantum computing. In fact, almost half of all worldwide patents on autonomous driving come from German car companies or their suppliers, such as Bosch, Volkswagen, Audi and Porsche. These German companies had begun their autonomous vehicle development activities as early as 1986.

A new tech hub region in southern Germany, called Cyber Valley, is hoping to create new opportunities for collaboration between academics and businesses with a specific focus on AI. The new hub plans to focus on AI and robotics, make better use of research talent, and work collaboratively with companies such as Porsche, Daimler and Bosch. In addition to autonomous vehicles, Germany has an early lead in robotics, with one of the first cobots developed in Germany for use in manufacturing. Additionally, Germany's AI strategy was published in December 2018 in Nuremberg, and in 2019 the German government tasked a new Data Ethics Commission with producing guidelines for the development and use of AI.

Despite these intellectual property and early market leads, Germany has not invested at the same levels as other countries, and its technology firms are highly concentrated in the manufacturing, automotive, and industrial sectors, leaving other markets mostly untapped by AI capabilities. Furthermore, American automakers such as Ford and GM, along with Google's Waymo, Uber and other firms, are quickly catching up in the number of patents issued, threatening Germany's dominance in intellectual property in that area.

Russia

Russian president Vladimir Putin has stated that "artificial intelligence is the future, not only for Russia, but for all of humankind" and that "whichever country becomes the leader in this sphere will become the ruler of the world." This is one powerful statement. Russia has said that intelligent machines are vital to the future of its national security plans and, by 2025, it plans to make 30% of its military equipment robotic. The government also wants to standardize the development of artificial intelligence, focusing on image recognition, speech recognition, autonomous military systems, and information support for the weapons life-cycle. There is also a new Russian AI Association bringing academia and the private sector together. Additionally, President Putin approved the National Strategy for the Development of Artificial Intelligence (NSDAI) for the period until 2030 in October 2019.

Russia is still a world superpower in terms of military might and exerts significant influence in world markets, especially in the energy sector. Despite that, Russian investment in AI still lags that of other countries significantly, with only a reported $12M invested by the government in research efforts. While Russia has had significant input and efforts around AI research in the university setting, the country's industry lacks overall AI talent and companies working on AI-related initiatives. Many skilled Russian engineers leave the country to work at firms worldwide that are throwing lots of money at skilled talent. As such, the biggest application of AI in Russia is in physical and cyberwarfare situations, leveraging AI to enhance the capabilities of autonomous vehicles and information warfare. In this arena, Russia is certainly a country to be reckoned with regarding AI dominance.

Other AI Hotspots

In addition to the above, many countries see AI as a country-level strategic initiative, including Israel, India, Denmark, Sweden, Estonia, Finland, the Netherlands, Poland, Singapore, Malaysia, Australia, Italy, Canada, Taiwan, the United Arab Emirates (UAE), and other locations. Some of these countries have more financial than technical resources, or vice versa. The key is that each of these countries sees AI in a strategic light, and as such they've crafted a strategic approach to AI.

AI technologies have the ability to transform and influence the lives of many people. Not only will AI transform the way we work, interact with each other and travel between locations, but it also has an impact on weapons technology, modern warfare, and a country's cybersecurity. AI can also have a dramatic impact on the labor market, disrupting entire industries and creating whole new ones. As such, a focus on AI dominance can also help strengthen a country's economy, shift global leadership and power, and confer military advantages. While the race for AI domination might seem similar to the Space Race or aspects of the Cold War, in reality the AI market doesn't support a winner-take-all approach. Indeed, continued advancement in AI requires research and industry collaboration, continued research and development, and industry-wide thinking and solutions to problems. While there will no doubt be winners and losers in terms of overall investment and return, countries worldwide will reap the benefits of increased adoption and development of cognitive technologies.

Excerpt from:
Why The Race For AI Dominance Is More Global Than You Think - Forbes

AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet

Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, right, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.

The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.

Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.

Regarding hardware, Hinton went into an extended explanation of the technical aspects that constrain today's neural networks. The weights of a neural network, for example, have to be used hundreds of times, he pointed out, making frequent, temporary updates to the weights. He said the fact graphics processing units (GPUs) have limited memory for weights and have to constantly store and retrieve them in external DRAM is a limiting factor.

Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
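The "key/value" retrieval Hinton refers to can be sketched in a few lines of NumPy. This is a generic scaled dot-product soft-attention toy with invented data (orthonormal keys, made-up values), not the actual Transformer implementation:

```python
import numpy as np

def soft_attention(query, keys, values):
    # Compare the query against every stored key, then return a
    # softmax-weighted mixture of the corresponding values.
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

keys = np.eye(4)                         # four orthonormal "memory" keys
values = np.array([[1., 0., 0., 0.],
                   [0., 2., 0., 0.],
                   [0., 0., 3., 0.],
                   [0., 0., 0., 4.]])
query = 20.0 * keys[0]                   # a query that strongly matches key 0
out = soft_attention(query, keys, values)
# With a sharp match, the output is essentially values[0].
```

Every stored key/value pair must sit in memory for every lookup, which is why Hinton argues that larger on-chip memory would let these models grow.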

Also: Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws

LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.

"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.
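Bengio's sparse-computation point can be illustrated with a toy NumPy sketch (my own example, not a benchmark): a dense matrix-vector product pays for every entry even when most activations are zero, while restricting the product to the nonzero inputs gives the same answer with far less arithmetic.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(256, 256))           # a dense weight matrix
x = rng.normal(size=256)
x[rng.random(256) < 0.95] = 0.0           # ~95% of activations zeroed out

dense_ops = W.size                        # multiply-adds a dense matvec performs
nonzero = np.flatnonzero(x)
sparse_ops = W.shape[0] * len(nonzero)    # only columns with nonzero input matter

y_dense = W @ x
y_sparse = W[:, nonzero] @ x[nonzero]     # same result, far less arithmetic
print(f"dense: {dense_ops} multiply-adds, sparse: {sparse_ops}")
```

GPUs are built for the dense case; exploiting the sparse case efficiently is one of the things the scientists hope new hardware will enable.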

LeCun predicted the new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized an interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared to other energy-hogging technologies.

The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.

Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.

Also: Facebook's Yann LeCun says 'internal activity' proceeds on AI chips

Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."
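Taking Hinton's figures at face value, the gap between today's biggest nets and a single cubic centimeter of brain is roughly two orders of magnitude:

```python
# Order-of-magnitude comparison using the figures quoted in the article.
largest_net_params = 10e9       # "really big" nets: ~10 billion parameters
synapses_per_cm3 = 1e12         # Hinton: one trillion synapses per cubic cm
gap = synapses_per_cm3 / largest_net_params
print(f"A cubic centimeter of brain holds ~{gap:.0f}x more connections")
```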

As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained that people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used as a language-understanding benchmark. "Now we are doing pretty well at that, and they want to find something else" by which to judge machine learning, he said. "It's like trying to argue with a religious person, there's no way you can win."

But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as the "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."

LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."

Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize

"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward the cat, the cat will just sort of watch the ball bounce until it hits the cat in the face."

Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" to many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.

"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.

Hinton predicted last year's work by OpenAI in manipulating a Rubik's cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go computer.

LeCun concurred, saying that Facebook is running AI projects not because Facebook has an extreme interest in robotics, per se, but because it is seen as an "important substrate for advances in AI research."

It wasn't all gee-whiz; the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.

"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."

Bengio continued his role among the three of being much more outspoken on societal issues of AI, such as building ethical systems.

When LeCun was asked about the use of facial recognition algorithms, he noted technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."

Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake with Nvidia," he said. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said no.

"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."

Read more here:
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun - ZDNet

So Is an AI Winter Really Coming This Time? – Walter Bradley Center for Natural and Artificial Intelligence

AI has fallen from glorious summers into dismal winters before, and the temptation to predict another such tumble recurs naturally. So that is the question the BBC posed to AI researchers: are we on the cusp of an AI winter?

The '10s were arguably the hottest AI summer on record with tech giants repeatedly touting AI's abilities.

AI pioneer Yoshua Bengio, sometimes called one of the godfathers of AI, told the BBC that AI's abilities were somewhat overhyped in the '10s by certain companies with an interest in doing so.

There are signs, however, that the hype might be about to start cooling off.

I keep up with this kind of thing. The answer is: Yes, and no. AI did surge past milestones during the 2010s that it had not been expected to cross for many more years:

2011: IBM's Watson wins at Jeopardy! ("IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next," Tech Republic, September 9, 2013)

2012: Google unveils a deep learning system that recognizes images of cats

2015: Image recognition systems outperform humans in the ImageNet challenge

2016: AlphaGo defeats world Go champion Lee Sedol ("In Two Moves, AlphaGo and Lee Sedol Redefined the Future," Wired, March 16, 2016)

2018: Self-driving cars hit the road as Google's Waymo launches (a very limited) self-driving taxi service in Phoenix, Arizona

But other headlines during the period have been less heeded:

Despite High Hopes, Self-Driving Cars Are Way in the Future (2019)

The Next Hot Job: Pretending to Be a Robot (2019)

Boeings Sidelined Fuselage Robots: What Went Wrong? (2019)

Self-driving cars: Hype-filled decade ends on sobering note (2019)

Tesla driver killed in crash with Autopilot active, NHTSA investigating (2016)

Don't fall for these 3 myths about AI, machine learning (2018)

A Sobering Message About the Future at AI's Biggest Party (2019)

And so on.

So which is it? AI Winter or Robot Overlords? I suggest neither. And so do active researchers.

Gary Marcus, an AI researcher at New York University, said: "By the end of the decade there was a growing realisation that current techniques can only carry us so far."

He thinks the industry needs some real innovation to go further.

"There is a general feeling of plateau," said Verena Rieser, a professor in conversational AI at Edinburgh's Heriot-Watt University.

One AI researcher who wishes to remain anonymous said we're entering a period where we are especially sceptical about AGI.

Recent AI developments, notably those lumped under the rubric of "Deep Learning," have advanced the state of the art in machine learning. Let's not forget that prior efforts, such as the poorly named Expert Systems, faded because, well, they weren't expert at all. Deep Learning systems, as highly flexible pattern matchers, will endure.

What is not coming is the long-predicted AI Overlord, or anything that is even close to surpassing human intelligence. Like any other tool we build, AI has its place when it amplifies and augments our abilities.

Just as tractors and diggers have not led to legions of people who no longer use their arms, the latest advances in AI will not lead to human serfs cowering beneath an all-intelligent machine. If anything, AI will require more from us, not less, because how we choose to use these tools will make an increasingly stark difference between benefit and ruin.

As Samin Winiger, a former AI researcher at Google, says: "What we called AI or machine learning during the past 10-20 years will be seen as just yet another form of computation."

Machines are tools in the toolbox, not a replacement for minds. An AI winter would only come if we forgot that.

Here are some of Brendan Dixons earlier musings on the concept of an AI Winter:

Just a light frost? Or an AI winter? It's nice to be right once in a while. Check out the evidence for yourself.

and

AI Winter Is Coming: Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.

Follow this link:
So Is an AI Winter Really Coming This Time? - Walter Bradley Center for Natural and Artificial Intelligence