Category Archives: Alphago

Is there intelligence in artificial intelligence? – Vaughan Today

This article was republished from The Conversation France.

Nearly ten years ago, in 2012, the scientific world marveled at the feats of deep learning. Three years later, this technology enabled AlphaGo to defeat the champions of Go. Some took fright. Elon Musk, Stephen Hawking and Bill Gates worried about the approaching end of humanity, soon to be supplanted by an AI that would slip out of control.

Wasn't that a bit of an overreaction? That is exactly what the AI itself thinks. In an article it wrote in 2020 for The Guardian, GPT-3, the giant neural network with 175 billion parameters, stated:

"I'm here to convince you not to worry. Artificial intelligence will not destroy humans. Trust me."

At the same time, we know that the power of machines keeps increasing. Training a network like GPT-3 was, quite literally, out of reach just five years ago. It is impossible to know what its successors will be able to do in five, ten or twenty years. If today's neural networks can replace dermatologists, why wouldn't they end up replacing us all?

Let's take up the question again.

We immediately think of skills that involve our intuition or our creativity. No luck: AI claims to challenge us in those areas as well. As proof, paintings generated by software have sold for high prices, some fetching nearly half a million dollars. On the musical side, everyone will have their own opinion, but we can already recognize acceptable bluegrass or something close to Rachmaninoff in the imitations produced by MuseNet, created, like GPT-3, by OpenAI.

Will we soon have to submit to the inevitable dominance of AI? Before calling for revolt, let's try to see what we are dealing with. Artificial intelligence rests on several techniques, but its recent success is due to just one: neural networks, especially those used for deep learning. Yet a neural network is nothing more than a machine that associates. The deep network that made headlines in 2012 associated images (horse, boat, mushroom) with the corresponding words. Hardly enough to hail it as a genius.

However, this association mechanism has the somewhat miraculous property of being continuous. Present a horse the network has never seen, and it recognizes it as a horse. Add noise to the image, and it is barely disturbed. Why? Because the continuity of the process guarantees that if the input to the network changes a little, its output will change only a little as well. If the network, still hesitating, is then forced to give its best answer, that answer probably won't change: a horse remains a horse, even if it differs from the learned examples, even if the image is noisy.
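The continuity argument can be illustrated with a toy nearest-prototype classifier (the 2-D features and labels below are hypothetical stand-ins for image data, not anything from the article): a small perturbation of the input moves it only slightly in feature space, so the closest learned example, and hence the label, stays the same.

```python
import math

def nearest_label(x, examples):
    """Return the label of the learned example closest to input x."""
    return min(examples, key=lambda e: math.dist(x, e[0]))[1]

# Toy "learned" examples: hypothetical 2-D features standing in for pixels.
examples = [((1.0, 1.0), "horse"), ((5.0, 5.0), "boat"), ((1.0, 6.0), "mushroom")]

clean = (1.2, 0.9)
noisy = (1.4, 1.2)   # the same input with a little noise added
print(nearest_label(clean, examples), nearest_label(noisy, examples))  # horse horse
```

Because the distance function is continuous, the noisy input remains nearest to the same prototype, which is the geometric picture behind "a horse remains a horse".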

Good, but why do we call such associative behavior intelligent? The answer seems obvious: it can diagnose skin cancer, grant bank loans, keep a car on the road, detect pathologies in physiological signals, and so on. Thanks to their power of association, these networks acquire forms of expertise that take humans years of study. And when one of these skills, for example writing a newspaper article, seems to hold out for a while, it suffices to feed the machine more examples, as happened with GPT-3, for it to begin to produce convincing results.

Is that really what it means to be intelligent? No. This kind of performance is, at best, only a small facet of intelligence. What neural networks do resembles learning by rote. Not exactly, of course, since these networks continually fill in the gaps between the examples they have been shown. Let's say it is almost by heart. Human experts, be they doctors, pilots or Go players, often do nothing else when they decide reflexively, drawing on the vast stock of examples learned during their training. But humans have many other powers.

A neural network cannot learn to calculate. Associating operations such as 32 + 73 with their results has its limits. Networks can only reproduce the strategy of the dunce who tries to guess the result and sometimes happens to be right. Too hard to calculate? What about an elementary IQ test: continue the sequence 1223334444. Association by continuity is no help in seeing the structure, each number N repeated N times, and in continuing with five 5s. Still too difficult? Associative programs cannot even guess that an animal that died on Tuesday is not alive on Wednesday. Why? What are they missing?
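The structure the network fails to see, each number N repeated N times, is trivial to state as an explicit rule once grasped. A short sketch:

```python
def self_describing(n_terms):
    """Sequence where each number N appears N times: 1, 2, 2, 3, 3, 3, ..."""
    out, n = [], 1
    while len(out) < n_terms:
        out.extend([n] * n)   # append N copies of N
        n += 1
    return out[:n_terms]

# The first 15 terms continue 1223334444 with five 5s, as the text says.
print("".join(map(str, self_describing(15))))  # 122333444455555
```

The point of the example is not that this rule is hard, but that it is a symbolic rule discovered by inspection, not something interpolation between memorized examples will produce.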

Modeling in cognitive science has revealed the existence of several mechanisms, beyond association by continuity, that are all components of human intelligence. Because their expertise is entirely computed in advance, neural networks cannot reason about time, for instance to decide that the dead animal is still dead, or to understand the meaning of the sentence "he still isn't dead" and the strangeness of this other sentence: "he isn't always dead." Nor does their one-shot digestion of large amounts of data allow them to identify new structures that are obvious to us, like the groups of identical numbers in the sequence 1223334444. Their rote-memorization strategy is also blind to unprecedented anomalies.

Detecting anomalies is an interesting case, because it is often how we gauge the intelligence of others. A neural network will not see that a nose is missing from a face. By continuity, it will go on recognizing the person, or perhaps confuse them with someone else. But it has no way of realizing that the absence of a nose in the middle of a face constitutes an anomaly.

There are many other cognitive mechanisms that are inaccessible to neural networks, and researchers are seeking to automate them. These mechanisms involve operations carried out at processing time, whereas neural networks merely execute associations learned in advance.

With a decade of hindsight on deep learning, the informed public is beginning to see neural networks as a super-mechanism rather than as intelligence. For example, the press recently reported on the astonishing performance of the DALL-E program, which produces creative images from verbal descriptions (for example, the images DALL-E imagines from the phrase "avocado armchair", on the OpenAI site). We now hear far more measured judgments than the alarmed reactions that followed AlphaGo's victory: "It's absolutely amazing, but we must not forget that this is an artificial neural network, trained to accomplish a task; there is neither creativity nor any form of intelligence." (Fabienne Chauvière, France Inter, January 31, 2021)

No form of intelligence? Let's not go too far, but let's remain clear-eyed about the huge gulf that separates neural networks from what true AI would be.

Jean-Louis Dessalles is the author of Des intelligences très artificielles, published by Odile Jacob (2019). He is a lecturer at Institut Mines-Télécom (IMT).


One Thousand and One Talents: The Race for AI Dominance – Just Security


The March 2016 defeat of Go world champion Lee Sedol by AlphaGo, the artificially intelligent algorithm built by Alphabet's DeepMind, will be remembered as a crucial turning point in the U.S.-China relationship. Oxford's Future of Humanity Institute branded the event China's "Sputnik moment": a moment of realization among its political and military leaders that artificial intelligence (A.I.) could be China's key to achieving global hegemony and dominance over the United States.

Since then, China's government has plowed ahead in developing its A.I. capabilities, with President Xi Jinping calling for his country to become a "world leader"[1] as fast as possible. Given that A.I. technologies could contribute an estimated $112 billion to the Chinese economy by 2030,[2] it is no surprise that Beijing believes A.I. to be "a new focus of international competition."[3]

For the simple reason that China has focused on pragmatic, collaborative policies rather than restrictive, unilateral ones, it is currently on track to overtake the United States in the A.I. race. A.I.'s potentially devastating military applications make the A.I. race not just a struggle for economic dominance, but also a national security threat for whichever state loses the advantage.

While the Biden administration's readiness to boost research and development (R&D) spending and reverse the previous administration's assault on U.S. alliances is a promising first step towards meeting this technological challenge, much more is necessary to ensure that the United States' technological capabilities do not fall behind those of rising powers.

To hold the line, the United States must leverage its historic alliances with Europe, Australia, and Southeast Asia to pool R&D funding into a multilateral A.I. research group. By creating incentives for scientists across the globe to collaborate on U.S.-led A.I. development, the United States can ensure that its allies and partners maintain a technological edge over China long into the future.

A.I. Scare

In addition to its commercial benefits, the rapid development of A.I. will also advantage China by adding a potentially devastating tool to its cyberwar arsenal. Concerns about weaponized A.I. have recently been raised by the United Kingdom's Government Communications Headquarters. Its 2020 report claimed that military A.I. would facilitate the rapid adaptation of malware and require a speed of response far greater than human decision-making allows, thus making it difficult for countries to defend against with current software. The conclusion that many experts have drawn is that the threat of A.I. cyberattacks necessitates the development of defensive A.I. by countries at risk of being targeted.

This threat is not merely theoretical; indeed, China has repeatedly indicated its intention to leverage new technologies like A.I. for offensive purposes. From a 2010 hack of Google by a group with ties to China's People's Liberation Army to a suspected cyberattack on Australian political institutions in 2020, it is clear that China will not shy away from utilizing the military applications of its emerging technologies.

In fact, China's 2017 Next Generation Artificial Intelligence Development Plan made this explicit. The report called for enhancing A.I. civil-military integration by establishing lines of communication and coordination between research institutions, private companies, and the military. Given that any future A.I. cyberattacks could be aimed at U.S. allies and interests, it is vital that the United States prioritize the development of its own A.I. capabilities to defend against novel techniques.

Unfortunately, research shows that the United States is somewhat unprepared for incoming attacks. While China funneled an estimated $70 billion into A.I. in 2020 (up from $12 billion in 2017), the United States government devoted only $4.9 billion, a quarter of what was allocated to the Chinese port of Tianjin for A.I. development alone. It was encouraging to see the Trump administration unveil its American A.I. Initiative in response to China's 2017 plan, albeit with a 19-month delay, yet this was only a first step in the right direction. A multilateral strategy is also necessary to prevent China from overtaking the United States in a crucial sector which has the potential to tip the global balance of power.

The Xi Doctrine

The forward-leaning policies initiated by President Xi Jinping have led to many advancements, accelerating China's A.I. program and imperiling U.S. national security in the process. One of Xi's most effective initiatives has been the so-called thousand talents plan, which offers high salaries and tempting benefits to scientists and researchers who agree to work with China on emerging technologies.[4] The plan has been enormously successful: a CIA official estimated that as many as 10,000 scientists from around the world have participated.

Its potential to grant China a strategic edge over the United States and its allies has also led the U.S. Senate to label the program a threat to American interests. Concerns center on the risk that U.S.-based scientists participating in the plan could transfer research achievements from American laboratories to Chinese ones, thereby accelerating Chinese A.I. development at the United States' expense.

Instead of mitigating the issue, Trump-era policy responses exacerbated China's lead by focusing on increasing A.I. export restrictions. In an attempt to prevent the outflow of sensitive military technologies to China and other hostile states, the U.S. Department of Commerce imposed restrictions on the export of A.I. technologies. Far from giving the United States a competitive edge, the policy likely stymied A.I. investment by requiring businesses to obtain licenses, a requirement which elongates the export process and imposes high compliance costs on struggling startups. Proof of these policies' damaging effects came in 2017 when, for the first time ever, Chinese A.I. startups received a greater share of global venture funding than U.S. startups did.

The Washington Pact

In order to improve U.S. A.I. policy, it is vital that the Biden administration understand two points. First, greater R&D spending is necessary to ensure that the United States can keep up with China on A.I. For the most part, the new administration has embraced this: Biden's campaign reiterated former Google CEO Eric Schmidt's assertion that the United States must boost tech R&D because China is on track to surpass the U.S. in R&D. It even went on to claim that China's main reason for investing in new technologies was to overtake American technological primacy and dominate future industries.

Second, because American allies are themselves investing heavily in A.I., it is prudent to adopt multilateral solutions which leverage the United States' historic alliances as opposed to unilateral "America first" responses. For instance, Germany's "A.I. Made in Germany" plan has allocated €3 billion to A.I. research over the next five years, while France's "A.I. for Humanity" initiative has injected €1.5 billion into the sector. To balance against China's advancements, the United States should take advantage of these alliances and ensure that global investments go into developing A.I. capabilities across the broader liberal democratic sphere.

This second necessity does not appear to have received as much attention from the Biden administration so far. Despite its general recommitment to multilateralism through rejoining the Paris Climate Accord, reprioritizing NATO, and calling for a Summit for Democracy, the Biden administration has largely overlooked the idea of multilateral cooperation on A.I. research.

To match the Chinese technological challenge, the United States must establish research initiatives alongside its historic allies which will benefit U.S. A.I. development. This will have the effect of protecting U.S. national security long into the future by guaranteeing that the United States retains the edge over China in crucial A.I. innovations.

At the center of this policy should be an upgraded equivalent of China's thousand talents scheme that would be run as a joint initiative between America and its allies. The European Union, United Kingdom, Australia, and Japan's determination to invest heavily in A.I., paired with their historic ties to the United States, suggests potential for large-scale multilateral research collaboration led by the United States.

The Biden administration should therefore propose the foundation of a multilateral research program, call it "One Thousand and One Talents," with the aim of attracting the best A.I. specialists from around the globe. Participating governments would funnel their annual A.I. budgets into the scheme in order to fund research projects with important military and commercial applications. The program would ensure that salaries would be directly competitive with China's thousand talents program and that incentives would be put in place to make the Western alternative more attractive than the Chinese one. As with NATO, U.S. leadership would be justified by its status as the scheme's main benefactor.

The emphasis on multilateralism as a response to U.S.-Chinese competition should come as no surprise. As Princeton professor John Ikenberry writes, the key thing for U.S. leaders to remember when dealing with China is that it may be possible for China to overtake the United States alone, but it is much less likely that China will ever manage to overtake the Western order. It is no different with A.I.


The new technological challenges facing America call for a far-sighted and judicious foreign policy worthy of the world's greatest superpower. While China may have the advantages of unrestricted state investment and well-planned incentive programs, it lacks alliances that run as deep as the NATO friendships the United States has long depended on. To overcome current Chinese advancements in A.I., the United States must unite with its partners around the world in order to increase the talent, funding, and skill available to it.

The proposed One Thousand and One Talents research scheme would boost the United States' competitiveness vis-à-vis China by pooling the resources of some of the wealthiest and most technologically advanced nations into U.S.-led A.I. development. Given the inevitability of China's rise, multilateral cooperation with like-minded democracies is the only way of ensuring that the U.S. does not face an existential security threat in the future.

The Biden administration must rise to the challenge by uniting with U.S. allies to compete with China on A.I. It is too risky to go it alone.

Editor's Note: An earlier version of this essay received an honorable mention in New America's "Reshaping U.S. Security Policy for the COVID Era" essay competition.

[1] Quoted in Kai Strittmatter, We Have Been Harmonized: Life in China's Surveillance State, p. 165.

[2] Ibid., pp. 166-167.

[3] Quoted in ibid., pp. 166-167.

[4] Strittmatter, We Have Been Harmonized, p. 171.


PNYA Post Break Will Explore the Relationship Between Editors and Assistants – Creative Planet Network

In honor of Women's History Month, Post Break, Post New York Alliance (PNYA)'s free webinar series, will examine the way two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

By ArtisansPR Published: March 23, 2021

Free video conference slated for Thursday, March 25th at 4:00 p.m. EDT

NEW YORK CITY – A strong working relationship between an editor and her assistants is crucial to successfully completing films and television shows.

Agnès Challe-Grandits, editor of the upcoming Freeform series Single, Drunk Female, and her assistant, Tracy Nayer, will join Shelby Siegel, Emmy and ACE award winner for the HBO series The Jinx: The Life and Deaths of Robert Durst, and her assistant, JiYe Kim, to discuss collaboration, how they organize their projects and how editors and assistants support one another. The discussion will be moderated by Post Producer Claire Shanley.

The session is scheduled for Thursday, March 25th at 4:00pm EDT. Following the webinar, attendees will have an opportunity to join small, virtual breakout groups for discussion and networking.


Agnès Grandits has decades of experience as a film and television editor. Her current project is Single, Drunk Female, a new half-hour comedy for Freeform. Her previous television credits include P-Valley and Sweetbitter for STARZ, Divorce for HBO, Odd Mom Out for Bravo and The Breaks for VH1. She also worked for Showtime on The Affair and Nurse Jackie. In addition, she edited The Jim Gaffigan Show for TV Land, Gracepoint for Fox, an episode of the final season of Bored to Death for HBO, and 100 Centre Street, directed by Sidney Lumet, for A&E. Her credits with HBO also include Sex and the City and The Wire.

Tracy Nayer has been an Assistant Editor for more than ten years and has been assisting Agnès Grandits for five. She began her career in editorial finishing at a large post-production studio.

Shelby Siegel is an Emmy award-winning film and television editor who has worked in New York for more than 20 years. Her credits include Andrew Jarecki's Capturing the Friedmans and All Good Things, Jonathan Caouette's Tarnation, and Gary Hustwit's Helvetica and Urbanized. She won Emmy and ACE awards for HBO's acclaimed six-part series The Jinx: The Life and Deaths of Robert Durst. Most recently, she edited episodes of Quantico (ABC), High Maintenance (HBO) and The Deuce (HBO). She began her career working under some of the industry's top directors, including Paul Haggis (In the Valley of Elah), Mike Nichols (Charlie Wilson's War), and Ang Lee on his Oscar-winning films Crouching Tiger, Hidden Dragon and Brokeback Mountain. She also worked on the critically acclaimed series The Wire.

JiYe Kim began her career in experimental films, working with Anita Thacher and Barbara Hammer. Her first credit as an assistant editor came on AlphaGo (2017). Her most recent credits include High Maintenance, The Deuce, Her Smell and Share.


Claire Shanley is a Post Producer whose recent projects include The Plot Against America and The Deuce. Her background also includes post facility and technical management roles. She served as Managing Director at Sixteen19 and Technical Director at Broadway Video. She co-chairs the Board of Directors of the NYC LGBT Center and serves on the Advisory Board of NYWIFT (New York Women in Film & Television).

When: Thursday, March 25, 2021, 4:00pm EDT

Title: The E&A Team


Sound recordings of past Post Break sessions are available here:

Past Post Break sessions in video blog format are available here:

About Post New York Alliance (PNYA)

The Post New York Alliance (PNYA) is an association of film and television post-production facilities, labor unions and post professionals operating in New York State. The PNYA's objective is to create jobs by: 1) extending and improving the New York State Tax Incentive Program; 2) advancing the services the New York post-production industry provides; and 3) creating avenues for a diverse talent pool to enter the industry.


Diffblue’s First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers –


OXFORD, United Kingdom, March 22, 2021 (GLOBE NEWSWIRE) -- Diffblue, creators of the world's first AI-for-code solution that automates writing unit tests for Java, today announced that its free IntelliJ plugin, Diffblue Cover: Community Edition, is now available for creating unit tests for all of an organization's Java code, both open source and commercial.

Free for any individual user, the IntelliJ plugin is available here for immediate download. It supports IntelliJ versions 2020.2 and 2020.3. To date, Diffblue Cover: Community Edition has already automatically created nearly 150,000 Java unit tests.

Diffblue also offers a professional version for commercial customers who require premium support as well as indemnification and the ability to write tests for packages. In addition, Diffblue offers a CLI version of Diffblue Cover, ideal for team collaboration.

Diffblue's pioneering technology, developed by researchers from the University of Oxford, is based on reinforcement learning, the same machine learning strategy that powered AlphaGo, the software program from Alphabet subsidiary DeepMind that beat the world champion player of Go.

Diffblue Cover automates the burdensome task of writing Java unit tests, a task that takes up as much as 20 percent of Java developers' time. Diffblue Cover creates Java tests at speeds 10x-100x faster than humans, tests that are also easy for developers to understand, and it automatically maintains the tests as the code evolves, even on applications with tens of millions of lines of code. Most unit test generators create boilerplate code for tests rather than tests that compile and run. These tools guess the inputs that can be used as a starting point, but developers have to finish them to get functioning tests. Diffblue Cover is uniquely able to create complete, human-readable unit tests that are ready to run immediately.
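Diffblue's output is Java, but the shape of the tests described here, concrete inputs, a call to the method under test, and an assertion on the observed result, can be sketched in Python. The `discount` function and both test cases below are hypothetical illustrations, not Diffblue output:

```python
import unittest

def discount(price, percent):
    """Method under test: apply a percentage discount, never going below zero."""
    return max(0.0, price * (1 - percent / 100))

class DiscountTest(unittest.TestCase):
    # The style a test generator aims for: concrete inputs, one behavior
    # per test, and an assertion on the result, ready to run as-is.
    def test_typical_discount(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_discount_over_100_percent_floors_at_zero(self):
        self.assertEqual(discount(50.0, 150), 0.0)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Tests of this kind compile (or here, import) and run immediately, which is the distinction the announcement draws against boilerplate-only generators.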

Diffblue Cover today supports Java, the most popular enterprise programming language in the Global 2000. The technology behind Diffblue Cover can also be extended to support other popular programming languages such as Python, JavaScript and C#.

About Diffblue

Diffblue is leading the automation of software creation through the power of AI. Founded by researchers from the University of Oxford, Diffblue Cover uses AI for code to write unit tests that help software teams and organizations efficiently improve their code coverage and quality and to ship software faster, more frequently and with fewer defects. With customers including AWS and Goldman Sachs, Diffblue is venture-backed by Goldman Sachs and Oxford Sciences Innovation. Follow us on Twitter: @diffblueHQ

Editorial contact, Diffblue: Lonn Johnston


We don’t need to go back to the office to be creative, we need AI –

Despite predictions of the death of the office in the 1990s, remote working has been slow to take off. Across the EU the share of the population working from home has hovered between four and five per cent for the past two decades.

However, Covid-19 looks likely to change all of that. In 2015, researchers at Stanford University found that remote work increased performance by 13 per cent due to fewer breaks, sick-days and a quieter working environment. And several anecdotal studies of remote working during the pandemic show that people working from home have become more productive.

In 2021, however, we will have to grapple with its downsides. While working from home brings efficiency gains in the short run, the danger is that it will imperil the innovation that drives business performance over the long run. Indeed, efficiency is the enemy of innovation, which is fundamentally about exploration. Having everyone working by themselves makes it hard for people to interact and explore new possibilities.

The solution to this dilemma will come from artificial intelligence (AI). The inherent trade-off between exploration and efficiency is well known to AI researchers. One question that those working in AI often have to grapple with is how often an algorithm should take actions that it hasn't tried, as against actions it has already tried that will usually lead to some reward.
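The trade-off described here is the classic exploration-exploitation problem, often handled with an epsilon-greedy rule: with small probability try something untested, otherwise repeat what has paid off. A minimal two-armed-bandit sketch (the payoff probabilities are toy values, not from the article):

```python
import random

def choose_arm(estimates, epsilon=0.1):
    """Epsilon-greedy: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

random.seed(0)
true_p = [0.3, 0.6]               # hidden payoff probability of each action
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(2000):
    arm = choose_arm(estimates)
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

# The genuinely better arm ends up chosen far more often, while the
# occasional exploratory pull is what let the agent discover it at all.
print(counts, [round(e, 2) for e in estimates])
```

Without the exploratory pulls the agent can lock in on the first action that ever paid off, which is the algorithmic version of efficiency crowding out innovation.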

Untried actions can yield spectacular results. For example, when the DeepMind computer program AlphaGo beat Go world champion Lee Sedol in 2016, it did so by exploring moves most human players had never seen before. Prior to move 37 in the second match against Sedol, AlphaGo had calculated that there was a one-in-ten-thousand chance that a human player would make that same move. And the adventurous gamble paid off.

Human innovation involves a similar process of exploration and, to facilitate innovation, companies must get their employees to collide. Before the pandemic, this was achieved through open-plan architecture that encouraged water-cooler moments of unplanned encounters. But, with many employees working from home, corporations will have to find different ways to facilitate these kinds of random interactions.

The prime reason why, until now, people have preferred to work together in person rather than online is that digital technologies have provided poor substitutes for the sporadic encounters that happen at work. But AI has the potential to change that. AI is already good at matching, whether it is finding the right film on Netflix or the right partner on a dating app.

In 2021, companies will be throwing resources at developing this kind of matching AI in the workplace. Based on employees' emails, Google searches and other data, AI algorithms will be able to deduce what people are working on and their current interests, and will act on that by making digital introductions that would otherwise not have happened. Employees will then evaluate the usefulness of each digital encounter, providing feedback for the AI to learn from.
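One simple way such matching can work is to build a word-frequency profile of what each person is working on and introduce the pair whose profiles are most similar. The names and document snippets below are hypothetical stand-ins for the email and search signals the article mentions:

```python
import math
from collections import Counter

def interest_profile(docs):
    """Bag-of-words profile from a person's recent documents."""
    return Counter(w.lower() for doc in docs for w in doc.split())

def cosine(a, b):
    """Cosine similarity between two word-count profiles."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

profiles = {
    "ana": interest_profile(["reinforcement learning pilot",
                             "offer personalization model"]),
    "ben": interest_profile(["quarterly budget review",
                             "travel expense policy"]),
    "caz": interest_profile(["personalization model training",
                             "learning rate tuning"]),
}

# Suggest the colleague whose current work overlaps most with Ana's.
best = max((p for p in profiles if p != "ana"),
           key=lambda p: cosine(profiles["ana"], profiles[p]))
print(best)  # caz
```

The feedback step the article describes would then adjust these scores: introductions rated useful push similar pairings up, the rest are downweighted.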

As more companies grapple with the problem of powering innovation at a time when many are forced to work from home, we will see more AI applications being developed to promote sporadic digital encounters in 2021. If we can get this right, as the economist Frances Cairncross observed back in the 1990s, the world in which millions of people trooped from their home to the office each morning, and reversed the procedure each evening, may finally strike us as bizarre.

Carl Benedikt Frey is director of the Future of Work at Oxford University's Martin School and author of The Technology Trap (Princeton)


Brinks Home Security Will Leverage AI to Drive Customer Experience – Security Sales & Integration

A partnership with startup OfferFit aims to unlock new insights into customer journey mapping with an AI-enabled, self-learning platform.

DALLAS – Brinks Home Security has embarked on what it terms an artificial intelligence (AI) transformation in partnership with OfferFit to innovate true 1-to-1 marketing personalization, according to an announcement.

Founded last year, OfferFit uses self-learning AI to personalize marketing offers down to the individual level. Self-learning AI allows companies to scale their marketing offers using real-time results driven by machine learning.

Self-learning AI, also called reinforcement learning, first came to national attention through DeepMind's AlphaGo program, which beat human Go champion Lee Sedol in 2016. While the technology has been used in academic research for years, commercial applications are just starting to be implemented.

Brinks Home Security CEO William Niles approached OfferFit earlier this year about using the AI platform to test customer marketing initiatives, according to the announcement. The pilot program used OfferFit's proprietary AI to personalize offers for each customer in the sample set.

At first, the AI performed no better than the control. However, within two weeks, the AI had reached two times the performance of the control population. By the end of the third week, it had reached four times the result of the control group, the announcement states.

Brinks Home Security is now looking to expand use cases to other marketing and customer experience campaigns with the goal of providing customers with relevant, personalized offers and solutions.

"The companies that flourish in the next decade will be the leaders in AI adoption," Niles says. "Brinks Home Security is partnering with OfferFit because we are on a mission to have the best business intelligence and marketing personalization in the industry."

Personalization is a key component in creating customers for life. The consumer electronics industry, in particular, has a huge opportunity to leverage this type of machine learning to provide customers with more meaningful company interactions, not only at the point of sale but elsewhere in the customer lifecycle.

"Our goal is to create customers for life by providing a premium customer experience," says Jay Autrey, chief customer officer, Brinks Home Security. "To achieve that, we must give each customer exactly the products and services they need to be safe and comfortable in their home. OfferFit lets us reach true one-to-one personalization."

The Brinks Home Security test allowed OfferFit to see its AI adapting through a real-world case. Both companies see opportunities to expand the partnership and its impact on the customer lifecycle.

"We know that AI is the future of marketing personalization, and pilot programs like the one that Brinks Home Security just completed demonstrate the value that machine learning can have for a business and its customers," comments OfferFit CEO George Khachatryan.

Read more from the original source:
Brinks Home Security Will Leverage AI to Drive Customer Experience - Security Sales & Integration

Dialogues with Global Young Scholars Held in Guangzhou – Invest Courier

On November 18, "Dialogues with Global Young Scholars" was held at South China University of Technology in Guangzhou, China. The theme of the conference was "Innovation, Youth and Future." Four young scholars who had made outstanding contributions to the field of quantum physics were invited, along with 35 other scholars under 35 from among the laureates of MIT Technology Review's Innovators Under 35.

Dai Lei, a research professor at the Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, introduced the key role of gut microbiota in health. He set forth the research breakthroughs made in areas including the design principles of synthetic organisms, synthetic yeast chromosomes, and genetically modified bacteria as a treatment for tumors, fields brimming with both opportunities and challenges. He also emphasized the significance of biotechnology in the boom of technological innovation.

He Ke, a professor from the Department of Physics at Tsinghua University, elaborated on the findings regarding the molecular beam epitaxial growth of low-dimensional topological quantum material systems, as well as its electron configuration and quantum effect. He pointed out that as the microscopic world is dominated by laws of quantum mechanics, it is possible for topological quantum computation to solve the issue of error correction in quantum computing.

Chen Yunji, a research professor at the Institute of Computing Technology of the Chinese Academy of Sciences, introduced the principles, design strategies and significance of deep learning processors. He believes that in the future, every computer may need a deep learning processor of its own; his vision is an AlphaGo you could carry in your pocket.

Radha Boya, a professor of nanoscience from the Department of Physics & Astronomy at the University of Manchester, delivered a presentation on the latest findings and developments in the field of graphene. Due to the COVID-19 pandemic, the lecture was given via a video recording. She believes that by processing graphene materials with filtration technologies to achieve the same effect as atomic-scale capillaries do, we can help solve the key issue of seawater desalination across the globe.

The event was held during the finals of the 6th China International "Internet+" College Students Innovation and Entrepreneurship Competition. Nearly 500 students and teachers, as well as representatives of the contestants and of South China University of Technology, attended the event. Live-streaming views of the event have exceeded 100,000.

Media Contact
Company Name: South China University of Technology
Contact Person: Tao Zhou
Email: Send Email
Country: China
Website:

Read more here:
Dialogues with Global Young Scholars Held in Guangzhou - Invest Courier

Artificial intelligence could help fund managers monetise data but will conservatism hold back the industry? – HedgeWeek

Technological advances are shaping the way asset management firms operate, as they look for ways to introduce artificial intelligence applications to monetise data, and improve automation from the front to the back office.

Back in 2016, SEI wrote a white paper entitled "The Upside of Disruption: Why the Future of Asset Management Depends on Innovation," in which it highlighted five trends shaping innovation: Watsonisation, Googlisation, Amazonisation, Uberisation and Twitterisation.

Witnessing the exponential changes occurring within and outside of the asset management industry as it relates to artificial intelligence, data management, platforms, social media and the like, SEI, in collaboration with ANZU Research, has updated these themes in its new series, The Exponential Pull of Innovation: asset management and the upside of disruption.

With regards to the first trend, Watsonisation, a lot has changed in terms of the power, sophistication and scale of artificial intelligence applications being used within asset management.

As SEI's new Watsonisation 2.0 white paper, the first of five in the series being released over the coming months, points out, successfully harnessing technology in a complex and heavily regulated industry like asset management is not easy. With new technologies and business models making change a constant, the financial services industry is being reorganized, re-engineered and reinvented before our eyes. There are now dedicated AI hedge fund managers such as Aidiyia Holdings, Cerebellum Capital and Numerai, all of whom are pushing the envelope when it comes to harnessing the power of AI in their trading models.

According to a report by Cerulli, AI-driven hedge funds produced cumulative returns of 34 per cent over a three-year period from 2016 to 2019, compared to 12 per cent for the global hedge fund industry. Moreover, Cerulli's research shows that European AI-led active equity funds grew at a faster rate than other active equity funds from January to April this year.

That trend will likely continue as asset managers tap into the myriad possibilities afforded by AI. As SEI notes, portfolio management teams are tapping into these predictive capabilities by working alongside quantitative specialists with the skills needed to train AI systems on large data sets.

Large managers such as Balyasny Asset Management are now actively embracing a quantamental strategy to mine alternative data sets and evolve their investment capabilities. To do this, they are hiring sector analysts: people with sector expertise and strong skills in programming languages such as Python, who act as a conduit between Balyasny's quantitative and fundamental analysts.

SEI argues that asset management is perfectly suited for the widespread adoption of AI.

They write: "Data is its lifeblood, and there is an abundance of historic and real-time data from a huge variety of sources (both public and private/internal). Traditional sources of structured data are always useful but ripe for more automated analytics."

Julien Messias is the co-founder of Quantology Capital Management, a Paris-based asset manager that focuses on behavioural analysis, using systematic processes and quantitative tools to generate alpha for the strategy. The aim is to apply a scientific methodology based on collective intelligence.

"Our only conviction is with the processes we've created rather than any personal beliefs on how we think the markets will perform. Although it is not possible to be 100 per cent systematic, we aim to be as systematic as possible in respect to how we run the investment strategy," says Messias.

Messias says the predictive capabilities of AI have been evolving over the last decade, "but we have really noticed an acceleration over the last three or four years." It's not as straightforward as the report would seem to suggest, though: "At least 50 per cent of the time is spent by analysts cleansing data. If you want to avoid the garbage-in, garbage-out scenario, you have to look carefully at the quality of data being used, no matter how sophisticated the AI is.

"It's not the most interesting job for a quant manager but it is definitely the most important one."

One of the hurdles to overcome in asset management, particularly large blue chip names with decades of investment pedigree, is the inherent conservatism that comes with capital preservation. Large institutions may be seduced by the transformative properties of AI technology but trying to convince the CFO or executive board that more should be done to embrace new technology can be a hard sell. And as SEI rightly points out, any information advantage gained can quickly evaporate, particularly in an environment populated by a growing number of AIs.

"We notice an increase in the use of alternative data to generate sentiment signals," says Messias, "but if you look at the performance of some hedge funds that claim to be fully AI, or who have incorporated AI into their investment models, it is not convincing. I have heard some large quant managers have had a tough year in 2020.

"The whole concept of AI in investment management has become very popular today and has become a marketing tool for some managers. Some managers don't fully understand how to use AI, however; they just claim to use it to sell their fund and make it sound attractive to investors.

"When it comes to applying AI, it is compulsory for us to understand exactly how each algorithm works."

This raises an interesting point in respect to future innovation in asset management. For fund managers to put their best foot forward, they will need to develop their own proprietary tools and processes to optimise the use of AI, and in so doing avoid the risk of jumping on the bandwagon and lacking credibility. Investors, take note: if a manager claims to be running AI tools, get them to explain exactly how and why they work.

Messias explains that at Quantology they create their own databases and that the aim is to make the investment strategy as autonomous as possible.

"Every day we run an automatic batch process. We flash the market, during which all of the algorithms run in order to gather data, which we store in our proprietary system. One example of the data sets we collect is earnings transcripts, when company management teams release guidance, etc.

"For the last four years, we've been collecting these transcripts and have built a deep database of rich textual data. Our algorithms apply various NLP techniques to elicit an understanding of the transcript data, based on key words," says Messias.
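As a toy illustration of the kind of keyword-driven transcript scoring described here (the word lists and scoring rule below are invented for the example; Quantology's actual NLP pipeline is proprietary and far richer), one might sketch:

```python
# Hypothetical keyword lists; a real system would learn these from data.
POSITIVE = {"beat", "record", "growth", "raised", "strong"}
NEGATIVE = {"miss", "decline", "weak", "lowered", "headwinds"}

def transcript_signal(text: str) -> float:
    """Score a transcript in [-1, 1]: +1 for all-positive wording, -1 for all-negative."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(transcript_signal("Revenue growth was strong and we raised guidance."))
```

A signal like this would then be one input among many to the trading algorithms, not a trade decision on its own.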

He points out, however, that training algorithms to analyse textual data is not as easy as analyzing quantitative data.

"As of today, the algorithms that are dedicated to that task are not efficient enough for us to exploit the data. In two or three years' time, however, we think there will be a lot of improvements and the value will not be placed on the algorithms, per se, but on the data," he suggests.

Investment research is a key area of AI application for asset managers to consider, as they seek to evolve over the coming years. Human beings are dogged by multiple behavioural biases that cloud our judgment and often lead to confirmation bias, especially when developing an investment thesis; it's the classic case of looking for data to fit the theory, rather than acknowledging when the theory is wrong.

AI systems suffer no such foibles. They are, as SEI's white paper explains, "better able to illuminate variables, probabilistically predict outcomes and suggest a sensible course of action."

Messias explains that at Quantology they run numerous trading algorithms that seek to exploit investment opportunities based on two primary pillars. The first is the behavioural biases that exist in the market: "We think our algorithms can detect these biases better than a human being can," states Messias.

The second pillar is collective intelligence; that is, the collective wisdom of the crowd.

"We have no idea where the market will go; this is not our job," asserts Messias. "Our job is to deliver alpha. The way markets react is always the right way. The market is the best example of collective intelligence; that's what our algorithms seek to better understand and translate into trading signals."

One of the truly exciting aspects to fund management over the next few years will be to see how AI systems evolve, as their machine learning capabilities enable them to become even smarter at detecting micro patterns in the markets.

Google's AlphaGo became the first computer program to defeat a professional Go player without handicaps in 2015 and went on to defeat the number-one-ranked player in the world. As SEI observes: "Analysts of AlphaGo's play, for example, noted that it played with a unique style that set it apart from human players, taking a relatively conservative approach punctuated with odd moves. This underscores the real power of AI. It is not just faster and more accurate. It is inclined to do things differently."

Logic would suggest that such novel, innovative moves (i.e. trades) could also become a more prominent feature of systematic fund management. Indeed, it is already happening.

Messias refers to Quantology's algorithms building a strong signal for Tesla last September, when the stock rallied on the release of the company's earnings report.

"The model sent us a signal that a human being would not have created based on a traditional fundamental way of thinking," he says.

Will we see more hedge funds launching with AI acting as the portfolio manager?

"I think that is the way investment management will eventually evolve. Newer firms are likely to test innovations and techniques and, if AI shows they can become more competitive than human-based trading, then yes, I think the future of investment will be more technology orientated," concludes Messias.

To read the SEI paper, click here for the US version, and here for the UK version.

Original post:
Artificial intelligence could help fund managers monetise data but will conservatism hold back the industry? - HedgeWeek

The term ‘ethical AI’ is finally starting to mean something – Report Door

Earlier this year, the independent research organisation of which I am the Director, the London-based Ada Lovelace Institute, hosted a panel at the world's largest AI conference, CogX, called "The Ethics Panel to End All Ethics Panels." The title referenced both a tongue-in-cheek effort at self-promotion and a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles.

And we were not alone. 2020 has seen the emergence of a new wave of ethical AI: one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.

Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol, prompting serious consideration of the likelihood that general AI was within reach. And algorithmically curated chaos on the world's duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year: Brexit and Trump's election.

In a panic over how to understand and prevent the harm that was so clearly to follow, policymakers and tech developers turned to philosophers and ethicists to develop codes and standards. These often recycled a subset of the same concepts and rarely moved beyond high-level guidance or contained the kind of specificity needed to speak to individual use cases and applications.

This first wave of the movement focused on ethics over law, neglected questions related to systemic injustice and control of infrastructures, and was unwilling to deal with what Michael Veale, Lecturer in Digital Rights and Regulation at University College London, calls "the question of problem framing": early ethical AI debates usually took it as a given that AI will be helpful in solving problems. These shortcomings left the movement open to the critique that it had been co-opted by the big tech companies as a means of evading greater regulatory intervention. And those who believed big tech companies were controlling the discourse around ethical AI saw the movement as "ethics washing." The flow of money from big tech into codification initiatives, civil society, and academia advocating for an ethics-based approach only underscored the legitimacy of these critiques.

At the same time, a second wave of ethical AI was emerging. It sought to promote the use of technical interventions to address ethical harms, particularly those related to fairness, bias and non-discrimination. The domain of "fair-ML" was born out of an admirable objective on the part of computer scientists to bake fairness metrics or hard constraints into AI models to moderate their outputs.

This focus on technical mechanisms for addressing questions of fairness, bias, and discrimination addressed the clear concerns about how AI and algorithmic systems were inaccurately and unfairly treating people of color or ethnic minorities. Two specific cases contributed important evidence to this argument. The first was the Gender Shades study, which established that facial recognition software deployed by Microsoft and IBM returned higher rates of false positives and false negatives for the faces of women and people of color. The second was the 2016 ProPublica investigation into the COMPAS sentencing algorithmic tool, which found that Black defendants were far more likely than White defendants to be incorrectly judged to be at a higher risk of recidivism, while White defendants were more likely than Black defendants to be incorrectly flagged as low risk.
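Disparities of the kind Gender Shades and the COMPAS investigation found are typically quantified as per-group error rates. The sketch below, using made-up labels and predictions rather than any real audit data, shows the basic computation:

```python
def rates_by_group(y_true, y_pred, groups):
    """Return {group: (false_positive_rate, false_negative_rate)} for binary labels."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        out[g] = (fp / negatives if negatives else 0.0,
                  fn / positives if positives else 0.0)
    return out

# Invented example: the model errs only on group "a".
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(rates_by_group(y_true, y_pred, groups))
```

A gap between groups on either rate is exactly the kind of evidence the two studies surfaced, though, as the following paragraphs argue, measuring the gap is not the same as addressing its causes.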

Second-wave ethical AI narrowed in on these questions of bias and fairness, and explored technical interventions to solve them. In doing so, however, it may have skewed and narrowed the discourse, moving it away from the root causes of bias and even exacerbating the position of people of color and ethnic minorities. As Julia Powles, Director of the Minderoo Tech and Policy Lab at the University of Western Australia, argued, alleviating the problems with dataset representativeness merely co-opts designers in perfecting vast instruments of surveillance and classification. When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Some also saw the fair-ML discourse as a form of co-option of socially conscious computer scientists by big tech companies. By framing ethical problems as narrow issues of fairness and accuracy, companies could equate expanded data collection with investing in ethical AI.

The efforts of tech companies to champion fairness-related codes illustrate this point: In January 2018, Microsoft published its ethical principles for AI, starting with fairness; in May 2018, Facebook announced a tool to search for bias called Fairness Flow; and in September 2018, IBM announced a tool called AI Fairness 360, designed to check for unwanted bias in datasets and machine learning models.

What was missing from second-wave ethical AI was an acknowledgement that technical systems are, in fact, sociotechnical systems: they cannot be understood outside of the social context in which they are deployed, and they cannot be optimised for societally beneficial and acceptable outcomes through technical tweaks alone. As Ruha Benjamin, Associate Professor of African American Studies at Princeton University, argued in her seminal text, Race After Technology: Abolitionist Tools for the New Jim Code, "the road to inequity is paved with technical fixes." The narrow focus on technical fairness is insufficient to help us grapple with all of the complex tradeoffs, opportunities, and risks of an AI-driven future; it confines us to thinking only about whether something works, but doesn't permit us to ask whether it should work. That is, it supports an approach that asks, "What can we do?" rather than "What should we do?"

On the eve of the new decade, MIT Technology Review's Karen Hao published an article entitled "In 2020, let's stop AI ethics-washing and actually do something." Weeks later, the AI ethics community ushered in 2020 clustered in conference rooms in Barcelona for the annual ACM Fairness, Accountability and Transparency conference. Among the many papers that had tongues wagging was one written by Elettra Bietti, Kennedy Sinclair Scholar Affiliate at the Berkman Klein Center for Internet and Society, which called for a move beyond the "ethics-washing" and "ethics-bashing" that had come to dominate the discipline. Those two pieces heralded a cascade of interventions that saw the community reorienting around a new way of talking about ethical AI, one defined by justice: social justice, racial justice, economic justice, and environmental justice. It has seen some eschew the term "ethical AI" in favor of "just AI."

As the wild and unpredicted events of 2020 have unfurled, third-wave ethical AI has begun to take hold alongside them, strengthened by the immense reckoning that the Black Lives Matter movement has catalysed. Third-wave ethical AI is less conceptual than first-wave ethical AI, and is interested in understanding applications and use cases. It is much more concerned with power, alive to vested interests, and preoccupied with structural issues, including the importance of decolonising AI. An article published in Nature in July 2020 by Pratyusha Kalluri, founder of the Radical AI Network, epitomizes the approach, arguing that when the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people.

What has this meant in practice? We have seen courts begin to grapple with, and political and private sector players admit to, the real power and potential of algorithmic systems. In the UK alone, the Court of Appeal found the use by police of facial recognition systems unlawful and called for a new legal framework; a government department ceased its use of AI for visa application sorting; the West Midlands police ethics advisory committee argued for the discontinuation of a violence-prediction tool; and high school students across the country protested after tens of thousands of school leavers had their marks downgraded by an algorithmic system used by the education regulator, Ofqual. New Zealand published an Algorithm Charter, and France's Etalab, a government task force for open data, data policy, and open government, has been working to map the algorithmic systems in use across public sector entities and to provide guidance.

The shift in gaze of ethical AI studies away from the technical towards the socio-technical has brought more issues into view, such as the anti-competitive practices of big tech companies, platform labor practices, parity in negotiating power in public sector procurement of predictive analytics, and the climate impact of training AI models. It has seen the Overton window contract in terms of what is reputationally acceptable from tech companies; after years of campaigning by researchers like Joy Buolamwini and Timnit Gebru, companies such as Amazon and IBM have finally adopted voluntary moratoria on their sales of facial recognition technology.

The COVID crisis has been instrumental, surfacing technical advancements that have helped to fix the power imbalances that exacerbate the risks of AI and algorithmic systems. The availability of the Google/Apple decentralised protocol for enabling exposure notification prevented dozens of governments from launching invasive digital contact tracing apps. At the same time, governments' response to the pandemic has inevitably catalysed new risks, as public health surveillance has segued into population surveillance, facial recognition systems have been enhanced to work around masks, and the threat of future pandemics is leveraged to justify social media analysis. The UK's attempt to operationalize a weak Ethics Advisory Board to oversee its failed attempt at launching a centralized contact-tracing app was the death knell for toothless ethical figureheads.

Research institutes, activists, and campaigners united by the third-wave approach to ethical AI continue to work to address these risks, with a focus on practical tools for accountability (we at the Ada Lovelace Institute, and others such as AI Now, are working on developing audit and assessment tools for AI; and the Omidyar Network has published its Ethical Explorer toolkit for developers and product managers), litigation, protest and campaigning for moratoria, and bans.

Researchers are interrogating what justice means in data-driven societies, and institutes such as Data & Society, the Data Justice Lab at Cardiff University, the JUST DATA Lab at Princeton, and the Global Data Justice project at the Tilburg Institute for Law, Technology, and Society in the Netherlands are churning out some of the most novel thinking. The Minderoo Foundation has just launched its new "future says" initiative with a $3.5 million grant, which aims to tackle lawlessness, empower workers, and reimagine the tech sector. The initiative will build on the critical contribution of tech workers themselves to the third wave of ethical AI, from AI Now co-founder Meredith Whittaker's organizing work at Google before her departure last year, to walkouts and strikes by Amazon logistics workers and Uber and Lyft drivers.

But the approach of third-wave ethical AI is by no means accepted across the tech sector yet, as evidenced by the recent acrimonious exchange between AI researchers Yann LeCun and Timnit Gebru about whether the harms of AI should be reduced to a focus on bias. Gebru not only reasserted well established arguments against a narrow focus on dataset bias but also made the case for a more inclusive community of AI scholarship.

Mobilized by social pressure, the boundaries of acceptability are shifting fast, and not a moment too soon. But even those of us within the ethical AI community have a long way to go. A case in point: although we'd programmed diverse speakers across the event, the "Ethics Panel to End All Ethics Panels" we hosted earlier this year failed to include a person of color, an omission for which we were rightly criticized and hugely regretful. It was a reminder that as long as the domain of AI ethics continues to platform certain types of research approaches, practitioners, and ethical perspectives to the exclusion of others, real change will elude us. Ethical AI cannot be defined only from the position of European and North American actors; we need to work concertedly to surface other perspectives, other ways of thinking about these issues, if we truly want to find a way to make data and AI work for people and societies across the world.

Carly Kind is a human rights lawyer, a privacy and data protection expert, and Director of the Ada Lovelace Institute.

Visit link:
The term 'ethical AI' is finally starting to mean something - Report Door

This A.I. makes up gibberish words and definitions that sound astonishingly real – Digital Trends

A sesquipedalian is a person who overuses uncommon words like "lameen" (a bishop's letter expressing a fault or reprimand) or "salvestate" (to transport car seats to the dining room) just for the sake of it. The first of those words is real. The second two aren't. But they totally should be. They're the invention of a new website called This Word Does Not Exist. Powered by machine learning, it conjures up entirely new words never before seen or used, and even generates a halfway convincing definition for them. It's all kinds of brilliant.

"In February, I quit my job as an engineering director at Instagram after spending seven intense years building their ranking algorithms like non-chronological feed," Thomas Dimson, creator of This Word Does Not Exist, told Digital Trends. "A friend and I were trying to brainstorm names for a company we could start together in the A.I. space. After [coming up with] some lame ones, I decided it was more appropriate to let A.I. name a company about A.I."

Then, as Dimson tells it, a global pandemic happened, and he found himself at home with lots of time on his hands to play around with his name-making algorithm. "Eventually I stumbled upon the Mac dictionary as a potential training set and [started] generating arbitrary words instead of just company names," he said.

If you've ever joked that someone who uses complex words in their daily life must have swallowed a dictionary, that's pretty much exactly what This Word Does Not Exist has done. The algorithm was trained from a dictionary file Dimson structured according to different parts of speech, definition, and example usage. The model refines OpenAI's controversial GPT-2 text generator, the much-hyped algorithm once called too dangerous to release to the public. Dimson's twist on it assigns probabilities to potential words based on which letters are likely to follow one another, until the word looks like a reasonably convincing dictionary entry. As a final step, it checks that the generated word isn't a real one by looking it up in the original training set.
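The letter-by-letter sampling and the final dictionary check can be sketched in miniature. The toy below uses a character-bigram model over a tiny invented word list, standing in for the site's actual GPT-2-based model; the training words and parameters are made up for the example.

```python
import random

# Tiny stand-in for the dictionary training set.
TRAINING_WORDS = ["salient", "sapient", "salvage", "sediment", "estate"]

def build_bigrams(words):
    """Map each character (with '^' start and '$' end markers) to observed successors."""
    table = {}
    for w in words:
        chars = "^" + w + "$"
        for a, b in zip(chars, chars[1:]):
            table.setdefault(a, []).append(b)
    return table

def invent_word(table, rng, max_len=10):
    """Sample characters until the end marker '$' or max_len is reached."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(table[ch])
        if ch == "$":
            break
        out.append(ch)
    word = "".join(out)
    # Final step, as on the real site: reject any word that already exists.
    return word if word not in TRAINING_WORDS else invent_word(table, rng, max_len)

table = build_bigrams(TRAINING_WORDS)
print(invent_word(table, random.Random(42)))
```

The real model samples subword tokens with learned probabilities and also generates the part of speech, definition, and example usage, but the reject-if-real check at the end mirrors the site's described behaviour.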

This Word Does Not Exist is just the latest in a series of [Insert object] Does Not Exist creations. Others range from non-existent Airbnb listings to fake people to computer-generated memes which nonetheless capture the oddball humor of real ones.

"People have a nervous curiosity toward what makes us human," Dimson said. "By looking at these machine-produced demos, we are better able to understand ourselves. I'm reminded of the fascination with Deep Blue beating Kasparov in 1997 or AlphaGo beating Lee Sedol in 2016."

See the original post:
This A.I. makes up gibberish words and definitions that sound astonishingly real - Digital Trends