Category Archives: Artificial Intelligence

It’s artificial intelligence to the rescue (and response and recovery) – GreenBiz

This article is adapted from GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

As global losses rack up from climate change-exacerbated natural disasters, from voracious wildfires to ferocious hurricanes, communities are scrambling to prepare (and to hedge their losses).

While information technologies such as machine learning and predictive analytics may not be able to prevent these catastrophes outright, they could help communities be better prepared to handle the aftermath. That's the spirit behind a unique collaboration between Chicago-based technology services company Exigent and the Schulich School of Business at York University in Toronto, one that aims to create a more cost-effective and efficient marketplace for disaster relief and emergency response services.

The idea is to help state and provincial governments collectively build a more centralized inventory of relief supplies and other humanitarian items based on the data from a particular wildfire or hurricane season.

Rather than buying supplies locally based on the predictions, something many small towns in fire-prone areas can ill afford, a community would buy "options" for these services in the marketplace being developed through this partnership. If the town ultimately doesn't need the items, it could "trade" them to another region that does have a need, either in the same state or another location. In effect, towns across a state or region or even country could arrange for protection without having to make that investment outright.

"Why are we not packing those crates in March, because they are going to go somewhere?" asked Exigent CEO David Holme, referring to the current system.

The most obvious reason is that it's expensive: Relief suppliers won't invest in making items unless they have certainty of orders. The intention of the Exigent-Schulich project is to move from a system that is 100 percent reactive, and consequently very slow, to one that is at least 50 percent predictive and can deliver help far more quickly, he said.

To do this, Exigent is working with AI students at Schulich to use information about a community's demographics, geology and topography, and existing infrastructure to predict what affected areas could need: how many first-aid kits to treat local citizens, how many cement bags to rebuild structures, or how many temporary housing units for residents and relief workers. All sorts of data is being consulted, from census information to historical weather data to forward-looking models for wind direction, temperature and humidity, noted Murat Kristal, program director for the Schulich master's program that is involved in the project.
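To make this concrete, here is a minimal sketch of the kind of supply-prediction model described above, written in Python with scikit-learn. The feature names, the synthetic data, and the choice of a random-forest regressor are all hypothetical illustrations, not the actual Exigent-Schulich system.

```python
# A hypothetical sketch: predict how many first-aid kits a community might
# need from tabular community features. Data and features are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # stand-in for historical disaster records

# Hypothetical per-community features: population, distance to wildland (km),
# mean summer temperature (C), humidity (%), infrastructure index (0-1).
X = np.column_stack([
    rng.integers(500, 50_000, n),
    rng.uniform(0, 30, n),
    rng.uniform(15, 40, n),
    rng.uniform(10, 60, n),
    rng.uniform(0, 1, n),
])
# Hypothetical target: drier communities need proportionally more kits.
y = X[:, 0] * 0.02 * (1 + (60 - X[:, 3]) / 60) + rng.normal(0, 50, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out communities:", round(model.score(X_test, y_test), 3))
```

A real system would train on actual historical records rather than synthetic numbers, but the workflow, community features in and predicted quantities out, would look broadly similar.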

Governments and decision makers are acting in a reactive way right now.

The initial focus of the joint Exigent-Schulich work is on gathering data related to wildfires in Canada and the United States. The prevalence of California's fires captures many headlines: the insurance losses from the Camp, Hill and Woolsey fires in November 2018 have topped $12 billion. Although it gets far less attention, Texas is also highly prone to wildfires, and 80 percent of them occur within two miles of a community. To the north, Canadian provinces such as Alberta and Ontario are also at risk: there are an average of 6,000 fires in Canada annually.

Exigent estimates that by deploying supplies to affected regions more quickly, the platform it's developing (a pilot version is due in June) might cut recovery costs by 20 percent and drive down premiums in at-risk regions. "The municipalities and insurers can collaboratively benefit," Holme said. "The more I've studied the idea, the more useful it seems."


Ethics And AI: Are We Ready For The Rise Of Artificial Intelligence? – The Roanoke Star


No job in the United States has seen more hiring growth in the last five years than artificial-intelligence specialist, a position dedicated to building AI systems and figuring out where to implement them.

But is that career growth happening at a faster rate than our ability to address the ethical issues involved when machines make decisions that impact our lives and possibly invade our privacy?

Maybe so, says Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior.

"Rules of the road are needed to ensure that artificial intelligence systems are designed in an ethical way and operate based on ethical principles," he says. "There are plenty of questions that need to be addressed. What are the right ways to use AI? How can AI be used to foster fairness, justice and transparency? What are the implications of using AI for productivity and performance evaluation?"

Those who take jobs in this growing field will need to play a pivotal role in helping to work out those ethical issues, he says, and already there is something of a global consensus about what the ethical principles of AI should be.


Mintz points to one recent workplace survey that examined the views of employers and employees in a number of countries with respect to AI ethics policies, potential misuse, liability, and regulation.

"More than half of the employers questioned said their companies do not currently have a written policy on the ethical use of AI or bots," Mintz says. Another 21 percent expressed a concern that companies could use AI in an unethical manner.

Progress is being made on some fronts, though.

In Australia, five major companies are involved in a trial run of eight principles developed as part of the government's AI Ethics Framework. The idea behind the principles is to ensure that AI systems benefit individuals, society and the environment; respect human rights; don't discriminate; and uphold privacy rights and data protection.

Mintz says the next step in the U.S. should be for the business community likewise to work with government agencies to identify ethical AI principles.

Unfortunately, he says, the process seems to be moving slowly and needs a nudge from technology companies, most of which are directly affected by the ethical use of AI.

Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior, has frequently commented on ethical issues in society and business ethics.


Artificial intelligence puts final notes on Beethoven’s ’10th Symphony’ – The Japan Times

BERLIN – A few notes scribbled in a notebook are all that German composer Ludwig van Beethoven left of his 10th Symphony before his death in 1827.

Now, a team of musicologists and programmers is racing to complete a version of the piece using artificial intelligence, ahead of the 250th anniversary of his birth next year.

"The progress has been impressive, even if the computer still has a lot to learn," said Christine Siegert, head of archives at Beethoven House in the composer's hometown of Bonn.

Siegert said she was convinced that Beethoven would have approved, since he too was an innovator in his time, citing his compositions for the panharmonicon, a type of organ that reproduces the sounds of wind and percussion instruments.

And she insisted the work would not affect his legacy because it would never be regarded as part of his oeuvre.

The final result of the project will be performed by a full orchestra on April 28 next year in Bonn, a centerpiece of celebrations for a composer who defined the romantic era of classical music.

Beethoven, Germany's most famous musical figure, is so loved in his homeland that a duty to prepare for the anniversary was written into the governing coalition's agreement in 2013.

The year of celebrations will begin on Monday, Dec. 16, believed to be his 249th birthday, with the opening of his home in Bonn as a museum after extensive renovation.

Beethoven began working on the Tenth Symphony alongside his Ninth, which includes the world-famous Ode To Joy.

But he quickly gave up on the Tenth, leaving only a few notes and drafts by the time he died at age 57.

In the project, machine-learning software has been fed all of Beethoven's work and is now composing possible continuations of the symphony in the composer's style.
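For readers curious what "fed all of Beethoven's work" means in practice, here is a toy sketch of the learn-a-style-then-continue idea, using nothing fancier than a first-order Markov chain over note names. The real project uses far richer deep learning models, and the note sequences below are invented placeholders, not Beethoven's sketches.

```python
# Toy "style learning": record which note tends to follow which in a corpus,
# then extend a fragment by sampling from those learned transitions.
import random
from collections import defaultdict

corpus = ["C4 E4 G4 E4 C4 G3 C4", "E4 G4 C5 G4 E4 C4 E4"]  # stand-in "works"
transitions = defaultdict(list)
for piece in corpus:
    notes = piece.split()
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)

def continue_fragment(fragment, length=8, seed=0):
    random.seed(seed)
    notes = fragment.split()
    for _ in range(length):
        followers = transitions.get(notes[-1])
        if not followers:
            break  # the model has never seen this note lead anywhere
        notes.append(random.choice(followers))
    return " ".join(notes)

print(continue_fragment("C4 E4"))  # a continuation "in the style" of the corpus
```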

Deutsche Telekom, which is sponsoring the project, hopes to use the findings to develop technology such as voice recognition.

The team said the first results a few months ago were seen as too mechanical and repetitive, but the latest AI compositions have been more promising.

Barry Cooper, a British composer and musicologist who himself wrote a hypothetical first movement for the Tenth Symphony in 1988, was more doubtful.

"I listened to a short excerpt that has been created. It did not sound remotely like a convincing reconstruction of what Beethoven intended," said Cooper, a professor at the University of Manchester and the author of several works on Beethoven. "There is, however, scope for improvement with further work."

Cooper warned that in any performance of Beethoven's music there is a risk of distorting his intentions, and that this is particularly the case for the Tenth Symphony because the composer had left only fragmentary material.

Similar AI experiments based on works by Bach, Mahler and Schubert have been less than impressive.

A project earlier this year to complete Schubert's Eighth Symphony was seen by some reviewers as being closer to an American film soundtrack than to the Austrian composer's work.


Are We Ready For The First Patent Filed By Artificial Intelligence? – Yahoo News

Patent practitioners and others in the world of intellectual property have expended significant time and money seeking to protect innovation in the field of artificial intelligence (AI). But what happens when an AI tries to patent something itself? Will such an event be possible? If so, who would be named as the inventor? And who would own the rights to the invention?

Given the pace at which machine learning is accelerating, these are the types of questions the patent system will soon have to answer. With computers driving cars, winning Go tournaments, performing surgeries, and much more, it's only a matter of time before an AI is itself capable of inventing patentable subject matter.

It's therefore no surprise that Andrei Iancu, the director of the United States Patent and Trademark Office (USPTO), solicited public comments last summer on the topic of patenting AI inventions. The goal, according to the USPTO, was:

"to engage with the innovation community and experts in AI to determine whether further guidance is needed to promote the predictability and reliability of patenting such inventions and to ensure that appropriate patent protection incentives are in place to encourage further innovation in and around this critical area."


Researchers call for harnessing, regulation of AI – INQUIRER.net


Artificial intelligence (AI) appears to be widening inequality, and its deployment should be subject to tough regulations and limits, especially for sensitive technologies such as facial recognition, a research report said Thursday.

The AI Now Institute, a New York University center studying the social implications of AI, said that as these technologies become widely deployed, the negative impacts are starting to emerge.

The 93-page report examined concerns being raised, from AI-enabled management of workers to algorithmic determinations of benefits and social services to surveillance and tracking of immigrants and underrepresented communities.

"What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don't," the researchers noted.

The researchers said AI systems are being deployed in areas such as healthcare, education, employment and criminal justice without appropriate safeguards or accountability structures in place.

The report said governments and businesses should halt use of facial recognition in sensitive social and political contexts until the risks are better understood, and that one subset, affect recognition, or the reading of emotions by computer technology, should be banned in light of doubts about whether it works.

"Emotion recognition should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school," the report stated.

It also called for tech workers to have the right to know what they are building and to contest unethical or harmful uses of their work.

The AI Now report said medical organizations using advanced technologies need to implement data protection policies and give people affirmative opportunities to withdraw from a study or treatment, and from research using their medical information.

More broadly, the researchers said the AI industry needs to make structural changes to ensure that algorithms are not reinforcing racism, prejudice or lack of diversity.

"The AI industry is strikingly homogeneous, due in large part to its treatment of women, people of color, gender minorities, and other underrepresented groups," the report said.

Efforts to regulate AI systems are underway, but are being outpaced by government adoption of AI systems to surveil and control, according to the report.

"Despite growing public concern and regulatory action, the rollout of facial recognition and other risky AI technologies has barely slowed down," the researchers said.

So-called smart city projects around the world are consolidating power over civic life in the hands of for-profit technology companies, putting them in charge of managing critical resources and information.


Researchers Slam Artificial Intelligence Software That Predicts Emotions – NDTV

A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions. The AI Now Institute at New York University said action against such software-driven "affect recognition" was its top priority because science doesn't justify the technology's use and there is still time to stop widespread adoption.

The group of professors and other researchers cited as a problematic example the company HireVue, which sells systems for remote video interviews for employers such as Hilton and Unilever. It offers AI to analyse facial movements, tone of voice and speech patterns, and doesn't disclose scores to the job candidates.

The nonprofit Electronic Privacy Information Center has filed a complaint about HireVue with the U.S. Federal Trade Commission, and AI Now has criticised the company before.

HireVue said it had not seen the AI Now report and did not answer questions on the criticism or the complaint.

"Many job candidates have benefited from HireVue's technology to help remove the very significant human bias in the existing hiring process," said spokeswoman Kim Paone.

AI Now, in its fourth annual report on the effects of artificial intelligence tools, said job screening is one of many ways in which such software is used without accountability, and typically favours privileged groups.

The report cited a recent academic analysis of studies on how people interpret moods from facial expressions. That paper found that the previous scholarship showed such perceptions are unreliable for multiple reasons.

"How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation," wrote a team at Northeastern University and Massachusetts General Hospital.

Companies including Microsoft are marketing their ability to classify emotions using software, the study said. Microsoft did not respond to a request for comment Wednesday evening.

AI Now also criticised Amazon.com, which offers analysis on expressions of emotion through its Rekognition software. Amazon told Reuters that its technology only makes a determination on the physical appearance of someone's face and does not claim to show what a person is actually feeling.

In a conference call ahead of the report's release, AI Now founders Kate Crawford and Meredith Whittaker said that damaging uses of AI are multiplying despite broad consensus on ethical principles because there are no consequences for violating them.

Thomson Reuters 2019


The Ethical Dimension of Artificial Intelligence – The McGill International Review

Artificial intelligence (AI) is a rising field in the technology world that aims to teach machines how to learn, or think, for themselves. Often, when we think of AI we imagine the voice-automated system JARVIS from the Iron Man movies or the 2001 Steven Spielberg film A.I. Artificial Intelligence. In reality, AI looks quite different, and chances are you have already seen it.

Canada, surprisingly, is a global leader in AI. Montreal has the highest concentration of researchers and students studying AI in the world, while Toronto has the highest concentration of AI start-ups. According to Ashley Casovan, the executive director of non-profit AI Global and former Director of Data Architecture and Innovation for the Government of Canada, Canada frequently uses AI for everyday tasks. For instance, let's say you're trying to figure out how to file your taxes while you're on a train home. When you visit a webpage, a chatbot may pop up that will explain the process for you, while your Canadian Pacific Railway train uses its sensors to detect potential blockages on tracks and responds accordingly. Both of these technologies employ machine learning, a technique that trains computers using manually labelled data to respond to new information. The more the program responds to new information that is not labelled, the more it learns. The Canadian government is using this in increasingly innovative ways: using predictive analytics, Canadian scientists were able to identify Zika virus patterns to help mitigate the spread of the virus. In addition, Canadian health services are employing predictive analytics to aid in suicide prevention.
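As a concrete (and entirely hypothetical) illustration of the labelled-data training described above, the sketch below trains a tiny intent classifier of the sort a tax chatbot might use, then applies it to a new, unlabelled question. The intents and phrases are invented for illustration and are not any agency's actual system.

```python
# Supervised learning in miniature: fit on manually labelled questions,
# then classify new, unlabelled input. All examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_questions = [
    ("when is the filing deadline", "deadline"),
    ("what date are taxes due", "deadline"),
    ("how do I claim a tuition credit", "credits"),
    ("can I deduct moving expenses", "credits"),
    ("where do I mail my return", "submission"),
    ("can I file my return online", "submission"),
]
texts, labels = zip(*labelled_questions)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A new, unlabelled question from a user on the train:
print(model.predict(["is there a credit for my train pass"]))
```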

However, the frequency with which the Canadian government employs AI is worrying for some. Fears of governments using AI to infringe on private freedoms are very real, as some countries, such as China, have begun to use facial recognition software for police surveillance. People are also rapidly losing confidence in social media platforms and Internet security, often citing the absence of human intervention in the decisions that algorithms make. Some 54% of North Americans express concern for their online privacy, and the non-consensual use of personal data by social media companies and federal governments does little to ease these fears. While Canadians are increasingly concerned about threats posed by internet companies, at least 59% also fear their personal information being used by their own government.

More and more people are worried about how their online information is used, perhaps in light of the Cambridge Analytica scandal that implicated Facebook in selling the personal data of millions of users. Furthermore, Russian interference in the 2016 US Presidential Election undoubtedly had an effect on the confidence many users have in their social media platforms of choice. It is therefore understandable that many people are hesitant to embrace AI and the idea of inhuman machines processing their information. However, there is little risk that computers will enslave us all. Rather, the prevalence of AI may damage society in other ways, such as the propagation of bias.

As AI systems are created by humans, there is often the possibility of an inherent bias in the program itself, either through the data on which it is trained or the application of the program. As most computer programmers are white men, a lack of diversity in AI may serve to reaffirm gender and racial bias in places where it's prevalent. Various organizations have been formed to address this problem, such as AI4ALL, which aims to encourage underrepresented demographics, such as people of colour and women, to pursue careers in AI.

As AI quickly integrates itself into society, the need for a comprehensive ethical code arises. According to Casovan, while Canada does have a government policy on responsible AI, it is difficult to enforce because implementation often needs to be case-specific. Furthermore, there is little authority restricting what companies can and cannot do, and evidently even less restricting the government itself. Casovan thus proposes a solution: the creation of ethical models that are agile, inclusive, collaborative, and open-sourced, to best provide companies with the resources to create ethical AI.

Montreal recently hosted the RE-WORK Deep Learning and Responsible AI Summit from October 24 to 25, a conference for AI-related industry professionals, from computer scientists to journalists to policymakers. During the summit, our team had the opportunity to interview AI professionals and groups, such as McGill AI, about their views on potential ethical concerns. This student association, started in 2017, aims to bridge the gap between undergraduate students and AI research by offering students opportunities to learn about AI and machine learning. With annual workshops, bootcamps, courses, lectures and more, students have the opportunity to work in groups on an idea related to AI in order to build functional prototypes.

"This is also a way for first-year undergraduate students to learn about machine learning and network with companies," Jenny Long, a representative of McGill AI, tells the MIR. Through its company crawls, McGill AI also makes contacts between students and companies that do research on AI, and professors who give their advice on specific topics. McGill AI also organizes workshops that target a general audience with its initiative Machine Learning 101, which aims to give a general feel for what machine learning is and to demystify it. Likewise, McGill AI is already reaching out to students for potential initiatives, like reading groups for ethics in AI.

"Ethical issues are clearly one of the trending topics at the moment," Long affirmed. "As a society, I don't think we see any concerns specifically for the bootcamps, but we do hope to make them more accessible in general."

Students interested in artificial intelligence, machine learning, and data science should also consider attending the Centre for Social and Cultural Data Science Expo on January 21, 2020. Hosted by McGill University in New Residence Hall, the Expo will host a variety of talks about the uses of data science in computer science, politics, and other fields.

With many opportunities for students to get involved in AI and machine learning, Canada is evidently working to maintain its status as a leader in AI. However, those interested in the exciting prospects that AI offers must also consider its ethical dimensions. AI is serving to reconcile the technological world with the political and social spheres, and can therefore not be chiefly concerned with technological progression. AI researchers must be concerned with the applications of such technology, and what it means for future generations.

Photo credit to Drew Graham, courtesy of Unsplash.

Edited by Alec Regino.


Conversing with chatbots: Artificial Intelligence research keeps it more 'human' – SFU News – Simon Fraser University News

The rapid advance of artificial intelligence (AI) begs a daunting question: will we ever achieve human-like behavior in computational systems? SFU professor Steve DiPaola and his research team are developing a solution called the AI Empathic Painter, using natural interaction methods to enable users to converse efficiently, while highlighting two major human qualities: empathy and creativity.

DiPaola's team showcased its work at a major AI conference, NeurIPS 2019, in Vancouver this past week. Their demo enables visitors to approach and converse with a 3D avatar chatbot, which creates an artistic portrait of the visitors inspired by their emotions and personality via the team's Empathy-based Affective Portrait Painter.

To achieve this, the researchers have combined their research in empathy-based modeling for AI character agents with machine learning models from the team's artistic creativity system.

With a host of gestural, motion and bio-sensor systems, the team's AI systems are designed to give coherent, empathy-based conversational answers via speech, expression and gesture.

"Using our special system, the AI avatar can, through conversation, evaluate the user's words, facial expression and voice stress to make an empathetic evaluation, just as a human would be able to about someone they are talking to," says DiPaola, a professor in the School of Interactive Arts and Technology (SIAT), whose team includes post-doctoral researcher Nilay Yalcin and PhD student Nouf Abukhodair.

Then the researchers take it a step further, using the Empathy-based Affective Portrait Painter to paint a unique portrait of the user, based on the empathetic evaluation. DiPaola's AI artwork has been showcased globally in such museums as New York's Museum of Modern Art and the Whitney Museum of American Art.

"The growing success of dialogue systems research makes conversational agents a perfect candidate for becoming a standard in human-computer interaction," explains Yalcin. "The naturalness of communicative acts provides a comfortable ground for the users to interact with. There have been many advances in using multiple communication channels in dialogue systems, simulating humaneness in an artificial agent."

DiPaola's and Yalcin's extensive research on empathy in AI is also addressing issues in a variety of industries, including e-health. In a collaborative project with the national AGE-WELL initiative, a helper AI conversational bot is being developed to assist the elderly in staying independent at home. Other applications are geared to the entertainment industry.

After premiering at the NeurIPS conference, the AI Empathic Painter system will travel to Europe to be showcased in Florence in May 2020.

Formerly of Stanford University, DiPaola leads SFU's Interactive Visualization Lab (iVizLab), which strives to make computational systems bend more to the human experience by incorporating biological, cognitive and behavioral knowledge models. The lab creates computational models of human ideals such as expression, emotion, behavior and creativity, typically for gaming, the sciences, arts and health fields.


What Veterans Affairs Aims to Accomplish Through Its Artificial Intelligence Institute – Nextgov

The Veterans Affairs Department recently launched a National Artificial Intelligence Institute to coordinate and advance strategic vet-focused research and development efforts to harness the budding technology.

"VA has a unique opportunity to be a leader in artificial intelligence," Secretary Robert Wilkie said in a statement. "VA's artificial intelligence institute will usher in new capabilities and opportunities that will improve health outcomes for our nation's heroes."

Home to America's largest integrated health care system, the VA trains more doctors and nurses than any other entity in the nation and also houses the largest genomic knowledge base linked to health care information in the world. Throughout 2019, the agency unveiled a variety of deliberate investments and projects to leverage artificial intelligence to better meet veterans' needs. For example, the agency and tech giant IBM launched an AI-powered mental fitness app to help veterans transitioning to civilian life earlier this year, and VA collaborated with DeepMind Health to develop an AI system that can forecast a life-threatening kidney disease before it appears.

The agency also appointed Dr. Gil Alterovitz as its first-ever national artificial intelligence director this summer. A Harvard Medical professor who has led national and international collaborative initiatives that used data and technology to innovate across the health care landscape, Alterovitz will serve as the NAII's director and oversee all of its efforts. He told Nextgov Monday that the new institute has been several months in the making and will garner some federal funding for its efforts. Alterovitz also confirmed that the institute will be housed directly at the VA.

"There is a special opportunity to work for veteran needs via AI by focusing on improving health and well-being [through research and development]," he said. "We hope to focus on veteran priorities in such work."

NAII will engage veterans and stakeholders across the health care sector to solicit and execute flagship AI research projects that emphasize topics like deep learning, explainable AI, and privacy-preserving AI. They'll aim to demonstrate "[the] size, scope, and magnitude of capabilities that deliver positive real-world outcomes for Veterans." According to agency insiders, one of the first tasks the NAII took on was surveying the existing use of AI by VA researchers; going forward, the institute will also boost AI-related research projects already underway by offering up fresh resources and forging new possibilities for collaboration.

"Medical centers are across the country and new insights can be best done working together," Alterovitz said.

The AI director also has extensive experience leading projects known as tech sprints, which essentially enable outside organizations to test out data in the VA format to develop tools and programs that can lead to new data-driven insights, without waiting long periods to establish partnership agreements. NAII insiders will lead AI tech sprints to accelerate innovation in the ecosystem and also aim to create an AI Tech Sprint handbook to help new teams orchestrate sprints to introduce health care solutions.

"We envision a future where AI can give us tools to serve Veterans in the best way possible, as they did for our nation," Alterovitz said.


The Bot Decade: How AI Took Over Our Lives in the 2010s – Popular Mechanics

Bots are a lot like humans: Some are cute. Some are ugly. Some are harmless. Some are menacing. Some are friendly. Some are annoying ... and a little racist. Bots serve their creators and society as helpers, spies, educators, servants, lab technicians, and artists. Sometimes, they save lives. Occasionally, they destroy them.

In the 2010s, automation got better, cheaper, and way less avoidable. It's still mysterious, but no longer foreign; the most Extremely Online among us interact with dozens of AIs throughout the day. That means driving directions are more reliable, instant translations are almost good enough, and everyone gets to be an adequate portrait photographer, all powered by artificial intelligence. On the other hand, each of us now sees a personalized version of the world that is curated by an AI to maximize engagement with the platform. And by now, everyone from fruit pickers to hedge fund managers has suffered through headlines about being replaced.

Humans and tech have always coexisted and coevolved, but this decade brought us closer together, and closer to the future, than ever. These days, you don't have to be an engineer to participate in AI projects; in fact, you have no choice but to help, as you're constantly offering your digital behavior to train AIs.

So here's how we changed our bots this decade, how they changed us, and where our strange relationship is going as we enter the 2020s.

All those little operational tweaks in our day come courtesy of a specific scientific approach to AI called machine learning, one of the most popular techniques for AI projects this decade. That's when AI is tasked not only with finding the answers to questions about data sets, but with finding the questions themselves; successful deep learning applications require vast amounts of data and the time and computational power to self-test over and over again.

Deep learning, a subset of machine learning, uses neural networks to extract its own rules and adjust them until it can return the right results; other machine learning techniques might use Bayesian networks, vector maps, or evolutionary algorithms to achieve the same goal.
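That "extract its own rules and adjust them" loop is easier to see in code than in prose. Below is a deliberately tiny sketch: a two-layer neural network learning the XOR function by gradient descent in plain numpy. Real deep learning differs mostly in scale, and the architecture and learning rate here are arbitrary choices for illustration.

```python
# A two-layer network adjusting its weights until it returns the right
# results for XOR. Purely illustrative; not any production system.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)   # hidden layer
    p = sigmoid(h @ W2 + b2)   # predicted probabilities
    # Backpropagate the error and nudge every weight slightly.
    dp = p - y                 # gradient of cross-entropy loss at the output
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
        param -= 0.1 * grad    # the "adjust its rules" step

print(p.round(2).ravel())  # approaches [0, 1, 1, 0]
```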

In January, Technology Review's Karen Hao released an exhaustive analysis of recent papers in AI that concluded that machine learning was one of the defining features of AI research this decade. "Machine learning has enabled near-human and even superhuman abilities in transcribing speech from voice, recognizing emotions from audio or video recordings, as well as forging handwriting or video," Hao wrote. "Domestic spying is now a lucrative application for AI technologies, thanks to this powerful new development."

Hao's report suggests that the age of deep learning is finally drawing to a close, but the next big thing may have already arrived. Reinforcement learning, like generative adversarial networks (GANs), pits neural nets against one another by having one evaluate the work of the other and distribute rewards and punishments accordingly, not unlike the way dogs and babies learn about the world.
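A bare-bones sketch of that adversarial setup might look like the following: two tiny neural nets, a generator and a discriminator, each graded against the other. This example is written with PyTorch and learns nothing grander than a one-dimensional Gaussian; it illustrates the GAN pattern the article names, not any production system.

```python
# Minimal GAN: D learns to tell real from fake, G learns to fool D.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    real = torch.randn(64, 1) * 2 + 5    # "real" data: N(5, 2)
    fake = G(torch.randn(64, 4))         # the generator's attempt

    # Discriminator: graded on spotting fakes ("distributes punishments").
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: graded on fooling the discriminator.
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 4))
print(samples.mean().item(), samples.std().item())  # should drift toward 5 and 2
```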

The future of AI could be in structured learning. Just as young humans are thought to learn their first languages by processing data input from fluent caretakers with their internal language grammar, computers can also be taught how to teach themselves a task, especially if the task is to imitate a human in some capacity.

This decade, artificial intelligence went from being employed chiefly as an academic subject or science fiction trope to an unobtrusive (though occasionally malicious) everyday companion. AIs have been around in some form since the 1500s or the 1980s, depending on your definition. The first search indexing algorithm was AltaVista in 1995, but it wasn't until 2010 that Google quietly introduced personalized search results for all customers and all searches. What was once background chatter from eager engineers has now become an inescapable part of daily life.

One function after another has been turned over to AI jurisdiction, with huge variations in efficacy and consumer response. The prevailing profit model for most of these consumer-facing applications, like social media platforms and map functions, is for users to trade their personal data for minor convenience upgrades, which are achieved through a combination of technical power, data access, and rapid worker disenfranchisement as increasingly complex service jobs are doubled up, automated away, or taken over by AI workers.

The Harvard social scientist Shoshana Zuboff explained the impact of these technologies on the economy with the term "surveillance capitalism." This new economic system, she wrote, "unilaterally claims human experience as free raw material for translation into behavioural data," in a bid to make profit from informed gambling based on predicted human behavior.

We're already using machine learning to make subjective decisions, even ones that have life-altering consequences. Medical applications are only some of the least controversial uses of artificial intelligence; by the end of the decade, AIs were locating stranded victims of Hurricane Maria, controlling the German power grid, and killing civilians in Pakistan.

The sheer scope of these AI-controlled decision systems is why automation has the potential to transform society on a structural level. In 2012, techno-sociologist Zeynep Tufekci pointed out the presence on the Obama reelection campaign of an unprecedented number of data analysts and social scientists, bringing the traditional confluence of marketing and politics into a new age.

Intelligence that relies on data from an unjust world suffers from the principle of "garbage in, garbage out," futurist Cory Doctorow observed in a recent blog post. Diverse perspectives on the design team would help, Doctorow wrote, but when it comes to certain technology, there might be no safe way to deploy it at all.

It doesn't help that data collection for image-based AI has so far taken advantage of the most vulnerable populations first. The Facial Recognition Verification Testing Program is the industry standard for testing the accuracy of facial recognition tech; passing the program is imperative for new FR startups seeking funding.

But the datasets of human faces that the program uses are sourced, according to a report from March, from images of U.S. visa applicants, arrested people who have since died, and children exploited by child pornography. The report found that the majority of data subjects were people who had been arrested on suspicion of criminal activity. None of the millions of faces in the program's data sets belonged to people who had consented to this use of their data.

State-level efforts to regulate AI finally emerged this decade, with some success. The European Union's General Data Protection Regulation (GDPR), enforceable from 2018, limits the legal uses of valuable AI training datasets by defining the rights of the data subject (read: us); the GDPR also prohibits the black box model for machine learning applications, requiring both transparency and accountability in how data are stored and used. At the end of the decade, Google showed the class how not to regulate when it built an external AI ethics panel and then scrapped it a week later, feigning shock at all the negative reception.

Even attempted regulation is a good sign. It means we're looking at AI for what it is: not a new life form that competes for resources, but a formidable weapon. Technological tools are most dangerous in the hands of malicious actors who already hold significant power; you can always hire more programmers. During the long campaign for the 2016 U.S. presidential election, the Putin-backed IRA Twitter botnet campaigns, essentially teams of semi-supervised bot accounts that spread disinformation on purpose and learn from real propaganda, infiltrated the very mechanics of American democracy.

Keeping up with AI capacities as they grow will be a massive undertaking. Things could still get much, much worse before they get better; authoritarian governments around the world have a tendency to use technology to further consolidate power and resist regulation.

Tech capabilities have long since proved too fast for traditional human lawmakers, but one hint of what the next decade might hold comes from AIs themselves, which are beginning to be deployed as weapons against the exact type of disinformation other AIs help to create and spread. There now exists, for example, a neural net devoted explicitly to the task of identifying neural net disinformation campaigns on Twitter. The neural net's name is Grover, and it's really good at this.
