Category Archives: Artificial Intelligence

VODA.ai and Mueller Form a Relationship to Support Decision-Making Through Artificial Intelligence – AiThority

VODA.ai is pleased to announce we are working with Mueller Water Products to provide the machine learning software that powers the PipeRank virtual condition assessment technology delivered by Echologics.

VODA.ai's machine learning engine is designed to deliver remarkably accurate predictions of future water pipe failures, both in the near term (the next twelve months) and over the longer term. Many utilities choose to replace pipes based on pipe material, age, and history of prior failures. These methods are significantly less accurate than the PipeRank™ technology, powered by VODA.ai.


"Current industry best practice leverages some data to identify trends and generate data-supported decisions for failure planning and capital deployment. The PipeRank technology combines pipe degradation factors with VODA.ai's machine learning model to enable utilities to prioritize every pipe segment by likelihood and consequence of failure," said Eric Stacey, Vice President and General Manager of Echologics.

The PipeRank™ technology identifies pipes likely to fail in the near future and assigns a business risk score to every segment. With this data, a condition assessment can then be performed using the Echologics ePulse technology to diagnose specific problems. This makes it easy for utilities to plan their operating and engineering programs by guiding actions and focusing resources on the highest-risk assets.


"The relationship with the Echologics team is exciting for us. Their industry leadership will introduce VODA.ai to more utilities. Via this relationship, we will work with the Echologics team to support smarter decision-making and continue to serve the water industry," said George Demosthenous, CEO at VODA.ai.



Impact of Artificial Intelligence on the current education system – Latest Digital Transformation Trends | Cloud News – Wire19

Education can be defined as a process in which teachers give, and students receive, systematic instruction. Learning can take place in either a formal or an informal setting. Most commonly, students receive education in a formal setting such as a high school, college, or university. Education is often considered a significant determinant of an individual's future success. Justifiably, there are various efforts to improve the current education systems in multiple countries worldwide.

Among the many methods employed by various countries to improve the education sector is the use of AI (artificial intelligence). AI systems are defined by the use of computers to accomplish tasks that previously required human intellect. AI utilizes algorithms that collect, classify, organize, and analyze information and draw conclusions from it, a process also called machine learning. As such, machine learning has the potential to bring about several benefits for various industries, including the education system.

Traditional education systems are fast changing to adapt to the technological advancements of today's world. This is especially true given the widespread access to various educational sources of information online. The implementation of educational AI systems has the potential to help students develop their skills and acquire more knowledge in multiple subjects. Therefore, as artificial intelligence continues to evolve, the hope is that it can help fill the gaps in the education system.

The implementation of AI can improve the efficiency and personalization of learning tasks, as well as streamline administrative tasks, benefits enjoyed by students and teachers alike. It also gives students more time with their teachers, which is where unique human qualities are required to supplement what AI struggles to provide.

AI has altered the way students learn: they no longer need to physically attend classes, since they have access to learning material via the internet. As previously mentioned, AI allows educators to spend more time with students by taking over some administrative tasks. But AI has done much more for education than that. Below are a few more effects of artificial intelligence on the education industry:

Education should be accessible to everyone regardless of geographical location. Learning through artificial intelligence has long been considered a deciding factor in eliminating geographical boundaries, as it facilitates flexible learning environments globally.

The availability of smart content is a hotly debated topic: AI systems can be utilized to offer quality content similar to what students buy from some of the best research paper writers online.

AI learning environments can adapt to a student's level of skill, mastery of coursework, and so on, thus identifying the challenges they face. Accordingly, they provide relevant materials and activities to boost the student's knowledge base in a specific subject.

You have probably noticed that most streaming services offer a list of shows you are likely to enjoy, an excellent example of AI personalizing around your favorite genres. Various other systems can be used in education to cater to the different needs of various students.
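As a loose illustration of that mechanism, here is a minimal item-similarity recommender in Python. The ratings matrix and show indices are invented for the example, and real streaming services use far richer models; this only sketches the basic idea of scoring unseen items by their similarity to items a user already rated highly.

```python
import numpy as np

# Invented toy data: rows = users, columns = shows, entries = ratings (0 = unseen).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
])

def cosine_sim(a, b):
    # Cosine similarity between two shows' rating columns.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, top_k=1):
    user = ratings[user_idx]
    scores = {}
    for j in range(ratings.shape[1]):
        if user[j] == 0:  # only score shows the user has not seen
            scores[j] = sum(
                cosine_sim(ratings[:, j], ratings[:, k]) * user[k]
                for k in range(ratings.shape[1]) if user[k] > 0
            )
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(0))  # indices of shows most likely to suit user 0
```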

Teachers often spend time on administrative duties such as marking exams, reading students' assignments, and planning the timetable, all of which can be completed by AI systems such as automated assignment processing and grading systems. Thus, teachers get to spend more time with their students.

AI usage, at the very least, reduces the chance of human error delaying specific processes in the learning environment. An excellent example of AI use in schools is the collection of data from various sources to create accurate forecasts that support effective planning for the future.

Besides, such systems offer opportunities for international students who speak different languages or have visual or hearing impairments; for instance, an artificial intelligence system can generate captions in real time during a presentation. As you can see, the education sector has a lot to gain from the implementation of AI into various systems.

AI systems bring about a world of opportunities to share information globally. Today there are quite a few artificial intelligence systems that help provide a conducive learning environment for all students. The use of AI in learning is quite promising and should be exploited for the benefits it has to offer.



Using Artificial Intelligence to Connect Vehicles and Traffic Infrastructure – Chattanooga Pulse

The University of Tennessee at Chattanooga, together with the University of Pittsburgh, the Georgia Institute of Technology, Oak Ridge National Laboratory, and the City of Chattanooga, has been awarded $1.89 million from the U.S. Department of Energy to create a new model for traffic intersections that reduces energy consumption.

UTC's Center for Urban Informatics and Progress will leverage its existing smart corridor to accommodate the new research.

"This project is a huge opportunity for us," CUIP Director and principal investigator Mina Sartipi said. "Collaborating with the City of Chattanooga and working with Georgia Tech, Pitt, and ORNL on a project that is future-oriented, novel, and full of potential is exciting. This work will contribute to the existing body of literature and lead the way for future research. Our existing infrastructure, the MLK Smart Corridor, will be the cornerstone for this work, as it gives us a precedent for applied research, research with real-world nuance."

In the DOE proposal, the research team noted that the U.S. transportation sector alone accounted for more than 69 percent of petroleum consumption and more than 37 percent of the country's CO2 emissions. An earlier 2012 National Traffic Signal Report Card found that inefficient traffic signals contribute to 295 million vehicle-hours of traffic delay, accounting for 5-10 percent of all traffic-related delays.

The project will leverage the capabilities of connected vehicles and infrastructure to optimize and manage traffic flow. The researchers note that while adaptive traffic control systems (ATCS) have been in use for half a century to improve mobility and traffic efficiency, they weren't designed to address fuel consumption and emissions.

Likewise, while automobile and vehicle standards have improved significantly, their potential for further improvement is hampered by inefficient traffic systems that increase idling time and stop-and-go traffic. Finding a solution is paramount, since the National Transportation Operations Coalition graded the state of the nation's traffic signals as D+.

"Our vehicles and phones have combined to make driving safer, while nascent ITS has improved traffic congestion in some cities. The next step in their evolution is the merging of these systems through AI," noted Aleksandar Stevanovic, associate professor of civil and environmental engineering at Pitt's Swanson School of Engineering and director of the Pittsburgh Intelligent Transportation Systems (PITTS) Lab.

"Creation of such a system, especially for dense urban corridors and sprawling exurbs, can greatly improve energy and sustainability impacts. This is critical as our transportation portfolio will continue to have a heavy reliance on gasoline-powered vehicles for some time.

The goal of the 3+ year project is to develop a dynamic feedback Ecological ATCS (Eco-ATCS) which reduces fuel consumption and greenhouse gases while maintaining a highly operable and safe transportation environment. The integration of AI will allow additional infrastructure enhancements including emergency vehicle preemption, transit signal priority, and pedestrian safety. The ultimate goal is to reduce corridor-level fuel consumption by 20 percent.

"Chattanooga is a city focused on embracing technology and innovation to create a safer and more efficient environment," Chattanooga Smart City Director Kevin Comstock said. "Being supported and affirmed by the Department of Energy is an enormous vote of confidence in the direction we're heading."

Georgia Tech team member Michael Hunter echoes that sentiment. "Through this project we have the potential to develop a pilot deployment that may be replicated throughout the country, helping realize the vast potential of these technologies," he said.

The team consists of Mina Sartipi, Osama Osman, Dalei Wu, and Yu Liang from UTC; Michael Hunter from the Georgia Institute of Technology; Aleksandar Stevanovic from the University of Pittsburgh; Kevin Comstock from the City of Chattanooga; and Derek Deter and Adian Cook from ORNL.

The Center for Urban Informatics and Progress is a smart city research center at the University of Tennessee at Chattanooga. CUIP is committed to applied smart city research that betters the lives of citizens every day. For more on the work we're doing and our mission, visit http://www.utc.edu/cuip.



Artificial Intelligence for Medical Evacuation in Great-Power Conflict – War on the Rocks

It is 4:45 a.m. in southern Afghanistan on a hot September day. A roadside improvised explosive device has just gone off, followed by the call, "Medic!" Spc. Chazray Clark stepped right on the bomb, losing both of his feet and his left forearm. Clark's fellow soldiers immediately provided medical care, hoping he might survive. After all, the unit's forward operating base was only 1.5 miles away, and it had a trained medical evacuation (medevac) team waiting to respond to an event of this nature.

A 9-line medevac request was submitted just moments after the explosion occurred, and Clark's commanding officer, Lt. Col. Mike Katona, had been assured that a medevac helicopter was en route to the secured pickup location. Unfortunately, that was not the case; the medevac team was still awaiting orders 34 minutes after the call for help was transmitted.

Although the casualty collection point was secure, the policy in place required an armed gunship to escort the medevac helicopter, and none were available. It wasn't until 5:24 a.m. that the medevac helicopter started to fly toward the pickup location, but it was too late. Clark arrived at the Kandahar Air Field medical center at 5:49 a.m. and was pronounced dead just moments later.

No one knows whether Clark would have survived his wounds had he received advanced surgical care earlier, but most would agree that his chances of survival would have been much higher. What went wrong? Why wasn't an armed escort available during this dire time? Are the current medevac policies outdated? If so, can artificial intelligence improve upon current practices?

With limited resources available, the U.S. military ought to carefully plan how medevac assets will be utilized prior to and during large-scale combat operations. How should resources be positioned now to maximize medevac effectiveness and efficiency? How can ground and air ambulances be dynamically repositioned throughout the course of an operation based on evolving, anticipated locations and intensities for medevac demand (i.e., casualties)? Moreover, how should those decisions be informed by operational restrictions and (natural and enemy-induced) risks to the use of ground and aerial routes as well as evacuation procedures at the casualty collection points? Finally, whenever a medevac request is received, which of the available assets should be dispatched, considering the anticipated future demands of a given region?

The military medevac enterprise is complex. As a result, any automation of location and dispatching decision-making requires accurate data, valid analytical techniques, and the deliberate integration and ethical use of both. Artificial intelligence and, more specifically, machine-learning techniques combined with traditional analytic methods from the field of operations research provide valuable tools to automate and optimize medevac location and dispatching procedures.

The U.S. military utilizes both ground and aerial assets to perform medevac missions. Rotary-wing air ambulances (i.e., HH-60M helicopters) are typically reserved for the most critically sick and/or wounded, for whom speed of evacuation and flexibility for routing directly to highly capable medical treatment facilities are essential to maximizing survivability. Ground ambulances cannot travel as far or as fast as air ambulances, but this limitation is offset by their greater proliferation throughout the force.

Machine Learning to Predict Medevac Demand

More than 4,500 U.S. military medevac requests were transmitted between 2001 and 2014 for casualties occurring in Afghanistan. The location, threat level, and severity of casualty events resulting in requests for medevac influence the demand for medevac assets. Indeed, it is likely that some regions may have higher demand than others, requiring more medevac assets when combat operations commence. A machine-learning model (e.g., neural networks, support vector regression, and/or random forest) can accurately predict demand for each combat region by considering relevant information, such as current mission plans, projected enemy locations, and previous casualty event data.
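As a rough sketch of that idea, the snippet below trains a random forest, one of the model families named above, to predict per-region medevac demand. Every feature, rate, and data point here is a synthetic placeholder, not real casualty data, and a real model would draw on the mission-plan and enemy-location inputs the article describes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features per region-day: planned missions, projected enemy
# presence (0-1), and prior-week casualty count.
X = np.column_stack([
    rng.integers(0, 10, n),   # planned missions
    rng.random(n),            # projected enemy presence
    rng.poisson(2, n),        # casualties in the prior week
])
# Synthetic target: medevac requests, loosely tied to the features.
y = rng.poisson(0.5 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```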

Effective machine-learning models require historical data that is representative of future events. Historical data for recent medevac operations can be obtained from significant activity reports from previous conflicts and the Medical Evacuation Proponency Division. For example, one study utilizes Operation Iraqi Freedom flight logs obtained from the Medical Evacuation Proponency Division to approximate the number of casualties at a given location to help identify the best allocation(s) of medical assets during steady-state combat operations. Open-source, unclassified data also exist (e.g., International Council on Security and Development, Defense Casualty Analysis System, and Data on Armed Conflict). Although historical data may not exist for every potential future operating environment, it can still be utilized to generalize casualty event characteristics. For example, one study models the spatial distribution of casualty cluster centers based on their proximity to main supply routes and/or rivers, where large populations are present. It utilizes Monte Carlo simulation to synthetically generate realistic data, which, in turn, can be leveraged by machine-learning practitioners to predict future demand.
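A bare-bones version of that Monte Carlo generation step might look like the following. The route geometry, cluster spreads, and event rates are all assumptions chosen for illustration, standing in for the supply-route and population inputs the cited study uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical main supply route: the line y = 0 on a 100 km grid.
def sample_cluster_centers(k):
    x = rng.uniform(0, 100, k)
    y = rng.normal(0, 5, k)  # clusters concentrate within ~5 km of the route
    return np.column_stack([x, y])

def sample_casualty_events(centers, events_per_cluster=20):
    events = []
    for cx, cy in centers:
        n = rng.poisson(events_per_cluster)
        events.append(np.column_stack([
            rng.normal(cx, 2, n),  # events scatter ~2 km around each center
            rng.normal(cy, 2, n),
        ]))
    return np.vstack(events)

events = sample_casualty_events(sample_cluster_centers(k=5))
print(events.shape)  # synthetic (x, y) casualty locations for model training
```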

Demand prediction via a machine-learning model is essential, but it is not enough to optimize medevac procedures. For example, consider a scenario wherein the majority of demand is projected to occur in two combat regions located on opposite sides of the area of operations. If there are not enough medevac resources to provide a timely response for all anticipated medevac demands in both of those regions, where should medevac assets be positioned? Alternatively, consider a scenario wherein one region needs the majority of medevac support at the beginning of an operation, but the anticipated demand shifts to another region (or multiple regions) later. Should assets be positioned to respond to demand from the first region even if it makes it impossible to reposition assets to respond to future demand from the other regions in a timely manner? How do these decisions impact combat operations in the long run?

Optimization Methods to Locate, Dynamically Relocate, and Dispatch Medevac Assets

How do current decisions impact future decisions? The decisions implemented throughout a combat operation are interdependent and should be made in conjunction with each other. More specifically, to create a feasible, realistic plan, it is necessary to make the initial medevac asset positioning decisions while considering the likely decisions to dynamically reposition assets over the duration of an operation. Moreover, every decision should account for total anticipated demand over all combat regions to ensure the limited resources are managed appropriately.

How many possible asset location options are there for a decision-maker to consider? As an example, suppose there are 20 dedicated ground and aerial medevac assets that need to be positioned across six different forward operating bases. Moreover, suppose decisions regarding the repositioning of these assets occur every day of a 14-day combat operation. On any day of the two-week operation, each of the 20 assets can be repositioned to one of the six operating bases. Without taking into consideration distances, availability, demand constraints, or multiple asset types, the number of positioning plans to consider is roughly 10^218! It is practically impossible for an individual (or even a team of people) to identify the optimal positioning policy without the benefit of insight provided by quantitative analyses.
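That figure is a simple back-of-envelope computation: each of the 20 assets independently picks one of six bases on each of the 14 days.

```python
# Count candidate positioning plans: 6 base choices per asset per day.
options = 6 ** (20 * 14)
print(f"{options:.2e}")  # ~7.6e+217, i.e., roughly 10^218 plans
```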

Whereas a machine-learning model can predict when and where demand is likely to occur, it does not tell decision-makers where to position limited resources. To overcome this, operations research techniques, more specifically the development and analysis of optimization models, can efficiently identify an optimal policy for dynamic asset location across the area of operations over the entire planning horizon. The objectives of an optimization model define the quantitatively measured goals that decision-makers seek to maximize and/or minimize. For example, decision-makers may seek to maximize demand coverage, minimize response time, minimize the cost of repositioning assets, and/or maximize the safety and security of medevac personnel. The decisions correspond to when, where, and how many of each type of asset are to be positioned across the forward operating bases for the planned combat operation, as well as how assets are dispatched in response to medevac requests. Accurate information about unit capabilities and dispositions is needed to inform an optimization model, including the number, type, and initial positioning of medevac assets as well as projected demand locations, threat levels, and injury severity levels. An optimization model also considers operational constraints to ensure a feasible solution is generated; these constraints include travel distances and times, fuel capacity, forward operating base capacity, and political considerations.
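To make the shape of such a model concrete, here is a deliberately tiny positioning sketch using the open-source PuLP library. It is not the authors' model: the bases, demand figures, travel times, and capacities are invented, and a single demand-weighted response-time objective stands in for the multi-objective trade-offs discussed below.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

assets = range(4)                          # four medevac assets
bases = ["FOB_A", "FOB_B", "FOB_C"]        # candidate staging bases
regions = ["R1", "R2"]
demand = {"R1": 5, "R2": 2}                # projected requests per region
minutes = {("FOB_A", "R1"): 10, ("FOB_A", "R2"): 40,
           ("FOB_B", "R1"): 25, ("FOB_B", "R2"): 15,
           ("FOB_C", "R1"): 35, ("FOB_C", "R2"): 30}
capacity = {"FOB_A": 2, "FOB_B": 2, "FOB_C": 2}

prob = LpProblem("medevac_positioning", LpMinimize)
x = LpVariable.dicts("at", [(a, b) for a in assets for b in bases], cat=LpBinary)

# Objective: demand-weighted response time implied by each asset's base.
prob += lpSum(demand[r] * minutes[b, r] * x[a, b]
              for a in assets for b in bases for r in regions)

for a in assets:        # each asset is stationed at exactly one base
    prob += lpSum(x[a, b] for b in bases) == 1
for b in bases:         # staging capacity at each base
    prob += lpSum(x[a, b] for a in assets) <= capacity[b]

prob.solve()
print({b: sum(int(x[a, b].value()) for a in assets) for b in bases})
```

A real model would add the relocation, fuel, and routing constraints described above and weight several objectives against one another.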

Medevac assets may need to be dynamically repositioned (i.e., relocated) across different staging facilities, especially as the disposition and intensity of demand change, despite the long-term and strategic nature of combat operations. For example, it may be necessary to reposition assets from forward operating bases near combat regions with lower projected demand to bases near regions with higher projected demand. Moreover, it is important to consider projected threat and severity levels when determining which types of assets to position. For example, it may be beneficial to position armed escorts closer to combat regions with higher projected threat levels. Similarly, air ambulances should be positioned closer to combat regions with higher projected severity levels (i.e., life-threatening events). Inappropriate positioning of assets may result in delayed response times, increased risks, and decreased casualty survivability rates. One way to determine the location of medevac assets is to develop an optimization model that simultaneously considers the following objectives: maximize demand coverage, minimize response time, and minimize the number of relocations, subject to force projection, logistical, and resource constraints. Trade-off analysis can be performed by assigning different weights (i.e., importance levels) to each objective considered. Given an optimal layout of medevac assets, another important decision that should be considered is how air ambulances will be dispatched in response to requests for service.

The U.S. military currently utilizes a closest-available dispatching policy to respond to incoming requests for service, which, as the name suggests, tasks the closest-available medevac unit to rapidly evacuate battlefield casualties from point of injury to a nearby trauma facility. In small-scale and/or low-intensity conflicts, this policy may be optimal. Unfortunately, this is not always the case, especially in large-scale, high-intensity conflicts. For example, suppose a non-life-threatening medevac request is submitted and only one air ambulance is available. Moreover, assume high-intensity operations are ongoing and life-threatening medevac requests are expected to occur in the near future. Is it better to task the air ambulance to service the current, non-life-threatening request, or should the air ambulance be reserved for a life-threatening request that is both expected and likely to occur in the near future?

Many researchers have explored scenarios in which the closest-available dispatching policy can be greatly improved upon by leveraging operations research techniques such as Markov decision processes and approximate dynamic programming. Dispatching decision-makers (i.e., dispatching authorities) should take into account a large number of uncertainties when deciding which medevac assets to utilize in response to requests for service. Utilizing approximate dynamic programming, military analysts can model large-scale, realistic scenarios and develop high-quality dispatching policies that take into account inherent uncertainties and important system characteristics. For example, one study shows that dispatching policies based on approximate dynamic programming can improve upon the closest-available dispatching policy by over 30 percent with regard to a lifesaving performance metric based on response time for a notional scenario in Syria.
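The following toy two-state Markov decision process, solved by value iteration, illustrates the reserve-or-dispatch trade-off posed above. It is not the cited study's model; the arrival probability, reward values, and discount factor are invented purely to show why always dispatching (the closest-available rule) can be suboptimal.

```python
P_URGENT = 0.4                   # assumed chance an urgent request arrives next period
R_ROUTINE, R_URGENT = 1.0, 10.0  # assumed relative lifesaving value of each service
GAMMA = 0.9                      # discount factor

V = {"free": 0.0, "busy": 0.0}
for _ in range(500):             # value iteration to (near) convergence
    q_dispatch = R_ROUTINE + GAMMA * V["busy"]
    q_reserve = (P_URGENT * (R_URGENT + GAMMA * V["busy"])
                 + (1 - P_URGENT) * GAMMA * V["free"])
    V = {"free": max(q_dispatch, q_reserve),
         "busy": GAMMA * V["free"]}  # a busy unit frees up next period

print("dispatch value:", round(q_dispatch, 2),
      "reserve value:", round(q_reserve, 2))
# With these numbers the optimal policy reserves the ambulance for the
# expected urgent request, unlike the closest-available rule.
```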

Ethical Application Requires a Decision-Maker in the Loop

Optimization models may offer valuable insights and actionable policies, but what should decision-makers do when unexpected events occur (e.g., air ambulances become non-mission capable) or new information is obtained (e.g., an unmanned aerial vehicle captures enemy activity in a new location)? It is not enough to create and implement optimization models. Rather, it is necessary to create and deliver a readily understood dashboard that presents information and recommended decisions, the latter of which are informed by both machine learning and operations research techniques. To yield greater value, such a dashboard should allow its users (i.e., decision-makers) to conduct what-if analysis to test, visualize, and understand the results and consequences of different policies for different scenarios. Such a dashboard is not a be-all and end-all tool. Rather, it is a means for humans to effectively leverage information and analyses to make better decisions.

The future of decision-making involves both artificial intelligence and human judgment. Whereas humans lack the power and speed that artificial intelligence can provide for data processing tasks, artificial intelligence lacks the emotional intelligence needed when making tough and ethical decisions. For example, a machine-learning model may be able to diagnose complex combat operations and recommend decisions to improve medevac system performance, but the judgment of a human being is necessary to address intangible criteria that may elude quantification and input as data.

Although the U.S. military medevac system has been highly effective and efficient in recent operations in Afghanistan, Iraq, and Syria, future operating environments may be vastly different from where the United States has been fighting over the past 20 years. Artificial intelligence and operations research techniques can combine to create effective decision-making tools that, in conjunction with human judgment, improve the medevac enterprise for large-scale combat operations, ultimately saving more lives.

The Way Forward

The Air Force Institute of Technology is currently examining a variety of medevac scenarios with different problem features to determine both the viability and benefit of incorporating the aforementioned artificial intelligence and operations research techniques within active medevac operations. Once a viable approach is developed, the next step is to obtain buy-in from senior military leaders. With a parallel, macroscopic-level focus, the Joint Artificial Intelligence Center, the Department of Defense's Artificial Intelligence Center of Excellence, is currently seeking new artificial intelligence initiatives to demonstrate value and spur momentum to accelerate the adoption of artificial intelligence and create a force fit for this era.

Capt. Phillip R. Jenkins, PhD, is an assistant professor of operations research at the Air Force Institute of Technology. His academic research involves problems relating to military defense, such as the location, allocation, and dispatch of medical evacuation assets in a deployed environment. He is an active-duty Air Force officer with nearly eight years of experience as an operations research analyst.

Brian J. Lunday, PhD, is a professor of operations research at the Air Force Institute of Technology who researches optimal resource location and allocation modeling. He served for 24 years as an active-duty Army officer, both as an operations research analyst and a combat engineer.

Matthew J. Robbins, PhD, is an associate professor of operations research at the Air Force Institute of Technology. His academic research involves the development and application of computational stochastic optimization methods for defense-oriented problems. Robbins served for 20 years as an active-duty Air Force officer, holding a variety of intelligence and operations research analyst positions.

The views expressed in this article are those of the authors and do not reflect the official policy or position of the U.S. Air Force, the Department of Defense, or the U.S. government.

Image: Sgt. 1st Class Thomas Wheeler


Qualtrics Announces Delighted AI, a Machine Learning Engine to Automate Every Step of the Customer Feedback Process – PRNewswire

SALT LAKE CITY, SEATTLE, and PALO ALTO, Calif., Sept. 23, 2020 /PRNewswire/ -- Qualtrics, the leader in customer experience and creator of the experience management category, today announced Delighted AI, an artificial intelligence and machine learning engine built directly into Delighted's customer experience platform. Delighted, a Qualtrics company, developed its AI technology to intelligently automate every aspect of the customer feedback process, from scheduling to analysis and reporting, so that companies can focus on closing feedback loops faster than ever. Delighted AI is complementary to Qualtrics' existing Text iQ enterprise technology for CustomerXM, optimized for Delighted customers.

Today, the most successful customer experience programs are no longer measurement- or metrics-based. Over the past few months, Net Promoter Scores have significantly declined in response to COVID-19, exposing customer experience gaps that companies have failed to address or identify. The companies that have emerged as customer experience leaders in the crisis have continuously listened to their customers and, more importantly, responded quickly to their preferences and expectations.

Delighted AI was created based on semantics and themes in the millions of customer feedback responses that Delighted and its customers have analyzed over several years to drive customer experience success.

"Delighted AI helped the right teams at our company understand customer feedback with more precision than ever before, which has been critical in the middle of a pandemic where we need to adapt and respond even more quickly to our customers' needs and expectations," said Roxana Turcanu, Growth Director for Adore Me, a New York-based e-commerce company. "We just recently launched a new try-at-home brand called Outlines, and we were able to do so with the help of Delighted AI by capturing and applying feedback early - this enabled us to pivot, at a rate we've never been able to do, towards what our customers actually wanted from our brand."

Benefits of Delighted AI include:

"Customer experience programs are rapidly evolving as companies have realized that relying on traditional metrics alone does not determine customer success. Instead, the customer experience leaders are winning based on gathering in-the-moment feedback that is immediately actionable and building a culture of continuous listening," said Caleb Elston, co-founder of Delighted. "We created Delighted AI to empower companies to spend less time configuring, implementing, and analyzing so they can focus on acting on insights faster than any other technology or human could before."

Acquired by Qualtrics in 2018, Delighted is one of the fastest and easiest ways to take action on customer feedback, which enables innovative brands and organizations of any size to quickly implement a customer experience program across every channel.

Learn more about Delighted AI here.

About Qualtrics

Qualtrics, the leader in customer experience and creator of the Experience Management (XM) category, is changing the way organizations manage and improve the four core experiences of business: customer, employee, product, and brand. Over 11,000 organizations around the world are using Qualtrics to listen, understand, and take action on experience data (X-data): the beliefs, emotions, and intentions that tell you why things are happening, and what to do about it. The Qualtrics XM Platform is a system of action that helps businesses attract customers who stay longer and buy more, engage employees who build a positive culture, develop breakthrough products people love, and build a brand people are passionate about. To learn more, please visit qualtrics.com.


SOURCE Qualtrics

http://www.qualtrics.com


Artificial Intelligence (AI) in Security Market 2020 Latest Trending Technology, Growing Demand, Application, Types, Services, Regional Analysis and…

The Artificial Intelligence (AI) in Security Market report includes a survey explaining the value chain structure, industrial outlook, regional analysis, applications, market size, share, and forecast. The Coronavirus (COVID-19) outbreak is influencing the growth of the market globally, and the rapidly changing market scenario, along with initial and future assessments of the impact, is covered in the research report. The report provides an overall analysis of the market based on types, applications, and regions for the forecast period from 2020 to 2026. It also identifies investment opportunities and probable threats in the market based on intelligent analysis. The report focuses on Artificial Intelligence (AI) in Security Market trends, future forecasts, growth opportunities, key end-user industries, and market-leading players. The objectives of the study are to present the key developments of the market across the globe. The report presents a 360-degree overview and SWOT analysis of the competitive landscape of the industry.

Get sample copy of Artificial Intelligence (AI) in Security Market report @ https://www.adroitmarketresearch.com/contacts/request-sample/1317

The Artificial Intelligence (AI) in Security Market 2020 Industry Research Report is a professional and in-depth study of the current state of the global Artificial Intelligence (AI) in Security industry. The report categorizes the global market by top players/brands, region, type, and end user. It also tracks the latest market dynamics, such as driving factors, restraining factors, and industry news including mergers, acquisitions, and investments. It provides market size (value and volume) and market share, growth rates by type and application, and combines both qualitative and quantitative methods to make micro and macro forecasts for different regions and countries.

Top Leading Key Players are:

Amazon.com, Inc., Fortinet, Google (Alphabet Inc.), IBM Corporation, Intel Corporation, Micron Technology Inc., Nvidia Corporation, Palo Alto Networks Inc., Samsung Electronics Co., Ltd., Symantec, Acalvio Technologies, Inc., Cylance Inc., Darktrace, Securonix, Inc., Sift Science, Sparkcognition Inc., Threatmetrix Inc., and Xilinx Inc.

Browse the complete report @ https://www.adroitmarketresearch.com/industry-reports/artificial-intelligence-ai-in-security-market

A thorough market study and investigation of trends in consumer and supply chain dynamics covered in this report help businesses draw up strategies for sales, marketing, and promotion. The market research performed for this report also sheds light on the challenges, market structures, opportunities, driving forces, and competitive landscape for the business, helping readers develop a keen sense of evolving industry movements before their competitors do. To gain competitive advantage in this swiftly transforming marketplace, such a market research report is highly recommended for the benefits it offers a thriving business.

The global Artificial Intelligence (AI) in Security market is segmented by component, application, industry vertical, and region:

By Component: Platform, Services

By Application: Identity and Access Management, Unified Threat Management, Antivirus/Antimalware, Risk and Compliance Management, Fraud Detection, and others

By Industry Vertical: BFSI, Retail, IT & Telecommunication, Automotive & Transportation, Manufacturing, Government & Defense, and others

This study also contains company profiling, product pictures and specifications, sales, market share, and contact information for various international, regional, and local vendors in the global Artificial Intelligence (AI) in Security Market. Market competition is constantly intensifying with the rise of technological innovation and M&A activity in the industry.

The report provides an in-depth analysis of the key developments and innovations of the market, such as research and development advancements, product launches, mergers & acquisitions, joint ventures, partnerships, government deals, and collaborations. The report provides a comprehensive overview of the regional growth of each market player. Additionally, the report provides details about the revenue estimation, financial standings, capacity, import/export, supply and demand ratio, production and consumption trends, CAGR, market share, market growth dynamics, and market segmentation analysis.

For Any Query on the Artificial Intelligence (AI) in Security Market: https://www.adroitmarketresearch.com/contacts/enquiry-before-buying/1317

Contact Us:

Ryan Johnson
Account Manager Global
3131 McKinney Ave Ste 600, Dallas, TX 75204, U.S.A.
Phone No.: USA: +1 972-362-8199 / +91 9665341414


Scientists around the world join forces to combat anti-Semitism with artificial intelligence – New York Post

BERLIN – An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project, "Decoding Anti-Semitism," includes discourse analysts, computational linguists and historians who will develop a highly complex, AI-driven approach to identifying online anti-Semitism, the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

"In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism, also taking into account the implicit forms that might become more explicit over time," said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King's College in London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn't be able to assess because of their sheer quantity, the foundation said.

Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways, for example through the use of codes ("juice" instead of "Jews") and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images, the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.

The problem of anti-Semitism online has increased, as seen by the rise in conspiracy myths accusing Jews of creating and spreading COVID-19, groups tracking anti-Semitism on the internet have found.

The focus of the current project is initially on Germany, France and the U.K., but will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.


UK Information Commissioner’s Office publishes guidance on artificial intelligence and data protection – Lexology

On 30 July, the UK's Information Commissioner's Office ("ICO") published new guidance on artificial intelligence ("AI") and data protection. The ICO is also running a series of webinars to help organisations and businesses to comply with their obligations under data protection law when using AI systems to process personal data. This legal update summarises the main points from the guidance and the AI Accountability and Governance webinar hosted by the ICO on 22 September 2020.

As AI increasingly becomes a part of our everyday lives, businesses worldwide have to navigate the expanding landscape of legal and regulatory obligations associated with the use of AI systems. The ICO guidance recognises that using AI can have indisputable benefits, but that it can also pose risks to the rights and freedoms of individuals. The guidance offers a framework for how businesses can assess and mitigate these risks from a data protection perspective. It also stresses the value of considering data protection at an early stage of AI development, emphasising that mitigation of AI-associated risks should come at the design stage of the AI system.

Although the new guidance is not a statutory code of practice, it represents what the ICO deems to be best practice for data protection-compliant AI solutions and sheds light on how the ICO interprets data protection obligations as they apply to AI. However, the ICO confirmed that businesses might be able to use other ways to achieve compliance. The guidance is the result of the ICO consultation on the AI auditing framework, which was open for public comments earlier in 2020. It is designed to complement existing AI resources published by the ICO, including the recent Explaining decisions made with AI guidance produced in collaboration with The Alan Turing Institute (for further information on this guidance, please see our alert here) and the Big Data and AI report.

Who is the guidance aimed at and how is the guidance structured?

The guidance can be useful for (i) those undertaking compliance roles within organisations, such as data protection officers, risk managers, general counsel and senior management, and (ii) technology specialists, namely AI developers, data scientists, software developers / engineers and cybersecurity / IT risk managers.

The guidance is split into four sections:

Although the ICO notes that the guidance is written so that each section is accessible for both compliance and technology specialists, the ICO states that sections 1 and 4 are primarily aimed at those in compliance roles, with sections 2 and 3 containing the more technical material.

1. ACCOUNTABILITY AND GOVERNANCE IMPLICATIONS OF AI

The first section of the guidance focuses on the accountability principle, which is one of seven data processing principles in the European General Data Protection Regulation ("GDPR"). The accountability principle requires organisations to be able to demonstrate compliance with data protection laws. Though the ICO acknowledges the ever-increasing technical complexity of AI systems, the guidance highlights that the onus is on organisations to ensure their governance and risk capabilities are proportionate to the organisation's use of AI systems.

The ICO is clear in its message that organisations should not "underestimate the initial and ongoing level of investment and effort that is required" when it comes to demonstrating accountability for use of AI systems when processing personal data. The guidance indicates that senior management should understand and effectively address the risks posed by AI systems, such as through ensuring that appropriate internal structures exist, from policies to personnel, to enable businesses to effectively identify, manage and mitigate those risks.

With respect to AI-specific implications of accountability, the guidance focuses on three areas:

(a) Businesses processing personal data through AI systems should undertake DPIAs:

The ICO has made it clear that a data protection impact assessment ("DPIA") will be required in the vast majority of cases in which an organisation uses an AI system to process personal data, because AI systems may involve processing which is likely to result in a high risk to individuals' rights and freedoms.

The ICO stresses that DPIAs should not be considered just a box-ticking exercise. A DPIA allows organisations to demonstrate that they are accountable when making decisions with respect to designing or acquiring AI systems. The ICO suggested that organisations might consider having two versions of the DPIA: (i) a detailed internal one which is used by the organisation to help it identify and minimise data protection risk of the project and (ii) an external-facing one which can be shared with individuals whose data is processed by the AI system to help the individuals understand how the AI is making decisions about them.

The DPIA should be considered a living document which gets updated as the AI system evolves (which can be particularly relevant for deep learning AI systems). The guidance notes that where an organisation decides that it does not need to undertake a DPIA with respect to any processing related to an AI system, the organisation will still need to document how it reached such a conclusion.

The guidance provides helpful commentary on a number of considerations which businesses may need to grapple with when conducting a DPIA for AI systems, including guidance on:

The ICO also refers businesses to its general guidance on DPIAs and how to complete them outside the context of AI.

(b) Businesses should consider the data protection roles carried out by different parties in relation to AI systems and put in place appropriate documentation:

The ICO acknowledges that assigning controller / processor roles in respect to AI systems can be inherently complex, given the number of actors involved in the subsequent processing of personal data via the AI system. In this respect, the ICO draws attention to its work on data protection and cloud computing, with revisions to the ICO's Cloud Computing Guidance expected in 2021.

The ICO outlines a number of examples in which organisations take the role of controller / processor with respect to AI systems. The ICO is planning to consult on each of these controller and processor scenarios in the Cloud Computing Guidance review, so organisations can expect further clarity in 2021.

(c) Businesses should put in place documentation for accountability purposes to identify any "trade-offs" when assessing AI-related risks:

The ICO notes that there are a number of "trade-offs" when assessing different AI-related risks. Some common examples of such trade-offs are included in the guidance itself, such as where an organisation wishes to train an AI system capable of producing accurate statistical output on the one hand, versus the data minimisation concerns associated with the quantity of personal data required to train such an AI system on the other.

The guidance provides advice to businesses seeking to manage risk associated with such trade-offs. The ICO recommends putting in place effective and accurate documentation processes for accountability purposes, but also advises businesses to consider specific issues such as: (i) where an organisation acquires an AI solution, whether the associated trade-offs formed part of the organisation's due diligence processes, (ii) social acceptability concerns associated with certain trade-offs, and (iii) whether mathematical approaches can mitigate trade-off-associated privacy risk.

2. ENSURING LAWFULNESS, FAIRNESS AND TRANSPARENCY IN AI SYSTEMS

The second section of the guidance focuses on ensuring lawfulness, fairness and transparency in AI systems and covers three main areas:

(a) Businesses should identify the purpose and an appropriate lawful basis for each processing operation in an AI system:

The guidance makes it clear that organisations must identify the purpose and an appropriate lawful basis for each processing operation in an AI system and specify these in their privacy notice.

It adds that it might be more appropriate to choose different lawful bases for the development and deployment phases of an AI system. For example, while performance of a contract might be an appropriate ground for processing personal data to deploy an AI system (e.g. to provide a quote to a customer before entering into a contract), it is unlikely that relying on this basis would be appropriate to develop an AI system.

The guidance makes it clear that legitimate interests provide the most flexible lawful basis for processing. However, if businesses rely on it, they are taking on an additional responsibility for considering and protecting people's rights and interests and must be able to demonstrate the necessity and proportionality of the processing through a legitimate interests assessment.

The guidance mentions that consent may be an appropriate lawful basis but individuals must have a genuine choice and be able to withdraw the consent as easily as they give it.

It might also be possible to rely on legal obligation as a lawful basis for auditing and testing the AI system if businesses are able to identify the specific legal obligation they are subject to (e.g. under the Equality Act 2010). However, it is unlikely to be appropriate for other uses of that data.

If the AI system processes special category or criminal convictions data, then the organisation will also need to ensure compliance with additional requirements in the GDPR and the Data Protection Act 2018.

(b) Businesses should assess the effectiveness of the AI system in making statistically accurate predictions about individuals:

The guidance notes that organisations should assess the merits of using a particular AI system in light of its effectiveness in making statistically accurate, and therefore valuable, predictions. In particular, organisations should monitor the system's precision and sensitivity. Organisations should also prioritise avoiding certain kinds of errors based on the severity and nature of the particular risk.

Businesses should agree regular updates (retraining of the AI system) and reviews of statistical accuracy to guard against changing data, for example, if the data originally used to train the AI system is no longer reflective of the current users of the AI systems.
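A minimal sketch of that kind of accuracy review, assuming labelled outcomes become available after deployment, could track precision and sensitivity (recall) with standard scikit-learn metrics; the predictions and labels below are placeholders.

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # observed outcomes this review period
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # what the AI system predicted

print("precision:", precision_score(y_true, y_pred))  # correct among flagged
print("sensitivity:", recall_score(y_true, y_pred))   # flagged among actual
# If these drift below agreed thresholds, that would trigger the retraining
# and review process the guidance recommends.
```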

(c) Businesses should address the risks of bias and discrimination in using an AI system:

AI systems may learn from data which may be imbalanced (e.g. because the proportion of different genders in the training data differs from that in the population using the AI system) and / or reflect past discrimination (e.g. if, in the past, male candidates were invited more often to job interviews), which could lead to outputs that have a discriminatory effect on individuals. The guidance makes it clear that obligations relating to discrimination under data protection law are separate from, and additional to, organisations' obligations under the Equality Act 2010.

The guidance mentions various approaches developed by computer scientists studying algorithmic fairness which aim to mitigate AI-driven discrimination. For example, in cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/over-represented subsets of the population. In cases where the training data reflects past discrimination, the data may be manually modified, the learning process could be adapted to reflect this, or the model can be modified after training. However, the guidance warns that in some cases, simply retraining the AI model with a more diverse training set may not be sufficient to mitigate its discriminatory impact and additional steps might need to be taken.
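As a small sketch of the first approach mentioned, re-balancing imbalanced training data, the snippet below oversamples an under-represented group until the groups are equal in size. The data and group labels are invented, and, as the guidance warns, re-balancing alone may not remove discriminatory effects.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                 # toy feature matrix
group = np.array([0] * 90 + [1] * 10)    # group 1 is under-represented

idx_minority = np.where(group == 1)[0]
# Resample minority rows (with replacement) until the groups are equal.
extra = rng.choice(idx_minority, size=80, replace=True)
X_balanced = np.vstack([X, X[extra]])
group_balanced = np.concatenate([group, group[extra]])
print(np.bincount(group_balanced))       # [90 90]
# Outcomes of a model trained on X_balanced still need testing against
# agreed fairness metrics, as the guidance recommends.
```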

The guidance recommends that businesses put in place policies and good practices to address risks related to bias and discrimination and undertake robust testing of the AI system on an ongoing basis against selected key performance metrics.

3. SECURITY ASSESSMENT AND DATA MINIMISATION IN AI SYSTEMS

The third section of the guidance is aimed at technical specialists and covers two main issues:

(a) Businesses should assess the security risks AI introduces and take steps to manage the risks of privacy attacks on AI systems:

AI systems introduce new kinds of complexity not found in more traditional IT systems. AI systems might also rely heavily on third party code and are often integrated with several other existing IT components. This complexity might make it more difficult to identify and manage security risks. As a result, businesses should ensure that they actively monitor and take into account the state-of-the-art security practices when using personal data in an AI context. Businesses should use these practices to assess AI systems for security risks and ensure that their staff have appropriate skills and knowledge to address these security risks. Businesses should also ensure that their procurement process includes sufficient information sharing between the parties to perform these assessments.

The guidance warns against two kinds of privacy attacks which allow the attacker to infer personal data of the individuals used to train the AI system:

The guidance then suggests some practical technical steps that businesses can take to manage the risks of such privacy attacks.

The guidance also warns against novel risks, such as adversarial examples, which allow attackers to feed modified inputs into an AI model so that they are misclassified by the AI system. The ICO notes that in some cases this could lead to a risk to the rights and freedoms of individuals (e.g. if a facial recognition system is tricked into misclassifying an individual as someone else). This would raise issues not only under data protection laws but possibly also under the Network and Information Systems (NIS) Directive.

(b) Business should take steps to minimise personal data when using AI systems and adopt appropriate privacy-enhancing methods:

AI systems generally require large amounts of data, but the GDPR data minimisation principle requires businesses to identify the minimum amount of personal data they need to fulfil their purposes. This can create some tension, but the guidance suggests steps businesses can take to ensure that the personal data used by the AI system is "adequate, relevant and limited".

The guidance recommends that individuals accountable for the risk management and compliance of AI systems are familiar with techniques such as: perturbation (i.e. adding 'noise' to data), using synthetic data, adopting federated learning, using less "human readable" formats, making inferences locally rather than on a central server, using privacy-preserving query approaches, and considering anonymisation and pseudonymisation of the personal data. The guidance goes into some detail for each of these techniques and explains when they might be appropriate.
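As a minimal sketch of the first technique on that list, perturbation, the snippet below adds Laplace noise to values before they are used, in the spirit of differential privacy. The privacy budget and sensitivity bound are assumptions that would need to be set per use case.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([34, 45, 29, 52, 41], dtype=float)  # toy personal data

epsilon = 0.5      # assumed privacy budget: smaller = noisier = more private
sensitivity = 1.0  # assumed bound on one record's influence on the output
noisy_ages = ages + rng.laplace(scale=sensitivity / epsilon, size=ages.shape)
print(noisy_ages.round(1))  # perturbed values passed on for analysis/training
```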

Importantly, ensuring security and data minimisation in AI systems is not a static process. The ICO suggests that compliance with data protection obligations requires ongoing monitoring of trends and developments in this area and being familiar with and adopting the latest security and privacy-enhancing techniques for AI systems. As a result, any contractual documentation that businesses put in place with service providers should take these privacy concerns into account.

4. INDIVIDUAL RIGHTS IN AI SYSTEMS

The final section of the guidance is aimed at compliance specialists and covers two main areas:

(a) Businesses must comply with individual rights requests in relation to personal data in all stages of the AI lifecycle, including training data, deployment data and data in the model itself:

Under the GDPR, individuals have a number of rights relating to their personal data. The guidance states that these rights apply wherever personal data is used at any of the various stages of the AI lifecycle from training the AI model to deployment.

The guidance is clear that even if personal data is converted into a form that makes it much harder to link to a particular individual, this is not necessarily sufficient to take the data out of scope of data protection law, because the bar for anonymisation of personal data under the GDPR is high.

If it is possible for an organisation to identify an individual in the data, directly or indirectly (e.g. by combining it with other data held by the organisation or other data provided by the individual), the organisation must respond to requests from individuals to exercise their rights under the GDPR (assuming the organisation has taken reasonable measures to verify their identity and no other exceptions apply). The guidance recognises that the use of personal data with AI may sometimes make it harder to fulfil individual rights, but warns that just because requests may be harder to fulfil in the context of AI, they should not be regarded as manifestly unfounded or excessive. The guidance also provides further detail about how businesses should comply with specific individual rights requests in the context of AI.

(b) Businesses should consider the requirements necessary to support a meaningful human review of any decisions made by, or with the support of, AI using personal data:

There are specific provisions in the GDPR (particularly Article 22 GDPR) covering individuals' rights where processing involves solely automated individual decision-making, including profiling, with legal or similarly significant effects. Businesses that use such decision-making must tell individuals whose data they are processing that they are doing so for automated decision-making and give them "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing. The ICO and the European Data Protection Board have both previously published detailed guidance on the obligations concerning automated individual decision-making, which can be of further assistance.

The GDPR requires businesses to implement suitable safeguards, such as the right to obtain human intervention, to express one's point of view, to contest the decision or to obtain an explanation of the logic behind it. The guidance mentions two particular reasons why AI decisions might be overturned: (i) if the individual is an outlier and their circumstances are substantially different from those considered in the training data, and (ii) if the assumptions in the AI model can be challenged, e.g. because of specific design choices. Businesses should therefore consider the requirements necessary to support a meaningful human review of any solely automated decision-making process (including interpretability requirements, training of staff and giving staff appropriate authority). The guidance from the ICO and The Alan Turing Institute on Explaining decisions made with AI considers this issue in further detail (for more information on that guidance, please see our alert here).
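As one hedged illustration of how the "outlier" trigger for human review might be operationalised, the sketch below flags inputs that sit far from the training distribution and routes them to a human reviewer. IsolationForest is just one possible detector, and all names and data here are assumptions, not anything the ICO prescribes.

```python
# Route decisions on out-of-distribution inputs to a human reviewer
# instead of letting the automated path handle them unchecked.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))  # stand-in training data

detector = IsolationForest(random_state=0).fit(X_train)

def route_decision(x):
    """Send outliers to human review; -1 marks an outlier in scikit-learn."""
    if detector.predict(x.reshape(1, -1))[0] == -1:
        return "human_review"
    return "automated_decision"

print(route_decision(rng.normal(size=8)))  # typical case: automated path
print(route_decision(np.full(8, 8.0)))     # far from training data: human review
```

Flagging such cases before a decision is acted upon gives the human reviewer a concrete, documented role rather than a rubber stamp.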

In contrast, decisions that are not fully automated but for which the AI system provides support to a human decision-maker do not fall within the scope of Article 22 GDPR. However, the guidance is clear that a decision does not fall outside the scope of Article 22 just because a human has "rubber-stamped" it: the human decision-maker must have a meaningful role in the decision-making process for the decision-support tool to fall outside the scope of Article 22.

The guidance also warns that meaningful human oversight requires businesses to address the risks of automation bias by human reviewers (i.e. relying on the output of the decision-support system rather than exercising their own judgment) and the risks of a lack of interpretability (i.e. outputs from AI systems, for example deep-learning models, that are difficult for a human reviewer to interpret or understand). The guidance provides some suggestions for how such risks might be addressed, including by considering them when designing or procuring the AI system, by training staff, and by effectively monitoring both the AI system and the human reviewers.

Conclusion

This guidance from the ICO is another welcome step for the rising number of businesses that use AI systems in their day-to-day operations. It also provides more clarity on how businesses should interpret their data protection obligations as they apply to AI. This is especially important because this area of compliance is attracting the focus of different regulators.

The ICO mentions "monitoring intrusive and disruptive technology" as one of its three focus areas and AI as one of its priorities for its regulatory approach during the COVID-19 pandemic and beyond. As a result, the ICO is also running a free webinar series in autumn 2020 on various topics covered in the guidance to help businesses achieve data protection compliance when using AI systems. The ICO stated on the AI Accountability and Governance webinar on 22 September 2020 that it is currently developing its AI auditing capabilities so it can use its powers to conduct audits of AI systems in the future. However, the ICO staff on the webinar confirmed the ICO would take into account the effect of the COVID-19 pandemic before conducting any AI audits.

Other regulators have also been interested in the implications of AI. For example, the Financial Conduct Authority is working with The Alan Turing Institute on AI transparency in financial markets. Businesses should therefore follow the guidance from their respective regulators and put in place a strategy for addressing the data protection (and other) risks associated with using AI systems.

See original here:
UK Information Commissioner's Office publishes guidance on artificial intelligence and data protection - Lexology

University of Illinois Professor Vikram Adve to lead new artificial intelligence institute – newsindiatimes.com

Computer Science Professor at the University of Illinois Vikram Adve will lead the AI Institute for Future Agricultural Resilience, Management and Sustainability, funded by the federal government. Photo: L. Brian Stauffer at illinois.edu

The National Science Foundation and the U.S. Department of Agriculture's National Institute of Food and Agriculture are announcing an investment of more than $140 million to establish seven artificial intelligence institutes in the U.S.

One of the new AI institutes will be led by an Indian American, Professor Vikram Adve of the University of Illinois, Urbana-Champaign, according to a press release from U of I. Each of the new institutes will receive about $20 million over five years.

Two of the seven AI institutes will be led by teams at the University of Illinois, Urbana-Champaign, one by Adve. They will support the work of researchers at the U. of I. and their partners at other academic and research institutions.

The USDA-NIFA will fund the AI Institute for Future Agricultural Resilience, Management and Sustainability (AIFARMS) at the U. of I., led by Adve, a computer science professor.

The NSF will fund the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing.

AIFARMS will advance AI research in computer vision, machine learning, soft-object manipulation and intuitive human-robot interaction to solve major agricultural challenges, the NSF reports.

Such challenges include sustainable intensification with limited labor, efficiency and welfare in animal agriculture, the environmental resilience of crops and the preservation of soil health. The institute will feature "a novel autonomous farm of the future, new education and outreach pathways for diversifying the workforce in agriculture and technology, and a global clearinghouse to foster collaboration in AI-driven agricultural research," Adve is quoted as saying in the press release.

The Molecule Maker Lab Institute will focus on the development of new AI-enabled tools to accelerate automated chemical synthesis to advance the discovery and manufacture of novel materials and bioactive compounds.

Read more from the original source:
University of Illinois Professor Vikram Adve to lead new artificial intelligence institute - newsindiatimes.com

Why Fashion Needs More Imagination When It Comes To Using Artificial Intelligence – Forbes

Virtual Fashion Show created using 3D digital design and AI machine learning algorithms

Until now, the use of artificial intelligence (AI) in the fashion industry has focused mostly on streamlining processes and increasing sales conversion. Areas which have traditionally taken precedence include finding efficiencies through automation, detecting product defects and counterfeit goods with image recognition, and increasing sales conversion through personalised styling. Creative uses of AI have been underexplored, but pose a mammoth opportunity for an industry rapidly digitising its design and presentation methods during the pandemic, and most likely afterwards too. Why is creative AI so underutilised in fashion, and what are the nascent opportunities for designers and brands? Is the use of AI in fashion design and presentations inevitable?

Matthew Drinkwater, Head of the Fashion Innovation Agency at London College of Fashion, believes that "initial uses of Artificial Intelligence have focused on quantifiable business needs, which has allowed for start-ups to offer a service to brands." He contends that "creativity is much more difficult to quantify and therefore more likely to follow behind."

In a practical sense, perhaps an additional limitation has been the gulf between the skill sets of fashion designers and computer scientists. London College of Fashion seems to think so, having recently launched an eight-week AI course in which 20 volunteer fashion students learned Python, wrote code to gather fashion data, and then used it to develop creative fashion solutions and experiences. When asked about the potential of AI in fashion, Drinkwater said: "For me, it is in the unpredictability of an algorithm." He acknowledged the creative talent of designers but suggested that the collaboration between creatives and neural networks may be where the unexpected is delivered. It is here, he predicts, that an imperfect result could arise, one that challenges our perception of what fashion design or showcasing could or should be.

The AI course was developed by the Fashion Innovation Agency (FIA) in partnership with Dr Pinar Yanardag of the MIT Media Lab. Working on the course was the FIA's 3D designer, Costas Kazantzis, who also designed 3D environments for one of the course outputs, an AI-driven catwalk. He explained during a Zoom call that the students hadn't coded before and were from a wide range of courses, including pattern cutting (for garment construction) and fashion curation. Despite being complete beginners learning Python, "when they understood the technical capabilities of AI they were able to thrive," he said.

The AI models used were generative adversarial networks (GANs), a type of machine learning in which two adversarial models are trained simultaneously: a generator ("the designer"), which learns to create images that look real, and a discriminator ("the design critic"), which learns to tell real images apart from fakes. During training, the generator becomes better at creating images that look real, while the discriminator becomes better at detecting fakes. Applied creatively, this allows computer-generated imagery and movement that look plausible (and likely aesthetically pleasing) to the viewer.
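To make that description concrete, here is a minimal, generic GAN training step in PyTorch. The architecture, batch size and hyperparameters are placeholder assumptions, not the models actually used on the course.

```python
# One GAN training step: the discriminator learns to tell real from fake,
# then the generator learns to fool it.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)  # stand-in for a batch of real images

# Discriminator ("design critic") step: score real data high, fakes low.
fake = G(torch.randn(32, latent_dim)).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator ("designer") step: make the critic score fakes as real.
g_loss = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Repeating this step over many batches is what drives the two networks toward the plausible-looking output described above.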

The students formed teams and devised proof-of-concept showcases of the uses of AI within the fashion industry, and were shown how and where to gather appropriate data to train their own algorithms. The course covered a range of AI applications, including training an AI model to classify items of clothing and predict fashion trends from social media, and style transfer to recognise imagery and create new designs. A pivotal output from the course was a virtual fashion show created from archive catwalk footage but placed in a new 3D environment, with the models wearing new 3D-generated outfits. Drinkwater believes this is an example of how even those with limited experience in the field can collaborate to push boundaries.

Talking me through the workflow for the virtual show, Kazantzis explained that computer vision algorithms were used to estimate skeletal movement data from an archive fashion show video. This data was then turned into a 3D pose simulation using another algorithm and applied to a 3D avatar in Blender to replicate the model's movement in the original video.
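The article does not name the specific algorithms used, but a workflow of this shape can be sketched with an off-the-shelf pose estimator such as MediaPipe Pose: extract per-frame skeletal landmarks from the archive video, then retarget them onto an avatar rig. The video filename below is hypothetical.

```python
# Sketch: extract per-frame skeletal landmarks from archive footage.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("archive_show.mp4")  # hypothetical input file

frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV reads BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks per frame, each with normalised x, y, z coordinates.
        frames.append([(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark])

cap.release()
print(f"extracted skeletal data for {len(frames)} frames")
# The landmark sequence could then be retargeted onto an avatar rig in Blender.
```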

CLO software was used to design and animate the garments for the avatar models, and style transfer (which uses image recognition via convolutional neural networks, or CNNs, to recognise patterns, textures and colours, then suggests designs and their placement on the garment) was used to develop the textiles and final garment surfaces. The 3D environment for the virtual show was created in the gaming engine Unity, which Kazantzis favours for its flexibility in design and its diverse outputs, including VR and AR applications. He used particle systems to create atmospheric weather effects, including fog, and to create sea life, such as the jellyfish in the underwater environment. Once the animated garments and textures were imported, the show was brought together in Unity, creating a final experience ready for export as a VR scene, as a website which can be navigated in 360 degrees, or as an AR experience in Sketchfab, for example. It's here that the power of AI to develop creative products, environment design and immersive content simultaneously seems most potent.
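The core of CNN-based style transfer can be sketched as a Gram-matrix style loss over pretrained VGG features, as below. This is a generic illustration (it downloads pretrained weights on first use), not the course's actual pipeline, and the images are random stand-ins.

```python
# Style-transfer sketch: a Gram-matrix style loss over VGG features,
# capturing textures and colours independently of their arrangement.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def features(x, layer=10):
    """Run an image batch through the first `layer` VGG feature layers."""
    for i, block in enumerate(vgg):
        x = block(x)
        if i == layer:
            return x

def gram(f):
    """Gram matrix: channel-by-channel feature correlations (texture)."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

content = torch.rand(1, 3, 224, 224)  # stand-in garment render
style = torch.rand(1, 3, 224, 224)    # stand-in textile swatch

style_loss = ((gram(features(content)) - gram(features(style))) ** 2).mean()
print("style loss:", style_loss.item())
# Full style transfer minimises this loss (plus a content loss) by gradient
# descent on the pixels of the content image.
```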

Kazantzis worked alongside Greta Gandossi, a 2019 graduate of the MA Pattern and Garment Technology course at London College of Fashion (who also holds an architecture degree), and Tracy Bergstrom (who has a data science background). The trio formed a pipeline for extracting the movement from the archive footage, creating the 3D garments and importing them into Unity. The students who created this virtual fashion show alongside them were Mary Thrift, Tirosh Yellin and Ashwini Deshpande.

The AI course commenced in March and the proof-of-concept virtual show was completed in June. This seems incredibly swift, and prompted me to ask Matthew Drinkwater whether this type of content creation is affordable and feasible for small and large brands alike. "Absolutely," he said, explaining that the project was created with a nominal budget. A caveat? "The more GPUs you throw into the mix, the more impressive your results are likely to be." Additionally, he recognised that the skill sets required are varied and that these factors would affect the timeframe. Despite this, he said: "I would fully expect to see many more examples of AI appearing on the catwalk in seasons to come."

This proof-of-concept virtual show launches today on the fifth day of London Fashion Week, which is operating in a decentralised manner across digital and physical platforms. Most brands are choosing to livestream a catwalk show happening behind closed doors, or to release a conceptual or catwalk-style video online at a specified showtime. Data from Launchmetrics indicates that engagement generated by these digital show methods has been much lower than for physical fashion shows. Could AI-generated virtual fashion experiences shape the future of fashion shows? Echoing others in the industry, Drinkwater said: "It has long been evident that fashion weeks have needed to evolve to provide a much more varied and accessible experience." He went on to add: "One fact is undeniable, the increased blurring of our physical and digital lives is going to lead to fashion shows that are markedly different from the traditional runway of the past."

Landmark uses of creative AI include the computer-generated artwork which sold at Christie's in 2018 for $432,500 (almost 45 times higher than the estimate). The artwork, Portrait of Edmond Belamy, was created by self-taught AI artist Robbie Barrat using a GAN model, working in partnership with the Paris-based arts collective Obvious. Barrat has also worked on an AI-generated Balenciaga runway show and trained a neural network on past collections of the fashion brand Acne Studios to generate designs for its AW20 men's collection. On the consumer and marketing side, there has been an expansion of deepfakes that place consumers into the content of the brands they covet. The RefaceAI app face-swaps the user into branded videos, and recently generated more than one million refaces and 400,000 shares in a day during a test collaboration with Gucci.

Mathilde Rougier's generative upcycled textile 'tiles'

On the experimental side, and seeking to address sustainability through the upcycling of waste, fashion design graduate Mathilde Rougier is using convolutional neural networks (CNNs) to design new textiles composed of interlocking offcut fabrics (akin to Lego), creating perpetually new fashion products from old ones. Her process is explained in detail in a recent Techstyler article and marks a new level of convergence between fashion design, AI and sustainability problem-solving.

Creative AI in fashion is in its infancy but is clearly gaining momentum. With the rapid adoption of 3D digital design in both fashion education and industry, and the ongoing restrictions on physical showcasing, the widespread creative use of AI appears to depend only on a critical mass of use cases to inspire industry adoption. If a group of students with no coding experience can develop this virtual show in just a few months on a nominal budget, the future of the fashion show looks refreshingly unpredictable.

View post:
Why Fashion Needs More Imagination When It Comes To Using Artificial Intelligence - Forbes