Category Archives: Artificial Intelligence

A Tale Of Two Jurisdictions: Sufficiency Of Disclosure For Artificial Intelligence (AI) Patents In The US And The EPO – Intellectual Property – United…

PatentNext Summary: To prepare applications for filing in multiple jurisdictions, practitioners should be cognizant of the claiming styles of the various jurisdictions in which they expect to file AI-related patent applications, and draft claims accordingly. For example, different jurisdictions, such as the U.S. and the EPO, apply different legal tests that can result in different styles for claiming artificial intelligence (AI)-related inventions.

In this article, we will compare two applications, one in the U.S. and the other in the EPO, that have the same or similar claims. Both applications claim priority to the same PCT application (PCT/AT2006/000457) (the "'457 PCT Application"), which was published as PCT Pub. No. WO/2007/053868.

As we shall see, despite the applications having the same or similar claims, prosecution in the two jurisdictions nonetheless resulted in different outcomes, with the U.S. application prosecuted to allowance and the EPO application ending in rejection.


Pertinent to our discussion is an overview of AI. A brief description of AI follows before analysis of the AI-related claims at issue.

Artificial Intelligence (AI) is fundamentally a data-driven technology that takes unique datasets as input to train AI computer models. Once trained, an AI computer model may take new data as input to predict, classify, or otherwise output results for use in a variety of applications.

Machine learning, arguably the most widely used AI technique, may be described as a process that uses data and algorithms to train (or teach) computer models, which usually involves training the weights of the model. Training typically involves calculating and updating the mathematical weights (i.e., numerical values) of a model based on input that can comprise hundreds, thousands, or millions of sets of data. The trained model allows the computer to make decisions without the need for explicit or rule-based programming.

In particular, machine learning algorithms build a model on training data to identify and extract patterns from the data and therefore acquire (or learn) unique knowledge that can be applied to new data sets.
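To make the description above concrete, the following minimal sketch (a hypothetical illustration, not drawn from the patent or this article) shows what "training weights from data" means in the simplest possible model: numerical weights are iteratively adjusted so the model's predictions fit input/output pairs, after which the trained model can be applied to new input.

```python
# Minimal sketch of supervised training: "learning" means iteratively
# adjusting numerical weights so the model's predictions fit the data.
def train(pairs, lr=0.05, epochs=500):
    w, b = 0.0, 0.0  # the model's weights, before any learning
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x + b   # model output for this input
            err = pred - y     # prediction error against the known answer
            w -= lr * err * x  # nudge the weights to reduce the error
            b -= lr * err
    return w, b

# Training data: input/output pairs the model learns from (here, y = 2x + 1).
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
# After training, the learned weights generalize to unseen input:
print(round(w * 4 + b, 2))  # approximately 9
```

A real neural network has many such weights arranged in layers, but the principle is the same: no explicit rule is programmed; the mapping is acquired from the data.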

For more information, see Artificial Intelligence & the Intellectual Property Landscape.

AI inventions are fundamentally software-related inventions. In the U.S., as a practical rule, software-related patents should disclose an algorithm by which the software-related invention is achieved. An algorithm provides support for a software-related patent pursuant to 35 U.S.C. 112(a), including (1) by providing sufficiency of disclosure for the patent's "written description" and (2) by "enabling" one of ordinary skill in the art (e.g., a computer engineer or computer programmer) to make or use the software-related invention without "undue experimentation." Without such support, a patent claim can be held invalid. For more information regarding general aspects of the sufficiency of disclosure in the U.S. for software-related inventions, see Why Including an "Algorithm" Is Important for Software Patents (Part 2).

U.S. Patent 8,920,327 (the "'327 Patent") issued from the '457 PCT Application. The '327 Patent is an example of an AI patent that did not experience sufficiency issues in the U.S. The below provides an overview of the '327 Patent.

The '327 Patent is titled "Method for Determining Cardiac Output" and includes a single independent claim directed to a method for determining cardiac output from an arterial blood pressure curve. The method is implemented via a cardiac device, as illustrated in Figure 1 (copied below):

Fig. 1 illustrates device 1 for implementing the invention of the '327 patent, where measuring device 2 measures the peripheral blood pressure curve, and where the related measurement data is fed into device 1 via line 3 and stored and evaluated there. The device further comprises optical display device 4, input panel 5, and keys 6 for inputting and displaying information.

The claimed method includes an AI aspect, namely the use of "an artificial neural network having weighting values that are determined by learning."

Claim 1 is copied below (with the AI aspect bolded):

1. A method for determining cardiac output from an arterial blood pressure curve measured at a peripheral region, comprising the steps of:

measuring the arterial blood pressure curve at the peripheral region; arithmetically transforming the measured arterial blood pressure curve to an equivalent aortic pressure; and

calculating the cardiac output from the equivalent aortic pressure,

wherein the arithmetic transformation of the arterial blood pressure curve measured at the peripheral region into the equivalent aortic pressure is performed by the aid of an artificial neural network having weighting values that are determined by learning.

Figure 3 of the '327 patent (copied below) is a schematic illustration of the artificial neural network, as recited in claim 1.

The specification of the '327 patent describes that "FIG. 3 illustrates the structure of the neural network..., and it is apparent that the neural network ... is comprised of three layers 14, 15, 16." The specification discloses that a supervised learning algorithm is used to train the weights of the model, e.g., "[t]he weights and the bias for the latter two layers 15 and 16 are determined by supervised learning."

Regarding the input data fed to the supervised learning algorithm to train the AI model, the specification states that "associated blood pressure curve pairs actually determined by measurements in the periphery or in the aorta, respectively, are used." The measurements used for the input data may come "from patients of different ages, sexes, constitutional types, health conditions and the like."
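As a rough illustration of the kind of supervised learning the specification describes, the sketch below trains a small three-layer network (input, hidden, output) on paired measurements. The data here is synthetic and the network size and learning parameters are assumptions chosen for illustration, since the patent discloses no actual training set; the point is only the shape of the process: paired curves in, weights determined by learning.

```python
import math
import random

random.seed(0)

# Hypothetical training pairs: (peripheral pressure sample, equivalent aortic
# pressure), generated synthetically here -- the patent discloses no data set.
pairs = [(x, 0.9 * x + 5.0) for x in range(60, 120, 5)]
# Normalize both sides so the sigmoid units operate in a useful range.
pairs = [((x - 90) / 30, (y - 86) / 30) for x, y in pairs]

H = 4  # hidden-layer size (assumed; the patent does not specify one)
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Forward pass through the three layers: input, hidden, output."""
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

# Supervised learning: the weighting values are "determined by learning"
# from the measured pairs via stochastic gradient descent.
lr = 0.1
for _ in range(3000):
    for x, y in pairs:
        out, h = forward(x)
        d = out - y  # output error for this pair
        for i in range(H):
            grad_h = d * w2[i] * h[i] * (1 - h[i])  # backpropagated error
            w2[i] -= lr * d * h[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h
        b2 -= lr * d
```

The EPO's later objection, discussed below, was precisely that the specification identifies the *kind* of data (curve pairs from varied patients) without disclosing an actual data set of the sort this sketch has to invent.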

No issues with respect to sufficiency were raised during prosecution of the application in the U.S. that issued as the '327 patent.

More generally, issues of sufficiency in the U.S. typically arise in litigation and involve expert testimony, i.e., "a battle of the experts," where expert witnesses (typically university professors or industry consultants) from opposing sides opine on the knowledge of a person of ordinary skill in the art and on the sufficiency of disclosure in view of that person.

The EPO has developed its own, yet similar, stance on AI-related inventions when compared with the U.S. Nonetheless, outcomes of prosecution can differ. The below provides a cursory overview of developments in the EPO with respect to AI-related inventions and analyzes the treatment of an EPO application filed based on the '457 PCT Application (the same PCT application as for the '327 patent discussed above).

Generally, artificial intelligence inventions may be patented at the European Patent Office (EPO). For example, in its Guidelines for Examination, the EPO defines AI and machine learning as "based on computational models and algorithms for classification, clustering, regression and dimensionality reduction, such as neural networks, genetic algorithms, support vector machines, k-means, kernel regression and discriminant analysis." Section 3.3.1 (Artificial intelligence and machine learning).

As such, the EPO dubs AI and machine learning as "per se of an abstract mathematical nature," irrespective of whether such models may be trained with training data. Id. Thus, simply claiming a machine learning model (e.g., a "neural network") does not, alone, necessarily imply the use of "technical means" in accordance with EPO law.

Nonetheless, the Guidelines for Examination at the EPO recognize that the use of an AI model, when claimed as a whole with additional subject matter, may demonstrate a sufficient technical character. Id. As an example, the Guidelines state that "the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats makes a technical contribution." Id. As a further example, the Guidelines state that "[t]he classification of digital images, videos, audio or speech signals based on low-level features (e.g. edges or pixel attributes for images) are further typical technical applications of classification algorithms." Id.

In a 2020 decision, the EPO Board of Appeal rejected a machine learning-based patent application that claimed an "artificial neural network" because the patent specification failed to sufficiently disclose how the artificial neural network was trained. See T 0161/18 (Equivalent aortic pressure / ARC SEIBERSDORF). The application in question claimed priority to the '457 PCT Application, which is the same parent application as the '327 patent, as discussed above.

The claims were the same as or similar to those in the U.S.: the claims at issue were directed to determining cardiac output from an arterial blood pressure curve measured at a periphery and recited, in part (with respect to AI), that the "blood pressure curve measured on the periphery is converted into the equivalent aortic pressure with the help of an artificial neural network, the weighting values of which are determined by learning."

Claim 1 is reproduced below (in English, based on a machine translation of the original German opinion):

1. A method for determining the cardiac output from an arterial blood pressure curve measured at the periphery, in which the blood pressure curve measured at the periphery is mathematically transformed to the equivalent aortic pressure and the cardiac output is calculated from the equivalent aortic pressure, characterized in that the blood pressure curve measured on the periphery is converted into the equivalent aortic pressure with the help of an artificial neural network, the weighting values of which are determined by learning.

The Board analyzed the claim in view of the specification pursuant to Article 83 EPC (sufficiency of disclosure). As described by the Board, Article 83 EPC requires that the invention be disclosed in the European patent application so clearly and completely that a person skilled in the art can carry it out. For this, the disclosure of the invention in the application must enable the person skilled in the art to reproduce the technical teaching inherent in the claimed invention on the basis of his general specialist knowledge.

The Board then turned to the specification to determine whether it disclosed enough support to meet these requirements in view of the claimed "artificial neural network." However, the specification was found lacking because it failed to "disclose which input data are suitable for training the artificial neural network according to the invention, or at least one data set suitable for solving the technical problem at hand."

Instead, the Board found that the specification "merely reveals that the input data should cover a broad spectrum of patients of different ages, genders, constitution types, health status and the like."

The Board therefore found that the training of the artificial neural network could not be reworked by the person skilled in the art, and that the person skilled in the art therefore could not carry out the invention.

Because of these deficiencies, the Board found that the specification failed to provide sufficient disclosure pursuant to Article 83 EPC.

For similar reasons, the Board further found that the claimed subject matter lacked an "inventive step" pursuant to Article 56 EPC. Specifically, the Board found that the claimed "artificial neural network" was not adapted for the specific, claimed application because the specification failed to disclose how the artificial neural network was trained, and specifically failed to disclose weight values that resulted from such training. For this reason, the claimed "artificial neural network" could not be distinguished from the cited prior art, which resulted in a failure to demonstrate the requisite inventive step.

As the Board described:

In the present case, the claimed neural network is therefore not adapted for the specific, claimed application. In the opinion of the Chamber, there is therefore only an unspecified adaptation of the weight values, which is in the nature of every artificial neural network. The board is therefore not convinced that the claimed effect will be achieved in the claimed method over the entire range claimed. This effect cannot, therefore, be taken into account in the assessment of inventive step in the sense of an improvement over the prior art.

Accordingly, at least with respect to patent applications filed in the EPO where an AI or machine learning model is to be distinguished from the prior art, a patent applicant may want to include an example training data set, example trained weights, or at least a sufficient description of the input used to train the model for the specific, claimed application or end-use. For example, at least one example data set can be provided (or claimed) to show the inputs used to train specific weights, which may give the claim sufficient disclosure while still allowing it to cover a spectrum of AI models trained with a particular set of data.

For the time being, such disclosure for an EPO case could be considered additional when compared with the sufficiency of disclosure required in the U.S. However, the U.S. Patent Office has also indicated, in its example guidance, the importance of including training data or the specific species of data used to train a model. See How to Patent an Artificial Intelligence (AI) Invention: Guidance from the U.S. Patent Office (USPTO). In any event, while there have been few court cases on AI-related inventions in the U.S. (see How the Courts Treat Artificial Intelligence (AI) Patent Inventions: Through the Years since Alice), future cases may indicate whether the U.S. will trend towards the EPO's decision in T 0161/18 with respect to the sufficiency of disclosure.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Artificial Intelligence (AI) in Manufacturing Market Worth $13.96 billion by 2028 Exclusive Report by Meticulous Research – Yahoo Finance

Artificial Intelligence in Manufacturing Market By Component, Technology (ML, NLP, Computer Vision), Application (Predictive Maintenance, Quality Management, Supply Chain, Production Planning), Industry Vertical, & Geography - Global Forecast to 2028

Redding, California, Nov. 02, 2021 (GLOBE NEWSWIRE) -- According to a new market research report titled AI in Manufacturing Market By Component, Technology (ML, NLP, Computer Vision), Application (Predictive Maintenance, Quality Management, Supply Chain, Production Planning), Industry Vertical, and Geography - Global Forecast to 2028, published by Meticulous Research, the artificial intelligence (AI) in manufacturing market is expected to grow at a CAGR of 38.6% during the forecast period to reach $13.96 billion by 2028.


The rising popularity of artificial intelligence in the manufacturing industry for optimizing logistics & supply chains, enhancing production outcomes, advancing process effectiveness, and reducing costs and downtime in production lines while delivering finished products to consumers is expected to drive the growth of the AI in manufacturing market. Additionally, the advent of Industry 4.0, the increasing volume of large complex data, and the rising adoption of industrial IoT further contribute to market growth.

However, the lack of infrastructure and high procurement and operating costs are expected to restrain the growth of this market to a certain extent.

Impact of COVID-19 on AI in Manufacturing Market

The COVID-19 pandemic created serious challenges for the world's economy and for industry verticals. SARS-CoV-2, the virus responsible for the global COVID-19 pandemic, had a distressing impact on most profitable businesses across the globe, forcing companies to manage a remote workforce while ensuring people's health & safety and business application integrity. The impact of the COVID-19 outbreak has varied with each industry sector's level of resilience. Additionally, the lockdowns imposed to contain the pandemic resulted in severe losses to businesses. Manufacturers across the globe faced grave challenges, such as diminished demand, production, and revenues, as the COVID-19 pandemic intensified in 2020. The automobile, semiconductors & electronics, and heavy metal & machinery manufacturing industries witnessed raw material shortages, with manufacturers temporarily closing down factories or minimizing production.



According to the United Nations Conference on Trade and Development (UNCTAD), the COVID-19 pandemic was expected to reduce global FDI by around 5 to 15% due to the temporary shutdown of the manufacturing sector. A survey conducted by the National Association of Manufacturers (NAM) stated that around 78% of manufacturers anticipated a financial impact, and 35.5% faced supply chain disruptions due to COVID-19. These factors led manufacturing companies to deprioritize their digital transformation strategies, including equipping their production units with AI.

Consequently, the AI in manufacturing market witnessed a sharp decline in 2020. Manufacturing industries will require considerable productive time and assistance from local governments to get back on track and overcome the COVID-19 crisis. Several governments plan to launch favorable initiatives, such as incentive programs promoting investments in the private sector, tax exemptions, and lower corporate interest rates. For instance, in 2021, Cisco Systems, Inc. (U.S.) launched a collaborative framework under Cisco's Country Digital Acceleration (CDA) program to accelerate digitization and support inclusive pandemic recovery across South Korea. Such developments and initiatives are having positive impacts on the growth of the market. Based on geography, the EU countries were affected the most by the COVID-19 pandemic, followed by the U.S. On the other hand, China is gradually recovering from the pandemic, with positive developments in the supply chain industry.

Post-COVID-19, several organizations might strategize to downsize by cutting business lines considered non-critical. Many leading AI in manufacturing players are eyeing this crisis as a new opportunity for restructuring and revisiting their existing strategies with advanced product portfolios. AI technology providers for manufacturing industries are focused on new applications and delivery models to create smart automation technologies, digitization, and advanced AI applications. For instance, in 2021, Nvidia Corporation (U.S.) partnered with Google Cloud (U.S.) to create the industry's first AI-on-5G Lab. This partnership helped accelerate the creation of smart cities, smart factories, and other advanced 5G and AI applications. Also, in 2021, General Electric Company (U.S.) partnered with the Global Manufacturing and Industrialization Summit (GMIS) (UAE) to explore the role of digitization, lean manufacturing, and workplace safety. Such developments and initiatives are expected to help manufacturing companies recover faster and reduce dependencies on physical process handling.

Hence, despite the pandemic affecting the AI in manufacturing market, it still holds considerable potential to bounce back with the gradual recovery of the manufacturing sector.

The AI in manufacturing market is segmented based on component (hardware [processors, memory solutions, and networking solutions], software [AI platforms and AI solutions], service [deployment & integration, support & maintenance]), technology (machine learning, natural language processing, computer vision, speech & voice recognition, context-aware computing), application (predictive maintenance & machinery inspection, quality management, supply chain optimization, industrial robot, production planning, material handling, field services, safety planning, cybersecurity, energy management), industry verticals (automotive, semiconductors & electronics, heavy metals & machine manufacturing, energy & power, aerospace & defense, medical devices, pharmaceuticals, and FMCG), and region. The study also evaluates industry competitors and analyses the market at the regional and country levels.

Based on component, the hardware segment is estimated to account for the largest share of the AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the increasing demand for robust and cost-effective devices, including servers, storage, and networking devices. However, the software segment is slated to grow at the fastest CAGR during the forecast period due to the high adoption of cloud-based technologies and the increasing demand for AI platforms to streamline processes and operations.

Based on technology, the machine learning segment is estimated to account for the largest share of the AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the rising need for identifying, monitoring, and analyzing the critical system variables during the manufacturing process, growing demand for predictive maintenance & machinery inspection, and the increase in unstructured data generated by the manufacturing industry. However, the natural language processing segment is slated to grow at the fastest CAGR during the forecast period due to the need to strengthen interactions with search engines by allowing queries to be assessed faster in an efficient manner and the growing demand for cloud-based NLP solutions to reduce overall costs, facilitate smart environments, and enhance scalability.

Based on application, the predictive maintenance & machinery inspection segment is estimated to account for the largest share and witness the fastest CAGR of the AI in manufacturing market in 2021. This segment's large market share and high growth rate are primarily driven by the increasing demand to reduce costs related to operating heavy equipment, growing demand for equipment uptime & availability, reducing maintenance planning time, improving production capacity, and real-time reporting of manufacturing issues in industries.


Based on industry vertical, the automotive industry is estimated to account for the largest share of the overall AI in manufacturing market in 2021. The large market share of this segment is primarily driven by the rising adoption of advanced AI automotive solutions for fault detection & isolation, quality management, smart manufacturing, production monitoring, and the need for predictive maintenance & machinery inspection solutions.

However, the medical devices manufacturing sector is slated to grow at the fastest CAGR during the forecast period due to the outbreak of the COVID-19 pandemic and the rising focus on preventive medical equipment maintenance to reduce unplanned downtime, enhance production quality control, and improve operational productivity.

Based on geography, Asia-Pacific is estimated to account for the largest share and witness the fastest CAGR of the AI in manufacturing market in 2021. This region's large market share and high growth rate are primarily attributed to the presence of major AI in manufacturing players and several emerging startups in the region, increasing investments by technology leaders, and increasing digitization, along with the strong presence of automobile, electronics, and semiconductor companies and their focus on developing advanced solutions to optimize manufacturing operations and processes.

The report also includes an extensive assessment of the key strategic developments adopted by the leading market participants in the industry over the past four years. The AI in manufacturing market has witnessed various strategies in recent years, such as partnerships & agreements. These strategies enabled companies to broaden their product portfolios, advance capabilities of existing products, and gain cost leadership in the AI in manufacturing market. For instance, in 2021, SAP SE (Germany) partnered with Google Cloud (U.S.) to augment existing business systems with Google Cloud capabilities in Artificial Intelligence (AI) and Machine Learning (ML). Also, SAP SE partnered with Plataine Ltd. (U.S.) to integrate IIoT and AI-based software for digital manufacturing. This partnership enabled customers to benefit from a holistic smart factory solution that extends across production operations. In 2021, Robert Bosch (Germany) collaborated with Capgemini SE (France) for intelligent manufacturing, digitization, and sustainability of their production plants.

The AI in manufacturing market is fragmented in nature. The major players operating in this market include Alphabet, Inc. (U.S.), IBM Corporation (U.S.), Intel Corporation (U.S.), Microsoft Corporation (U.S.), Nvidia Corporation (U.S.), Oracle Corporation (U.S.), Amazon Web Services, Inc. (U.S.), Siemens AG (Germany), General Electric Company (U.S.), SAP SE (Germany), Robert Bosch GmbH (Germany), Cisco Systems, Inc. (U.S.), Rockwell Automation, Inc. (U.S.), Advanced Micro Devices, Inc. (U.S.), and Sight Machine Inc. (U.S.) among others.


Scope of the Report:

AI in Manufacturing Market, by Component

Hardware

Processors

Memory Solutions

Networking Solutions

Services

Deployment & Integration

Support & Maintenance

AI in Manufacturing Market, by Technology

AI in Manufacturing Market, by Application

Predictive Maintenance & Machinery Inspection

Quality Management

Supply Chain Optimization

Industrial Robot/Robotics & Factory Automation

Production Planning

Material Handling

Field Services

Safety Planning

Cybersecurity

Energy management

AI in Manufacturing Market, by Industry Vertical

AI in Manufacturing Market, by Geography

North America

Europe

Germany

U.K.

France

Italy

Spain

Netherlands

Russia

Ireland

Turkey

Rest of Europe

Asia-Pacific

Japan

China

India

South Korea

Australia & New Zealand

Thailand

Indonesia

Taiwan

Vietnam

Rest of Asia-Pacific

Latin America

Mexico

Brazil

Rest of Latin America

Middle East and Africa

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=4983

Amidst this crisis, Meticulous Research is continuously assessing the impact of the COVID-19 pandemic on various sub-markets and enables global organizations to strategize for the post-COVID-19 world and sustain their growth. Let us know if you would like to assess the impact of COVID-19 on any industry here- https://www.meticulousresearch.com/custom-research

Related Reports:

Artificial Intelligence in Retail Market by Product, Application (Predictive Merchandizing, Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Deployment (Cloud, On-Premises), and Geography - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-in-retail-market-4979

Healthcare Artificial Intelligence Market by Product and Services (Software, Services), Technology (Machine Learning, NLP), Application (Medical Imaging, Precision Medicine, Patient Management), End User (Hospitals, Patients) - Global Forecast to 2027

https://www.meticulousresearch.com/product/healthcare-artificial-intelligence-market-4937

Automotive Artificial Intelligence (AI) Market by Component (Hardware, Software), Technology (Machine Learning, Computer Vision), Process (Signal Recognition, Image Recognition) and Application (Semi-Autonomous Driving) - Global Forecast to 2027

https://www.meticulousresearch.com/product/automotive-artificial-intelligence-market-4996

Artificial Intelligence in Supply Chain Market by Component (Platforms, Solutions) Technology (Machine Learning, Computer Vision, Natural Language Processing), Application (Warehouse, Fleet, Inventory Management), and by End User - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-ai-in-supply-chain-market-5064

Artificial Intelligence (AI) in Cybersecurity Market by Technology (ML, NLP), Security (Endpoint, Cloud, Network), Application (DLP, UTM, Encryption, IAM, Antivirus, IDP), Industry (Retail, Government, Automotive, BFSI, IT, Healthcare, Education), Geography - Global Forecast to 2027

https://www.meticulousresearch.com/product/artificial-intelligence-in-cybersecurity-market-5101

About Meticulous Research

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa.

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze, and present critical market data with great attention to detail. With meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, including qualitative and quantitative research, with a fine team of analysts. We design our meticulously analyzed, intelligent, and value-driven syndicated market research reports, custom studies, quick turnaround research, and consulting solutions to address business challenges of sustainable growth.

Contact:
Mr. Khushal Bombe
Meticulous Market Research Inc.
1267 Willis St, Ste 200, Redding, California, 96001, U.S.
USA: +1-646-781-8004
Europe: +44-203-868-8738
APAC: +91 744-7780008
Email: sales@meticulousresearch.com
Visit Our Website: https://www.meticulousresearch.com/
Connect with us on LinkedIn: https://www.linkedin.com/company/meticulous-research
Content Source: https://www.meticulousresearch.com/pressrelease/294/artificial-intelligence-in-manufacturing-market-2028


Why testing must address the trust-based issues surrounding artificial intelligence – Aerospace Testing International

Words by Jonathan Dyble

Aviation celebrates its 118th birthday this year. Over the years there have been many milestone advances, yet today engineers are still using the latest technology to enhance performance and transform capabilities in both the defence and commercial sectors.

Artificial Intelligence (AI) is arguably one of the most exciting areas of innovation and like many sectors, AI is garnering a great amount of attention in aviation.

Powered by significant advances in the processing power of computers, AI is today making aviation experts probe the opportunities of what was once seemingly impossible. It is worth noting that AI-related aviation transformation remains in its infant stages.

Given the huge risks and costs involved, full confidence and trust are required before autonomous systems can be deployed at scale. As a result, AI remains somewhat of a novelty in the aviation industry at present, but attention is growing, progress continues to be made, and the tide is beginning to turn.

One individual championing AI developments in aviation is Luuk Van Dijk, CEO and founder of Daedalean, a Zurich-based startup specializing in the autonomous operation of aircraft.

While Daedalean is focused on developing software for pilotless and affordable aircraft, Van Dijk is a staunch advocate of erring on the side of caution when it comes to deploying AI in an aviation environment. "We have to be careful of what we mean by artificial intelligence," says Van Dijk. "Any sufficiently advanced technology is indistinguishable from magic, and AI has always been referred to as the kind of thing we can almost but not quite do with computers. By that definition, AI has unlimited possible uses, but unfortunately none are ready today."

"When we look at things that have only fairly recently become possible, understanding an image for example, that is obviously massively useful to people. But these are applications of modern machine learning, and it is these that currently dominate the meaning of the term AI."

While such technologies remain somewhat in their infancy, the potential is clear to see.

Van Dijk says, "When we consider a pilot, especially in VFR, they use their eyes to see where they are, where they can fly and where they can land. Systems that assist with these functions, such as GPS and radio navigation, TCAS and ADS-B, PAPI [precision approach path indicator], and ILS, are limited. Strictly speaking they are all optional, and none can replace the use of your eyes."

"With AI, imagine that you can now use computer vision and machine learning to build systems that can help the pilot to see. That creates significant opportunities and possibilities: it can reduce the workload in regular flight and in contingencies, and therefore has the potential to make flying much safer and easier."

A significant reason why such technologies have not yet made their way into the cockpit is a lack of trust, something that must be earned through rigorous, extensive testing. Yet the way mechanical systems and software are tested differs significantly, because of an added layer of complexity in the latter.

"For any structural or mechanical part of an aircraft there are detailed protocols on how to conduct tests that are statistically sound and give you enough confidence to certify the system," says Van Dijk. "Software is different. It is very hard to test because the failures typically depend on rare events in a discrete input space."
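Van Dijk's point about rare events can be made concrete with a toy example. The sketch below, built around an invented `buggy_controller` function, shows why the statistical sampling that works well for mechanical parts can miss a software defect triggered by a single value in a huge discrete input space:

```python
import random

def buggy_controller(sensor_reading):
    """Hypothetical software routine with a rare-input bug: it
    misbehaves only for one specific value out of a huge space."""
    if sensor_reading == 7_777_777:   # the single failing input
        return -1                     # wrong output
    return sensor_reading % 100       # normal behaviour

random.seed(0)
# Random testing over a space of 10 million possible inputs: even
# 100,000 trials are overwhelmingly likely to miss the failing value.
trials = [random.randrange(10_000_000) for _ in range(100_000)]
failures = sum(1 for t in trials if buggy_controller(t) == -1)
print(failures)   # almost certainly 0: the bug escapes random testing
```

A mechanical fatigue test averages over continuous physical behavior, so sampling is informative; here the defect occupies a vanishingly small, discontinuous corner of the input space, which is exactly why software assurance leans on structural analysis rather than sampling alone.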

This was a problem that Daedalean encountered in its first project with the European Union Aviation Safety Agency (EASA), working to explore the use of neural networks in developing systems to measurably outperform humans on visual tasks such as navigation, landing guidance, and traffic detection. While the software design assurance approach that stems from the Software Considerations in Airborne Systems and Equipment Certification (DO-178C) works for more traditional software, its guidance was deemed to be only partially applicable to machine-learned systems.

"Instead of having human programmers translating high-level functional and safety requirements into low-level design requirements and computer code, in machine learning a computer explores the design space of possible solutions given a very precisely defined target function that encodes the requirements," says Van Dijk.

"If you can formulate your problem into this form, then it can be a very powerful technique, but you have to somehow come up with the evidence that the resulting system is fit for purpose and safe for use in the real world."

"To achieve this, you have to show that the emergent behavior of a system meets the requirements. That's not trivial and actually requires more care than building the system in the first place."

From these discoveries, Daedalean recently developed and released a joint report with EASA with the aim of maturing the concept of learning assurance and pinpointing trustworthy building blocks upon which AI applications could be tested thoroughly enough to be safely and confidently incorporated into an aircraft. "The underlying statistical nature of machine learning systems actually makes them very conducive to evidence and arguments based on sufficient testing," Van Dijk confirms, summarizing the findings showcased in the report.

"The requirements to the system then become traceable to the requirements on the test data: you have to show that your test data is sufficiently representative of the data you will encounter during an actual flight. For that you must show that you have sampled any data with independence, a term familiar to those versed in the art of design assurance, but something that has a much stricter mathematical meaning in this context."
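One common way to quantify whether a test set is representative of operational data is a two-sample Kolmogorov-Smirnov statistic, which measures the largest gap between the two empirical distributions. The sketch below is illustrative only, with synthetic data; it is not drawn from the EASA report:

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples (0 = identical
    distributions, 1 = completely disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))
    max_gap = 0.0
    for v in values:
        cdf_a = sum(1 for x in a if x <= v) / len(a)
        cdf_b = sum(1 for x in b if x <= v) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

random.seed(42)
# Hypothetical "operational" measurements, plus one test set drawn
# from the same distribution and one drawn from a shifted one.
flight_data = [random.gauss(0.0, 1.0) for _ in range(500)]
good_test_set = [random.gauss(0.0, 1.0) for _ in range(500)]
skewed_test_set = [random.gauss(1.5, 1.0) for _ in range(500)]

print(ks_statistic(flight_data, good_test_set))    # small gap
print(ks_statistic(flight_data, skewed_test_set))  # large gap
```

A certification argument would demand far more than a single statistic, but the shape of the evidence is the same: a measurable claim that the test distribution matches the operational one.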

Another person helping to make the strides needed to make the use of AI in the cockpit a reality is Dan Javorsek, Commander of Detachment 6, Air Force Operational Test and Evaluation Center (AFOTEC) at Nellis Air Force Base in Nevada. Javorsek is also director of the F-35 US Operational Test Team and previously worked as a program manager for the Defense Advanced Research Projects Agency (DARPA) within its Strategic Technology Office.

Much like Van Dijk, Javorsek points to trust as the key element in ensuring that potentially transformational AI and automated systems become accepted and incorporated into future aircraft. Furthermore, he believes that this will be hard to achieve using current test methods. "Traditional trust-based research relies heavily on surveys taken after test events. These proved to be largely inadequate for a variety of reasons, but most notably their lack of diagnostics during different phases of a dynamic engagement," says Javorsek.

As part of his research, Javorsek attempted to address this challenge directly by building a trust measurement mechanism reliant upon a pilot's physiology. Pilots' attention was divided between two primary tasks concurrently, forcing them to decide which task to accomplish and which to offload to an accompanying autonomous system.

"Through these tests we were able to measure a host of physiological indicators shown by the pilots, from their heart rate and galvanic skin response to their gaze and pupil dwell times on different aspects of the cockpit environment," Javorsek says.

"As a result, we end up with a metric for which contextual situations and which autonomous system behaviors give rise to manoeuvres that the pilots appropriately trust."

However, a key challenge that Javorsek encountered during this research was the difficulty machines would have in assessing hard-to-anticipate events in what he describes as very messy military situations.

Real-world scenarios will often throw up unusual tactics and situations, such as stale tracks and the presence of significant denial and deception on both sides of an engagement. In addition, electronic jammers and repeaters are often used to attempt to mimic and confuse an adversary.

"This can lead to an environment prone to accidental fratricide that can be challenging for even the most seasoned and experienced pilots," Javorsek says. "As a result, aircrews need to be very aware of the limitations of any autonomous system they are working with and employing on the battlefield."

It is perhaps for these reasons that Nick Gkikas, systems engineer for Airbus Defence and Space, human factors engineering and flight deck, argues that the most effective use of AI and machine learning is outside the cockpit at present. "In aviation, AI and machine learning is most effective when it is used offline and on the ground in managing and exploiting big data from aircraft health and human-in/on-the-loop mission performance during training and operations," he says.

"In the cockpit, most people imagine the implementation of machine learning as an R2-D2 type of robot assistant. While such a capability may be possible today, it is currently still limited by the amount of processing power available on board and the development of effective human-machine interfaces with machine agents in the system."

Gkikas agrees with Javorsek and Van Dijk that AI hasn't yet been sufficiently developed to be part of the cockpit in an effective and safe manner. Until such technologies are more advanced, effectively tested, and able to be powered by even greater sophistication in computing power, it seems AI may be better placed for other aviation applications such as weapons systems.

Javorsek also believes it will be several years before AI and machine learning software will be successful in dynamically controlling the manoeuvres of fleet aircraft traditionally assigned to contemporary manned fighters. However, there is consensus amongst experts that there is undoubted potential for such technologies to be developed further and eventually incorporated within the cockpit of future aircraft.

"For AI in the cockpit and in aircraft in general, I am confident we will see unmanned drones, eVTOL aircraft and similarly transformative technologies being rolled out beyond test environments in the not-so-distant future," concludes Van Dijk.

View post:
Why testing must address the trust-based issues surrounding artificial intelligence - Aerospace Testing International

US/EU Initiative Spotlights Cooperation, Differing Approaches To Regulation Of Artificial Intelligence Systems – Privacy – Worldwide – Mondaq News…


In late September 2021, representatives from the U.S. and the European Union met to coordinate objectives related to the U.S.-EU Trade and Technology Council, and high on the Council's agenda were the societal implications of the use of artificial intelligence systems and technologies ("AI Systems"). The Council's public statements on AI Systems affirmed its "willingness and intention to develop and implement trustworthy AI" and a "commitment to a human-centric approach that reinforces shared democratic values," while acknowledging concerns that authoritarian regimes may develop and use AI Systems to curtail human rights, suppress free speech, and enforce surveillance systems. Given the increasing focus on the development and use of AI Systems from both users and investors, it is becoming imperative for companies to track policy and regulatory developments regarding AI on both sides of the Atlantic.

At the heart of the debate over the appropriate regulatory strategy is a growing concern over algorithmic bias, the notion that the algorithm powering the AI Systems in question has bias "baked in" that will manifest in its results. Examples of this issue abound: job applicant systems favoring certain candidates over others, facial recognition systems treating African Americans differently than Caucasians, etc. These concerns have been amplified over the last 18 months as social justice movements have highlighted the real-world implications of algorithmic bias.

In response, some prominent tech industry players have posted position statements on their public-facing websites regarding their use of AI Systems and other machine learning practices. These statements typically address issues such as bias, fairness, and disparate impact stemming from the use of AI Systems, but often are not binding or enforceable in any way. As a result, these public statements have not quelled the debate around regulating AI Systems; rather, they highlight the disparate regulatory regimes and business needs that these companies must navigate.

When the EU's General Data Protection Regulation ("GDPR") came into force in 2018, it provided prescriptive guidance regarding the treatment of automated decision-making practices or profiling. Specifically, Article 22 is generally understood to implicate technology involving AI Systems. Under that provision, EU data subjects have the right not to be subject to decisions based solely on automated processing (and without human intervention) which may produce legal effects for the individual. In addition to Article 22, data processing principles in the GDPR, such as data minimization and purpose limitation practices, are applicable to the expansive data collection practices inherent in many AI Systems.

Consistent with the approach enacted in GDPR, recently proposed EU legislation regarding AI Systems favors tasking businesses, rather than users, with compliance responsibilities. The EU's Artificial Intelligence Act (the "Draft AI Regulation"), released by the EU Commission in April 2021, would require companies (and users) who use AI Systems as part of their business practices in the EU to limit the harmful impact of AI. If enacted, the Draft AI Regulation would be one of the first legal frameworks for AI designed to "guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU." The Draft AI Regulation adopts a risk-based approach, categorizing AI Systems as unacceptable risk, high risk, and minimal risk. Much of the focus and discussion with respect to the Draft AI Regulation has concerned (i) what types of AI Systems are considered high-risk, and (ii) the resulting obligations on such systems. Under the current version of the proposal, activities that would be considered "high-risk" include employee recruiting and credit scoring, and the obligations for high-risk AI Systems would include maintaining technical documentation and logs, establishing a risk management system and appropriate human oversight measures, and requiring incident reporting with respect to AI System malfunctioning.

While AI Systems have previously been subject to guidelines from governmental entities and industry groups, the Draft AI Regulation will be the most comprehensive AI Systems law in Europe, if not the world. In addition to the substantive requirements previewed above, it proposes establishing an EU AI board to facilitate implementation of the law, allowing Member State regulators to enforce the law, and authorizing fines up to 6% of a company's annual worldwide turnover. The draft law will likely be subject to a period of discussion and revision with the potential for a transition period, meaning that companies that do business in Europe or target EU data subjects will have a few years to prepare.

Unlike the EU, the U.S. lacks comprehensive federal privacy legislation and has no law or regulation as specifically tailored to AI activities. Enforcement of violations of privacy practices, including data collection and processing practices through AI Systems, primarily originates from Section 5 of the Federal Trade Commission ("FTC") Act, which prohibits unfair or deceptive acts or practices. In April 2020, the FTC issued guidance regarding the use of AI Systems designed to promote fairness and equity. Specifically, the guidance directed that the use of AI tools should be "transparent, explainable, fair, and empirically sound, while fostering accountability." The change in administration has not changed the FTC's focus on AI systems. First, public statements from then-FTC Acting Chair Rebecca Slaughter in February 2021 cited algorithms that result in bias or discrimination, or AI-generated consumer harms, as a key focus of the agency. Then, the FTC addressed potential bias in AI Systems on its website in April 2021 and signaled that unless businesses adopt a transparency approach, test for discriminatory outcomes, and are truthful about data use, FTC enforcement actions may result.

At the state level, recently enacted privacy laws in California, Colorado and Virginia will enable consumers in those states to opt out of the use of their personal information in the context of "profiling," defined as a form of automated processing performed on personal information to evaluate, analyze, or predict aspects related to individuals. While AI Systems are not specifically addressed, the three new state laws require data controllers (or equivalent) to conduct data protection impact assessments to determine whether processing risks associated with profiling may result in unfair or disparate impact to consumers. In all three cases, yet-to-be-promulgated implementing regulations may provide businesses (and consumers) with additional guidance regarding operationalizing automated decision-making requests up until the laws' effective dates (January 2023 for Virginia and California, July 2023 for Colorado).

Proliferating use of AI Systems has dramatically increased the scale, scope, and frequency of processing of personal information, which has led to an accompanying increase in regulatory scrutiny to ensure that harms to individuals are minimized. Businesses that utilize AI Systems should adopt a comprehensive governance approach to comply with both the complementary and divergent aspects of the U.S. and EU approaches to the protection of individual rights. Although laws governing the use of AI Systems remain in flux on both sides of the Atlantic, businesses that utilize AI in their business practices should consider asking themselves the following questions:

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Excerpt from:
US/EU Initiative Spotlights Cooperation, Differing Approaches To Regulation Of Artificial Intelligence Systems - Privacy - Worldwide - Mondaq News...

How does McLaren win the race both on the track and in cyberspace? With innovation and Artificial Intelligence – Entrepreneur

This article was translated from our Spanish edition using AI technologies. Errors may exist due to this process. Opinions expressed by Entrepreneur contributors are their own.


Innovation is essential for success. With the Mexican Grand Prix taking place on November 7, innovation will be at the center of attention in one of the most incredible feats of engineering: Formula 1 (F1). Which team is fastest will largely depend on its ability to innovate to win the race.

For the McLaren F1 team, that means perfecting the car to stay ahead of the competition. Every 17 minutes, McLaren creates a new part for its car. And, at the end of the season, 80% of the car will be completely different. With more than 180 Grand Prix victories and 20 championships, this strategy has served the historic McLaren team well.

But innovation is also critical to winning the race in cyberspace. To outcompete cyber attackers, organizations must invest in technologies that prioritize research and development, deploying technology to combat the growing number of increasingly sophisticated threats.

The constant evolution of the car makes McLaren's intellectual property - that is, the car's designs and its performance characteristics - its most prized jewel. Just as a startup's business model or product designs are crucial to any entrepreneur, McLaren's proprietary data is its most precious resource.

Both entrepreneurs and companies must take the necessary precautions to ensure that criminals cannot steal their ideas. Hackers don't care about legal frameworks or patents. They will do whatever it takes to cause harm.

A successful cyberattack could cause serious problems for any organization. These attacks could lead to a deterioration in reputation, significant capital losses, or the theft of ideas that are already patented. For McLaren or any other team, the damage that a cyberattack would cause, ranging from access to intellectual property to the competition strategy or the data from the sensors connected to the cars, could be the difference between winning and losing.

The pace is so fast in racing that F1 teams need technology and innovation to keep up. Proper cybersecurity measures are critical to protecting intellectual property, ensuring equipment can function and, most importantly, ensuring victory.

Beyond data loss, cyberattacks can have physical consequences in the racing world. A successful cyberattack could disrupt the activities of a business entirely. The closure of any activity for a period of time would be unsustainable.

But for McLaren, not taking the car out on the track is inconceivable. A shutdown attack on a race weekend is the kind of situation that keeps McLaren leaders up at night.

Organizations must understand that hackers will eventually infiltrate. More important than building perimeter defenses with legacy technologies such as firewalls, companies must focus on mitigating the spread of threats and minimizing damage to avoid a shutdown.

The digital environment of an entrepreneur, like that of F1, moves at high speed: multiple processes and activities happen simultaneously. For that reason, leveraging artificial intelligence (AI) -based solutions for cybersecurity defense is vital.

The McLaren team uses AI technology to ensure the defense of its digital infrastructure. These artificial intelligence-based cybersecurity solutions automatically alert security teams to threats in their digital infrastructure. These real-time alerts allow security teams to focus their attention on responding to and remediating threats.

Especially during race weekends, it is important that the entire employee base - from the CEO to the team in the box - does not waste valuable time evaluating whether an email or other communication is authentic. They need to trust AI to examine that data for them.

This autonomous responsiveness allows the team to focus on more complex security tasks, without having to worry about relying on the decisions of a single individual to protect the entire company and its infrastructure. It is not only the person who clicks a suspicious link who is at risk; the entire company is.

Cybersecurity based on artificial intelligence (AI) is the best way to secure complex and sophisticated digital environments, such as a growing startup or a Formula 1 team. Every weekend, McLaren is in a different place, racing on a different circuit.

The most successful AI has the ability to learn about the regular business operations of an organization, thereby identifying abnormal behaviors for that specific environment. In that sense, AI can prevent partial or total business interruption, data theft and other negative repercussions of a cyber attack. This type of AI, called self-learning, can adapt as its environment changes, which in the case of F1 is very common.
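The "self-learning" behavior described here can be sketched as an anomaly detector that maintains a running baseline of normal activity and adapts that baseline as the environment changes. The following minimal Python sketch is an illustration of the general idea, not McLaren's actual system; the metric, threshold, and learning rate are all invented:

```python
import math

class SelfLearningDetector:
    """Minimal sketch: keep a running mean/variance of a metric
    (e.g. outbound traffic volume) and flag values far from the
    learned baseline, updating the baseline on normal observations
    so the notion of "normal" tracks a changing environment."""

    def __init__(self, threshold=3.0, alpha=0.05):
        self.mean = None
        self.var = 1.0
        self.threshold = threshold  # flag beyond this many std devs
        self.alpha = alpha          # learning rate for adaptation

    def observe(self, value):
        if self.mean is None:       # first observation seeds the baseline
            self.mean = value
            return False
        std = math.sqrt(self.var)
        anomalous = abs(value - self.mean) > self.threshold * std
        if not anomalous:
            # Adapt the baseline only on normal observations, so a
            # sudden attack spike cannot "teach" the detector.
            delta = value - self.mean
            self.mean += self.alpha * delta
            self.var = (1 - self.alpha) * self.var + self.alpha * delta * delta
        return anomalous

detector = SelfLearningDetector()
for v in [10, 11, 9, 10, 12, 10]:   # normal traffic levels
    detector.observe(v)
print(detector.observe(100))         # sudden spike -> flagged
```

Because the baseline is learned per environment, the same detector deployed at a different circuit, or a different company, converges on a different definition of normal, which is the adaptation property the article describes.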

McLaren learned that it was essential to adopt this type of technology in advance. As attacks and their perpetrators become more sophisticated, defensive technologies need to rely on innovation to stay ahead of threats.

Every entrepreneur and business owner should follow McLaren's example: embrace new technologies to defend and protect their ideas and their work. McLaren took decisive steps to ensure its cyber integrity; other companies should too.

Read the original post:
How does McLaren win the race both on the track and in cyberspace? With innovation and Artificial Intelligence - Entrepreneur

Ethics and governance of artificial intelligence for health

Overview

The WHO guidance on Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law, and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research, and drug development, and to support governments in carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.

The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health, along with six consensus principles to ensure AI works for the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders in the public and private sector accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use.


View post:
Ethics and governance of artificial intelligence for health

DISA Moves to Combat Intensifying Cyber Threats with Artificial Intelligence – Nextgov

In the near term, Defense Information Systems Agency officials plan to strategically employ artificial intelligence capabilities for defensive cyber operations.

"First of all, the threat has never been higher. It's also been commoditized: malware has become commercialized as essentially organized crime on an international scale," Deputy Commander of the Joint Force Headquarters-Department of Defense Information Network Rear Adm. William Chase III told reporters during a media roundtable last week. "So, one of the first questions we have to ask ourselves is: What are we actually vulnerable to?"

The press event was associated with DISA's Forecast to Industry and the release of its strategic plan for 2022 through 2024.

That document organizes some of the agency's broad aims to "accelerate [its] efforts to connect and protect the warfighter" in cyberspace as the conflict landscape evolves. The vision includes lines of effort promoting activities to ultimately "implement and refine a global network infrastructure and unified capabilities," such as "leverage data as a center of gravity" and "drive force readiness through innovation."

"We're now standing up the Office of the Chief Data Officer to be able to catalog and understand all of the data sources that we have, and then be able to apply AI and machine learning to actually help our cyber defenders be able to, in more real-time, have visibility of the attacks as they're actually occurring on the network," DISA Chief Information Officer Roger Greenwell explained.

Greenwell, who also serves as the agency's acting risk management executive and Enterprise Integration and Innovation Center director, said officials are still in the process of finalizing who the chief data officer tapped to lead that office will be. But, he noted, the new hub is being built out and stood up, and it is populated with a number of individuals.

Its establishment comes at a crucial time when DISA is processing massive volumes of data. The agency oversees roughly 300 billion Internet Protocol version 4 addresses, and recognizes that it is simply not possible for analysts to have visibility into all those endpoints that exist and manually manage everything.

"So, AI and machine learning are absolutely critical to that. We have some pilot efforts ongoing right now. Certainly, the Joint AI Center is a partner with us in terms of how we actually will go about taking advantage of AI," Greenwell explained. "But that is, to me, the most critical need that we have for AI at this moment, but there certainly are other use cases for it as well."

The CIO and other senior officials at the roundtable also reflected on pivots being made to confront modern challenges. Director of DISA's Cyber Security and Analytics Directorate Brian Hermann noted that tools and networks are having to be rearchitected to match new demands accelerated by the COVID-19 pandemic.

And at the same time, as the others noted, cyber crime is increasing and becoming more organized.

"The reality is that we can't continue to do the things that we've done for years, in the same way, and be secure against that threat. And so, what we're focused on is automation, AI and tools like that so that we can relieve the pressure on the analysts, and get the high-priority things in front of them very quickly," Hermann said, "and deal with the known issues, the challenges that come up all the time, in a very automated way."
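The triage pattern Hermann describes, automating known issues while surfacing high-priority unknowns to human analysts, can be sketched in a few lines. Everything below (the signatures, playbooks, and severity scale) is hypothetical and for illustration only:

```python
# Hypothetical alert-triage sketch: alerts matching known signatures
# get an automated playbook response; everything else is queued for
# analysts, highest severity first.
KNOWN_PLAYBOOKS = {
    "port-scan": "block source IP",
    "known-malware-hash": "quarantine host",
}

def triage(alerts):
    automated, analyst_queue = [], []
    for alert in alerts:
        if alert["signature"] in KNOWN_PLAYBOOKS:
            automated.append((alert["id"], KNOWN_PLAYBOOKS[alert["signature"]]))
        else:
            analyst_queue.append(alert)
    # Highest-severity unknowns go to analysts first.
    analyst_queue.sort(key=lambda a: a["severity"], reverse=True)
    return automated, analyst_queue

alerts = [
    {"id": 1, "signature": "port-scan", "severity": 2},
    {"id": 2, "signature": "novel-beaconing", "severity": 9},
    {"id": 3, "signature": "known-malware-hash", "severity": 5},
    {"id": 4, "signature": "odd-login-pattern", "severity": 6},
]
handled, queue = triage(alerts)
print(handled)                      # known issues auto-remediated
print([a["id"] for a in queue])    # analysts see 2 first, then 4
```

The point of the sketch is the division of labor: routine, signature-matched events never reach a human, so analyst attention is reserved for the novel, high-severity residue.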

Read the original post:
DISA Moves to Combat Intensifying Cyber Threats with Artificial Intelligence - Nextgov

Artificial Intelligence, Automation and The Future of Corporate Finance – PRNewswire

NASHVILLE, Tenn., Nov. 1, 2021 /PRNewswire/ -- Algorithms rule the world or, at least, the world is headed that way. How can you prepare your company and its financial underpinnings not only to survive but also thrive under this new big-data paradigm? In his new book, Deep Finance: Corporate Finance in the Information Age, author Glenn Hopper provides a clear guide for finance professionals and non-technologists who aspire to digitally transform their companies into modern, data-driven organizations streamlined for success and profitability.

Hopper, who comes to this subject armed with a unique background in finance and technology, contends that the finance department is perfectly placed to lead the digital revolution, bringing companies of all sizes into a new era of efficiency while future-proofing the role of chief financial officer.

Deep Finance is written for a wide audience, ranging from those who don't know AI from A/R to those who are already working with data to drive business decisions. The book illuminates the path toward digital transformation with instructions on how finance professionals can elevate their leadership and become champions for data science.

In Deep Finance, readers will:

"In this Age of AI, every function in every company has to go through its own digital transformation to enable their organizations to succeed. Glenn Hopper provides an essential roadmap to accounting and finance executives on how to embrace analytics and AI as core tools for modern finance. This book should be required reading for every general manager."

Karim R. Lakhani | Co-Author of Competing in the Age of AI; Co-Director of the Laboratory for Innovation Science at Harvard and Co-Chair of the Harvard Business Analytics Program

A former Navy journalist, filmmaker, and business founder, Hopper has spent the past two decades helping startups transition into going concerns, operate at scale, and prepare for funding and/or acquisition. He is passionate about transforming the role of CFO from a historical reporter and bookkeeper to a forward-looking strategist who is integral to a company's future. He has served as a finance leader in a variety of industries, including telecommunications, retail, Internet, and legal technology. He has a master's degree in finance with a graduate certificate in business analytics from Harvard University, and an MBA from Regis University.

Deep Finance is distributed by Simon & Schuster and will be available November 16, 2021, in eBook and print versions at Amazon, Barnes and Noble, and other online booksellers.

Contact: Glenn Hopper
615.756.7354
[emailprotected]

SOURCE Glenn Hopper

Read more:
Artificial Intelligence, Automation and The Future of Corporate Finance - PRNewswire

A Look Into The Future: EEOC Announces Artificial Intelligence Initiative – JD Supra

Seyfarth Synopsis: While businesses have shifted their operations to digital platforms over the last few decades, the COVID-19 pandemic has greatly accelerated the transformation of the workplace. One area where employers have looked to increase the efficiency of their hiring processes is through the use of artificial intelligence. The EEOC has been paying attention to this trend as well, and on October 28, 2021, the Commission announced an initiative to ensure that artificial intelligence (AI) and other emerging tools used in hiring and employment decisions comply with the federal civil rights laws that the agency enforces. It behooves employers to understand and heed the Commission's new initiative.

Artificial Intelligence In The Employment Setting

Businesses are routinely looking for new and improved ways to source, screen, and onboard talented employees. The era of written applications dropped off in person by candidates has given way to electronic tools that can include online job postings, web-based applications and questionnaires, computer-aided screening tools, and video-conference interviews and presentations. Innovative employers may use keyword searches and predictive algorithms, sometimes created in-house and other times licensed through vendors, to help target and rank the candidates best suited to their needs. Employers facing the challenges of the tight labor market may see artificial intelligence as a way to bring unique efficiencies to the hiring process.
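Screening tools like those described above are often given a first-pass check against the "four-fifths rule" from the federal Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group's rate is treated as preliminary evidence of adverse impact. The sketch below is a simplified illustration; the group names and numbers are hypothetical, and a real analysis would involve statistical significance testing and legal review:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Return groups whose selection rate falls below 80% of the
    highest group's rate, mapped to their impact ratio."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < 0.8}

rates = {
    "group_a": selection_rate(50, 100),   # 0.50
    "group_b": selection_rate(30, 100),   # 0.30
}
print(four_fifths_check(rates))   # group_b's ratio is 0.6 -> flagged
```

A flag from a check like this does not establish discrimination by itself; it marks where an employer (or regulator) would look more closely at how the algorithm reached its rankings.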

Of course, while the tools for hiring may be evolving, the guardrails set by employment laws remain in place. And that means oversight by the EEOC can be expected.

The EEOC's Announcement

At an external event on October 28, 2021, EEOC Chair Charlotte A. Burrows announced the EEOC's intent to more closely scrutinize this potential area for discrimination. Burrows acknowledged both the potential benefits and challenges at hand: "Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment. At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination." Burrows' comments follow recent comments by fellow EEOC Commissioner Keith Sonderling. On October 20, 2021, Sonderling gave a speech in New York (and tweeted more broadly later) that highlighted "the potential #cybersecurity and #privacy concerns employers must be aware of when using #AI to make employment decisions." As a thought leader in this space, Sonderling also has written articles and given statements to other publications on the topic. Those public remarks from EEOC Commissioners appointed by different administrations confirm the Commission's intent to focus on this area.

The EEOC's announcement explains that the "initiative will examine more closely how technology is fundamentally changing the way employment decisions are made." It aims to guide applicants, employees, employers, and technology vendors. Burrows added that, "While the technology may be evolving, anti-discrimination laws still apply," and, perhaps most importantly for employers, "Bias in employment arising from the use of algorithms and AI falls squarely within the Commission's priority to address systemic discrimination." Id.

The EEOC laid out five prongs to its initiative: (1) establish an internal working group to coordinate the agency's work on the initiative; (2) launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications; (3) gather information about the adoption, design, and impact of hiring and other employment-related technologies; (4) identify promising practices; and (5) issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions. Id. The EEOC indicates these plans build off work it has been doing in this area since 2016. Id.

Implications For Employers

When the Commission declares an area to be a systemic discrimination priority, employers should take heed. Employers who utilize artificial intelligence, algorithmic decision-making tools, and other automated processes should evaluate their use to ensure no resulting bias. Likewise, when considering third-party vendors, employers should ask what steps have been taken to ensure that the tools are compliant with employment laws. And during EEOC investigations, employers should be on the alert for requests that suggest the EEOC is interested in taking a closer look at the use of these tools. In sum, as business practices evolve with the technology, so too does the EEOC in its enforcement priorities.

Follow this link:
A Look Into The Future: EEOC Announces Artificial Intelligence Initiative - JD Supra

Yuval Noah Harari on the power of data, artificial intelligence and the future of the human race – CBS News

When Yuval Noah Harari published his first book, "Sapiens," in 2014 about the history of the human species, it became a global bestseller and turned the little-known Israeli history professor into one of the most popular writers and thinkers on the planet. But when we met with Harari in Tel Aviv this summer, it wasn't our species' past that concerned him, it was our future. Harari believes we may be on the brink of creating not just a new, enhanced species of human, but an entirely new kind of being - one that is far more intelligent than we are. It sounds like science fiction, but Yuval Noah Harari says it's actually much more dangerous than that.

Anderson Cooper: You said, "We are one of the last generations of Homo sapiens. Within a century or two, Earth will be dominated by entities that are more different from us than we are different from chimpanzees."

Yuval Noah Harari: Yeah.

Anderson Cooper: What the hell does that mean? That freaked me out.

Yuval Noah Harari: You know, we will soon have the power to re-engineer our bodies and brains, whether it is with genetic engineering or by directly connecting brains to computers, or by creating completely non-organic entities, artificial intelligence which is not based at all on the organic body and the organic brain. And these technologies are developing at breakneck speed.

Anderson Cooper: If that is true, then it creates a whole other species.

Yuval Noah Harari: This is something which is way beyond just another species.

Yuval Noah Harari is talking about the race to develop artificial intelligence, as well as other technologies like gene editing - that could one day enable parents to create smarter or more attractive children, and brain computer interfaces that could result in human/machine hybrids.

Anderson Cooper: What does that do to a society? It seems like the rich will have access whereas others wouldn't.

Yuval Noah Harari: One of the dangers is that we will see in the coming decades a process of-- of s-- of-- greater inequality than in any previous time in history because for the first time, it will be real biological inequality. If the new technologies are available only to the rich or only to people from a certain country then Homo sapiens will split into different biological castes because they really have different bodies and-- and different abilities.

Harari has spent the last few years lecturing and writing about what may lie ahead for humankind.

Harari at Davos in 2018: In the coming generations we will learn how to engineer bodies and brains and minds.

He has written two books about the challenges we face in the future -- "Homo Deus" and "21 Lessons for the 21st Century" -- which along with "Sapiens" have sold more than 35 million copies and been translated into 65 languages. His writings have been recommended by President Barack Obama, as well as tech moguls, Bill Gates, and Mark Zuckerberg.

Anderson Cooper: You raise warnings about technology. You're also embraced by a lot of folks in Silicon Valley.

Yuval Noah Harari: Yeah.

Anderson Cooper: Isn't that sort of a contradiction?

Yuval Noah Harari: They are a bit afraid of their own power. That they have realized the immense influence they have over the world, over the course of evolution, really. And I think that spooks at least some of them. And that's a good thing. And this is why they are kind of to some extent open to listening.

Anderson Cooper: You started as a history professor. What do you call yourself now?

Yuval Noah Harari: I'm still a historian. But I think history is the study of change, not just the study of the past. But it covers the future as well.

Harari got his Ph.D. in history at Oxford, and lives in Israel, where the past is still very present. He took us to an archeological site called Tel Gezer.

Harari says cities like this were only possible because about 70,000 years ago our species - Homo sapiens - experienced a cognitive change that helped us create language, which then made it possible for us to cooperate in large groups and drive Neanderthals and all other less cooperative human species into extinction.

Harari fears we are now the ones at risk of being dominated, by artificial intelligence.

Yuval Noah Harari: Maybe the biggest thing that we are facing is really a kind of evolutionary divergence. For millions of years, intelligence and consciousness went together. Consciousness is the ability to feel things, like pain and pleasure and love and hate. Intelligence is the ability to solve problems. But computers or artificial intelligence, they don't have consciousness. They just have intelligence. They solve problems in a completely different way than us. Now in science fiction, it's often assumed that as computers will become more and more intelligent, they will inevitably also gain consciousness. But actually, it's-- it's much more frightening than that in a way they will be able to solve more and more problems better than us without having any consciousness, any feelings.

Anderson Cooper: And they will have power over us?

Yuval Noah Harari: They are already gaining power over us.

Some lenders routinely use complex artificial intelligence algorithms to determine who qualifies for loans and global financial markets are moved by decisions made by machines analyzing huge amounts of data in ways even their programmers don't always understand.

Harari says the countries and companies that control the most data will in the future be the ones that control the world.

Yuval Noah Harari: Today in the world, data is worth much more than money. Ten years ago, you had these big corporations paying billions and billions for WhatsApp, for Instagram. And people wondered, "Are they crazy? Why do they pay billions to get this application that doesn't produce any money?" And the reason why? Because it produced data.

Anderson Cooper: And data is the key?

Yuval Noah Harari: The world is increasingly kind of cut up into spheres of-- of data collection, of data harvesting. In the Cold War, you had the Iron Curtain. Now we have the Silicon Curtain between the USA and China. And where does the data go? California or does it go to Shenzhen and to Shanghai and to Beijing?

Harari is concerned the pandemic has opened the door for more intrusive kinds of data collection, including biometric data.

Anderson Cooper: What is biometric data?

Yuval Noah Harari: It's data about what's happening inside my body. What we have seen so far. It's corporations and governments collecting data about where we go, who we meet, what movies we watch. The next phase is surveillance going under our skin.

Anderson Cooper: I'm wearing a, like a tracker that tracks my heart rate, my sleep. I don't know where that information is going.

Yuval Noah Harari: You wear the KGB agent on your wrist willingly.

Anderson Cooper: And I think it's benefiting me.

Yuval Noah Harari: And it is benefiting you. I mean, the whole thing is that it's not just dystopian. It's also utopian. I mean, this kind of data can also enable us to create the best health care system in history. The question is what else is being done with that data? And who supervises it? Who regulates it?

Earlier this year, the Israeli government gave its citizens' health data to Pfizer to get priority access to their vaccine. The data did not include individual citizens' identities.

Anderson Cooper: So what does Pfizer want the data of all Israelis for?

Yuval Noah Harari: Because to develop new medicines, new treatments you need the medical data. Increasingly, that's the basis for how-- for medical research. It's not all bad.

Harari has been criticized for pointing out problems without offering solutions, but he does have some ideas about how to limit the misuse of data.

Yuval Noah Harari: One key rule is that if you get my data, the data should be used to help me and not to manipulate me. Another key rule, that whenever you increase surveillance of individuals you should simultaneously increase surveillance of the corporation and governments and the people at the top. And the third principle is that, never allow all the data to be concentrated in one place. That's the recipe for a dictatorship.

Harari speaking at The Future of Education: Netflix tells us what to watch and Amazon tells us what to buy. Eventually within 10 or 20 or 30 years such algorithms could also tell you what to study at college and where to work and whom to marry and even whom to vote for.

Without greater regulation, Harari believes we are at risk of becoming what he calls "hacked humans."

Anderson Cooper: What does that mean?

Yuval Noah Harari: To hack a human being is to get to know that person better than they know themselves. And based on that, to increasingly manipulate you. This outside system, it has the potential to remember everything. Everything you ever did. And to analyze and find patterns in this data and to get a much better idea of who you really are. I came out as gay when I was 21. It should've been obvious to me when I was 15 that I'm gay. But something in the mind blocked it. Now, if you think about a teenager today, Facebook can know that they are gay or Amazon can know that they are gay long before they do just based on analyzing patterns.

Anderson Cooper: And based on that, you can tell somebody's sexual orientation?

Yuval Noah Harari: Completely. And what does it mean if you live in Iran or if you live in Russia or in some other homophobic country and the police know that you are gay even before you know it?

Anderson Cooper: When people think about data they think about companies finding out what their likes and dislikes are but the data that you're talking about goes much deeper than that?

Yuval Noah Harari: Like, think in 20 years when the entire personal history of every journalist, every judge, every politician, every military officer is held by somebody in Beijing or in Washington? Your ability to manipulate them is like nothing before in history.

Harari lives outside Tel Aviv with his husband, Itzik Yahav. They have been together for nearly 20 years. It was Yahav who read Harari's lecture notes for a history course and convinced him to turn them into his first book "Sapiens."

Itzik Yahav: I read the lessons. I couldn't stop talking about it. For me, it was clear that it could be a huge bestseller.

Yahav is now Harari's agent, and together they started a company called Sapienship. They are creating an interactive exhibit that will take visitors through the history of human evolution and challenge them to think about the future of mankind.

Harari also just published the second installment of a graphic novel based on "Sapiens." And he's teaching courses at Israel's Hebrew University in ethics and philosophy for computer scientists and bioengineers.

Harari teaching: When people write code, they are reshaping politics and economics and ethics, and the structure of human society.

Anderson Cooper: When I think of coders and engineers, I don't think of philosophers and poets.

Yuval Noah Harari: It's not the case now, but it should be the case because they are increasingly solving philosophical and poetical riddles. If you're designing, you know, a self-driving car, so the self-driving car will need to make ethical decisions. Like suddenly, a kid jumps in front of the car. And the only way to-- to-- to prevent running over the kid is to swerve to the side and be hit by a truck. And your own-- owner who is asleep in the backseat will-- might be killed. You need to tell the algorithm what to do in this situation. So you need to actually solve the philosophical question, who to kill.

Last month the United Nations suggested a moratorium on artificial intelligence systems that seriously threaten human rights until safeguards are agreed upon, and advisers to President Biden are proposing what they call a "bill of rights" to guard against some of the new technologies. Harari says just as Homo sapiens learned to cooperate with each other many thousands of years ago, we need to cooperate now.

Yuval Noah Harari: Certainly. Now, we are at the point when we need global cooperation. You cannot regulate the explosive power of artificial intelligence on a national level. I'm not trying to kind of prophesy what will happen. I'm trying to warn people about the most dangerous possibilities, in the hope that we will do something in the present to prevent them.

Produced by Denise Schrier Cetta. Associate producer, Katie Brennan. Broadcast associate, Annabelle Hanflig. Edited by Stephanie Palewski Brumbach.

Follow this link:
Yuval Noah Harari on the power of data, artificial intelligence and the future of the human race - CBS News