
3 Ethical Considerations When Investing in AI – Manufacturing Business Technology

While Artificial Intelligence (AI) has been prevalent in industries such as the financial sector, where algorithms and decision trees have long been used in approving or denying loan requests and insurance claims, the manufacturing industry is at the beginning of its AI journey. Manufacturers have started to recognize the benefits of embedding AI into business operations, marrying the latest techniques with existing, widely used automation systems to enhance productivity.

A recent international IFS study polling 600 respondents, working with technology including Enterprise Resource Planning (ERP), Enterprise Asset Management (EAM), and Field Service Management (FSM), found more than 90 percent of manufacturers are planning AI investments. Combined with other technologies such as 5G and the Internet of Things (IoT), AI will allow manufacturers to create new production rhythms and methodologies. Real-time communication between enterprise systems and automated equipment will enable companies to automate more challenging business models than ever before, including engineer-to-order or even custom manufacturing.

Despite the productivity, cost-saving and revenue gains, the industry is now seeing the first raft of ethical questions come to the fore. Here are the three main ethical considerations companies must weigh up when making AI investments.

At first, AI in manufacturing may conjure up visions of fully automated smart factories and warehouses, but the recent pandemic highlighted how AI can play a strategic role in the back office, mapping different operational scenarios and aiding recovery planning from a finance standpoint. Scenario planning will become increasingly important. This is relevant as governments around the world start lifting lockdown restrictions and businesses plan back-to-work strategies. Those simulations require a lot of data and will be driven by optimization, data analysis and AI.

And of course, it is still relevant to use AI and Machine Learning to forecast cash. Cash is king in business right now, so there will be an emphasis on working out cashflows, bringing in predictive techniques and scenario planning. Businesses will start to prepare ways to forecast cashflow with more certainty should the next pandemic or crisis occur.
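As a minimal sketch of the kind of predictive cashflow technique described here, one can fit a simple trend to historical monthly figures and extrapolate forward. The numbers and function name below are illustrative, not from any vendor's product; a production system would layer in seasonality and scenario inputs.

```python
import numpy as np

def forecast_cashflow(history, months_ahead=3):
    """Fit a linear trend to historical monthly cashflow figures
    (least squares) and extrapolate it months_ahead into the future."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future_t = np.arange(len(history), len(history) + months_ahead)
    return slope * future_t + intercept

# Illustrative monthly net cashflow figures (in thousands)
history = [120, 125, 118, 130, 134, 138]
print(forecast_cashflow(history, months_ahead=2))
```

Scenario planning then amounts to re-running the forecast with adjusted histories or assumptions and comparing the outcomes.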

For example, earlier in the year the conversation centered on just-in-time scenarios, but now the focus is firmly on what-if planning at the macro supply-chain level.

Another example is how you can use a Machine Learning service and internal knowledge base to facilitate Intelligent Process Automation allowing recommendations and predictions to be incorporated into business workflows, as well as AI-driven feedback on how business processes themselves can be improved or automated.

The closure of manufacturing organizations and the reduction in operations due to depleted workforces highlight that AI technology in the front office is perhaps not as readily available as desired, and that progress needs to be made before it can truly provide a level of operational support similar to humans.

Optimists suggest AI may replace some types of labor, with efficiency gains outweighing transition costs. They believe the technology will come to market at first as a guide-on-the-side for human workers, helping them make better decisions and enhancing their productivity, while having the potential to upskill existing employees and increase employment in business functions or industries that are not in direct competition with AI.

Indeed, recent IFS research points to an encouraging future for a harmonized AI and human workforce in manufacturing. The IFS AI study revealed that respondents saw AI as a route to create, rather than cull, jobs. Around 45 percent of respondents stated they expect AI to increase headcount, while 24 percent believe it won't impact workforce figures.

The pandemic has demonstrated AI hasn't developed enough to help manufacturers maintain digital-only operations during unforeseen circumstances, and decision makers will be hoping it can play a greater role in mitigating extreme situations in the future.

It is easy for organizations to say they are digitally transforming. They have bought into the buzzwords, read the research, consulted the analysts, and seen the figures about the potential cost savings and revenue growth.

But digital transformation is no small change. It is a complete shift in how you select, implement and leverage technology, and it occurs company-wide. A critical first step to successful digital transformation is to ensure that you have the appropriate stakeholders involved from the very beginning. This means manufacturing executives must be transparent when assessing and communicating the productivity and profitability gains of AI against the cost of the transformative business changes needed to significantly increase margins.

When businesses first invested in IT, they had to invent new metrics that were tied to benefits like faster process completion or inventory turns and higher order completion rates. But manufacturing is a complex territory. A combination of entrenched processes, stretched supply chains, depreciating assets and growing global pressures makes planning for improved outcomes alongside day-to-day requirements a challenging prospect. Executives and their software vendors must go through a rigorous and careful process to identify earned value opportunities.

Implementing new business strategies will require capital spending and investments in process change, which will need to be sold to stakeholders. As such, executives must avoid the temptation of overpromising. They must distinguish between the incremental results they can expect from implementing AI in a narrow or defined process as opposed to a systemic approach across their organization.

There can be intended or unintended consequences of AI-based outcomes, but organizations and decision makers must understand they will be held responsible for both. We need look no further than the tragedies of self-driving car accidents and the struggles that followed, as liability was assigned not on the basis of the algorithm or the inputs to AI, but ultimately on the underlying motivations and decisions made by humans.

Executives therefore cannot afford to underestimate the liability risks AI presents. This applies in terms of whether the algorithm aligns with or accounts for the true outcomes of the organization, and the impact on its employees, vendors, customers and society as a whole. This is all while preventing manipulation of the algorithm or data feeding into AI that would impact decisions in ways that are unethical, either intentionally or unintentionally.

Margot Kaminski, associate professor at the University of Colorado Law School, raised the issue of automation bias: the notion that humans trust decisions made by machines more than decisions made by other humans. She argues the problem with this mindset is that when people use AI to facilitate decisions or make decisions, they are relying on a tool constructed by other humans, but often they do not have the technical capacity, or practical capacity, to determine if they should be relying on those tools in the first place.

This is where explainable AI will be critical: AI which creates an audit path so that, both before and after the fact, there is a clear representation of the outcomes the algorithm is designed to achieve and the nature of the data sources it is working from. Kaminski asserts explainable AI decisions must be rigorously documented to satisfy different stakeholders, from attorneys to data scientists through to middle managers.
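One way to realise such an audit path in practice is to record, for every automated decision, the inputs, the model version, the data sources and the outcome. The sketch below is illustrative only (the field names and the toy approval rule are assumptions, not from any particular framework), but it shows the shape of the record an attorney or data scientist could later inspect.

```python
import datetime
import json

def audited_decision(model_fn, inputs, model_version, data_sources, log):
    """Run a decision function and append an audit record capturing
    what went in, what came out, and where the data came from."""
    outcome = model_fn(inputs)
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "data_sources": data_sources,
        "inputs": inputs,
        "outcome": outcome,
    })
    return outcome

audit_log = []
# Toy decision rule standing in for a real model.
approve = lambda x: "approve" if x["score"] > 0.5 else "review"
decision = audited_decision(approve, {"score": 0.72}, "v1.3",
                            ["crm", "erp"], audit_log)
print(decision)
print(json.dumps(audit_log[0], indent=2))
```

The log entries can then be persisted and reviewed whenever a decision is challenged.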

Manufacturers will soon move past the point of trying to duplicate human intelligence using machines, and towards a world where machines behave in ways that the human mind is simply not capable of. While this will reduce production costs and increase the value organizations are able to return, this shift will also change the way people contribute to the industry, the role of labor, and civil liability law.

There will be ethical challenges to overcome, but those organizations that strike the right balance between embracing AI, being realistic about its potential benefits, and keeping workers happy will come out ahead. Will you be one of them?


Artificial Intelligence in Business: The New Normal in Testing Times – Analytics Insight

The COVID-19 pandemic has placed the industry in an unprecedented situation. Businesses across the globe are now planning new strategies to keep operations going and to meet clients' demands.

Work-from-Home is the new normal for both employees and employers. Twitter has told its employees that they can work from home forever if they want to. This new trend may prove effective for managing operations for a while, but it cannot be assumed to be the solution that will satisfy customers and clients in the long run.

Companies need to employ ethically approved ideas and strategies that reassure employees, clients, and customers without breaching data privacy.

In the present situation, where social distancing is a must, classroom training is no longer a plausible solution for training employees. That's where Virtual Reality comes into play.

Virtual Reality (VR), which was earlier confined largely to gaming, now has the potential to become the face of the industrial enterprise. A report by PwC states that VR and Augmented Reality have the potential to add US$1.5 trillion to the global economy by the year 2030. Another report by PwC states that VR can train employees four times faster than classroom training. Individuals trained through VR are 2.5 times more confident than those trained through classroom programs or e-courses, and 2.3 times more emotionally connected to the content they are working on. Employees trained using VR are also 1.5 times more focused than those trained through classroom programs and e-courses.

The only drawback of VR training is cost-effectiveness: it is 47 percent more expensive than classroom courses.

Ever since its evolution, one of the major concerns regarding AI amongst clients, customers, and employees has been the breach of ethical AI practices. A report by the Capgemini Research Institute states that 62% of customers surveyed would place greater trust in an organization that practices AI ethically.

For any organization to keep its business and employees safe in a time of crisis, the development of ethically viable AI is a must. This can only be achieved by practicing the ethical use of AI applications and by informing and educating customers about those practices.

A report by PwC states that planning a new strategy for both data and technology, evaluating the ethical flaws associated with existing data, and collecting only the required amount of data would help maintain trust amongst both customers and employees.

Given the present situation, sales executives face the daunting task of maintaining their operations. However, the use of AI can ease this time-consuming and laborious work. With an AI algorithm, a sales executive or manager can identify which services a client is most inclined towards. The algorithm can also help in offering new products that match the client's preferences.
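The kind of inclination scoring described here is commonly done with a propensity model. The sketch below is a hand-rolled illustration with made-up weights (in practice the weights would be learned from historical sales data, for example with logistic regression); the client names and features are hypothetical.

```python
import math

# Illustrative weights a model might learn from historical sales data:
# client features -> inclination toward a particular service.
WEIGHTS = {"past_purchases": 0.8, "support_tickets": -0.3, "site_visits": 0.5}
BIAS = -1.0

def inclination_score(client):
    """Logistic score in [0, 1]; higher means the client is more
    likely to be interested in the service."""
    z = BIAS + sum(WEIGHTS[k] * client.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

clients = {
    "Acme":   {"past_purchases": 3, "support_tickets": 1, "site_visits": 2},
    "Globex": {"past_purchases": 0, "support_tickets": 4, "site_visits": 1},
}

# Rank clients by predicted inclination so sales can prioritise outreach.
ranked = sorted(clients, key=lambda c: inclination_score(clients[c]),
                reverse=True)
print(ranked)  # → ['Acme', 'Globex']
```

The same score can drive next-product recommendations by evaluating it per service and offering the highest-scoring one.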

In a time of crisis, new solutions must be devised for repurposing the business. PwC states that this can be achieved by repurposing business assets, forming new business partnerships, rapid innovation, and testing and learning.

This will not only help build trust amongst employees but also build resilience within the organization for future endeavors.


RadNet and Hologic Announce Collaboration to Advance the Development of Artificial Intelligence Tools in Breast Health – GlobeNewswire

LOS ANGELES and MARLBOROUGH, Mass., Aug. 06, 2020 (GLOBE NEWSWIRE) -- RadNet, Inc. (Nasdaq: RDNT), a national leader in providing high-quality, cost-effective, fixed-site outpatient diagnostic imaging services, and Hologic, Inc. (Nasdaq: HOLX), an innovative medical technology company primarily focused on improving women's health, have entered into a definitive collaboration to advance the use of artificial intelligence (A.I.) in breast health.

As the world leader in mammography, Hologic will contribute capabilities and insights behind its market-leading hardware and software, and will benefit from access to data produced by RadNet's fleet of high-resolution mammography systems, the largest in the nation, to train and refine current and future products based on A.I. RadNet will share data from its extensive network of imaging centers, as well as provide in-depth knowledge of the patient pathway and workflow needs to help make a positive impact across the breast care continuum. The collaboration will enable new joint market opportunities and further efforts to build clinician confidence and develop and integrate new A.I. technologies.

"We believe the future of breast health will rely heavily on the integration of A.I. tools, such as our 3DQuorum imaging technology, as well as next-generation CAD software that aid in the early detection of breast cancer," said Pete Valenti, Hologic's Division President, Breast and Skeletal Health Solutions. "We are energized by the opportunities this transformative collaboration with RadNet creates for patients and clinicians alike. Access to data is critical in training and refining A.I. algorithms. With this collaboration, we now have the opportunity to leverage data from the largest fleet of high-resolution mammography systems to develop new tools across the continuum of care, provide workflow efficiencies, and improve patient satisfaction and outcomes."

As part of its collaboration with Hologic, RadNet intends to upgrade its entire fleet of Hologic mammography systems to feature Hologic's 3DQuorum imaging technology, powered by Genius AI. This technology works in tandem with Clarity HD high-resolution imaging technology to reduce tomosynthesis image volume for radiologists by 66 percent.i Additionally, all of RadNet's Hologic systems are anticipated to feature the Genius 3D Mammography exam, the only mammogram clinically proven and FDA approved as superior for all women, including those with dense breasts, compared with 2D mammography alone. ii,iii,iv,v

The collaboration will be bolstered by RadNet's recent acquisition of DeepHealth (Cambridge, MA), which uses machine learning to develop software tools to improve cancer detection and provide clinical decision support. Led by Dr. Gregory Sorensen, DeepHealth's team of A.I. experts is focused on enabling industry-leading care by providing products that clinicians and patients can trust. In addition, the DeepHealth team will integrate its A.I. tools within the Hologic ecosystem. "When seeking a partner and reviewing options amongst all mammography vendors, we chose to integrate our tools with Hologic's market-leading technology," said Dr. Sorensen. "Hologic's systems produce the highest level of spatial resolution in the market. Hologic also has the largest domestic footprint and market share in 3D Mammography systems. This integration will allow the DeepHealth team to train its algorithms for use with the most advanced screening technology possible. As Hologic and RadNet share their respective capabilities and tools, greater efficiency and accuracy can be achieved by our radiologists."

"Much like RadNet, Hologic is a highly innovative company and market leader in breast health," said Howard Berger, MD, RadNet's Chairman and CEO. "When Hologic's leading screening technology is paired with RadNet's approximately 1.2 million annual screening mammograms, the resulting dataset becomes a powerful tool to train algorithms. We see the future as being transformative for both of our organizations."

"We have witnessed how the application of our Genius AI technology platform has improved cancer detection, operational efficiency and clinical decision support across the breast cancer care continuum," said Samir Parikh, Hologic's Global Vice President for Research and Development, Breast and Skeletal Health Solutions. "We look forward to building upon these advances in collaboration with Dr. Sorensen and the RadNet team to expand the use of machine learning, big data applications and automated algorithms impacting global breast care."

About RadNet, Inc.
RadNet, Inc. is the leading national provider of freestanding, fixed-site diagnostic imaging services in the United States based on the number of locations and annual imaging revenue. RadNet has a network of 335 owned and/or operated outpatient imaging centers. RadNet's core markets include California, Maryland, Delaware, New Jersey and New York. In addition, RadNet provides radiology information technology solutions, teleradiology professional services and other related products and services to customers in the diagnostic imaging industry. Together with affiliated radiologists, and inclusive of full-time and per diem employees and technicians, RadNet has a total of approximately 8,600 employees. For more information, visit http://www.radnet.com.

About Hologic, Inc.
Hologic, Inc. is an innovative medical technology company primarily focused on improving women's health and well-being through early detection and treatment. For more information on Hologic, visit www.hologic.com.

The Genius 3D Mammography exam (also known as the Genius exam) is only available on a Hologic 3D Mammography system. It consists of a 2D and 3D image set, where the 2D image can be either an acquired 2D image or a 2D image generated from the 3D image set. There are more than 6,000 Hologic 3D Mammography systems in use in the United States alone, so women have convenient access to the Genius exam. To learn more, visit http://www.Genius3DNearMe.com.

Hologic, 3D Mammography, 3DQuorum, 3Dimensions, Clarity HD, Genius and Genius AI are trademarks and/or registered trademarks of Hologic, Inc., and/or its subsidiaries in the United States and/or other countries.

Forward-Looking Statements
This news release may contain forward-looking information that involves risks and uncertainties, including statements about the use of Hologic products. There can be no assurance these products will achieve the benefits described herein or that such benefits will be replicated in any particular manner with respect to an individual patient, as the actual effect of the use of the products can only be determined on a case-by-case basis. In addition, there can be no assurance that these products will be commercially successful or achieve any expected level of sales. Hologic and RadNet expressly disclaim any obligation or undertaking to release publicly any updates or revisions to any such statements presented herein to reflect any change in expectations or any change in events, conditions or circumstances on which any such data or statements are based.

This information is not intended as a product solicitation or promotion where such activities are prohibited. For specific information on what products are available for sale in a particular country, please contact a local Hologic sales representative or write to womenshealth@hologic.com.

Media and Investor Contact RadNet, Inc.:
Mark Stolper
Executive Vice President & Chief Financial Officer
310-445-2800

Media Contact Hologic, Inc.:
Jane Mazur
508-263-8764 (direct)
585-355-5978 (mobile)

Investor Contact Hologic, Inc.:
Michael Watts
858-410-8588

i Report: CSR-00116

ii Results from Friedewald, SM, et al. "Breast cancer screening using tomosynthesis in combination with digital mammography." JAMA 311.24 (2014): 2499-2507; a multi-site (13), non-randomized, historical control study of 454,000 screening mammograms investigating the initial impact of the introduction of the Hologic Selenia Dimensions system on screening outcomes. Individual results may vary. The study found an average 41% increase in invasive cancer detection, and that 1.2 (95% CI: 0.8-1.6) additional invasive breast cancers per 1000 screening exams were found in women receiving combined 2D FFDM and 3D mammograms acquired with the Hologic 3D Mammography System versus women receiving 2D FFDM mammograms only.

iii Friedewald SM, Rafferty EA, Rose SL, Durand MA, Plecha DM, Greenberg JS, Hayes MK, Copit DS, Carlson KL, Cink TM, Carke LD, Greer LN, Miller DP, Conant EF. Breast Cancer Screening Using Tomosynthesis in Combination with Digital Mammography. JAMA, June 25, 2014.

iv Bernardi D, Macaskill P, Pellegrini M, et al. Breast cancer screening with tomosynthesis (3D mammography) with acquired or synthetic 2D mammography compared with 2D mammography alone (STORM-2): a population-based prospective study. Lancet Oncol. 2016 Aug;17(8):1105-13.

v FDA submissions P080003, P080003/S001, P080003/S004, P080003/S005


VIEW: Digitisation in pathology and the promise of artificial intelligence – CNBCTV18

The COVID-19 pandemic has had a profound impact across industries, and on healthcare in particular: every aspect of it is undergoing change, from diagnosis to treatment and through the entire continuum of care. This has also created an urgency in the healthcare industry to look for innovative solutions, and given a boost to the faster, more efficient application of technologies like Artificial Intelligence (AI) and Deep Learning. Pathology is one area that stands to benefit greatly from these applications.

Pathologists today spend a significant amount of time observing tissue samples under a microscope, and they face resource shortages, growing complexity of requests, and workflow inefficiencies amid the growing burden of disease. Their work underpins every aspect of patient care, from diagnostic testing and treatment advice to the use of cutting-edge genetic technologies. They also work in multidisciplinary teams of doctors, scientists and healthcare professionals to diagnose, treat and prevent illness. With the increasing emphasis on sub-specialisation, obtaining a second opinion from specialists means shipping glass slides across laboratories, sometimes to another country, which reduces efficiency and delays diagnosis and treatment. The current situation has disrupted this workflow.

Digitisation in pathology

Digitisation in pathology has enabled an increase in efficiency and speed and enhanced the quality of diagnosis. Recent technological advances have accelerated the adoption of digitisation in pathology, similar to the digital transformation that radiology departments have experienced over the last decade. Digital pathology has enabled the conversion of the traditional glass slide to a digital image, which can then be viewed on a monitor, annotated, archived and shared digitally across the globe for consultation based on organ sub-specialisation. With digitisation, a vast data set has become available, supporting new insights for pathologists, researchers, and pharmaceutical development teams.

The promise of AI

The availability of vast data is enabling the use of Artificial Intelligence methods to further transform the diagnosis and treatment of diseases at an unprecedented pace. Human intelligence assisted by artificial intelligence can provide a well-balanced view that neither could achieve on its own. The evolution of Deep Learning neural networks and the improvement in accuracy for image pattern recognition have been staggering in the last few years. Similar to how we learn from experience, a deep learning algorithm performs a task repeatedly, each time improving a little, to achieve more accurate outcomes.

Computational Pathology is the approach to diagnosis that incorporates multiple sources of data (e.g., pathology, radiology, clinical, molecular and lab operations) and uses mathematical models to generate diagnostic inferences, presenting clinically actionable knowledge to clinicians. Computational Pathology systems are able to correlate patterns across multiple inputs from the medical record, including genomics, enhancing a pathologist's diagnostic capabilities to make a more precise diagnosis. This allows pathologists to eliminate tedious and time-consuming tasks while focusing more on interpreting data and detailing the implications for a patient's diagnosis.

Examples of AI applications that can augment a pathologist's cognitive ability and save time include identifying the sections of greatest interest in biopsies, finding metastases in the lymph nodes of breast cancer patients, counting mitoses for cancer grading, and measuring tumors point-to-point. The ultimate goal going forward is to integrate all these tools and algorithms into the existing workflow and make it seamless and more efficient.

The Challenge

However, Artificial Intelligence in pathology is quite complex. The IT infrastructure required in terms of data storage, network bandwidth and computing power is significantly higher than for radiology. Digitisation of Whole Slide Images (WSI) in pathology generates large numbers of gigapixel-sized images, and processing them needs high-performance computing. Training a deep learning network on a whole-slide image at full resolution can be very challenging. With the increased processing power of GPUs, there is promise in training deep learning networks successfully, starting with smaller regions of interest.
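A common workaround, as the passage suggests, is to tile the gigapixel slide into smaller patches and train on those regions rather than on the full image. A minimal NumPy sketch of the tiling step (the patch size and the stand-in array are illustrative; a real pipeline would read the WSI with a dedicated library and filter out empty background tiles):

```python
import numpy as np

def extract_patches(slide, patch=256, stride=256):
    """Tile a whole-slide image array into fixed-size patches.
    Training then proceeds patch by patch instead of on the full
    gigapixel image, which would not fit in GPU memory."""
    h, w = slide.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(slide[y:y + patch, x:x + patch])
    return np.stack(patches)

# A small stand-in for a (much larger) whole-slide image: 1024x1024, RGB.
slide = np.zeros((1024, 1024, 3), dtype=np.uint8)
print(extract_patches(slide).shape)  # → (16, 256, 256, 3)
```

Each patch can then be fed to the network with its expert-assigned label, and patch-level predictions are aggregated back to a slide-level result.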

Another key requirement for training deep learning algorithms is large amounts of labeled data. For supervised learning, a ground truth must first be included in the dataset to provide appropriate diagnostic context, and establishing it is time-consuming. Obtaining data adequately labeled by experts is key.

Digitisation in pathology, supported by appropriate IT infrastructure, is enabling pathologists to work remotely, without waiting for glass slides to be delivered and while maintaining social-distancing norms. The promise of Artificial Intelligence will only further accelerate the seamless integration of algorithms into the existing workflow. These unprecedented times have raised many challenges, but they are also providing a chance to accelerate the application of AI and, in turn, to achieve the quadruple aim: enhancing the patient experience, improving health outcomes, lowering the cost of care, and improving the work-life of care providers.


Artificial Intelligence and Its Partners – Modern Diplomacy

Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.

On 13 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.

Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?

The fundamental difference between today's technologies and those of the past is that they hold up a mirror of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing: the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world, questions we should be answering today, not in ten years when it will be too late.

At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?

There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.

Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no "I". Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing mystical about them either.

My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even live in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called weak AI.

We inflate the bubble of AI's importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.

Various high-level committees are discussing strong AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.

Sensationalizing threats is a trend in modern society. We take a problem that feeds peoples imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.

As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.

Yes, there is the danger that human consciousness may be robotized and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.

There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?

I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.

But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.

There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.

You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?

The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a quasi-member of society. In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity on such a scale that all previous atrocities in the history of our civilization would pale in comparison.

Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?

Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. Let us say we give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive. In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else's problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.

Thankfully, we have managed to remove any inserts relating to quasi-members of society from the group's agenda.

We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI; in fact, most oppose it.

What other controversial issues have arisen in the working groups discussions?

We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of "human-AI relationships", a term which implies the whole range of relationships possible between people. We suggested that "relationships" be changed to "interactions", which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.

Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.

Is the Russian approach to AI ethics special in any way?

We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.

I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.

How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?

Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach currently being promoted in a number of countries, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.

If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?

If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 "Artificial Intelligence" (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on "Ethically Aligned Design". I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.

As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.

So the recommendations will become the basis for regulatory standards?

Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.

Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?

We haven't seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots): these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.

Take the chemical universe, for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is 10^60, which is more than the number of atoms in the Universe. This chemical universe could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this chemical universe. Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.

How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?

We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a donor of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.

As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.

In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology: organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.

Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to overtake without catching up. If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.

We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else's standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.

From our partner RIAC


More:
Artificial Intelligence and Its Partners - Modern Diplomacy

Artificial intelligence isn't destroying jobs, it's making them more inclusive – The Globe and Mail

A new world of work is on the horizon, driven by artificial intelligence. By 2025, the World Economic Forum predicts that 52 per cent of total task hours across existing jobs will be performed by machines. By 2030, up to 800 million jobs could be replaced by technology altogether.

That said, the outlook is far from bleak. Rather than eliminating positions, technology is expected to bring about net positive jobs over the coming decade. But a fact equally as important (and often overlooked) is that artificial intelligence presents an opportunity for a more socioeconomically inclusive career start.

Throughout much of the past century, a persons success in life could be largely attributed to their socioeconomic circumstances at birth. Studies have shown that children born into middle-class homes have greater access to opportunities that are more highly correlated with successful occupational outcomes, such as good schools and financial support. As a result, these children are far more likely to succeed in primary school, high school and post-secondary education.


These advantages are compounded when it comes to hiring for jobs out of post-secondary school. Resumes, in this way, mirror our privilege.

The criteria for success in the future of work, however, present an opportunity for a fairer system to assess job fit: skills.

If machine intelligence becomes a large source of expertise (e.g., cancer-screening detection, market research analytics and driving, just to name a few), people will need to adapt and change their skill sets to remain employable. A recent white paper published by IBM rated adaptability as the most important skill that executives will be hiring for in the future. Moreover, as technology continues to advance, our technical skills continue to depreciate (by approximately 50 per cent every five years).
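To put that depreciation figure in concrete terms, a loss of roughly 50 per cent of value every five years behaves like a half-life. A short illustrative calculation (the rate itself is the article's approximation, not a law):

```python
# Technical-skill value halving roughly every five years, per the
# approximate figure cited above. Illustrative arithmetic only.

def remaining_skill_value(years: float, half_life: float = 5.0) -> float:
    """Fraction of a skill's original value left after `years`."""
    return 0.5 ** (years / half_life)

# After ten years, only a quarter of the original technical value
# remains -- one way to see why continuous upskilling matters.
print(round(remaining_skill_value(10), 2))  # 0.25
```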

As a result of all of these changes, we will have to upskill (that is, learn new skills or teach workers new ones). We'll have to learn and unlearn throughout the majority of our working lives. This changes the formula from front-loading education early in life to a life of continuous learning. It also places skills, like that adaptability mentioned above, more centrally as the currency of labour.

As the CEO of Upwork, one of the fastest-growing gig platforms in the world, wrote two years ago, "What matters to me is not whether someone has a computer science degree, but how well they can think and how well they can code." The CEO of JPMorgan Chase, Jamie Dimon, echoed a similar sentiment, stating that "the reality is, the new world of work is about skills, not necessarily degrees."

Of course, degrees will still have value. It will also take some time to readjust our job-fit assessment infrastructures. However, paths that do not include a four-year post-secondary degree will also be included in the job-fit assessment as skills become central. This can make room for more inclusive opportunities for career advancement.

Having a more inclusive job-fit assessment infrastructure, however, will not happen automatically. There are many challenges that governments and employers will have to overcome, and actions they will need to take:


The adoption of advanced technologies in the workforce will revolutionize work. In fact, our very definition of what it means to work may change. How governments and employers respond to these changes will have a large impact on whether this results in positive gains for more people. We have the potential to build a future that works for more people than it currently does, and it is up to us to make it happen.

Sinead Bovell is a futurist and founder of WAYE (Weekly Advice for Young Entrepreneurs), an organization aiming to educate young entrepreneurs on the intersection of business, technology, and the future. She is the Leadership Lab columnist for August 2020.

This column is part of Globe Careers' Leadership Lab series, where executives and experts share their views and advice about the world of work. Find all Leadership Lab stories at tgam.ca/leadershiplab and guidelines for how to contribute to the column here.

Stay ahead in your career. We have a weekly Careers newsletter to give you guidance and tips on career management, leadership, business education and more. Sign up today or follow us at @Globe_Careers.

Read more:
Artificial intelligence isn't destroying jobs, it's making them more inclusive - The Globe and Mail

Artificial Intelligence in Healthcare: Beyond disease prediction – ETHealthworld.com

By Monojit Mazumdar, Partner, and Krishatanu Ghosh, Manager, Deloitte India. In Deloitte Centre for Health Solutions' 2020 survey, conducted in January 2020, 83% of respondents mentioned Artificial Intelligence and Machine Learning (AI/ML) as one of their top two priorities.

Conventional wisdom has it that physicians cannot work from home. In the field of healthcare, the traditional leverage of AI has been in disease detection and prediction. AI engines have generally been efficient at spotting anomalies in CT scans to detect the onset of a disease.

Does it need to remain restricted to detection only? Consider a specific scenario. Many Type 1 diabetes patients now use a Continuous Glucose Monitor (CGM) to get a near-real-time reading of their blood sugar levels to determine insulin dosage. These commercially available devices pull the data and load it into a cloud-based data set-up at regular intervals.

Physicians look at the data during review and suggest adjustments to food and dosage. A simple AI algorithm can take this further by drafting a precise set of treatment recommendations for physicians to validate.
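As a sketch of the kind of layer being described, here is a minimal rule-based pass over CGM readings that drafts notes for a physician to validate. The target range and wording are hypothetical placeholders, not clinical guidance:

```python
# Minimal sketch of a recommendation layer over CGM data: readings
# come in, and the system drafts notes for a physician to review.
# Thresholds and advice strings are illustrative, not clinical.

TARGET_RANGE_MG_DL = (70, 180)  # assumed target range, for illustration

def draft_recommendations(readings_mg_dl: list[int]) -> list[str]:
    """Summarize out-of-range CGM readings as draft notes for review."""
    low, high = TARGET_RANGE_MG_DL
    lows = sum(1 for r in readings_mg_dl if r < low)
    highs = sum(1 for r in readings_mg_dl if r > high)
    notes = []
    if lows:
        notes.append(f"{lows} hypoglycemic reading(s): review insulin dosage")
    if highs:
        notes.append(f"{highs} hyperglycemic reading(s): review diet and dosage")
    if not notes:
        notes.append("Readings in target range: no change suggested")
    return notes

print(draft_recommendations([65, 120, 190, 210]))
```

The point is not the rules themselves but the division of labour: the algorithm drafts, the physician decides.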

Since routine visits are getting deferred, this simple intervention has the potential to increase both precision and accuracy of the treatment process for all conditions that require timely and routine physician visits.

This opens up the possibility of AI being used as a recommendation tool as opposed to a detection-only model. This single change has the ability to transform the entire business model of physical healthcare. Instead of facilities that physically host healthcare professionals along with patients, hospitals and clinics may start operating as digitally driven operations nerve centers.

An AI-based scheduling service may listen to the patient's condition through a chatbot or voice application. It can ask a series of questions, look at the patient's clinical records in the system and get a basic hypothesis ready for diagnosis based on the data.

It can then schedule an appointment with the most competent physician available, depending on the urgency. Before the appointment, the AI engine may prepare a complete briefing with potential diagnoses and recommended treatments. It can answer a set of follow-on questions and allow its recommendations to be overridden.
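The triage-and-scheduling idea can be sketched as a toy urgency scorer; the symptom weights and threshold below are invented for illustration, not a clinical model:

```python
# Toy urgency scorer behind the scheduling flow described above.
# Symptom weights and the urgency threshold are hypothetical.

SYMPTOM_WEIGHTS = {"chest pain": 5, "fever": 2, "cough": 1, "fatigue": 1}

def urgency_score(symptoms: list[str]) -> int:
    """Sum the weights of the reported symptoms (unknown ones score 0)."""
    return sum(SYMPTOM_WEIGHTS.get(s, 0) for s in symptoms)

def schedule(symptoms: list[str], urgent_threshold: int = 4) -> str:
    """Route the patient based on the scored urgency of reported symptoms."""
    if urgency_score(symptoms) >= urgent_threshold:
        return "same-day appointment with available specialist"
    return "routine tele-consultation slot"

print(schedule(["chest pain", "fatigue"]))  # same-day appointment
print(schedule(["cough"]))                  # routine slot
```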

If a diagnostic intervention is required, the AI-driven scheduler should be able to arrange for an agent to collect the samples and add the results to the patient dossier. After a tele- or video consultation, a personal yet non-intrusive voice AI service may do regular follow-throughs: reminders on medication, recommended treatment steps and any future treatment recommendations. The AI engine can sharpen these recommendations by constantly analyzing the data stream from devices that monitor the patient and by consulting physicians.

While this sounds futuristic, the technology components are commercially available. With strong and progressively cheaper data networks, communication has become easier. Cloud-based storage and delivery of information has cut the cost of computing infrastructure to a fraction. AI can process faster as advanced hardware gains speed. Finally, the compulsion of a pandemic has changed our mindset to believe things can be equally good, if not better, in a remote mode.

Through efficient sharing of this data with suppliers, typical gaps between demand and supply can be bridged as well. The most important component of making the system work, the need for healthcare professionals, may be calibrated too. With the increasing load on the healthcare system, a changing model of treatment aided by AI seems a good option for the future.

DISCLAIMER: The views expressed are solely of the author and ETHealthworld.com does not necessarily subscribe to it. ETHealthworld.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.

Continue reading here:
Artificial Intelligence in Healthcare: Beyond disease prediction - ETHealthworld.com

Drones, blockchain, bots, artificial intelligence: the new auditors on the block – Economic Times

Experts say that apart from the jazzy tech like drones, some of the auditors are also using artificial intelligence and bots for auditing.

Auditors feel that, at a time when they are working from home and unable to hit the ground, technology could be the only solution that gives them comfort, as the fear of fraud increases due to movement restrictions and the inability to do physical checks.

Mumbai: Though change came late to the musty world of auditing, it has finally arrived. Thanks to Covid-19, some of the top firms are using drones, robotics, artificial intelligence and blockchain technology to complete their auditing assignments during the pandemic. The eye in the sky that is the drone will now be used to cross-check whether the inventory in a power company's financials tallies with the actual position of the stock of coal on

By ET Bureau


Go here to see the original:
Drones, blockchain, bots, artificial intelligence: the new auditors on the block - Economic Times

Artificial Intelligence (AI) in the Freight Transportation Industry Market – Global Industry Growth Analysis, Size, Share, Trends, and Forecast 2020 …

Global Artificial Intelligence (AI) in the Freight Transportation Industry Market 2020 report focuses on the major drivers and restraints for the global key players. It also provides analysis of the market share, segmentation, revenue forecasts and geographic regions of the market.

The Artificial Intelligence (AI) in the Freight Transportation Industry market research study is an extensive evaluation of this industry vertical. It includes substantial information such as the current status of the Artificial Intelligence (AI) in the Freight Transportation Industry market over the projected timeframe. The basic development trends that characterize this marketplace over the forecast period are provided in the report, alongside vital pointers such as regional industry layout characteristics and numerous other industry policies.

Request a sample Report of Artificial Intelligence (AI) in the Freight Transportation Industry Market at: https://www.marketstudyreport.com/request-a-sample/2833612?utm_source=Algosonline.com&utm_medium=AN

The Artificial Intelligence (AI) in the Freight Transportation Industry market research report includes myriad pros and cons of the enterprise products. Pointers such as the impact of the current market scenario on investors are provided. Also, the study enumerates the enterprise competition trends in tandem with an in-depth scientific analysis of the downstream buyers as well as the raw material.

Unveiling a brief of the competitive scope of Artificial Intelligence (AI) in the Freight Transportation Industry market:

Unveiling a brief of the regional scope of Artificial Intelligence (AI) in the Freight Transportation Industry market:

Ask for Discount on Artificial Intelligence (AI) in the Freight Transportation Industry Market Report at: https://www.marketstudyreport.com/check-for-discount/2833612?utm_source=Algosonline.com&utm_medium=AN

Unveiling key takeaways from the Artificial Intelligence (AI) in the Freight Transportation Industry market report:

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-artificial-intelligence-ai-in-the-freight-transportation-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Reports:

1. COVID-19 Outbreak-Global Liposomes Drug Delivery Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-liposomes-drug-delivery-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

2. COVID-19 Outbreak-Global Radio Frequency (RF) Cable Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-radio-frequency-rf-cable-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Report : https://www.marketwatch.com/press-release/Automated-Parcel-Delivery-Terminals-Market-2020-08-06

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [emailprotected]

See the article here:
Artificial Intelligence (AI) in the Freight Transportation Industry Market - Global Industry Growth Analysis, Size, Share, Trends, and Forecast 2020 ...

What is artificial intelligence? – Brookings

Few concepts are as poorly understood as artificial intelligence. Opinion surveys show that even top business leaders lack a detailed sense of AI and that many ordinary people confuse it with super-powered robots or hyper-intelligent devices. Hollywood helps little in this regard by fusing robots and advanced software into self-replicating automatons such as the Terminator's Skynet or the evil HAL of Arthur C. Clarke's 2001: A Space Odyssey, which goes rogue after humans plan to deactivate it. The lack of clarity around the term enables technology pessimists to warn that AI will conquer humans, suppress individual freedom, and destroy personal privacy through a digital "1984."

Part of the problem is the lack of a uniformly agreed-upon definition. Alan Turing generally is credited with the origin of the concept when he speculated in 1950 about "thinking machines" that could reason at the level of a human being. His well-known Turing Test specifies that computers need to complete reasoning puzzles as well as humans in order to be considered "thinking" in an autonomous manner.

Turing was followed up a few years later by John McCarthy, who first used the term "artificial intelligence" to denote machines that could think autonomously. He described the threshold as "getting a computer to do things which, when done by people, are said to involve intelligence."

Since the 1950s, scientists have argued over what constitutes "thinking" and "intelligence," and what is "fully autonomous" when it comes to hardware and software. Advanced computers such as IBM's Deep Blue (at chess) and Watson (at the quiz show Jeopardy!) have already beaten humans and are capable of instantly processing enormous amounts of information.


Today, AI generally is thought to refer to machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention. According to researchers Shubhendu and Vijay, these software systems "make decisions which normally require [a] human level of expertise" and help people anticipate problems or deal with issues as they come up. As argued by John Allen and myself in an April 2018 paper, such systems have three qualities that constitute the essence of artificial intelligence: intentionality, intelligence, and adaptability.

In the remainder of this paper, I discuss these qualities and why it is important to make sure each accords with basic human values. Each of the AI features has the potential to move civilization forward in progressive ways. But without adequate safeguards or the incorporation of ethical considerations, the AI utopia can quickly turn into dystopia.

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. As such, they are designed by humans with intentionality and reach conclusions based on their instant analysis.

An example from the transportation industry shows how this happens. Autonomous vehicles are equipped with LIDAR (light detection and ranging) and remote sensors that gather information from the vehicle's surroundings. The LIDAR unit uses pulsed laser light to detect objects in front of and around the vehicle and make instantaneous decisions regarding the presence of objects, distances, and whether the car is about to hit something. On-board computers combine this information with sensor data to determine whether there are any dangerous conditions, whether the vehicle needs to shift lanes, or whether it should slow or stop completely. All of that material has to be analyzed instantly to avoid crashes and keep the vehicle in the proper lane.
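A toy version of that sense-and-decide loop might look as follows; the distance thresholds are invented for illustration, and a real vehicle fuses far richer data than a single forward range:

```python
# Toy sensor-to-decision loop: take a forward distance reading plus a
# lane-clearance flag and choose an action. Thresholds are illustrative.

def decide(front_distance_m: float, lane_clear: bool) -> str:
    """Choose a driving action from a forward-range reading."""
    if front_distance_m < 5:          # obstacle too close to avoid
        return "emergency stop"
    if front_distance_m < 20:         # obstacle ahead: evade or yield
        return "change lane" if lane_clear else "slow down"
    return "maintain speed"           # clear road

print(decide(3, lane_clear=True))    # emergency stop
print(decide(15, lane_clear=False))  # slow down
print(decide(50, lane_clear=False))  # maintain speed
```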

With massive improvements in storage systems, processing speeds, and analytic techniques, these algorithms are capable of tremendous sophistication in analysis and decisionmaking. Financial algorithms can spot minute differentials in stock valuations and undertake market transactions that take advantage of that information. The same logic applies in environmental sustainability systems that use sensors to determine whether someone is in a room and automatically adjust heating, cooling, and lighting based on that information. The goal is to conserve energy and use resources in an optimal manner.
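The sustainability example reduces to a simple sketch: an occupancy signal selects between a comfort setpoint and an energy-saving setback. The setpoints below are assumptions for illustration, not a building standard:

```python
# Occupancy-driven HVAC setpoint selection: relax the temperature
# target when the room is empty to conserve energy. Values illustrative.

def hvac_setpoint_c(occupied: bool, season: str) -> float:
    """Pick a temperature setpoint; relax it when the room is empty."""
    comfort = {"winter": 21.0, "summer": 24.0}[season]
    setback = 4.0  # degrees of energy-saving setback when unoccupied
    if occupied:
        return comfort
    # Heat less in winter, cool less in summer, when nobody is there.
    return comfort - setback if season == "winter" else comfort + setback

print(hvac_setpoint_c(occupied=True, season="winter"))   # 21.0
print(hvac_setpoint_c(occupied=False, season="summer"))  # 28.0
```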

As long as these systems conform to important human values, there is little risk of AI going rogue or endangering human beings. Computers can be intentional while analyzing information in ways that augment humans or help them perform at a higher level. However, if the software is poorly designed or based on incomplete or biased information, it can endanger humanity or replicate past injustices.

AI often is undertaken in conjunction with machine learning and data analytics, and the resulting combination enables intelligent decisionmaking. Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it with data analytics to understand specific issues.
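"Looking for underlying trends" can be as simple as fitting a line to a series of observations. The sketch below does ordinary least-squares in plain Python, as a minimal stand-in for what a machine-learning pipeline does at much larger scale.

```python
# Minimal illustration of trend detection: the ordinary least-squares
# slope of the best-fit line through (0, v0), (1, v1), ...

def trend_slope(values: list[float]) -> float:
    """Positive result means an upward trend, negative a downward one."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return covariance / variance
```

A designer who spots a meaningful slope in such a trend can then bring data analytics to bear on the specific practical problem behind it.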

For example, there are AI systems for managing school enrollments. They compile information on neighborhood location, desired schools, substantive interests, and the like, and assign pupils to particular schools based on that material. As long as there is little contentiousness or disagreement regarding basic criteria, these systems work intelligently and effectively.

Of course, that often is not the case. Reflecting the importance of education for life outcomes, parents, teachers, and school administrators fight over the importance of different factors. Should students always be assigned to their neighborhood school or should other criteria override that consideration? As an illustration, in a city with widespread racial segregation and economic inequalities by neighborhood, elevating neighborhood school assignments can exacerbate inequality and racial segregation. For these reasons, software designers have to balance competing interests and reach intelligent decisions that reflect values important in that particular community.

Making these kinds of decisions increasingly falls to computer programmers. They must build intelligent algorithms that compile decisions based on a number of different considerations. That can include basic principles such as efficiency, equity, justice, and effectiveness. Figuring out how to reconcile conflicting values is one of the most important challenges facing AI designers. It is vital that they write code and incorporate information that is unbiased and non-discriminatory. Failure to do that leads to AI algorithms that are unfair and unjust.
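One common way to balance competing considerations is an explicit weighted score. The criteria and weights below are assumptions for illustration only; in practice they would be set through community deliberation, and choosing them is exactly where the contested values live.

```python
# Hypothetical sketch of balancing competing criteria with explicit weights.
# Criteria names and weights are illustrative assumptions, not a real system.

WEIGHTS = {"proximity": 0.4, "interest_match": 0.3, "diversity": 0.3}

def score_school(school: dict, weights: dict = WEIGHTS) -> float:
    """Weighted sum of per-criterion scores, each assumed to lie in [0, 1]."""
    return sum(weights[c] * school[c] for c in weights)

def assign(student_options: list[dict]) -> dict:
    """Pick the highest-scoring school for one student."""
    return max(student_options, key=score_school)
```

Making the weights explicit at least forces the trade-off between, say, proximity and diversity into the open, where it can be debated rather than buried in code.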

The last quality that marks AI systems is the ability to learn and adapt as they compile information and make decisions. Effective artificial intelligence must adjust as circumstances or conditions shift. This may involve alterations in financial situations, road conditions, environmental considerations, or military circumstances. AI must integrate these changes in its algorithms and make decisions on how to adapt to the new possibilities.
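One simple mechanism for "adjusting as circumstances shift" is an exponentially weighted running estimate that tracks a changing signal. The smoothing factor below is an illustrative choice, not a prescription.

```python
# Illustrative online adaptation: blend each new observation into a
# running estimate so the system tracks shifting conditions over time.

def update_estimate(current: float, observation: float, alpha: float = 0.3) -> float:
    """Return the new estimate; larger alpha means faster adaptation."""
    return (1 - alpha) * current + alpha * observation
```

Each call nudges the estimate toward the latest reading, so a persistent change in conditions gradually dominates stale history.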

One can illustrate these issues most dramatically in the transportation area. Autonomous vehicles can use machine-to-machine communications to alert other cars on the road about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved experience is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions.

A similar logic applies to AI devised for scheduling appointments. There are personal digital assistants that can ascertain a person's preferences and respond to email requests for personal appointments in a dynamic manner. Without any human intervention, a digital assistant can make appointments, adjust schedules, and communicate those preferences to other individuals. Building adaptable systems that learn as they go has the potential to improve effectiveness and efficiency. These kinds of algorithms can handle complex tasks and make judgments that replicate or exceed what a human could do. But making sure they learn in ways that are fair and just is a high priority for system designers.

In short, there have been extraordinary advances in recent years in the ability of AI systems to incorporate intentionality, intelligence, and adaptability in their algorithms. Rather than being mechanistic or deterministic in how the machines operate, AI software learns as it goes along and incorporates real-world experience in its decisionmaking. In this way, it enhances human performance and augments people's capabilities.

Of course, these advances also make people nervous about doomsday scenarios sensationalized by movie-makers. Situations where AI-powered robots take over from humans or weaken basic values frighten people and lead them to wonder whether AI is making a useful contribution or runs the risk of endangering the essence of humanity.

There is no easy answer to that question, but system designers must incorporate important ethical values in algorithms to make sure they correspond to human concerns and learn and adapt in ways that are consistent with community values. This is the reason it is important to ensure that AI ethics are taken seriously and permeate societal decisions. To maximize positive outcomes, organizations should: hire ethicists who work with corporate decisionmakers and software developers; adopt a code of AI ethics that lays out how various issues will be handled; organize an AI review board that regularly addresses corporate ethical questions; maintain AI audit trails that show how coding decisions have been made; implement AI training programs so staff operationalize ethical considerations in their daily work; and provide a means for remediation when AI solutions inflict harm or damage on people or organizations.

Through these kinds of safeguards, societies will increase the odds that AI systems are intentional, intelligent, and adaptable while still conforming to basic human values. In that way, countries can move forward and gain the benefits of artificial intelligence and emerging technologies without sacrificing the important qualities that define humanity.

See more here:
What is artificial intelligence? - Brookings