Category Archives: Artificial Intelligence
COVID: Artificial intelligence in the pandemic – DW (English)
If artificial intelligence is the future, then the future is now. This pandemic has shown us just how fast artificial intelligence, or AI, works and what it can do in so many different ways.
Right from the start, AI has helped us learn about SARS-CoV-2, the virus that causes COVID-19 infections.
It's helped scientists analyse the virus's genetic information, its genome, at speed. The genome is the stuff that makes the virus, indeed any living thing, what it is. And if you want to defend yourself, you had better know your enemy.
AI has also helped scientists understand how fast the virus mutates and helped them develop and test vaccines.
We won't be able to get into all of it; this is just an overview. But let's start by recapping the basics about AI.
An AI is a set of instructions that tells a computer what to do, from recognizing faces in the photo albums on our phones to sifting through huge dumps of data for that proverbial needle in a haystack.
People often call them algorithms. It sounds fancy, but an algorithm is nothing more than a static list of rules that tells a computer: "If this, then that."
A machine learning (ML) algorithm, meanwhile, is the kind of AI that many of us like to fear. It's an AI that can learn from the things it reads and analyzes and teach itself to do new things. And we humans often feel like we can't control or even know what ML algorithms learn. But actually, we can, because we write the original code. So you can afford to relax. A bit.
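To make the distinction concrete, here is a minimal sketch in Python; the scenario and every threshold in it are invented for illustration. A static algorithm applies a rule a human wrote down, while even a very simple learning algorithm derives its rule from labelled examples:

```python
# A minimal sketch of the difference between a static rule and a learned one.
# The fever scenario and all thresholds are invented for illustration.

def static_rule(temperature_c: float) -> bool:
    """A hand-written 'if this, then that' rule: flag fevers above 38 C."""
    return temperature_c > 38.0

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """A toy 'machine learning' step: pick the threshold that best
    separates the labelled examples, instead of hard-coding it."""
    candidates = sorted(t for t, _ in examples)
    def errors(threshold: float) -> int:
        return sum((t > threshold) != label for t, label in examples)
    return min(candidates, key=errors)

# Labelled data: (temperature, was the patient actually feverish?)
data = [(36.5, False), (37.2, False), (37.9, True), (38.6, True)]
learned = learn_threshold(data)
print(static_rule(37.9))   # False: the fixed rule misses this case
print(37.9 > learned)      # True: the learned threshold catches it
```

Real ML systems learn far richer rules than a single threshold, but the principle is the same: the behaviour comes from the data, yet humans still write, and can inspect, the learning code itself.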
In summary, AI and ML systems are programs that let us process lots and lots of information, a lot of it "raw" data, very fast. They are not all evil monsters out to kill us or steal our jobs; not necessarily, anyway.
With COVID-19, AI and ML may have helped save a few lives. They have been used in diagnostic tools that read vast numbers of chest X-rays faster than any radiologist. That's helped doctors identify and monitor COVID patients.
In Nigeria, the technology has been used at a very basic but practical level to help people assess their risk of getting infected. People answer a series of questions online and, depending on their answers, are offered remote medical advice or redirected to a hospital.
The makers, a company called Wellvis, say it has reduced the number of people calling disease control hotlines unnecessarily.
One of the most important things we've had to handle is finding out who is infected, and fast. And in South Korea, artificial intelligence gave doctors a head start.
Way back when the rest of the world was still wondering whether it was time to go into the first lockdown, a company in Seoul used AI to develop a COVID-19 test in mere weeks. It would have taken them months without AI.
It was "unheard of," said Youngsahng "Jerry" Suh, head of data science and AI development at the company, Seegene, in an interview with DW.
Seegene's scientists ordered raw materials for the kits on January 24 and by February 5, the first version of the test was ready.
It was only the third time the company had used its supercomputer and Big Data analysis to design a test.
But they must have done something right because by mid-March 2020, international reports suggested that South Korea had tested 230,000 people.
And, at least for a while, the country was able to keep the number of new infections per day relatively flat.
"And we're constantly updating that as new variants and mutations come to light. So, that allows our machine learning algorithm to detect those new variants as well," says Suh.
One of the other major issues we've had to handle is tracking how the disease, especially new variants and their mutations, spreads through a community and from country to country.
In South Africa, researchers used an AI-based algorithm to predict future daily confirmed cases of COVID-19.
It was based on the country's past infection data and other information, such as the way people move from one community to another.
In May, they say, they showed the country had a low risk of a third wave of the pandemic.
"People thought the beta variant was going to spread around the continent and overwhelm our health systems, but with AI we were able to control that," says Jude Kong, who leadsthe Africa-Canada Artificial Intelligence and Data Innovation Consortium.
The project is a collaboration between Wits University and the Provincial Government of Gauteng in South Africa and York University in Canada, where Kong, who comes from Cameroon, is an assistant professor.
Kong says "data is very sparse in Africa" and one of the problems is getting over the stigma attached to any kind of illness, whether it's COVID, HIV, Ebola or malaria.
But AI has helped them "reveal hidden realities" specific to each area, and that's informed local health policies, he says.
They have deployed their AI modelling in Botswana, Cameroon, Eswatini, Mozambique, Namibia, Nigeria, Rwanda, South Africa, and Zimbabwe.
"A lot of information is one-dimensional," Kong says. "You know the number of people entering a hospital and those that get out. But hidden below that is their age, comorbidities, and the community where they live. We reveal that with AI to determine how vulnerable they are and inform policy makers."
Other types of AI, similar to facial recognition algorithms, can be used to detect infected people, or those with elevated temperatures, in crowds. And AI-driven robots can clean hospitals and other public spaces.
But, beyond that, there are experts who say AI's potential has been overstated.
They include Neil Lawrence, a professor of machine learning at the University of Cambridge, who was quoted in April 2020 calling AI "hyped."
It was not surprising, he said, that in a pandemic, researchers fell back on tried and tested techniques, like simple mathematical modelling. But one day, he said, AI might be useful.
That was only 15 months ago. And look how far we've come.
That's how to do it: if humans have COVID-19, dogs had better cuddle with their stuffed animals instead. Researchers from Utrecht in the Netherlands took nasal swabs and blood samples from 48 cats and 54 dogs whose owners had contracted COVID-19 in the previous 200 days. Lo and behold, they found the virus in 17.4% of cases. Of the animals, 4.2% also showed symptoms.
About a quarter of the infected animals were also sick. Although the course of the illness was mild in most of them, three cases were considered severe. Nevertheless, medical experts are not very concerned. They say pets do not play an important role in the pandemic. The biggest risk is human-to-human transmission.
The fact that cats can become infected with coronaviruses has been known since March 2020. At that time, the Veterinary Research Institute in Harbin, China, showed for the first time that the novel coronavirus can replicate in cats. Cats can also pass on the virus to other felines, but not very easily, said veterinarian Hualan Chen at the time.
But cat owners shouldn't panic. Felines quickly form antibodies to the virus, so they aren't contagious for very long. Anyone who is acutely ill with COVID-19 should temporarily restrict outdoor access for domestic cats. Healthy people should wash their hands thoroughly after petting strange animals.
Should this pet pig keep a safe distance from the dog when walking in Rome? That question may now have to be reassessed. Pigs are unlikely carriers of the coronavirus, the Harbin veterinarians argued in 2020. But at that time they had also cleared dogs of suspicion. Does that still apply?
Nadia, a four-year-old Malayan tiger, was one of the first big cats found to have the virus, in 2020 at a New York zoo. "It is, to our knowledge, the first time a wild animal has contracted COVID-19 from a human," the zoo's chief veterinarian told National Geographic magazine.
It is thought that the virus originated in the wild. So far, bats are considered the most likely first carriers of SARS-CoV-2. However, veterinarians assume there must have been another species as an intermediate host between them and humans in Wuhan, China, in December 2019. Only which species this could be is unclear.
The racoon dog is a known carrier of SARS-type viruses. German virologist Christian Drosten has spoken about the species being a potential virus carrier. "Racoon dogs are trapped on a large scale in China or bred on farms for their fur," he said. For Drosten, the racoon dog is clearly the prime suspect.
Pangolins are also under suspicion of transmitting the virus. Researchers from Hong Kong, mainland China and Australia have detected a virus in a Malayan pangolin that shows stunning similarities to SARS-CoV-2.
Hualan Chen also experimented with ferrets. The result: SARS-CoV-2 can multiply in these feisty mustelids in the same way as in cats, and transmission between animals occurs via droplet infection. At the end of 2020, tens of thousands of mink had to be culled at fur farms worldwide because the animals had become infected with SARS-CoV-2.
Experts have given the all-clear for people who handle poultry, such as this trader in Wuhan, China, where scientists believe the first case of the virus emerged in 2019. Humans have nothing to worry about, as chickens are practically immune to the SARS-CoV-2 virus, as are ducks and other bird species.
Author: Fabian Schmidt
NIST Proposal Aims to Reduce Bias in Artificial Intelligence – Government Technology
The National Institute of Standards and Technology (NIST) recently announced the publication of "A Proposal for Identifying and Managing Bias in Artificial Intelligence."
The proposal outlines a possible approach for reducing risk of bias in the use of artificial intelligence (AI) technology, and the agency is seeking comments from the public to strengthen that effort until Aug. 5.
Studies have shown that AI can be biased against people of color, and while there are legislative efforts in progress to tackle this issue from a policy standpoint, much of the issue hinges on the way the technology functions at its most basic level.
The proposal seeks to help industries that use AI technology develop a risk-based framework. It notes that while reducing the risk of bias in these products is critical, that risk remains insufficiently defined.
The announcement details some of the possible discriminatory outcomes that can come from AI systems, such as wrongful arrests or unfairly rejecting qualified job applicants.
NIST's proposed approach involves three stages for reducing that bias: predesign, design and development, and deployment.
The first stage is where AI products and their parameters are defined, including the determination of a product's central purpose. In this phase, thinking ahead to possible problems is critical.
The next stage is design and development, where the engineering and modeling take place. In this stage, software designers must pay close attention to context and how predictions may affect different populations.
Finally, in the deployment stage, it is important that the products continue to be monitored. In some cases, they are deployed to the public with very little moderation of what follows.
The proposal concludes that while bias is neither new nor unique to AI, identifying and reducing it can support responsible use of the technology. According to one of the report's authors, NIST's Reva Schwartz, bias exists throughout the AI life cycle.
"Determining methods for identifying and managing it is a vital next step," Schwartz said.
NIST is welcoming public feedback on the approach outlined in the proposal from people both within and outside of the tech industry. Comments can be made by downloading and completing a template and sending it via email to ai-bias@list.nist.gov.
Northwell Health uses machine learning to reduce readmissions by nearly 24% – Healthcare IT News
Reducing readmissions is a major focus for healthcare organizations operating under value-based care contracts.
Clinicians at Northwell Health, the largest healthcare provider in New York State, are applying clinical artificial intelligence to augment their post-discharge workflows and have reduced readmissions by 23.6%. The clinical AI stratified patients by their risk of readmission, identified the clinical and nonclinical factors driving that risk, and recommended targeted outreach and interventions to reduce it.
The clinicians noted the contrast between prescriptive clinical AI and traditional predictive analytics, and their impacts on patient outcomes.
"Predictive analytics as a whole is a powerful tool using a combination of historical data, statistical modeling, data mining and machine learning in order to predict events and identify patterns," said Dr. Zenobia Brown, vice president and medical director at Northwell Health, a health system based in Manhasset, New York.
"Despite those powerful insights, predictive analytics are really just a starting place in terms of enacting meaningful change at the population and individual levels.
"Prescriptive analytics, a tool that uses predictive modeling to make specific recommendations across a matrix of potential decision points, adds the ability to operationalize the information given which is key," she continued. "When orienting clinical teams to prescriptive analytics, I liken it to how we as providers make recommendations based on our understanding of the clinical data and our experience over time, which [lead]us to the 'right clinical decision.'"
Clinical staff members accept, and the data would support, that the more experienced one is (the more historical information staff has about the pattern of outcomes, given a certain set of circumstances and intervention), the better the outcomes, she explained.
"I ask my teams to imagine how much better their decision-making would be if they had onemillion times the experiences in that set of clinical data, and the experience of treating the disease onemillion different waysin a million different types of patients," Brown said. "This is what prescriptive analytics supports; a way to make decisions in managing the complexity represented by patients beyond the data set that is limited by the human brain."
The technology supports a hyper-informed recommendation based on a complex matrix of data points specific to achieving the desired outcomes.
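Brown's predictive-versus-prescriptive distinction is easy to mock up. The sketch below is purely hypothetical (invented features, weights and interventions, not Northwell's actual model): a predictive step scores readmission risk, and a prescriptive step ranks candidate interventions by how much each would reduce that score, which is how a nutrition referral can outrank more "typical" clinical steps for a patient like the one described below.

```python
# Hypothetical sketch: a predictive risk score plus a prescriptive ranking.
# Features, weights and interventions are all invented for illustration.

RISK_WEIGHTS = {
    "prior_admissions": 0.30,
    "disease_severity": 0.25,
    "lives_in_food_desert": 0.20,
    "socially_isolated": 0.15,
}

# Which risk factor each intervention is assumed to mitigate, and by how much.
INTERVENTIONS = {
    "nutrition counselling": ("lives_in_food_desert", 0.8),
    "community health visit": ("socially_isolated", 0.7),
    "medication reconciliation": ("disease_severity", 0.3),
}

def risk_score(patient: dict) -> float:
    """Predictive step: weighted sum of risk factors (each scaled 0..1)."""
    return sum(RISK_WEIGHTS[f] * patient.get(f, 0.0) for f in RISK_WEIGHTS)

def recommend(patient: dict) -> list[tuple[str, float]]:
    """Prescriptive step: rank interventions by expected risk reduction."""
    base = risk_score(patient)
    gains = []
    for name, (feature, effect) in INTERVENTIONS.items():
        adjusted = dict(patient, **{feature: patient.get(feature, 0.0) * (1 - effect)})
        gains.append((name, base - risk_score(adjusted)))
    return sorted(gains, key=lambda g: -g[1])

patient = {"prior_admissions": 0.2, "disease_severity": 0.4,
           "lives_in_food_desert": 1.0, "socially_isolated": 1.0}
print(recommend(patient))  # nutrition ranks first for this patient profile
```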
"It's a really exciting time in healthcare right now when it is widely accepted that the factors that influence the overall health of people extend way beyond the strictly clinical risk," Brown said. "Many believe that social determinants are equally if not more impactful on the overall clinical outcomes.
"We had a really interesting case of a cardiac patient who was in the healthcare field," she continued. "While diet was discussed as part of his routine care, based on his high education level and clinical background, this would not have been identified as a high-risk area. As it turned out, this particular patient had social isolation, living in a food desert, as well as other nonclinical factors that cause the prescriptive AI to recommend multiple nutrition interventions."
When the recommendation first appeared, the care navigator was perplexed. But when she contacted the patient, she found that this was in fact a gaping hole in the patient's self-management and ability to recover successfully from surgery. In the clinical domain, staff typically look at historical utilization, disease severity and acuity to determine risk.
"In terms of the more typical clinical risk factors, AI-driven recommendations contribute a deeper understanding of the most likely intervention to impact the outcome," Brown said. "In this example, what has been fascinating is that the order of recommended interventions might be unexpected.
"For instance, in a typical heart failure patient, we would typically prioritize medication reconciliation, education about daily weights, etc., to mitigate the risk of a CHF readmission/exacerbation," she continued. "In one heart failure case that comes to mind, the AI recommended a nephrology consult as the first most important intervention to accomplish."
The team might have gotten to a nephrology consult over the course of the patient care plan, but probably not as the first thing, and probably not in time to prevent a readmission, she added.
"Medical providers and people in general are very good at recognizing the patterns with which we are familiar," she noted. "It's the ones we don't recognize, don't see and can't prioritize that represent the opportunities to keep patients on the path to wellness."
So how does clinical AI integrate into the clinical workflow to augment transitions of care and prevent readmissions post-discharge?
"The first, most important step is for the providers of care to be confident in the technology," Brown stated. "If they don't believe it works, or don't see the value in how it helps their time or helps the patient, there is zero chance of good operational integration. In our case, we had a mature transitional program that was already seeing good outcomes, so it was even harder to convince providers that this would be additive.
"Having said that, an important part of the journey was sharing these cases of patterns that otherwise would have been missed; the 'good catches,'" she continued. "This reinforced the value of the tool. Also important was making sure the predictions and recommendations were timely, such that the team had appropriate lead time to impact each patient."
For the team, that meant that the AI/predictive modeling tool was being refreshed multiple times per day, while the patients were still in the hospital, so that the identification of the high-risk patients could happen as far upstream as possible.
"It also allowed for interventions to occur in the hospital that might be more difficult or less timely in the ambulatory setting specialty consults particularly," she said. "In terms of how it integrates into the workflow, it's like another vital sign or lab report. It's an additional piece of data or information that can be used to connect with the patients in meaningful ways. It does not replace what happens in that provider/navigator/patient relationship, but it can enhance the interactions."
Brown will offer more detail during her HIMSS21 session, "Applying Clinical AI to Reduce Readmissions by More Than 20%." It's scheduled for August 11, from 4:15 to 5:15 p.m., in Venetian Murano 3201A.
Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.
Navigating the Intersections of Data, Artificial Intelligence, and Privacy – JD Supra
Companies can expertly address AI-related privacy concerns with the right knowledge and team.
While the U.S. is figuring out privacy laws at the state and federal level, artificial and augmented intelligence (AI) is evolving and becoming commonplace for businesses and consumers. These technologies are driving new privacy concerns. Years ago, consumers feared a stolen Social Security number. Now, organizations can uncover political views, purchasing habits, and much more. The repercussions of data are broader and deeper than ever.
H5 recently convened a panel of experts to discuss these emerging issues and ways leaders can tackle their most urgent privacy challenges in the webinar "Everything Personal: AI and Privacy."
The panel featured Nia M. Jenkins, Senior Associate General Counsel, Data, Technology, Digital Health & Cybersecurity at Optum (UnitedHealth Group); Kimberly Pack, Associate General Counsel, Compliance, at Anheuser-Busch; Jennifer Beckage, Managing Director at Beckage; and Eric Pender, Engagement Manager at H5; and was moderated by Sheila Mackay, Managing Director, Corporate Segment at H5.
While the regulatory and technology landscape continues to change rapidly, the panel highlighted some key takeaways and solutions for protecting and managing sensitive data that leaders should consider:
Build, nurture, and utilize cross-functional teams to tackle data challenges
Develop robust and well-defined workflows to work with AI technology
Understand the type and quality of data your organization collects and stores
Engage with experts and thought leadership to stay current with evolving technology and regulations
Collaborate with experts across your organization to learn the needs of different functions and business units and how they can deploy AI
Enable your company's innovation and growth by understanding the data, technology, and risks involved with new AI
While addressing challenges related to data and privacy certainly requires technical and legal expertise, the need for strong teamwork and knowledge sharing should not be overlooked. Nia Jenkins said her organization utilizes cross-functional teams, which can pull together privacy, governance, compliance, security, and other subject matter experts to gain a line of sight into the data that's coming in and going out of the organization.
"We also have an infrastructure where people are able to reach out to us to request access to certain data pools," Jenkins said. "With that team, we are able to think through: is it appropriate to let that team use the data for their intended purpose or use?"
In addition to collaboration, well-developed workflows are paramount too. Kimberly Pack explained that her company does have a formalized team that comes together on a bi-monthly basis and defined workflows that are improving daily. She emphasized that it all begins with having clarity about how business gets done.
Jennifer Beckage highlighted the need for an organization to develop a plan, build a strong team, and understand the type and quality of the data it collects before adopting AI. Businesses have to address data retention, cybersecurity, intellectual property, and many other potential risks before taking full advantage of AI technology.
Keeping up with a dynamic regulatory landscape requires expanding your information network. Pack was frank that it's too much for one person to learn alone. She relies on following law firms, becoming involved in professional organizations and forums, and connecting with privacy professionals on LinkedIn. As she continually educates herself, she creates training for various teams at her organization, including human resources, procurement, and marketing.
"Really cascade that information," said Pack. "Really try to tailor the training so that it makes sense for people. Also, try to have tools and infographics, so people can use it, pass it along. Record all your trainings because everyone's not going to show up."
The panel discussed how their companies are using AI and whether there's any resistance. Pack noted her organization has carefully taken advantage of AI for HR, marketing, enterprise tools, and training. She noted that providing your teams with information and assistance is key to comfort and adoption.
"AI is just a tool, right?" Pack said. "It's not good, it's not bad." The privacy team conducts a privacy impact assessment to understand how the business can use the technology. Then her team places any necessary limitations and builds controls to ensure the team uses the technology ethically. Pack and Jenkins both noted that companies must proactively address potential bias and not allow automated decision-making.
The panel agreed organizations should adopt AI to remain competitive and meet consumer expectations. Pack pointed out that the purpose of AI technology is to learn; businesses adopting it now will see the benefits sooner than those that wait.
Eric Pender noted advanced technologies are becoming more common for particular uses: cybersecurity breach response, production of documents, including privilege review and identifying Personally Identifiable Information (PII), and defensible disposal. Many of these tasks have tight timelines and require efficiency and accuracy, which AI provides.
The risks of AI depend on the nature of the specific technology, according to Beckage. Its each organizations responsibility to perform a risk assessment, determine how to use the technology ethically, and perform audits to ensure the technology is working without unintended consequences.
It is also important to remember that in-house and outside counsel don't have to be dream killers when it comes to innovation. Lawyers with a good understanding of their company's data, technology, and ways to mitigate risk can guide their businesses in taking advantage of AI now and years down the road.
Pack encouraged compliance professionals to enjoy the problem-solving process. "Continue to know your business. Be in front of what their desires are, what their goals are, what their dreams are, so that you can actively support that," she said.
Pender said companies are shifting from a reactive approach to a proactive one, and advised that data that has been defensibly disposed of is not a risk to the company. Though implementing AI technology is complex and challenging, managing sensitive, personal data is achievable, and the potential benefits are enormous.
Jenkins encouraged the four Bs: be aware of the data, be collaborative with your subject matter experts, be willing to learn and ask tough questions of your team, and be open to learning more about the product, what's happening with your business team, and privacy in an ever-changing landscape.
Beckage closed out the webinar by warning organizations not to reinvent the wheel. While it's risky to copy another organization's privacy policy word for word, organizations can learn from the people in the privacy space who are doing things well.
Companies leading the way for artificial intelligence in the power sector – Power Technology
Siemens AG and Vestas Wind Systems AS are leading the way in artificial intelligence investment among top power companies, according to our analysis of a range of GlobalData data.
Artificial intelligence has become one of the key themes in the power sector of late, with companies hiring for more and more roles, making more deals, registering more patents and mentioning it more often in company filings.
These themes, of which artificial intelligence is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking and combining them, it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.
According to GlobalData analysis, Siemens AG is one of the artificial intelligence leaders in a list of high-revenue companies in the power industry, having advertised 1,397 positions in artificial intelligence, made zero deals related to the field, filed 81 patents and mentioned artificial intelligence once in company filings between January 2020 and June 2021.
Our analysis classified two companies as Most Valuable Players, or MVPs, due to their high number of new jobs, deals, patents and company filings mentions in the field of artificial intelligence. An additional seven companies are classified as Market Leaders and one as an Average Player. Eleven more companies are classified as Late Movers due to their relatively lower levels of jobs, deals, patents and company filings in artificial intelligence.
For the purpose of this analysis, we've ranked top companies in the power sector on each of the four metrics relating to artificial intelligence: jobs, deals, patents and company filings. The best-performing companies (the ones ranked at the top across all or most metrics) were categorised as MVPs, while the worst performers (companies ranked at the bottom of most indicators) were classified as Late Movers.
Siemens AG is spearheading the artificial intelligence hiring race, advertising for 1,397 new jobs between January 2020 and June 2021. The company reached peak hiring in February 2021, when it listed 134 new job ads related to artificial intelligence.
E.ON SE followed Siemens AG as the second most proactive artificial intelligence employer, advertising for 446 new positions. Schneider Electric SE was third with 178 new job listings.
When it comes to deals, Chubu Electric Power Co Inc leads with one new artificial intelligence deal announced from January 2020 to June 2021.
GlobalData's Financial Deals Database covers hundreds of thousands of M&A contracts, private equity deals, venture finance deals, private placements, IPOs and partnerships, and it serves as an indicator of economic activity within a sector.
One of the most innovative power companies in recent months was Siemens AG, having filed 81 patent applications related to artificial intelligence since the beginning of last year. It was followed by Vestas Wind Systems AS with three patents and Electricite de France SA with one.
GlobalData collects patent filings from more than 100 countries and jurisdictions. These patents are then tagged according to the themes they relate to, including artificial intelligence, based on specific keywords and expert input. The patents are also assigned to a company to identify the most innovative players in a particular field.
Finally, artificial intelligence was a commonly mentioned theme in power company filings. Schneider Electric SE mentioned artificial intelligence four times in its corporate reports between January 2020 and June 2021. Centrica Plc filings mentioned it twice, as did Southern Co.
Methodology:
GlobalData's unique job analytics enables understanding of hiring trends, strategies, and predictive signals across sectors, themes, companies, and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted and closed jobs, posting duration, experience, seniority level, educational qualifications and skills.
Four ways artificial intelligence is helping us learn about the universe – The Conversation UK
Astronomy is all about data. The universe is getting bigger, and so too is the amount of information we have about it. But some of the biggest challenges of the next generation of astronomy lie in just how we're going to study all the data we're collecting.
To take on these challenges, astronomers are turning to machine learning and artificial intelligence (AI) to build new tools to rapidly search for the next big breakthroughs. Here are four ways AI is helping astronomers.
There are a few ways to find a planet, but the most successful has been by studying transits. When an exoplanet passes in front of its parent star, it blocks some of the light we can see.
By observing many orbits of an exoplanet, astronomers build a picture of the dips in the light, which they can use to identify the planet's properties, such as its mass, size and distance from its star. Nasa's Kepler space telescope employed this technique to great success by watching thousands of stars at once, keeping an eye out for the telltale dips caused by planets.
Humans are pretty good at seeing these dips, but it's a skill that takes time to develop. With more missions devoted to finding new exoplanets, such as Nasa's Tess (Transiting Exoplanet Survey Satellite), humans just can't keep up. This is where AI comes in.
Time-series analysis techniques, which analyse data as a sequence in time, have been combined with a type of AI to successfully identify the signals of exoplanets with up to 96% accuracy.
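The underlying signal is simple enough to sketch. The toy Python below uses synthetic data and a classical phase-folding search, not the neural networks the accuracy figure refers to: it folds a light curve at trial periods and picks the period whose folded curve shows the deepest dip.

```python
import random

# Toy transit search on a synthetic light curve (not a real pipeline).
random.seed(1)
PERIOD, DIP_DEPTH, N = 50, 0.02, 500

# Synthetic flux: noise around 1.0, with a 2% dip every PERIOD samples.
flux = [1.0 + random.gauss(0, 0.003) - (DIP_DEPTH if t % PERIOD < 3 else 0.0)
        for t in range(N)]

def fold_and_score(flux, trial_period):
    """Fold the curve at a trial period; return the depth of the
    deepest phase bin relative to the overall median brightness."""
    bins = [[] for _ in range(trial_period)]
    for t, f in enumerate(flux):
        bins[t % trial_period].append(f)
    median = sorted(flux)[len(flux) // 2]
    return median - min(sum(b) / len(b) for b in bins if b)

# Scan trial periods; the true one produces the strongest folded dip.
best = max(range(20, 80), key=lambda p: fold_and_score(flux, p))
print(best)  # ~50, the injected period
```

A real search must cope with uneven sampling, stellar variability and far subtler dips across thousands of stars; handling that at scale is what the AI automates.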
Time-series models aren't just great for finding exoplanets; they are also perfect for finding the signals of the most catastrophic events in the universe: mergers between black holes and neutron stars.
When these incredibly dense bodies fall inwards, they send out ripples in space-time that can be detected by measuring faint signals here on Earth. Gravitational wave detector collaborations Ligo and Virgo have identified the signals of dozens of these events, all with the help of machine learning.
By training models on simulated data of black hole mergers, the teams at Ligo and Virgo can identify potential events within moments of them happening and send out alerts to astronomers around the world to turn their telescopes in the right direction.
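Ligo and Virgo's real pipelines are far more sophisticated, but the core trick of comparing data against simulated waveforms can be illustrated with a classical matched filter. Everything below, from the chirp shape to the noise level, is invented for the sketch:

```python
import math, random

# Classical matched-filter sketch: correlate noisy data with a simulated
# "chirp" template. Invented parameters; not Ligo's actual pipeline.
random.seed(2)
N = 2000

def chirp(t0: int, length: int = 200) -> list[float]:
    """A toy chirp: rising frequency over `length` samples starting at t0."""
    out = [0.0] * N
    for i in range(length):
        phase = 0.002 * i * i            # quadratically increasing phase
        out[t0 + i] = math.sin(phase) * (i / length)
    return out

template = chirp(0)[:200]
data = [s + random.gauss(0, 0.5) for s in chirp(1200)]  # event buried at t=1200

# Slide the template along the data; peaks in correlation flag candidates.
scores = [sum(template[i] * data[t + i] for i in range(len(template)))
          for t in range(N - len(template))]
print(max(range(len(scores)), key=scores.__getitem__))  # ~1200
```

The machine learning systems the collaborations use learn to spot such patterns directly, fast enough to raise an alert within moments of an event.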
When the Vera Rubin Observatory, currently being built in Chile, comes online, it will survey the entire night sky every night, collecting over 80 terabytes of images in one go, to see how the stars and galaxies in the universe vary with time. One terabyte is 8,000,000,000,000 bits.
Over the course of the planned operations, the Legacy Survey of Space and Time being undertaken by Rubin will collect and process hundreds of petabytes of data. To put it in context, 100 petabytes is about the space it takes to store every photo on Facebook, or about 700 years of full high-definition video.
You won't be able to just log onto the servers and download that data, and even if you did, you wouldn't be able to find what you're looking for.
Machine learning techniques will be used to search these next-generation surveys and highlight the important data. For example, one algorithm might be searching the images for rare events such as supernovae (dramatic explosions at the end of a star's life) and another might be on the lookout for quasars. By training computers to recognise the signals of particular astronomical phenomena, the team will be able to get the right data to the right people.
As we collect more and more data on the universe, we sometimes even have to curate and throw away data that isn't useful. So how can we find the rarest objects in these swathes of data?
One celestial phenomenon that excites many astronomers is strong gravitational lenses. This is what happens when two galaxies line up along our line of sight and the closest galaxy's gravity acts as a lens and magnifies the more distant object, creating rings, crosses and double images.
Finding these lenses is like finding a needle in a haystack, a haystack the size of the observable universe. It's a search that's only going to get harder as we collect more and more images of galaxies.
In 2018, astronomers from around the world took part in the Strong Gravitational Lens Finding Challenge where they competed to see who could make the best algorithm for finding these lenses automatically.
The winner of this challenge used a model called a convolutional neural network, which learns to break down images using different filters until it can classify them as containing a lens or not. Surprisingly, these models were even better than people, finding subtle differences in the images that we humans have trouble noticing.
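The challenge-winning model itself isn't reproduced here, but the general shape of such a classifier is easy to sketch in PyTorch, with layer sizes invented for illustration: stacks of learnable convolutional filters that progressively break an image down, followed by a yes/no output.

```python
import torch
import torch.nn as nn

# Illustrative lens/no-lens classifier; layer sizes are invented and not
# the actual challenge-winning architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learnable image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                  # scores: [no lens, lens]
)

fake_batch = torch.randn(4, 1, 64, 64)           # 4 single-band 64x64 images
print(model(fake_batch).shape)                   # torch.Size([4, 2])
```

Trained on many labelled examples, the filters in the early layers learn to respond to arcs and rings, the telltale geometry of lensing.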
Over the next decade, using new instruments like the Vera Rubin Observatory, astronomers will collect petabytes of data; that's thousands of terabytes. As we peer deeper into the universe, astronomers' research will increasingly rely on machine-learning techniques.
WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use – World Health Organization
Artificial Intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to new WHO guidance published today.
The report, "Ethics and governance of artificial intelligence for health," is the result of two years of consultations held by a panel of international experts appointed by WHO.
"Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. "This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls."
Artificial intelligence can be used, and in some wealthy countries is already being used, to improve the speed and accuracy of diagnosis and screening for diseases; to assist with clinical care; to strengthen health research and drug development; and to support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management.
AI could also empower patients to take greater control of their own health care and better understand their evolving needs. It could also enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.
However, WHO's new report cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.
It also points out that opportunities are linked to challenges and risks, including unethical collection and use of health data; biases encoded in algorithms, and risks of AI to patient safety, cybersecurity, and the environment.
For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.
The report also emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.
AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.
Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology's design, development, and deployment.
To limit the risks and maximize the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance:
Protecting human autonomy: In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.
Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.
Ensuring transparency, explainability and intelligibility: Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
Fostering responsibility and accountability: Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.
Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.
These principles will guide future WHO work to support efforts to ensure that the full potential of AI for healthcare and public health will be used for the benefits of all.
Instances of Ethical Dilemma in the Use of Artificial Intelligence – Analytics Insight
To be or not to be: the ethical dilemma is a constant in human life whenever a decision must be made. In the world of technology, artificial intelligence comes closest to human-like attributes. It aims to imitate the automation of human intelligence in operations and decision-making. However, an AI machine can't make a truly independent decision: the mentality of the programmer is reflected in the machine's operation. When an autonomous car faces an imminent accident, for example, its intelligence might have to decide whom to save first, or whether a child should be saved before an adult. Among the ethical challenges faced by AI systems are lack of transparency, biased decisions, surveillance practices for data gathering and the privacy of court users, and fairness and risk for human rights and other fundamental values.
While human attention and patience are limited, the emotional energy of a machine is not; rather, a machine's limitations are technical. Although this could benefit certain fields like customer service, this limitless capacity could create human addiction to robot affection. Using this idea, many apps are using algorithms to nurture addictive behavior. Tinder, for example, is designed to keep users on the A.I.-powered app by instigating less likely matches the longer a user engages in a session.
One of the most pressing and widely discussed A.I. ethics issues is trained bias in systems that involve predictive analysis, like hiring or crime prediction. Amazon most famously ran into a hiring bias issue after training an A.I.-powered algorithm to surface strong candidates based on historical data. Because previous candidates were chosen through human bias, the algorithm favored men as well. This showcased gender bias in Amazon's hiring process, which is not ethical. In March, the NYPD disclosed that it developed Patternizr, algorithmic machine-learning software that sifts through police data to find patterns and connect similar crimes, and has used it since 2016. The software is not used for rape or homicide cases and excludes factors like gender and race when searching for patterns. Although this is a step forward from previous algorithms that were trained on racial bias to predict crime and parole violation, actively removing bias from historical data sets is not standard practice. That means this trained bias is at best an insult and an inconvenience; at worst, a risk to personal freedom and a catalyst of systematic oppression.
Deepfakes are among the most widely known uses of AI. The technique uses A.I. to superimpose images, video and audio onto other media, creating a false impression of the original, most often with malicious intent. Deepfakes can include face swaps, voice imitation, facial re-enactment, lip-syncing, and more. Unlike older photo and video editing techniques, deepfake technology will become progressively more accessible to people without great technical skill. Similar tech was used during the last U.S. presidential election, when Russia implemented reality hacking (like the influence of fake news on our Facebook feeds). This information warfare is becoming commonplace and exists not only to alter facts but to powerfully change opinions and attitudes. The practice was also used during the Brexit campaign and is increasingly cited as an example of rising political tensions and confused global perspectives.
Most consumer devices (from cell phones to Bluetooth-enabled light bulbs) use artificial intelligence to collect our data in order to provide better, more personalized service. If consensual, and if the data collection is done with transparency, this personalization is an excellent feature. Without consent and transparency, it could easily become malignant. Although a phone-tracking app is useful after leaving your iPhone in a cab, or losing your keys between the couch cushions, tracking individuals can be unsafe at a small scale (as for domestic abuse survivors seeking privacy) or at a large scale (like government compliance).
These instances show how artificial intelligence raises ethical dilemmas. They also confirm that AI can only be ethical if its creators and programmers want it to be.
The Uneasy Alliance Between Business Leaders And Artificial Intelligence – Forbes
Regulating artificial intelligence in the EU: top 10 issues for businesses to consider – JD Supra
The European Commission's Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (Draft AI Regulation) is far from final, and even further from taking effect. It was published on April 21, 2021, is open to consultation until August 6, 2021, and is subject to the ordinary legislative procedure, which requires no less than two years to conclude.
We already reviewed some of the aspects of the Draft AI Regulation in our recent article focusing on high-risk AI systems. In addition, we have had a chance to talk and brainstorm about the Draft AI Regulation with good friends and notable clients, sharing our first comments and impressions. In this article, we have shortlisted the top 10 issues that have emerged so far in our conversations: they include five things that were generally liked, and five things that a number of commentators would have wanted more of.
Finally!
After years spent wondering what an artificial intelligence system is, we now have a definition: "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with" (an "AI System").
Compared with older definitions, this one makes no reference to personal data. The omission is only apparent: the Draft AI Regulation in fact focuses heavily on personal data, both in requiring that high-risk AI Systems be trained only on high-quality data sets and in its many similarities with the GDPR, which suggest that the GDPR served as a starting point for the Draft AI Regulation.
The fact that the Draft AI Regulation has many similarities with the GDPR should come as no surprise, bearing in mind that (as we all know) artificial intelligence is fed by data.
These similarities include the establishment of the European Artificial Intelligence Board, composed of representatives of national supervisory bodies and the European Data Protection Supervisor, and chaired by the EU Commission.
The European Artificial Intelligence Board, like the European Data Protection Board, will be in charge of supporting EU member states in the uniform interpretation and application of the new regulatory regime. This task is of paramount importance, given that the EU Commission has opted for a regulation (rather than a directive), confirming the need for a legal regime implemented as consistently as possible Europe-wide.
Furthermore, the European Artificial Intelligence Board will also help achieve a unified approach to the European AI strategy.
AI regulatory sandboxes make it clear that the purpose of the Draft AI Regulation is not to limit innovation but to foster it.
They will offer a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before they are placed on the market or put into service, under the direct supervision and guidance of the competent authorities (including data protection authorities, where personal data are processed within the sandbox). In doing so, the EU Commission aims to ensure compliance with the requirements of the Regulation and, where relevant, other Union and member state legislation supervised within the sandbox.
Almost all the new rules will apply to high-risk AI systems only.
The EU Commission pointed out that most AI systems entail only minimal risk and therefore fall outside the scope of the Draft AI Regulation. This clearly results in a simplification, since the (rich) new regulatory and compliance regime (bearing in mind, too, that certain AI systems are banned outright) will apply in a limited number of cases. Consequently, it is unlikely that the Draft AI Regulation will disincentivize the use of AI Systems.
As noted, the risk-based approach proposed by the Draft AI Regulation is undoubtedly appreciated; however, clearer criteria for determining which risk category an AI system belongs to may be needed.
It is predictable that many AI systems will fall halfway between the high-risk and the limited-risk categories. In these cases, the decision to treat the AI system as high risk (for precautionary reasons) would trigger a very different regulatory/compliance regime, with considerably more onerous requirements. Such requirements would apply not only to the developer of the AI system but to all people and entities involved in the AI system's lifecycle (in light of the extended scope of application proposed by the EU Commission). The risk is clearly that, when in doubt, decisions will favor the lower level of risk, with consequently lower protection for individuals.
Furthermore, the Draft AI Regulation does not provide any criteria to distinguish between limited-risk and minimal-risk AI Systems, so differentiating between the two will not be easy. In this case, however, adopting a precautionary approach and treating the AI system as limited risk (rather than minimal risk) entails only one regulatory requirement: disclosing to users that they are interacting with an AI system, thus giving them the opportunity to stop the interaction.
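To make the tiering concrete, here is a minimal, hypothetical sketch in Python of how the proposal's four risk tiers and their obligations might be modelled. The tier names follow the proposal, but the obligation lists are our simplified summaries for illustration, not the regulation's wording.

# Hypothetical sketch of the Draft AI Regulation's four risk tiers.
# Obligations are simplified summaries; the open question discussed
# above is precisely how borderline systems get assigned a tier.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned practices
    HIGH = "high"              # full compliance regime
    LIMITED = "limited"        # transparency duty only
    MINIMAL = "minimal"        # outside the regulation's scope


def obligations(tier: RiskTier) -> list[str]:
    """Return the (simplified, illustrative) obligations for a tier."""
    if tier is RiskTier.PROHIBITED:
        return ["may not be placed on the market or put into service"]
    if tier is RiskTier.HIGH:
        return [
            "risk management system",
            "high-quality training data",
            "technical documentation and logging",
            "human oversight",
            "conformity assessment before market placement",
        ]
    if tier is RiskTier.LIMITED:
        # The single requirement noted above: tell users they are
        # interacting with an AI system so they can stop if they wish.
        return ["disclose to users that they are interacting with an AI system"]
    return []  # minimal risk: no new obligations


print(obligations(RiskTier.LIMITED))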
One of the major concerns when speaking about artificial intelligence is: who is liable for defaults and damages attributable to acts and omissions of the AI system?
It is clear that the EU Commission is trying to reinforce trust in AI by adopting an ex ante approach, i.e. trying to ensure that only safe AI systems are used by reinforcing the regulatory and compliance rules to be complied with before placing an AI system on the market, as well as during its lifecycle.
However, clear rules on the allocation of liability may also help achieve this purpose. When individuals know for sure that someone (who can be clearly and easily identified) is ultimately liable for any default of an AI System, they will undoubtedly be more willing to use (and trust) it.
It has been disclosed that new rules on liability regimes will soon be issued: hopefully, they will be applicable to all AI systems (and not only to high-risk AI systems).
Like the GDPR, the Draft AI Regulation provides for maximum fines only (up to €30 million or 6% of the total annual turnover of the preceding financial year, whichever is higher), without clarifying in detail the criteria that will be used to assess the amount of the fine to be issued in each concrete case.
As happened with the GDPR, there will likely be guidelines from the European and national authorities on how to assess fines, but if such guidelines are not sufficiently detailed and specific, the difficulties experienced in the past three years of applying the GDPR will recur. Risks in using AI systems will not be concretely assessable, which may act as a disincentive to employing artificial intelligence.
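For a sense of the arithmetic behind the top-tier cap (the higher of €30 million or 6% of annual turnover), here is a minimal worked sketch; the function name and the turnover figures in the examples are illustrative only.

# Illustrative arithmetic only: the top-tier maximum fine is the
# higher of EUR 30 million or 6% of the total annual turnover of the
# preceding financial year. Actual fines would be set case by case.
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the top-tier fine: max(EUR 30m, 6% of turnover)."""
    return max(30_000_000, 0.06 * annual_turnover_eur)


# EUR 2 billion turnover: 6% (EUR 120m) exceeds the EUR 30m floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
# EUR 50 million turnover: 6% is only EUR 3m, so the EUR 30m floor applies.
print(f"{max_fine_eur(50_000_000):,.0f}")     # 30,000,000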
The Draft AI Regulation requires that high-risk AI systems be explainable, and therefore understandable, by human beings. This is rather utopian, since the functioning of AI systems may not be fully clear even to their own developers.
To ensure that the Draft AI Regulation remains relevant and is applied in its entirety, a more concrete approach would be welcome: for example, providing exceptions to the general rule for cases where the mechanisms controlling the AI system must themselves be subject to control.
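As a purely illustrative aside, one concrete form of explainability is an inherently interpretable model whose full decision logic can be printed for human review. The sketch below (assuming scikit-learn is available, with invented toy data and feature names) shows what that looks like; real high-risk systems are rarely this transparent, which is exactly the difficulty raised above.

# A minimal sketch of one form of "explainability": a small decision
# tree whose learned rules can be rendered as plain if/then statements
# that a human reviewer can follow and audit. All data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training set: two made-up applicant features, binary decision.
X = [[25, 30_000], [40, 80_000], [35, 50_000], [52, 120_000], [23, 20_000]]
y = [0, 1, 1, 1, 0]  # 0 = reject, 1 = accept (hypothetical labels)

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the tree's full decision logic in readable form.
print(export_text(model, feature_names=["age", "income_eur"]))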
We all know the problems that data controllers and data processors are (still) experiencing, after the so-called Schrems II judgment, in lawfully transferring personal data outside the European Union.
That judgment will undoubtedly affect the data transfers needed to assemble the high-quality data sets that the Draft AI Regulation requires for training AI systems. Predictably, the greater the difficulties and burdens in processing personal data, the greater the difficulties and burdens in ensuring the highest quality of those data sets.
Perhaps new and clearer rules on data transfers will be issued soon; they may also include derogations, exceptions or specific provisions for AI systems.
What are your top 10 issues with regard to the Draft AI Regulation? Do you agree with the above, or do you have others?
See original here:
Regulating artificial intelligence in the EU: top 10 issues for businesses to consider - JD Supra