Category Archives: Artificial Intelligence
Artificial intelligence in factory maintenance is no longer a matter of the future – ReadWrite
Undetected machine failures are the most expensive ones. That is why many manufacturing companies are looking for solutions that automate maintenance and reduce its costs. Traditional vibrodiagnostic methods often come too late: readings taken only occasionally, when a diagnostician is present, may not detect a fault in advance. A 2017 position paper from Deloitte (Deloitte Analytics Institute, 7/2017) examined maintenance in the environment of Industry 4.0. The benefits of predictive maintenance depend on the industry and the specific processes it is applied to. However, Deloitte's analyses at that time had already concluded that material cost savings amount to 5 to 10% on average, equipment uptime increases by 10 to 20%, overall maintenance costs are reduced by 5 to 10%, and maintenance planning time is even reduced by 20 to 50%! Neuron Soundware has developed an artificial intelligence-powered technology for predictive maintenance.
Stories from companies that have embarked on the digital journey are no longer just science fiction. They are real examples of how companies are coping with the lack of skilled labor on the market, typically the maintenance mechanic who regularly walks around all the machines and diagnoses their condition by listening to them. Some companies are now looking for new maintenance technologies to fill this role.
A failure without early identification means replacing the entire piece of equipment or one of its parts, then waiting for a spare part that may not be in stock right now, because it is expensive to stock replacement equipment. It can also mean devaluation of the components currently in production and thus the discarding of an entire production run. Last but not least, it can represent up to XY hours of production downtime. The losses might run into tens of thousands of euros.
Such a critical scenario can be avoided if the maintenance technology is equipped with artificial intelligence in addition to mechanical knowledge of the machines. The system applies this knowledge to the current state of the machine, recognizes which anomalous behavior is occurring, and, based on that, sends the corresponding alert with precise maintenance instructions. Manufacturers of mechanical equipment such as lifts, escalators, and mobile equipment already use this today.
However, predictive maintenance technologies have much wider applications. Thanks to the learning capabilities of artificial intelligence, they are very versatile. For example, the technology can assist in end-of-line testing, identifying defects in produced goods that are invisible to the eye and appear at random.
The second area of application lies in the monitoring of production processes. We can imagine this with the example of a gravel crusher. A conveyor delivers different-sized pieces of stone into grinders, which are to yield a given granularity of gravel. Previously, the manufacturer would run the crusher for a predetermined amount of time, to make sure that even the largest pieces of rock were sufficiently crushed. With artificial intelligence inferring the size of the gravel from its sound, the operator can stop the crushing process at the right point. This means not only saving wear and tear on the crushing equipment but, more importantly, saving time and increasing the volume of gravel delivered per shift. This brings great financial benefit to the producer.
When implementing predictive maintenance technology, it does not matter how big the company is. The most common decision criterion is the scalability of the deployed solution. In companies with a large number of mechanically similar devices, it is possible to quickly collect samples that represent individual problems and from which the neural network learns. It can then handle any number of machines at once. The more machines, the more opportunities for the neural network to learn and apply detection of unwanted sounds.
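To make that description concrete, here is a minimal, hypothetical sketch of sound-based anomaly detection in Python. It is not Neuron Soundware's actual technology, which trains neural networks on labeled fault recordings; the placeholder data and the simple isolation-forest detector below only illustrate the underlying pattern of learning a baseline from recordings of healthy machines and flagging sounds that deviate from it.

```python
# Hypothetical illustration: flag machine recordings whose spectrum deviates
# from a baseline learned on healthy-machine audio. All data is placeholder.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import IsolationForest

def spectral_features(audio: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Summarize a recording as the mean log-power per frequency band."""
    _, _, power = spectrogram(audio, fs=sample_rate, nperseg=1024)
    return np.log1p(power).mean(axis=1)            # one value per frequency bin

# healthy_clips would be real recordings of machines running normally.
healthy_clips = [np.random.randn(16_000) for _ in range(200)]    # placeholder
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(np.stack([spectral_features(clip) for clip in healthy_clips]))

def is_anomalous(clip: np.ndarray) -> bool:
    """True if the clip's spectrum does not resemble the healthy baseline."""
    return detector.predict(spectral_features(clip).reshape(1, -1))[0] == -1
```

The scalability argument in the article maps directly onto this setup: the more machines contribute recordings, the better the baseline the detector can learn.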
Condition monitoring technologies are usually designed for larger plants rather than for workshops with a few machine tools. However, as hardware, data transmission, and processing get progressively cheaper, the technology is getting there too. So even a home marmalade maker will soon have the confidence that his machines will produce enough, deliver orders to customers on time, and not ruin his reputation.
In the future, predictive maintenance will be a necessity, not only in industry but also in larger electronic appliances such as refrigerators and coffee machines, or in cars. For example, we can all recognize a damaged exhaust or an unusual-sounding engine. Nevertheless, by the time we notice, it is often too late to drive the car safely home from a holiday without a visit to the workshop. With an AI-driven detection device installed, we will know about the impending breakdown early and be able to resolve the problem before the engine seizes up and we have to call a towing service.
Pavel is a tech visionary, speaker, and founder of the AI and IoT startup Neuron Soundware. He started his career at Accenture, where he took part in 35+ technology and strategy projects on 3 continents over 11 years. He got into entrepreneurship in 2016 when he founded a company focused on predictive machine maintenance using sound analysis.
Here is the original post:
Artificial intelligence in factory maintenance is no longer a matter of the future - ReadWrite
Artificial Intelligence and Chemical and Biological Weapons – Lawfare – Lawfare
Sometimes reality is a cold slap in the face. Consider, as a particularly salient example, a recently published article concerning the use of artificial intelligence (AI) in the creation of chemical and biological weapons (the original publication, in Nature, is behind a paywall, but this link is a copy of the full paper). Anyone unfamiliar with recent innovations in the use of AI to model new drugs will be unpleasantly surprised.
Here's the background: In the modern pharmaceutical industry, the discovery of new drugs is rapidly becoming easier through the use of artificial intelligence/machine learning systems. As the authors of the article describe their work, they have spent decades building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.
In other words, computer scientists can use AI systems to model what new beneficial drugs may look like for specifically targeted afflictions and then task the AI to work on discovering possible new drug molecules to use. Those results are then given to the chemists and biologists who synthesize and test the proposed new drugs.
Given how AI systems work, the benefits in speed and accuracy are significant. As one study put it:
The vast chemical space, comprising more than 10^60 molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide a quicker validation of the drug target and optimization of the drug structure design.
Specifically, AI gives society a guide to the quicker creation of newer, better pharmaceuticals.
The benefits of these innovations are clear. Unfortunately, the possibilities for malicious uses are also becoming clear. The paper referenced above is titled "Dual Use of Artificial-Intelligence-Powered Drug Discovery." And the dual use in question is the creation of novel chemical warfare agents.
One of the factors investigators use to guide AI systems and narrow down the search for beneficial drugs is a toxicity measure known as LD50 (where LD stands for "lethal dose" and the 50 indicates the dose needed to kill half of a test population; the lower the LD50, the more toxic the compound). For a drug to be practical, designers need to screen out new compounds that might be toxic to users and thus avoid wasting time trying to synthesize them in the real world. And so, drug developers can train and instruct an AI system to apply an LD50-based toxicity threshold and have the AI screen out and discard candidate compounds that it predicts would have harmful effects. As the authors put it, the normal process is to use a generative model [that is, an AI system, which] penalizes predicted toxicity and rewards predicted target activity. When used in this traditional way, the AI system is directed to generate new molecules for investigation that are likely to be safe and effective.
But what happens if you reverse the process? What happens if, instead of penalizing predicted toxicity, a generative model is created to reward it, preferentially developing molecules with the lowest predicted LD50 values?
One rediscovers VX gas, one of the most lethal substances known to humans. And one predictively creates many new substances that are even worse than VX.
One wishes this were science fiction. But it is not. As the authors put the bad news:
In less than 6 hours ... our model generated 40,000 [new] molecules ... In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents. This was unexpected because the datasets we used for training the AI did not include these nerve agents.
In other words, the developers started from scratch and did not artificially jump-start the process by using a training dataset that included known nerve agents. Instead, the investigators simply pointed the AI system in the general direction of looking for effective lethal compounds (with standard definitions of effectiveness and lethality). Their AI program then discovered a host of known chemical warfare agents and also proposed thousands of new ones for possible synthesis that were not previously known to humankind.
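To see how small the technical change is, here is a toy, hypothetical sketch, not the Nature paper's actual model. The two predictor functions stand in for learned property models, and flipping the sign of a single weight turns an objective that screens out toxic candidates into one that seeks them.

```python
# Toy illustration of the "dual use" sign flip; all functions are placeholders.
import random

def predict_activity(molecule: str) -> float:
    """Placeholder: predicted activity against the therapeutic target (higher is better)."""
    return random.random()

def predict_ld50(molecule: str) -> float:
    """Placeholder: predicted LD50 in mg/kg (lower means more toxic)."""
    return random.uniform(0.01, 1000.0)

def score(molecule: str, toxicity_weight: float) -> float:
    # Normal drug discovery: toxicity_weight > 0 penalizes toxic candidates.
    # Reversed ("dual use"): toxicity_weight < 0 rewards the most toxic ones.
    toxicity = 1.0 / predict_ld50(molecule)        # higher value = more toxic
    return predict_activity(molecule) - toxicity_weight * toxicity

candidates = [f"molecule_{i}" for i in range(10_000)]
safe_and_active = max(candidates, key=lambda m: score(m, toxicity_weight=+1.0))
most_lethal     = max(candidates, key=lambda m: score(m, toxicity_weight=-1.0))
```

In a real system the candidates would come from a generative model and the predictors from trained networks, but the dual-use pivot the authors describe is essentially this kind of inversion of the objective.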
The authors stopped at the theoretical point of their work. They did not, in fact, attempt to synthesize any of the newly discovered toxins. And, to be fair, synthesis is not trivial. But the entire point of AI-driven drug development is to point drug developers in the right direction, toward readily synthesizable, safe, and effective new drugs. And while synthesis is not easy, it is a well-trodden pathway in the market today. There is no reason, none at all, to think that the synthesis path is not equally feasible for lethal toxins.
And so, AI opens the possibility of creating new catastrophic biological and chemical weapons. Some commentators condemn new technology as inherently evil tech. However, the better view is that all new technology is neutral and can be used for good or ill. But that does not mean nothing can be done to avoid the malignant uses of technology. And there is a real risk when technologists run ahead with what is possible, before human systems of control and ethical assessment catch up. Using artificial intelligence to develop toxic biological and chemical weapons would seem to be one of those use-cases where severe problems may lie ahead.
Here is the original post:
Artificial Intelligence and Chemical and Biological Weapons - Lawfare - Lawfare
Viewpoint: Artificial intelligence poised to play greater role in science – Science Business
COVID-19 changed the way we all work, live and socialise, with technology and communication tools more important than ever. The world of scientific research was no exception, with the use of technology, and specifically artificial intelligence (AI), increasing. AI is now increasingly relied upon to speed up research and generate new insights across all of science.
The positive endorsement of AI was noted in the second iteration of Elsevier's global research project, Research Futures, which aims to gather the views and opinions of researchers across the world to help us as science publishers better understand the challenges and opportunities they face.
Forty-seven percent of researchers believe that a long-lasting impact of the pandemic will be a greater dependency on technology and AI in their work, underlining the importance of AI for the future of research.
The study shows the number of researchers using AI extensively has increased from 12% in 2020 to 16% in 2021. In materials science, which covers the structure and properties of materials, the discovery of new materials and how they are made, 18% of researchers are now likely to be extensive AI users, up from zero a year ago. In chemistry, the number has grown from 2% to 19% and in mathematics from 4% to 13%. Unsurprisingly, 64% of computer scientists say they are heavy users of AI.
Most often researchers who use AI do so to analyse research results (66%) or to spot defects or issues with data (49%), while a minority are using it to help generate new hypotheses (17%).
As we note in our Research Futures Report 2.0, AI has been crucial to healthcare throughout the pandemic. We have seen hospitals use it to help predict which patients would be most severely affected by COVID-19, as well as manage their resources.
Attitudes toward the use of AI in peer review have also changed. Around one in five researchers (21%) agree they would read papers that rely on AI for peer review instead of humans, a five-percentage-point increase on 2020. Looking at the results by age, those aged 36 and under have increased their willingness to read such articles the most compared to a year ago (21%, up from 14% the prior year). But while attitudes are changing, most researchers continue to be reticent about AI in peer review, with 58% saying they are unwilling to read such articles.
It's clear that the place of AI in research is evolving and that it is gradually becoming a crucial and trustworthy tool. However, not all reservations surrounding AI have been answered by the accelerated reliance on it during the pandemic.
Nonetheless, the technological strides made, especially in the fields of materials science, medicine, and chemistry, show the crucial role AI will play in the future.
The Elsevier Research Futures Report 2.0 is free to download here. It builds on the first Research Futures Report (2019), which considered what the world of research might look like in 10 years' time. The new data highlights mounting pressure across publishing and funding, while pointing to new opportunities in funding sources, technology, and collaboration.
Adrian Mulligan is Research Director for Customer Insights at the science publisher Elsevier
More:
Viewpoint: Artificial intelligence poised to play greater role in science - Science Business
The Business Case For AI Is A Good Management Introduction To Real-World Artificial Intelligence – Forbes
Too many technologists, in every generation of technology, state that management needs to think more like programmers. That's not the case. Rather, technology professionals need to learn to speak to management. The Business Case for AI, by Kavita Ganesan, PhD, is a good overview for managers wishing to understand and control the complexities of implementing artificial intelligence (AI) systems in businesses.
I'm always skeptical of self-published books. Usually that means the books just aren't that good. However, sometimes, especially in non-fiction, it means that publishers are clueless about the subject and hesitant to work with people who aren't names. This book is an example of the second case, and it will give management an introduction to the concepts surrounding AI and how to approach implementation in a way that will increase the odds of success for AI initiatives.
The indication that the author mostly lives in the real world comes quickly. The first chapter is a good introduction to what matters for business about AI. Forget the technical focus; it's about solving problems in an efficient and cost-effective way.
Chapter 2, "What is AI?", isn't bad either, though I disagree with the idea that machine learning (ML) is part of AI. Business Intelligence (BI) has advanced, along with computing performance, to the point that standard analytics provide insights that can be termed ML, so ML and AI overlap. That, however, is a religious argument, and what Ganesan has to say about AI is at a good level for management understanding.
The weakest chapter in the introduction is the fourth, where the science fiction addict in me had to sigh at "movies such as I, Robot." Ummm, check your library.
That chapter's list of myths is also a bit problematic. The first, about job loss, is the one area where it shows that the part of the real world in which the author exists isn't the one most people are in. The AI revolution is very different from the industrial revolution and earlier technology revolutions. She talks about artificial general intelligence (AGI) and says that since it's still far away, a lot of jobs won't be lost. We don't need AGI to replace jobs.
The next couple of chapters are good for setting up examples of business processes that could be impacted by AI. I do have an issue with which companies she decides to name and which remain anonymous, as that seems to imply protecting customers. The best part was a good discussion of IT & manufacturing operations, but that could have been improved by discussing infrastructure operations such as pipelines and the electrical grid.
Part 3 (chapters 7-9) is very good but, again, has a few things to keep in mind. On page 117, six phases of the development lifecycle are defined. I agree with them, but want to point out that data acquisition and model development, phases 2 & 3, can be done somewhat in parallel; the things you learn from each can impact the other. The other nit is that the author seems to use "warehouse" improperly. Data warehouses have a specific, narrower purpose; when she uses the term, think data lake. The importance of logging, of transactions and more, is often ignored, and the end of this section of the book has a good explanation of it.
The fourth part of the book is a set of chapters that drills down into the "finding AI projects" portion of the analysis process, and is well laid out.
The final section has two chapters. The first is about build v. buy. It is no surprise that a consultant leans towards build; that's her livelihood. What managers need to understand is that businesses aren't as unique as they wish to think. There are unique things, but the vast majority of business is like other businesses. AI is a new technology and there aren't enough easy-to-use tools for a buy decision in many areas, but that will change over time. Managers need to have a flexible understanding of the equation and balance it in the real world.
The final chapter is, as expected, a good summation and a return to focusing on business results. It continues the author's use of good, simple graphics to illustrate the points of her arguments. Regardless of the issues I've mentioned above, the book does a great job of laying out the challenges of artificial intelligence from a business perspective. The book doesn't delve deeply into algorithms or other details that don't matter to management, while it does provide a framework for looking at AI projects through a business lens that integrates the technology into organizations in a way that doesn't leave everything to technologists. The Business Case for AI is a good introduction for IT and line managers thinking about how to integrate artificial intelligence into their organizations.
Global Telecommunications Artificial Intelligence of Things Market Report 2022: TSPs Increasingly Offer Industry Vertical Solutions as Part of Their…
Dublin, April 29, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence of Things (AIoT) in Telecommunications Growth Opportunities" report has been added to ResearchAndMarkets.com's offering.
This report examines the strategic position of telecommunication service providers (TSPs) in using artificial intelligence (AI) and the Internet of Things (IoT) to offer enterprises Artificial Intelligence of Things (AIoT) solutions. TSPs play a vital role in deploying enterprise AIoT solutions, given the increasing deployment of 5G networks and the edge infrastructure capabilities and location-based data at their disposal.
Given their network and connectivity capabilities and AI and services focus, TSPs are in a unique position to monetize AIoT opportunities. They increasingly offer solutions by industry vertical as part of their AIoT focus.
The report highlights TSPs' role as system integrators to provide value-added solutions and services to progress beyond connectivity and move up the value chain.
The report provides stakeholders with insights by identifying AI growth drivers that will facilitate AIoT solutions deployment, as well as opportunities in AI advisory and consulting services, edge infrastructure adoption, and building specific industry vertical solutions.
Key Topics Covered:
1. Strategic Imperatives
2. Growth Environment
3. Growth Opportunity Analysis
4. Growth Opportunity Universe
Companies Mentioned
For more information about this report visit https://www.researchandmarkets.com/r/o5a043
Endoluxe and Optimus-ISE Enter Marketing and Development Agreement to Realize Advanced Imaging and Artificial Intelligence in Advanced Operating Rooms…
MANHATTAN BEACH, Calif.--(BUSINESS WIRE)--Endoluxe and Optimus-ISE are proud to announce that they have entered into a co-marketing and development agreement to realize the advanced technology synergies of both organizations. With Optimus-ISE focused on safer, more efficient, and better financially performing operating rooms, the Endoluxe platform fits perfectly into these guiding principles to provide an optimal clinical environment.
"We are thrilled to enter this new global partnership. The Endoluxe platform, consisting of a wireless camera, cloud-based storage, and AI/ML clinical applications, is a fantastic fit with the vision of the Optimus operating room," says Devon Bream, CEO of Endoluxe. "Our product eliminates the cables and cords of legacy camera platforms, which aligns with the clutter-free design of the Optimus-ISE operating room. Additionally, Endoluxe provides a cloud-based storage solution that eliminates antiquated recording boxes and seamlessly connects to hospital EMRs. The Endoluxe cloud lets clinicians immediately share images from a procedure with patients and family, increasing patient satisfaction. But one of the most exciting opportunities to collaborate with Optimus-ISE is through our novel AI/ML Endoluxe applications that provide clinicians with insights that legacy camera platforms simply cannot offer."
The Endoluxe EVS is the perfect camera system for all endoscopic procedures that utilize industry-standard rigid and flexible analogue scopes, such as those in urology, gynecology, ENT, general surgery, and orthopedics. The handheld Orb replaces the legacy endoscopic tower with advanced, portable technology at 1/6 the cost.
"We are excited to enter into a partnership with Endoluxe," states Bill Passmore, CCO of Optimus. "The Endoluxe co-founders are both practicing surgeons, which adds yet another validation that Optimus-ISE is designing advanced solutions that are meaningful to those who will ultimately be using them. While Optimus remains vendor agnostic, the advantage of collaborating with innovative technologies like Endoluxe allows us to provide our customers integrated options that no other providers can. The potential for collaboration and co-development is vast, with both organizations benefiting from shared resources and sales platforms and becoming greater than the sum of the parts, with a great cultural fit."
Endoluxe is a world-class endoscopic video imaging organization based in the United States with worldwide distribution of its medical industry design award-winning Endoluxe Orb. The company is focused on reducing costs of legacy video platforms, enhancing procedure adoption, and improving patient outcomes through better therapy application. Endoluxe is committed to being a vendor agnostic platform that allows customers to utilize their existing investment in traditional scopes and supporting devices, while taking advantage of future technological advancements utilizing our portable, integrated, and feature-laden platform at 1/6th the cost of legacy products. More information can be found at Endoluxe.com.
Optimus Integrated Surgical Environment AG is a Swiss-based company that delivers a holistic solution for the entire operating room and surrounding support services. Optimus integrates all vendors by acting as the single supplier for planning, installation and maintenance services for new hospital and refurbished operating room facility builds. The company provides services for the entire lifecycle of the operating sector of hospitals: from blue-sky phase of new operating room build planning, installation, and project management, through the total time of ownership including maintenance, servicing, and technology updates. More information can be found at Optimus-ISE.com.
New Navy Artificial Intelligence-Enhanced Drones Are Ready to Set Sail – The National Interest Online
The U.S. Navy's artificial intelligence-enabled, autonomous drones are already functional, and many new types of systems are set to advance beyond the conceptual and prototype stages. The Navy intends for these systems not only to network with one another but also to function autonomously. To expedite this process, the Navy is standing up and improving its Rapid Autonomy Integration Lab.
Algorithms enabling greater levels of autonomy are progressing quickly, and the Navy is already leveraging them to engineer and test a fleet of coordinated, integrated unmanned systems that can network with one another, synchronize, and execute time-sensitive missions without needing human involvement. As part of the Navy's Ghost Fleet Overlord program, these drones will not only utilize their autonomous capability on an individual scale but will also participate in collective, autonomous missions enabled by common software interfaces and AI-enabled data processing. Navy weapons developers plan to improve levels of autonomy as the technology progresses.
"For subsurface platforms, we have small, medium, and large. We currently have four prototypes today. They're demonstrating increasing autonomous capabilities and discovering new opportunities, new exercises," Capt. Scot Searles, Unmanned Maritime Systems program manager, told an audience at the 2022 Sea Air Space Symposium.
The first two of Ghost Fleet Overlord's program vessels, Ranger and Nomad, were initially developed by the Strategic Capabilities Office, a specialized Pentagon unit designed to find, develop, and integrate new innovations for operational use in the force. Owing to their successful and rapid development, these two autonomous surface vessels have been transitioned to the Navy.
Alongside these, more prototypes are in development and on the way, Searles explained.
"Now we're in the second phase of prototyping. We have two more vessels, Navy funded this time, under construction, the first of which has been delivered. Its GFE (Government Furnished Equipment) is being installed right now, and the other one is under construction. We also have two smaller prototype vessels as well."
Autonomous unmanned systems are already reshaping Navy concepts of operation and will continue to do so at a blistering pace. Of course, while Pentagon doctrine ensures that no lethal force is authorized without a human in the loop, unmanned systems will continue to perform a much wider range of operations than has previously been possible. For instance, a Ghost Fleet or group of integrated unmanned systems could survey an enemy coastline, test enemy defenses, assess a threat environment, and exchange relevant data regarding an optimal point of attack. Targeting specifics could be shared across a group of unmanned systems in real time, with the hope of quickly pairing new targeting information with shooters or modes of attack to eliminate enemies quickly. Yet another key advantage is that unmanned systems improve survivability, as they allow manned ships and sailors to operate at safer stand-off distances. In the future, for example, sea basing is expected to take on a larger role, and big-deck amphibious assault ships may increasingly function as mother ships, performing command and control and operating an entire small fleet of drones at one time.
Kris Osborn is the Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army (Acquisition, Logistics & Technology). Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.
Image: Flickr.
Continued here:
New Navy Artificial Intelligence-Enhanced Drones Are Ready to Set Sail - The National Interest Online
A new vision of artificial intelligence for the people – MIT Technology Review
But few people had enough mastery of the language to manually transcribe the audio. Inspired by voice assistants like Siri, Mahelona began looking into natural-language processing. "Teaching the computer to speak Māori became absolutely necessary," Jones says.
But Te Hiku faced a chicken-and-egg problem. To build a te reo speech recognition model, it needed an abundance of transcribed audio. To transcribe the audio, it needed the advanced speakers whose small numbers it was trying to compensate for in the first place. There were, however, plenty of beginning and intermediate speakers who could read te reo words aloud better than they could recognize them in a recording.
So Jones and Mahelona, along with Te Hiku COO Suzanne Duncan, devised a clever solution: rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. To an algorithm, the resulting data set would serve the same function. From those thousands of pairs of spoken and written sentences, it would learn to recognize te reo syllables in audio.
The team announced a competition. Jones, Mahelona, and Duncan contacted every Māori community group they could find, including traditional kapa haka dance troupes and waka ama canoe-racing teams, and revealed that whichever one submitted the most recordings would win a $5,000 grand prize.
The entire community mobilized. Competition got heated. One Māori community member, Te Mihinga Komene, an educator and advocate of using digital technologies to revitalize te reo, recorded 4,000 phrases alone.
Money wasn't the only motivator. People bought into Te Hiku's vision and trusted it to safeguard their data. "Te Hiku Media said, 'What you give us, we're here as kaitiaki [guardians]. We look after it, but you still own your audio,'" says Te Mihinga. "That's important. Those values define who we are as Māori."
Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people, an unheard-of level of engagement among researchers in the AI community. "No one could've done it except for a Māori organization," says Caleb Moses, a Māori data scientist who joined the project after learning about it on social media.
The amount of data was still small compared with the thousands of hours typically used to train English language models, but it was enough to get started. Using the data to bootstrap an existing open-source model from the Mozilla Foundation, Te Hiku created its very first te reo speech recognition model with 86% accuracy.
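The article does not describe Te Hiku's architecture in detail, but the general shape of training from speech-text pairs is easy to sketch. Below is a minimal, hypothetical PyTorch illustration (the alphabet, the tiny model, and the fake audio are placeholders, not Te Hiku's or Mozilla's actual code): a network predicts character probabilities for every audio frame, and CTC loss aligns those predictions with the written sentence the volunteer read aloud.

```python
# Hypothetical sketch: training a speech recognizer from (audio, text) pairs
# with CTC loss. The model, alphabet, and data below are placeholders.
import torch
import torch.nn as nn

alphabet = list("abcdefghijklmnopqrstuvwxyz āēīōū'")      # te reo uses macron vowels
char_to_id = {c: i + 1 for i, c in enumerate(alphabet)}   # id 0 is the CTC blank

class TinyRecognizer(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, len(alphabet) + 1)  # +1 for the blank

    def forward(self, mel_frames):                    # (batch, time, n_mels)
        hidden, _ = self.rnn(mel_frames)
        return self.out(hidden).log_softmax(dim=-1)   # per-frame character log-probs

model = TinyRecognizer()
ctc = nn.CTCLoss(blank=0)

# One (spoken sentence, written sentence) pair; real data would be mel
# spectrograms of volunteers' recordings paired with the prompt they read.
mel = torch.randn(1, 200, 80)                          # 200 frames of fake audio
text = "tēnā koe"
targets = torch.tensor([[char_to_id[c] for c in text]])

log_probs = model(mel).transpose(0, 1)                 # CTC expects (time, batch, chars)
loss = ctc(log_probs, targets,
           input_lengths=torch.tensor([200]),
           target_lengths=torch.tensor([len(text)]))
loss.backward()                                        # gradients for one training step
```

Whatever Te Hiku's actual model looked like, the point stands: with enough such pairs, a recognizer learns which sounds map to which written syllables, which is why the 200,000 crowdsourced recordings mattered so much.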
Continue reading here:
A new vision of artificial intelligence for the people - MIT Technology Review
Is artificial intelligence the future of warfare? – Al Jazeera English
From: UpFront
We discuss the risks behind autonomous weapons and their role in our everyday lives.
"If we're looking for that one terminator to show up at our door, we're maybe looking in the wrong place," says Matt Mahmoudi, Amnesty International artificial intelligence researcher. "What we're actually needing to keep an eye out for are these more mundane ways in which these technologies are starting to play a role in our everyday lives."
Laura Nolan, a software engineer and a former Google employee now with the International Committee for Robot Arms Control, agrees. "These kinds of weapons, they're very intimately bound up in surveillance technologies," she says of lethal autonomous weapons systems, or LAWS.
Beyond surveillance, Nolan warns: "Taking the logic of what we're doing in warfare or in our society, and we start encoding it in algorithms and processes, can lead to things spinning out of control."
But Mahmoudi says there is hope for banning autonomous weapons, citing existing protections against the use of chemical and biological weapons. "It's never too late, but we have to put human beings and not data points ahead of the agenda."
On UpFront, Marc Lamont Hill discusses the risks behind autonomous weapons with the International Committee for Robot Arms Control's Laura Nolan and Amnesty International's Matt Mahmoudi.
See more here:
Is artificial intelligence the future of warfare? - Al Jazeera English
How to amend the Artificial Intelligence Act to avoid the misuse of high-risk AI systems – The Parliament Magazine
As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Artificial Intelligence Act in March. The draft focuses on several key areas of artificial intelligence (AI), such as high-risk AI in education, high-risk AI requirements and obligations, AI and fundamental rights as well as prohibited practices and transparency obligations.
The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.
Unfortunately, the use of such systems has increased rapidly, especially by governments and companies monitoring places of gathering. It is incredibly easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. Therefore, the use of remote biometric identification and emotion recognition systems is over the line and must be banned completely.
Moreover, the misuse of technology is concerning. I am worried that countries without a functioning rule of law will use it to persecute journalists and prevent their investigations. It is obviously happening to a certain extent in Poland and Hungary, where governments have used the Pegasus software to track journalists and members of the opposition. How hard will it be for these governments to abuse remote biometric identification, such as facial recognition systems?
As far as we know, the Hungarian government has already persecuted journalists in the so-called interest of national security for questioning the government's actions amid the pandemic. Even the Chinese social credit system, which ranks the country's citizens, is based on the alleged purpose of ensuring security.
It is absolutely necessary to set rules that will prevent governments from abusing AI systems to violate fundamental rights. In October, a majority of the European Parliament voted in favour of a report on the use of AI in criminal law. The vote showed a clear direction for the European Parliament in this matter.
The proposal includes a definition of so-called high-risk AI systems. HR tools that filter applications, banking systems that evaluate our creditworthiness, and predictive control systems all fall under the definition of high-risk because they could easily reproduce bias and worsen disparities.
With AI present in education as well, the proposal covers test evaluation and entrance examination systems. Still, this list should be expanded to include online proctoring systems. However, there is a problem with differing interpretations of the GDPR in the case of online proctoring systems, resulting in differences in personal data protection in Amsterdam, Copenhagen, and Milan.
According to the Dutch and Danish decisions, there was no conflict between online proctoring systems and the GDPR, but the Italian data protection authority fined and banned further use of these technologies. Currently, universities are investing in new technologies without knowing whether they are authorised to use them or if they are going to be fined.
In my opinion, technologies used for students' personalised education should be included in the high-risk category as well. In this case, incorrect usage can negatively affect a student's future.
In addition to education, the CULT committee focuses on the media sector, where AI systems can be easily misused to spread disinformation. As a result, the functioning of democracy and society may be in danger.
When incorrectly deployed, AI systems that recommend content and learn from our responses can systematically display content that forms so-called rabbit holes of disinformation. This increases hatred and the polarisation of society and has a negative impact on the functioning of democracy.
We need to set clear rules that will not be easy to circumvent. Currently, I am working on a draft legislative opinion which will be presented in the CULT committee in March. I will do my best to fill all the gaps that I have identified.
The Council is also working on its position. A compromise presented by the Slovenian presidency, for example, extends the provisions on social scoring from public authorities to private companies as well.