Category Archives: Artificial Intelligence

How to save America with artificial intelligence | TheHill – The Hill

Political polarization is ripping America apart. References to a second American civil war, however far-fetched, reveal a bitterly divided nation. Indeed, the Founding Fathers' worst nightmare is coming to pass.

For all of its promise, technology bears much of the blame for fracturing America. For one, social media platforms create powerful echo chambers that feed us a nonstop diet of one-sided, hyper-partisan news and commentary. This dangerous phenomenon, in which our beliefs are constantly reinforced and rarely challenged, is not unique to liberals or conservatives. Instead, it is a function of technology capitalizing on an ever-expanding cultural and social divide.

But what if technology could be harnessed to reverse this corrosive effect on American society and its underlying cause? Moreover, at a time when factual news reporting is all too often immediately dismissed as fake, we are in desperate need of voices broadly respected by all Americans.

Enter America's Founding Fathers and machine learning.

Despite the passage of centuries, Americans of all political stripes continue to invoke the ideas and writings of the Founders. Few figures hold more sway or command more respect among political pundits, politicians and everyday patriots than Adams, Hamilton, Jay, Jefferson, Madison and Washington.

While it may seem far-fetched on its face, what if artificial intelligence and machine learning could bring these titans of history back to life to weigh in on the challenges facing the United States today?

Artificial intelligence, in short, amounts to providing machines with enough data to make decisions or predictions without human input. Autonomous cars, for example, drive around American cities gathering real-time experience to inform decision-making. The challenge with driverless cars, however, is that staggering amounts of data are required to predict the many surprises that these machines are likely to encounter on the road.

But when it comes to America's Founding Fathers, we have all the data that we need in their writings, speeches and legislative records to resurrect them through machine learning. Indeed, the Founders discussed and debated the most contentious issues, from the media to taxation, education, religion and beyond, that America confronts today. Human nature, after all, ensures that history tends to repeat itself.

Bringing the Founders back through artificial intelligence would bestow enormous benefits. For one, the addition of such revered and respected voices would allow us to regain some semblance of civility in public discourse. Indeed, it would be difficult to denigrate Jefferson's or Madison's take on contemporary issues such as the national debt or impeachment as partisan "fake news."

Most importantly, the most corrosive effects of hyper-partisan, ratings-driven media outlets and the social media platforms that enable them would be blunted, reining in the extreme division and political polarization gripping America.

To be sure, significant challenges would accompany such an ambitious venture. The process of encoding the Founders' writings and records into mathematical vectors digestible by machines could prove complex, stretching current capabilities to their limits. The same is true for the all-important task of accurately translating the issues dividing America today into machine-readable data. But the good news is that significant groundwork has been done in this arena: artificial intelligence and neural networks already produce political predictions as well as complex, issue-based analyses.
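The vectorization step described above can be sketched with a toy bag-of-words model. This is purely illustrative: the text snippets are invented stand-ins for the Founders' writings, and a real project would use far richer embeddings than raw term counts.

```python
import math
from collections import Counter

def vectorize(text):
    """Turn a document into a term-frequency vector (a toy stand-in
    for the embedding step the article describes)."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine_similarity(a, b):
    """Compare two term-frequency vectors on a 0-to-1 scale."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical snippets: one historical, one contemporary.
federalist = vectorize("A well constructed Union breaks and controls the violence of faction")
modern_issue = vectorize("Social media amplifies the violence of faction in modern debate")
print(round(cosine_similarity(federalist, modern_issue), 2))
```

Even this crude similarity score hints at how a system could match a contemporary question to the historical passages most relevant to it.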

With little potential for profit, securing adequate funding for such an endeavor will also prove challenging. But thanks to initiatives such as Google's AI for Social Good and grants supporting AI-enabled fact-checking, there is reason for optimism. Indeed, the inherently ethical and positively disruptive nature of such technology may attract broad support from an ideologically diverse cross-section of civic-minded institutions and individuals.

Ultimately, the Founding Fathers' lasting gift to the American people is a treasure trove of wisdom on civil discourse, shared values and sound governance. At a time when America finds itself dangerously divided, we must not hesitate to harness the Founding Fathers' collective legacy for the betterment of the nation that they cherished so dearly.

Marik von Rennenkampff served as an analyst with the U.S. Department of State's Bureau of International Security and Nonproliferation, as well as an Obama administration appointee at the U.S. Department of Defense. Follow him on Twitter @MvonRen.

The rest is here:
How to save America with artificial intelligence | TheHill - The Hill

Art, artificial intelligence and technology – The Aggie

The muse of 21st century art is hidden in lines of code

The mystique of robots taking over humanity, or the notion that humans will eventually be forced to fight for their relevance among superhuman robots that outgrow the need for their human creators, is a trope that has existed in artistic expression for decades. The ever-increasing discussion about artificial intelligence has fostered a sleeker, more modern incorporation of technology in art as a subject, a tool and a means of measuring the value of art.

Grimes began her latest project in November 2018 with the release of "We Appreciate Power," a nod to the capabilities that lie in endless lines of code and an embrace of the reign of AI. It's futuristic synth-pop with the lyrics, "Baby, plug in, upload your mind / Come on, you're not even alive / If you're not backed up on a drive." This statement begins to sound more realistic as the recombinant power of innovation expands.

Artists such as Björk have even gone as far as giving AI some creative freedom in their work. She and Microsoft recently partnered to create "Kórsafn," meaning "choral archives" in Icelandic, which uses AI to recombine fragments of her music in reaction to patterns in the weather. For example, the chords sound different during sunrise and sunset. The project takes place inside the hotel Sister City in New York City. It's "a generative lobby score powered by Microsoft AI," according to Microsoft's website.

Technology has historically had a large influence on music and has helped expand the array of sounds that can be incorporated into a song. There may be some who say that technology has worsened the quality of music, but overall it contributes to music's evolution. This reminds me of the song "Intro" on Odesza's Summer's Gone, with the lyrics, "You combine segments of magnetic tape / By these means and many others you can create sounds which no one has ever heard before."

British artist Matthew Stone designed the album cover of FKA twigs' Magdalene by creating digital brushstrokes that resemble paint on canvas, producing a truly three-dimensional shape that's arguably more believable than a traditional painting. A computer-generated program always draws a perfect line, but will art created by AI be objectively better?

The incorporation of technology and AI into art is redefining who and what can be an artist. In the case of "Kórsafn," the AI does the work itself rather than following direct human instruction: the program is given inputs and recombines them based on musical rules. It's one thing for AI and tech to be the subject of an artist's work, but it's another thing entirely when the work doesn't need a human artist.

Artists' experiences and struggles, whether documented on canvas or with musical chords, hold a value unmatched by data collected to create something that is most likely to be liked by the masses. Good art is disinterested in what people already want and is often a catalyst that breaks the mold, a trait on which humans still have a monopoly. "Life imitates art" wouldn't be very interesting anymore if predicted by a program.

Written by: Josh Madrid arts@theaggie.org

Read the rest here:
Art, artificial intelligence and technology - The Aggie

Compliance technology will rely on artificial intelligence in the future – ELE Times

Over 40% of privacy compliance technology will rely on artificial intelligence (AI) by 2023, up from 5% today, according to Gartner, Inc. "Privacy laws, such as the General Data Protection Regulation (GDPR), presented a compelling business case for privacy compliance and inspired many other jurisdictions worldwide to follow," said Bart Willemsen, research vice president at Gartner.

More than 60 jurisdictions around the world have proposed or are drafting postmodern privacy and data protection laws as a result. Canada, for example, is looking to modernize its Personal Information Protection and Electronic Documents Act (PIPEDA), in part to maintain its adequacy standing with the EU post-GDPR.

Privacy leaders are under pressure to ensure that all personal data processed is brought in scope and under control, which is difficult and expensive to manage without technology aid. This is where the use of AI-powered applications that reduce administrative burdens and manual workloads comes in.

AI-Powered Privacy Technology Lessens Compliance Headaches

At the forefront of a positive privacy user experience (UX) is the ability of an organization to promptly handle subject rights requests (SRRs). SRRs cover a defined set of rights, where individuals have the power to make requests regarding their data and organizations must respond to them in a defined time frame.

According to the 2019 Gartner Security and Risk Survey, many organizations are not capable of delivering swift and precise answers to the SRRs they receive. Two-thirds of respondents indicated it takes them two or more weeks to respond to a single SRR. These workflows are often handled manually as well, at an average cost of roughly $1,400 USD per request, which piles up over time.
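The per-request figure compounds quickly. A back-of-the-envelope sketch, where the monthly request volume is a made-up assumption (Gartner's survey does not give one):

```python
# Gartner's survey figure: a manually handled SRR costs roughly $1,400.
COST_PER_SRR_USD = 1400

# Hypothetical volume for a mid-sized organization (assumption, not from the survey).
requests_per_month = 50

annual_cost = COST_PER_SRR_USD * requests_per_month * 12
print(f"${annual_cost:,}")  # $840,000
```

At even this modest volume, the annual spend dwarfs the licensing cost of most automation tooling, which is the business case the article is making.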

"The speed and consistency with which AI-powered tools can help address large volumes of SRRs not only saves an organization excessive spend, but also repairs customer trust," said Mr. Willemsen. "With the loss of customers serving as privacy leaders' second-highest concern, such tools will ensure that their privacy demands are met."

Global Privacy Spending on Compliance Tooling Will Rise to $8 Billion Through 2022

Through 2022, privacy-driven spending on compliance tooling will rise to $8 billion worldwide. Gartner expects privacy spending to impact connected stakeholders' purchasing strategies, including those of CIOs, CDOs and CMOs. "Today's post-GDPR era demands a wide array of technological capabilities, well beyond the standard Excel sheets of the past," said Mr. Willemsen.

"The privacy-driven technology market is still emerging," said Mr. Willemsen. "What is certain is that privacy, as a conscious and deliberate discipline, will play a considerable role in how and why vendors develop their products. As AI turbocharges privacy readiness by assisting organizations in areas like SRR management and data discovery, we'll start to see more AI capabilities offered by service providers."

For more information, visit http://www.gartner.com

Go here to read the rest:
Compliance technology will rely on artificial intelligence in the future - ELE Times

Artificial Intelligence in Banking: More Hype Than Reality – The Financial Brand


Over the past year, the Digital Banking Report has conducted several research studies on the deployment and potential impact of data and artificial intelligence on the banking industry. We have found that the improved use of data and advanced analytics can improve customer experiences, generate better marketing results, streamline deposit and lending operations, increase consumer engagement, support innovation, and be a foundation for digital transformation.

Being a data-driven financial institution is no longer optional (if it ever was). In every industry, winners will be determined by how well data and AI can be used for the benefit of the consumer. Big tech firms such as Google, Apple, Facebook and Amazon (GAFA) are setting the pace, delivering experiences that are improving valuations and providing the foundation for entry into financial services. Fintech firms and non-traditional banking challengers are using data and insights to steal business from legacy banks and credit unions.

From tracking social media engagement to looking at spending patterns and the use of existing financial services, a data-driven approach completely changes organic growth opportunities from cross-selling to providing proactive advice. Instead of being a privacy threat, the intelligent use of data can provide a value proposition that the consumer appreciates and may even pay for (similar to how people pay Amazon for the right to shop digitally).

Unfortunately, while there is virtually no question that data and AI can benefit the consumer, the vast majority of deployment by legacy organizations still focuses on cost reduction, productivity and/or risk management. While these use cases certainly help financial institutions meet quarterly financial goals and protect against losses from fraud, the consumer rarely feels any personal benefit.


From a consumer's perspective, most use of AI for a better experience has been superficial at best. While the industry continues to say that the use of data and AI is a major trend and a priority, as shown in this year's Retail Banking Trends and Predictions report, research on the use of AI shows that deployment for the benefit of the consumer has lagged significantly behind the hype.

Except for the largest financial institutions, and some of the smallest, few organizations profess to be adept at advanced targeting, multichannel communications, real-time contextual offers or proactive advice. This is very disappointing given the marketplace realities across industries.

It is clear that banks and credit unions are testing the use of data and AI across businesses, but they are definitely not as bullish or proficient as public announcements would suggest. Where there is an investment in advanced analytics, our research shows that the impact continues to be focused on back-office efficiency, risk avoidance and cost reductions.


Not all AI implementations have been internally focused. For instance, Bank of America's AI-powered digital assistant, Erica, has more than ten million users and completed 100 million client requests in the first 18 months since its introduction. According to Bank of America, the app can be configured to a person's preferences and usage, giving everyone a different home page, similar to the way Amazon and Netflix give every user a different home screen.

While nowhere near the app's final potential, the ability to notify a customer of a potential overdraft, remind them of a recurring payment, understand their spending and saving habits, and warn them about a duplicate payment is a capability that few banks or credit unions can match.

With hundreds of billions of tweets, likes and searches each day, financial institutions have the ability to supplement internal balance and transaction insights to create value for the customer or member. That said, few financial institutions even use the massive data at their disposal internally.

The key is to support intelligent interactions based on this data in real time. Creating these types of engagement has become easier and easier with new technologies and ways to process data. The cost of this type of analysis has dropped, even though finding the talent to create and manage models has become more challenging.

Financial institutions can also create digital-driven products that have AI as part of the foundation. This can be done in-house or in collaboration with fintech or big tech providers using open banking APIs and the cloud. As discussed in the Innovation in Retail Banking report published by the Digital Banking Report (and available for free download), this type of collaboration speeds up the innovation process and supports digital transformation.

There is no arguing that organizations must respect the consumer's desire for security and privacy, and the challenges involved cannot be treated as roadblocks. Financial institutions of all sizes are beginning to focus on how data and insights can benefit consumers directly, because these same consumers are expecting more from their financial institution partner. Banks and credit unions must use current data and insights to:

Using data and analytics to improve the customer experience is not a new concept. In fact, the banking industry has discussed this capability for decades. The difference today is that the consumer understands the potential of using their data for their personalized benefit. It's time the banking industry walked the walk as opposed to simply talking the talk.

Read the original post:
Artificial Intelligence in Banking: More Hype Than Reality - The Financial Brand

Air Travelers Cant See All of It, but More Tech Is Moving Them Along – The New York Times

Airports, often hemmed in by neighborhoods, highways or water, already struggle to keep up with the rising number of air travelers. And the number is expected to keep going up, to more than seven billion globally by 2035, an airline trade association says, nearly doubling from 2016.

So while airports are expanding their physical facilities where they can, governments and the travel industry are leaning more heavily on technology, especially artificial intelligence, to process more air travelers more quickly.

The airports in Osaka, Japan, and Abu Dhabi have tested autonomous check-in kiosks that move themselves to help manage peaks of passenger flow.

Seattle-Tacoma International Airport and Miami International Airport are among those using visual sensors to monitor passenger line lengths and how quickly people are moving through security checkpoints. Managers can use the information to adjust where they need more workers and to send passengers to shorter lines. Passengers can see how long their wait will be on signs or on a phone app. The goal is to help reduce travelers' worries about whether they are going to make their flight.
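The wait-time estimates those signs display can be derived from the sensor counts with Little's law, which relates queue length to service rate. A minimal sketch, where the numbers are illustrative assumptions rather than figures from either airport:

```python
def estimated_wait_minutes(people_in_line, throughput_per_minute):
    """Little's law: expected wait = queue length / service rate."""
    if throughput_per_minute <= 0:
        raise ValueError("checkpoint is not processing passengers")
    return people_in_line / throughput_per_minute

# Sensors count 120 people queued; the checkpoint clears 8 per minute.
print(estimated_wait_minutes(120, 8))  # 15.0
```

Real systems smooth the throughput figure over a trailing window, since instantaneous rates at a checkpoint are noisy, but the underlying relationship is this simple.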

For international flights, more airlines are installing what are known as self-boarding gates, which use a photo station to take a photo of the traveler and compare it with the picture in the person's passport and other photos in Customs and Border Protection files. The gates, which use facial recognition technology, replace agents who check boarding passes and identification cards.

Seven percent of airlines have installed some self-boarding gates, and about a third of all airlines plan to use some type of this gate by the end of 2022, according to SITA, a technology company serving about 450 airports and airlines. Sherry Stein, head of technology strategy for SITA, said the goals are to reduce hassle for passengers, speed boarding and increase security.

Still, there are privacy concerns over the use of the photos. The general public doesn't receive much information about how the photos will be used or stored, said Oren Etzioni, the chief executive of the Allen Institute for AI in Seattle.

"So even though we consciously give up our privacy, we still worry that these kinds of digital records can be used against us in unanticipated ways by the government, our employer, or criminals," he said. A photo taken at the airport leaves another digital footprint that makes us more traceable, he added.

The Department of Homeland Security said it did not retain photos of U.S. citizens once their identities were confirmed at airports.

Technology similar to that used in self-boarding gates is being deployed for some foreign passengers arriving in the United States. Miami International Airport, for example, began using facial recognition screening at its facility for international passenger arrivals in 2018 and reported that it can screen as many as 10 passengers per minute using the technology. Travelers who have been to the United States previously step up to facial recognition stations, and a customs official checks their passports to make sure they are valid. First-time visitors still need to present a passport or visa and agree to have their fingerprints and photos taken.

Some of the new technology is aimed at easing language difficulties. Kennedy International Airport in New York recently installed three A.I.-based real-time translation devices from Google at information stations around the airport. Travelers choose their language from a counter-mounted screen and ask their questions aloud to the device. The device repeats the question in English to the person at the station. That person responds in English, and the device translates that aloud for the traveler.

Artificial intelligence is also being used behind the scenes to reduce the time airplanes spend at the gate between flights, which can mean shorter waiting time for passengers who have boarded and buckled up. London Gatwick, Québec City and Cincinnati/Northern Kentucky airports are among about 30 around the world testing or installing a visual A.I. system made by the Swiss company Assaia. The system uses cameras pointed at a plane parked at the gate to track everything that happens after the aircraft lands: how long it takes for fuel and catering trucks to arrive, whether the cargo door is open, and even if employees on the ground are wearing their safety vests.

While humans can do each of these tasks, monitoring and analyzing the operations of these various functions can speed the turnaround of the plane and prevent accidents, according to Assaia. After the same plane has, for instance, been filmed doing hundreds of turnarounds at a particular airport, the A.I. system can identify the elements or situations that most often cause delays, and managers can take corrective action. Accidents like ground crew injuries or service vehicle collisions can also be analyzed for their causes.
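The analysis the article attributes to Assaia, finding the step that most often delays a turnaround, can be sketched as a comparison of average step durations across logged turnarounds. The data and step names here are invented for illustration:

```python
from statistics import mean

# Invented logs: minutes each ground-handling step took on past turnarounds.
turnarounds = [
    {"fueling": 12, "catering": 18, "cargo": 15},
    {"fueling": 14, "catering": 25, "cargo": 16},
    {"fueling": 11, "catering": 22, "cargo": 14},
]

def slowest_step(logs):
    """Return the step with the highest average duration across turnarounds."""
    steps = logs[0].keys()
    averages = {step: mean(t[step] for t in logs) for step in steps}
    return max(averages, key=averages.get)

print(slowest_step(turnarounds))  # catering
```

A production system would also weight by variance and correlate steps with actual departure delays, but the core idea is aggregating the same observations over many filmed turnarounds.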

The time an airplane spends waiting for a gate after landing or waiting in line to take off could also be reduced. A group at SITA focused on airport management systems is helping to design technology that can synthesize data from many sources, including changing aircraft arrival times, weather conditions at destination airports and logistical issues to improve runway schedules and gate assignments.

Artificial intelligence software can also make a difference with rebooking algorithms, Mr. Etzioni said. When weather or mechanical issues disrupt travel, the airline's speed in recomputing, rerouting and rescheduling matters, he said.

The data streams get even more complex when the whole airport is considered, Ms. Stein of SITA said. A number of airports are creating a digital twin of their operations using central locations with banks of screens that show the systems, people and objects at the airport, including airplane locations and gate activity, line lengths at security checkpoints, and the heating, cooling and electrical systems monitored by employees who can send help when needed. These digital systems can also be used to help with emergency planning.

The same types of sensors that can be used to supply data to digital twins are also being used to reduce equipment breakdowns. Karen Panetta, the dean of graduate engineering at Tufts University and a fellow at the Institute of Electrical and Electronics Engineers, said hand-held thermal imagers used before takeoff and after landing can alert maintenance crews if an area inside the airplane's engine or electrical system is hotter than normal, a sign something may be amiss. The alert would help the crew schedule maintenance right away, rather than be forced to take the aircraft out of service at an unexpected time and inconvenience passengers.
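The hotter-than-normal check Dr. Panetta describes amounts to comparing readings against a per-component baseline with a tolerance. A sketch with made-up component names, temperatures and threshold:

```python
def flag_hot_spots(readings_c, baseline_c, tolerance_c=10.0):
    """Return component names whose temperature exceeds the expected
    baseline by more than the tolerance (all values in Celsius)."""
    return [name for name, temp in readings_c.items()
            if temp - baseline_c.get(name, float("inf")) > tolerance_c]

# Hypothetical baseline and a post-landing thermal scan.
baseline = {"turbine_bearing": 85.0, "wiring_harness": 40.0}
scan = {"turbine_bearing": 99.5, "wiring_harness": 41.0}
print(flag_hot_spots(scan, baseline))  # ['turbine_bearing']
```

Components absent from the baseline are never flagged here; a real maintenance system would instead treat an unknown component as an error.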

At the moment, people, rather than technology, evaluate most of the data collected, Dr. Panetta said. But eventually, with enough data accumulated and shared, more A.I. systems could be built and trained to analyze the data and recommend actions faster and more cost effectively, she said.

Air travel isnt the only segment of the transportation industry to begin using artificial intelligence and machine learning systems to reduce equipment failure. In the maritime industry, a Seattle company, ioCurrents, digitally monitors shipping vessel engines, generators, gauges, winches and a variety of other mechanical systems onboard. Their data is transmitted in real time to a cloud-based A.I. analytics platform, which flags potential mechanical issues for workers on the ship and on land.

A.I. systems like these and others will continue to grow in importance as passenger volume increases, Ms. Stein said. "Airports can only scale so much, build so much and hire so many people."

Read the original post:
Air Travelers Cant See All of It, but More Tech Is Moving Them Along - The New York Times

Artificial Intelligence White Paper: What Are The Practical Implications? – Mondaq News Alerts


On 19 February 2020, the European Commission published a White Paper, "On Artificial Intelligence: A European approach to excellence and trust". The purpose of this White Paper on artificial intelligence (AI), of which leaks began circulating already in January 2020, is to discuss policy options on how to achieve two objectives: (i) promoting the uptake of AI and (ii) addressing the risks associated with certain uses of AI.

Europe aspires to become a "global leader in innovation in the data economy and its applications", and would like to develop an AI ecosystem that brings the benefits of that technology to citizens, business and the public interest.

The European Commission identifies two key components that will allow such an AI ecosystem to develop in a way that benefits EU society as a whole: excellence and trust. It highlights the EU's "Ethics Guidelines for Trustworthy Artificial Intelligence" of April 2019 as a core element that is relevant for both of those components.

As with many White Papers, however, the practical implications appear far off in the future. We have therefore included a few notes ("Did you know?") with additional information to illustrate them or show what already exists, and conclude with some guidance on what you can already do today.

The European Commission identifies several key aspects that will help create an ecosystem of excellence in relation to artificial intelligence:

Where AI is developed and deployed, it must address concerns that citizens might have in relation to e.g. unintended effects, malicious use or lack of transparency. In other words, it must be trustworthy. In this respect, the White Paper refers to the (non-binding) Ethics Guidelines, and in particular the seven key requirements for AI that were identified in those guidelines:

Yet this is no legal framework.

a) Existing laws & AI

There is today no specific legal framework aimed at regulating AI. However, AI solutions are subject to a range of laws, as with any other product or solution: legislation on fundamental rights (e.g. data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules.

[Did you know? AI-powered chatbots used for customer support are not rocket science in legal terms, but the answers they provide are deemed to stem from the organisation, and can thus make the organisation liable. Because such a chatbot needs initial data to understand how to respond, organisations typically "feed" them previous real-life customer support chats and telephone exchanges, but the use of those chats and conversations is subject to data protection rules and rules on the secrecy of electronic communications.]

According to the European Commission, however, the current legislation may sometimes be difficult to enforce in relation to AI solutions, for instance because of the AI's opaqueness (the so-called "black box" effect), complexity, unpredictability and partially autonomous behaviour. As such, the White Paper highlights the need to examine whether any legislative adaptations or even new laws are required.

The main risks identified by the European Commission are (i) risks for fundamental rights (in particular data protection, due to the large amounts of data being processed, and non-discrimination, due to bias within the AI) and (ii) risks for safety and the effective functioning of the liability regime. On the latter, the White Paper highlights safety risks, such as an accident that an autonomous car might cause by wrongly identifying an object on the road. According to the European Commission, "[a] lack of clear safety provisions tackling these risks may, in addition to risks for the individuals concerned, create legal uncertainty for businesses that are marketing their products involving AI in the EU".

[Did you know? Data protection rules do not prohibit e.g. AI-powered decision processes or data collection for machine learning, but certain safeguards must be taken into account, and it's easier to do so at the design stage.]

The European Commission recommends examining how legislation can be improved to take into account these risks and to ensure effective application and enforcement, despite AI's opaqueness. It also suggests that it may be necessary to examine and re-evaluate existing limitations of the scope of legislation (e.g. general EU safety legislation only applies to products, not services), the allocation of responsibilities between different operators in the supply chain, the very concept of safety, etc.

b) A future regulatory framework for AI

The White Paper includes lengthy considerations on what a new regulatory framework for AI might look like, from its scope (the definition of "AI") to its impact. A key element highlighted is the need for a risk-based approach (as in the GDPR), notably in order not to create a disproportionate burden, especially for SMEs. Such a risk-based approach, however, requires solid criteria to be able to distinguish high-risk AI solutions from others, which might be subject to fewer requirements. According to the European Commission, an AI application should be considered high-risk where it meets the following two cumulative criteria:

Yet the White Paper immediately lists certain exceptions that would, irrespective of the sector, be "high-risk", stating that this would be relevant for certain "exceptional instances". In the absence of actual legislative proposals, the merit of this principle-exception combination is difficult to judge. However, it would not surprise us to see a broader sector-independent criterion for "high-risk" AI solutions appear: situations that are high-risk irrespective of the sector due to their impact on individuals or organisations.
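The cumulative logic of this risk-based approach, plus the sector-independent exceptions, can be sketched as a simple decision rule. The sector and exception lists below are illustrative placeholders, not the Commission's actual enumerations:

```python
# Illustrative placeholders only, not the White Paper's actual lists.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}
SECTOR_INDEPENDENT_EXCEPTIONS = {"remote_biometric_identification", "recruitment"}

def is_high_risk(sector, use_poses_significant_risk, application):
    """Both criteria must hold (cumulative), unless the application
    falls under a sector-independent exception."""
    if application in SECTOR_INDEPENDENT_EXCEPTIONS:
        return True
    return sector in HIGH_RISK_SECTORS and use_poses_significant_risk

print(is_high_risk("retail", False, "recruitment"))             # True (exception)
print(is_high_risk("healthcare", True, "triage_support"))       # True (both criteria)
print(is_high_risk("healthcare", False, "inventory_forecast"))  # False (one criterion)
```

The point of the cumulative test is visible in the third case: operating in a sensitive sector is not, by itself, enough to trigger the heavier obligations.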

Those high-risk AI solutions would then likely be subject to specific requirements in relation to the following topics:

In practice, these requirements would cover a range of aspects of the development and deployment cycle of an AI solution, and the requirements are therefore not meant solely for the developer or the person deploying the solution. Instead, according to the European Commission, "each obligation should be addressed to the actor(s) who is (are) best placed to address any potential risk". The question of liability might still be dealt with differently: under EU product liability law, "liability for defective products is attributed to the producer, without prejudice to national laws which may also allow recovery from other parties".

Because the aim would be to impose such requirements on "high-risk" AI solutions, the European Commission anticipates that a prior conformity assessment will be required, which could include procedures for testing, inspection or certification, and checks of the algorithms and of the data sets used in the development phase. Some requirements (e.g. information to be provided) might not be included in such prior conformity assessment. Moreover, depending on the nature of the AI solution (e.g. if it evolves and learns from experience), it may be necessary to carry out repeated assessments throughout the lifetime of the AI solution. The European Commission also wishes to open up the possibility for AI solutions that are not "high-risk" to benefit from voluntary labelling, to show voluntary compliance with (some or all of) those requirements.

The White Paper sets out ambitious objectives, but also gives an idea of the direction in which the legal framework applicable to AI might evolve in the coming years.

We do feel it is important to stress that this future framework should not be viewed as blocking innovation. Too many organisations already have the impression that the GDPR prevents them from processing data, when it is precisely a tool that allows better and more responsible processing of personal data. The framework described by the European Commission in relation to AI appears to be similar in terms of its aim: these rules would help organisations build better AI solutions or use AI solutions more responsibly.

In this context, organisations working on AI solutions today would do well to start building the recommendations of the White Paper into their solutions now. While there is no legal requirement to do so yet, anticipating those requirements might give those organisations frontrunner status and a competitive edge when the rules materialise.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Link:
Artificial Intelligence White Paper: What Are The Practical Implications? - Mondaq News Alerts

Technology – Integrating artificial intelligence on board – Superyacht News – The Superyacht Report

As the superyacht industry welcomes a new generation of crew, with new demands and expectations, the operational experience on board will need to evolve to suit their needs. While vessels are growing more complex, Gunther Alvarado, head of yacht management and marine operation at Al Seer Marine, believes systems need to be updated to keep up with the technology crew are used to in their everyday lives.

"Seventy per cent of crew in our fleet are millennials and Generation Z. These generations are born with an iPhone in their hand," said Alvarado during a panel discussion at The Superyacht Forum 2019. "We need to make our systems a lot more interactive, from flag states and regulatory bodies, to make it easy for crew to access information and automatically remind them about things to do."

The inevitable integration of artificial intelligence (AI) and machine learning (ML) on board will have an impact on the crew of the future, with the potential to transform operations such as navigation, maintenance and even service. "Our owners have these technologies in their own businesses and homes, so it won't be long until they want them on their yachts," says Mike Blake, president of Palladium Technologies.


"The commercial shipping industry is already testing autonomous vessels, and superyacht captains only drive the yacht a small percentage of the time anyway, so AI systems can take it over because they have an endless attention span to do the repetitive task of monitoring all the information in the bridge. AI will also be used in the engine room to monitor all systems at once and predict any failures. It could even be used in the interior: I think we will have systems that will read who the guests are and be able to give a much better service."

Joseph Adir, founder and CEO of WinterHaven, agrees that AI and ML are going to transform the way in which superyachts are operated. "The continued growth of IoT technology utilising deep-learning computer systems and high-volume data analytics, both on-premises and in the cloud, will deliver greater benefits for superyacht owners in the future," says Adir.

"Integrating IoTs and sensors connected to edge ML, all aggregated into a large AI/ML platform, will deliver a smart interactive 3D interface on board [that] will give all stakeholders an in-depth view of the asset. Artificial intelligence and machine learning can then be used to mine the big data and monitor all equipment and systems on board for performance management and optimisation, as well as predictive maintenance."

Safe operation on yachts will likely rely on using this technology alongside a documented and structured manual check process. These future solutions will then not only increase safety on board, but also eventually reduce the need for human-machine interaction by automating selected tasks and processes, while the captain and crew remain at the centre of critical decision-making and on-board expertise.


Read more:
Technology - Integrating artificial intelligence on board - Superyacht News - The Superyacht Report

Exclusive: Global trust in the tech industry is slipping – Axios

The backlash against Big Tech has long flourished among pundits and policymakers, but a new survey suggests it's beginning to show up in popular opinion as well.

Driving the news: New data from Edelman out Tuesday finds that trust in tech companies is declining and that people trust cutting-edge technologies like artificial intelligence less than they do the industry overall.

Why it matters: The Edelman study finds people favor more regulation of industries and technologies that they distrust. Rising public support for regulation could move policymakers from talk to action.

Details: Edelman's 2020 Trust Barometer found that while tech still enjoys high levels of trust globally, its approval rating fell four points between 2019 and 2020.

What they're saying: "The trend of eroding trust in the technology sector continues," said Sanjay Nair, global technology chair of Edelman.

Between the lines: Trust is higher for many sectors broadly than it is for the leading-edge technology within a field. Far more people trust the tech industry broadly than AI specifically.

The big picture: Two studies from Pew Research also show the impact of declining trust.

Yes, but: Technology is still one of the most trusted sectors, despite recent erosion. Such was the case overall for the Edelman study, although a record 13 markets had higher trust in a sector other than technology.

Go deeper: The "ominous" decline of democracy around the world

See the original post:
Exclusive: Global trust in the tech industry is slipping - Axios

EU Proposes Rules for Artificial Intelligence to Limit Risks – The New York Times

LONDON - The European Union unveiled proposals Wednesday to regulate artificial intelligence that call for strict rules and safeguards on risky applications of the rapidly developing technology.

The report is part of the bloc's wider digital strategy aimed at maintaining its position as the global pacesetter on technological standards. Big tech companies seeking to tap Europe's vast and lucrative market, including those from the U.S. and China, would have to play by any new rules that come into force.

The EU's executive Commission said it wants to develop a framework for "trustworthy artificial intelligence." European Commission President Ursula von der Leyen had ordered her top deputies to come up with a coordinated European approach to artificial intelligence and data strategy 100 days after she took office in December.

"We will be particularly careful where essential human rights and interests are at stake," von der Leyen told reporters in Brussels. "Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people's rights."

EU leaders, keen on establishing "technological sovereignty," also released a strategy to unlock data from the continent's businesses and the public sector so it can be harnessed for further innovation in artificial intelligence. Officials in Europe, which doesn't have any homegrown tech giants, hope to catch up with the U.S. and China by using the bloc's vast and growing trove of industrial data for what they anticipate is a coming wave of digital transformation.

They also warned that even more regulation for foreign tech companies is in store with the upcoming Digital Services Act, a sweeping overhaul of how the bloc treats digital companies, including potentially holding them liable for illegal content posted on their platforms. A steady stream of Silicon Valley tech bosses, including Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Microsoft President Brad Smith, has visited Brussels in recent weeks as part of apparent lobbying efforts.

"It is not us that need to adapt to today's platforms. It is the platforms that need to adapt to Europe," said Thierry Breton, commissioner for the internal market. "That is the message that we delivered to CEOs of these platforms when they come to see us."

"If the tech companies aren't able to build systems for our people, then we will regulate, and we are ready to do this in the Digital Services Act at the end of the year," he said.

The EU's report said clear rules are needed to address high-risk AI systems, such as those in recruitment, healthcare, law enforcement or transport, which should be transparent, traceable and guarantee human oversight. Other artificial intelligence systems could come with labels certifying that they are in line with EU standards.

Artificial intelligence uses computers to process large sets of data and make decisions without human input. It is used, for example, to trade stocks in financial markets, or, in some countries, to scan faces in crowds to find criminal suspects.

While it can be used to improve healthcare, make farming more efficient or combat climate change, it also brings risks. It can be unclear what data artificial intelligence systems work off. Facial recognition systems can be biased against certain social groups, for example. There are also concerns about privacy and the use of the technology for criminal purposes, the report said.

"Human-centered guidelines for artificial intelligence are essential because none of the positive things will be achieved if we distrust the technology," said Margrethe Vestager, the executive vice president overseeing the EU's digital strategy.

Under the proposals, which are open for public consultation until May 19, EU authorities want to be able to test and certify the data used by the algorithms that power artificial intelligence in the same way they check cosmetics, cars and toys.

It's important to use unbiased data to train high-risk artificial intelligence systems so they can avoid discrimination, the commission said.

Specifically, AI systems could be required to use data "reflecting gender, ethnicity and other possible grounds of prohibited discrimination."
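As a rough illustration of the kind of check such data testing might involve, one could compare outcome rates across demographic groups in a decision dataset; a large gap between groups would flag the data or model for closer inspection. The groups and records below are invented for the example.

```python
from collections import Counter

def approval_rates(records):
    """records: iterable of (group, approved) pairs.
    Returns the approval rate per group -- a crude proxy for spotting
    disparate outcomes that would warrant closer review."""
    totals, approved = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

# Invented example data: (group label, whether an application was approved).
data = [("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates(data)
print(rates)  # a large gap between groups would flag the data for review
```

Real conformity testing would of course go far beyond a single rate comparison, but this is the basic shape of an outcome-parity check.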

Other ideas include preserving data to help trace any problems and having AI systems clearly spell out their capabilities and limitations. Users should be told when they're interacting with a machine and not a human while humans should be in charge of the system and have the final say on decisions such as rejecting an application for welfare benefits, the report said.

EU leaders said they also wanted to open a debate on when to allow facial recognition in remote identification systems, which are used to scan crowds and compare people's faces to those on a database. It's considered the "most intrusive form" of the technology and is prohibited in the EU except in special cases.

___

For all of the AP's technology coverage: https://apnews.com/apf-technology.

___

See the rest here:
EU Proposes Rules for Artificial Intelligence to Limit Risks - The New York Times

EASA Expects Certification of First Artificial Intelligence for Aircraft Systems by 2025 – Aviation Today

The European Aviation Safety Agency expects to certify the first integration of artificial intelligence technology in aircraft systems by 2025.

The European Aviation Safety Agency (EASA) has published its Artificial Intelligence Roadmap in anticipation of the first certification for the use of AI in aircraft systems coming in 2025.

EASA published the 33-page roadmap after establishing an internal AI task force in October 2018 to identify staff competency, standards, protocols and methods to be developed ahead of moving forward with actual certification of new technologies. A representative for the agency confirmed in an emailed statement to Avionics International that they have already received project submissions from industry designed to provide certification for AI pilot assistance technology.

"The Agency has indeed received its first formal applications for the certification of AI-based aircraft systems in 2019. It is not possible to be more specific on these projects at this stage due to confidentiality. The date in our roadmap, 2025, corresponds to the project certification target date anticipated by the applicants," the representative for EASA said.

In the roadmap document, EASA notes that moving forward, the agency will define AI as any technology that appears to emulate the performance of a human. The roadmap further divides AI applications into model-driven AI and data-driven AI, while linking these two forms of AI to breakthroughs in machine learning, deep learning and the use of neural networks to enable applications such as computer vision and natural language processing.

"In order to be ready by 2025 for the first certification of AI-based systems, the first guidance should be available in 2021, so that the applicant can be properly guided during the development phase. The guidance that EASA will develop will apply to the use of AI in all domains, including aircraft certification as well as drone operations," the representative for EASA said.

Eight specific domains of aviation are identified as potentially being impacted by the introduction of AI to aviation systems, including the following:

The roadmap foresees the potential use of machine learning for flight control law optimization, sensor calibration, fuel tank quantity evaluation and icing detection, these being among the aircraft systems where machine learning could replace the need for human analysis of possible parameter combinations and their associated values.

The roadmap for EASA's certification of AI in aircraft systems. Photo: EASA

EASA also points to several research and development projects and prototypes featuring the use of artificial intelligence for air traffic management that are already available. These include the Singapore ATM Research Institute's application that generates resolution proposals to assist controllers in resolving airspace system conflicts. There is also the Single European Sky ATM Research Joint Undertaking's BigData4ATM project, tasked with analyzing passenger-centric geo-located data to identify patterns in airline passenger behavior, and the Machine Learning of Speech Recognition Models for Controller Assistance (MALORCA) project, which has developed a speech recognition tool for use by controllers.

Several aviation industry research and development initiatives have been looking at the integration of AI and ML into aircraft systems and air traffic management infrastructure in recent years as well. During a November visit to its facility in Toulouse, Thales showed some of the technologies it is researching and developing, including a virtual assistant that will provide both voice and flight intention recognition to pilots as part of its next-generation FlytX avionics suite.

Zurich, Switzerland-based startup Daedalean is also developing what it describes as the aviation industry's first autopilot system to feature an advanced form of artificial intelligence (AI) known as deep convolutional feed-forward neural networks. The system is to feature software that can replicate a human pilot's level of decision-making and situational awareness.

NATS, the U.K.'s air navigation service provider (ANSP), is also pioneering an artificial intelligence platform for aviation. At Heathrow Airport, the company has installed 18 ultra-HD 4K cameras on the air traffic control tower and others along the airport's northern runway that are feeding images to a platform developed by Searidge Technology called AIMEE. The goal is for AIMEE's advanced neural network framework to become capable of identifying when a runway is cleared for takeoffs and arrivals in low-visibility conditions.

As the industry moves forward with more AI developments, EASA plans to continually update its roadmap with new insights. The roadmap proposes a possible classification of AI and ML applications separated into three levels based on the degree of human oversight of the machine. Level 1 covers the use of artificial intelligence for routine tasks, while Level 2 features applications where a human is performing a function and the machine is monitoring. Level 3 features full autonomy, where machines perform functions with no human intervention.
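The three-level scheme described above can be sketched as a small classification. The class and function names are my own; the level descriptions are paraphrased from the roadmap summary in this article.

```python
from enum import Enum

class OversightLevel(Enum):
    """EASA's proposed AI/ML classification by degree of human oversight.
    (Names are illustrative; levels follow the roadmap summary.)"""
    ASSISTANCE = 1  # Level 1: AI assists a human with routine tasks
    MONITORING = 2  # Level 2: human performs the function, machine monitors
    AUTONOMY = 3    # Level 3: machine performs the function, no human intervention

def describe(level: OversightLevel) -> str:
    """Return a short description of the oversight expected at each level."""
    return {
        OversightLevel.ASSISTANCE: "human in command, AI handles routine tasks",
        OversightLevel.MONITORING: "human performs, machine monitors",
        OversightLevel.AUTONOMY: "full autonomy, no human intervention",
    }[level]

print(describe(OversightLevel.AUTONOMY))  # full autonomy, no human intervention
```

Framing the levels as an ordered enumeration reflects the roadmap's intent: certification requirements scale with how much decision-making authority the machine holds.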

"At this stage version 1.0 identifies key elements that the Agency considers should be the foundation of its human-centric approach: integration of the ethical dimension, and the new concepts of trustworthiness, learning assurance and explainability of AI," the representative for EASA said. "This should be the main takeaway for the agency's industry stakeholders. In essence, the roadmap aims at establishing the baseline for the Agency's vision on the safe development of AI."

More here:
EASA Expects Certification of First Artificial Intelligence for Aircraft Systems by 2025 - Aviation Today