Category Archives: Artificial Intelligence

Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, the software that diagnoses health problems does not have to pass a randomized clinical trial, as devices such as insulin pumps or new medications do.

A new study published Monday in JAMA Network discusses a trial of 68 patients undergoing elective noncardiac surgery under general anesthesia. The objective of the trial was to determine whether a predictive early warning system for possible hypotension (low blood pressure) might reduce the time-weighted average of hypotension episodes during surgery.

In other words, not only would the device and its software track the patient's mean arterial blood pressure, but it would also sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 mm of mercury (Hg) within the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.

Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That meant the anesthesiologist had no early warning of an impending hypotension event and could take no preemptive action.

Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared with a median of 0.44 mm Hg in the control group. The median time in hypotension per patient was 32.7 minutes in the control group, versus just 8.0 minutes among the other patients. Perhaps most important, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.
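The "time-weighted average" metric combines how far the mean arterial pressure (MAP) drops below the 65 mm Hg threshold with how long it stays there, divided by the length of the procedure. A minimal sketch, using hypothetical readings and assuming MAP is sampled at fixed intervals, shows how such a figure can be computed:

```python
# Sketch: time-weighted average hypotension from a MAP trace.
# Hypothetical data; assumes MAP is sampled at fixed intervals and that
# "depth" is the amount by which MAP falls below the 65 mm Hg threshold.

def time_weighted_hypotension(map_readings_mmhg, interval_min, threshold=65.0):
    """Return (time-weighted average depth in mm Hg, minutes spent below threshold)."""
    total_minutes = len(map_readings_mmhg) * interval_min
    depth_times_time = 0.0  # (mm Hg below threshold) x minutes
    minutes_low = 0.0
    for reading in map_readings_mmhg:
        if reading < threshold:
            depth_times_time += (threshold - reading) * interval_min
            minutes_low += interval_min
    return depth_times_time / total_minutes, minutes_low

# Example: a 60-minute case sampled every minute, with a 10-minute dip to 58 mm Hg.
trace = [75.0] * 50 + [58.0] * 10
twa, minutes = time_weighted_hypotension(trace, interval_min=1.0)
print(f"time-weighted average: {twa:.2f} mm Hg, {minutes:.0f} min of hypotension")
```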

The algorithm used by the device was developed by a different group of researchers, who had trained the software on thousands of arterial waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a FloTrac IQ sensor with the early warning software installed and a HemoSphere monitor. The devices are made by Edwards Lifesciences, and five of the eight researchers who developed the algorithm were affiliated with Edwards. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.

In an editorial at JAMA Network, associate editor Derek Angus wrote:

The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.

Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.

By Paul Ausick


Artificial Intelligence (AI) And The Law: Helping Lawyers While Avoiding Biased Algorithms – Forbes


Artificial intelligence (AI) has the potential to help every sector of the economy. There is a challenge, though, in sectors that have fuzzier analysis and the potential to train with data that can perpetuate human biases. A couple of years ago, I described the problem with bias in an article about machine learning (ML) applied to criminal recidivism. It's worth revisiting the sector, as times have changed in how bias is addressed. One way is to look at areas of the legal profession where bias is a much smaller factor.

Tax law has a lot more explicit rules than, for instance, many criminal laws do. As much as there have been issues with ML applied to human resource systems (Amazon's canceled HR system), employment law is another area where states and nations have created explicit rules. The key is choosing the right legal area. The focus, according to conversations with people at Blue J Legal, is on areas with strong rules as opposed to standards. The former allow for clear feature engineering, while the latter don't have the specificity to train an accurate model.

Blue J Legal arose from a University of Toronto course started by the founders, combining legal and computer science skills to try to predict cases. The challenge was, as it has always been in software, to understand the features of the data set in the detail needed to properly analyze the problem. As mentioned, tax law was picked as the first focus. Tax law has a significant set of rules around which features can be designed. The data can then be appropriately labeled. After their early work on tax, they moved to employment.

The products are aimed at lawyers who are evaluating their cases. The goal is to provide attorneys with statistical analysis of the strengths and weaknesses of each case.

It is important to note that employment is a category of legal issues. Each issue must be looked at separately, and each issue has its own set of features. For instance, in today's gig economy, "Is the worker a contractor or an employee?" is a single issue. The Blue J Legal team mentioned that they found between twenty and seventy features for each issue they've addressed.

That makes clear that feature engineering is a larger challenge than the training of the ML system. That has been mentioned by many people, but still too many folks have focused on the inference engine because it's cool. Turning data into information is a more critical part of the ML challenge.

Once the system is trained, the next challenge is to get the lawyers to provide the right information in order to analyze their current cases. They (or their clerks) must enter information about each case that matches the features to be analyzed.

On a slightly technical note, their model uses decision trees. They did try a random forest model, of interest in other fields, but found that their accuracy dropped.
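For illustration only (Blue J Legal has not published its modeling pipeline), a minimal scikit-learn sketch shows how a single decision tree can be compared against a random forest on labeled case features; the synthetic data below stands in for the twenty to seventy engineered features per issue mentioned above.

```python
# Sketch: comparing a single decision tree with a random forest on synthetic,
# labeled case features. Illustrative only; not Blue J Legal's actual model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in for engineered features such as "sets own hours" or "supplies own
# tools", with an outcome label (e.g. contractor vs. employee).
X, y = make_classification(n_samples=500, n_features=40, n_informative=12, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("decision tree accuracy:", cross_val_score(tree, X, y, cv=5).mean())
print("random forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```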

Blue J Legal claims their early version provides 80-90% accuracy.

By removing variables that can drive bias, such as male vs. female, they are able to train a more general system. That's good from a pure law point of view, but unlike the parole system mentioned above, it could cause problems in a lawyer's analysis of a problem. For instance, if a minority client is treated more poorly in the legal system, a lawyer should know about that. The Blue J Legal team says they did look at bias, both in their Canadian and U.S. legal data, but state that the two areas they are addressing don't see bias that would change the results in a significant way.
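One coarse way to keep such variables out of the model, sketched below with hypothetical column names, is simply to drop protected attributes from the feature set before training; note that doing so does not by itself guarantee fairness, since other features can act as proxies.

```python
# Sketch: dropping protected attributes before training a model.
# Column names are hypothetical; removing them does not by itself guarantee
# fairness, because remaining features can still act as proxies.
import pandas as pd

PROTECTED = ["sex", "race", "age"]

def strip_protected(cases: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the case features with protected attributes removed."""
    return cases.drop(columns=[c for c in PROTECTED if c in cases.columns])

cases = pd.DataFrame({
    "sex": ["F", "M"],
    "sets_own_hours": [1, 0],
    "supplies_own_tools": [0, 1],
})
print(strip_protected(cases).columns.tolist())  # ['sets_own_hours', 'supplies_own_tools']
```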

One area of bias they've also ignored is that of judges, for the same reason as above. I'm sure it's also ignored for marketing reasons. As they move to legal areas with fewer rules and more standards, I could see strong value for lawyers in knowing whether the judge to whom a case has been assigned has strong biases based on features of the case or the plaintiff. Still, if they analyzed the judges, I could see other bias being added, as judges might be biased against lawyers using the system. It's an interesting conundrum that will have to be addressed in the future.

There is a clear ethical challenge in front of lawyers that exists regardless of bias. For instance, if the system comes back and tells the lawyer that 70% of similar cases go against the plaintiff, should the lawyer take the case? Law is a fluid profession, with many cases being similar but not identical. How does the lawyer decide if the specific client is in the 70% or the 30%? How can a system's information help a lawyer decide to take a case with a lower probability or reject one with a higher probability? The hope is, as with any other profession, that the lawyer would carefully evaluate the results. However, as in all industries, busy people take shortcuts, and far too many people have taken the old acronym GIGO to no longer mean "garbage in, garbage out," but rather "garbage in, gospel out."

One way to help is to provide a legal memo. The Blue J Legal system provides a list of lawyer-provided answers and similar cases for each answer. Not being a lawyer, I can't tell how well that has been done, but it is a critical part of the system. Just as too many developers focus on the engine rather than on feature engineering, they also focus on the engine while minimizing the need to explain it. In all areas where machine learning is applied, but especially in the professions, black-box systems can't be trusted. Analysis must be supported in order for lawyers to understand and evaluate how the generic decision impacts their specific cases.

Law is an interesting avenue in which to test the integration between AI and people. Automation won't be replacing the lawyer any time soon, but as AI evolves it will increasingly be able to assist the people in the industry, helping them become more educated about their options and use their time more efficiently. It's the balance between the two that will be interesting to watch.


Can we realistically create laws on artificial intelligence? – Open Access Government

Regulation is an industry, but effective regulation is an art. There are a number of recognised principles that should be considered when regulating an activity, such as efficiency, stability and regulatory structure, general principles, and the resolution of conflicts between these various competing principles. With the regulation of artificial intelligence (AI) technology, a number of factors make the centralised application of these principles difficult to realise, but AI should be considered as part of any relevant regulatory regime.

Because AI technology is still developing, it is difficult to discuss the regulation of AI without reference to a specific technology, field or application where these principles can be more readily applied. For example, optical character recognition (OCR) was considered to be AI technology when it was first developed, but today, few would call it AI.

Consider a few technologies that are commonly described as AI today: predictive technology for marketing and navigation, technology for ridesharing applications, commercial flight routing, and even email spam filters.

These technologies are as different from each other as they are from OCR technology. This demonstrates why the regulation of AI technology (from a centralised regulatory authority or based on a centralised regulatory principle) is unlikely to truly work.

Efficiency-related principles include the promotion of competition between participants by avoiding restrictive practices that impair the provision of new AI-related technologies. This subsequently lowers barriers to entry for such technologies, providing freedom of choice between AI technologies and creating competitive neutrality between existing and new AI technologies (i.e. a level playing field). OCR technology was initially unregulated, at least by a central authority, and it was therefore allowed to develop and become faster and more efficient, even though there are many situations where OCR-processed documents contained a large number of errors.

In a similar manner, a centralised regulation regime that encompasses all uses of AI mentioned above from a central authority or based on a single focus (e.g. avoiding privacy violations) would be inefficient.

The reason for this inefficiency is clear: the function and markets for these technologies are unrelated.

Strict regulations that require all AI applications to evaluate and protect the privacy of users might not only result in the failure to achieve any meaningful goals to protect privacy, but could also render those AI applications commercially unacceptable for reasons that are completely unrelated to privacy. For example, a regulation that requires drivers to be assigned based on privacy concerns could result in substantially longer wait times for riders if the closest drivers have previously picked up the passenger at that location. However, industry-specific regulation to address privacy issues might make sense, depending on the specific technology and specific concern within that industry.

Stability-related principles include providing incentives for the prudent assessment and management of risk, such as minimum standards, the use of regulatory requirements that are based on market values and taking prompt action to accommodate new AI technologies.

Using OCR as an example, if minimum standards for an acceptable number of errors in a document had been implemented, the result would have been difficult to police, because documents have different levels of quality and some documents would no doubt produce fewer errors than others. In the case of OCR, the market was able to provide sufficient regulation, as companies competed with each other for the best solution, but for other AI technologies there may be a need for industry-specific regulations to ensure minimum standards or other stability-related principles.

In regard to regulatory structure, the relevant principles include following a functional/institutional approach to regulation, coordinating regulation by different agencies, and using a small number of regulatory agencies for any regulated activity. In that regard, there is no single regulatory authority that could implement and administer AI regulations across all markets, activities and technologies without simply adding a new regulatory regime on top of the ones already in place.

For example, in the US many state and federal agencies have OCR requirements that centre on specific technologies/software for document submission, and software application providers can either make their application compatible with those requirements or can seek to be included on a list of allowed applications. They do the latter by working with the state or federal agency to ensure that documents submitted using their applications will be compatible with the agency's uses. For other AI technologies there may be similar industry-specific regulations that make sense in the context of the existing regulatory structure for that industry.

General principles of regulation include identifying the specific objectives of a regulation, cost-effectiveness, equitable distribution of regulatory costs, flexibility of regulation and a stable relationship between the regulators and regulated parties. Some of these principles could have been implemented for OCR, such as a specific objective in terms of the number of errors per page. However, the other factors would have been more difficult to determine, and again would depend on an industry- or market-specific analysis. For many specific applications in specific industries, these factors could be addressed even though an omnibus regulatory structure was not implemented.

Preventing conflict between these different objectives requires a regime in which all of them can be achieved. For AI, that would require an industry- or market-specific approach, and in the US that approach has generally been followed for AI-related technologies. As discussed, OCR-related technology is regulated by specific federal, state and local agencies as it pertains to their specific missions. Facial recognition is another AI technology, and a regime of federal, state and local regulation is taking shape. Many of these authorities have used facial recognition for different applications, with some recent push-back on the use of the technology by privacy advocates.

It is only when conflicts develop between such different regimes that input from a centralised authority may be required.

In the United States, an industry- and market-based approach is generally being adopted. In the 115th Congress, thirty-nine bills were introduced that had the phrase "artificial intelligence" in the text of the bill, and four were enacted into law. A large number of such bills were also introduced in the 116th Congress. As of April 2017, twenty-eight states had introduced some form of regulations for autonomous vehicles, and a large number of states and cities have proposed or implemented regulations for facial recognition technology.

While critics will no doubt assert that nothing much is being done to regulate AI, a simplistic and heavy-handed approach to AI regulation, reacting to a single concern such as privacy, is unlikely to satisfy these principles of regulation and should be avoided. Artificial intelligence requires regulation with real intelligence.

By Chris Rourk, Partner at Jackson Walker, a member of Globalaw.



Ohio to Analyze State Regulations with Artificial Intelligence – Governing

(TNS) A new Ohio initiative aims to use artificial intelligence to guide an overhaul of the state's laws and regulations.

Lt. Gov. Jon Husted said his staff will use an AI software tool, developed for the state by an outside company, to analyze the state's regulations, numbered at 240,000 in a recent study by a conservative think tank, and narrow them down for further review.

Husted compared the tool to an advanced search engine that will automatically identify and group together like terms, getting more sophisticated the more it's used.
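Husted's "advanced search engine" description suggests something along the lines of text-similarity search over the regulatory code. The sketch below is purely illustrative and is not the state's vendor software; it groups rules with overlapping language using TF-IDF vectors and cosine similarity.

```python
# Sketch: flagging regulations with overlapping language using TF-IDF vectors
# and cosine similarity. Illustrative only; not Ohio's actual vendor tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

regulations = [
    "Applicants must obtain a permit before operating a food truck.",
    "A permit is required prior to the operation of mobile food vendors.",
    "Boilers must be inspected annually by a licensed engineer.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(regulations)
similarity = cosine_similarity(vectors)

# Pairs with a high score are candidates for human review as possibly redundant.
# The 0.15 cutoff is arbitrary and chosen only for this toy example.
for i in range(len(regulations)):
    for j in range(i + 1, len(regulations)):
        if similarity[i, j] > 0.15:
            print(f"possible overlap: rule {i} and rule {j} (score {similarity[i, j]:.2f})")
```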

He said the goal is to use the tool to streamline state regulations, such as eliminating permitting requirements deemed to be redundant, which is a long-standing policy goal of the Republicans who lead the state government.

"This gives us the capability to look at everything that's been done in 200 years in the state of Ohio and make sense of it," Husted said.

The project is part of two Husted-led efforts: the Common Sense Initiative, a state project to review regulations with the goal of cutting government red tape, and InnovateOhio, a Husted-led office that aims to use technology to improve Ohio's government operations.

Husted announced the project on Thursday at a meeting of the Small Business Advisory Council. The panel advises the state on government regulations and tries to identify challenges they can pose for business owners.

State officials sought bids for the project last summer, authorized through the state budget. Starting soon, Husted's staff will load the state's laws and regulations into the software, with the goal of producing recommendations for proposed law and rule changes before the summer.

Husted's office has authority to spend as much as $1.2 million on the project, although it could cost less, depending on how many user licenses they request.

"I don't know if it will be a small success, a medium success, or a large success," Husted said. "I don't want to over-promise, but we have great hope for it."

©2020 The Plain Dealer, Cleveland. Distributed by Tribune Content Agency, LLC.


Artificial intelligence and digital initiatives to be scrutinised by MEPs | News – EU News

Commissioner Breton will present to and debate with MEPs the initiatives that the Commission will put forward on 19 February:

When: Wednesday, 19 February, 16.00 to 18.00

Where: European Parliament, Spaak building, room 3C050, Brussels

Live streaming: You can also follow the debate on EP Live

A Strategy for Europe Fit for the Digital Age

The Commission has announced in its 2020 Work Programme that it will put forward a Strategy for Europe Fit for the Digital Age, setting out its vision on how to address the challenges and opportunities brought about by digitalisation.

Boosting the single market for digital services and introducing regulatory rules for the digital economy should be addressed in this strategy. It is expected to build on issues covered by the e-commerce directive and the platform-to-business regulation.

White Paper on Artificial Intelligence

The White Paper on Artificial Intelligence (AI) will aim to support its development and uptake in the EU, as well as to ensure that European values are fully respected. It should identify key opportunities and challenges, analyse regulatory options and put forward proposals and policy actions related to, e.g. ethics, transparency, safety and liability.

European Strategy for Data

The purpose of the Data Strategy would be to explore how to make the most of the enormous value of non-personal data as an ever-expanding and re-usable asset in the digital economy. It will build in part on the free flow of non-personal data regulation.


Artificial intelligence makes a splash in efforts to protect Alaska’s ice seals and beluga whales – Stories – Microsoft

When Erin Moreland set out to become a research zoologist, she envisioned days spent sitting on cliffs, drawing seals and other animals to record their lives, in an effort to understand their activities and protect their habitats.

Instead, Moreland found herself stuck in front of a computer screen, clicking through thousands of aerial photographs of sea ice as she scanned for signs of life in Alaskan waters. It took her team so long to sort through each survey, akin to looking for lone grains of rice on vast mounds of sand, that the information was outdated by the time it was published.

"There's got to be a better way to do this," she recalls thinking. "Scientists should be freed up to contribute more to the study of animals and better understand what challenges they might be facing. Having to do something this time-consuming holds them back from what they could be accomplishing."

That better way is now here, an idea that began, unusually enough, with the view from Moreland's Seattle office window and her fortuitous summons to jury duty. She and her fellow National Oceanic and Atmospheric Administration scientists will now use artificial intelligence this spring to help monitor endangered beluga whales, threatened ice seals, polar bears and more, shaving years off the time it takes to get data into the right hands to protect the animals.

The teams are training AI tools to distinguish a seal from a rock and a whale's whistle from a dredging machine's squeak as they seek to understand the marine mammals' behavior and help them survive amid melting ice and increasing human activity.

Moreland's project combines AI technology with improved cameras on a NOAA turboprop airplane that will fly over the Beaufort Sea north of Alaska this April and May, scanning and classifying the imagery to produce a population count of ice seals and polar bears that will be ready in hours instead of months. Her colleague Manuel Castellote, a NOAA affiliate scientist, will apply a similar algorithm to the recordings he'll pick up from equipment scattered across the bottom of Alaska's Cook Inlet, helping him quickly decipher how the shrinking population of endangered belugas spent its winter.

The data will be confirmed by scientists, analyzed by statisticians and then reported to people such as Jon Kurland, NOAAs assistant regional administrator for protected resources in Alaska.

Kurland's office in Juneau is charged with overseeing conservation and recovery programs for marine mammals around the state and its waters, and with helping guide all the federal agencies that issue permits or carry out actions that could affect those that are threatened or endangered.

Of the four types of ice seals in the Bering Sea (bearded, ringed, spotted and ribbon), the first two are classified as threatened, meaning they are likely to become in danger of extinction within the foreseeable future. The Cook Inlet beluga whales are already endangered, having steadily declined to a population of only 279 in last year's survey, from an estimate of about a thousand 30 years ago.

"Individual groups of beluga whales are isolated and don't breed with others or leave their home, so if this population goes extinct, no one else will come in; they're gone forever," says Castellote. "Other belugas wouldn't survive there because they don't know the environment. So you'd lose that biodiversity forever."

Yet recommendations by Kurlands office to help mitigate the impact of human activities such as construction and transportation, in part by avoiding prime breeding and feeding periods and places, are hampered by a lack of timely data.

"There's basic information that we just don't have now, so getting it will give us a much clearer picture of the types of responses that may be needed to protect these populations," Kurland says. "In both cases, for the whales and seals, this kind of data analysis is cutting-edge science, filling in gaps we don't have another way to fill."

The AI project was born years ago, when Moreland would sit at her computer in NOAA's Marine Mammal Laboratory in Seattle and look across Lake Washington toward Microsoft's headquarters in Redmond, Washington. She felt sure there was a technological solution to her frustration, but she didn't know anyone with the right skills to figure it out.

She hit the jackpot one week while serving on a jury in 2018. She overheard two fellow jurors discussing AI during a break in the trial, so she began talking with them about her work. One of them connected her with Dan Morris from Microsoft's AI for Earth program, who suggested they pitch the problem as a challenge that summer at the company's Hackathon, a week-long competition during which software developers, programmers, engineers and others collaborate on projects. Fourteen Microsoft engineers signed up to work on the problem.

"Across the wildlife conservation universe, there are tons of scientists doing boring things, reviewing images and audio," Morris says. "Remote equipment lets us collect all kinds of data, but scientists have to figure out how to use that data. Spending a year annotating images is not only a bad use of their time, but the questions get answered way later than they should."

Moreland's idea wasn't as simple as it may sound, though. While there are plenty of models to recognize people in images, there were none, until now, that could find seals, especially in real time in aerial photography. But the hundreds of thousands of examples NOAA scientists had classified in previous surveys helped the technologists, who are using them to train the AI models to recognize which photographs and recordings contained mammals and which didn't.

"Part of the challenge was that there were 20 terabytes of data of pictures of ice, and working on your laptop with that much data isn't practical," says Morris. "We had daily handovers of hard drives between Seattle and Redmond to get this done. But the cloud makes it possible to work with all that data and train AI models, so that's how we're able to do this work, with Azure."

Moreland's first ice seal survey was in 2007, flying in a helicopter based on an icebreaker. Scientists collected 90,000 images and spent months scanning them, but only found 200 seals. It was a tedious, imprecise process.

Ice seals live largely solitary lives, making them harder to spot than animals that live in groups. Surveys are also complicated because the aircraft have to fly high enough to keep seals from getting scared and diving, but low enough to get high-resolution photos that enable scientists to differentiate a ringed seal from a spotted seal, for example. The weather in Alaska, often rainy and cloudy, further complicates efforts.

Subsequent surveys improved by pairing thermal and color cameras and using modified planes that had a greater range to cover more area and could fly higher up to be quieter. Even so, thermal interference from dirty ice and reflections off jumbled ice made it difficult to determine what was an animal and what wasn't.

And then there was the problem of manpower to go along with all the new data. The 2016 survey produced a million pairs of thermal and color images, which a previous software system narrowed down to 316,000 hot spots that the scientists had to manually sort through and classify. It took three people six months.
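The workflow described above, thermal hot spots paired with color imagery and then sorted into animals and false alarms, maps naturally onto a two-stage detection pipeline. The sketch below is a simplified, hypothetical illustration of that idea, not the actual NOAA/Microsoft system.

```python
# Sketch: a two-stage aerial-survey pipeline in the spirit described above:
# detect warm "hot spots" in the thermal frame, then classify the matching
# color-image crop. Hypothetical; not the actual NOAA/Microsoft code.
import numpy as np

def find_hot_spots(thermal, threshold=0.8):
    """Return (row, col) coordinates noticeably warmer than the surrounding ice."""
    ys, xs = np.where(thermal > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def classify_crop(color_crop):
    """Stand-in for a trained classifier (seal species, polar bear, or false alarm)."""
    # A real system would run a CNN trained on previously labeled survey crops here.
    return "ringed_seal" if color_crop.mean() > 0.5 else "not_an_animal"

def survey(thermal, color, crop_size=32):
    """Count animal detections in one paired thermal/color frame."""
    half = crop_size // 2
    detections = []
    for y, x in find_hot_spots(thermal):
        crop = color[max(0, y - half):y + half, max(0, x - half):x + half]
        label = classify_crop(crop)
        if label != "not_an_animal":
            detections.append(label)
    return detections

# Toy frame: one warm pixel standing in for a basking seal on otherwise cold ice.
thermal = np.zeros((256, 256))
thermal[100, 100] = 1.0
color = np.full((256, 256), 0.6)
print(survey(thermal, color))  # ['ringed_seal']
```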


SparkCognition Partners with Informatica to Enable Customers to Operationalize Artificial Intelligence and Solve Problems at Scale – Yahoo Finance

SparkCognition's Data Science Automation Platform to Offer Integration With Informatica's Data Management Solutions

AUSTIN, Texas, Feb. 19, 2020 /PRNewswire/ -- SparkCognition, a leading AI company, announced a partnership with enterprise cloud data management company Informatica to transform the data science process for companies. By combining Informatica's data management capabilities with SparkCognition's AI-powered data science automation platform, Darwin, users will benefit from an integrated end-to-end environment where they can gather and manage their data, create a custom and highly accurate model based on that data, and deploy the model to inform business decisions.


"There has never been a more critical time to leverage the power of data and today's leading businesses recognize that data not only enables them to stay afloat, but provides them with the competitive edge necessary to innovate within their industries," said Ronen Schwartz, EVP, global technical and ecosystem strategy and operations at Informatica. "Together with SparkCognition, we are helping users tackle some of the most labor and time-intensive aspects of data science in a user-friendly fashion that allows users of all skill levels to quickly solve their toughest business problems."

Informatica is the leading data integration and data management company, which offers users the ability to collect their data from even the most fragmented sources across hybrid enterprises, discover data, then clean and prepare datasets to create and expand data model features. SparkCognition is the world's leading industrial artificial intelligence company, and its Darwin data science automation platform accelerates the creation of end-to-end AI solutions to deliver business-wide outcomes. The partnership will allow users to seamlessly discover data, pull their data from virtually anywhere using Informatica's data ingestion capabilities, then input the data into the Darwin platform. Through the new integration, users will streamline workflows and speed up the model building process to provide value to their business faster.

"At SparkCognition, we're strong believers that this new decade will be dominated by model-driven enterprisescompanies who have embraced and operationalized artificial intelligence," said Dana Wright, Global Vice President of Sales at SparkCognition. "We recognize this shared mission with Informatica and are excited to announce our partnership to help companies solve their toughest business problems using artificial intelligence."

To learn more about Darwin, visit sparkcognition.com/product/darwin/

About SparkCognition:

With award-winning machine learning technology, a multinational footprint, and expert teams focused on defense, IIoT, and finance, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50, and recognized three years in a row on CB Insights' AI 100, by visiting http://www.sparkcognition.com.

For Media Inquiries:

Cara Schwartzkopf, SparkCognition, cschwartzkopf@sparkcognition.com, 512-956-5491




Implementing artificial intelligence in the insurance industry: Cass breakfast briefing – City, University of London

Cass event addresses implementation, benefits and challenges of AI in insurance

How is artificial intelligence (AI) affecting the insurance industry, and what should insurance providers consider before implementing this technology? These were just two points of discussion during the "Artificial Intelligence and Insurance: Managing Risks and Igniting Innovation" breakfast event held at Cass Business School.

Gianvito Lanzolla, Professor of Strategy and Founding Director of the Digital Leadership Research Centre, was joined by technology and insurance professionals to explore the feasibility of digitisation for the industry, as well as how and when AI should be implemented.

Professor Lanzolla presented his joint research (carried out with Cass research student Lei Fang and Reader in Actuarial Science Dr Andreas Tsanakas) about the impact of digitisation on management attention in the banking and insurance industries, highlighting the ambivalent consequences of digitisation: on the one hand there could be scope for increased coordination, but this potentially comes at the expense of increased groupthink and systemic risk.

Santiago Restrepo, Director at global professional services consultancy BDO UK LLP, then spoke about how businesses should make the case for using AI. This includes assessing market needs, company objectives and potential scalability of the technology.

Founder and CEO of data insights provider Digital Fineprint, Bo-Erik Abrahamsson demonstrated the importance of data, and how raw data mining could be transformed into useful insights for insurance companies.

Paul Willoughby, Head of IT Strategy, Innovation and Architecture at insurance provider Beazley, then stressed the importance of only using AI where it was critically required and would "make the boat move faster", citing the example of anonymous chatbots as a piece of technology that does not necessarily deliver satisfactory insights or levels of customer service.

The event concluded with a Q&A session with audience members.

Reflecting on the discussion, Professor Lanzolla said:

"Artificial intelligence can have many clear advantages for industries that are heavily reliant on data, such as insurance, but there are also considerations that need to be made before implementing the technology."

"Common considerations should include the implications of AI-related black boxing, fault lines between new digital skills and legacy skills, loss of emotional engagement with an organisation, and risks to organisational stability when turbocharging some areas of an organisation with AI while leaving others lagging behind."

The event was introduced and chaired by Darren Munday, Partner at Internal Consulting Group (Global) and Visiting Fellow at the Digital Leadership Research Centre.

Find out more about upcoming events at Cass.


Why Bill Gates thinks gene editing and artificial intelligence could save the world – Yahoo News

Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation's premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

"We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I'm very excited about this," Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science.

Such tools promise to have a dramatic impact on several of the biggest challenges on the agenda for the Bill & Melinda Gates Foundation, created by the tech guru and his wife in 2000.

When it comes to fighting malaria and other mosquito-borne diseases, for example, CRISPR-Cas9 and other gene-editing tools are being used to change the insects' genome to ensure that they can't pass along the parasites that cause those diseases. The Gates Foundation is investing tens of millions of dollars in technologies to spread those genomic changes rapidly through mosquito populations.

Millions more are being spent to find new ways of fighting sickle-cell disease and HIV in humans. Gates said techniques now in development could leapfrog beyond the current state of the art for immunological treatments, which require the costly extraction of cells for genetic engineering, followed by the re-infusion of those modified cells in hopes that they'll take hold.

"For sickle-cell disease, the vision is to have in-vivo gene editing techniques, that you just do a single injection using vectors that target and edit these blood-forming cells which are down in the bone marrow, with very high efficiency and very few off-target edits," Gates said. A similar in-vivo therapy could provide a functional cure for HIV patients, he said.

Bill Gates shows how the rise of computational power available for artificial intelligence is outpacing Moore's Law. (GeekWire Photo / Todd Bishop)

The rapid rise of artificial intelligence gives Gates further cause for hope. He noted that the computational power available for AI applications has been doubling every three and a half months on average, dramatically improving on the two-year doubling rate for chip density that's described by Moore's Law.
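The gap between those two doubling rates compounds quickly. A back-of-the-envelope calculation using the figures quoted above (3.5 months for AI compute versus 24 months for chip density) makes the difference concrete:

```python
# Back-of-the-envelope comparison of the two doubling rates quoted above.
ai_doubling_months = 3.5     # AI training compute, per the figure Gates cited
chip_doubling_months = 24.0  # Moore's Law doubling rate for chip density

for years in (1, 2, 5):
    months = 12 * years
    ai_growth = 2 ** (months / ai_doubling_months)
    chip_growth = 2 ** (months / chip_doubling_months)
    print(f"after {years} yr: AI compute x{ai_growth:,.0f} vs. chip density x{chip_growth:.1f}")
```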

One project is using AI to look for links between maternal nutrition and infant birth weight. Other projects focus on measuring the balance of different types of microbes in the human gut, using high-throughput gene sequencing. The gut microbiome is thought to play a role in health issues ranging from digestive problems to autoimmune diseases to neurological conditions.

"This is an area that needed these sequencing tools and the high-scale data processing, including AI, to be able to find the patterns," Gates said. "There's just too much going on there if you had to do it, say, with paper and pencil to understand the 100 trillion organisms and the large amount of genetic material there. This is a fantastic application for the latest AI technology."

Similarly, organs on a chip could accelerate the pace of biomedical research without putting human experimental subjects at risk.

"In simple terms, the technology allows in-vitro modeling of human organs in a way that mimics how they work in the human body," Gates said. "There's some degree of simplification. Most of these systems are single-organ systems. They don't reproduce everything, but some of the key elements we do see there, including some of the disease states, for example, with the intestine, the liver, the kidney. It lets us understand drug kinetics and drug activity."

Bill Gates explains how gene-drive technology can cause genetic changes to spread rapidly in mosquito populations. (GeekWire Photo / Todd Bishop)


The Gates Foundation has backed a number of organ-on-a-chip projects over the years, including one experiment that's using lymph-node organoids to evaluate the safety and efficacy of vaccines. At least one organ-on-a-chip venture based in the Seattle area, Nortis, has gone commercial thanks in part to Gates' support.

High-tech health research tends to come at a high cost, but Gates argues that these technologies will eventually drive down the cost of biomedical innovation.

He also argues that funding from governments and nonprofits will have to play a role in the world's poorer countries, where those who need advanced medical technologies essentially have no voice in the marketplace.

"If the solution of the rich country doesn't scale down, then there's this awful thing where it might never happen," Gates said during a Q&A with Margaret Hamburg, who chairs the AAAS board of directors.

But if the acceleration of medical technologies does manage to happen around the world, Gates insists that could have repercussions on the world's other great challenges, including the growing inequality between rich and poor.

"Disease is not only a symptom of inequality," he said, "but it's a huge cause."

Other tidbits from Gates' talk:

Read Gates' prepared remarks in a posting to his Gates Notes blog, or watch the video on the AAAS YouTube channel.


How Will Your Career Be Impacted By Artificial Intelligence? – Forbes

Reject it or embrace it. Either way, artificial intelligence is here to stay.

Nobody can predict the future with absolute precision.

But when it comes to the impact of artificial intelligence (AI) on peoples careers, the recent past provides some intriguing clues.

Rhonda Scharf's book, Alexa Is Stealing Your Job: The Impact of Artificial Intelligence on Your Future, offers some insights and predictions that are well worth our consideration.

In the first two parts of my conversation with Rhonda (see "What Role Will Artificial Intelligence Play In Your Life?" and "Artificial Intelligence, Privacy, And The Choices You Must Make"), we discussed the growth of AI in recent years and talked about the privacy concerns of many AI users.

In this final part, we look at how AI is affecting, and will continue to affect, people's career opportunities.

Spoiler alert: there's some good news here.

Rodger Dean Duncan: You quote one researcher who says robots are "not here to take away our jobs, they're here to give us a promotion." What does that mean?

Rhonda Scharf: Much like the computer revolution, we need jobs to maintain the systems that have been created. This creates new, desirable jobs where humans work alongside technology. These new jobs are called the trainers, explainers, and sustainers.

Trainers will teach a machine what it needs to do. For instance, we need to teach a machine that when I yell at it (loud voice), I may be frustrated. It needs to be taught that when I ask it to call Robert, who Robert is and what phone number should be used. Once the machine has a basic understanding, it continues to self-learn, but it needs the basics taught to it (like children do).


Explainers are human experts who explain computer behavior to others. They would explain, for example, why a self-driving car performed in a certain way, or why AI sold shares in a stock at a certain point of the day. In the same way that lawyers can explain why someone acted in self-defense when his or her actions initially seemed inappropriate, we need explainers to tell us why a machine did what it did.

Sustainers ensure that our systems are functioning correctly, safely, and responsibly. In the future, they'll ensure that AI systems uphold ethical standards and that industrial robots don't harm humans, because robots don't understand that we're fragile, unlike machinery.

There are going to be many jobs that AI can't replace. We need to think, evolve, interpret, and relate. As smart as a chatbot can be, it will never have the same qualities as my best friend. We will need people for the intangible side of relationships.

Duncan: What should people look for to maximize their careers through the use of AI?

Scharf: According to the World Economic Forum, the top 10 in-demand skills for 2020 include complex problem-solving, critical thinking, creativity, emotional intelligence, judgment and decision-making, and cognitive flexibility. These are the skills that will provide value to your organization. By demonstrating all of these skills, you will be positioning yourself as a valuable resource. We'll have AI to handle basic tasks and administrative work. People need complex thinking to propel organizations forward.

Duncan: Bonus: What question do you wish I had asked, and how would you respond?

If you don't want to be left behind, you'd better get educated on AI.

Scharf: I wish you had asked how I feel about artificial intelligence. Am I afraid for my future, for the future of my children, and for my children's children?

The answer is no. I don't think that AI is all the doom and gloom that has been publicized. I also don't believe we're about to lead a life of leisure and have the world operate on its own, either.

As history has shown us, these types of life-altering changes happen periodically. This is the next one. I believe the way we work is about to change, the same way it changed during the Industrial Revolution, the same way it evolved in response to automation. The way we live is about to change. (Think pasteurization and food storage.) Those who adapt will have a better life for it, and those who refuse to adapt will suffer.

I'm confident that I will still be employed for as long as I want to be. My children have only known a life with computers and are open to change, and my future grandchildren will only know a life with AI.

I'm excited about our future. I'm excited about what AI can bring to my life. I embrace Alexa and all her friends and welcome them into my home.
