Category Archives: Machine Learning

ServiceNow pulls on its platforms, talks up machine learning, analytics in biggest release since ex-SAP boss took reins – The Register

As is the way with the 21st century, IT companies are apt to get meta and ServiceNow is no exception.

In its biggest product release since the arrival of SAP revenue-boosting Bill McDermott as new CEO, the cloudy business process company is positioning itself as the "platform of platforms". Which goes to show, if nothing else, that platformization also applies to platforms.

To avoid plunging into an Escher-esque tailspin of abstraction, it is best to look at what Now Platform Orlando actually does and who, if anyone, it might help.

The idea is that ServiceNow's tools make routine business activity much easier and slicker. To this the company is adding intelligence, analytics and AI, it said.

Take the arrival of a new employee. They might need to be set up on HR and payroll systems, get access to IT equipment and applications, have facilities management give them the right desk and workspace, be given building security access and perhaps have to sign some legal documents.

Rather than multiple people doing each of these tasks with different IT systems, ServiceNow will make one poor soul do it using its single platform, which accesses all the other prerequisite applications, said David Flesh, ServiceNow product marketing director.

It is also chucking chatbots at that luckless staffer. In January, ServiceNow bought Passage AI, a startup that helps customers build chatbots in multiple languages. It is using this technology to create virtual assistants to help with some of the most common requests that hit HR and IT service desks: password resets, getting access to Wi-Fi, that kind of thing.

This can also mean staffers don't have to worry about where they send requests. If, for example, they've just found out they're going to become a parent, they can fire questions at an agent rather than HR, their boss or the finance team. The firm said: "Agents are a great way for employees to find information, and they abstract that organizational complexity."

ServiceNow has also introduced machine learning, for example, in IT operations management, which uses systems data to identify when a service is degrading and what could be causing the problem. "You get more specific information about the cause and suggested actions to take to actually remediate the problem," Flesh said.

Customers looking to use this feature will still have to train the machine learning models on historical datasets from their own operations and validate them, as per the usual ML pipeline. But ServiceNow says it makes the process more graphical and brings its own knowledge of common predictors of operational problems.
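To make that "usual ML pipeline" concrete, here is a minimal sketch of training and validating a degradation-prediction model on historical operations data. It is an illustration only, not ServiceNow's implementation; the telemetry features and labels are hypothetical.

```python
# Minimal sketch of the "train on historic data, then validate" loop described
# above. This is NOT ServiceNow's implementation; the feature names and data
# are hypothetical stand-ins for historic IT-operations records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical historic telemetry: CPU load, error rate, queue depth ...
X = rng.normal(size=(5000, 3))
# ... labelled with whether the service later degraded.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1).astype(int)

# Hold out part of the history to validate the model before trusting it.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_val, model.predict(X_val)))
```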

Lastly, analytics is a new feature in the update. Users can include key performance indicators in the workflows they create, and the platform includes the tools to track and analyse those KPIs and suggest how to improve performance. It also suggests useful KPIs.

Another application of the analytics tools is for IT teams - traditionally the company's core users - monitoring cloud services. ServiceNow said it helps optimise organisations' cloud usage by "making intelligent recommendations on managing usage across business hours, choosing the right resources and enforcing usage policies".

With McDermott's arrival and a slew of new features and customer references, ServiceNow is getting a lot of attention, but many of these technologies exist in other products.

There are independent robotic process automation (RPA) vendors who build automation into common tasks, while application vendors are also introducing automation within their own environments. But as application and platform upgrade cycles are sluggish, and RPA has proved difficult to scale, ServiceNow may find a receptive audience for its, er, platform of platforms.


Read the original post:
ServiceNow pulls on its platforms, talks up machine learning, analytics in biggest release since ex-SAP boss took reins - The Register

2020-2027 Machine Learning in Healthcare Cybersecurity Industry Trends Survey and Prospects Report – 3rd Watch News

Summary: Global Machine Learning in Healthcare Cybersecurity Market 2020 by Company, Regions, Type and Application, Forecast to 2027

This report gives in-depth research into the overall state of the Machine Learning in Healthcare Cybersecurity market and an overview of its growth. It also details the crucial elements of the market across the major global regions. A number of primary and secondary research methods were used to collect the data required to complete this report, and several industry-based analytical techniques were applied for a better understanding of the market.

It explains the key market drivers, trends, restraints and opportunities to give the precise data readers require and expect. It also analyses how these factors affect the market globally, helping readers make a wider and better-informed choice about market establishment. The Machine Learning in Healthcare Cybersecurity market's growth and developments are studied and a detailed overview is given.

Get a sample copy of this report: Global Machine Learning in Healthcare Cybersecurity Market 2020, Forecast to 2027

Leading Key Players: (further details are available in the sample; please enquire via the links provided)

This report studies the Machine Learning in Healthcare Cybersecurity market status and outlook globally and for major regions, from the angles of players, countries, product types and end industries; it analyses the top players in the global market and splits the Machine Learning in Healthcare Cybersecurity market by product type and applications/end industries.

Regions and Countries Level Analysis

Regional analysis is another highly comprehensive part of the research and analysis study of the global Machine Learning in Healthcare Cybersecurity market presented in the report. This section sheds light on the sales growth of different regional and country-level Machine Learning in Healthcare Cybersecurity markets. For the historical and forecast period 2015 to 2027, it provides detailed and accurate country-wise volume analysis and region-wise market size analysis of the global Machine Learning in Healthcare Cybersecurity market.

The report offers in-depth assessment of the growth and other aspects of the Machine Learning in Healthcare Cybersecurity market in important countries (regions), including:

North America (United States, Canada and Mexico)

Europe (Germany, France, UK, Russia and Italy)

Asia-Pacific (China, Japan, Korea, India and Southeast Asia)

South America (Brazil, Argentina, etc.)

Middle East & Africa (Saudi Arabia, Egypt, Nigeria and South Africa)

THIS REPORT PROVIDES COMPREHENSIVE ANALYSIS OF

Key market segments and sub-segments

Evolving market trends and dynamics

Changing supply and demand scenarios

Quantifying market opportunities through market sizing and market forecasting

Tracking current trends/opportunities/challenges

Competitive insights

Opportunity mapping in terms of technological breakthroughs

Machine Learning in Healthcare Cybersecurity Application Services

Reasons to buy

Identify high potential categories and explore further market opportunities based on detailed value and volume analysis

Existing and new players can analyze key distribution channels to identify and evaluate trends and opportunities

Gain an understanding of the total competitive landscape based on detailed brand share analysis to plan effective market positioning

Our team of analysts has placed significant emphasis on changes expected in the market, providing a clear picture of the opportunities that can be tapped over the next five years to drive revenue expansion

Analysis of key macroeconomic indicators such as real GDP, nominal GDP, consumer price index, household consumption expenditure, and population (by age group, gender, rural-urban split, employment and unemployment rate). It also includes an economic summary of the country along with labor market and demographic trends.

TABLE OF CONTENTS:

Global Machine Learning in Healthcare Cybersecurity Market 2020 by Company, Regions, Type and Application, Forecast to 2027

1 Market Overview

2 Manufacturers Profiles

3 Sales, Revenue and Market Share by Manufacturer

4 Global Market Analysis by Regions

5 North America by Country

6 Europe by Country

7 Asia-Pacific by Regions

8 South America by Country

9 Middle East & Africa by Countries

10 Market Segment by Type

11 Global Machine Learning in Healthcare Cybersecurity Market Segment by Application

12 Market Forecast

13 Sales Channel, Distributors, Traders and Dealers

14 Research Findings and Conclusion

15 Appendix

Enquire for complete report: Global Machine Learning in Healthcare Cybersecurity Market 2020, Forecast to 2027

About Reports and Markets:

REPORTS AND MARKETS is not just another company in this domain but is part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, and analysis & forecast data for a wide range of sectors, for both government and private agencies across the world. The company's database is updated on a daily basis and covers a variety of industry verticals, including Food & Beverage, Automotive, Chemicals and Energy, IT & Telecom, Consumer, Healthcare, and many more. Each and every report goes through the appropriate research methodology and is checked by professionals and analysts.

Contact Info

Reports and Markets

Sanjay Jain

Manager Partner Relations & International Marketing

http://www.reportsandmarkets.com

Connect with Us: LinkedIn | Facebook | Twitter

Ph: +1-352-353-0818 (US)

Excerpt from:
2020-2027 Machine Learning in Healthcare Cybersecurity Industry Trends Survey and Prospects Report - 3rd Watch News

Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time – CNBC

A computer image created by Nexu Science Communication together with Trinity College in Dublin, shows a model structurally representative of a betacoronavirus which is the type of virus linked to COVID-19.

Source: NEXU Science Communication | Reuters

Research has gone digital, and medical science is no exception. As the novel coronavirus continues to spread, for instance, scientists searching for a treatment have drafted IBM's Summit supercomputer, the world's most powerful high-performance computing facility, according to the Top500 list, to help find promising candidate drugs.

One way of treating an infection could be with a compound that sticks to a certain part of the virus, disarming it. With tens of thousands of processors spanning an area as large as two tennis courts, the Summit facility at Oak Ridge National Laboratory (ORNL) has more computational power than 1 million top-of-the-line laptops. Using that muscle, researchers digitally simulated how 8,000 different molecules would interact with the virus, a Herculean task for your typical personal computer.

"It took us a day or two, whereas it has traditionally taken months on a normal computer," said Jeremy Smith, director of the University of Tennessee/ORNL Center for Molecular Biophysics and principal researcher in the study.

Simulations alone can't prove a treatment will work, but the project was able to identify 77 candidate molecules that other researchers can now test in trials. The fight against the novel coronavirus is just one example of how supercomputers have become an essential part of the process of discovery. The $200 million Summit and similar machines also simulate the birth of the universe, explosions from atomic weapons and a host of events too complicated or too violent to recreate in a lab.

The current generation's formidable power is just a taste of what's to come. Aurora, a $500 million Intel machine currently under installation at Argonne National Laboratory, will herald the long-awaited arrival of "exaflop" facilities capable of a billion billion calculations per second (five times more than Summit) in 2021 with others to follow. China, Japan and the European Union are all expected to switch on similar "exascale" systems in the next five years.

These new machines will enable new discoveries, but only for the select few researchers with the programming know-how required to efficiently marshal their considerable resources. What's more, technological hurdles lead some experts to believe that exascale computing might be the end of the line. For these reasons, scientists are increasingly attempting to harness artificial intelligence to accomplish more research with less computational power.

"We as an industry have become too captive to building systems that execute the benchmark well without necessarily paying attention to how systems are used," says Dave Turek, vice president of technical computing for IBM Cognitive Systems. He likens high-performance computing record-seeking to focusing on building the world's fastest race car instead of highway-ready minivans. "The ability to inform the classic ways of doing HPC with AI becomes really the innovation wave that's coursing through HPC today."

Just getting to the verge of exascale computing has taken a decade of research and collaboration between the Department of Energy and private vendors. "It's been a journey," says Patricia Damkroger, general manager of Intel's high-performance computing division. "Ten years ago, they said it couldn't be done."

While each system has its own unique architecture, Summit, Aurora, and the upcoming Frontier supercomputer all represent variations on a theme: they harness the immense power of graphical processing units (GPUs) alongside traditional central processing units (CPUs). GPUs can carry out more simultaneous operations than a CPU can, so leaning on these workhorses has let Intel and IBM design machines that would have otherwise required untold megawatts of energy.

IBM's Summit supercomputer currently holds the record for the world's fastest supercomputer.

Source: IBM

That computational power lets Summit, which is known as a "pre-exascale" computer because it runs at 0.2 exaflops, simulate one single supernova explosion in about two months, according to Bronson Messer, the acting director of science for the Oak Ridge Leadership Computing Facility. He hopes that machines like Aurora (1 exaflop) and the upcoming Frontier supercomputer (1.5 exaflops) will get that time down to about a week. Damkroger looks forward to medical applications. Where current supercomputers can digitally model a single heart, for instance, exascale machines will be able to simulate how the heart works together with blood vessels, she predicts.

But even as exascale developers take a victory lap, they know that two challenges mean the add-more-GPUs formula is likely approaching a plateau in its scientific usefulness. First, GPUs are strong but dumb, best suited to simple operations such as arithmetic and geometric calculations that they can crowdsource among their many components. Researchers have written simulations to run on flexible CPUs for decades, and shifting to GPUs often requires starting from scratch.

GPUs have thousands of cores for simultaneous computation, but each handles only simple instructions.

Source: IBM

"The real issue that we're wrestling with at this point is how do we move our code over" from running on CPUs to running on GPUs, says Richard Loft, a computational scientist at the National Center for Atmospheric Research, home of Top500's 44th ranking supercomputerCheyenne, a CPU-based machine "It's labor intensive, and they're difficult to program."

Second, the more processors a machine has, the harder it is to coordinate the sharing of calculations. For the climate modeling that Loft does, machines with more processors better answer questions like "what is the chance of a once-in-a-millennium deluge," because they can run more identical simulations simultaneously and build up more robust statistics. But they don't ultimately enable the climate models themselves to get much more sophisticated.

For that, the actual processors have to get faster, a feat that bumps up against what's physically possible. Faster processors need smaller transistors, and current transistors measure about 7 nanometers. Companies might be able to shrink that size, Turek says, but only to a point. "You can't get to zero [nanometers]," he says. "You have to invoke other kinds of approaches."

If supercomputers can't get much more powerful, researchers will have to get smarter about how they use the facilities. Traditional computing is often an exercise in brute forcing a problem, and machine learning techniques may allow researchers to approach complex calculations with more finesse.


Take drug design. A pharmacist considering a dozen ingredients faces countless possible recipes, each varying the amounts of the compounds, which could take a supercomputer years to simulate. An emerging machine learning technique known as Bayesian optimization asks: does the computer really need to check every single option? Rather than systematically sweeping the field, the method helps isolate the most promising drugs by implementing common-sense assumptions. Once it finds one reasonably effective solution, for instance, it might prioritize seeking small improvements with minor tweaks.
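To illustrate the idea, here is a from-scratch sketch of Bayesian optimization on a toy objective: a Gaussian-process surrogate is fitted to the candidates scored so far, and an "expected improvement" rule picks the next one to try. The objective function stands in for an expensive docking simulation and is purely hypothetical.

```python
# Toy sketch of Bayesian optimisation: fit a surrogate (Gaussian process) to the
# points scored so far, then pick the next candidate by expected improvement
# instead of sweeping every option. The objective is a stand-in for a slow
# simulation; this is illustrative, not a real drug-design pipeline.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_score(x):
    """Pretend this is a slow simulation scoring one formulation."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

candidates = np.linspace(-2, 2, 400).reshape(-1, 1)   # the full search space
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))                   # a few initial trials
y = np.array([expensive_score(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):                                   # 10 evaluations instead of 400
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    # Expected improvement: prefer points likely to beat the best score so far.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_score(x_next[0]))

print("best score found:", y.max(), "at x =", X[np.argmax(y)][0])
```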

In trial-and-error fields like materials science and cosmetics, Turek says that this strategy can reduce the number of simulations needed by 70% to 90%. Recently, for instance, the technique has led to breakthroughs in battery design and the discovery of a new antibiotic.

Fields like climate science and particle physics use brute-force computation in a different way, by starting with simple mathematical laws of nature and calculating the behavior of complex systems. Climate models, for instance, try to predict how air currents conspire with forests, cities, and oceans to determine global temperature.

Mike Pritchard, a climatologist at the University of California, Irvine, hopes to figure out how clouds fit into this picture, but most current climate models are blind to features smaller than a few dozen miles wide. Crunching the numbers for a worldwide layer of clouds, which might be just a couple hundred feet tall, simply requires more mathematical brawn than any supercomputer can deliver.

Unless the computer understands how clouds interact better than we do, that is. Pritchard is one of many climatologists experimenting with training neural networks, a machine learning technique that looks for patterns by trial and error, to mimic cloud behavior. This approach takes a lot of computing power up front to generate realistic clouds for the neural network to imitate. But once the network has learned how to produce plausible cloudlike behavior, it can replace the computationally intensive laws of nature in the global model, at least in theory. "It's a very exciting time," Pritchard says. "It could be totally revolutionary, if it's credible."
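A minimal sketch of that surrogate approach, assuming a toy stand-in for the expensive cloud physics: generate training data from the costly calculation offline, fit a small neural network to imitate it, then call the cheap network in its place.

```python
# Sketch of the surrogate idea: run the expensive physics enough times to build
# a training set, fit a neural network to imitate it, then call the cheap
# network inside the larger model. The "physics" below is a toy function, not a
# real cloud-resolving scheme.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_cloud_physics(state):
    """Stand-in for a costly high-resolution cloud calculation."""
    return np.sin(state[:, 0]) * np.exp(-state[:, 1] ** 2) + 0.1 * state[:, 2]

rng = np.random.default_rng(0)
states = rng.uniform(-2, 2, size=(20000, 3))   # toy temperature, humidity, wind
targets = expensive_cloud_physics(states)      # generated offline, at high cost

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(states, targets)

# Inside the global model, the cheap emulator replaces the expensive call.
new_states = rng.uniform(-2, 2, size=(5, 3))
print("emulator:", emulator.predict(new_states))
print("physics :", expensive_cloud_physics(new_states))
```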

Companies are preparing their machines so researchers like Pritchard can take full advantage of the computational tools they're developing. Turek says IBM is focusing on designing AI-ready machines capable of extreme multitasking and quickly shuttling around huge quantities of information, and the Department of Energy contract for Aurora is Intel's first that specifies a benchmark for certain AI applications, according to Damkroger. Intel is also developing an open-source software toolkit called oneAPI that will make it easier for developers to create programs that run efficiently on a variety of processors, including CPUs and GPUs.

As exascale and machine learning tools become increasingly available, scientists hope they'll be able to move past the computer engineering and focus on making new discoveries. "When we get to exascale that's only going to be half the story," Messer says. "What we actually accomplish at the exascale will be what matters."

See the original post:
Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time - CNBC

Owkin and the University of Pittsburgh Launch a Collaboration to Advance Cancer Research With AI and Federated Learning – AiThority

Owkin, a startup that deploys AI and Federated Learning technologies to augment medical research and enable scientific discoveries, announces a collaboration with the University of Pittsburgh. This pilot leverages the high-quality datasets and world-class medical research within Pitt's Departments of Biomedical Informatics and Pathology, as well as Owkin's pioneering technologies and research platform. Collaborations such as these have the potential to advance clinical research and drug development.

Pitt researchers led by Michael Becich, MD, PhD, Associate Vice Chancellor for Informatics in the Health Sciences and Chairman and Distinguished University Professor of the Department of Biomedical Informatics (DBMI), will team up with Owkin to develop and validate prognostic machine learning models. The pilot project will then have the potential to expand into several key therapeutic areas for the University.


"The Pitt Department of Biomedical Informatics, in partnership with the Department of Pathology, is committed to improving biomedical research and clinical care through the innovative application of informatics and best practices in next-generation data sharing. This collaboration with Owkin will expand our innovations in the computational pathology space," Dr. Becich said. "Our currently funded projects explore areas such as the intersection of genomics and machine learning applied to histopathologic imaging (computational pathology) to broaden our understanding of the role of the tumor microenvironment for precision immuno-oncology."

This partnership makes it possible for Pitt to join the Owkin Loop, a federated network of US and European academic medical centers that collaborate with Owkin to generate new insights from high-quality, curated, research-grade, multi-modal patient data captured in clinical trials or research cohorts. Loop-generated insights can inform pharmaceutical drug development strategy, from biomarker discovery to clinical trial design and product differentiation. Owkin seeks to create a movement in medicine by establishing federated learning at the core of future research.


Federated learning technologies enable researchers in different institutions and different geographies to collaborate and train multicentric AI models on heterogeneous datasets, resulting in better predictive performance and higher generalizability. Data does not move; only the algorithms travel, thus protecting an institution's data governance and privacy. Furthermore, Owkin's data use is compliant with local ethical body consent processes and data compliance regulations such as HIPAA and GDPR.
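For a sense of how the weights, rather than the data, travel, here is a minimal federated-averaging sketch over three synthetic "hospital" datasets. It illustrates the principle only and is not Owkin's software; the data, model, and site sizes are invented.

```python
# Minimal federated-averaging sketch: each hospital trains on its own data and
# only model weights travel; no raw records leave a site. Synthetic data and a
# simple logistic-regression model, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_site_data(n):
    """Synthetic patient features and binary outcomes for one institution."""
    X = rng.normal(size=(n, 4))
    y = (X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

sites = [make_site_data(n) for n in (200, 500, 350)]   # three hospitals
w = np.zeros(4)                                        # shared model weights

def local_update(w, X, y, lr=0.1, epochs=20):
    """One site's gradient-descent pass; only the updated weights are returned."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

for _ in range(10):
    local_weights = [local_update(w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    # Federated averaging: weight each site's update by its sample count.
    w = np.average(local_weights, axis=0, weights=sizes)

print("aggregated model weights:", np.round(w, 2))
```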

"We're thrilled to launch this project with Dr. Becich and his team at Pitt. The quality and size of the University's research cohorts, in combination with the DBMI's mandate to bring together healthcare physicians and innovative academics to work on some of the most cutting-edge science, makes this collaboration a great opportunity to develop predictive AI models and to scale other research in the future. Owkin is proud to bring its expertise in machine learning technologies and data scientists to the table to foment new clinical insights," said Meriem Sefta, Owkin Head of Partnerships.


See the original post:
Owkin and the University of Pittsburgh Launch a Collaboration to Advance Cancer Research With AI and Federated Learning - AiThority

4 ways to fine-tune your AI and machine learning deployments – TechRepublic

Life cycle management of artificial intelligence and machine learning initiatives is vital in order to rapidly deploy projects with up-to-date and relevant data.

Image: Chinnawat Ngamsom, Getty Images/iStockphoto

An institutional finance company wanted to improve time to market on the artificial intelligence (AI) and machine learning (ML) applications it was deploying. The goal was to reduce time to delivery on AI and ML applications, which had been taking 12 to 18 months to develop. The long lead times jeopardized the company's ability to meet its time-to-market goals in areas of operational efficiency, compliance, risk management, and business intelligence.

SEE: Prescriptive analytics: An insider's guide (free PDF) (TechRepublic)

After adopting life-cycle management software for its AI and ML application development and deployment, the company was able to reduce its AI and ML application time to market to days, and in some cases hours. The process improvement enabled corporate data scientists to spend 90% of their time on data model development, instead of spending 80% of their time resolving technical challenges caused by unwieldy deployment processes.

This is important because the longer you extend your big data and AI and ML modeling, development, and delivery processes, the greater the risk that you end up with modeling, data, and applications that are already out of date by the time they are ready to be implemented. In the compliance area alone, this creates risk and exposure.

"Three big problems enterprises face as they roll out artificial intelligence and machine learning projects is the inability to rapidly deploy projects, data performance decay, and compliance-related liability and losses," said Stu Bailey, chief technical officer of ModelOP, which provides software that deploys, monitors, and governs data science AI and ML models.

SEE:The top 10 languages for machine learning hosted on GitHub (free PDF)(TechRepublic)

Bailey believes that most problems arise out of a lack of ownership and collaboration between data science, IT, and business teams when it comes to getting data models into production in a timely manner. In turn, these delays adversely affect profitability and time-to-business insight.

"Another reason that organizations have difficulty managing the life cycle of their data models is that there are many different methods and tools today for producing data science and machine language models, but no standards for how they're deployed and managed," Bailey said.

The management of big data, AI, and ML life cycles can be a prodigious task that goes beyond having software and automation that does some of the "heavy lifting." Many organizations also lack policies and procedures for these tasks. In this environment, data can rapidly become dated, application logic and business conditions can change, and the new behaviors that humans must teach to machine learning applications can become neglected.

SEE:Telemedicine, AI, and deep learning are revolutionizing healthcare (free PDF)(TechRepublic)

How can organizations ensure that the time and talent they put into their big data, AI, and ML applications remain relevant?

Most organizations acknowledge that collaboration between data science, IT, and end users is important, but they don't necessarily follow through. Effective collaboration between departments depends on clearly articulated policies and procedures that everyone adheres to in the areas of data preparation, compliance, speed to market, and learning for ML.

Companies often fail to establish regular intervals for updating logic and data for big data, AI, and ML applications in the field. The learning update cycle should be continuous--it's the only way you can assure concurrency between your algorithms and the world in which they operate.

Like their transaction system counterparts, there will come a time when some AI and ML applications will have seen their day. This is the end of their life cycles, and the appropriate thing to do is retire them.

If you can automate some of your life cycle maintenance functions for big data, AI, and ML, do so. Automation software can automate handoffs between data science, IT, and production, which makes the deployment process that much easier.


Excerpt from:
4 ways to fine-tune your AI and machine learning deployments - TechRepublic

Web developers don’t need a math degree to get started with ML – JAXenter

What are notable entry hurdles for software and web developers who want to get started with machine learning? Google AI researchers Carrie J. Cai, Senior Research Scientist, Google Research and Philip J. Guo, Assistant Professor, UC San Diego, decided to find out. They analyzed the results of a survey among 645 TensorFlow.js users and published their results in a research paper as well as on the Google AI Blog.

First of all, what is TensorFlow.js? It was developed by Google, same as the popular ML framework TensorFlow. As the official website states, TensorFlow.js is a library for machine learning in JavaScript that lets you use ML directly in the browser or in Node.js.

In the TensorFlow.js survey, most participants were software or web developers and were not using ML as part of their primary job. The most common motivations for learning ML were finding the idea intellectually fascinating (26%) or believing that ML is the future (17%). A specific job task or use case was not reported very frequently; only 11% of respondents gave this answer.

Let's take a look at the respondents' challenges and expectations in machine learning.

According to the survey, developers struggle most with their own lack of conceptual understanding of ML. This shows in the initial stages of choosing when to apply ML as well as in creating the architecture of a neural net and carrying out the model training.

The respondents often felt they faced these challenges due to a lack of experience in advanced mathematics, leading to imposter syndrome. As the Google AI researchers point out, though, this isn't the case. While an advanced math degree is useful, it is not necessary, despite mathematical terminology in API documentation that may suggest otherwise.

The survey also aimed to find out what software developers expect from ML frameworks and what features they would like to see implemented. It turned out that the respondents were often interested in pre-made ML models that should be customizable for a specific use case, and thus provide explicit support for modification.

Tips for best practices were also high up on the list of common wishes, and just-in-time strategic pointers such as diagnostic checks could improve the developer experience as well. Some respondents voiced their desire for learning-by-doing tutorials in ML frameworks.

As the study concludes:

Software developers are now treating modern ML frameworks as lightweight vehicles for learning and tinkering. Our work provides evidence that, even with the existence of such APIs, developers still face substantial hurdles due to a perceived lack of conceptual and mathematical understanding. In the future, ML frameworks could help by de-mystifying theoretical concepts and synthesizing ML best practices into just-in-time, practical tips.

See the Google AI Blog for more information.

Originally posted here:
Web developers don't need a math degree to get started with ML - JAXenter

How is AI and machine learning benefiting the healthcare industry? – Health Europa

In order to help build increasingly effective care pathways in healthcare, modern artificial intelligence technologies must be adopted and embraced. Events such as the AI & Machine Learning Convention are essential in providing medical experts around the UK access to the latest technologies, products and services that are revolutionising the future of care pathways in the healthcare industry.

AI has the potential to save the lives of current and future patients and is something that is starting to be seen across healthcare services across the UK. Looking at diagnostics alone, there have been large scale developments in rapid image recognition, symptom checking and risk stratification.

AI can also be used to personalise health screening and treatments for cancer, benefiting not only the patient but clinicians too, enabling them to make the best use of their skills, informing decisions and saving time.

The potential impact of AI on the NHS is clear, so much so that NHS England is setting up a national artificial intelligence laboratory to enhance patient care and research.

The Health Secretary, Matt Hancock, commented that AI had enormous power to improve care, save lives and ensure that doctors had more time to spend with patients, so he pledged £250m to boost the role of AI within the health service.

The AI and Machine Learning Convention is part of Mediweek, the largest healthcare event in the UK, and as a new feature of the Medical Imaging Convention and the Oncology Convention, the AI and Machine Learning expo offers an effective CPD-accredited education programme.

Hosting over 50 professional-led seminars, the lineup includes leading artificial intelligence and machine learning experts such as NHS England's Dr Minai Bakhai, the Faculty of Clinical Informatics' Professor Jeremy Wyatt, and Professor Claudia Pagliari from the University of Edinburgh.

Other speakers in the seminar programme come from leading organisations such as the University of Oxford, King's College London, and the School of Medicine at the University of Nottingham.

The event takes place at the National Exhibition Centre, Birmingham, on 17 and 18 March 2020. Tickets to the AI and Machine Learning Convention are free and gain you access to the other seven shows within MediWeek.

Health Europa is proud to partner with the AI and Machine Learning Convention; click here to get your tickets.


View post:
How is AI and machine learning benefiting the healthcare industry? - Health Europa

What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps – The Register

Achieving production-level governance with machine-learning projects currently presents unique challenges. A new space of tools and practices is emerging under the name MLOps. The space is analogous to DevOps but tailored to the practices and workflows of machine learning.

Machine learning models make predictions for new data based on the data they have been trained on. Managing this data in a way that can be safely used in live environments is challenging, and is one of the key reasons why 80 per cent of data science projects never make it to production, an estimate from Gartner.

It is essential that the data is clean, correct, and safe to use without any privacy or bias issues. Real-world data can also continuously change, so inputs and predictions have to be monitored for any shifts that may be problematic for the model. These are complex challenges that are distinct from those found in traditional DevOps.

DevOps practices are centred on the build and release process and continuous integration. Traditional development builds are packages of executable artifacts compiled from source code. Non-code supporting data in these builds tends to be limited to relatively small static config files. In essence, traditional DevOps is geared to building programs consisting of sets of explicitly defined rules that give specific outputs in response to specific inputs.

In contrast, machine-learning models make predictions by indirectly capturing patterns from data, not by formulating all the rules. A characteristic machine-learning problem involves making new predictions based on known data, such as predicting the price of a house using known house prices and details such as the number of bedrooms, square footage, and location. Machine-learning builds run a pipeline that extracts patterns from data and creates a weighted machine-learning model artifact. This makes these builds far more complex and the whole data science workflow more experimental. As a result, a key part of the MLOps challenge is supporting multi-step machine learning model builds that involve large data volumes and varying parameters.
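A minimal sketch of such a build, assuming a hypothetical house-price dataset: the pipeline trains a model, then writes out both the weighted artifact and a build record (data hash, parameters, hold-out metric) so the exact build can be traced later. The file names and record fields are illustrative, not a standard MLOps schema.

```python
# Sketch of an ML "build": run a training pipeline over a dataset and emit a
# weighted model artifact plus the metadata needed to reproduce it later.
# "house_prices.csv" and the record fields are hypothetical.
import hashlib, json
import joblib
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

params = {"alpha": 1.0, "test_size": 0.2, "random_state": 0}

df = pd.read_csv("house_prices.csv")   # hypothetical training data
data_hash = hashlib.sha256(pd.util.hash_pandas_object(df).values.tobytes()).hexdigest()

X = df[["bedrooms", "square_footage", "lat", "lon"]]
y = df["price"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=params["test_size"], random_state=params["random_state"])

model = Ridge(alpha=params["alpha"]).fit(X_tr, y_tr)

# The build output: the artifact itself plus everything needed to trace it.
joblib.dump(model, "model.joblib")
with open("build_record.json", "w") as f:
    json.dump({"data_sha256": data_hash,
               "params": params,
               "r2_holdout": model.score(X_te, y_te)}, f, indent=2)
```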

To run projects safely in live environments, we need to be able to monitor for problem situations and see how to fix things when they go wrong. There are pretty standard DevOps practices for how to record code builds in order to go back to old versions. But MLOps does not yet have standardisation on how to record and go back to the data that was used to train a version of a model.

There are also special MLOps challenges to face in the live environment. There are largely agreed DevOps approaches for monitoring for error codes or an increase in latency. But it's a different challenge to monitor for bad predictions. You may not have any direct way of knowing whether a prediction is good, and may have to instead monitor indirect signals such as customer behaviour (conversions, rate of customers leaving the site, any feedback submitted). It can also be hard to know in advance how well your training data represents your live data. For example, it might match well at a general level but there could be specific kinds of exceptions. This risk can be mitigated with careful monitoring and cautious management of the rollout of new versions.
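One simple way to watch for these problems is to compare live input distributions against the training data and to track an indirect business signal alongside them. The sketch below is illustrative; the thresholds and figures are invented.

```python
# Sketch of monitoring when there is no direct "error code" for a bad
# prediction: check the live inputs for drift away from the training data and
# watch an indirect outcome signal. Thresholds and numbers are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # feature at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=2_000)    # same feature in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"input drift detected (KS statistic {stat:.3f}); consider retraining")

# Indirect outcome signal: did conversions drop after the new model version?
conversions_before = 0.112   # fraction of sessions converting, previous week
conversions_after = 0.097    # fraction since the rollout
if conversions_after < 0.9 * conversions_before:
    print("conversion rate fell more than 10% since rollout; roll back or investigate")
```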

The effort involved in solving MLOps challenges can be reduced by leveraging a platform and applying it to the particular case. Many organisations face a choice of whether to use an off-the-shelf machine-learning platform or try to put an in-house platform together themselves by assembling open-source components.

Some machine-learning platforms are part of a cloud provider's offering, such as AWS SageMaker or AzureML. This may or may not appeal, depending on the cloud strategy of the organisation. Other platforms are not cloud-specific and instead offer self-install or a custom hosted solution (e.g., Databricks' MLflow).

Instead of choosing a platform, organisations can choose to assemble their own. This may be a preferred route when requirements are too niche to fit a current platform, such as needing integrations to other in-house systems or if data has to be stored in a particular location or format. Choosing to assemble an in-house platform requires learning to navigate the ML tool landscape. This landscape is complex, with different tools specialising in different niches, and in some cases there are competing tools approaching similar problems in different ways (see the Linux Foundation's LF AI project for a visualization, or categorised lists from the Institute for Ethical AI).

The Linux Foundation's diagram of MLOps tools

For organisations using Kubernetes, the Kubeflow project presents an interesting option as it aims to curate a set of open-source tools and make them work well together on Kubernetes. The project is led by Google, and top contributors (as listed by IBM) include IBM, Cisco, Caicloud, Amazon, and Microsoft, as well as ML tooling provider Seldon, Chinese tech giant NetEase, Japanese tech conglomerate NTT, and hardware giant Intel.

Challenges around reproducibility and monitoring of machine learning systems are governance problems. They need to be addressed in order to be confident that a production system can be maintained and that any challenges from auditors or customers can be answered. For many projects these are not the only challenges, as customers might reasonably expect to be able to ask why a prediction concerning them was made. In some cases this may also be a legal requirement, as the European Union's General Data Protection Regulation states that a "data subject" has a right to "meaningful information about the logic involved" in any automated decision that relates to them.

Explainability is a data science problem in itself. Modelling techniques can be divided into black-box and white-box, depending on whether the method can naturally be inspected to provide insight into the reasons for particular predictions. With black-box models, such as proprietary neural networks, the options for interpreting results are more restricted and more difficult to use than the options for interpreting a white-box linear model. In highly regulated industries, it can be impossible for AI projects to move forward without supporting explainability. For example, medical diagnosis systems may need to be highly interpretable so that they can be investigated when things go wrong or so that the model can aid a human doctor. This can mean that projects are restricted to working with models that admit of acceptable interpretability. Making black-box models more interpretable is a fast-growth area, with new techniques rapidly becoming available.
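As a small illustration of the white-box case, a linear model exposes per-feature contributions directly through its coefficients, which is the kind of inspection a black-box network does not offer without extra tooling. The features and data below are synthetic.

```python
# Sketch of white-box interpretability: a linear model's coefficients show how
# each input pushed a particular prediction. Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "cholesterol"]   # hypothetical inputs
X = rng.normal(size=(1000, 3))
y = (1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

patient = X[0]
contributions = clf.coef_[0] * patient   # per-feature contribution to the log-odds
for name, c in zip(feature_names, contributions):
    print(f"{name:>15}: {c:+.2f}")
print(f"{'intercept':>15}: {clf.intercept_[0]:+.2f}")
```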

The MLOps scene is evolving as machine learning becomes more widely adopted, and we learn more about what counts as best practice for different use cases. Different organisations have different machine learning use cases and therefore differing needs. As the field evolves we'll likely see greater standardisation, and even the more challenging use cases will become better supported.

Ryan Dawson is a core member of the Seldon open-source team, providing tooling for machine-learning deployments to Kubernetes. He has spent 10 years working in the Java development scene in London across a variety of industries.

Bringing DevOps principles to machine learning throws up some unique challenges, not least very different workflows and artifacts. Ryan will dive into this topic in May at Continuous Lifecycle London 2020 a conference organized by The Register's mothership, Situation Publishing.

You can find out more, and book tickets, right here.


View original post here:
What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps - The Register

If AI’s So Smart, Why Can’t It Grasp Cause and Effect? – WIRED

Here's a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who's just learning to walk.

A new experiment shows how difficult it is for even the best artificial intelligence systems to grasp rudimentary physics and cause and effect. It also offers a path for building AI systems that can learn why things happen.

"The experiment was designed to push beyond just pattern recognition," says Josh Tenenbaum, a professor at MIT's Center for Brains, Minds & Machines, who led the work. "Big tech companies would love to have systems that can do this kind of thing."

The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding a large approximation of a neural network copious amounts of training data. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what's going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled "cat."
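The labelled records for such a test might look something like the following hypothetical sketch, which also shows how accuracy could be tallied separately for descriptive and causal questions. The format is invented for illustration and is not the benchmark's actual schema.

```python
# Hypothetical shape of the labelled question/answer records described above,
# and a per-question-type accuracy tally. The records are made up for
# illustration; they are not the benchmark's actual format.
from collections import defaultdict

examples = [
    {"video": "scene_0001", "type": "descriptive",
     "question": "What color is the sphere?", "answer": "red", "prediction": "red"},
    {"video": "scene_0001", "type": "causal",
     "question": "What caused the ball to collide with the cube?",
     "answer": "the cylinder hit the ball", "prediction": "the cube moved"},
]

correct, total = defaultdict(int), defaultdict(int)
for ex in examples:
    total[ex["type"]] += 1
    correct[ex["type"]] += ex["prediction"] == ex["answer"]

for qtype in total:
    print(f"{qtype:12s} accuracy: {correct[qtype] / total[qtype]:.0%}")
```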

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as "What color is this object?", a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as "What caused the ball to collide with the cube?" or "What would have happened if the objects had not collided?", the same system answers correctly only about 10 percent of the time.


David Cox, director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. "We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same."

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects, in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they've been specifically programmed, and it's impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn't been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. "All of this is driving towards AI systems that can not only learn but also reason," Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. "The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning," he says.

See more here:
If AI's So Smart, Why Can't It Grasp Cause and Effect? - WIRED

An implant uses machine learning to give amputees control over prosthetic hands – MIT Technology Review

Researchers have been working to make mind-controlled prosthetics a reality for at least a decade. In theory, an artificial hand that amputees could control with their mind could restore their ability to carry out all sorts of daily tasks, and dramatically improve their standard of living.

However, until now scientists have faced a major barrier: they haven't been able to access nerve signals that are strong or stable enough to send to the bionic limb. Although it's possible to get this sort of signal using a brain-machine interface, the procedure to implant one is invasive and costly. And the nerve signals carried by the peripheral nerves that fan out from the brain and spinal cord are too small.

A new implant gets around this problem by using machine learning to amplify these signals. A study, published in Science Translational Medicine today, found that it worked for four amputees for almost a year. It gave them fine control of their prosthetic hands and let them pick up miniature play bricks, grasp items like soda cans, and play Rock, Paper, Scissors.


It's the first time researchers have recorded millivolt signals from a nerve, far stronger than in any previous study.

The strength of this signal allowed the researchers to train algorithms to translate the signals into movements. "The first time we switched it on, it worked immediately," says Paul Cederna, a biomechanics professor at the University of Michigan, who co-led the study. "There was no gap between thought and movement."

The procedure for the implant requires one of the amputee's peripheral nerves to be cut and stitched up to the muscle. The site heals, developing nerves and blood vessels over three months. Electrodes are then implanted into these sites, allowing a nerve signal to be recorded and passed on to a prosthetic hand in real time. The signals are turned into movements using machine-learning algorithms (the same types that are used for brain-machine interfaces).

Amputees wearing the prosthetic hand were able to control each individual finger and swivel their thumbs, regardless of how recently they had lost their limb. Their nerve signals were recorded for a few minutes to calibrate the algorithms to their individual signals, but after that each implant worked straight away, without any need to recalibrate during the 300 days of testing, according to study co-leader Cynthia Chestek, an associate professor in biomedical engineering at the University of Michigan.
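As a rough illustration of that calibration step, the sketch below fits a simple decoder on labelled signal windows and then maps a new window to a finger command. The data and classifier are synthetic stand-ins, not the study's actual algorithms.

```python
# Sketch of the calibration step: record signal features while the user
# attempts known movements, fit a decoder, then map new signal windows to
# finger commands. Data and model are synthetic, not the study's algorithms.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FINGERS = ["thumb", "index", "middle", "ring", "little"]
rng = np.random.default_rng(0)

# Calibration: signal features (e.g., per-electrode amplitude) with known intents.
n_per_class, n_electrodes = 200, 8
X_cal = np.vstack([rng.normal(loc=i, scale=0.8, size=(n_per_class, n_electrodes))
                   for i in range(len(FINGERS))])
y_cal = np.repeat(np.arange(len(FINGERS)), n_per_class)

decoder = LinearDiscriminantAnalysis().fit(X_cal, y_cal)

# Real-time use: each new window of nerve data becomes a finger command.
new_window = rng.normal(loc=3, scale=0.8, size=(1, n_electrodes))
print("decoded movement:", FINGERS[decoder.predict(new_window)[0]])
```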

It's just a proof-of-concept study, so it requires further testing to validate the results. The researchers are recruiting amputees for an ongoing clinical trial, funded by DARPA and the National Institutes of Health.

Excerpt from:
An implant uses machine learning to give amputees control over prosthetic hands - MIT Technology Review