Category Archives: Machine Learning

Machine Learning in Insurance Market (COVID-19 Analysis): Indoor Applications Projected to be the Most Attractive Segment during 2020-2027 – Global…

COVID-19 can affect the global economy in three main ways: by directly affecting production and demand, by creating supply chain and market disruption, and through its financial impact on firms and financial markets. The Global Machine Learning in Insurance Market report covers and analyses the potential of the worldwide industry, providing statistics and information on market dynamics, growth factors, key challenges, major drivers and restraints, opportunities and forecasts. The report presents a comprehensive overview of the 2020 market, including market shares and growth opportunities by product type, application, key manufacturers, and key regions and countries.

The recently released report by Market Research Inc, titled Global Machine Learning in Insurance Market, is a detailed analysis that gives the reader insight into the intricacies of the various elements, such as the growth rate and the impact of socio-economic conditions, that affect the market space. An in-depth study of these numerous components is essential, as all these aspects need to blend in seamlessly for businesses to achieve success in this industry.

Request a sample copy of this report @:

https://www.marketresearchinc.com/request-sample.php?id=31501

Top key players: State Farm, Liberty Mutual, Allstate, Progressive, Accenture

This report provides a comprehensive analysis of market size and forecast, demand by region, main consumer profiles, and more.

This market research report on the Global Machine Learning in Insurance Market is an all-inclusive study of the business sector's up-to-date outlines, industry growth drivers, and constraints. It provides market projections for the coming years. It contains an analysis of recent advances in innovation, a Porter's five forces model analysis, and progressive profiles of hand-picked industry competitors. The report additionally formulates a survey of minor and full-scale factors affecting both new entrants and established players in the market, along with a systematic value chain exploration.

According to the research report, the global Machine Learning in Insurance market has gained substantial momentum over the past few years. The swelling acceptance and the escalating demand and need for this market's products are mentioned in this study, as are the factors powering their adoption among consumers. The report estimates the market by taking a number of imperative parameters, such as type and application, into consideration. In addition, the geographical presence of this market has been scrutinized closely in the research study.

Get a reasonable discount on this premium report @:

https://www.marketresearchinc.com/ask-for-discount.php?id=31501

Additionally, this report provides a pinpoint investigation of shifting competitive dynamics and keeps you ahead of the competition. It offers a forward-looking perspective on the different factors driving or restraining market growth. It helps in understanding the key product segments and their future. It guides informed business decisions by giving a complete view of the market and by enclosing a comprehensive analysis of market subdivisions. To sum up, it also provides graphics and a personalized SWOT analysis of premier market sectors.

This report gives a wide-ranging analysis of the market expansion drivers, the factors regulating and restraining market growth, current business sector outlines, market structure, and market predictions for the coming years.

Further information:

https://www.marketresearchinc.com/enquiry-before-buying.php?id=31501

In this study, the years considered to estimate the size of Machine Learning in Insurance are as follows:

History Year: 2015-2018

Base Year: 2019

Forecast Year: 2020-2028

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product or service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

More here:
Machine Learning in Insurance Market (COVID-19 Analysis): Indoor Applications Projected to be the Most Attractive Segment during 2020-2027 - Global...

Leveraging Machine Learning and IDP to Scale Your Automation Program – AiiA

As document and input types get more and more complex, legacy business process automation technologies, like Robotic Process Automation (RPA), can struggle to keep up. Designed to execute precise rules and work with structured data inputs, these approaches lack the intelligence to handle the variability and ambiguity of diverse, real-world document processing workflows, making them a partial enterprise solution that needs to be supplemented with more intelligent technology.

Fortunately, there's a sea change coming to back offices, reshaping the way organizations operate. Recent breakthroughs in Artificial Intelligence, and Machine Learning specifically, are helping businesses replace aging tech stacks and inflexible workflows with technology that supports the kind of responsiveness and innovation required to keep pace with an ever-changing market. These advances offer a reliable path towards automating processes that were previously only possible for people, and they finally fulfill promises made (and broken) by earlier technologies.

Intelligent Document Processing (IDP) solutions, in particular, leverage the latest in ML and AI to capture data from documents (e.g. text, PDFs, scanned images, emails) and to categorize and extract relevant data for further processing. Leading IDP solutions that leverage the latest in Machine Learning continue to learn from the data they're exposed to, driving lower error rates and greater automation.

Read the original here:
Leveraging Machine Learning and IDP to Scale Your Automation Program - AiiA

5 machine learning skills you need in the cloud – TechTarget

Machine learning and AI continue to reach further into IT services and complement applications developed by software engineers. IT teams need to sharpen their machine learning skills if they want to keep up.

Cloud computing services support an array of functionality needed to build and deploy AI and machine learning applications. In many ways, AI systems are managed much like other software that IT pros are familiar with in the cloud. But just because someone can deploy an application, that does not necessarily mean they can successfully deploy a machine learning model.

While the commonalities may partially smooth the transition, there are significant differences. Members of your IT teams need specific machine learning and AI knowledge, in addition to software engineering skills. Beyond the technological expertise, they also need to understand the cloud tools currently available to support their team's initiatives.

Explore the five machine learning skills IT pros need to successfully use AI in the cloud and get to know the products Amazon, Microsoft and Google offer to support them. There is some overlap in the skill sets, but don't expect one individual to do it all. Put your organization in the best position to utilize cloud-based machine learning by developing a team of people with these skills.

IT pros need to understand data engineering if they want to pursue any type of AI strategy in the cloud. Data engineering comprises a broad set of skills, including data wrangling and workflow development, as well as some knowledge of software architecture.

These different areas of IT expertise can be broken down into different tasks IT pros should be able to accomplish. For example, data wrangling typically involves data source identification, data extraction, data quality assessments, data integration and pipeline development to carry out these operations in a production environment.

Data engineers should be comfortable working with relational databases, NoSQL databases and object storage systems. Python is a popular programming language that can be used with batch and stream processing platforms, like Apache Beam, and distributed computing platforms, such as Apache Spark. Even if you are not an expert Python programmer, having some knowledge of the language will enable you to draw from a broad array of open source tools for data engineering and machine learning.
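
To make the data-wrangling tasks above concrete, here is a minimal, self-contained sketch in Python using pandas. The tables, column names and cleaning rules are invented for illustration; a production pipeline on Beam or Spark would follow the same extract, assess, clean and integrate pattern at much larger scale.

```python
# Minimal data-wrangling sketch: extraction, quality assessment, cleaning and
# integration with pandas. All tables and columns are hypothetical.
import pandas as pd

# "Extracted" records from two hypothetical sources.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 11, 11, 12],
    "amount": [120.0, 95.0, None, 140.0],   # one missing value
})
shipments = pd.DataFrame({
    "order_id": [1, 2, 4],
    "days_to_ship": [2, 5, 1],
})

# Quality assessment: how many values are missing per column?
print(orders.isna().sum())

# Cleaning: drop rows with a missing amount.
orders_clean = orders.dropna(subset=["amount"])

# Integration: join shipment information onto the cleaned orders.
dataset = orders_clean.merge(shipments, on="order_id", how="left")
print(dataset)
```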

Data engineering is well supported in all the major clouds. AWS has a full range of services to support data engineering, such as AWS Glue, Amazon Managed Streaming for Apache Kafka (MSK) and various Amazon Kinesis services. AWS Glue is a data catalog and extract, transform and load (ETL) service that includes support for scheduled jobs. MSK is a useful building block for data engineering pipelines, while Kinesis services are especially useful for deploying scalable stream processing pipelines.

Google Cloud Platform offers Cloud Dataflow, a managed Apache Beam service that supports batch and stream processing. For ETL processes, Google Cloud Data Fusion provides a Hadoop-based data integration service. Microsoft Azure also provides several managed data tools, such as Azure Cosmos DB, Data Catalog and Data Lake Analytics, among others.

Machine learning is a well-developed discipline, and you can make a career out of studying and developing machine learning algorithms.

IT teams use the data delivered by engineers to build models and create software that can make recommendations, predict values and classify items. It is important to understand the basics of machine learning technologies, even though much of the model building process is automated in the cloud.

As a model builder, you need to understand the data and business objectives. It's your job to formulate the solution to the problem and understand how it will integrate with existing systems.

Some products on the market include Google's Cloud AutoML, which is a suite of services that help build custom models using structured data as well as images, video and natural language without requiring much understanding of machine learning. Azure offers ML.NET Model Builder in Visual Studio, which provides an interface to build, train and deploy models. Amazon SageMaker is another managed service for building and deploying machine learning models in the cloud.

These tools can choose algorithms, determine which features or attributes in your data are most informative and optimize models using a process known as hyperparameter tuning. These kinds of services have expanded the potential use of machine learning and AI strategies. Just as you do not have to be a mechanical engineer to drive a car, you do not need a graduate degree in machine learning to build effective models.
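
As a rough illustration of what these services automate, the sketch below runs a small hyperparameter search with scikit-learn's GridSearchCV on a toy dataset. The parameter grid and model choice are arbitrary; managed AutoML services perform this kind of search, plus algorithm and feature selection, at far larger scale.

```python
# A small hyperparameter search with scikit-learn; managed AutoML services run
# this kind of search (and more) automatically. Grid values are arbitrary.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print("best parameters :", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```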

Algorithms make decisions that directly and significantly impact individuals. For example, financial services use AI to make decisions about credit, which could be unintentionally biased against particular groups of people. This not only has the potential to harm individuals by denying them credit, but also puts the financial institution at risk of violating regulations, like the Equal Credit Opportunity Act.

These seemingly menial tasks are imperative to AI and machine learning models. Detecting bias in a model can require savvy statistical and machine learning skills but, as with model building, some of the heavy lifting can be done by machines.

FairML is an open source tool for auditing predictive models that helps developers identify biases in their work. Experience with detecting bias in models can also help inform the data engineering and model building process. Google Cloud leads the market with fairness tools that include the What-If Tool, Fairness Indicators and Explainable AI services.
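
The snippet below is not the FairML API; it is a hand-rolled sketch of one simple fairness check, demographic parity, which compares positive-decision rates across groups on invented data. Dedicated auditing tools automate and extend this kind of check.

```python
# Hand-rolled demographic-parity check on invented data (not the FairML API):
# compare the rate of positive decisions across two groups.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1_000)                # hypothetical protected attribute
score = rng.uniform(size=1_000) + (group == "A") * 0.1    # model scores, slightly skewed
approved = score > 0.6                                    # the model's decisions

for g in ("A", "B"):
    print(f"group {g}: approval rate = {approved[group == g].mean():.2f}")

# A large gap between the two rates is a signal to audit the training data
# and features before deploying the model.
```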

Part of the model building process is to evaluate how well a machine learning model performs. Classifiers, for example, are evaluated in terms of accuracy, precision and recall. Regression models, such as those that predict the price at which a house will sell, are evaluated by measuring their average error rate.
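
The metrics mentioned above can be computed directly with scikit-learn; the sketch below uses small made-up predictions purely to show the calls involved.

```python
# Computing the evaluation metrics named above with scikit-learn on made-up
# predictions: accuracy/precision/recall for a classifier, mean absolute
# error for a regression model (e.g. predicted house sale prices).
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             mean_absolute_error)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))

prices_true = [310_000, 425_000, 250_000]
prices_pred = [295_000, 440_000, 260_000]
print("mean absolute error:", mean_absolute_error(prices_true, prices_pred))
```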

A model that performs well today may not perform as well in the future. The problem is not that the model is somehow broken, but that the model was trained on data that no longer reflects the world in which it is used. Even without sudden, major events, data drift can occur. It is important to evaluate models and continue to monitor them as long as they are in production.
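
One lightweight way to monitor for drift is to compare the distribution of an input feature at training time against recent production data. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data; the threshold and the retraining decision are placeholders, not a prescribed policy.

```python
# Simple drift check on synthetic data: compare a feature's training-time
# distribution with recent production values using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_values = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the world has shifted

statistic, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:                 # illustrative threshold, not a prescription
    print(f"possible drift (KS statistic = {statistic:.3f}); consider retraining")
else:
    print("no significant drift detected")
```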

Services such as Amazon SageMaker, Azure Machine Learning Studio and Google Cloud AutoML include an array of model performance evaluation tools.

Domain knowledge is not specifically a machine learning skill, but it is one of the most important parts of a successful machine learning strategy.

Every industry has a body of knowledge that must be studied in some capacity, especially when building algorithmic decision-makers. Machine learning models are constrained to reflect the data used to train them. Humans with domain knowledge are essential to knowing where to apply AI and to assess its effectiveness.

Read more here:
5 machine learning skills you need in the cloud - TechTarget

Machine learning approach could detect drivers of atrial fibrillation – Cardiac Rhythm News

Mapping of the explanted human heart

Researchers have designed a new machine learning-based approach for detecting atrial fibrillation (AF) drivers, small patches of the heart muscle that are hypothesised to cause this most common type of cardiac arrhythmia. This approach may lead to more efficient targeted medical interventions to treat the condition, according to the authors of the paper published in the journal Circulation: Arrhythmia and Electrophysiology.

The mechanism behind AF is yet unclear, although research suggests it may be caused and maintained by re-entrant AF drivers, localised sources of repetitive rotational activity that lead to irregular heart rhythm. These drivers can be burnt via a surgical procedure, which can mitigate the condition or even restore the normal functioning of the heart.

To locate these re-entrant AF drivers for subsequent destruction, doctors use multi-electrode mapping, a technique that allows them to record multiple electrograms inside the heart using a catheter and build a map of electrical activity within the atria. However, clinical applications of this technique often produce a lot of false negatives, when an existing AF driver is not found, and false positives, when a driver is detected where there really is none.

Recently, researchers have tapped machine learning algorithms for the task of interpreting ECGs to look for AF; however, these algorithms require labelled data with the true location of the driver, and the accuracy of multi-electrode mapping is insufficient. The authors of the new study, co-led by Dmitry Dylov from the Skoltech Center of Computational and Data-Intensive Science and Engineering (CDISE, Moscow, Russia) and Vadim Fedorov from the Ohio State University (Columbus, USA), used high-resolution near-infrared optical mapping (NIOM) to locate AF drivers and used it as a reference for training.

NIOM is based on well-penetrating infrared optical signals and therefore can record the electrical activity from within the heart muscle, whereas conventional clinical electrodes can only measure the signals on the surface. "Add to this trait the excellent optical resolution, and optical mapping becomes a no-brainer modality if you want to visualize and understand the electrical signal propagation through the heart tissue," said Dylov.

The team tested their approach on 11 explanted human hearts, all donated posthumously for research purposes. The researchers performed simultaneous optical and multi-electrode mapping of AF episodes induced in the hearts. Their results showed that the machine learning model can indeed efficiently interpret electrograms from multi-electrode mapping to locate AF drivers, with an accuracy of up to 81%. They believe that larger training datasets, validated by NIOM, can improve machine learning-based algorithms enough for them to become complementary tools in clinical practice.
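
The study's actual model is not described here in enough detail to reproduce; the sketch below only illustrates the general shape of the task, a supervised classifier that labels electrode recordings as driver or non-driver from features extracted from electrograms, using synthetic data and an off-the-shelf random forest.

```python
# Illustrative only (synthetic data, not the authors' model): classify
# per-electrode feature vectors as "AF driver" vs "non-driver".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
labels = rng.integers(0, 2, size=n)                 # 1 = driver present (made up)
dominant_freq = rng.normal(6.0 + labels, 1.0)       # hypothetical electrogram features
regularity = rng.normal(0.5 + 0.2 * labels, 0.1)
X = np.column_stack([dominant_freq, regularity])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))
```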

"The dataset of recordings from 11 human hearts is both priceless and too small. We realised that clinical translation would require a much larger sample size for representative sampling, yet we had to make sure we extracted every piece of available information from the still-beating explanted human hearts. The dedication and scrutiny of two of our PhD students must be acknowledged here: Sasha Zolotarev spent several months on an academic mobility trip to Fedorov's lab understanding the specifics of the imaging workflow and presenting the pilot study at the HRS conference, the biggest arrhythmology meeting in the world, and Katya Ivanova partook in the frequency and visualization analysis from within the walls of Skoltech. These two young researchers have squeezed out everything one possibly could to train the machine learning model using optical measurements," Dylov notes.

Read the original:
Machine learning approach could detect drivers of atrial fibrillation - Cardiac Rhythm News

Vanderbilt trans-institutional team shows how next-gen wearable sensor algorithms powered by machine learning could be key to preventing injuries that…

A trans-institutional team of Vanderbilt engineering, data science and clinical researchers has developed a novel approach for monitoring bone stress in recreational and professional athletes, with the goal of anticipating and preventing injury. Using machine learning and biomechanical modeling techniques, the researchers built multisensory algorithms that combine data from lightweight, low-profile wearable sensors in shoes to estimate forces on the tibia, or shin bone, a common site for runners' stress fractures.

The research builds off the researchers' 2019 study, which found that commercially available wearables do not accurately monitor stress fracture risks. Karl Zelik, assistant professor of mechanical engineering, biomedical engineering and physical medicine and rehabilitation, sought to develop a better technique to solve this problem. "Today's wearables measure ground reaction forces, how hard the foot impacts or pushes against the ground, to assess injury risks like stress fractures to the leg," Zelik said. "While it may seem intuitive to runners and clinicians that the force under your foot causes loading on your leg bones, most of your bone loading is actually from muscle contractions. It's this repetitive loading on the bone that causes wear and tear and increases injury risk to bones, including the tibia."

The article, "Combining wearable sensor signals, machine learning and biomechanics to estimate tibial bone force and damage during running," was published online in the journal Human Movement Science on Oct. 22.

The algorithms have resulted in bone force data that is up to four times more accurate than available wearables, and the study found that traditional wearable metrics based on how hard the foot hits the ground may be no more accurate for monitoring tibial bone load than counting steps with a pedometer.

Bones naturally heal themselves, but if the rate of microdamage from repeated bone loading outpaces the rate of tissue healing, there is an increased risk of a stress fracture that can put a runner out of commission for two to three months. "Small changes in bone load equate to exponential differences in bone microdamage," said Emily Matijevich, a graduate student and the director of the Center for Rehabilitation Engineering and Assistive Technology Motion Analysis Lab. "We have found that 10 percent errors in force estimates cause 100 percent errors in damage estimates. Largely over- or under-estimating the bone damage that results from running has severe consequences for athletes trying to understand their injury risk over time. This highlights why it is so important for us to develop more accurate techniques to monitor bone load and design next-generation wearables. The ultimate goal of this tech is to better understand overuse injury risk factors and then prompt runners to take rest days or modify training before an injury occurs."

The machine learning algorithm leverages Least Absolute Shrinkage and Selection Operator (LASSO) regression, using a small group of sensors to generate highly accurate bone load estimates, with average errors of less than three percent, while simultaneously identifying the most valuable sensor inputs, said Peter Volgyesi, a research scientist at the Vanderbilt Institute for Software Integrated Systems. "I enjoyed being part of the team. This is a highly practical application of machine learning, markedly demonstrating the power of interdisciplinary collaboration with real-life broader impact."
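
As a rough sketch of the idea (not the published algorithm or data), the snippet below fits a scikit-learn LASSO model that maps a few synthetic wearable-sensor signals to a simulated bone load; the sparse coefficients show how the method can also flag which sensor inputs carry the most information.

```python
# Illustrative LASSO fit on synthetic "sensor" data: the sparse coefficients
# reveal which inputs matter, mirroring the sensor-selection idea above.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_steps = 1_000
sensors = rng.normal(size=(n_steps, 6))             # hypothetical per-step sensor features
# Made-up "true" tibial load that depends on only two of the six sensors.
tibial_load = 3.0 * sensors[:, 0] + 1.5 * sensors[:, 2] + rng.normal(scale=0.3, size=n_steps)

model = Lasso(alpha=0.05).fit(sensors, tibial_load)
print("coefficients:", np.round(model.coef_, 2))     # near-zero entries = uninformative sensors
print("R^2:", round(model.score(sensors, tibial_load), 3))
```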

This research represents a major leap forward in health monitoring capabilities. This innovation is one of the first examples of a wearable technology that is both practical to wear in daily life and can accurately monitor forces on and microdamage to musculoskeletal tissues. The team has begun applying similar techniques to monitor low back loading and injury risks, designed for people in occupations that require repetitive lifting and bending. These wearables could track the efficacy of post-injury rehab or inform return-to-play or return-to-work decisions.

"We are excited about the potential for this kind of wearable technology to improve assessment, treatment and prevention of other injuries like Achilles tendonitis, heel stress fractures or low back strains," said Matijevich, the paper's corresponding author. The group has filed multiple patents on their invention and is in discussions with wearable tech companies to commercialize these innovations.

This research was funded by National Institutes of Health grant R01EB028105 and the Vanderbilt University Discovery Grant program.

Follow this link:
Vanderbilt trans-institutional team shows how next-gen wearable sensor algorithms powered by machine learning could be key to preventing injuries that...

Machine Learning & Big Data Analytics Education Market Size And Forecast (2020-2026)| With Post Impact Of Covid-19 By Top Leading Players-…

This report studies the Machine Learning & Big Data Analytics Education Market with many aspects of the industry like the market size, market status, market trends and forecast, the report also provides brief information of the competitors and the specific growth opportunities with key market drivers. Find the complete Machine Learning & Big Data Analytics Education Market analysis segmented by companies, region, type and applications in the report.

The report offers valuable insight into the Machine Learning & Big Data Analytics Education market progress and approaches related to the Machine Learning & Big Data Analytics Education market with an analysis of each region. The report goes on to talk about the dominant aspects of the market and examine each segment.

Key Players: DreamBox Learning, Jenzabar, Inc., com, Inc., Cognizant, IBM Corporation, Metacog, Inc., Querium Corporation, Pearson, Blackboard, Inc., Fishtree, Quantum Adaptive Learning, LLC, Third Space Learning, Bridge-U, Century-Tech Ltd, Microsoft Corporation, Knewton, Inc., Google, Jellynote.

Get a Free Sample Copy @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-big-data-analytics-education-market-report-2020-by-key-players-types-applications-countries-market-size-forecast-to-2026-based-on-2020-covid-19-worldwide-spread?utm_source=aerospace-journal&utm_medium=46

The global Machine Learning & Big Data Analytics Education market is segmented by company, region (country), by Type, and by Application. Players, stakeholders, and other participants in the global Machine Learning & Big Data Analytics Education market will be able to gain the upper hand as they use the report as a powerful resource. The segmental analysis focuses on revenue and forecast by region (country), by Type, and by Application for the period 2020-2026.

Market Segment by Regions, regional analysis covers

North America (United States, Canada and Mexico)

Europe (Germany, France, UK, Russia and Italy)

Asia-Pacific (China, Japan, Korea, India and Southeast Asia)

South America (Brazil, Argentina, Colombia etc.)

Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

Research objectives:

To study and analyze the global Machine Learning & Big Data Analytics Education market size by key regions/countries, product type and application, history data from 2013 to 2017, and forecast to 2026.

To understand the structure of the Machine Learning & Big Data Analytics Education market by identifying its various sub-segments.

Focuses on the key global Machine Learning & Big Data Analytics Education players, to define, describe and analyze their value, market share, market competition landscape, SWOT analysis and development plans in the next few years.

To analyze the Machine Learning & Big Data Analytics Education market with respect to individual growth trends, future prospects, and their contribution to the total market.

To share detailed information about the key factors influencing the growth of the market (growth potential, opportunities, drivers, industry-specific challenges and risks).

To project the size of Machine Learning & Big Data Analytics Education submarkets, with respect to key regions (along with their respective key countries).

To analyze competitive developments such as expansions, agreements, new product launches and acquisitions in the market.

To strategically profile the key players and comprehensively analyze their growth strategies.

The report lists the major players in the regions and their respective market share on the basis of global revenue. It also explains their strategic moves in the past few years, investments in product innovation, and changes in leadership to stay ahead in the competition. This will give the reader an edge over others as a well-informed decision can be made looking at the holistic picture of the market.

Table of Contents: Machine Learning & Big Data Analytics Education Market

Key questions answered in this report

Get complete Report @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-big-data-analytics-education-market-report-2020-by-key-players-types-applications-countries-market-size-forecast-to-2026-based-on-2020-covid-19-worldwide-spread?utm_source=aerospace-journal&utm_medium=46

About Us:

Reports and Markets is not just another company in this domain but is part of a veteran group called Algoro Research Consultants Pvt. Ltd. It offers premium progressive statistical surveying, market research reports, analysis and forecast data for a wide range of sectors, both for government and private agencies, all across the world. The database of the company is updated on a daily basis and covers a variety of industry verticals, including Food & Beverage, Automotive, Chemicals and Energy, IT & Telecom, Consumer, Healthcare, and many more. Each and every report goes through the appropriate research methodology and is checked by professionals and analysts.

Contact Us:

Sanjay Jain

Manager Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

Visit link:
Machine Learning & Big Data Analytics Education Market Size And Forecast (2020-2026)| With Post Impact Of Covid-19 By Top Leading Players-...

The security threat of adversarial machine learning is real – TechTalks

The Adversarial ML Threat Matrix provides guidelines that help detect and prevent attacks on machine learning systems.

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

With machine learning becoming increasingly popular, one thing that has been worrying experts is the security threats the technology will entail. We are still exploring the possibilities: The breakdown of autonomous driving systems? Inconspicuous theft of sensitive data from deep neural networks? Failure of deep learning-based biometric authentication? Subtle bypass of content moderation algorithms?

Meanwhile, machine learning algorithms have already found their way into critical fields such as finance, health care, and transportation, where security failures can have severe repercussions.

Parallel to the increased adoption of machine learning algorithms in different domains, there has been growing interest in adversarial machine learning, the field of research that explores ways learning algorithms can be compromised.

And now, we finally have a framework to detect and respond to adversarial attacks against machine learning systems. Called the Adversarial ML Threat Matrix, the framework is the result of a joint effort between AI researchers at 13 organizations, including Microsoft, IBM, Nvidia, and MITRE.

While still in early stages, the ML Threat Matrix provides a consolidated view of how malicious actors can take advantage of weaknesses in machine learning algorithms to target organizations that use them. And its key message is that the threat of adversarial machine learning is real and organizations should act now to secure their AI systems.

The Adversarial ML Threat Matrix is presented in the style of ATT&CK, a tried-and-tested framework developed by MITRE to deal with cyber-threats in enterprise networks. ATT&CK provides a table that summarizes different adversarial tactics and the types of techniques that threat actors perform in each area.

Since its inception, ATT&CK has become a popular guide for cybersecurity experts and threat analysts to find weaknesses and speculate on possible attacks. The ATT&CK format of the Adversarial ML Threat Matrix makes it easier for security analysts to understand the threats of machine learning systems. It is also an accessible document for machine learning engineers who might not be deeply acquainted with cybersecurity operations.

Many industries are undergoing digital transformation and will likely adopt machine learning technology as part of service/product offerings, including making high-stakes decisions, Pin-Yu Chen, AI researcher at IBM, told TechTalks in written comments. The notion of system has evolved and become more complicated with the adoption of machine learning and deep learning.

For instance, Chen says, an automated financial loan application recommendation can change from a transparent rule-based system to a black-box neural network-oriented system, which could have considerable implications on how the system can be attacked and secured.

"The adversarial threat matrix analysis (i.e., the study) bridges the gap by offering a holistic view of security in emerging ML-based systems, as well as illustrating their causes from traditional means and new risks induced by ML," Chen says.

The Adversarial ML Threat Matrix combines known and documented tactics and techniques used in attacking digital infrastructure with methods that are unique to machine learning systems. Like the original ATT&CK table, each column represents one tactic (or area of activity) such as reconnaissance or model evasion, and each cell represents a specific technique.

For instance, to attack a machine learning system, a malicious actor must first gather information about the underlying model (reconnaissance column). This can be done through the gathering of open-source information (arXiv papers, GitHub repositories, press releases, etc.) or through experimentation with the application programming interface that exposes the model.

Each new type of technology comes with its unique security and privacy implications. For instance, the advent of web applications with database backends introduced the concept of SQL injection. Browser scripting languages such as JavaScript ushered in cross-site scripting attacks. The internet of things (IoT) introduced new ways to create botnets and conduct distributed denial of service (DDoS) attacks. Smartphones and mobile apps create new attack vectors for malicious actors and spying agencies.

The security landscape has evolved and continues to develop to address each of these threats. We have anti-malware software, web application firewalls, intrusion detection and prevention systems, DDoS protection solutions, and many more tools to fend off these threats.

For instance, security tools can scan binary executables for the digital fingerprints of malicious payloads, and static analysis can find vulnerabilities in software code. Many platforms such as GitHub and Google App Store already have integrated many of these tools and do a good job at finding security holes in the software they house.

But in adversarial attacks, malicious behavior and vulnerabilities are deeply embedded in the thousands and millions of parameters of deep neural networks, which makes them both hard to find and beyond the capabilities of current security tools.

"Traditional software security usually does not involve the machine learning component because it's a new piece in the growing system," Chen says, adding that "adopting machine learning into the security landscape gives new insights and risk assessment."

The Adversarial ML Threat Matrix comes with a set of case studies of attacks that involve traditional security vulnerabilities, adversarial machine learning, and combinations of both. What's important is that, contrary to the popular belief that adversarial attacks are limited to lab environments, the case studies show that production machine learning systems can be, and have been, compromised with adversarial attacks.

For instance, in one case study, the security team at Microsoft Azure used open-source data to gather information about a target machine learning model. They then used a valid account in the server to obtain the machine learning model and its training data. They used this information to find adversarial vulnerabilities in the model and develop attacks against the API that exposed its functionality to the public.

Other case studies show how attackers can compromise various aspects of the machine learning pipeline and the software stack to conduct data poisoning attacks, bypass spam detectors, or force AI systems to reveal confidential information.

The matrix and these case studies can guide analysts in finding weak spots in their software and can guide security tool vendors in creating new tools to protect machine learning systems.

"Inspecting a single dimension (machine learning vs traditional software security) only provides an incomplete security analysis of the system as a whole," Chen says. "Like the old saying goes: security is only as strong as its weakest link."

Unfortunately, developers and adopters of machine learning algorithms are not taking the necessary measures to make their models robust against adversarial attacks.

"The current development pipeline is merely ensuring a model trained on a training set can generalize well to a test set, while neglecting the fact that the model is often overconfident about unseen (out-of-distribution) data, or that a Trojan pattern may be maliciously embedded in the training set, which offers unintended avenues for evasion attacks and backdoor attacks that an adversary can leverage to control or misguide the deployed model," Chen says. "In my view, similar to car model development and manufacturing, a comprehensive in-house collision test for different adversarial threats on an AI model should become the new norm of practice to better understand and mitigate potential security risks."
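
To make the notion of an evasion attack concrete, here is a toy, self-contained example of a one-step, FGSM-style perturbation against a logistic-regression classifier, where the input gradient has a closed form. Real attacks target deep networks and production pipelines, but the mechanics, nudging the input along the gradient of the loss, are the same. The attack budget and data below are illustrative.

```python
# Toy evasion attack: one FGSM-style step against logistic regression, where
# the gradient of the loss with respect to the input is (p - y) * w.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x, y_true = X[0], y[0]            # a sample to perturb
w = clf.coef_[0]
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - y_true) * w           # input gradient of the logistic loss

epsilon = 0.5                     # illustrative attack budget
x_adv = x + epsilon * np.sign(grad)

print("original prediction   :", clf.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```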

In his work at IBM Research, Chen has helped develop various methods to detect and patch adversarial vulnerabilities in machine learning models. With the advent of the Adversarial ML Threat Matrix, the efforts of Chen and other AI and security researchers will put developers in a better position to create secure and robust machine learning systems.

"My hope is that with this study, model developers and machine learning researchers can pay more attention to the security (robustness) aspect of the model and look beyond a single performance metric such as accuracy," Chen says.

Read the original post:
The security threat of adversarial machine learning is real - TechTalks

Bridging the Skills Gap for AI and Machine Learning – Integration Developers

Even as COVID-19 has slowed business investments worldwide, AI/ML spending is increasing. In a post for IDN, dotData's CEO Ryohei Fujimaki, Ph.D., looks at the latest trends in AI/ML automation and how they will speed adoption across industries.

COVID-19 has impacted businesses across the globe, from closures to supply chain interruptions to resource scarcity. As businesses adjust to the new normal, many are looking to do more with less and find ways to optimize their current business investments.

In this resource-constrained environment, many types of business investments have slowed dramatically. That said, investments in AI and machine learning are accelerating, according to a recent Adweek survey.

Adweek found two-thirds of business executives say COVID-19 has not slowed AI projects. In fact, some 40% of respondents told Adweek that the pandemic has accelerated their AI/ML efforts. Reasons for the sustained and growing interest in AI/ML include decreasing costs, improving performance, and increasing efficiencies, all efforts to make up for time and output lost during the COVID-19 slowdown.

Despite the rosy outlook for AI/ML investments, it bears mentioning that businesses also admit they still struggle to scale these technologies beyond PoCs (proofs of concept). This is due to an ongoing talent shortage in the data science field, a shortage that COVID has made even more acute.

Data science is an interdisciplinary approach that requires cross-domain expertise, including mathematics, statistics, data engineering, software engineering, and subject matter expertise.

The shortage of data scientists, as well as data architects and machine learning engineers skilled in building, testing, and deploying ML models, has created a big challenge for businesses implementing AI and ML initiatives, limiting the scale of data science projects and slowing time to production. The scarcity of data scientists has also created a quandary for organizations: how can they change the way they do data science, empowering the teams they already have?

The democratization of data science is very important and a current industry trend, but true democratization has never been easy for organizations. Analytics and data science leaders lament their team's ability to only manage a few projects per year. BI leaders, on the other hand, have been trying to embed predictive analytics in their dashboards but face the daunting task of learning how to build AI/ML models. What can organizations do, what tactics will help them to scale AI initiatives and bridge the gap between what is required and what's available?

Democratization of data science in a true sense is to empower teams with advanced analytical tools and automation technologies.

These tools can significantly simplify tasks that formerly could only be completed by data scientists. They are empowering business analysts, BI developers and data engineers to execute AI and machine learning projects. Further, they accelerate data science processes with very little training.

Notable among these offerings are:

This class of automation tools removes much of the time and expense required to design and deploy AI-powered analytics pipelines, and does so at little cost and without high-priced technical staff.

Today, a typical data team is interdisciplinary and consists of data engineers, data analysts and data scientists. The data analyst and engineer are responsible for cleaning, formatting and preparing data for the data scientist, who then uses the analytics-ready data to build features and then build ML models using a trial-and-error approach.

Data science processes are complicated, highly manual, and iterative in nature. Depending on the maturity of the data pipelines, a data science project can take from 30 to 90 days to complete with nearly 80% of the effort spent on AI-focused data preparation and Feature Engineering.

Further, the AI-focused data preparation process requires an impressive amount of hacking skills from developers, data scientists and data engineers to clean, manipulate and transform the data to enable data scientists to execute feature engineering.

That said, the landscape is changing. Tools are now surfacing that deliver AI automation to pre-process data, connect to data, and automatically build features and ML models. These tools eliminate the need for a large team and do the work efficiently at the greatest possible speed.

In addition, feature engineering automation has vast potential to change the traditional data science process. Feature engineering involves the application of business knowledge, math, and statistics to transform data into a format that can be directly consumed by machine learning models.

It can also significantly lower skill barriers beyond ML automation alone, eliminating hundreds or even thousands of manually crafted SQL queries and ramping up the speed of the data science project even without full domain knowledge.
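
As a small illustration of the kind of work being automated, the pandas sketch below turns raw transactional records into per-customer features a model can consume, the sort of aggregates that would otherwise be hand-written as many SQL queries. The table and feature definitions are invented.

```python
# Turning raw transactions into per-customer, model-ready features with pandas
# instead of many hand-written SQL aggregates. Data and features are invented.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount":      [20.0, 35.0, 5.0, 12.5, 7.5, 100.0],
    "days_ago":    [1, 10, 3, 40, 2, 5],
})

features = transactions.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    avg_spend=("amount", "mean"),
    n_transactions=("amount", "count"),
    recent_purchases=("days_ago", lambda d: int((d <= 7).sum())),
)
print(features)   # one model-ready row per customer
```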

Organizations with large data science teams will also find automation platforms very valuable. They free up highly-skilled resources from many of the manual and time-consuming efforts involved in data science and machine learning workflow and allow them to focus on more complex and challenging strategic tasks.

The trend is definitely to leverage automation technologies to speed up the ML development process. By using AI automation technologies, BI developers and junior data scientists can automatically build models. This frees up time for experienced data scientists, who can take on more challenging business problems. While everyone seemed to focus on building automated ML models, the industry is definitely moving towards automating the entire AI/ML workflow.

This empowers data scientists to achieve higher productivity and drive greater business impact than ever before.

Another important tactic for bridging the skills gap in data science is ongoing skills training for the AI, data science and business intelligence teams.

Rather than hiring outside talent from an already shallow talent pool, companies are often better off investing time and resources in data-science training of their existing talent pool. These citizen data scientists can bridge the skill gap, address the labor shortage and enable companies to leverage the existing resources they already have.

There are many advantages to this approach.

The idea is to build a team from inside the company versus hiring experts from outside. Any transformation is only going to succeed if it is embraced by the vast majority. Creating internal AI teams, empowering citizen data scientists and scaling pilot programs focused on AI is the right approach.

One of the most important advantages is building data science skills across multiple teams to support the democratization of data science across the organization. This strategy can be implemented by first identifying employees with existing programming, analytical and quantitative skills and then augmenting those skills with the required data science skills and tools training. Experienced data scientists can play the role of an evangelist, sharing data science best practices and guiding the citizen data scientists through the process.

AI and ML-driven innovation becomes indispensable as more enterprises transform themselves into data-driven organizations. Building a strong analytics team, while challenging in today's resource-scarce environment, is attainable by using appropriate automation tools. The benefits of this approach include:

These factors can not only help fill the skills gap but will help accelerate both data science and business innovation, delivering greater and broader business impact.

More here:
Bridging the Skills Gap for AI and Machine Learning - Integration Developers

insitro Strengthens Machine Learning-Based Drug Discovery Capabilities with Acquisition of Haystack Sciences – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--insitro, a machine learning-driven drug discovery and development company, today announced the acquisition of Haystack Sciences, a private company advancing proprietary methods to drive machine learning-enabled drug discovery. Haystack's approach focuses on synthesizing, breeding and analyzing large, diverse combinatorial chemical libraries encoded by unique DNA sequences, called DNA-encoded libraries, or DELs. Financial details of the acquisition were not disclosed.

insitro is building the leading company at the intersection of machine learning and biological data generation at scale, with a core focus on applying these technologies for more efficient drug discovery. With the acquisition of Haystack, insitro will leverage the company's DEL technology to collect massive small molecule data sets that inform the construction of machine learning models able to predict drug activity from molecular structure. With the addition of the Haystack technology and team, insitro has taken a significant step towards building in-house capabilities for fully integrated drug discovery and development. insitro's capabilities in this space are being further developed via a collaboration with DiCE Molecules, a leader in the DEL field. The collaboration, executed earlier this year, is aimed at combining the power of machine learning with high quality DEL datasets to address two difficult protein-protein interface targets that DiCE is pursuing.
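
This is not insitro's pipeline, but a generic sketch of the modeling task described above: learning to predict an activity readout from a binary molecular fingerprint. The fingerprints and activity values below are randomly generated stand-ins for a DEL screening dataset.

```python
# Generic sketch (not insitro's code): predict an activity value from a binary
# molecular fingerprint. Fingerprints and activities are random stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_compounds, n_bits = 2_000, 256
fingerprints = rng.integers(0, 2, size=(n_compounds, n_bits))
# Pretend a handful of substructure bits drive activity, plus noise.
activity = fingerprints[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_compounds)

X_tr, X_te, y_tr, y_te = train_test_split(fingerprints, activity, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```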

"We are thrilled to have the Haystack team join insitro," said Daphne Koller, Ph.D., founder and chief executive officer of insitro. "For the past two years, insitro has been building a company focused on the creation of predictive cell-based models of disease in order to enable the discovery of novel targets and evaluate the benefits of new or existing molecules in genetically defined patient segments. This acquisition enables us to expand our capabilities to the area of therapeutic design and advances us towards our goal of leveraging machine learning across the entire process of designing and developing better medicines for patients."

Haystack's platform combines multiple elements, including the capability to synthesize broad, diverse small molecule collections, the ability to execute rapid iterative follow-up, and a proprietary semi-quantitative screening technology, called nDexer, that generates higher resolution datasets than are possible through conventional panning approaches. These capabilities will greatly enable insitro's development of multi-dimensional predictive models for small molecule design.

"The nDexer™ capabilities we have advanced at Haystack, combined with insitro's state-of-the-art machine learning models, will enable us to build a platform at the forefront of applying DEL technology to next-generation therapeutics discovery," said Richard E. Watts, co-founder and chief executive officer of Haystack Sciences, who will be joining insitro as vice president, high-throughput chemistry. "I am excited by the opportunity to join a company with such a uniquely open and collaborative culture and to work with and learn from colleagues in data science, machine learning, automation and cell biology. The capabilities enabled by joining our efforts are considerably greater than the sum of the parts, and I look forward to helping build core drug discovery efforts at insitro."

"Haystack's best-in-class DEL technology is uniquely aligned with insitro's philosophy of addressing the critical challenges in pharmaceutical R&D through predictive machine learning models, all enabled by producing quality data at scale," said Vijay Pande, Ph.D., general partner at Andreessen Horowitz and member of insitro's board of directors. "This investment will power insitro's swift prosecution of the multiple targets emerging from their platform, as well as the creation of a computational platform for molecule structure and function optimization. Having seen the field of computationally driven molecule design mature over the past twenty years, I look forward to the next chapter in therapeutics design written by the combined efforts of insitro and Haystack."

About insitro

insitro is a data-driven drug discovery and development company using machine learning and high-throughput biology to transform the way that drugs are discovered and delivered to patients. The company is applying state-of-the-art technologies from bioengineering to create massive data sets that enable the power of modern machine learning methods to be brought to bear on key bottlenecks in pharmaceutical R&D. The resulting predictive models are used to accelerate target selection, to design and develop effective therapeutics, and to inform clinical strategy. The company is located in South San Francisco, CA. For more information on insitro, please visit the company's website at http://www.insitro.com.

About Haystack Sciences

Haystack Sciences seeks to inform and speed drug discovery by acquiring data of best-in-class accuracy and dimensionality from DNA-Encoded Libraries (DELs). This is enabled by proprietary technologies for in vitro evolution of fully synthetic small molecules and high-throughput mapping of structure-activity relationships for selection of molecules with drug-like properties. The company's technologies, including the nDexer platform, allow for generation of better libraries and quantification of binding affinities of entire DELs against a given target in parallel. The combination of these approaches with machine learning has the potential to greatly accelerate the discovery of optimized drug candidates. Haystack Sciences is based in South San Francisco, California. It was incubated at the Illumina Accelerator and is backed by leading investors including Viking Global Investors, Nimble Ventures, HBM Genomics, and Illumina. More information is available at: http://www.haystacksciences.com/

See the rest here:
insitro Strengthens Machine Learning-Based Drug Discovery Capabilities with Acquisition of Haystack Sciences - Business Wire

Machine Learning and AI Can Now Create Plastics That Easily Degrade – Science Times

Plastic pollution is one of the most pressing environmental issues, and the increase in the production of disposable plastics does not help at all. These plastics often take many years to degrade, poisoning the environment. This has prompted efforts from nations to create a global treaty to help reduce plastic pollution.

A combination of machine learning and artificial intelligence has accelerated the design of new materials, including plastics with properties that quickly degrade without harming the environment, and super-strong lightweight plastics for aircraft and satellites that could one day replace the metals being used.

The researchers from the Pritzker School of Molecular Engineering (PME) at the University of Chicago published their study in Science Advances on October 21, which shows a way toward designing polymers using a combination of modeling and machine learning.

This is done through the computational structuring of almost 2,000 hypothetical polymers, a dataset large enough to train neural networks that understand a polymer's properties.

(Photo: Pixabay) Machine Learning and AI Can Now Create Plastics That Easily Degrade

People have long been using products made with polymers, like plastic bottles, as these materials are very common in the things people use every day.

Polymers are materials with amorphous, disordered structures; even the techniques scientists have developed for studying metals and crystalline materials have a hard time characterizing them. They are made of long strings of atoms that may comprise millions of monomers.

Moreover, the length and sequence can affect a polymer molecule's properties, which may vary depending on how the atoms are arranged. Because of that, a trial-and-error approach is impractical, and generating the data needed for a rational design strategy would be very demanding, Phys.org reported.

Fortunately, machine learning could solve this problem. The researchers set out to answer whether machine learning and AI can predict the properties of polymers based on their sequence and, if so, how large a dataset would be needed to train the underlying algorithms.

Read Also: P&G Aims to Halve Its Use of Virgin Petroleum Plastics by 2030: Here's How It Plans to Do So

The researchers used almost 2,000 computationally structured polymers with different sequences to create the database. They also ran molecular simulations to predict their behavior.

Juan de Pablo, Liew Family Professor of Molecular Engineering and lead researcher, said that they were unsure how many different polymer sequences would be needed to learn their behavior; it could have been millions. Fortunately, only a few hundred were needed, which means that they can now follow the same technique and create a database to train the machine learning network.

The researchers then used what the models had learned to design new molecules. They demonstrated that they could specify a desired property for a polymer and, using machine learning, generate a set of polymer sequences that lead to that property.
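
The paper's code is not reproduced here; the sketch below is a schematic version of the described workflow: encode two-monomer sequences as feature vectors, train a surrogate model on simulated property values, then screen new candidate sequences for a target property. The encoding, the stand-in "simulation" and the target value are all invented for illustration.

```python
# Schematic sequence-to-property workflow on invented data (not the paper's
# code): train a surrogate model, then screen candidates for a target property.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
seq_len, n_train = 20, 500

def encode(seq):
    # Binary encoding of a two-monomer (A/B) sequence.
    return np.array([1 if m == "A" else 0 for m in seq])

def simulated_property(seq):
    # Stand-in for a molecular simulation: depends on composition and "blockiness".
    x = encode(seq)
    return x.mean() + 0.5 * np.abs(np.diff(x)).mean()

train_seqs = ["".join(rng.choice(["A", "B"], size=seq_len)) for _ in range(n_train)]
X = np.array([encode(s) for s in train_seqs])
y = np.array([simulated_property(s) for s in train_seqs])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Inverse design by screening: generate candidates and keep those whose
# predicted property is closest to the target value.
target = 1.0
candidates = ["".join(rng.choice(["A", "B"], size=seq_len)) for _ in range(5_000)]
preds = model.predict(np.array([encode(s) for s in candidates]))
best = sorted(zip(np.abs(preds - target), candidates))[:3]
print("closest candidates:", [s for _, s in best])
```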

Through this, companies can design products that spare the environment and create polymers that do exactly what they want them to do. For instance, they could create polymers that could someday replace the metals used in aerospace or in biomedical devices. It could also allow engineers to develop more affordable and sustainable polymer materials.

Read More: Unique Enzyme Combination Could Reduce Global Plastic Waste

Check out more news and information on Plastic Pollution on Science Times.

See the original post:
Machine Learning and AI Can Now Create Plastics That Easily Degrade - Science Times