Category Archives: Machine Learning
Sift to Host Live Virtual Event Featuring Gartner Analysts on Machine Learning-powered Fraud Detection, Creating Trust and Safety on the Internet -…
SAN FRANCISCO, April 06, 2021 (GLOBE NEWSWIRE) -- Sift, the leader in Digital Trust & Safety, today announced that it will be hosting a virtual event on April 20, 2021, with Gartner analysts presenting new research in two live-only sessions: How to Select a Machine Learning Vendor for Fraud Detection in Online Retail, and Create Trust and Safety on the Internet.
During the sessions, Gartner Senior Director Analysts Dr. Akif Khan and Jonathan Care will explore the challenges presented by today's interconnected digital Fraud Economy, which can easily overwhelm fraud teams restricted by limited resources, disparate tools, or a narrow strategy focused on a single abuse type. Led by Sift Trust and Safety Architect Kevin Lee, the sessions will provide actionable steps merchants can take to prevent inaccurate transaction decisioning, rising chargebacks and false positives, unnecessary friction for trusted users, and, ultimately, lost revenue.
Lee and the Gartner analysts will also answer live questions from attendees.
"As cybercriminals adapt and become more sophisticated, fraud fighters can only defend their organizations by staying one step ahead of them," said Lee. "Our virtual event featuring Gartner will arm fraud prevention and trust and safety teams with the guidance they need to not only prevent fraud but create a streamlined experience for trusted customers, the foundation of a Digital Trust & Safety strategy."
To see full details and to sign up for the live virtual event, go to https://pages.sift.com/gartner-event-2021.html.
About Sift
Sift is the leader in Digital Trust & Safety, empowering companies ranging from digital disruptors to the Fortune 500 to unlock new revenue without risk. Sift dynamically prevents fraud and abuse through industry-leading technology and expertise, an unrivaled global data network of 70 billion events per month, and a commitment to long-term customer partnerships. Global brands such as Twitter, Airbnb, and Wayfair rely on Sift to gain a competitive advantage in their markets. Visit us at sift.com and follow us on Twitter @GetSift.
Media Contact: Victor White, Director of Corporate Communications, Sift, vwhite@siftscience.com
TigerGraph’s Graph + AI Summit 2021 to Feature 40+ Sessions, Live Workshops and Speakers from JPMorgan Chase, NewDay, Pinterest, Jaguar Land Rover and…
REDWOOD CITY, Calif., April 08, 2021 (GLOBE NEWSWIRE) -- TigerGraph, provider of the leading graph analytics platform, today unveiled the complete agenda for Graph + AI Summit 2021, the industry's only open conference devoted to democratizing and accelerating analytics, AI and machine learning with graph algorithms. The roster includes confirmed speakers from JPMorgan Chase, Intuit, NewDay, Jaguar Land Rover, Pinterest, Stanford University, Forrester Research, Accenture, Capgemini, KPMG, Intel, Dell, and Xilinx, as well as many innovative startups including John Snow Labs, Fintell, SaH Solutions and Sayari Labs. The virtual conference, set for April 21-23, offers keynotes, speakers, real-world customer case studies and hands-on workshops for data, analytics and AI professionals.
"The combination of analytics, AI, machine learning and graph is a powerful one that offers many human benefits, and forward-looking companies in all industries have taken note," said Dr. Yu Xu, founder and CEO of TigerGraph. "Graph + AI Summit is again bringing together industry luminaries, technical experts and business leaders from the world's largest banks, fintechs, tech giants and manufacturers to share implementation best practices, lessons learned and more. We're pleased to welcome back speakers from Jaguar Land Rover and Intuit, and welcome new participants from an impressive list of today's top innovators driving the adoption of graph. Our goal is to make graph accessible, applicable and understandable for all, as more people grasp how graph-related technologies can improve our lives."
Graph + AI Summit returns after a successful Graph + AI 2020; the inaugural event attracted more than 3,000 attendees from 56 countries, and welcomed data scientists, data engineers, architects and business and IT executives from 115 of the Fortune 500 companies. The latest conference will host over 6,000 attendees this year and again focus on accelerating analytics, AI and machine learning with graph algorithms, timely technologies that are on the minds of today's business leaders. After 2020 accelerated enterprises' shift to the cloud, businesses are realizing graph technologies are key to connecting, analyzing and gleaning insights from data.
Graph + AI Summit 2021 includes keynote presentations, executive roundtables, technical breakout sessions, industry tracks (banking, insurance and fintech, healthcare, life sciences and government) and live workshops for advanced analytics and machine learning.
Keynote speakers presenting during conference general sessions include:
Notable roundtables and interactive sessions include:
Graph + AI Summit sessions will also cover the following topics:
Register for one of these live workshops for advanced analytics and machine learning now:
View the Graph + AI Summit agenda: https://www.tigergraph.com/graphaisummit/#day1. Register and secure your complimentary spot: https://www.tigergraph.com/graphaisummit/.
Helpful Links
About TigerGraph
TigerGraph is a platform for advanced analytics and machine learning on connected data. Based on the industry's first and only distributed native graph database, TigerGraph's proven technology supports advanced analytics and machine learning applications such as fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. The company is headquartered in Redwood City, California, USA. Start free with tigergraph.com/cloud.
Media Contact: Cathy Wright, Offleash PR for TigerGraph, cathy@offleashpr.com, 650-678-1905
AI and Machine Learning Operationalization Software Market to Witness Stellar CAGR During the Forecast Period 2021-2026 – Business-newsupdate.com
The Global AI and Machine Learning Operationalization Software Market report draws precise insights by examining the latest and prospective industry trends and helping readers recognize the products and services that are boosting revenue growth and profitability. The study performs a detailed analysis of all the significant factors, including drivers, constraints, threats, challenges, prospects, and industry-specific trends, impacting the AI and Machine Learning Operationalization Software market on a global and regional scale. Additionally, the report describes the worldwide market scenario along with the competitive landscape of leading participants.
The recent study on the AI and Machine Learning Operationalization Software market offers a detailed analysis of this business vertical by expounding the key development trends, restraints and limitations, and opportunities that will influence the industry dynamics in the coming years. Proceeding further, it sheds light on the regional markets and identifies the top areas for further business development, followed by a thorough scrutiny of the prominent companies in this business sphere. Additionally, the report explains the impact of the Covid-19 pandemic on the profitability graph and highlights the business strategies adopted by major players to adapt to the instabilities in the market.
Major highlights from the Covid-19 impact analysis:
Request Sample Copy of this Report @ https://www.business-newsupdate.com/request-sample/70745
An overview of the regional analysis:
Additional highlights from the AI and Machine Learning Operationalization Software market report:
Strategic Points Covered in Table of Content of Global AI and Machine Learning Operationalization Software Market:
Request Customization on This Report @ https://www.business-newsupdate.com/request-for-customization/70745
Apple Reveals a Multi-Mode Planar Engine for a Neural Processor that could be used in A-Series & screamingly fast M1 Processors – Patently Apple
Back in 2017, Apple introduced the A11, which included the company's first dedicated neural network hardware, which Apple calls a "Neural Engine." At the time, Apple's neural network hardware was able to perform up to 600 billion operations per second and was used for Face ID, Animoji, and other machine learning tasks. The Neural Engine allows Apple to implement neural networks and machine learning in a more energy-efficient manner than using either the main CPU or the GPU. Today, Apple's Neural Engine has advanced to the new M1 processor, which, according to Apple, delivers 15x faster machine learning performance through the Neural Engine.
Apple revealed back in Q4 that the "M1 features their latest Neural Engine. Its 16-core design is capable of executing a massive 11 trillion operations per second. In fact, with a powerful 8-core GPU, machine learning accelerators and the Neural Engine, the entire M1 chip is designed to excel at machine learning." There's an excellent chance that today's patent covers technology built into the M1 processor to help it achieve its breakthrough performance. While the patent was published today, it was filed in Q4 2019, before the M1 surfaced.
Today, the U.S. Patent Office published a patent application from Apple titled "Multi-Mode Planar Engine for Neural Processor." Apple's invention relates to a circuit for performing operations related to neural networks, and more specifically to a neural processor that includes a plurality of neural engine circuits and one or more multi-mode planar engine circuits.
An artificial neural network (ANN) is a computing system or model that uses a collection of connected nodes to process input data. The ANN is typically organized into layers, where different layers perform different types of transformation on their input. Extensions or variants of ANNs such as convolutional neural networks (CNN), recurrent neural networks (RNN) and deep belief networks (DBN) have come to receive much attention. These computing systems or models often involve extensive computing operations, including multiplication and accumulation. For example, a CNN is a class of machine learning technique that primarily uses convolution between input data and kernel data, which can be decomposed into multiplication and accumulation operations.
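As a rough illustration of that decomposition (a minimal NumPy sketch for exposition only, not Apple's hardware or code), a small 2-D convolution can be written as nothing more than nested multiply-accumulate loops:

```python
import numpy as np

def conv2d_mac(input_data, kernel):
    """Naive 2-D convolution (valid padding) expressed as explicit
    multiply-accumulate (MAC) steps, the primitive operation that
    neural engines accelerate in hardware."""
    in_h, in_w = input_data.shape
    k_h, k_w = kernel.shape
    out_h, out_w = in_h - k_h + 1, in_w - k_w + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for m in range(k_h):
                for n in range(k_w):
                    # one multiplication followed by one accumulation
                    acc += input_data[i + m, j + n] * kernel[m, n]
            output[i, j] = acc
    return output

# Example: a 4x4 input convolved with a 3x3 kernel yields a 2x2 output.
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))
print(conv2d_mac(x, k))
```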
Depending on the types of input data and operations to be performed, these machine learning systems or models can be configured differently. Such varying configurations include, for example, pre-processing operations, the number of channels in the input data, the kernel data to be used, the non-linear function to be applied to the convolution result, and the application of various post-processing operations. Using a central processing unit (CPU) and its main memory to instantiate and execute machine learning systems or models of various configurations is relatively easy because such systems or models can be instantiated with mere updates to code. However, relying solely on the CPU for the various operations of these machine learning systems or models would consume significant CPU bandwidth as well as increase overall power consumption.
Apple's invention specifically relates to a neural processor that includes a plurality of neural engine circuits and a planar engine circuit operable in multiple modes and coupled to the plurality of neural engine circuits.
At least one of the neural engine circuits performs a convolution operation of first input data with one or more kernels to generate a first output. The planar engine circuit generates a second output from second input data that corresponds to the first output or to a version of the input data of the neural processor.
The input data of the neural processor may be data received from a source external to the neural processor, or outputs of the neural engine circuits or planar engine circuit in a previous cycle. In a pooling mode, the planar engine circuit reduces the spatial size of a version of second input data. In an elementwise mode, the planar engine circuit performs an elementwise operation on the second input data. In a reduction mode, the planar engine circuit reduces the rank of a tensor.
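As a rough guide to the terminology, the toy NumPy sketch below shows the analogous tensor operations for the three modes; it is only an illustration of what pooling, elementwise, and reduction operations do, not a model of Apple's circuit.

```python
import numpy as np

x = np.arange(16, dtype=float).reshape(1, 4, 4)   # a small feature map
y = np.full_like(x, 2.0)                          # a second operand

# Pooling mode: reduce the spatial size (2x2 max pooling here).
pooled = x.reshape(1, 2, 2, 2, 2).max(axis=(2, 4))   # shape (1, 2, 2)

# Elementwise mode: apply an operation element by element.
elementwise = x + y                                   # shape unchanged, (1, 4, 4)

# Reduction mode: reduce the rank of a tensor (sum out the spatial axes).
reduced = x.sum(axis=(1, 2))                          # shape (1,), rank reduced

print(pooled.shape, elementwise.shape, reduced.shape)
```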
Apple's patent FIG. 3 below is a block diagram illustrating a neural processor circuit; FIGS. 6A, 6B, and 6C are conceptual diagrams respectively illustrating a pooling operation, an elementwise operation, and a reduction operation.
For the deeper details, review Apple's patent application 20210103803.
Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Foundations of Machine Learning | The MIT Press
Summary
Fundamental topics in machine learning are presented along with theoretical and conceptual tools for the discussion and proof of algorithms.
This graduate-level textbook introduces fundamental concepts and methods in machine learning. It describes several important modern algorithms, provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application. The authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics.
Foundations of Machine Learning fills the need for a general textbook that also offers theoretical details and an emphasis on proofs. Certain topics that are often treated with insufficient attention are discussed in more detail here; for example, entire chapters are devoted to regression, multi-class classification, and ranking. The first three chapters lay the theoretical foundation for what follows, but each remaining chapter is mostly self-contained. The appendix offers a concise probability review, a short introduction to convex optimization, tools for concentration bounds, and several basic properties of matrices and norms used in the book.
The book is intended for graduate students and researchers in machine learning, statistics, and related areas; it can be used either as a textbook or as a reference text for a research seminar.
Hardcover (out of print) | ISBN: 9780262018258 | 432 pp. | 7 in x 9 in | 55 color illus., 40 b&w illus. | August 2012
Authors
Mehryar Mohri is Professor of Computer Science at New York University's Courant Institute of Mathematical Sciences and a Research Consultant at Google Research.
Afshin Rostamizadeh is a Research Scientist at Google Research.
Ameet Talwalkar is Assistant Professor in the Machine Learning Department at Carnegie Mellon University.
Increasing the Accessibility of Machine Learning at the Edge – Industry Articles – All About Circuits
In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud service; nearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.
However, a new paradigm known as "edge intelligence" is quickly gaining traction in technology's fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use cases for professional users, with the goal of making machine learning accessible to all.
Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.
For public cloud services, the cloud service provider is responsible for managing the hardware and ensuring that the service's availability meets a certain standard and customer expectations. The customers of cloud services pay for what they use, and the employment of such services is generally only viable for large-scale operations.
On the other hand, edge computing happens somewhere between the cloud and the client's network.
While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, provide increased reliability, and reduce cloud-service costs and latency.
Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide what data is relevant and should be uploaded to the cloud for deeper analysis.
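A minimal sketch of that filtering idea is shown below; the scoring model, threshold, and sensor readings are all hypothetical stand-ins for illustration, not any particular vendor's software.

```python
import statistics

def edge_filter(readings, model_score, threshold=0.8):
    """Runs on the edge node: keep only readings the local model flags as
    relevant, and summarize the rest, so far less data reaches the cloud."""
    relevant = [r for r in readings if model_score(r) >= threshold]
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "uploaded": len(relevant),
    }
    return relevant, summary

# Hypothetical example: a trivial "model" that flags unusually high readings.
readings = [0.2, 0.3, 0.25, 0.9, 0.28, 0.95]
score = lambda r: 1.0 if r > 0.5 else 0.0
to_upload, stats = edge_filter(readings, score)
print(to_upload, stats)   # only 2 of 6 readings would be sent to the cloud
```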
Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.
Multiclass or complex ML applications such as object tracking and surveillance, automatic speech recognition, and multi-face detection typically require NNs. Many scientists have worked hard to improve and optimize NN algorithms in the last decade to allow them to run on devices with limited computational resources, which has helped accelerate the edge-computing paradigms popularity and practicability.
One such algorithm is MobileNet, which is an image classification algorithm developed by Google. This project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
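As a hedged illustration of how little code such a classifier needs (this uses the MobileNetV2 weights bundled with TensorFlow on a desktop; an actual edge deployment would more likely use a quantized TensorFlow Lite variant of the same architecture):

```python
import numpy as np
import tensorflow as tf

# Load a pretrained MobileNetV2 classifier; its weights are small enough
# (roughly 14 MB) to run comfortably on modest hardware.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Classify a single 224x224 RGB image (random data here as a stand-in).
image = np.random.randint(0, 256, (1, 224, 224, 3)).astype("float32")
inputs = tf.keras.applications.mobilenet_v2.preprocess_input(image)
predictions = model.predict(inputs)

# Print the top three predicted labels and their scores.
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(predictions, top=3)[0]:
    print(label, round(float(score), 3))
```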
Until recently, machine learning was primarily meant for data-science experts with a deep understanding of ML and deep learning applications. Typically, the development tools and software suites were immature and challenging to use.
Machine learning and edge computing are expanding rapidly, and the interest in these fields steadily grows every year. According to current research, 98% of edge devices will use machine learning by 2025. This percentage translates to about 18-25 billion devices that the researchers expect to have machine learning capabilities.
In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.
One concrete example of a possible application is an intelligent door lock combined with a camera. Such a device could automatically detect a person wanting access to a room and allow the person entry when appropriate.
Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 and a 400 MHz Arm Cortex-M4), developers can choose to run compatible NN implementations with real-time constraints in mind.
Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.
The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex A53 processor cores.
Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. Besides that, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration capabilities, multimedia features such as video encode and decode support (including H.265), and 8 PDM microphone inputs.
An additional low-power 800 MHz Arm Cortex-M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.
With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.
The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries to enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.
The upcoming eIQ Toolkit adds an accessible GUI, the eIQ Portal, and a workflow that enables developers of all experience levels to create ML applications.
Developers can choose to follow a process called BYOM (bring your own model), in which they build their trained models using cloud-based tools and then import them into the eIQ Toolkit software environment. Then, all that's left to do is select the appropriate inference engine in eIQ. Alternatively, the developer can use the eIQ Portal GUI-based tools or the command line interface to import and curate datasets and use the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.
Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.
With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.
More recently, developers often enhance these edge nodes with machine learning capabilities. Doing so helps to categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables many applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.
ML applications have traditionally been exclusively designed by data-science experts with a deep understanding of ML and deep learning applications. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, and the eIQ ML software development environment to help open ML up to any designer. This hardware and software aims to allow developers to build future-proof ML applications at any level of experience, regardless of how small or large the project will be.
Splunk : Life as a PM on the Splunk Machine Learning Team – Marketscreener.com
Starting a new job is stressful any time. Starting a job in the thick of a pandemic-enforced shelter-in-place is its own beast. I learned this firsthand when I started a new job in May 2020 with a team that I never met face to face, not even for interviews. Towards the end of 2020, I then got an opportunity to interview with the Machine Learning (ML) Product Management team at Splunk. Even though I remembered the experience from 6 months prior of onboarding and starting a new job remotely, I jumped at the opportunity, and started in January 2021.
Before coming to Splunk, I worked in application security - static application security, more specifically. I loved every moment of working on the hard problems of finding vulnerabilities in application source code. It is a very complex problem to solve, especially when done with high accuracy and high performance. It is also very rewarding, and I felt like a superhero every day, solving important problems that affect a lot of people alongside some very brilliant minds. Naturally, that is what I was looking for in my next role as well: challenge, talent, ownership, and responsibility. It has been six weeks since I started my new role on the Splunk ML team, and I wanted to share my experience of starting remotely and my thoughts on our portfolio.
Onboarding was a breeze. In my first week on the Machine Learning PM Team, I went through a bootcamp with other new hires. The focus of this bootcamp was to give us insight into Splunk's different product lines as well as Splunk's culture. It was super well-organized and fun. At the same time, I started meeting my new team members virtually. Everyone I have met so far at Splunk has been very helpful, welcoming, and nice. More than anything else, this is what I have liked most about my Splunk experience.
After the onboarding and 'meet and greets,' I experienced what my hiring manager had warned me about - 'You will be drinking from the fire hose.' I was, and I still am, in a good way. As you will see later on in this post, we are building a lot of cool things on this team, which is challenging but also exciting. The ML team moves fast, is not shy about challenging rules that don't make sense for our team and our customers, and paves its own path. If something needs to be done, we figure it out and we do it. (If this sounds like you, we are hiring! Check out the many machine learning roles at Splunk.)
Which brings me to the very important question of what it is we do here in ML at Splunk: what products are we working on, what problems are we solving, and for whom?
Starting with why, our mission is to empower Splunk customers to leverage machine intelligence in their operations.
Our team's main goal is to enable customers to develop new advanced analytics & ML workloads on their data in Splunk, thus increasing the value they realize from the platform. We want to increase engagement, enable new use cases, and enrich the Splunk experience for our customers.
We strive to make machine learning accessible to all Splunk users. Currently, our offerings meet the needs of four different personas that range from novice to expert when it comes to familiarity with data science and ML:
The different personas we are serving require different solutions - from no-code experiences to heavy-code experiences. We achieve this by having a breadth of products:
These products cover the different personas we are targeting. However, we need to make it easy for users to use our solutions where they are. We achieve this via the following:
It is evident that we have a bold vision and lots to do. We want to make ML-powered insights accessible to core Splunk users. At the same time, we want data scientists to be able to leverage their Splunk data within Splunk.
Over the past year, and for the short term, our focus has been on the data scientist. We are working on making SMLE Studio available as an app on the Splunk Cloud Platform. In the middle term, however, we are going to shift our focus to the Splunk user.
There are other initiatives in Applied ML and research, streaming ML, and the embedded ML space. I will leave that for another blog post because, as I said earlier, I am new! I'm still learning, and there's so much to cover!
The most exciting part for me is that we are in the early stages of delivering on this vision. There is a huge opportunity to own a big part of this effort and create an impact. Ask any product manager and you will quickly know that more exciting words have never been spoken. Needless to say, I am very excited about all the amazing things we are going to build together. Onwards!
Want to help us tackle this vision? Take a look at our machine learning roles today.
Is Machine Learning The Future Of Coffee Health Research? – Sprudge
If you've been a reader of Sprudge for any reasonable amount of time, you've no doubt by now read multiple articles about how coffee is potentially beneficial for some particular facet of your health. The stories generally go like this: a study finds that drinking coffee is associated with an X% decrease in [bad health outcome], followed shortly by the caveat that the study is observational and does not prove causation.
In a new study in the American Heart Association's journal Circulation: Heart Failure, researchers found a link between drinking three or more cups of coffee a day and a decreased risk of heart failure. But there's something different about this observational study. This study used machine learning to reach its conclusion, and it may significantly alter the utility of this sort of study in the future.
As reported by the New York Times, the new study isn't exactly new at all. Led by David Kao, a cardiologist at the University of Colorado School of Medicine, researchers re-examined the Framingham Heart Study (FHS), a long-term, ongoing cardiovascular cohort study of residents of the city of Framingham, Massachusetts, that began in 1948 and has grown to include over 14,000 participants.
Whereas most research starts out with a hypothesis that it then seeks to prove or disprove, which can lead to false relationships being established by the sort of variables researchers choose to include or exclude in their data analysis, Kao et al approached the FHS with no intended outcome. Instead, they utilized a powerful and increasingly popular data-analysis technique known as machine learning to find any potential links between patient characteristics captured in the FHS and the odds of the participants experiencing heart failure.
Able to analyze massive amounts of data in a short amount of time, as well as handle uncertainties in the data (like whether a reported cup of coffee is six ounces or eight ounces), machine learning can then start to ascertain and rank which variables are most associated with incidents of heart failure, giving even observational studies more explanatory power in their findings. And indeed, when the results of the FHS machine learning analysis were compared to two other well-known studies, the Cardiovascular Heart Study (CHS) and the Atherosclerosis Risk in Communities study (ARIC), the algorithm was able to correctly predict the relationship between coffee intake and heart failure.
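The general pattern can be sketched as follows: fit a gradient-boosted model to cohort-style records with no pre-specified hypothesis, then rank the variables by how much the model relies on them. The data, variable names, and risk relationship below are synthetic stand-ins for illustration, not the FHS data or Kao's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for cohort data: patient characteristics plus an outcome.
cohort = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "coffee_cups_per_day": rng.integers(0, 6, n),
    "systolic_bp": rng.normal(130, 15, n),
    "bmi": rng.normal(27, 4, n),
})
# Synthetic outcome loosely tied to age and blood pressure.
risk = 0.03 * (cohort["age"] - 55) + 0.02 * (cohort["systolic_bp"] - 130)
cohort["heart_failure"] = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(
    cohort.drop(columns="heart_failure"), cohort["heart_failure"]
)

# Rank variables by how strongly the fitted model relies on them.
ranking = pd.Series(
    model.feature_importances_,
    index=cohort.drop(columns="heart_failure").columns,
).sort_values(ascending=False)
print(ranking)
```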
But, of course, there are caveats. Machine learning algorithms are only as good as the data being fed to them. If the scope is too narrow, the results may not translate more broadly, and their real-world predictive utility is significantly decreased. The New York Times offers facial recognition software as an example: trained primarily on white male subjects, the algorithms have been much less accurate in identifying women and people of color.
Still, the new study shows promise, not just for the health benefits the algorithm uncovered, but for how we undertake and interpret this sort of analysis-driven research.
Zac Cadwalader is the managing editor at Sprudge Media Network and a staff writer based in Dallas. Read more Zac Cadwalader on Sprudge.
Machine learning tool sets out to find new antimicrobial peptides – Chemistry World
By combining machine learning, molecular dynamics simulations and experiments it has been possible to design antimicrobial peptides from scratch.1 The approach by researchers at IBM is an important advance in a field where data is scarce and trial-and-error design is expensive and slow.
Antimicrobial peptides, small molecules consisting of 12 to 50 amino acids, are promising drug candidates for tackling antibiotic resistance. "The co-evolution of antimicrobial peptides and bacterial phyla over millions of years suggests that resistance development against antimicrobial peptides is unlikely, but that should be taken with caution," comments Håvard Jenssen at Roskilde University in Denmark, who was not involved in the study.
Artificial intelligence (AI) tools are helpful in discovering new drugs. Payel Das from the IBM Thomas J Watson Research Centre in the US says that such methods can be broadly divided into two classes. Forward design involves screening of peptide candidates using sequence-activity or structure-activity models, whereas the inverse approach considers targeted and de novo molecule design. "IBM's AI framework, which is formulated for the inverse design problem, outperforms other de novo strategies by almost 10%," she adds.
"Within 48 days, this approach enabled us to identify, synthesise and experimentally test 20 novel AI-generated antimicrobial peptide candidates, two of which displayed high potency against diverse Gram-positive and Gram-negative pathogens, including multidrug-resistant Klebsiella pneumoniae, as well as a low propensity to induce drug resistance in Escherichia coli," explains Das.
The team first used a machine learning system called a deep generative autoencoder to capture information about different peptide sequences, and then applied controlled latent attribute space sampling, a new computational method for generating peptide molecules with custom properties. This created a pool of 90,000 possible sequences. "We further screened those molecules using deep learning classifiers for additional key attributes such as toxicity and broad-spectrum activity," Das says. The researchers then carried out peptide-membrane binding simulations on the pre-screened candidates and finally selected 20 peptides, which were tested in lab experiments and in mice. Their studies indicated that the new peptides work by disrupting pathogen membranes.
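The encode-perturb-decode idea behind such latent-space sampling can be sketched in a few dozen lines. The toy model below trains on random stand-in sequences and uses a plain autoencoder; it is not IBM's deep generative autoencoder or its attribute-controlled sampling method, only an illustration of generating candidate sequences near a seed in latent space.

```python
import numpy as np
import tensorflow as tf

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SEQ_LEN, VOCAB, LATENT = 20, len(AMINO_ACIDS), 8

def one_hot(seq):
    x = np.zeros((SEQ_LEN, VOCAB), dtype="float32")
    for i, aa in enumerate(seq[:SEQ_LEN]):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x

# Stand-in training set: random sequences (a real run would use known peptides).
train_seqs = ["".join(np.random.choice(list(AMINO_ACIDS), SEQ_LEN)) for _ in range(256)]
X = np.stack([one_hot(s) for s in train_seqs])

# Small autoencoder: the encoder compresses a sequence to a latent vector, and
# the decoder reconstructs a per-position distribution over amino acids.
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(SEQ_LEN, VOCAB)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(LATENT),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(LATENT,)),
    tf.keras.layers.Dense(SEQ_LEN * VOCAB),
    tf.keras.layers.Reshape((SEQ_LEN, VOCAB)),
    tf.keras.layers.Softmax(axis=-1),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Sampling: perturb the latent code of a seed sequence and decode, producing
# nearby candidate sequences that would then go to downstream screening.
seed = encoder.predict(X[:1], verbose=0)
for _ in range(3):
    z = seed + np.random.normal(0, 0.5, size=seed.shape).astype("float32")
    probs = decoder.predict(z, verbose=0)[0]
    print("".join(AMINO_ACIDS[i] for i in probs.argmax(axis=-1)))
```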
"The authors created an exciting way of producing new lead compounds, but they're not the best compounds that have ever been made," says Robert Hancock from the University of British Columbia in Canada, who discovered other peptides with antimicrobial activity in 2009.2 Jenssen participated in that study too and agrees. "The identified sequences are novel and cover a new avenue of the classical chemical space, but to flag them as interesting from a drug development point of view, the activities need to be optimised."
Das points out that IBM's tool looks for new peptides from scratch and doesn't depend on engineered input features. "This line of earlier work relies on the forward design problem, that is, screening of pre-defined peptide libraries designed using an existing antimicrobial sequence," she says.
Hancock agrees that this makes the new approach challenging. "The problem they were trying to solve was much more complex because we narrowed down to a modest number of amino acids whereas they just took anything that came up in nature," he says. That could represent a significant advance, but the output at this stage isn't optimal. Hancock adds that the strategy does find some good sequences to start with, so he thinks it could be combined with other methods to improve on those leads and come up with really good molecules.
Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19 – DocWire News
PLoS One. 2021 Apr 1;16(4):e0249285. doi: 10.1371/journal.pone.0249285. eCollection 2021.
ABSTRACT
BACKGROUND: The Coronavirus disease 2019 (COVID-19) pandemic has affected millions of people across the globe. It is associated with a high mortality rate and has created a global crisis by straining medical resources worldwide.
OBJECTIVES: To develop and validate machine-learning models for prediction of mechanical ventilation (MV) for patients presenting to emergency room and for prediction of in-hospital mortality once a patient is admitted.
METHODS: Two cohorts were used for the two different aims. 1980 COVID-19 patients were enrolled for the aim of prediction of MV. Data from 1036 patients, including demographics, past smoking and drinking history, past medical history, vital signs at the emergency room (ER), laboratory values, and treatments, were collected for training, and 674 patients were enrolled for validation using the XGBoost algorithm. For the second aim, to predict in-hospital mortality, 3491 patients hospitalized via the ER were enrolled. CatBoost, a new gradient-boosting algorithm, was applied for training and validation of the cohort.
RESULTS: Older age, higher temperature, increased respiratory rate (RR) and lower oxygen saturation (SpO2) from the first set of vital signs were associated with an increased risk of MV among the 1980 patients in the ER. The model had a high accuracy of 86.2% and a negative predictive value (NPV) of 87.8%. Among patients who required MV, a higher RR, body mass index (BMI) and longer length of stay in the hospital were the major features associated with in-hospital mortality. The second model had a high accuracy of 80% with an NPV of 81.6%.
CONCLUSION: Machine learning models using the XGBoost and CatBoost algorithms can predict the need for mechanical ventilation and mortality with very high accuracy in COVID-19 patients.
PMID:33793600 | DOI:10.1371/journal.pone.0249285
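For readers curious what the first model might look like in code, here is a hedged sketch of an XGBoost classifier for MV prediction. The features and risk relationship are synthetic stand-ins chosen only to mirror the kind of variables the abstract reports; this is not the study's cohort, code, or results.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-in for ER demographics and first vital signs.
data = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "temperature": rng.normal(37.5, 0.8, n),
    "respiratory_rate": rng.normal(20, 5, n),
    "spo2": rng.normal(94, 4, n),
})
# Synthetic label loosely following the reported risk factors.
risk = (0.03 * (data["age"] - 60)
        + 0.2 * (data["respiratory_rate"] - 20)
        - 0.15 * (data["spo2"] - 94))
data["needs_mv"] = (risk + rng.normal(0, 1.5, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="needs_mv"), data["needs_mv"], test_size=0.25, random_state=0
)

# Gradient-boosted tree classifier for the mechanical-ventilation outcome.
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Report accuracy and negative predictive value, the metrics cited in the abstract.
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("accuracy:", round(accuracy_score(y_test, pred), 3))
print("negative predictive value:", round(tn / (tn + fn), 3))
```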