Category Archives: Machine Learning
When Are We Going to Start Designing AI With Purpose? – Machine Learning Times – The Predictive Analytics Times
Originally published in UX Collective, Jan 19, 2021.
For an industry that prides itself on moving fast, the tech community has been remarkably slow to adapt to the differences of designing with AI. Machine learning is an intrinsically fuzzy science, yet when it inevitably returns unpredictable results, we tend to react like it's a puzzle to be solved, believing that with enough algorithmic brilliance we can eventually fit all the pieces into place and render something approaching objective truth. But objectivity and truth are often far afield from the true promise of AI, as we'll soon discuss.
I think a lot of the confusion stems from language; in particular, the way we talk about machine-like efficiency. Machines are expected to make precise measurements about whatever they're pointed at: to produce data.
But machine learning doesn't produce data. Machine learning produces predictions about how observations in the present overlap with patterns from the past. In this way, it's literally an inversion of the classic if-this-then-that logic that's driven conventional software development for so long. My colleague Rick Barraza has a great way of describing the distinction.
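As a rough illustration of that inversion (a toy sketch of my own, not Barraza's framing), consider the difference between a hand-written rule and a model that infers one from past observations; the spam threshold and data below are invented:

```python
from sklearn.linear_model import LogisticRegression

# Conventional software: the rule is written by hand, up front.
def rule_based_spam_check(num_links: int) -> bool:
    return num_links > 5  # if-this-then-that: explicit and deterministic

# Machine learning: the "rule" is inferred from past observations.
past_link_counts = [[0], [1], [2], [7], [9], [12]]  # past emails
past_labels = [0, 0, 0, 1, 1, 1]                    # 1 = was spam
model = LogisticRegression().fit(past_link_counts, past_labels)

# The output is a prediction (a probability), not a measurement.
print(model.predict_proba([[6]])[0][1])  # e.g. ~0.7, never a certain True/False
```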
Five trends in machine learning-enhanced analytics to watch in 2021 – Information Age
AI usage is growing rapidly. What does 2021 hold for the world of analytics, and how will AI drive it?
AI-powered operations look set to keep growing this year.
As the world prepares to recover from the Covid-19 pandemic, businesses will need to increasingly rely on analytics to deal with new consumer behaviour.
According to Gartner analyst Rita Sallam, "In the face of unprecedented market shifts, data and analytics leaders require an ever-increasing velocity and scale of analysis in terms of processing and access to accelerate innovation and forge new paths to a post-Covid-19 world."
Machine learning and artificial intelligence are finding increasingly significant use cases in data analytics for business. Here are five trends to watch out for in 2021.
Gartner predicts that by 2024, 75% of enterprises will shift towards putting AI and ML into operation. A big reason for this is the way the pandemic has changed consumer behaviour. Regression learning models that rely on historical data might not be valid anymore. In their place, reinforcement and distributed learning models will find more use, thanks to their adaptability.
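As a minimal sketch of why adaptability matters here (invented data, with scikit-learn's incremental SGDRegressor standing in for a full reinforcement or distributed setup), the model below is updated batch by batch and tracks a regime change that would invalidate a one-off fit on historical data:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

# A weekly batch loop: the relationship between input and target flips
# mid-stream, much as pandemic-era behaviour broke historical models.
for week in range(100):
    X = rng.uniform(0, 1, size=(32, 1))
    slope = 3.0 if week < 50 else -2.0   # regime change at week 50
    y = slope * X.ravel() + rng.normal(0, 0.1, size=32)
    model.partial_fit(X, y)             # incremental update, no full refit

print(model.coef_)  # reflects the new regime, not the stale one
```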
A large share of businesses have already democratised their data through the use of embedded analytics dashboards. The use of AI to generate augmented analytics to drive business decisions will increase as businesses seek to react faster to shifting conditions. Powering data democratisation efforts with AI will help non-technical users make a greater number of business decisions, without having to rely on IT support to query data.
Companies such as Sisense already offer the ability to integrate powerful analytics into custom applications. As AI algorithms become smarter, it's a given that they'll power low-latency alerts that help managers react to quantifiable anomalies indicating changes in their business. AI is also expected to play a major role in delivering dynamic data stories and might reduce a user's role in data exploration.
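A hedged sketch of what such a low-latency anomaly alert might look like under the hood; the KPI values are synthetic and IsolationForest is just one plausible detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
baseline = rng.normal(loc=100, scale=10, size=(500, 1))  # historical daily KPI

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def check_and_alert(todays_value: float) -> None:
    # predict() returns -1 for anomalies and 1 for inliers
    if detector.predict([[todays_value]])[0] == -1:
        print(f"ALERT: {todays_value} deviates from the learned baseline")

check_and_alert(103.0)  # quiet: within the usual range
check_and_alert(161.5)  # fires: a quantifiable anomaly
```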
A fact that's often forgotten in AI conversations is that these technologies are still nascent. Many of the major developments have been driven by open source efforts, but 2021 will see an increasing number of companies commercialise AI through product releases.
This event will truly be a marker of AI going mainstream. While open source has been highly beneficial to AI, scaling these projects for commercial purposes has been difficult. With companies investing more in AI research, expect a greater proliferation of AI technology in project management, data reusability, and transparency products.
Using AI for better data management is a particular focus of big companies right now. A Pathfinder report in 2018 found that a lack of skilled resources in data management was hampering AI development. However, with ML growing increasingly sophisticated, companies are beginning to use AI to manage data, which fuels even faster AI development.
As a result, metadata management becomes streamlined, and architectures become simpler. Moving forward, expect an increasing number of AI-driven solutions to be released commercially instead of on open source platforms.
Vendors such as Informatica are already using AI and ML algorithms to help develop better enterprise data management solutions for their clients. Everything from data extraction to enrichment is optimised by AI, according to the company.
Voice search and voice data are increasing by the day. With products such as Amazon's Alexa and Google's Assistant finding their way into smartphones, and with growing adoption of smart speakers in our homes, the use of natural language processing will only increase.
Companies will wake up to the immense benefits of voice analytics and will provide their customers with voice tools. The benefits of enhanced NLP include better social listening, sentiment analysis, and increased personalisation.
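For instance, a minimal sentiment-analysis pass over customer comments might look like the sketch below (using NLTK's VADER analyzer as one readily available option; the comments are invented):

```python
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
comments = [
    "The new checkout flow is fantastic, so much faster!",
    "Support kept me on hold for an hour. Unacceptable.",
]
for comment in comments:
    scores = analyzer.polarity_scores(comment)
    print(f"{scores['compound']:+.2f}  {comment}")  # compound score in [-1, 1]
```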
Companies such as AX Semantics provide self-service natural language generation software that allows customers to automate text generation. Porsche, Deloitte and Nivea are among their customers.
As augmented analytics make their way into embedded dashboards, low-level data analysis tasks will be automated. An area that is ripe for automation is data collection and synthesis. Currently, data scientists spend large amounts of time cleaning and collecting data. Automating these tasks by specifying standardised protocols will help companies employ their talent in tasks better suited to their abilities.
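A small sketch of what such a standardised cleaning protocol could look like in practice; the column names and defaults below are illustrative assumptions, not a prescribed standard:

```python
import pandas as pd

def standard_clean(df: pd.DataFrame) -> pd.DataFrame:
    """One shared cleaning protocol applied to every incoming extract."""
    df = df.copy()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    if "order_date" in df.columns:  # coerce dates; bad values become NaT
        df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    numeric = df.select_dtypes("number").columns
    df[numeric] = df[numeric].fillna(df[numeric].median())  # documented default
    return df

raw = pd.DataFrame({"Order Date": ["2021-01-05", "not a date"],
                    "Amount ": [10.0, None]})
print(standard_clean(raw))
```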
A side effect of data analysis automation will be the speeding up of analytics and reporting. As a result, we can expect businesses to make decisions faster along with installing infrastructure that allows them to respond and react to changing conditions quickly.
As the worlds of data and analytics come closer together, vendors who offer end-to-end stacks will provide better value to their customers. Combine this with increased data democratisation and it's easy to see why legacy enterprise software vendors such as SAP offer everything from data management to analytics to storage solutions to their clients.
IoT devices are making their way into not just B2C products but B2B, enterprise and public projects as well, from smart cities to Industry 4.0.
Data is being generated at unprecedented rates, and to make sense of it, companies are increasingly turning to AI. With so much signal to sift through, AI is becoming a key aid in arriving at insights.
While the rise of embedded and augmented analytics has already been discussed, it's critical to point out that the sources of data are more varied than ever before. This makes the use of AI essential, since manual processes cannot handle such large volumes efficiently.
As AI technology continues to make giant strides, the business world is gearing up to take full advantage of it. We've reached a stage where AI is powering further AI development, and the rate of progress will only increase.
SaaS Data Ownership: The Key to Data Protection and More Impactful Machine Intelligence – insideBIGDATA
In this special guest feature, Joe Gaska, Founder and CEO of GRAX, discusses how SaaS data ownership is the key to data protection and more impactful machine intelligence. Under Joe's leadership, GRAX has become the fastest-growing application in Salesforce's history. He has been featured on the main stage at Dreamforce and has won numerous awards including the Salesforce Innovation Award. Prior to founding GRAX, Joe built Ionia Corporation and successfully sold it to LogMeIn (Xively), which is now part of the Google IoT Cloud. Joe holds a BA in Applied Mathematics and Computer Science from the University of Maine at Farmington.
With Gartner reporting that 97% of organizations have some form of SaaS application in their technology stack, the question of SaaS data ownership is quickly becoming something we can no longer sweep under the rug. Cloud applications are everywhere and so is the sensitive customer data stored in them. And while most organizations have caught on to the fact that they need to take direct ownership of their SaaS data, many still see it as just a compliance checkbox.
But the data stored and repeatedly overwritten in our SaaS applications represents a historical record of cause-and-effect change patterns in our business. This data, aside from being essential for compliance and data privacy, represents the biggest missed opportunity to improve modern-day machine learning algorithms. It is precisely the cause-and-effect information that machine learning algorithms need to make sense of why things change in our business.
Some of the most iconic companies in the world, ones we buy from daily, wear on our wrists, have in our pockets, or rely on to power the internet, are starting to catch on to this opportunity, and they are using an old set of tools in a new way to drive an unfair advantage in their markets.
SaaS Data Privacy and Protection
With most major clouds (AWS, Azure and GCP, to name a few), data warehouses and other traditional tools now offering extensive protections and configurability for a myriad of regulatory scenarios, the elephant in the room remains SaaS, or cloud, applications. When it comes to CRM, third-party marketing automation tools or just about any other SaaS application, businesses are often at a loss about how to extend the same protections to the sensitive customer data stored in those tools. Yet those same tools are the lifeblood of our organizations; they are literally the mechanisms that move us forward in our markets.
So we audit our vendors, force them to sign BAAs or other industry-specific affidavits, block non-compliant tools and hope for the best. When GDPR requests come in, we do our very best to comply, hoping to limit our liability if something goes awry. Meanwhile, as individuals, we opine about the lack of protection extended to our own personal data in all of the cloud apps in which it is stored.
SaaS Data is the Missing Link for Machine Learning
With the mirage of general machine intelligence quickly fading, we've turned to narrower, purpose-built machine learning algorithms to help shed some predictive light on our future. This is where companies like Tesla are successfully feeding massive streams of narrow, time-series sensor data into machine learning algorithms to improve self-driving functionality over time. The rest of us, in the consumer or B2B space, are often left scratching our heads about why Siri, or some other, perhaps more modern intelligent algorithm running in our enterprise, seems to be so poor at giving us meaningful predictions about our future. We often overlook one of the key linchpins of answering that question, something the engineers at Tesla understand all too well: the most critical success factor in machine learning is feeding in a high volume of changes in data over time.
But, short of putting a million connected vehicles onto the road, how can we take advantage of that insight in our business?
It turns out that the answer to that question is the same one that addresses the SaaS data privacy and protection issue we identified earlier: SaaS application change data.
SaaS Data Ownership & Change Data Capture
For most organizations, the highest velocity of changes in data happens in the SaaS applications that they use to go to market. And the dataset those changes are happening to is often the sensitive customer data stored in CRM, ERP, e-commerce, and other critical cloud applications.
Based on both the regulatory need to protect such data, and the strategic advantage the data holds to improving analytics, machine learning and predictive modeling, it behooves every single organization in the world to start taking ownership of their SaaS application data.
But how can this be done?
SaaS Data Replication, Backup, Archiving, Oh My!
Most organizations turn to some form of data replication or change data capture, ingesting application data into some parts of their DataOps ecosystems to try to extract value there. However, most final resting places of data, such as cloud data warehouses, are often only good at consuming data at a specific point in time. They don't offer the ability to consume all changes in data over time, a critical factor for both the regulatory and machine learning scenarios identified earlier.
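A minimal sketch of the distinction: an append-only change log keeps every version of a record, so both point-in-time reads and full change histories fall out of the same structure (the record IDs and fields below are invented):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ChangeLog:
    """Append-only capture: every version of a record is kept, not just the latest."""
    versions: dict = field(default_factory=dict)  # record_id -> [(ts, snapshot), ...]

    def capture(self, record_id: str, snapshot: dict) -> None:
        ts = datetime.now(timezone.utc)
        self.versions.setdefault(record_id, []).append((ts, dict(snapshot)))

    def as_of(self, record_id: str, ts: datetime) -> Optional[dict]:
        # A point-in-time read falls out of the history for free
        past = [snap for t, snap in self.versions.get(record_id, []) if t <= ts]
        return past[-1] if past else None

log = ChangeLog()
log.capture("opp-42", {"stage": "prospecting", "amount": 10_000})
log.capture("opp-42", {"stage": "closed-won", "amount": 12_500})
print(len(log.versions["opp-42"]))  # 2: the full change history is retained
```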
However, some organizations are starting to use old tools in new ways; one such case involves SaaS data backup. Traditional backup tools are extending functionality into SaaS applications, while other, SaaS-first tools are offering organizations the ability to snapshot data and store it in their own cloud environments. While some tools require a workaround to allow organizations direct access to captured data, a new breed of tools is starting to allow organizations to directly access the raw data in their own cloud environments.
3 Things to Look for in the Right Tool
Three simple guideposts can quickly tell an organization whether it has found the right tool for the job.
New Canaan native speaks on Machine Learning Revolution – New Canaan Advertiser
While COVID-19 circumstances have forced organizations to meet remotely on the Zoom application, it has enabled groups like the Rotary Club of New Canaan to invite speakers from far away.
The club's Zoom Christmas party included a previous Rotary International Scholar, Yuri Nakashima, from her home in Japan. This past week's luncheon speaker was New Canaan native John Gnuse, son of Rotarian Jeanne Gnuse and her late husband, Tom. Gnuse spoke to the club from San Francisco, where he is a managing director at Lazard, on the topic of "The Machine Learning Revolution."
Happily, the Zoom format enabled his sister, Dr. Karen Gnuse Nead, in Rochester, N.Y., and uncle, William Pflaum, in Menlo Park, Calif., to attend as well.
Gnuse's career has focused on mergers and acquisitions of major technology companies such as Google, IBM, Microsoft, Amazon and Apple, and as such, he is a great guide to the world of machine learning.
His talk highlighted the progress that advances in computing power and capacity have made possible.
Machine learning refers to the ability of complex algorithms to improve their accuracy and performance based on continuous experience with additional training data.
With these capabilities, complex, iterative processes using multiple parameters have yielded sophisticated neural networks that can learn.
This has yielded sophisticated tools and solutions that were not previously possible, but which we now rely on for so much of daily life: web search, speech recognition (Alexa, Siri), medical research and financial optimization models, to name a few.
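As a minimal illustration of the definition above (my own sketch, not from the talk), a simple scikit-learn model's test accuracy typically climbs as it sees more training examples:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Accuracy on held-out data improves with more training experience
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))
```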
In answer to concerns about where advances in artificial intelligence will take us, John referred to the guardrails already in place, and those which continue to be applied, as key elements of the machine learning revolution. The field raises significant legal, ethical and moral challenges, which will continue to be evaluated, as will concerns regarding bias and fairness, as the results of these networks impact people everywhere.
For more on the club, contact Alex Grantcharov, president, at alex.grantcharov@edwardjones.com, follow the club at http://www.facebook.com/NewCanaanRotary, newcanaanrotary on Instagram or at the club's website, newcanaanrotary.org.
PathAI Machine Learning Models Reveal Treatment-Induced Changes in the Non-Small Cell Lung Cancer Tumor Microenvironment in Samples from the LCMC3…
BOSTON, Jan. 29, 2021 /PRNewswire-PRWeb/ -- PathAI, a global provider of AI-powered technology applied to pathology research, announced that machine learning models were developed and applied on NSCLC samples from the LCMC3 trial by Genentech, a member of the Roche Group, and participating study investigators to identify predictive/prognostic biomarkers in the tumor microenvironment (TME) and perform AI-powered pathologic response assessment. A global summary of the LCMC3 primary analysis will be presented at the World Conference on Lung Cancer Symposium Singapore that will take place from January 28-31, 2021 in an oral presentation by Dr. David Carbone of The Ohio State University ("Clinical/Biomarker Data for Neoadjuvant Atezolizumab in Resectable Stage IB-IIIB NSCLC: Primary Analysis in the LCMC3 Study", January 29, 2021; Session OA06.06).
The LCMC3 study is a single arm trial that enrolled 181 participants with resectable, untreated stage IB to select IIIB NSCLC to investigate the pathologic response to atezolizumab as a neoadjuvant treatment. Biopsies were collected from all study subjects prior to neoadjuvant treatment, and surgical resections were collected after treatment from 159 subjects. Pathologic response is suggested to be associated with survival outcomes and is traditionally assessed upon evaluating the residual tumor following a course of neoadjuvant therapy. A major pathologic response (MPR), described as less than 10% viable tumor cells present in the post neoadjuvant treatment resections, was achieved in 30/144 (21%) subjects eligible for preliminary analysis, and a complete pathologic response, meaning that 0% tumor cells were present after neoadjuvant treatment, was observed in 10/144 (7%) eligible subjects. PathAI's AI-powered quantification of cell and tissue features to characterize the tumor microenvironment and analyze pathologic response is ongoing and those results will be presented at a future meeting.
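The response categories described above reduce to simple thresholds on the percentage of viable tumor cells remaining, as this small sketch (illustrative only, not PathAI's implementation) makes explicit:

```python
def pathologic_response(viable_tumor_pct: float) -> str:
    """Thresholds as described in the LCMC3 summary above."""
    if viable_tumor_pct == 0:
        return "complete pathologic response"
    if viable_tumor_pct < 10:
        return "major pathologic response (MPR)"
    return "no major pathologic response"

for pct in (0, 4, 35):
    print(f"{pct}% viable tumor -> {pathologic_response(pct)}")
```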
In preliminary analyses, the PathAI research platform was able to identify and quantify tissue- and cell-level features in digitized whole slide images of hematoxylin and eosin (H&E)- stained biopsies and resections. ML model comparison of the TME composition pre- and post-treatment revealed quantifiable changes in histopathologic features in response to atezolizumab treatment. Furthermore, even in subjects that did not achieve a major pathologic response, these early results suggested that there may be a reduction in tumor tissue after treatment. If confirmed, this result would correlate well with other outcomes from this primary analysis that showed a significant increase in CD3+/PD1+ T cells after atezolizumab treatment, and that the presence of this cell type in the TME before treatment was associated with an observed MPR.
The data presented at WCLC highlight the potential for AI-powered pathology to reveal the architecture of the TME with the granularity necessary to understand the effect of anti-cancer agents and the biology underlying a treatment response. As treatment options become increasingly personalized, developing robust and accurate quantitative measurements of the TME and of pathologic response to treatment will help enable oncologists to provide patients with appropriate care.
About PathAI
PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit http://www.pathai.com.
FBI Wants Machine Learning Tools To Track Mobile Messaging Sites – Bloomberg Government
The Federal Bureau of Investigation is seeking software and expertise to monitor social media accounts and mobile messaging platforms for possible terrorist activity, according to procurement documents.
The FBI released a Jan. 21 request for information under the ambiguous title "Information Technology Services." Attached to the listing is a document outlining the bureau's interest in software tools utilizing machine-learning models to assist FBI agents in analyzing large, open-source data sets. The document specifies social media and mobile messaging platforms as online channels where investigators seek to upgrade their intelligence-gathering capabilities.
The contract will support the FBI's Counterterrorism Advanced Projects Unit (CTAPU), established to supply high-tech support for investigations within the U.S. and abroad, according to the document. The CTAPU's work may involve attempts to exploit mobile messaging and social media platforms and analyze seized counterterrorism and counterintelligence digital media for clues into future threats.
To be considered for the contract, potential bidders must be qualified small businesses possessing expertise with machine learning and open-source data. They must also have experience guiding software projects through the complete research and development lifecycle. Interested vendors have until Feb. 9 to respond to the solicitation.
The announcement comes two weeks after rioters supporting former President Donald J. Trump ransacked the U.S. Capitol and clashed with law enforcement officers, leaving five dead. Federal agencies are aggressively pursuing investigations into individuals suspected of vandalizing the Capitol and assaulting police officers. The FBI also continues to investigate individuals suspected of planting explosive devices at the headquarters of the Democratic and Republican parties.
In the days following the riot, social media platforms Twitter Inc. and Facebook Inc. cracked down on alleged disinformation and suspended hundreds of accounts, including that of former President Trump. Days later, Amazon Web Services Inc. voided its IT infrastructure contract with the social media site Parler, citing Parler's failure to police extremist content on its platform.
In response, conservative activists and pro-Trump online communities are migrating to alternative social media sites like Gab.com or mobile messaging applications, such as the Dubai-based Telegram, according to a Jan. 11 Bloomberg report. Fragmentation of right-wing online media poses a challenge for law enforcement efforts to identify criminal suspects from the Jan. 6 riot, and to piece together clues warning of future violence.
There are currently few institutional restrictions on the FBI's ability to review public social media posts in the course of investigations. But private messaging applications pose potential legal and technical obstacles. FBI guidelines prohibit agents from attempting to infiltrate closed online chats without first demonstrating evidence of criminal activity. Use of end-to-end encryption by services like Telegram and Signal further constrains the bureau's intelligence-gathering abilities.
The procurement coincides with President Joe Biden's first steps to confront what he has called domestic terrorism. On Jan. 21, the same day the RFI was released, the White House ordered federal intelligence and law enforcement agencies, including the FBI and the Department of Homeland Security, to perform a comprehensive threat assessment on domestic extremism.
The FBI did not respond to Bloomberg Government's request for comment.
To contact the analyst on this story: Chris Cornillie in Washington at ccornillie@bgov.com
To contact the editors responsible for this story: Michael Clark at mclark@ic.bloombergindustry.com
Predictions for 2021 – Machine Learning will become Ubiquitous – Four Things We Need to Do Now – CXOToday.com
It wasn't too long ago that concepts like communicating with your friends in real time through text, or accessing your bank account information, all from a mobile device, seemed outside the realm of possibility. Today, thanks in large part to the cloud, these actions are so commonplace that we hardly even think about the incredible processes behind them. And as we enter the golden age of machine learning, we can expect a similar boom of benefits that previously seemed impossible.
Machine learning is already helping companies make better and faster decisions. In healthcare, the use of predictive models created with machine learning is accelerating research and discovery of new drugs and treatment regimens. In other industries, it's helping remote villages of Southeast Africa gain access to financial services, and matching individuals experiencing homelessness with housing.
In the short term, we're encouraged by the applications of machine learning already benefiting our world. But it has the potential to have an even greater impact on our society. In the future, machine learning will be intertwined with, and under the hood of, almost every application, business process, and end-user experience. However, before this technology becomes so ubiquitous that it's almost boring, there are four key barriers to adoption we need to clear first.
Democratizing machine learning
The only way that machine learning will truly scale is if we as an industry make it easier for everyone, regardless of skill level or resources, to incorporate this sophisticated technology into applications and business processes.
To achieve this, companies should take advantage of tools that have intelligence directly built into applications that their entire organization can benefit from. Take, for example, Kabbage, a data and technology company providing small business cash flow solutions, which used artificial intelligence to quickly adapt and help process an unprecedented number of small business loans and unemployment claims caused by COVID-19, preserving more than 945,000 jobs in America. By folding artificial intelligence into personalization, document processing, enterprise search, contact center intelligence, supply chain management, or fraud detection, all workers can benefit from machine learning in a frictionless way.
As processes go from being manual to automatic, workers are free to innovate and invent, and companies are empowered to be proactive instead of reactive. And as this technology becomes more intuitive and accessible, it can be applied to nearly every problem imaginable, from the toughest challenges in the IT department to the biggest environmental issues in the world.
Upskilling workers
According to the World Economic Forum, the growth of AI could create 58 million net new jobs in the next few years. However, research suggests that there are currently only 300,000 AI engineers worldwide, and AI-related job postings outnumber job searches three to one, with a widening divergence. Given this significant gap, organizations need to recognize that they simply aren't going to be able to hire all the data scientists they need as they continue to implement machine learning into their work. Moreover, this pace of innovation will open doors and ultimately create jobs we can't even begin to imagine today.
That's why companies around the world like Morningstar, Liberty Mutual, DBS Bank, and others are finding innovative ways to encourage their employees to gain new machine learning skills with a fun, interactive, hands-on approach. It's critical that organizations not only direct their efforts towards training their existing workforce in machine learning skills, but also invest in training programs that develop these important skills in the workforce of tomorrow.
Instilling trust in products
With anything new, people are often of two minds: either an emerging technology is a panacea and global savior, or it is a destructive force with cataclysmic tendencies. The reality, more often than not, is nuanced and somewhere in the middle. These disparate perspectives can be reconciled with information, transparency, and trust.
As a first step, leaders in the industry need to help companies and communities learn about machine learning, how it works, where it can be applied, ways to use it responsibly, and understand what it is not.
Second, in order to gain faith in machine learning products, they need to be built by diverse groups of people across gender, race, age, national origin, sexual orientation, disability, culture, and education. We will all benefit from individuals who bring varying backgrounds, ideas, and points of view to inventing new machine learning products.
Third, machine learning services should be rigorously tested, measuring accuracy against third-party benchmarks. Benchmarks should be established by academia as well as governments, and be applied to any machine learning-based service, creating a rubric for reliable results as well as contextualizing results for use cases.
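In form, benchmarking an ML service can be as simple as scoring its predictions against a held-out reference set; everything in this sketch (the service wrapper, items, and labels) is hypothetical:

```python
from sklearn.metrics import accuracy_score

# Hypothetical stand-ins: a third-party benchmark set and a service wrapper
benchmark_items = ["item-1", "item-2", "item-3"]   # held-out benchmark inputs
benchmark_labels = ["cat", "dog", "cat"]           # reference answers

def service_predict(item: str) -> str:
    return "cat"  # placeholder for a call to the ML service under test

predictions = [service_predict(x) for x in benchmark_items]
print("benchmark accuracy:", accuracy_score(benchmark_labels, predictions))
```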
Regulation of machine learning
Finally, as a society, we need to agree on what parameters should be put in place governing how and when machine learning can be used. With any new technology, there has to be a balance in protecting civil rights while also allowing for continued innovation and practical application of the technology.
Any organization working with machine learning technology should be engaging customers, researchers, academics, and others to weigh the benefits of its machine learning technology against the potential risks. And they should be in active conversation with policymakers, supporting legislation, and creating their own guidelines for the responsible use of machine learning technology. Transparency, open dialogue, and constant evaluation must always be prioritized to ensure that machine learning is applied appropriately and is continuously enhanced.
What's next
Through machine learning we've already accomplished so much, and yet, it's still day one (and we haven't even had a cup of coffee yet!). If we're using machine learning to help endangered orangutans, just imagine how it could be used to help save and preserve our oceans and marine life. If we're using this technology to create digital snapshots of the planet's forests in real time, imagine how it could be used to predict and prevent forest fires. If machine learning can be used to help connect small-holder farmers to the people and resources they need to achieve their economic potential, imagine how it could help end world hunger.
To achieve this reality, we as an industry have a lot of work ahead of us. I'm incredibly optimistic that machine learning will help us solve some of the world's toughest challenges and create amazing end-user experiences we've never even dreamt of. Before we know it, machine learning will be as familiar as reaching for our phones.
(The author is Swami Sivasubramanian, Vice-President, Amazon Machine Learning, AWS (Amazon Web Services), and the views expressed in this article are his own)
Guilty or Not Guilty: Weight of Evidence – Machine Learning Times – The Predictive Analytics Times
By: Sam Koslowsky, Senior Analytic Consultant, Harte Hanks

You have been invited to serve as a juror in a criminal case. After hearing testimony, the presiding judge offers a summary of the proceeding. "Evaluate the evidence," he declares. "Whether it was an eyewitness account, an affidavit, an image, or a recording, it is your responsibility to assess what was heard. Although I cannot tell you how to weigh the evidence, it is your responsibility to select the significant aspects of the case. Consider the value of the information that you have examined, weigh the evidence, and formulate your verdict accordingly." While the concept of weight
Machine learning-based PRAISE score may aid in the prediction of adverse events following an acute coronary syndrome – 2 Minute Medicine
1. The PRAISE score showed accurate discriminative capabilities for the prediction of all-cause death, acute myocardial infarction, and major bleeding after an acute coronary syndrome.
2. Compared with low-risk stratification, a high-risk PRAISE score was associated with a 58.8-times increase in death, a 27.7-times increase in myocardial infarction, and a 32.7-times increase in major bleeding events.
Evidence Rating Level: 2 (Good)
Study Rundown: Patients with acute coronary syndrome (ACS) are at an increased risk for ischemic and bleeding events. Although several predictive tools have been developed to predict adverse events, the accuracy of these scores remains modest. It is believed that machine learning may overcome some of the limitations of current analytical approaches. This study aimed to develop and validate a machine learning-based risk stratification model to predict all-cause death, recurrent acute myocardial infarction, and major bleeding after ACS. Four machine learning models were developed to predict the occurrence of each of the three outcomes one year after discharge. According to study results, the PRAISE score showed accurate discriminative abilities for the prediction of all three outcomes, even when externally validated. Specifically, the risk of myocardial infarction was greater than the risk of major bleeding among patients classified by the PRAISE score as being high risk for myocardial infarction. In contrast, the risk of myocardial infarction was lower than the risk of major bleeding among patients classified as being low risk for infarction. This study was limited by the retrospective design of the registries used to compose the derivation cohort; a prospective design could have increased the validity of the PRAISE model. Overall, this study showed that a machine learning-based approach may accurately predict the occurrence of adverse events and aid in optimizing care for patients following an ACS.
Click to read the study in The Lancet
Relevant Reading: Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs
In-depth [prospective cohort]: Patients for the derivation cohort were obtained from two registries: the BleeMACS registry (comprising 15,401 patients with ACS at 15 tertiary hospitals in America, Europe, and Asia) and the RENAMI registry (comprising 4,425 patients admitted at 12 European hospitals). Altogether, the derivation cohort consisted of 19,826 patients (≥18 years) with ACS and 1 year of follow-up data. The derivation cohort was split into two groups: a training cohort (80%) and an internal validation cohort (20%). To assess the performance of the PRAISE score, an external validation cohort of 3,444 adult patients admitted to the hospital with ACS with 2 years of follow-up was used.
Area under the receiver operating characteristic curves (AUCs) of the PRAISE model (training and internal validation cohort) for all three outcomes were similar to the external validation cohort. In the internal validation cohort, AUCs for 1-year all-cause death, 1-year myocardial infarction, and 1-year major bleeding were 0.82 (95% confidence interval [CI], 0.78-0.85), 0.74 (95% CI, 0.70-0.78), and 0.70 (95% CI, 0.66-0.75), respectively. For the external validation cohort, AUCs for 1-year all-cause death, 1-year myocardial infarction, and 1-year major bleeding were 0.92 (95% CI, 0.90-0.93), 0.81 (95% CI, 0.76-0.85), and 0.86 (95% CI, 0.82-0.89), respectively. Compared to the low-risk group, being in the high-risk group increased the risk of death by 58.8 times, myocardial infarction by 27.7 times, and major bleeding events by 32.7 times. While predictors varied based on study outcomes, left ventricular ejection fraction (LVEF), age, hemoglobin level, and statin therapy were important predictors for 1-year all-cause death. Meanwhile, hemoglobin level, age, LVEF, and estimated glomerular filtration rate (EGFR) were important predictors of both 1-year myocardial infarction and major bleeding risk. Findings from this study show that a machine learning-based approach for the identification of predictors of events after an ACS is effective and may help guide clinical decision making.
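For readers less familiar with the metric, the AUCs reported above measure discrimination: the probability that a randomly chosen patient who had the event is ranked as higher risk than one who did not. A toy computation (invented labels and scores, not PRAISE data) looks like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Invented stand-ins: 1 = event occurred (e.g. 1-year all-cause death)
y_true = rng.integers(0, 2, size=1000)
# A risk score that is noisily higher for event cases
risk_score = y_true * 0.3 + rng.normal(0.5, 0.25, size=1000)

# AUC: probability a random event case is scored above a random non-event case
print("AUC:", round(roc_auc_score(y_true, risk_score), 2))
```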
Why Should Python Be Used in Machine Learning? – Analytics Insight
Machine learning is essentially making a computer perform a task without explicitly programming it. In today's world, every system that performs well has a machine learning algorithm at its heart. Machine learning is currently one of the hottest topics in the industry, and companies have been racing to incorporate it into their products, particularly applications.
According to Forbes, machine learning patents grew at a 34% rate between 2013 and 2017, and this is only set to increase in the future. Furthermore, Python is the primary programming language used for much of the research and development in machine learning; so much so that Python is the top programming language for machine learning according to GitHub.
Machine learning isn't just used in the IT industry. It also plays an important role in advertising, banking, transport, and numerous other businesses. This technology is constantly advancing, and as a result, it is steadily gaining ground in new fields where it has become an integral part.
Python is a high-level, general-purpose programming language. Besides being open source, Python is an interpreted, object-oriented, and interactive programming language. Python combines remarkable power with clear syntax. It has modules, classes, exceptions, high-level dynamic data types, and dynamic typing. There are interfaces to many system calls and libraries, as well as to various windowing systems.
Easy and Fast Data Validation
The job of machine learning is to identify patterns in data. An ML engineer is responsible for harnessing, refining, processing, cleaning, organizing, and deriving insights from data to create intelligent algorithms. While topics such as linear algebra and calculus can be complex and demand real effort, Python is easy, and it can be executed rapidly, which allows ML engineers to validate an idea immediately.
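For example, a hypothesis can be sanity-checked in a handful of lines; the price/demand numbers below are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented numbers: does price appear to drive demand, and how strongly?
price = np.array([[1.0], [1.5], [2.0], [2.5], [3.0]])
demand = np.array([100, 86, 75, 59, 48])

model = LinearRegression().fit(price, demand)
print("slope:", model.coef_[0], "R^2:", round(model.score(price, demand), 3))
```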
Different Libraries and Frameworks
Python is already very popular and thus has many different libraries and frameworks that engineers can use. These libraries and frameworks save a great deal of time, which makes Python even more popular.
Code Readability
Since machine learning involves a genuine tangle of mathematics, sometimes difficult and unobvious, the readability of the code (including that of outside libraries) is significant if we want to succeed. Developers should think not about how to write, but rather about what to write.
Python developers are keen on writing code that is easy to read. Moreover, this particular language is very strict about proper indentation. Another of Python's advantages is its multi-paradigm nature, which again enables engineers to be more flexible and approach problems in the simplest way possible.
Low-entry Barrier
There is a global shortage of software engineers. Python is an easy language to learn, so the entry barrier is low. What does this mean? That more data scientists can become experts quickly and thus get involved in ML projects. Python reads much like English, which makes it easier to learn, and thanks to its simple syntax, you can confidently work with complex systems.
Portable and Extensible
This is a significant reason why Python is so mainstream in machine learning. Many cross-language tasks can be performed easily in Python thanks to its portable and extensible nature. Many data scientists prefer using graphics processing units (GPUs) to train their ML models on their own machines, and Python's portable nature is well suited to this.