Category Archives: Machine Learning
Think Fast! Using Machine Learning Approaches to Identify Predictors of Adverse Pregnancy Outcomes in SLE – Rheumatology Network
Unfortunately, a substantial portion of pregnancies in lupus patients are complicated by an adverse pregnancy outcome (APO). This can include preterm delivery, intrauterine growth restriction and foetal mortality. Given the high prevalence of APOs in this group, there has been considerable interest in predicting those at the greatest risk of negative outcomes, to permit enhanced observation and intervention in these patients. The EUREKA algorithm was developed to predict obstetric risk in patients with different subsets of antiphospholipid antibodies and generated significant discourse.1 More recently, machine learning (ML) methodology has been applied by Fazzari et al to a large observational cohort (PROMISSE) to identify additional predictors of APO.2
The PROMISSE cohort enrolled 385 pregnant women with mild to moderate SLE, with and without antiphospholipid antibody positivity, and collected data on pregnancy outcomes between 2003 and 2013 from nine North American sites. Exclusion criteria included a daily prednisone dose >20 mg, a urinary protein:creatinine ratio (PCR) >1000, serum creatinine >1.2 mg/dL, type 1 or 2 diabetes mellitus, and systemic hypertension.
Previous work in this cohort has linked increased levels of the complement activation products Bb and sC5b-9 to higher rates of APOs.3 More recently, Fazzari et al have applied several ML approaches to the PROMISSE cohort and compared these to logistic regression modelling to identify predictors of APO in SLE patients.2
Several approaches were trialled, including least absolute shrinkage and selection operator (LASSO), random forest, neural networks, support vector machines with a radial basis function kernel (SVM-RBF), gradient boosting, and SuperLearner. These were compared via the area under the receiver operating characteristic curve (AUROC). Forty-one predictors assessed during routine care of patients with SLE were used to build the models.
Fazzari et al identified several risk factors for APO, including high disease activity, lupus anticoagulant positivity, thrombocytopenia, and antihypertensive use. When comparing AUROCs, SuperLearner had the numerically highest area under the curve (AUC, 0.78), but this was not significantly different from LASSO, SVM-RBF, or random forest (AUC 0.77 in each case).
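To make the model comparison concrete, here is a minimal, purely illustrative sketch of benchmarking several classifier families by cross-validated AUROC in scikit-learn. The synthetic data, class balance, and model settings are hypothetical stand-ins, not the PROMISSE data or the exact configurations used by Fazzari et al, and SuperLearner (an R ensemble package) is not included.

```python
# Illustrative only: synthetic data shaped loosely like the cohort described above
# (385 patients, 41 routine clinical predictors, ~18% APO rate); none of this is PROMISSE data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=385, n_features=41, weights=[0.82, 0.18], random_state=0)

models = {
    "LASSO-penalised logistic regression": LogisticRegressionCV(
        penalty="l1", solver="liblinear", Cs=10, max_iter=5000
    ),
    "Random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf", probability=True, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    # Five-fold cross-validated AUROC, the comparison metric used in the paper.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUROC = {auc:.2f}")
```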
Weaknesses of the PROMISSE cohort are its exclusion of SLE patients with high disease activity and those with a blood pressure >140/90 mmHg. Additionally, the proportion of patients with APOs within the cohort was low (18.4%), likely in part because of the stringent exclusion criteria. A recent retrospective Portuguese study, which did not exclude patients with high disease activity or lupus nephritis, identified a far higher rate of APO (41.4%) in its SLE cohort.4 Indeed, Ntali et al recently demonstrated reduced APOs in SLE patients with low disease activity in a prospective observational study.5 Application of these models in a higher disease burden cohort would therefore be desirable.
This work demonstrates the utility of ML in aiding clinical risk stratification within a complex patient cohort. The utilization of standard clinical variables and comparison of several ML techniques are substantial strengths of this work. However, further validation in external cohorts is desirable. The application of ML methodology in risk stratification within SLE may provide better clarity in a heterogeneous patient cohort. Additionally, in the future, similar methodological approaches could be trialled across the autoimmune connective tissue disease spectrum to provide better prognostic information to patients at diagnosis, irrespective of their diagnostic label.
Machine learning tool could help people in rough situations make sure their water is good to drink – ZME Science
Imagine for a moment that you don't know if your water is safe to drink. It may be, it may not be; just trying to visualize that situation brings a great deal of discomfort, doesn't it? That's the situation 2.2 billion people find themselves in on a regular basis.
Chlorine can help with that. At the right dose, chlorine kills pathogens in drinking water and makes it safe to drink. But it's not always easy to estimate the optimum amount. Putting chlorine into a piped water distribution system is one thing; chlorinating water in a tank that people then carry home in containers is another, because that water is more prone to recontamination and needs more chlorine. But how much? The problem gets even more complicated because chlorine also decays if the water sits for too long.
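For intuition, chlorine loss during storage is often approximated with a simple first-order decay curve. The sketch below illustrates only that textbook idea, not the model developed in the study; the starting concentration and decay constant are assumed values.

```python
# Textbook first-order decay, C(t) = C0 * exp(-k * t); values below are assumptions
# chosen for illustration, not parameters from the study.
import math

C0 = 1.0  # free residual chlorine right after treatment, mg/L (assumed)
k = 0.10  # decay constant per hour (assumed; in practice it depends on temperature,
          # water quality and the storage container)

for hours in (0, 6, 12, 24):
    frc = C0 * math.exp(-k * hours)
    print(f"after {hours:2d} h of storage: FRC ~ {frc:.2f} mg/L")
```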
This is particularly a problem in refugee camps, many of which suffer from a severe water crisis.
"Ensuring sufficient free residual chlorine (FRC) up to the time and place water is consumed in refugee settlements is essential for preventing the spread of waterborne illnesses," write the authors of the new study. "Water system operators need accurate forecasts of FRC during the household storage period. However, factors that drive FRC decay after the water leaves the piped distribution system vary substantially, introducing significant uncertainty when modeling point-of-consumption FRC."
To estimate the right amount of FRC, a team of researchers from York University's Lassonde School of Engineering used a machine learning algorithm to model chlorine decay.
They focused on refugee camps, which often face problems with drinking water, and collected 2,130 water samples in Bangladesh from June to December 2019, noting the level of chlorine and how it decayed. The algorithm was then used to develop probabilistic forecasts of how safe the water is to drink.
AI is particularly good at this type of problem: deriving statistical likelihoods of events from a known data set. In fact, the team combined AI with methods routinely used for weather forecasting. You input parameters such as the local temperature, water quality, and the condition of the pipes, and the model forecasts how safe the water will be to drink at a certain moment. It estimates how likely the chlorine is to be at a given level and outputs a range of probabilities rather than a single value, which the researchers say allows water operators to plan more effectively.
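A common way to produce this kind of range rather than a single number is quantile regression. The sketch below is a generic illustration of that idea using gradient boosting; the input features, the simulated decay relationship, and the numbers are hypothetical and are not the study's model or data.

```python
# Hypothetical probabilistic FRC forecast via quantile gradient boosting.
# Everything here (features, decay relationship, noise) is simulated for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.2, 2.0, n),   # FRC at the tapstand, mg/L
    rng.uniform(20, 35, n),     # water temperature, deg C
    rng.uniform(1, 24, n),      # household storage time, hours
])
# Simulated point-of-consumption FRC: faster decay when warmer, plus noise.
y = X[:, 0] * np.exp(-0.05 * X[:, 2] * (1 + 0.02 * (X[:, 1] - 25))) + rng.normal(0, 0.05, n)

# Fit one model per quantile to get a predictive range instead of a point estimate.
quantiles = [0.1, 0.5, 0.9]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, y)
          for q in quantiles}

household = np.array([[1.0, 30.0, 12.0]])  # 1.0 mg/L at the tap, 30 C, 12 h of storage
low, mid, high = (models[q].predict(household)[0] for q in quantiles)
print(f"forecast FRC: {mid:.2f} mg/L (80% interval {low:.2f} to {high:.2f} mg/L)")
```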
These techniques can enable humanitarian responders to ensure sufficient FRC more reliably at the point-of-consumption, thereby preventing the spread of waterborne illnesses.
It's not the first time AI has been used to try to help the world's less fortunate. In fact, many in the field believe that's where AI can make the most difference. Raj Reddy, one of the pioneers of AI, recently spoke at the Heidelberg Laureate Forum, explaining that he's most interested in AI being used for the world's least fortunate people, noting that this type of technology can move the plateau and improve the lives of the people who need it most.
According to a World Bank analysis, machine learning can be useful in helping developing countries rebuild after the pandemic, with software solutions such as AI helping countries overcome existing infrastructure gaps more quickly and efficiently. However, other studies suggest that without policy intervention, AI risks exacerbating economic inequality instead of bridging it.
No doubt, the technology has the ability to solve real problems where its needed most. But more research such as this is needed to find how AI can address specific challenges.
The study has been published in PLoS Water.
RBI plans to extensively use artificial intelligence, machine learning to improve regulatory supervision – ETCIO
The Reserve Bank is planning to extensively use advanced analytics, artificial intelligence and machine learning to analyse its huge database and improve regulatory supervision on banks and NBFCs.
For this purpose, the central bank is also looking to hire external experts.
While the RBI is already using AI and ML in supervisory processes, it now intends to upscale it to ensure that the benefits of advanced analytics can accrue to the Department of Supervision in the central bank.
The supervisory jurisdiction of the RBI extends over banks, urban cooperative banks (UCB), NBFCs, payment banks, small finance banks, local area banks, credit information companies and select all India financial institutions.
It undertakes continuous supervision of such entities with the help of on-site inspections and off-site monitoring.
The central bank has floated an expression of interest (EoI) for engaging consultants in the use of Advanced Analytics, Artificial Intelligence and Machine Learning for generating supervisory inputs.
"Taking note of the global supervisory applications of AI & ML applications, this Project has been conceived for use of Advance Analytics and AI/ML to expand analysis of huge data repository with RBI and externally, through the engagement of external experts, which is expected to greatly enhance the effectiveness and sharpness of supervision," it said.
Among other things, the selected consultant will be required to explore and profile data with a supervisory focus.
The objective is to enhance the data-driven surveillance capabilities of the Reserve Bank, the EoI said.
Most of these techniques are still exploratory; however, they are rapidly gaining popularity and scale.
On the data collection side, AI and ML technologies are used for real-time data reporting, effective data management and dissemination.
For data analytics, these are being used for monitoring supervised firm-specific risks, including liquidity risks, market risks, credit exposures and concentration risks; misconduct analysis; and mis-selling of products.
Artificial intelligence may improve suicide prevention in the future – EurekAlert
The loss of any life can be devastating, but the loss of a life from suicide is especially tragic.
Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15-44. Suicide attempts are more common, with some estimates stating that they occur up to 30 times as often as deaths.
"Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends and communities," says Karen Kusuma, a UNSW Sydney PhD candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.
Ms Kusuma and a team of researchers from the Black Dog Institute and the Centre for Big Data Research in Health recently investigated the evidence base of machine learning models and their ability to predict future suicidal behaviours and thoughts. They evaluated the performance of 54 machine learning algorithms previously developed by researchers to predict suicide-related outcomes of ideation, attempt and death.
The meta-analysis, published in the Journal of Psychiatric Research, found that machine learning models outperformed traditional risk prediction models, which have historically performed poorly at predicting suicide-related outcomes.
"Overall, the findings show there is a preliminary but compelling evidence base that machine learning can be used to predict future suicide-related outcomes with very good performance," Ms Kusuma says.
Identifying individuals at risk of suicide is essential for preventing and managing suicidal behaviours. However, risk prediction is difficult.
In emergency departments (EDs), risk assessment tools such as questionnaires and rating scales are commonly used by clinicians to identify patients at elevated risk of suicide. However, evidence suggests they are ineffective in accurately predicting suicide risk in practice.
"While there are some common factors shown to be associated with suicide attempts, what the risks look like for one person may look very different in another," Ms Kusuma says. "But suicide is complex, with many dynamic factors that make it difficult to assess a risk profile using this assessment process."
A post-mortem analysis of people who died by suicide in Queensland found that, of those who received a formal suicide risk assessment, 75 per cent were classified as low risk, and none was classified as high risk. Previous research examining the past 50 years of quantitative suicide risk prediction models also found they were only slightly better than chance in predicting future suicide risk.
"Suicide is a leading cause of years of life lost in many parts of the world, including Australia. But the way suicide risk assessment is done hasn't developed recently, and we haven't seen substantial decreases in suicide deaths. In some years, we've seen increases," Ms Kusuma says.
Despite the shortage of evidence in favour of traditional suicide risk assessments, their administration remains a standard practice in healthcare settings to determine a patient's level of care and support. Those identified as having a high risk typically receive the highest level of care, while those identified as low risk are discharged.
"Using this approach, unfortunately, the high-level interventions aren't being given to the people who really need help. So we must look to reform the process and explore ways we can improve suicide prevention," Ms Kusuma says.
Ms Kusuma says there is a need for more innovation in suicidology and a re-evaluation of standard suicide risk prediction models. Efforts to improve risk prediction have led to her research using artificial intelligence (AI) to develop suicide risk algorithms.
"Having AI that could take in a lot more data than a clinician would be able to better recognise which patterns are associated with suicide risk," Ms Kusuma says.
In the meta-analysis study, machine learning models outperformed the benchmarks set previously by traditional clinical, theoretical and statistical suicide risk prediction models. They correctly predicted 66 per cent of people who would experience a suicide outcome and correctly predicted 87 per cent of people who would not experience a suicide outcome.
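Those two figures are simply the pooled sensitivity and specificity of the models. As a quick illustration of what they mean in practice, the sketch below applies them to an invented cohort; only the 66% and 87% come from the meta-analysis, and the counts are hypothetical.

```python
# Hypothetical confusion-matrix counts chosen so that sensitivity = 66% and
# specificity = 87%; only those two percentages come from the meta-analysis.
def summarize(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)  # share of people with a suicide-related outcome correctly flagged
    specificity = tn / (tn + fp)  # share of people without an outcome correctly cleared
    return sensitivity, specificity

sens, spec = summarize(tp=66, fn=34, tn=783, fp=117)  # invented cohort of 1,000 people
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```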
"Machine learning models can predict suicide deaths well relative to traditional prediction models and could become an efficient and effective alternative to conventional risk assessments," Ms Kusuma says.
The strict assumptions of traditional statistical models do not bind machine learning models. Instead, they can be flexibly applied to large datasets to model complex relationships between many risk factors and suicidal outcomes. They can also incorporate responsive data sources, including social media, to identify peaks of suicide risk and flag times where interventions are most needed.
"Over time, machine learning models could be configured to take in more complex and larger data to better identify patterns associated with suicide risk," Ms Kusuma says.
The use of machine learning algorithms to predict suicide-related outcomes is still an emerging research area, with 80 per cent of the identified studies published in the past five years. Ms Kusuma says future research will also help address the risk of aggregation bias found in algorithmic models to date.
"More research is necessary to improve and validate these algorithms, which will then help progress the application of machine learning in suicidology," Ms Kusuma says. "While we're still a way off implementation in a clinical setting, research suggests this is a promising avenue for improving suicide risk screening accuracy in the future."
Journal: Journal of Psychiatric Research
Method of Research: Meta-analysis
Subject of Research: People
Article Title: The performance of machine learning models in predicting suicidal ideation, attempts, and deaths: A meta-analysis and systematic review
Article Publication Date: 29-Sep-2022
COI Statement: The authors declare no conflict of interest.
Machine vision breakthrough: This device can see ‘millions of colors’ – Northeastern University
An interdisciplinary team of researchers at Northeastern has built a device that can recognize millions of colors using new artificial intelligence techniques, a massive step, they say, in the field of machine vision, a highly specialized space with broad applications for a range of technologies.
The machine, which researchers call A-Eye, is capable of analyzing and processing color far more accurately than existing machines, according to a paper detailing the research published in Materials Today. The ability of machines to detect, or "see," color is an increasingly important feature as industry and society more broadly become more automated, says Swastik Kar, associate professor of physics at Northeastern and co-author of the research.
"In the world of automation, shapes and colors are the most commonly used items by which a machine can recognize objects," Kar says.
The breakthrough is twofold. Researchers were able to engineer two-dimensional material whose special quantum properties, when built into an optical window used to let light into the machine, can process a rich diversity of color with very high accuracy, something practitioners in the field haven't been able to achieve before.
Additionally, A-Eye is able to accurately recognize and reproduce seen colors with zero deviation from their original spectra, thanks also to the machine-learning algorithms developed by a team of AI researchers helmed by Sarah Ostadabbas, an assistant professor of electrical and computer engineering at Northeastern. The project is a result of a unique collaboration between Northeastern's quantum materials and Augmented Cognition labs.
The essence of the technological discovery centers on the quantum and optical properties of the class of material, called transition metal dichalcogenides. Researchers have long hailed the unique materials as having virtually unlimited potential, with many electronic, optoelectronic, sensing and energy storage applications.
"This is about what happens to light when it passes through quantum matter," Kar says. "When we grow these materials on a certain surface, and then allow light to pass through that, what comes out of this other end, when it falls on a sensor, is an electrical signal which then [Ostadabbas's] group can treat as data."
As it relates to machine vision, there are numerous industrial applications for this research tied to, among other things, autonomous vehicles, agricultural sorting and remote satellite imaging, Kar says.
"Color is used as one of the principle components in recognizing good from bad, go from no-go, so there's a huge implication here for a variety of industrial uses," Kar says.
Machines typically recognize color by breaking it down, using conventional RGB (red, green, blue) filters, into its constituent components, then use that information to essentially guess at, and reproduce, the original color. When you point a digital camera at a colored object and take a photo, the light from that object flows through a set of detectors with filters in front of them that differentiate the light into those primary RGB colors.
"You can think about these color filters as funnels that channel the visual information or data into separate boxes, which then assign artificial numbers to natural colors," Kar says.
"So if you're just breaking it down into three components [red, green, blue], there are some limitations," Kar says.
Instead of using filters, Kar and his team used transmissive windows made of the unique two-dimensional material.
"We are making a machine recognize color in a very different way," Kar says. "Instead of breaking it down into its principal red, green and blue components, when a colored light appears, say, on a detector, instead of just seeking those components, we are using the entire spectral information. And on top of that, we are using some techniques to modify and encode them, and store them in different ways. So it provides us with a set of numbers that help us recognize the original color much more uniquely than the conventional way."
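The contrast Kar describes, classifying from a full spectral measurement instead of three RGB values, can be sketched generically as below. The simulated Gaussian reflectance spectra, the 60 spectral bins, and the small neural network are illustrative assumptions; they are not A-Eye's hardware, encoding scheme, or training data.

```python
# Illustrative only: classify colors from simulated full spectra (60 bins) rather than
# 3 RGB channel values. Nothing here reproduces the A-Eye device or its algorithms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 60)  # 60 spectral bins across the visible range

def spectra(center_nm: float, n: int) -> np.ndarray:
    """Simulate n noisy reflectance spectra peaked at a given wavelength."""
    base = np.exp(-((wavelengths - center_nm) ** 2) / (2 * 25.0 ** 2))
    return base + rng.normal(0, 0.02, size=(n, wavelengths.size))

colors = {"blue": 460, "green": 540, "red": 640}
X = np.vstack([spectra(peak, 200) for peak in colors.values()])
y = np.repeat(list(colors.keys()), 200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(spectra(545, 1)))  # a spectrum peaked near 545 nm should classify as 'green'
```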
As the light passes through these windows, the machine processes the color as data; built into it are machine learning models that look for patterns in order to better identify the corresponding colors the device analyzes, Ostadabbas says.
A-Eye can continuously improve color estimation by adding any corrected guesses to its training database, the researchers wrote.
Davoud Hejazi, a Northeastern physics Ph.D. student, contributed to the research.
Redapt Has Earned the AI and Machine Learning on Microsoft Azure Advanced Specialization – goskagit.com
Tuskegee University professor receives fellowship award to minimize bias in the AI field – Tuskegee University
October 05, 2022
Contact: Thonnia Lee, Office of Communications, Public Relations and Marketing
Dr. Chitra Nayak, associate professor in Tuskegee University's College of Arts and Sciences department of physics, was recently awarded a $50,000 leadership fellowship to help increase researcher participation in artificial intelligence.
The National Institutes of Health's Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program awarded the grant to focus on underrepresented communities using AI/ML to achieve health equity through mutually beneficial partnerships. The goal of the fellowship is to prepare tomorrow's leaders to champion the use of AI/ML in addressing persistent health disparities.
Dr. Nayak's research will investigate the bias that can arise in some deep learning algorithms due to discrimination in the algorithm and/or a lack of training data sets representing different races. The project is titled "Investigation of the Spatial Transcriptomic Deep Learning Algorithms using Histological Images for Possible Bias Depending on the Training Data Sets."
"I would like to thank my colleagues, Drs. Channa Prakash, Clayton Yates, and Qazi as they each have been instrumental in the program's introduction and the funding," said Dr. Nayak. "The work I have been doing with Dr. Yates trying to use AI to train tissue classifiers helped me immensely while preparing the application."
In addition to the year-long fellowship, Dr. Nayak will have access to Consortium Cores, targeted training, and courses specific to AI/ML and Health Equity education. She will benefit from workshops and seminars on leadership principles, strategies, and case examples.
Dr. Nayak is a scholar in the Health Disparity Research Education Award Certificate Program. She expects to leverage her interest in artificial intelligence and machine learning and her role as an educator to attract students from underrepresented communities to AI, minimizing the existing bias in the field.
Heard on the Street 10/6/2022 – insideBIGDATA
Welcome to insideBIGDATA's "Heard on the Street" round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
AI for Good: Computer Vision in health, wildfire prevention, and conservation. Commentary by Prashant Natarajan, VP of Strategy and Products at H2O.ai
Deep learning and advances in interpretability and explainability are bringing computer vision to life. The advances in deep learning, coupled with multi-cloud compute and storage, are allowing business and technical teams to create value more rapidly than before. These CV models, scorers, and end-to-end AI pipelines are resulting in higher accuracy, faster experimentation, and quicker paths to outcomes and business value. The development of pre-trained computer vision models is another consequence of these innovations, and augurs well for the ability to predict, scale, and deploy solutions for wildfire prevention and management; wildlife conservation; medical imaging; and disease management. For example, data scientists are able to use satellite imaging, along with a mix of historical and current climatic data and human habitation trends, to predict and manage wildfires. These solutions, with multi-modal AI and computer vision at their foundation, are being used by responders, local leaders, businesses, and the community to help save lives, prevent damage to property, and reduce the impact on natural resources. Computer vision is also aiding in wildlife conservation. The vast variety and volumes of selfies and tourist photos, in addition to images from wildlife cameras and tracking reports, are being put to work with computer vision. AI is used to identify individuals, groups, and the migration patterns and habits of animals in danger of extinction. Scientists and local communities are deploying these solutions today to better protect habitats and keep a watch on threats to their ecosystems. The most impactful uses of computer vision are in health and wellness. Given the challenges with data and the acute pressures faced by clinicians, computer vision on medical images is more needed than before COVID. Health systems, pharma companies, and public sector health programs are looking for new outcomes and value from AI in areas as diverse as abnormality detection, disease progression prediction, and clinician-supporting tools in precision medicine and population health.
Shrinkflation isn't the Long-Term Answer, Decision Intelligence is. Commentary from Peak's Go-To-Market Director Ira Dubinsky
With decades-high inflation wreaking havoc on supply chains, brands are hopping on the shrinkflation bandwagon, reducing the weight, size or quantity of their product to avoid raising prices. While this is not a new tactic, shrinkflation is a sure-fire way to steer customers toward the competition, with 49% of consumers opting for a different brand in response. It's a short-term tactic for a long-term problem, as shrinkflation doesn't drastically impact transportation routes, packaging or other fixed overhead costs, just the quantities of raw materials per item. That means it doesn't have the restorative impact on margins that companies are hoping for. Instead, brands should be responding with operational changes. Typically, we see internal data silos dividing demand generation and demand fulfillment data, and a lack of visibility across the two is more detrimental to margins than rising production costs. Demand data can be leveraged to improve forecasting and increase the efficiency of fulfillment; it's a relatively easy short-term modification every brand should be implementing. However, Decision Intelligence (DI), the commercial application of AI to the decision-making process, is a long-term fix that takes this one step further. Brands already have immense amounts of rich data at their fingertips, and DI is capable of helping them uncover who their customers are, where they are, and what they're buying. This insight can help CPGs in several ways, including reducing waste, eliminating unnecessary costs, and ensuring that the right products are in the right place at the right time. This ultimately allows brands to make incremental efficiencies across a constrained supply chain, keeping customers coming back and putting themselves in a position to combat inflation without shrinking their products.
Programmers Day. Commentary by Daniel Hutley, The Open Group Architecture Forum Director
A significant number of businesses are adopting digital into their operations as technology becomes increasingly ingrained in everyday life. This Programmers Day, it's crucial for businesses to understand the importance of Agile programming teams and not to underestimate the role these teams play. Agile teams are the key drivers of the enterprise's digital transformation, through inventing new business models, developing and maintaining software, and architecting highly automated operating systems. While Agile adoption has been growing rapidly over the last two years and being implemented as a strategic priority at pace, communication and implementation of Agile methodologies will need to be rethought and reformulated for operations at scale. Implementing standards such as The Open Group Open Agile Architecture (O-AA) Standard is one of the ways businesses can do this, because formal coordination plays an essential role in ensuring that positive holistic outcomes derive from decision making. In this way, programming teams can bring Agile into their operations, increasing the ability to operate efficiently within individual teams and the wider business. As we progress further into the digital age, and enterprises encourage integration of Agile practices into business and programming operations, the focus for Agile moving forward will be on how enterprises are making Agile part of their business's DNA, not just something that is practiced by individual teams. Finally, and most importantly, having a standardized Agile business-led approach means that digital enterprises will be more equipped to work together efficiently, from programming teams to wider operations, to embrace new technologies and advance operations, which is essential for the technology-enabled future.
Conversational AI is the Key to Improved Customer Experience. Commentary by Sanjay Macwan, Chief Information Officer and Chief Information Security Officer, Vonage
The world has undergone a tremendous digital transformation in the past few years, and this will continue to evolve and accelerate as businesses adopt technologies like artificial intelligence (AI) to enhance the customer journey. AI technologies, whether speech-to-text, text-to-speech, or deriving context via natural language processing/understanding (NLP/NLU), can be key to driving improved customer communications and enhanced engagement. Often when people think of the term conversational AI, they imagine traditional chatbots. While some chatbots leverage AI, not all do. Conversational AI is a technology that powers additional communication solutions beyond just chatbots and turns those transactions and interactions into conversations. It works by combining technologies like NLP and automated speech recognition (ASR) to improve the nature of interactions with customers, to better understand their questions or needs, and to address them quickly: automated, contextual, and with a human-like touch. This allows customers to get quicker solutions to problems and frees human agents from the tedious task of continuously answering common questions. Traditionally leveraged in contact centers, conversational AI can also be used to improve company websites, retail and e-commerce platforms, and the ways companies collect and analyze data, and to enhance IT support, driving deeper engagement and further improving customer and user experience all around.
In Sales, AI-Assisted Practice Makes Perfect. Commentary by Kevin Beales, VP & GM of Allego
Of the 75% of organizations that provided practice opportunities to employees during the pandemic, 90% found it effective. Artificial intelligence is providing new opportunities for sellers to practice. In sales, like in every industry, greatness isn't an innate, unattainable skill for the select few. Great sellers are built through proper training, as long as this training is followed up with practice and coaching to make it stick. This is where the rubber meets the road for many sales teams. Managers have large, dispersed teams and lack time to provide the hands-on guidance their reps so desperately need. That's where technology, such as artificial intelligence (AI), can help. Conversation intelligence (CI), powered by AI, records and analyzes reps' practice sessions to help pinpoint where improvement is needed. AI tracks against topics and activities of top performers, providing timely feedback to sellers and identifying personalized coaching opportunities for managers. Instead of having to listen to hours of recorded sales conversations, sales managers can consult the data and quickly target moments where a call has gone well, or not. AI has significant potential to improve sales activities and outcomes as it becomes increasingly sophisticated and applicable to new use cases. For example, sales teams can now simulate conversations with AI-powered virtual actors to allow them to practice and receive feedback. AI can also automatically record meeting notes so sales reps don't miss a single customer interaction. With AI, sales teams can more successfully navigate the modern sales landscape and improve sales readiness.
World Engineer Day. Commentary by Peter Vetere, Senior Software Engineer at Singlestore
The computing field is vast and constantly changing, so it can seem quite intimidating if you're just starting out. However, programming is all about breaking problems into smaller pieces. So, my advice is to start with a simple goal or problem in mind, even if you've never written a line of code in your life. I've found that it's much easier to learn new concepts and technologies if there is a concrete use case to work from, however contrived it might be. Let's say you want to try printing the numbers 1-10 on the screen. This seems simple, but there is a surprising amount of thought that goes into it. For example, "the screen" can mean different things depending on the context. Do you want your output to go to a web browser, a command-line window or somewhere else? Do some research on popular programming languages and what they are used for. Python and JavaScript are common first choices. Once you've picked a goal and a language to use, find some beginner tutorials on programming in that language. Good tutorials will often teach you enough to do something useful in the first few pages. As you read through them, think about how you can use what you are learning to achieve the goal you set out for yourself. This kind of self-motivated, active curiosity is fundamental to being a successful engineer. Don't be afraid to search for hints or ask questions in online forums, to engage with teachers or friends, or to join clubs. Given some time, patience and dedication, you'll eventually accomplish your goal or solve your problem. It's tremendously satisfying when you do. If you enjoyed the experience, think about some ways you can enhance your program. It will engage your creative mind and will naturally lead you down a path of more learning. After a while, you'll start to get a feel for what you do and don't like about programming. You don't need to know everything; nobody does. Pursue your interests and find the fun in it.
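For what it's worth, the "print the numbers 1-10" starter goal Vetere mentions takes only a couple of lines in Python, one of the first languages he names; this is just an illustration of that first step.

```python
# The beginner goal described above: print the numbers 1 through 10 to the command line.
for n in range(1, 11):  # range(1, 11) yields 1, 2, ..., 10
    print(n)
```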
Borderless Data: What Makes for a Sensible Data Regulation Policy. Commentary by Chris McLellan, director of Operations, Data Collaboration Alliance
Think of it as another clash between an irresistible force and an immovable object. On one side, we have data that's powering an ever-increasing number of personal and business applications while being copied at scale with little regard for consent forms, data governance policies, or jurisdictional borders; on the other, we have increasingly strict data protection regulations, some to guard personal information, others driven by economic and nationalistic interests. It's an escalating tension that is putting a severe strain on innovation, compliance, and international commerce. In this maelstrom, neither side will give... but both sides should. Turning a blind eye to silos and copies, the root causes of the inability to actually control data and protect its value, is setting the bar really low. The bottom line is that if we want to square the circle of digital innovation and meaningful data protection for citizens and organizations, we need to accelerate technologies, protocols, and standards that prioritize the control of data by eliminating silos and copies. For example, the Zero-Copy Integration framework, which is set to become a national and international standard, provides technologists with a framework that is fundamentally rooted in control. There are also zero-copy technologies like dataware that enable new digital solutions to be built quickly and without copy-based data integration. As we move forward, we may also need to consider enshrining the control and ownership of data as a human right. But whatever the mix of technological, regulatory, and legal approaches we employ, the control of data through the elimination of silos and copies needs to be recognized by all stakeholders as the only logical starting point.
Satellite boom brings big data challenge. Commentary by Dr. Mike Flaxman (Ph.D., Harvard), Product Manager at HEAVY.AI
Commercial satellite deployments continue to boom. Last year saw a record number of commercial satellites launched into space, with a 44% increase over 2020. These satellite deployments are driving a new wave of data growth. But organizations are now facing an unprecedented challenge in managing and analyzing this unique geospatial data at such incredible scale. Satellites collect massive, complex, heavy datasets. While organizations now have the infrastructure to store and transport this heavy data, they lack a way to reliably analyze it at scale. To do that effectively, they'll have to harness GPUs and CPUs in parallel to deliver the speed and visualization capabilities needed to map and learn from this data. Satellite geospatial data will support critical use cases, from predicting wildfires and measuring climate change to improving 5G services, but organizations will have to find new tech to properly wield it.
IT Professionals Day. Commentary by Carl D'Halluin, chief technology officer at Datadobi
IT Professionals Day is a reminder of the value IT pros provide to virtually every enterprise organization. They do so much to maintain organizations' infrastructure, and without them, companies would struggle to operate day-to-day. In addition to recognizing their accomplishments on IT Professionals Day, we should also make sure we're providing them with the necessary tools to make their work easier every day of the week. This includes tools they can use to automate routine tasks and tackle some of the biggest issues facing enterprises today, including cost, reducing carbon footprint, and minimizing business risk. By giving IT professionals purpose-built solutions they can use to get the most out of their unstructured data and manage its exponential growth, a huge burden is lifted off their shoulders so they can focus on revenue-generating tasks.
How modern innovations like AI are necessary for the future of NAC. Commentary by Jeff Aaron, VP Enterprise Marketing, Juniper Networks
NAC (Network Access Control), still often a go-to solution to protect enterprise networks, was created at a very different time, before the widespread use of laptops, BYOD solutions and IoT devices. Since then, the number of devices accessing a network has grown exponentially, as has the complexity of deployments. Given their complex nature, cost and lack of operational flexibility, traditional NAC solutions have struggled to keep up with the demands of the modern digital era. As a result, the NAC industry is turning to Artificial Intelligence (AI). AI, when used in concert with a modern cloud architecture, is a natural solution to help improve the deployment and operations of NAC by simplifying and improving processes and unlocking new use cases. With AI-enabled NAC, for example, networks can automatically identify and categorize users based on what it knows about them from past connections and proactively allocate the appropriate resources and policies to optimize user experiences on an ongoing basis. AI can also improve the efficacy of NAC by rapidly analyzing everything from the initial login to a user's behavior across the network to flag any suspicious behavior. Furthermore, if a device has been compromised, it can be identified immediately, with proper actions taken proactively, such as quarantining a user at the edge of the network. This minimizes the impact of exposure substantially. Lastly, AI-enabled NAC can flag devices that are having trouble connecting to the network and take corrective actions before users even know a problem exists. This type of self-healing network operation maximizes user connectivity while minimizing troubleshooting and help desk costs. Current trends have created new demands across networks, particularly with respect to network access control. By integrating the automation and insights that come from AI with the simplicity and scale of the cloud, next generation NAC solutions promise to deliver far more functionality than traditional solutions at lower costs. The need for NAC is as strong as ever, as is the need to transition to new AI-based NAC solutions designed for the modern digital era.
AI and ML Shortcomings in Cybersecurity. Commentary by InsightCyber CEO and founder, Francis Cianfrocca
Some well-known cybersecurity providers have attempted to offer ML and AI solutions, but the ML has been focused on limited vectors. This is a problem, as any security operations center (SOC) analyst will tell you, and this type of AI and ML offering is of limited use. Security professionals' hands are already tied as they're still dealing with the challenge of false positive alerts yielded by AI. When the AI activity is restricted, it poses even more of a problem and will not be used effectively. Along with false positives, AI also generates false negatives, which are the goal of advanced persistent threats (APTs) and what nation-state attackers rely on. So, the human analyst is busy determining what is typical for an environment and what is abnormal to weed out the false positives and negatives and customize alerts to identify real threats early on. Lack of detection is still a significant problem in the world of cyber-attacks, which is true for the most sophisticated types of breaches and for the more traditional methods of attack (i.e., phishing or misrepresentation). That is why, now more than ever, AI/ML applications need to be fine-tuned and enhanced by human experts, resulting in cybersecurity that genuinely works.
How Automated Machine Learning Helps Marketers Market Smarter. Commentary by Steve Zisk, Senior Product Marketing Manager, Redpoint Global
Automated machine learning (AML) has many business use cases, including the ability to educate marketers on how to better understand a customer journey. If you have large amounts of data that cannot simply be calculated, aggregated or averaged, you likely have a very good business use case for AML, especially if the data needs to be scaled. As with other data-intensive activities, ML results will depend on the quality and breadth of the data you feed it, so missing or low-quality input about a customer will result in building poor models. There are effectively two classes of models that marketers should have a basic understanding of in order to be smarter with AML: unsupervised and supervised. For unsupervised models, a user provides data and asks the model to find patterns within that data, making it useful for audience selection or segmentation. Rather than an artificial, manual segmentation based on intuition or arbitrary cut-offs (e.g., age or income), an unsupervised clustering model segments based on what the data dictates is important, interesting or unique about a particular audience. Supervised models, on the other hand, use historical data to build a model that will sort through the variables to find a good predictor for those results, whether it's customer lifetime value, churn or another measurable behavior or attitude. These models can be built, tested, and optimized by familiar marketing techniques like A/B testing and control audiences. AML's ability to give marketers a way to ask better questions and make better decisions will ultimately help them understand their customers more. The more they know about a customer, the more they can design relevant messages and offers to improve the customer experience.
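As a generic illustration of the two model classes Zisk describes, the sketch below segments hypothetical customers with k-means (unsupervised) and fits a churn classifier on the same features (supervised). The features, labels, and models are invented stand-ins and are not tied to any particular AML product.

```python
# Hypothetical recency/frequency/monetary features for 1,000 customers; all data is simulated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(1, 365, 1000),   # recency: days since last purchase
    rng.poisson(6, 1000),         # frequency: orders per year
    rng.gamma(2.0, 80.0, 1000),   # monetary: annual spend
])

# Unsupervised: let the data define the segments rather than arbitrary cut-offs.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Supervised: predict a measurable outcome (simulated churn flags) from the same features.
churned = (rng.random(1000) < 0.2).astype(int)
churn_model = GradientBoostingClassifier(random_state=0).fit(X, churned)

print("segment of first 5 customers:", segments[:5])
print("churn probability of first 3:", churn_model.predict_proba(X[:3])[:, 1].round(2))
```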
Why subscription models still have hope. Commentary by Dr Vinod Vasudevan, CEO of Flytxt
Inflation fears present major challenges for businesses, especially subscription-based models, and maintaining a solid customer base is imperative not only for growth, but for survival. More emphasis should be placed on customer lifetime value, and increasing the value of existing customers is a great way to drive growth. For businesses that see this as an important metric to maintain, investing in AI and data-driven analytics will ultimately lead to better measurement of progress and a stronger focus on outcomes.
New School Year, New AI & Machine Learning Tools for Students. Commentary by Brainly's CTO, Bill Salak
If you work in AI & ML, it's not a surprise when I say there's considerable lead time needed to build new applications. With that in mind, and recognizing that the school experience for the 2022-2023 school year will look very different from 2021-2022, I predict we'll see a frenzy of applications built for last year's problems being repurposed and reimagined for this year's problems. What are those problems? Two of the biggest in the news now are teacher shortages and students entering a new school year who are unprepared to move on to the new year of curriculum. In terms of specific applications of AI/ML, I predict we'll see more AI/ML teacher-assistance capabilities plugged into Learning Management Systems and Learning Experience Platforms, smarter virtual classroom and tutoring experiences, and lots of imaginative uses of GPT-3 and, in general, NLP applications pushing boundaries. As we get further along in the school year, expect to see more new products being released targeted at helping classrooms do more with fewer teachers and helping teachers and students close the pandemic-induced setbacks in academic achievement and a spectrum of learning gaps across the student population.
Why Asset Intelligence Is Vital to Improve Data-driven Process Automation. Commentary by Arthur Lozinski, CEO and Co-Founder at Oomnitza
Data is often thought of as a commodity; the value lies in the use of data. Take business process automation for IT. In order to streamline processes to realize operational, security, and financial efficiency, organizations require accurate data correlation across endpoints, applications, network infrastructure, and cloud. With increasing amounts of fragmented data collected every day from tens of thousands of technologies, it's no wonder enterprises are experiencing technology management blind spots. With increasing hybrid workplace, multi-cloud, and digital business growth, a recent survey found that 76% of enterprises are using multiple systems to obtain inventory data and 45% have wasted expenditures on software licenses and cloud services. This is where multi-source asset data correlation comes into play as an essential component of task automation. How can organizations develop effective workflows if they are triggered by siloed IT management tool data that lacks both accuracy and broader operational context? Organizations need to consolidate, normalize, and then process data from their siloed tools such as device management, SaaS, SSO, cloud, healthcare management systems, and purchasing. With a unified system of record for enterprise technology management, organizations will then be able to create and mature workflows with predictable outcomes and greater efficiencies. Applying this to a complex business process such as automated secure offboarding, organizations will be able to streamline tasks based on accurate employee, manager, department, location, resource access, and device data, from separation to recovery, ensuring complete deprovisioning, workspace transfer, license repurposing, archiving, and asset reclamation. When it comes to key business process automation for IT, asset intelligence is foundational.
Calculating the Value of Product Costs. Commentary by Asim Razzaq, co-founder and CEO of Yotascale
As a former engineer, I've seen firsthand how difficult it can be for product teams to determine which products are performing and which need help. For those who may be struggling, here are a few tips: (i) to calculate accurate profit margins or revenue figures for the products or services being built for end users, cost information should be broken down by product or engineering; (ii) engineering needs to see cost at the most granular level of operations, where apps are being built and run; (iii) granular visibility can enable quick identification of cost-inflating anomalies at the level where they are most likely to occur and can help engineering teams determine which products have the highest value and should receive the most time, money, and resources; (iv) C-suite and engineering teams need to work together to identify the accurate value of product costs and avoid falling victim to simplistic thinking that can blur the truth about a product's true value.
Learning on the edge | MIT News | Massachusetts Institute of Technology – MIT News
Microcontrollers, miniature computers that can run simple commands, are the basis for billions of connected devices, from internet-of-things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it challenging to train artificial intelligence models on edge devices that work independently from central computing resources.
Training a machine-learning model on an intelligent edge device allows it to adapt to new data and make better predictions. For instance, training a model on a smart keyboard could enable the keyboard to continually learn from the user's writing. However, the training process requires so much memory that it is typically done using powerful computers at a data center, before the model is deployed on a device. This is more costly and raises privacy issues since user data must be sent to a central server.
To address this problem, researchers at MIT and the MIT-IBM Watson AI Lab developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use more than 500 megabytes of memory, greatly exceeding the 256-kilobyte capacity of most microcontrollers (there are 1,024 kilobytes in one megabyte).
The intelligent algorithms and framework the researchers developed reduce the amount of computation required to train a model, which makes the process faster and more memory efficient. Their technique can be used to train a machine-learning model on a microcontroller in a matter of minutes.
This technique also preserves privacy by keeping data on the device, which could be especially beneficial when data are sensitive, such as in medical applications. It also could enable customization of a model based on the needs of users. Moreover, the framework preserves or improves the accuracy of the model when compared to other training approaches.
"Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices," says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.
Joining Han on the paper are co-lead authors and EECS PhD students Ji Lin and Ligeng Zhu, as well as MIT postdocs Wei-Ming Chen and Wei-Chen Wang, and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems.
Han and his team previously addressed the memory and computational bottlenecks that exist when trying to run machine-learning models on tiny edge devices, as part of their TinyML initiative.
Lightweight training
A common type of machine-learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to complete a task, such as recognizing people in photos. The model must be trained first, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, which are known as weights.
The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. In a neural network, an activation is a middle layer's intermediate result. "Because there may be millions of weights and activations, training a model requires much more memory than running a pre-trained model," Han explains.
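To make the scale of that difference concrete, the rough PyTorch sketch below (not the authors' code; the toy model and input size are hypothetical) compares the memory held by a tiny model's weights with the memory held by the activations of a single forward pass, which training must keep around for backpropagation:

```python
import torch
import torch.nn as nn

# Hypothetical toy model (not the paper's architecture): two small conv
# layers followed by global pooling and a tiny classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

# Memory taken by the weights themselves.
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Memory taken by the intermediate activations of one forward pass
# (a rough estimate that ignores gradients and optimizer state).
activation_bytes = 0
def count_activation(module, inputs, output):
    global activation_bytes
    activation_bytes += output.numel() * output.element_size()

hooks = [m.register_forward_hook(count_activation) for m in model]

x = torch.randn(1, 3, 128, 128)   # one 128x128 RGB input
model(x)
for h in hooks:
    h.remove()

print(f"weights    : {param_bytes / 1024:.1f} KB")       # a few KB
print(f"activations: {activation_bytes / 1024:.1f} KB")  # hundreds of times more
```

Even for this tiny model, the activations dwarf the weights, which is why inference fits on a microcontroller while training normally does not.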
Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory-intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update at each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip to a set threshold, then it stops. The remaining weights are updated, while the activations corresponding to the frozen weights don't need to be stored in memory.
"Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. For our method, we selectively update those important weights and make sure the accuracy is fully preserved," Han says.
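The sketch below is a coarse, per-tensor approximation of that idea in PyTorch, not the paper's sparse-update scheme (which is more fine-grained); the `evaluate` callback is hypothetical and stands in for a brief fine-tune-and-validate step. Parameters are frozen greedily until accuracy dips past a tolerance, and whatever remains unfrozen is what gets updated:

```python
import torch.nn as nn

def freeze_until_accuracy_drops(model: nn.Module, evaluate, baseline_acc, tol=0.01):
    """Greedy layer-freezing sketch. `evaluate` is a hypothetical callback
    that briefly fine-tunes the model and returns validation accuracy."""
    frozen = []
    for name, param in model.named_parameters():
        param.requires_grad = False      # a frozen weight gets no gradient, so the
        frozen.append(name)              # activations needed for it can be discarded
        if evaluate(model) < baseline_acc - tol:
            param.requires_grad = True   # accuracy dipped below the threshold:
            frozen.pop()                 # undo the last freeze and stop
            break
    return frozen                        # names of weights excluded from updates
```

Whatever stays trainable after this loop is the "important" subset; everything else contributes neither gradients nor stored activations during training.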
Their second solution involves quantized training and simplifying the weights, which are typically 32 bits. An algorithm rounds the weights so they are only eight bits, through a process known as quantization, which cuts the amount of memory for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. Then the algorithm applies a technique called quantization-aware scaling (QAS), which acts like a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
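As a rough illustration of the storage side of quantized training (this is not the paper's quantization-aware scaling, which additionally rescales weights and gradients to keep their ratio stable), the sketch below rounds a 32-bit weight tensor to 8-bit values plus a single scale factor:

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric 8-bit 'fake' quantization of a weight tensor."""
    scale = (w.abs().max() / 127.0).item()           # map the weight range onto int8
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale                                  # int8 storage: 4x smaller than float32

def dequantize(q: torch.Tensor, scale: float):
    return q.float() * scale                         # approximate values used for compute

w = torch.randn(256)                                 # hypothetical 32-bit weight tensor
q, scale = quantize_int8(w)
print("max rounding error:", (w - dequantize(q, scale)).abs().max().item())
```

The rounding error shown at the end is exactly the accuracy risk that a correction such as QAS is designed to compensate for during training.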
The researchers developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of steps in the training process so more work is completed in the compilation stage, before the model is deployed on the edge device.
"We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time. We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less workload to do on the device," Han explains.
A successful speedup
Their optimization only required 157 kilobytes of memory to train a machine-learning model on a microcontroller, whereas other techniques designed for lightweight training would still need between 300 and 600 megabytes.
They tested their framework by training a computer vision model to detect people in images. After only 10 minutes of training, it learned to complete the task successfully. Their method was able to train a model more than 20 times faster than other approaches.
Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and different types of data, such as time-series data. At the same time, they want to use what they've learned to shrink the size of larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine-learning models.
"AI model adaptation/training on a device, especially on embedded controllers, is an open challenge. This research from MIT has not only successfully demonstrated the capabilities, but also opened up new possibilities for privacy-preserving device personalization in real time," says Nilesh Jain, a principal engineer at Intel who was not involved with this work. "Innovations in the publication have broader applicability and will ignite new systems-algorithm co-design research."
"On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han's group has shown great progress in demonstrating the effectiveness of edge devices for training," adds Jilei Hou, vice president and head of AI research at Qualcomm. "Qualcomm has awarded his team an Innovation Fellowship for further innovation and advancement in this area."
This work is funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.
View original post here:
Learning on the edge | MIT News | Massachusetts Institute of Technology - MIT News
Study: Few randomized clinical trials have been conducted for healthcare machine learning tools – Mobihealth News
A review of studies published in JAMA Network Open found few randomized clinical trials for medical machine learning algorithms, and researchers noted quality issues in many published trials they analyzed.
The review included 41 RCTs of machine learning interventions. It found 39% were published just last year, and more than half were conducted at single sites. Fifteen trials took place in the U.S., while 13 were conducted in China. Six studies were conducted in multiple countries.
Only 11 trials collected race and ethnicity data. Of those, a median of 21% of participants belonged to underrepresented minority groups.
None of the trials fully adhered to the Consolidated Standards of Reporting Trials Artificial Intelligence (CONSORT-AI), a set of guidelines developed for clinical trials evaluating medical interventions that include AI. Thirteen trials met at least eight of the 11 CONSORT-AI criteria.
Researchers noted some common reasons trials didn't meet these standards, including not assessing poor quality or unavailable input data, not analyzing performance errors and not including information about code or algorithm availability.
Using the Cochrane Risk of Bias tool for assessing potential bias in RCTs, the study also found that the overall risk of bias was high in seven of the clinical trials.
"This systematic review found that despite the large number of medical machine learning-based algorithms in development, few RCTs for these technologies have been conducted. Among published RCTs, there was high variability in adherence to reporting standards and risk of bias and a lack of participants from underrepresented minority groups. These findings merit attention and should be considered in future RCT design and reporting," the study's authors wrote.
WHY IT MATTERS
The researchers said there were some limitations to their review. They looked only at studies evaluating a machine learning tool that directly impacted clinical decision-making, so future research could look at a broader range of interventions, like those for workflow efficiency or patient stratification. The review also only assessed studies through October 2021, and more reviews will be necessary as new machine learning interventions are developed and studied.
However, the study's authors said their review demonstrated that more high-quality RCTs of healthcare machine learning algorithms need to be conducted. While hundreds of machine learning-enabled devices have been approved by the FDA, the review suggests the vast majority were not supported by an RCT.
"It is not practical to formally assess every potential iteration of a new technology through an RCT (eg, a machine learning algorithm used in a hospital system and then used for the same clinical scenario in another geographic location)," the researchers wrote.
"A baseline RCT of an intervention's efficacy would help to establish whether a new tool provides clinical utility and value. This baseline assessment could be followed by retrospective or prospective external validation studies to demonstrate how an intervention'sefficacy generalizes over time and across clinical settings."
Read the original:
Study: Few randomized clinical trials have been conducted for healthcare machine learning tools - Mobihealth News