Category Archives: Machine Learning
Announcing LityxIQ 6.0 – Powering Predictive Business Decisions … – PR Newswire
Lityx makes its leading alternative AI and MLOps platform an easier way to deliver value for organizations focused on digital transformation
WILMINGTON, Del., April 25, 2023 /PRNewswire/ -- Lityx, LLC today announced the release of LityxIQ 6.0, the first AutoML platform to combine machine learning with mathematical optimization in a single, cloud-hosted, no-code platform. A fully integrated enterprise decision engine, LityxIQ 6.0 extends a proven track record of success delivering rapid predictive and prescriptive insights, and simplifies model development, management, deployment, and monitoring to genuinely democratize advanced analytics for organizations of any size.
"Lityx combines a guided Customer Success strategy with our best-in-class LityxIQ platform to get analytics capabilities in the hands of anyone who uses data insights to make critical business decisions," said Paul Maiste, Ph.D., Lityx CEO and president. "LityxIQ is built by data scientists for analysts and statisticians, alike, to accelerate advanced analytics success to days or weeks versus months or years. Plus, LityxIQ provides immediate value to business leaders by making insights easy to understand for arriving at the best decisions faster, at a price to meet any organization's budget."
Lityx next-gen machine learning powers predictive business decisions, making digital transformation easier and more affordable.
LityxIQ 6.0 users get enhanced MLOps functionality that streamlines machine learning development and production, ensuring that models remain robust, reliable and scalable. Additionally, through available solution accelerators, LityxIQ 6.0 makes the path from data to insights even faster.
"The platform has included essential tools for managing the end-to-end data lifecycle since our launch, and LityxIQ 6.0 makes decision intelligence even easier through additional data automation and a refreshed interface for a world-class user experience," said Dr. Maiste.
Industries achieving success through LityxIQ include global manufacturing, healthcare, financial services, media and advertising, and more.
Notable enhancements in LityxIQ 6.0 include automated model monitoring, enhanced model performance analysis and comparisons, and additional model exploration tools such as customer engagement profitability optimization and threshold and cost optimization.
About Lityx: Wilmington, Del.-based Lityx, LLC is a software and services company focused on building and deploying advanced analytics and decision intelligence solutions. Founded in 2006, Lityx develops LityxIQ, a cloud-based software-as-a-service, to help business and technical users easily leverage the power of advanced analytics and mathematical optimization to achieve deeper insights and increased ROI rapidly. Lityx delivers LityxIQ 6.0 directly or through a global network of services partners. For more information, visit http://www.lityx.com.
SOURCE Lityx LLC
Link:
Announcing LityxIQ 6.0 - Powering Predictive Business Decisions ... - PR Newswire
Thermal Cameras and Machine Learning Combine to Snoop Out … – Tom’s Hardware
Researchers at the University of Glasgow have published a paper that highlights their so-called ThermoSecure implementation for discovering passwords and PINs. The name ThermoSecure provides a clue to the underlying methodology, as the researchers are using a mix of thermal imaging technology and AI to reveal passwords from input devices like keyboards, touchpads, and even touch screens.
Before looking at the underlying techniques and technology, it's worth highlighting how impressive ThermoSecure is for uncovering password inputs. During tests, the research paper states: "ThermoSecure successfully attacks 6-symbol, 8-symbol, 12-symbol, and 16-symbol passwords with an average accuracy of 92%, 80%, 71%, and 55% respectively." Moreover, these results were from relatively cold evidence, and the paper adds that "even higher accuracy [is achieved] when thermal images are taken within 30 seconds."
How does ThermoSecure work? The system needs a thermal camera, which has become a much more affordable item in recent years; a usable device may cost only $150, according to the research paper. On the AI software side, the system uses an object detection technique based on Mask RCNN that maps the (thermal) image to keys. Across three phases, variables like keyboard localization are considered, then key entry and multi-press detection are undertaken, and finally the order of the key presses is determined by algorithms. Overall it appears to work well, as the results suggest.
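To make the detection phase concrete, here is a minimal, hypothetical Python sketch of that kind of object-detection step, using torchvision's off-the-shelf Mask R-CNN as a stand-in for the researchers' custom-trained model (their thermal-specific weights and pipeline are not public here, and the input file name is illustrative):

```python
# Sketch: flag warm regions in a thermal capture with a generic Mask R-CNN.
# ThermoSecure trains its own model on thermal keyboard images; this stand-in
# only illustrates the detect-then-order structure described in the article.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("thermal_capture.png").convert("RGB")  # hypothetical input
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Keep confident detections; each box is a candidate pressed key.
boxes = pred["boxes"][pred["scores"] > 0.8]
print(f"{len(boxes)} candidate key regions detected")

# The final phase would order the presses, e.g., by mean temperature inside
# each box (the most recent press should leave the warmest residue).
```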
With the above thermal attack looking like quite a viable option for hackers to spy out passwords, PINs, and so on, what can be done to mitigate the ThermoSecure threat? We've gathered the main factors that can impact the success of a thermal attack.
Input factors: Users can be more secure by using longer passwords and typing faster. "Users who are hunt-and-peck typists are particularly vulnerable to thermal attacks," note the researchers.
Interface factors: The thermodynamic properties of the input device material are important. If a hacker can image the input device within 30 seconds of use, it helps a lot. Keyboard enthusiasts will also probably be interested to know that ABS keycaps retained touch heat signatures much longer than PBT keycaps.
Erase activity: The heat emitted from backlit keyboards helps disguise the heat traces left by human interaction with the keyboard. A cautious person could also touch keys without actuating them, or stay at the input area for at least a minute after entering a username or password.
Go passwordless: Even the best passwords are embarrassingly insecure compared to alternative authentication methods such as biometrics.
In summary, the accuracy of these thermal attacks is surprisingly high, even some time after the user has moved away from the keyboard or keypad. It is worrying, but no more so than the other surveillance and skimming techniques that are already widespread. The best defense against these kinds of password and PIN guessing methods appears to be the move to biometrics and/or multi-factor authentication. Preventing unauthorized access to your device in the first place (i.e. not leaving your laptop or phone unattended), especially right after typing in your PIN or password, will also help thwart attackers.
Go here to see the original:
Thermal Cameras and Machine Learning Combine to Snoop Out ... - Tom's Hardware
How ChatGPT might help your family doctor and other emerging health trends – Toronto Star
Health innovation in Canada has always been strong, but the sector is now experiencing growth at a pace we haven't seen before.
While COVID-19 helped accelerate change, new technologies like OpenAI's ChatGPT are also having an impact. Plus, Canadian companies are leveraging machine learning to develop new therapies, diagnostics and patient platforms.
"There's a lot of really interesting drivers out there for innovation," says Jacki Jenuth, partner and chief operating officer at Lumira Ventures. "We're starting to better define some of the underlying mechanisms and therapeutics approaches for diseases that up until now had no options, such as neurodegenerative diseases. And researchers are starting to define biomarkers to select patients more likely to respond in clinical settings; that's really good news."
Next week, the annual MaRS Impact Health conference will bring together health care professionals, entrepreneurs, investors, policymakers and other stakeholders. Here's a sneak preview of some of the emerging trends in the health care and life sciences space they'll be exploring.
"There's huge revenue opportunities in women's health," says Annie Thériault, managing partner at Cross-Border Impact Ventures. (Fryer, Tim)
Women's health funding isn't where it should be, says Annie Thériault, managing partner at Cross-Border Impact Ventures. Bayer recently announced it's stopping R&D for women's health to focus on other areas. Other pharmaceutical companies such as Merck have made similar decisions in recent years. "It's hard to imagine why groups are moving in that direction, because we're seeing huge revenue opportunities in these markets," says Thériault. "A lot of exciting things are happening."
One area that Thériault has been watching closely has been personalized medicine that uses artificial intelligence, machine learning or sophisticated algorithms to tailor treatment for women and children. For instance, there are tools that provide targeted cancer treatments that use gender as a key input. "In the past, that maybe wouldn't have been thought of as an important variable," she says.
In prenatal care, there are new tools related to diagnosing anomalies in pregnancies through data. "What we see in maternal health is a lot of inequalities," Thériault says. "But if the exam is performed with the same level of care, accuracy, and specificity, then analyzed through AI to spot problems, you can make positive health outcomes and hopefully a less unequal health system."
With the right protections and security measures, AI could help create efficiencies in health care, says Frank Rudzicz, a faculty member at the Vector Institute for Artificial Intelligence. (Fryer, Tim)
New technologies like ChatGPT have shown the potential of not just getting AI and machine learning to take large data sets and make sense of them, but also to create efficiencies when it comes to doing paperwork with that information.
"I always thought we'd get to this point, but I just didn't think we'd get here so soon, where we are talking about AI really changing the nature of jobs," says Frank Rudzicz, a faculty member at the Vector Institute for Artificial Intelligence. "And it's just getting started."
There are a lot of inefficiencies in health care that AI can help with. Doctors, for instance, spend up to half their time working on medical records and filling out forms. (A recent study from the Canadian Federation of Independent Business found that collectively they are spending some 18.5 million hours on unnecessary paperwork and administrative work each year, the equivalent of more than 55 million patient visits.) "That's not what they signed up for," he says. "They signed up to help people."
While people are becoming more comfortable with using technology to track and monitor their health, whether that be through smartwatches, smartphone apps or genetic testing, there aren't as many connection points for them to use that data with their family doctor. There is an opportunity, Rudzicz says, to use data and technologies such as machine learning, with proper guardrails and patient consent, to sync the data with your doctor's records to help with diagnosis and prescribing.
"Ultimately, doctors are trained professionals and they need to be the ones who make the diagnosis and come up with treatment plans with the patients," he says. "But once you get all the pieces together, the results could be more accurate and safer than they have been."
Plus, there are a lot of possible futures for technologies like ChatGPT in health care, such as automating repetitive tasks like filling out forms or writing requisitions and referral letters for doctors to review before submitting. "The barrier to entry for anything that will speed up your workflow is going to be very low and easily integrated," Rudzicz says.
While there's been a slowdown in venture capital investments, there's still funding to be found, says Jacki Jenuth, partner and chief operating officer at Lumira Ventures. (Fryer, Tim)
While there's been a slowdown in venture capital funding, with fewer dollars available as markets become more rational after the record highs of the last few years, there's still funding to be found, says Lumira's Jenuth. Management teams in the life sciences space just have to be more resourceful and explore all possible avenues of funding, including corporations, non-dilutive sources, foundations and disease-specific funders, she adds.
"It helps to build deep relationships with investors who want to make an impact in the health sectors," she says. "The pitch needs to be targeted for each one of these groups. You'll hear a lot of nos, so you need to be tenacious. It's not easy."
Discover more of the technologies and ideas that will transform health care at the MaRS Impact Health conference on May 3 and 4.
Disclaimer: This content was produced as part of a partnership and therefore it may not meet the standards of impartial or independent journalism.
More here:
How ChatGPT might help your family doctor and other emerging health trends - Toronto Star
Hydrogen’s Hidden Phase: Machine Learning Unlocks the Secrets of the Universe’s Most Abundant Element – SciTechDaily
Phases of solid hydrogen. The left is the well-studied hexagonal close packed phase, while the right is the new phase predicted by the authors' machine learning-informed simulations. Image by Wesley Moore. Credit: The Grainger College of Engineering at the University of Illinois Urbana-Champaign
Putting hydrogen on solid ground: simulations with a machine learning model predict a new phase of solid hydrogen.
A machine-learning technique developed by University of Illinois Urbana-Champaign researchers has revealed a previously undiscovered high-pressure solid hydrogen phase, offering insights into hydrogen's behavior under extreme conditions and the composition of gaseous planets like Jupiter and Saturn.
Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.
Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine-learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.
"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."
Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.
To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.
The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close-to-spherical and form a configuration called hexagonal close packed (Ceperley compared it to stacked oranges), the researchers observed a phase where the molecules become oblong figures (Ceperley described them as egg-like).
"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."
To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.
This work has started a conversation between Ceperleys collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogens behavior under extreme conditions.
Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's simplicity makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."
Reference: "Stable Solid Molecular Hydrogen above 900 K from a Machine-Learned Potential Trained with Diffusion Quantum Monte Carlo" by Hongwei Niu, Yubo Yang, Scott Jensen, Markus Holzmann, Carlo Pierleoni and David M. Ceperley, 17 February 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.076102
This work was done in collaboration with Markus Holzmann of Univ. Grenoble Alpes and Carlo Pierleoni of the University of L'Aquila. Ceperley's research group is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Computational Materials Sciences program under Award DE-SC0020177.
The rest is here:
Hydrogen's Hidden Phase: Machine Learning Unlocks the Secrets of the Universe's Most Abundant Element - SciTechDaily
Machine learning: As AI tools gain heft, the jobs that could be at stake – The Indian Express
Watch out for the man with the silicon chip
Hold on to your job with a good firm grip
'Cause if you don't you'll have had your chips
The same as my old man
Scottish revival singer-songwriter Ewan MacColl's 1986 track "My Old Man" was an ode to his father, an iron-moulder who faced an existential threat to his job because of the advent of technology. The lyrics could find some resonance nearly four decades on, as industry leaders and tech stalwarts predict that the advancement of large language models such as OpenAI's GPT-4, and their ability to write essays, code, and do maths with greater accuracy and consistency, heralds a fundamental tech shift, almost as significant as the creation of the integrated circuit, the personal computer, the web browser or the smartphone. But there still are question marks over how advanced chatbots could impact the job market. And if blue-collar work was the focus of MacColl's ballad, artificial intelligence (AI) models of the generative pre-trained transformer type signify a greater threat for white-collar workers, as more powerful word-predicting neural networks that carry out a series of operations on arrays of inputs end up producing output that is significantly humanlike. So, will this latest wave impact the current level of employment?
According to Goldman Sachs economists Joseph Briggs and Devesh Kodnani, the answer is a resounding yes, and they predict that as many as 300 million full-time jobs around the world are set to get automated, with workers replaced by machines or AI systems. What lends credence to this stark prediction is the new wave of AI, especially large language models, including neural networks such as Microsoft-backed OpenAI's ChatGPT.
The Goldman Sachs economists predict that such technology could bring significant disruption to the labour market, with lawyers, economists, writers, and administrative staff among those projected to be at greatest risk of becoming redundant. In a new report, "The Potentially Large Effects of Artificial Intelligence on Economic Growth," they calculate that approximately two-thirds of jobs in the US and Europe are set to be exposed to AI automation, to various degrees.
White-collar workers, and workers in advanced economies more generally, are projected to be at greater risk than blue-collar workers in developing countries. "The combination of significant labour cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labour productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer," the report said.
And OpenAI itself predicts that a vast majority of workers will have at least part of their jobs automated by GPT models. In a study published on the arXiv preprint server, researchers from OpenAI and the University of Pennsylvania said that 80 percent of the US workforce could have at least 10 percent of their tasks affected by the introduction of GPTs.
Central to these predictions is the way models such as ChatGPT get better with more usage. GPT stands for Generative Pre-trained Transformer and is a marker for how the platform works: it is pre-trained by human developers initially and then primed to learn for itself as more and more queries are posed to it by users. The OpenAI study also said that around 19 per cent of US workers will see at least 50 per cent of their tasks impacted, with the qualifier that GPT exposure is likely greater for higher-income jobs but spans almost all industries. These models, the OpenAI study said, will end up as general-purpose technologies like the steam engine or the printing press.
A January 2023 paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, explored the question of whether AI tools or humans were more effective at helping people lose weight. The authors conducted the first causal evaluation of the effectiveness of human vs. AI tools in helping consumers achieve their health outcomes in a real-world setting by comparing the weight loss outcomes achieved by users of a mobile app, some of whom used only an AI coach while others used a human coach as well.
Interestingly, while human coaches scored higher broadly, users with a higher BMI did not fare as well with a human coach as those who weighed less.
"The results of our analysis can extend beyond the narrow domain of weight loss apps to that of healthcare domains more generally. We document that human coaches do better than AI coaches in helping consumers achieve their weight loss goals. Importantly, there are significant differences in this effect across different consumer groups. This suggests that a one-size-fits-all approach might not be most effective," Kapoor told The Indian Express.
The findings: Human coaches help consumers achieve their goals better than AI coaches for consumers below the median BMI relative to those above it; for consumers below the median age relative to those above it; for consumers who spent below-median time in a spell relative to those who spent above-median time; and for female consumers relative to male consumers.
While Kapoor said the paper did not go deeper into the why of the effectiveness of AI-plus-human plans for low-BMI individuals over high-BMI individuals, he speculated on the reasons for that trend: "Humans can feel emotions like shame and guilt while dealing with other humans. This is not always true, but in general, and there's ample evidence to suggest this, research has shown that individuals feel shameful while purchasing contraceptives and also while consuming high-calorie indulgent food items. Therefore, high BMI individuals might find it difficult to interact with other human coaches. This doesn't mean that health tech platforms shouldn't suggest human plans for high BMI individuals. Instead, they can focus on (1) training their coaches well to make the high BMI individuals feel comfortable and heard and (2) deciding the optimal mix of the AI and human components of the guidance for weight loss," he added.
Similarly, female consumers responding well to the human coaches can be attributed to recent advancements in the literature on human-AI interaction, which suggests that the adoption of AI differs between females and males, and that there's differential adoption across ages, Kapoor said, adding that this can be a potential reason for the differential impact of human coaches for females over males.
An earlier OECD paper on AI and employment, titled "New Evidence from Occupations Most Exposed to AI," asserted that the impact of these tools would be skewed toward high-skilled, white-collar occupations, including business professionals; managers; science and engineering professionals; and legal, social and cultural professionals.
This contrasts with the impact of previous automating technologies, which have tended to take over primarily routine tasks performed by lower-skilled workers. The 2021 study noted that higher exposure to AI may be a good thing for workers, as long as they have the skills to use these technologies effectively. The research found that over the period 2012-19, greater exposure to AI was associated with higher employment in occupations where computer use is high, suggesting that workers who have strong digital skills may have a greater ability to adapt to and use AI at work and, hence, to reap the benefits that these technologies bring. By contrast, there is some indication that higher exposure to AI is associated with lower growth in average hours worked in occupations where computer use is low. On the whole, the study findings suggested that the adoption of AI may increase labour market disparities between workers who have the skills to use AI effectively and those who do not. Making sure that workers have the right skills to work with new technologies is therefore a key policy challenge, which policymakers will increasingly have to grapple with.
See the article here:
Machine learning: As AI tools gain heft, the jobs that could be at stake - The Indian Express
Causal Bayesian machine learning to assess treatment effect heterogeneity by dexamethasone dose for patients with … – Nature.com
This is a post hoc exploratory analysis of the COVID STEROID 2 trial [7]. It was conducted according to a statistical analysis plan, which was written after the pre-planned analyses of the trial were reported, but before any of the analyses reported in this manuscript were conducted (https://osf.io/2mdqn/). This manuscript was presented according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist [12], with Bayesian analyses reported according to the Reporting of Bayes Used in clinical STudies (ROBUST) guideline [13].
HTE implies that some individuals respond differently, i.e., better or worse, than others who receive the same therapy due to differences between individuals. Most trials are designed to evaluate the average treatment effect, which is the summary of all individual effects in the trial sample (see supplementary appendix for additional technical details). Traditional HTE methods examine patient characteristics one at a time, looking to identify treatment effect differences according to individual variables. This approach is well known to be limited, as it is underpowered (due to adjustment for multiple testing) and does not account for the fact that many characteristics under examination are correlated and may have synergistic effects. As a result, more complex relationships between variables that better define individuals, and thus may better inform understanding about the variations in treatment response, may be missed using conventional HTE approaches. Thus, identifying true and clinically meaningful HTE requires addressing these data and statistical modeling challenges. BART is inherently an attractive method for this task, as the algorithm automates the detection of nonlinear relationships and interactions hierarchically based on the strength of the relationships, thereby reducing researchers' discretion when analyzing experimental data. This approach also avoids any model misspecification or bias inherent in traditional interaction test procedures. BART can also be deployed, as we do herein, within the counterfactual framework to study HTE, i.e., to estimate conditional average treatment effects given the set of covariates or potential effect modifiers [11,14,15], and has shown superior performance to competing methods in extensive simulation studies [16,17]. These features make BART an appealing tool for trialists to explore HTE to inform future confirmatory HTE analyses in trials and hypothesis generation more broadly. Thus, this analysis used BART to evaluate the presence of multivariable HTE and estimate conditional average treatment effects among meaningful subgroups in the COVID STEROID 2 trial.
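For readers unfamiliar with the counterfactual notation, a standard formulation of the estimands involved (not specific to this paper's supplementary appendix) is:

\[
\text{ATE} = \mathbb{E}\left[Y(1) - Y(0)\right], \qquad \tau(x) = \mathbb{E}\left[Y(1) - Y(0) \mid X = x\right],
\]

where \(Y(1)\) and \(Y(0)\) denote a patient's potential outcomes under the two treatments (here, dexamethasone 12 mg/d and 6 mg/d), \(X\) is the vector of covariates, and the conditional average treatment effect \(\tau(x)\) is what the BART models estimate for each covariate profile.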
The COVID STEROID 2 trial [7] was an investigator-initiated, international, parallel-group, stratified, blinded, randomized clinical trial conducted at 31 sites in 26 hospitals in Denmark, India, Sweden, and Switzerland between 27 August 2020 and 20 May 2021 [7,18]. The trial was approved by the regulatory authorities and ethics committees in all participating countries.
The trial enrolled 1000 adult patients hospitalized with COVID-19 and severe hypoxemia (≥10 L oxygen/min, use of non-invasive ventilation (NIV), continuous use of continuous positive airway pressure (cCPAP), or invasive mechanical ventilation (IMV)). Patients were primarily excluded due to previous use of systemic corticosteroids for COVID-19 for 5 or more days, unobtainable consent, and use of higher-dose corticosteroids for other indications than COVID-19 [4,17]. Patients were randomized 1:1 to dexamethasone 12 mg/d or 6 mg/d intravenously once daily for up to 10 days. Additional details are provided in the primary protocol and trial report [7,18].
The trial protocol was approved by the Danish Medicines Agency, the ethics committee of the Capital Region of Denmark, and institutionally at each trial site. The trial was overseen by the Collaboration for Research in Intensive Care and the George Institute for Global Health. A data and safety monitoring committee oversaw the safety of the trial participants and conducted 1 planned interim analysis. Informed consent was obtained from the patients or their legal surrogates according to national regulations.
We examined two outcomes: (1) DAWOLS at day 90 (i.e., the observed number of days without the use of IMV, circulatory support, and kidney replacement therapy, without assigning dead patients the worst possible value), and (2) 90-day mortality. Binary mortality outcomes were used to match the primary trial analysis; time-to-event outcomes also generally tend to be less robust for ICU trials [19]. We selected DAWOLS at day 90 in lieu of the primary outcome of the trial (DAWOLS at day 28) to align with other analyses of the trial that examined longer-term outcomes. Both outcomes were assessed in the complete intention-to-treat (ITT) population, which was 982 after the exclusion of patients without consent for the use of their data [7]. As the sample size is fixed, there was no formal sample size calculation for this study.
While BART is a data-driven approach that can scan for interdependent relationships among any number of factors, we only examined heterogeneity across a pre-selected set of factors deemed to be clinically relevant by the authors and members of the COVID STEROID 2 trial Management Committee. The pre-selected variables that were included in this analysis are listed below with the scale used in parentheses. Continuous covariates were standardized to have a mean of 0 and a standard deviation of 1 prior to analysis. Detailed variable definitions are available in the study protocol18.
participant age (continuous),
limitations in care (yes, no),
level of respiratory support (open system versus NIV/cCPAP versus IMV),
interleukin-6 (IL-6) receptor inhibitors (yes, no),
use of dexamethasone for up to 2 days versus use for 3 to 4 days prior to randomization,
participant weight (continuous),
diabetes mellitus (yes, no),
ischemic heart disease or heart failure (yes, no),
chronic obstructive pulmonary disease (yes, no), and,
immunosuppression within 3 months prior to randomization (yes, no).
We evaluated HTE on the absolute scale (i.e., mean difference in days for the number of DAWOLS at day 90 and the risk difference for 90-day mortality). The analysis was separated into two stages [14,20,21,22]. In the first stage, conditional average treatment effects were estimated according to each participant's covariates using BART models. The DAWOLS outcome was treated as a continuous variable and analyzed using standard BART, while the binary mortality outcome was analyzed using logit BART. In the second stage, a fit-the-fit approach was used, where the estimated conditional average treatment effects were used as dependent variables in models to identify covariate-defined subgroups with differential treatment effects. This second stage used classification and regression tree (CART) models [23], where the maximum depth was set to 3 as a post hoc decision to aid interpretability. As the fit-the-fit reflects estimates from the BART model, the resulting overall treatment effects (e.g., risk difference) vary slightly from the raw trial data.
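As a rough illustration of this two-stage fit-the-fit logic, here is a minimal Python sketch on synthetic data, with scikit-learn's gradient boosting standing in for BART (the authors used the R BART package) and a depth-3 regression tree standing in for the rpart CART model; all variable names and the toy outcome are invented for illustration:

```python
# Two-stage "fit-the-fit" sketch: stage 1 estimates per-patient conditional
# treatment effects from an outcome model; stage 2 regresses those effects on
# covariates with a shallow tree to surface covariate-defined subgroups.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 982                                   # ITT population size in the trial
X = rng.normal(size=(n, 10))              # standardized covariates (toy)
a = rng.integers(0, 2, size=n)            # 1 = 12 mg/d, 0 = 6 mg/d
y = 40 + 5 * a * (X[:, 0] > 0) + rng.normal(scale=10, size=n)  # toy DAWOLS

# Stage 1: model the outcome given covariates and treatment, then predict
# both counterfactuals for every participant.
outcome_model = HistGradientBoostingRegressor().fit(np.column_stack([X, a]), y)
y1 = outcome_model.predict(np.column_stack([X, np.ones(n)]))
y0 = outcome_model.predict(np.column_stack([X, np.zeros(n)]))
cate = y1 - y0                            # estimated conditional effects

# Stage 2: "fit the fit" with a depth-3 tree, mirroring the paper's CART step.
tree = DecisionTreeRegressor(max_depth=3).fit(X, cate)
print(export_text(tree, feature_names=[f"x{i}" for i in range(10)]))
```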
BART models are often fit using a sum of 200 trees and specifying a base prior of 0.95 and a power prior of 2, which penalize substantial branch growth within each tree [15]. Although these default hyperparameters tend to work well in practice, it was possible they were not optimal for this data. Thus, the hyperparameters were evaluated using tenfold cross-validation, comparing predictive performance of the model under 27 pre-specified possibilities, namely every combination of power priors equal to 1, 2, or 3, base priors equal to 0.25, 0.5, or 0.95, and number of trees equal to 50, 200, or 400. The priors corresponding to the lowest cross-validation error were used in the final models. Each model used a Markov chain Monte Carlo procedure consisting of 4 chains that each had 100 burn-in iterations and a total length of 1100 iterations. Posterior convergence for each model was assessed using the diagnostic procedures described in Sparapani et al. [24]. Model diagnostics were good for all models. All parameters seemed to converge within the burn-in period and the z-scores for Geweke's convergence diagnostic [25] were approximately standard normal. All BART models were fit using R statistical computing software v. 4.1.2 [26] with the BART package v. 2.9 [24], and all CART models were fit using the rpart package v. 4.1.16 [27].
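The 27 pre-specified possibilities are simply the Cartesian product of the three candidate values for each of the three settings. A sketch of how such a grid could be enumerated for cross-validation (the scoring step is left as a placeholder, since it depends on the BART implementation used):

```python
# Enumerate the 3 x 3 x 3 = 27 hyperparameter combinations described above.
from itertools import product

power_priors = (1, 2, 3)
base_priors = (0.25, 0.5, 0.95)
tree_counts = (50, 200, 400)

grid = list(product(power_priors, base_priors, tree_counts))
assert len(grid) == 27
for power, base, n_trees in grid:
    # Placeholder: fit BART on nine folds with these settings, record the
    # prediction error on the held-out fold, and keep the combination with
    # the lowest tenfold cross-validation error.
    pass
```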
The analysis was performed under the ITT paradigm; compliance issues were considered minimal. As in the primary analyses of the trial, the small amount of missing outcome data was ignored in the primary analyses. Sensitivity analyses were performed under best/worst- and worst/best-case imputation. For best/worst-case imputation, the entire estimation procedure was repeated after setting all missing mortality outcome data in the 12 mg/d group to alive at 90 days and all missing mortality outcome data in the 6 mg/d group to dead at 90 days. Then, all days with missing life support data were set to alive without life support for the 12 mg/d group and the opposite for the 6 mg/d group. Under worst/best-case imputation, the estimation procedure was repeated under the opposite conditions, e.g., setting all missing mortality outcome data in the 12 mg/d group to dead at 90 days and all missing mortality outcome data in the 6 mg/d group to alive at 90 days.
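A minimal sketch of the best/worst-case assignment for the mortality outcome, assuming a hypothetical table layout (the column names are illustrative, not the trial's actual schema):

```python
# Best/worst-case sensitivity imputation: missing outcomes are filled in to
# favor one arm, then the opposite way, to bound the possible results.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "arm": ["12mg", "6mg", "12mg", "6mg"],
    "dead_90d": [0.0, 1.0, np.nan, np.nan],   # NaN = missing outcome
})

best_worst = df.copy()
missing = best_worst["dead_90d"].isna()
best_worst.loc[missing & (best_worst["arm"] == "12mg"), "dead_90d"] = 0  # alive
best_worst.loc[missing & (best_worst["arm"] == "6mg"), "dead_90d"] = 1   # dead

worst_best = df.copy()
missing = worst_best["dead_90d"].isna()
worst_best.loc[missing & (worst_best["arm"] == "12mg"), "dead_90d"] = 1  # dead
worst_best.loc[missing & (worst_best["arm"] == "6mg"), "dead_90d"] = 0   # alive
```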
The resulting decision trees from each fit-the-fit analysis described above (one for the 90-day mortality outcome, and one for the 90-day DAWOLS outcome) were outputted (with continuous variables de-standardized, i.e., back-translated to the original scales). Likewise, the resulting decision trees for each outcome after best- and worst-case imputation were outputted for comparison with the complete records analyses. All statistical code is made available at https://github.com/harhay-lab/Covid-Steroid-HTE.
Machine Learning as a Service Market Size Growing at 37.9% CAGR Set to Reach USD 173.5 Billion By 2032 – Yahoo Finance
Acumen Research and Consulting
Acumen Research and Consulting recently published a report titled "Machine Learning as a Service Market Forecast, 2023 - 2032"
TOKYO, April 24, 2023 (GLOBE NEWSWIRE) -- The Global Machine Learning as a Service Market Size accounted for USD 7.1 Billion in 2022 and is projected to achieve a market size of USD 173.5 Billion by 2032, growing at a CAGR of 37.9% from 2023 to 2032.
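The headline figures are internally consistent: a compound annual growth rate is the end value divided by the start value, raised to 1/years, minus 1, and the stated numbers roughly reproduce the stated rate (the small gap is rounding in the report's figures):

```python
# Check the report's arithmetic: CAGR = (end / start) ** (1 / years) - 1.
start, end, years = 7.1, 173.5, 10   # USD billions, 2022 -> 2032
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                 # ~37.7%, close to the stated 37.9%
```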
Machine Learning as a Service Market Research Report Highlights and Statistics:
The Global Machine Learning as a Service Market Size in 2022 stood at USD 7.1 Billion and is set to reach USD 173.5 Billion by 2032, growing at a CAGR of 37.9%
MLaaS allows users to access and utilize pre-built algorithms, models, and tools, making it easier and faster to develop and deploy machine learning applications.
Adoption of cloud-based technologies, the need for managing the huge amount of data generated, and the rise in demand for predictive analytics and natural language processing are driving the growth of the Machine Learning as a Service market.
North America is expected to hold the largest market share in the Machine Learning as a Service market due to the presence of large technology companies and the increasing demand for advanced technologies in the region.
Some of the key players in the Machine Learning as a Service market include Amazon Web Services, IBM Corporation, Google LLC, Microsoft Corporation, and Oracle Corporation.
Request For Free Sample Report @ https://www.acumenresearchandconsulting.com/request-sample/385
Machine Learning as a Service Market Report Coverage:
Market: Machine Learning as a Service Market
Market Size 2022: USD 7.1 Billion
Market Forecast 2032: USD 173.5 Billion
Market CAGR During 2023 - 2032: 37.9%
Market Analysis Period: 2020 - 2032
Market Base Year: 2022
Market Forecast Data: 2023 - 2032
Segments Covered: By Component, By Application, By Organization Size, By End-Use Industry, and By Geography
Regional Scope: North America, Europe, Asia Pacific, Latin America, and Middle East & Africa
Key Companies Profiled: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Watson, Oracle Cloud, Alibaba Cloud, SAS, Predictron Labs Ltd, FICO, and Hewlett Packard Enterprise
Report Coverage: Market Trends, Drivers, Restraints, Competitive Analysis, Player Profiling, Regulation Analysis
Machine Learning as a Service Market Overview:
The increasing adoption of cloud-based technologies and the need for managing the enormous amount of data generated has led to the rise in demand for MLaaS solutions. MLaaS provides pre-built algorithms, models, and tools, making it easier and faster to develop and deploy machine learning applications. This service is being used in various industries such as healthcare, retail, BFSI, manufacturing, and others.
The healthcare industry is using MLaaS for patient monitoring and disease prediction. In retail, MLaaS is being used for personalized recommendations and fraud detection. MLaaS is also being utilized for financial fraud detection, sentiment analysis, recommendation systems, predictive maintenance, and much more.
The Natural Language Processing (NLP) segment is expected to grow rapidly during the forecast period. NLP is being used by organizations to analyze customer feedback, improve customer experience, and automate customer service. MLaaS vendors such as Amazon Web Services, IBM Corporation, Google LLC, Microsoft Corporation, and Oracle Corporation offer various pricing models and features, making the Machine Learning as a Service market competitive.
Trends in the Machine Learning as a Service Market:
Automated Machine Learning (AutoML): The development of AutoML algorithms is reducing the need for expert data scientists to develop machine learning models, allowing non-experts to develop and deploy models with less effort and cost.
Edge Computing: Machine learning models are being deployed on edge devices such as smartphones, IoT sensors, and other devices to reduce latency and improve privacy.
Explainable AI: Machine learning models are becoming more transparent, and algorithms are being developed that can explain how the model arrived at its decisions.
Federated Learning: Machine learning models are being developed to train on data that is distributed across multiple devices, allowing for privacy protection and faster training.
Synthetic Data: Synthetic data is being used to augment training data, reducing the need for large amounts of real data and improving model accuracy.
Time Series Analysis: Machine learning models are being developed to analyze and predict time series data, which is important in industries such as finance and transportation.
Personalization: Machine learning models are being developed to provide personalized recommendations, content, and experiences to users.
Generative Models: Generative models are being developed to create new data based on existing data, which can be used for various applications such as image and text generation.
Machine Learning as a Service Market Dynamics
Increased demand for advanced analytics: Businesses are looking for ways to extract insights from their data to improve decision-making, and MLaaS provides a fast and efficient way to do so.
Quantum Machine Learning: Machine learning algorithms are being developed that can run on quantum computers, which offer significant speed improvements over classical computers.
Interpretable Machine Learning: Machine learning models are being developed to provide interpretable results, allowing users to understand how the model arrived at its decisions.
Reinforcement Learning: Reinforcement learning algorithms are being developed to teach machines how to make decisions based on feedback from their environment.
Multi-Task Learning: Machine learning models are being developed to perform multiple tasks simultaneously, reducing the need for multiple models.
Transfer Learning: Machine learning models are being developed that can transfer knowledge learned from one task to another, reducing the need for large amounts of training data.
Increasing adoption of IoT devices: The growing number of IoT devices is generating massive amounts of data that can be analyzed with machine learning algorithms, driving demand for MLaaS services.
Speech Recognition: Machine learning models are being developed that can accurately recognize speech, which is important for applications such as virtual assistants and speech-to-text.
Low barriers to entry: MLaaS provides a low barrier to entry for businesses that want to incorporate machine learning into their operations but lack the resources to do so in-house.
Explainable Deep Learning: Deep learning models are being developed that can provide interpretable results, allowing users to understand how the model arrived at its decisions, which is important for applications such as healthcare and finance.
Growth Hampering Factors in the Market for Machine Learning as a Service:
Concerns about data security and privacy: Many businesses are hesitant to use MLaaS due to concerns about data security and privacy, which may hamper the growth of the market.
Complexity of machine learning models: Developing and deploying machine learning models can be complex, which may limit the adoption of MLaaS by businesses.
Limited interpretability of machine learning models: Many machine learning models are not easily interpretable, which may make it difficult for businesses to understand the underlying logic and decision-making process of these models.
Limited availability of training data: Machine learning models require large amounts of high-quality training data, and if this data is not available, it may limit the ability of businesses to develop accurate models.
Cost: MLaaS can be expensive, especially for small and medium-sized businesses, which may limit adoption.
Lack of trust in machine learning models: If businesses do not trust the accuracy and reliability of machine learning models, they may be hesitant to adopt MLaaS.
Check the detailed table of contents of the report @
Market Segmentation:
By Type of Component
By Application
Security and surveillance
Augmented and Virtual reality
Marketing and Advertising
Fraud Detection and Risk Management
Predictive analytics
Computer vision
Natural Language processing
Other
By Size of Organization
End User
Retail
BFSI
Healthcare
Public sector
Manufacturing
IT and Telecom
Energy and Utilities
Aerospace and Defense
Machine Learning as a Service Market Overview by Region:
North America's Machine Learning as a Service market share is the highest globally, due to the high adoption of cloud computing and the presence of several major players in the region. The United States is the largest market for MLaaS in North America, driven by the increasing demand for predictive analytics, the growing use of deep learning, and the rising adoption of artificial intelligence (AI) across various industries. For instance, companies in the healthcare sector are using MLaaS for predicting patient outcomes, and retailers are using it to analyze customer behavior and preferences to deliver personalized experiences.
The Asia-Pacific region's Machine Learning as a Service market share is also huge and is growing at the fastest rate, due to the increasing adoption of cloud computing, the growth of IoT devices, and the rise of e-commerce in the region. China is the largest market for MLaaS in the Asia Pacific region, with several major companies investing in AI and machine learning technologies. For example, Alibaba, the largest e-commerce company in China, is using MLaaS for predictive analytics and recommendation engines. Japan is another significant market for MLaaS in the region, with companies using it for predictive maintenance and fraud detection.
Europe is another key market for Machine Learning as a Service, with countries such as the United Kingdom, Germany, and France driving growth in the region. The adoption of MLaaS in Europe is being driven by the growth of e-commerce and the increasing demand for personalized experiences. For example, companies in the retail sector are using MLaaS to analyze customer data and make personalized product recommendations. The healthcare sector is also a significant user of MLaaS in Europe, with providers using it for predictive analytics and diagnosis.
The MEA and South American regions hold a growing Machine Learning as a Service market share; however, they are expected to grow at a steady pace.
Buy this premium research report
https://www.acumenresearchandconsulting.com/buy-now/0/385
Machine Learning as a Service Market Key Players:
Some of the major players in the Machine Learning as a Service market include Amazon Web Services, Google LLC, IBM Corporation, Microsoft Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Development LP, Fair Isaac Corporation (FICO), Fractal Analytics Inc., H2O.ai, DataRobot, Alteryx Inc., Big Panda Inc., RapidMiner Inc., SAS Institute Inc., Angoss Software Corporation, Domino Data Lab Inc., TIBCO Software Inc., Cloudera Inc., and Databricks Inc. These companies offer a wide range of MLaaS solutions, including predictive analytics, machine learning algorithms, natural language processing, deep learning, and computer vision.
Browse More Research Topics on Technology Industry Related Reports:
Wallaroo.ai partners with VMware on machine learning at the edge – SiliconANGLE News
Machine learning startup Wallaroo Labs Inc., better known as Wallaroo.ai, said today it's partnering with the virtualization software giant VMware Inc. to create a unified edge machine learning and artificial intelligence deployment and operations platform that's aimed at communications service providers.
Wallaroo.ai is the creator of a unified platform for easily deploying, observing and optimizing machine learning in production, on any cloud, on-premises or at the network edge. The company says it's joining with VMware to help CSPs better make money from their networks by supporting them with scalable machine learning at the edge.
It's aiming to solve the problem of managing edge machine learning through easier deployment, more efficient inference and continuous optimization of models at 5G edge locations and in distributed networks. CSPs will also benefit from a unified operations center that allows them to observe, manage and scale up edge machine learning deployments from one place.
More specifically, Wallaroo.ai said, its new offering will make it simple to deploy AI models trained in one environment to multiple resource-constrained edge endpoints, while providing tools to help test and continuously optimize those models in production. Benefits include automated observability and drift detection, so users will know if their models start to generate inaccurate responses or predictions. It also offers integration with popular ML development environments, such as Databricks, and cloud platforms such as Microsoft Azure.
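Wallaroo.ai's drift-detection internals aren't described in detail, but the general idea is to compare recent model outputs against a reference window and raise a flag when the distributions diverge. A generic, hypothetical sketch of that concept (this is not Wallaroo.ai's actual API):

```python
# Generic output-drift check: compare live inference scores against a
# reference window with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

reference = np.random.default_rng(0).normal(0.20, 0.05, size=5_000)  # baseline
live = np.random.default_rng(1).normal(0.35, 0.05, size=1_000)       # recent

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"Drift suspected (KS statistic = {stat:.3f}); review the model")
```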
Wallaroo.ai co-founder and Chief Executive Vid Jain told SiliconANGLE that CSPs are specifically looking for help in deploying machine learning models for tasks such as monitoring network health, network optimization, predictive maintenance and security. Doing so is difficult, he says, because the models have a number of requirements, including the need for very efficient compute at the edge.
At present, most edge locations are constrained by low-powered compute resources, limited memory and strict latency requirements. In addition, CSPs need the ability to deploy the models at many edge endpoints simultaneously, and they also need a way to monitor those endpoints.
"We offer CSPs a highly efficient, trust-based inference server that is ideally suited for fast edge inferencing, together with a single unified operations center," Jain explained. "We are also working on integrating orchestration software such as VMware that allows for monitoring, updating and management of all the edge locations running AI. The Wallaroo.AI server and models can be deployed into telcos' 5G infrastructure and bring back any monitoring data to a central hub."
Stephen Spellicy, vice president of service provider marketing, enablement and business development at VMware, said the partnership is all about helping telecommunications companies put machine learning to work in distributed environments more easily. Machine learning at the edge has multiple use cases, he explained, such as better securing and optimizing distributed networks and providing low-latency services to businesses and consumers.
Wallaroo.ai said its platform will be able to operate across multiple clouds, radio access networks and edge environments, which it believes will become the primary elements of a future, low-latency and highly distributed internet.
Excerpt from:
Wallaroo.ai partners with VMware on machine learning at the edge - SiliconANGLE News
Sliding Out of My DMs: Young Social Media Users Help Train … – Drexel University
In a first-of-its-kind effort, social media researchers from Drexel University, Vanderbilt University, Georgia Institute of Technology and Boston University are turning to young social media users to help build a machine learning program that can spot unwanted sexual advances on Instagram. Trained on data from more than 5 million direct messages, annotated and contributed by 150 adolescents who had experienced conversations that made them feel sexually uncomfortable or unsafe, the technology can quickly and accurately flag risky DMs.
The project, which was recently published by the Association for Computing Machinery in its Proceedings of the ACM on Human-Computer Interaction, is intended to address concerns that an increase in teens using social media, particularly during the pandemic, is contributing to rising trends of child sexual exploitation.
"In the year 2020 alone, the National Center for Missing and Exploited Children received more than 21.7 million reports of child sexual exploitation, which was a 97% increase over the year prior. This is a very real and terrifying problem," said Afsaneh Razi, PhD, an assistant professor in Drexel's College of Computing & Informatics, who was a leader of the research.
Social media companies are rolling out new technology that can flag and remove sexually exploitative images and help users more quickly report these illegal posts. But advocates are calling for greater protection for young users that could identify and curtail these risky interactions sooner.
The group's efforts are part of a growing field of research looking at how machine learning and artificial intelligence can be integrated into platforms to help keep young people safe on social media, while also ensuring their privacy. Its most recent project stands apart for its collection of a trove of private direct messages from young users, which the team used to train a machine learning-based program that is 89% accurate at detecting sexually unsafe conversations among teens on Instagram.
"Most of the research in this area uses public datasets, which are not representative of real-world interactions that happen in private," Razi said. "Research has shown that machine learning models based on the perspectives of those who experienced the risks, such as cyberbullying, provide higher performance in terms of recall. So, it is important to include the experiences of victims when trying to detect the risks."
Each of the 150 participants, who range in age from 13 to 21 years old, had used Instagram for at least three months between the ages of 13 and 17, exchanged direct messages with at least 15 people during that time, and had at least two direct messages that made them or someone else feel uncomfortable or unsafe. They contributed their Instagram data, more than 15,000 private conversations, through a secure online portal designed by the team, and were then asked to review their messages and label each conversation as safe or unsafe, according to how it made them feel.
"Collecting this dataset was very challenging due to the sensitivity of the topic and because the data is being contributed by minors in some cases," Razi said. "Because of this, we drastically increased the precautions we took to preserve confidentiality and privacy of the participants and to ensure that the data collection met high legal and ethical standards, including reporting child abuse and the possibility of uploads of potentially illegal artifacts, such as child abuse material."
The participants flagged 326 conversations as unsafe and, in each case, they were asked to identify what type of risk it presented (nudity/porn, sexual messages, harassment, hate speech, violence/threat, sale or promotion of illegal activities, or self-injury) and the level of risk they felt (high, medium or low).
This level of user-generated assessment provided valuable guidance when it came to preparing the machine learning programs. Razi noted that most social media interaction datasets are collected from publicly available conversations, which are much different than those held in private. And they are typically labeled by people who were not involved with the conversation, so it can be difficult for them to accurately assess the level of risk the participants felt.
"With self-reported labels from participants, we not only detect sexual predators but also assessed the survivors' perspectives of the sexual risk experience," the authors wrote. "This is a significantly different goal than attempting to identify sexual predators. Built upon this real-user dataset and labels, this paper also incorporates human-centered features in developing an automated sexual risk detection system."
Specific combinations of conversation and message features were used as input to the machine learning models. These included contextual features, like the age, gender and relationship of the participants; linguistic features, such as word count, the focus of questions, or the topics of the conversation; sentiment (whether it was positive, negative or neutral); how often certain terms were used; and whether or not a set of 98 pre-identified sexual-related words appeared.
This allowed the machine learning programs to designate a set of attributes of risky conversations, and, thanks to the participants' assessments of their own conversations, the programs could also rank the relative level of risk.
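To make this concrete, here is a minimal sketch of how a conversation might be turned into such a feature vector. It is illustrative only: the field names, the encodings, and the tiny three-word lexicon are hypothetical stand-ins, not the study's actual feature set or its 98-word list.

```python
# Hypothetical sketch: turning one conversation into a feature dictionary.
# Names, encodings, and the lexicon are invented for illustration.
from collections import Counter

SEXUAL_TERMS = {"nude", "nudes", "sexy"}  # stand-in for the 98-word lexicon

def conversation_features(messages, age, gender_code, relationship_code):
    """messages: list of (sender, text) tuples for one conversation."""
    words = [w.lower() for _, text in messages for w in text.split()]
    counts = Counter(words)
    return {
        # contextual features
        "age": age,
        "gender": gender_code,              # e.g. 0/1/2 encoding
        "relationship": relationship_code,  # e.g. 0 = stranger, 1 = friend
        # linguistic features
        "word_count": len(words),
        "question_count": sum(text.count("?") for _, text in messages),
        # lexicon feature
        "sexual_term_count": sum(counts[t] for t in SEXUAL_TERMS),
    }
```

Vectors like this, one per conversation, paired with the participants' safe/unsafe labels, are what a classifier of this kind is trained on.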
The team put its model to the test against a large set of public sample conversations created specifically for sexual predation risk-detection research. The best performance came from its Random Forest classifier program, which can rapidly assign features to sample conversations and compare them to known sets that have reached a risk threshold. The classifier accurately identified 92% of unsafe sexual conversations from the set. It was also 84% accurate at flagging individual risky messages.
By incorporating its user-labeled risk assessment training, the models were also able to tease out the most relevant characteristics for identifying an unsafe conversation. "Contextual features, such as age, gender and relationship type, as well as linguistic inquiry and word count, contributed the most" to identifying conversations that made young users feel unsafe, they wrote.
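As a rough illustration of that kind of pipeline (not the team's actual code or data), the sketch below trains a scikit-learn Random Forest on synthetic feature vectors shaped like the ones above, then prints an evaluation report and the learned feature importances:

```python
# Sketch of a Random Forest risk classifier with feature importances.
# All data here is synthetic; this is not the study's model or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 6))  # rows: conversations, columns: the six features above
y = (X[:, 5] + 0.3 * rng.random(500) > 0.6).astype(int)  # 1 = unsafe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))

# Rank features by importance, analogous to the paper's analysis of which
# characteristics best identify unsafe conversations.
names = ["age", "gender", "relationship",
         "word_count", "question_count", "sexual_term_count"]
for name, importance in sorted(zip(names, clf.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```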
This means that a program like this could be used to automatically warn users, in real time, when a conversation has become problematic, as well as to collect data after the fact. Both of these applications could be tremendously helpful in risk prevention and in the prosecution of crimes, but the authors caution that any integration into social media platforms must preserve the trust and privacy of the users.
"Social service providers find value in the potential use of AI as an early detection system for risks, because they currently rely heavily on youth self-reports after a formal investigation has occurred," Razi said. "But these methods must be implemented in a privacy-preserving manner, so as not to harm the trust and relationship of the teens with adults. Many parental monitoring apps are privacy-invasive, since they share most of the teen's information with parents; these machine learning detection systems can help with minimal sharing of information, pointing users to resources when they are needed."
They suggest that if the program is deployed as a real-time intervention, young users should be offered a suggestion rather than an alert or automatic report, and should be able to provide feedback to the model and make the final decision themselves.
While the groundbreaking nature of its training data makes this work a valuable contribution to the field of computational risk detection and adolescent online safety research, the team notes that it could be improved by expanding the size of the sample and looking at users of different social media platforms. The training annotations for the machine learning models could also be revised to allow outside experts to rate the risk of each conversation.
The group plans to continue its work and to further refine its risk detection models. It has also created an open-source community to safely share the data with other researchers in the field, recognizing how important it could be for the protection of this vulnerable population of social media users.
"The core contribution of this work is that our findings are grounded in the voices of youth who experienced online sexual risks and were brave enough to share these experiences with us," they wrote. "To the best of our knowledge, this is the first work that analyzes machine learning approaches on private social media conversations of youth to detect unsafe sexual conversations."
This research was supported by the U.S. National Science Foundation and the William T. Grant Foundation.
In addition to Razi, Ashwaq Alsoubai and Pamela J. Wisniewski, from Vanderbilt University; Seunghyun Kim and Munmun De Choudhury, from Georgia Institute of Technology; and Shiza Ali and Gianluca Stringhini, from Boston University, contributed to the research.
Read the full paper here: https://dl.acm.org/doi/10.1145/3579522
Read the original:
Sliding Out of My DMs: Young Social Media Users Help Train ... - Drexel University
How AI, automation, and machine learning are upgrading clinical trials – Clinical Trials Arena
Artificial intelligence (AI) is set to be the most disruptive emerging technology in drug development in 2023, unlocking advanced analytics, enabling automation, and increasing speed across the clinical trial value chain.
Today's clinical trials landscape is being shaped by macro trends that include the Covid-19 pandemic, geopolitical uncertainty, and climate pressures. Meanwhile, advancements in adaptive design, personalisation and novel treatments mean that clinical trials are more complex than ever. Sponsors seek greater agility and faster time to commercialisation while maintaining quality and safety in an evolving global market. Across every stage of clinical research, AI offers optimisation opportunities.
A new whitepaper from digital technology solutions provider Taimei examines the transformative impact of AI on the clinical trials of today and explores how it will shape the future.
"The big delay areas are always patient recruitment, site start-up, querying, data review, and data cleaning," explains Scott Clark, chief commercial officer at Taimei.
Patient recruitment is typically the most time-consuming stage of a clinical trial. Sponsors must identify a pool of subjects, gather information, and use inclusion/exclusion criteria to filter and select participants. High-quality patient recruitment is vital to a trial's success.
Once patients are recruited, they must be managed effectively. Patient retention has a direct impact on the quality of the trial's results, so patient management is crucial. In today's clinical trials, patients can be distributed across more than a hundred sites and multiple geographies, presenting huge data management challenges for sponsors.
AI can be leveraged across patient recruitment and management to boost efficiency, quality, and retention. Algorithms can gather subject information and screen and filter potential participants. They can analyse data sources such as medical records and even social media content to detect subgroups and geographies that may be relevant to the trial. AI can also alert medical staff and patients to clinical trial opportunities.
The result? Faster, more efficient patient recruitment, with the ability to reach more diverse populations and more relevant participants, as well as increased quality and retention. "[Using AI], you can develop the correct cohort," explains Clark. "It's about accuracy, efficiency, and safety."
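As a toy illustration of what automated eligibility screening involves (a deliberately simplified, rule-based stand-in; the whitepaper does not describe Taimei's algorithms at this level), consider filtering structured patient records against invented inclusion/exclusion criteria:

```python
# Toy sketch of inclusion/exclusion screening over structured patient records.
# The record fields and criteria are invented for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    age: int
    hba1c: float      # HbA1c, %
    on_insulin: bool

def eligible(p: Patient) -> bool:
    """Invented criteria: adults 18-75, HbA1c 7.0-10.0%, not on insulin."""
    return 18 <= p.age <= 75 and 7.0 <= p.hba1c <= 10.0 and not p.on_insulin

records = [
    Patient("P001", 54, 8.2, False),
    Patient("P002", 16, 7.5, False),  # excluded: under 18
    Patient("P003", 63, 9.1, True),   # excluded: on insulin
]
cohort = [p.patient_id for p in records if eligible(p)]
print(cohort)  # ['P001']
```

A production system would apply the same logic at scale across medical records and other data sources, with language models extracting the relevant fields from unstructured notes.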
Study build can be a laborious and repetitive process. Typically, data managers must read the study protocol and generate as many as 50-60 case report forms (CRFs). Each trial has different CRF requirements, and CRF design and database building can take weeks, with a direct impact on the quality and accuracy of the clinical trial.
Enter AI. Automated text reading can parse, categorise, and stratify corpora of words to automatically generate eCRFs and the data capture matrix. "In study building, AI is able to read the protocols and pull the best CRF forms for the best outcomes," adds Clark.
It can then use the data points from the CRFs to build the study base, creating the whole database in a matter of minutes rather than weeks. The database is structured for export to the biostatisticians' programs. AI can then facilitate the analysis of data and develop all of the required tables, listings and figures (TLFs). It can even come to a conclusion on the outcomes, pending review.
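A heavily simplified sketch of the protocol-to-eCRF idea follows. Real systems use trained language models rather than keyword matching, and the protocol excerpt, form library, and field names here are all invented:

```python
# Toy sketch: drafting eCRF fields from protocol text via keyword matching.
# Production systems use NLP models; this mapping is invented for illustration.
import re

PROTOCOL = """
Vital signs (blood pressure, heart rate) will be recorded at each visit.
Adverse events will be collected from first dose through follow-up.
Blood samples for hematology will be drawn at screening and week 12.
"""

FORM_LIBRARY = {  # keyword -> draft eCRF fields
    "vital signs": ["systolic_bp_mmhg", "diastolic_bp_mmhg", "heart_rate_bpm"],
    "adverse events": ["ae_term", "ae_start_date", "ae_severity"],
    "hematology": ["hemoglobin_g_dl", "wbc_10e9_l", "platelets_10e9_l"],
}

ecrf_draft = {form: fields for form, fields in FORM_LIBRARY.items()
              if re.search(form, PROTOCOL, re.IGNORECASE)}
for form, fields in ecrf_draft.items():
    print(f"{form}: {fields}")
```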
Optical character recognition (OCR) can address structured and unstructured native documents. Using built-in edit checks, AI can reduce the timeframe for study build from ten weeks to just one, freeing up data managers' time. "We are able to do up to 168% more edit checks than are done currently in the human manual process," says Clark. AI can also automate remote monitoring to identify outliers and suggest the best course of action, to be taken with approval from the project manager.
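An edit check is, at its core, a validation rule applied automatically to captured data. A minimal sketch, with invented field names and plausible-range rules:

```python
# Toy sketch of automated edit checks on captured trial data.
# Field names and plausible ranges are invented for illustration.
RANGE_CHECKS = {
    "systolic_bp_mmhg": (70, 250),
    "heart_rate_bpm": (30, 220),
    "hemoglobin_g_dl": (4.0, 20.0),
}

def run_edit_checks(record: dict) -> list[str]:
    """Return a query for every missing or out-of-range value."""
    queries = []
    for field, (lo, hi) in RANGE_CHECKS.items():
        value = record.get(field)
        if value is None:
            queries.append(f"{field}: missing value")
        elif not lo <= value <= hi:
            queries.append(f"{field}: {value} outside plausible range [{lo}, {hi}]")
    return queries

print(run_edit_checks({"systolic_bp_mmhg": 300, "heart_rate_bpm": 72}))
# ['systolic_bp_mmhg: 300 outside plausible range [70, 250]',
#  'hemoglobin_g_dl: missing value']
```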
AI data management is flexible, agile, and robust. Using electronic data capture (EDC) removes the need to manage paper-based documentation. This is essential for modern clinical trials, which can present huge amounts of unstructured data thanks to the rise of advances such as decentralisation, wearables, telemedicine, and self-reporting.
"Once the trial is launched, you can use AI to do automatic querying and medical coding," says Clark. When there's a piece of data that doesn't make sense or is not coded, AI can flag it and provide suggestions automatically. "The data manager just reviews what it's corrected," adds Clark. "That's a big time-saver." By leveraging AI throughout data input, sponsors also cut out the lengthy process of data cleaning at the end of a trial.
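Auto-coding of this kind can be approximated with string similarity against a coding dictionary. The sketch below uses only Python's standard library; the three-entry dictionary is a stand-in for a real thesaurus such as MedDRA, and the codes are invented:

```python
# Toy sketch: suggesting a medical code for a free-text adverse event term.
# The tiny dictionary and codes are invented stand-ins for a real thesaurus.
import difflib

DICTIONARY = {
    "headache": "10019211",
    "nausea": "10028813",
    "dizziness": "10013573",
}

def suggest_code(verbatim: str):
    """Return (preferred_term, code) for the closest match, else None."""
    match = difflib.get_close_matches(verbatim.lower(), DICTIONARY,
                                      n=1, cutoff=0.6)
    if not match:
        return None  # no confident match: raise a query for the data manager
    term = match[0]
    return term, DICTIONARY[term]

print(suggest_code("headach"))  # ('headache', '10019211')
print(suggest_code("xyzzy"))    # None -> flagged for manual review
```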
Implementing AI means establishing the proof of concept, building a customised knowledge base, and training the model to solve the problem on a large scale. Algorithms must be trained on large amounts of data to remove bias and ensure accuracy. Today, APIs enable best-in-class advances to be integrated into clinical trial applications.
By taking repetitive tasks away from human personnel, AI accelerates the time to market for life-saving drugs and frees up man-hours for more specialist tasks. By analysing past and present trial data, AI can be used to inform future research, with machine learning able to suggest better study design. In the long term, AI has the potential to shift the focus away from trial implementation and towards drug discovery, enabling improved treatments for patients who need them.
Read the original post:
How AI, automation, and machine learning are upgrading clinical trials - Clinical Trials Arena