Category Archives: Machine Learning
Study finds workplace machine learning improves accuracy, but also increases human workload – Tech Xplore
by European School of Management and Technology (ESMT)
New research from ESMT Berlin shows that using machine learning in the workplace always improves the accuracy of human decision-making; however, it can often also cause humans to exert more cognitive effort when making decisions.
These findings come from research by Tamer Boyaci and Francis de Véricourt, both professors of management science at ESMT Berlin, alongside Caner Canyakmaz, previously a post-doctoral fellow at ESMT and now an assistant professor of operations management at Ozyegin University. The researchers wanted to investigate how machine-based predictions may affect the decision process and outcomes of a human decision-maker. Their paper has been published in Management Science.
Interestingly, the use of machines increases a human's workload most when the professional is cognitively constrained, for instance when experiencing time pressure or multitasking. However, situations where decision makers experience high workload are precisely when introducing AI to alleviate some of this load appears most tempting. The research suggests that using AI in this instance to make the process faster can backfire, and actually increase rather than decrease the human's cognitive effort.
The researchers also found that, although machine input always improves the overall accuracy of human decisions, it can also increase the likelihood of certain types of errors, such as false positives. For the study, a machine learning model was used to identify the differences in accuracy, propensity, and the levels of cognitive effort exerted by humans, comparing solely human-made decisions to machine-aided decisions.
"The rapid adoption of AI technologies by many organizations has recently raised concerns that AI may eventually replace humans in certain tasks," says Professor de Vricourt. "However, when used alongside human rationale, machines can significantly enhance the complementary strengths of humans," he says.
The researchers say their findings clearly showcase the value of collaborations between humans and machines to the professional. But humans should also be aware that, though machines can provide incredibly accurate information, often there still needs to be a cognitive effort from humans to assess their own information and compare the machine's prescription to their own conclusions before making a decision. The researchers say that the level of cognitive effort needed increases when humans are under pressure to deliver a decision.
"Machines can perform specific tasks with incredible accuracy, due to their incredible computing power, while in contrast, human decision-makers are flexible and adaptive but constrained by their limited cognitive capacitytheir skills complement each other," says Professor Boyaci. "However, humans must be wary of the circumstances of utilizing machines and understand when it is effective and when it is not."
Using the example of a doctor and patient, the researchers' findings suggest that the use of machines will improve overall diagnostic accuracy and decrease the number of misdiagnosed sick patients. However, if the disease incidence is low and time is constrained, introducing a machine to help doctors make their diagnosis would lead to more misdiagnosed patients, and to more human cognitive effort, owing to the additional effort needed to resolve the ambiguity that implementing machines can cause.
The researchers state that their findings offer both hope and caution for those looking to implement machines in the workplace. On the positive side, average accuracy improves, and when the machine input tends to confirm what was already expected, all error rates decrease and the human is more "efficient" as she reduces her cognitive effort.
However, incorporating machine-based predictions in human decisions is not always beneficial, neither in terms of the reduction of errors nor the amount of cognitive effort. In fact, introducing a machine to improve a decision-making process can be counter-productive as it can increase certain error types and the time and cognitive effort it takes to reach a decision.
The findings underscore the critical impact machine-based predictions have on human judgment and decisions. These findings provide guidance on when and how machine input should be considered, and hence on the design of human-machine collaboration.
More information: Tamer Boyacı et al, Human and Machine: The Impact of Machine Input on Decision Making Under Cognitive Limitations, Management Science (2023). DOI: 10.1287/mnsc.2023.4744
Journal information: Management Science
Provided by European School of Management and Technology (ESMT)
Development and internal-external validation of statistical and machine learning models for breast cancer … – The BMJ
Abstract
Objective To develop a clinically useful model that estimates the 10 year risk of breast cancer related mortality in women (self-reported female sex) with breast cancer of any stage, comparing results from regression and machine learning approaches.
Design Population based cohort study.
Setting QResearch primary care database in England, with individual level linkage to the national cancer registry, Hospital Episodes Statistics, and national mortality registers.
Participants 141765 women aged 20 years and older with a diagnosis of invasive breast cancer between 1 January 2000 and 31 December 2020.
Main outcome measures Four model building strategies comprising two regression (Cox proportional hazards and competing risks regression) and two machine learning (XGBoost and an artificial neural network) approaches. Internal-external cross validation was used for model evaluation. Random effects meta-analysis that pooled estimates of discrimination and calibration metrics, calibration plots, and decision curve analysis were used to assess model performance, transportability, and clinical utility.
Results During a median 4.16 years (interquartile range 1.76-8.26) of follow-up, 21688 breast cancer related deaths and 11454 deaths from other causes occurred. Restricting to 10 years maximum follow-up from breast cancer diagnosis, 20367 breast cancer related deaths occurred during a total of 688564.81 person years. The crude breast cancer mortality rate was 295.79 per 10000 person years (95% confidence interval 291.75 to 299.88). Predictors varied for each regression model, but both Cox and competing risks models included age at diagnosis, body mass index, smoking status, route to diagnosis, hormone receptor status, cancer stage, and grade of breast cancer. The Cox model's random effects meta-analysis pooled estimate for Harrell's C index was the highest of any model at 0.858 (95% confidence interval 0.853 to 0.864, and 95% prediction interval 0.843 to 0.873). It appeared acceptably calibrated on calibration plots. The competing risks regression model had good discrimination: pooled Harrell's C index 0.849 (0.839 to 0.859, and 0.821 to 0.876), and evidence of systematic miscalibration on summary metrics was lacking. The machine learning models had acceptable discrimination overall (Harrell's C index: XGBoost 0.821 (0.813 to 0.828, and 0.805 to 0.837); neural network 0.847 (0.835 to 0.858, and 0.816 to 0.878)), but had more complex patterns of miscalibration and more variable regional and stage specific performance. Decision curve analysis suggested that the Cox and competing risks regression models tested may have higher clinical utility than the two machine learning approaches.
Conclusion In women with breast cancer of any stage, using the predictors available in this dataset, regression based methods had better and more consistent performance compared with machine learning approaches and may be worthy of further evaluation for potential clinical use, such as for stratified follow-up.
Clinical prediction models already support medical decision making in breast cancer by providing individualised estimations of risk. Tools such as PREDICT Breast1 or the Nottingham Prognostic Index23 are used in patients with early stage, surgically treated breast cancer for prognostication and selection of post-surgical treatment. Such tools are, however, inherently limited to treatment specific subgroups of patients. Accurate estimation of mortality risk after diagnosis across all patients with breast cancer of any stage may be clinically useful for stratifying follow-up, counselling patients about their expected prognosis, or identifying high risk individuals suitable for clinical trials.4
The scope for machine learning approaches in clinical prediction modelling has attracted considerable interest.56789 Some have posited that these flexible approaches might be more suitable for capturing non-linear associations, or for handling higher order interactions without explicit programming.10 Others have raised concerns about model transparency,1112 interpretability,13 risk of algorithmic bias exacerbating extant health inequalities,14 quality of evaluation and reporting,15 ability to handle rare events16 or censoring,17 and appropriateness of comparisons11 to regression based methods.18 Indeed, systematic reviews have shown no inherent benefit of machine learning approaches over appropriate statistical models in low dimensional clinical settings.18 As no a priori method exists to predict which modelling approach may yield the most useful clinical prediction model for a given scenario, frameworks that appropriately compare different models can be used.
Owing to the risks of harm from suboptimal medical decision making, clinical prediction models should be comprehensively evaluated for performance and utility,19 and, if widespread clinical use is intended, heterogeneity in model performance across relevant patient groups should be explored.20 Given developments in treatment for breast cancer over time, with associated temporal falls in mortality, another key consideration is the transportability of risk models, not just across regions and subpopulations but also across time periods.21 Although such dataset shift22 is a common issue with any algorithm sought to be deployed prospectively, this is not routinely explored. Robust evaluation is necessary but is non-uniform in the modelling of breast cancer prognostication.23 A systematic review identified 58 papers that assessed prognostic models for breast cancer,24 but only one study assessed clinical effectiveness by means of a simplistic approach measuring the accuracy of classifying patients into high or low risk groups. A more recent systematic review25 appraised 922 breast cancer prediction models using PROBAST (prediction model risk of bias assessment tool)26 and found that most of the clinical prediction models are poorly reported, show methodological flaws, or are at high risk of bias. Of the 27 models deemed to be at low risk of bias, only one was intended to estimate the risks of breast cancer related mortality in women with disease of any stage.27 However, this small study of 287 women using data from a single health department in Spain had methodological limitations, including possibly insufficient data to fit a model (see supplementary table 1) and uncertain transportability to other settings. Therefore, no reliable prediction model exists to provide accurate risk assessment of mortality in women with breast cancer of any stage. Although we refer to women throughout, this is based on self-reported female sex, which may include some individuals who do not identify as female.
We aimed to develop a clinically useful prediction model to reliably estimate the risks of breast cancer specific mortality in any woman with a diagnosis of breast cancer, in line with modern best practice. Utilising data from 141765 women with invasive breast cancer diagnosed between 2000 and 2020 in England from a population representative, national linked electronic healthcare record database, this study comparatively developed and evaluated clinical prediction models using a combination of analysis methods within an internal-external validation strategy.2829 We sought to identify and compare the best performing methods for model discrimination, calibration, and clinical utility across all stages of breast cancer.
We evaluated four model building approaches: two regression methods (Cox proportional hazards and competing risks regression) and two machine learning methods (XGBoost and neural networks). The prediction horizon was 10 year risk of breast cancer related death from date of diagnosis. The study was conducted in accordance with our protocol30 and is reported consistent with the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) guidelines.31
Assuming 100 candidate predictor parameters, an annual mortality rate of 0.024 after diagnosis,32 and a conservative 15% of the maximal Cox-Snell R2, we estimated that the minimum sample size for fitting the regression models was 10080, with 1452 events, and 14.52 events for each predictor parameter.3334 No standard method exists to estimate minimum sample size for our machine learning models of interest; some evidence, albeit on binary outcome data, suggests that some machine learning methods may require much more data.35
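As a back-of-envelope check, the quoted figures reconcile arithmetically; the sketch below shows how the events-per-parameter figure follows from the stated quantities (the mean follow-up is inferred here to make the numbers agree, and is our assumption rather than a figure from the paper).

```python
# Sketch: reconciling the reported minimum sample size figures.
n_params = 100          # candidate predictor parameters
annual_rate = 0.024     # assumed annual mortality rate after diagnosis
n_min = 10080           # reported minimum sample size
events_min = 1452       # reported minimum number of events

events_per_param = events_min / n_params
print(f"events per predictor parameter: {events_per_param:.2f}")  # 14.52

# Mean follow-up implied by the numbers above (an inference, not a paper value)
implied_follow_up = events_min / (n_min * annual_rate)
print(f"implied mean follow-up: {implied_follow_up:.1f} years")  # ~6.0
```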
The QResearch database was used to identify an open cohort of women aged 20 years and older (no upper age limit) at time of diagnosis of any invasive breast cancer between 1 January 2000 and 31 December 2020 in England. QResearch has collected data from more than 1500 general practices in the United Kingdom since 1989 and comprises individual level linkage across general practice data, NHS Digital's Hospital Episode Statistics, the national cancer registry, and the Office for National Statistics death registry.
The outcome for this study was breast cancer related mortality within 10 years from the date of a diagnosis of invasive breast cancer. We defined the diagnosis of invasive breast cancer as the presence of breast cancer related Read/Systemised Nomenclature of Medicine Clinical Terms (SNOMED) codes in general practice records, breast cancer related ICD-10 (international classification of diseases, 10th revision) codes in Hospital Episode Statistics data, or as a patient with breast cancer in the cancer registry (stage >0; whichever occurred first). The outcome, breast cancer death, was defined as the presence of relevant ICD-10 codes as any cause of death (primary or contributory) on death certificates from the ONS register. We excluded women with recorded carcinoma in situ only diagnoses as these are non-obligate precursor lesions and present distinct clinical considerations.36 Clinical codes used to define predictors and outcomes are available in the QResearch code group library (https://www.qresearch.org/data/qcode-group-library/). Follow-up time was calculated from the first recorded date of breast cancer diagnosis (earliest recorded on any of the linked datasets) to the earliest of breast cancer related death, other cause of death, or censoring (reached end of study period, left the registered general practice, or the practice stopped contributing to QResearch). The status at last follow-up depended on the modelling framework (ie, Cox proportional hazards or competing risks framework). The maximum follow-up was truncated to 10 years, in line with the model prediction horizon. Supplementary table 2 shows ascertainment of breast cancer diagnoses across the linked datasets.
Individual participant data were extracted on the candidate predictor parameters listed in Box 1, as well as geographical region, auxiliary variables (breast cancer treatments), and dates of events of interest. Candidate predictors were based on evidence from the clinical, epidemiological, or prediction model literature.12337383940 The most recently recorded values before or at the time of breast cancer diagnosis were used with no time restriction. Data were available from the cancer registry about cancer treatment within one year of diagnosis (eg, chemotherapy) but without any corresponding date. The intended model implementation (prediction time) would be at the breast cancer multidisciplinary team meeting or similar clinical setting, following initial diagnostic investigations and staging. To avoid information leakage, and since we did not seek to model treatment selection within a causal framework,41 breast cancer treatment variables were not included as predictors.
Age at breast cancer diagnosis (continuous or fractional polynomial)
Townsend deprivation score at cohort entry (continuous or fractional polynomial)
Body mass index (most recently recorded before breast cancer diagnosis; continuous or fractional polynomial)
Self-reported ethnicity
Tumour characteristics:
Cancer stage at diagnosis (ordinal: I, II, III, IV)
Differentiation (categorical: well differentiated, moderately differentiated, poorly or undifferentiated)
Oestrogen receptor status (binary: positive or negative)
Progesterone receptor status (binary: positive or negative)
Human epidermal growth factor receptor 2 (HER2) status (binary: positive or negative)
Route to diagnosis (categorical: emergency presentation, inpatient elective, other, screen detected, two week wait)
Comorbidities or medical history on general practice or Hospital Episodes Statistics data (recorded before or at entry to cohort; categorical unless stated otherwise):
Hypertension
Ischaemic heart disease
Type 1 diabetes mellitus
Type 2 diabetes mellitus
Chronic liver disease or cirrhosis
Systemic lupus erythematosus
Chronic kidney disease (ordinal: none or stage 2, stage 3, stage 4, stage 5)
Vasculitis
Family history of breast cancer (categorical: recorded in general practice or Hospital Episodes Statistics data, before or at entry to cohort)
Drug use (before breast cancer diagnosis):
Hormone replacement therapy
Antipsychotic
Tricyclic antidepressant
Selective serotonin reuptake inhibitor
Monoamine oxidase inhibitor
Oral contraceptive pill
Angiotensin converting enzyme inhibitor
β blocker
Renin-angiotensin aldosterone antagonists
Interactions:
Age (fractional polynomial terms) × family history of breast cancer
Ethnicity × age (fractional polynomial terms)
Fractional polynomial42 terms for the continuous variables age at diagnosis, Townsend deprivation score, and body mass index (BMI) at diagnosis were identified in the complete data. This was done separately for the Cox and competing risks regression models, with a maximum of two powers permitted.
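The power selection can be pictured as a small grid search. Below is a minimal sketch on synthetic data, assuming the conventional fractional polynomial power set and scoring candidate pairs by the Cox partial log-likelihood; the paper's exact selection routine may differ.

```python
# Sketch: FP2 power selection for one continuous predictor (age).
from itertools import combinations_with_replacement

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # power 0 denotes log(x) by convention

def fp_terms(x, p1, p2):
    t = lambda p: np.log(x) if p == 0 else x ** p
    # repeated powers (p1 == p2) use x^p and x^p * log(x)
    return t(p1), t(p2) * (np.log(x) if p1 == p2 else 1.0)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.uniform(25, 90, 2000),
    "time": rng.exponential(8, 2000),
    "event": rng.integers(0, 2, 2000),
})

best_ll, best_powers = -np.inf, None
for p1, p2 in combinations_with_replacement(POWERS, 2):
    d = df[["time", "event"]].copy()
    d["fp1"], d["fp2"] = fp_terms(df["age"] / 10, p1, p2)  # scaling avoids overflow
    cph = CoxPHFitter().fit(d, duration_col="time", event_col="event")
    if cph.log_likelihood_ > best_ll:
        best_ll, best_powers = cph.log_likelihood_, (p1, p2)

print("selected FP powers for age:", best_powers)
```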
Multiple imputation with chained equations was used to impute missing data for BMI, ethnicity, Townsend deprivation score, smoking status, cancer stage at diagnosis, cancer grade at diagnosis, HER2 status, oestrogen receptor status, and progesterone receptor status under the missing at random assumption.4344 The imputation model contained all other candidate predictors, the endpoint indicator, breast cancer treatment variables, the Nelson-Aalen cumulative hazard estimate,45 and the period of cohort entry (period 1=1 January 2000-31 December 2009; period 2=1 January 2010-31 December 2020). The natural logarithm of BMI was used in imputation for normality, with imputed values exponentiated back to the regular scale for modelling. We generated 50 imputations and used these in all model fitting and evaluation steps. Although missing data were observed in the linked datasets used for model development, in the intended use setting (ie, risk estimation at breast cancer multidisciplinary team after a medical history has been taken), the predictors would be expected to be available for all patients.
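The imputation step might be sketched as follows, using scikit-learn's IterativeImputer as a stand-in for the chained-equations procedure the study ran in Stata; the column names and toy data are illustrative, not the paper's.

```python
# Sketch: 50 chained-equations imputations, with BMI imputed on the log scale.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_once(df, seed):
    work = df.copy()
    work["log_bmi"] = np.log(work.pop("bmi"))  # impute BMI on the log scale
    imp = IterativeImputer(random_state=seed, sample_posterior=True, max_iter=10)
    filled = pd.DataFrame(imp.fit_transform(work), columns=work.columns)
    filled["bmi"] = np.exp(filled.pop("log_bmi"))  # back to the natural scale
    return filled

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "bmi": np.where(rng.random(500) < 0.2, np.nan, rng.normal(27, 5, 500)),
    "age": rng.uniform(20, 90, 500),
    "nelson_aalen": rng.random(500),   # cumulative hazard estimate, as in the text
    "event": rng.integers(0, 2, 500),  # endpoint indicator
})
imputations = [impute_once(df, seed) for seed in range(50)]  # 50 imputed datasets
```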
Models were fit to the entire cohort and then evaluated using internal-external cross validation,28 which involved splitting the dataset by geographical region (n=10) and time period (see figure 1 for summary). For the internal-external cross validation, we recalculated follow-up so that those women who entered the study during the first study decade and survived into the second study period had their follow-up truncated (and status assigned accordingly) at 31 December 2009. This was to emulate two wholly temporally distinct datasets, both with maximum follow-up of 10 years, for the purposes of estimating temporal transportability of the models.
Summary of internal-external cross validation framework used to evaluate model performance for several metrics, and transportability
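Schematically, the IECV recipe described above can be read as the following loop; this is a simplified sketch in which fit() and evaluate() stand for any of the four model pipelines, `region` is assumed to be a column with the 10 geographical regions, and the paper's exact data flow (figure 1) combines these regional hold-outs with the period 1 / period 2 temporal split.

```python
# Sketch: internal-external cross validation over regions plus a temporal split.
import pandas as pd

def iecv(period1: pd.DataFrame, period2: pd.DataFrame, fit, evaluate):
    results = []
    for region in period1["region"].unique():   # hold out one region at a time
        held_out = period1["region"] == region
        model = fit(period1[~held_out])
        results.append(("region", region, evaluate(model, period1[held_out])))
    # temporal transportability: develop on period 1, test on held-out period 2
    model = fit(period1)
    results.append(("temporal", "period 2", evaluate(model, period2)))
    return results
```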
For the approach using Cox proportional hazards modelling, we treated other (non-breast cancer) deaths as censored. A full Cox model was fitted using all candidate predictor parameters. Model fitting was performed in each imputed dataset and the results combined using Rubin's rules, and then this pooled model was used as the basis for predictor selection. We selected binary or multilevel categorical predictors associated with exponentiated coefficients >1.1 or <0.9 (at P<0.01) for inclusion, and interactions and continuous variables were selected if associated with P<0.01. These were then used to refit the final Cox model. The predictor selection approach benefits from starting with a full, plausible, maximally complex model,46 and then considers both the clinical and the statistical magnitude of predictors to select a parsimonious model while making use of multiply imputed data.4748 This approach has been used in previous clinical prediction modelling studies using QResearch.495051 Clustered standard errors were used to account for clustering of participants within individual general practices in the database.
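The magnitude-and-significance selection rule lends itself to a small helper. The sketch below assumes a DataFrame of pooled (Rubin's rules) coefficients and p values; names and thresholds mirror the text, the data are made up.

```python
# Sketch: keep categorical predictors with HR >1.1 or <0.9 at P<0.01, and
# continuous terms or interactions at P<0.01.
import numpy as np
import pandas as pd

def select_predictors(pooled: pd.DataFrame) -> list[str]:
    keep = []
    for name, row in pooled.iterrows():
        hr = np.exp(row["coef"])  # exponentiated coefficient (hazard ratio)
        if row["type"] == "categorical":
            if (hr > 1.1 or hr < 0.9) and row["p"] < 0.01:
                keep.append(name)
        elif row["p"] < 0.01:     # continuous terms and interactions
            keep.append(name)
    return keep

pooled = pd.DataFrame(
    {"coef": [0.40, 0.05, -0.30], "p": [0.001, 0.5, 0.002],
     "type": ["categorical", "categorical", "continuous"]},
    index=["stage_IV", "hypertension", "age_fp1"],
)
print(select_predictors(pooled))  # ['stage_IV', 'age_fp1']
```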
Deaths from other, non-breast cancer related causes represent a competing risk and in this framework were handled accordingly.30 We repeated the fractional polynomial term selection and predictor selection processes for the competing risks models owing to potential differential associations between predictors and risk or functional forms thereof. A full model was fit with all candidate predictors, with the same magnitude and significance rule used to select the final predictors.
The competing risks model was developed using jack-knife pseudovalues for the Aalen-Johansen cumulative incidence function at 10 years as the outcome variable52; the pseudovalues were calculated for the overall cohort (for fitting the model) and then separately in the data from period 1 and from period 2 for the purposes of internal-external cross validation. These values are a marginal (pseudo) probability that can then be used in a regression model to predict individuals' probabilities conditional on the observed predictor values. Pseudovalues for the cumulative incidence function at 10 years were regressed on the predictor parameters in a generalised linear model with a complementary log-log link function525354 and robust standard errors to account for the non-independence of pseudovalues. The resultant coefficients are statistically similar to those of the Fine-Gray model5254 but computationally less burdensome to obtain, and permit direct modelling of probabilities.
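A minimal sketch of the jack-knife pseudovalue computation follows, using lifelines' AalenJohansenFitter on synthetic data; this is an O(n²) illustration of the definition, not the paper's implementation, and the event coding is an assumption.

```python
# Sketch: jack-knife pseudovalues for the Aalen-Johansen CIF at 10 years.
# Assumed event codes: 0 = censored, 1 = breast cancer death, 2 = other death.
import numpy as np
from lifelines import AalenJohansenFitter

def cif_at(durations, events, t, event_of_interest=1):
    """Aalen-Johansen cumulative incidence for the event of interest at time t."""
    ajf = AalenJohansenFitter().fit(durations, events, event_of_interest=event_of_interest)
    cif = ajf.cumulative_density_                 # step function indexed by time
    return float(cif[cif.index <= t].iloc[-1, 0])

def pseudovalues(durations, events, t=10.0):
    """pv_i = n * CIF_full(t) - (n - 1) * CIF_without_i(t)."""
    n = len(durations)
    full = cif_at(durations, events, t)
    pv = np.empty(n)
    for i in range(n):                            # leave-one-out estimates
        mask = np.arange(n) != i
        pv[i] = n * full - (n - 1) * cif_at(durations[mask], events[mask], t)
    return pv  # values may fall outside [0, 1]; that is expected for pseudovalues

rng = np.random.default_rng(2)
pv10 = pseudovalues(rng.exponential(8.0, 150), rng.integers(0, 3, 150))
```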
All fitting and evaluation of the Cox and competing risks regression models occurred in each separate imputed dataset, with Rubin's rules used to pool coefficients and standard errors across all imputations.55
The XGBoost and neural network approaches were adapted to handle right censored data in the setting of competing risks by using the jack-knife pseudovalues for the cumulative incidence function at 10 years as a continuous outcome variable. The same predictor parameters as selected for the competing risks regression model were used for the purposes of benchmarking. The XGBoost model used untransformed values for continuous predictors, but these were minimum-maximum scaled (constrained between 0 and 1) for the neural network. We converted categorical variables with more than two levels to dummy variables for both machine learning approaches.
We fit the XGBoost and neural network models to the entire available cohort and used bayesian optimisation56 with fivefold cross validation to identify the optimal configuration of hyperparameters to minimise the root mean squared error between observed pseudovalues and model predictions. Fifty iterations of bayesian optimisation were used, with the expected improvement acquisition function.
For the XGBoost model, we used bayesian optimisation to tune the number of boosting rounds, learning rate (eta), tree depth, subsample fraction, regularisation parameters (alpha, gamma, and lambda), and column sampling fractions (per tree, per level). We used the squared error regression option as the objective, and the root mean squared error as the evaluation metric.
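A sketch of this setup using xgboost's scikit-learn API is below. The hyperparameter values are placeholders (the tuned values are in the paper's table 4), and the cross-validated RMSE function is the kind of objective a bayesian optimiser would minimise over its 50 iterations; the optimiser itself (the paper used an R package) is omitted.

```python
# Sketch: XGBoost regression on pseudovalues with a squared-error objective.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

def cv_rmse(X, y, **params):
    model = XGBRegressor(objective="reg:squarederror", **params)
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

params = dict(                 # hyperparameters named in the text (values illustrative)
    n_estimators=500,          # boosting rounds
    learning_rate=0.05,        # eta
    max_depth=4,               # tree depth
    subsample=0.8,             # subsample fraction
    reg_alpha=0.1, gamma=0.1, reg_lambda=1.0,     # regularisation
    colsample_bytree=0.8, colsample_bylevel=0.8,  # column sampling per tree/level
)

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 26))
y = rng.normal(0.3, 0.2, size=1000)  # pseudovalue-like continuous target
print("5-fold CV RMSE:", cv_rmse(X, y, **params))
```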
To permit modelling of higher order interactions in this tabular dataset, we used a feed forward artificial neural network approach with fully connected dense layers: the model architecture comprised an input layer of 26 nodes (ie, number of predictor parameters), rectified linear unit activation functions in each hidden layer, and a single linear activation output node to generate predictions for the pseudovalues of the cumulative incidence function. The Adam optimiser was used,57 with the initial learning rate, number of hidden layers, number of nodes in each hidden layer, and number of training epochs tuned using bayesian optimisation. If the loss function had plateaued for three epochs, we halved the learning rate, with early stopping after five epochs if the loss function had not reduced by 0.0001. The loss function was the root mean squared error between observed and predicted pseudovalues due to the continuous nature of the target variable.58
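A hedged sketch of such a network in Keras follows, with the plateau and early-stopping rules as described; the hidden layer count and width stand in for the bayesian-optimisation results, and MSE is used as the training loss (it shares its optimum with RMSE), with RMSE tracked as a metric.

```python
# Sketch: feed-forward network with 26 inputs, ReLU hidden layers, linear output.
import numpy as np
from tensorflow import keras

def build_model(n_hidden=2, n_nodes=64, lr=1e-3):
    layers = [keras.Input(shape=(26,))]           # 26 predictor parameters
    layers += [keras.layers.Dense(n_nodes, activation="relu")
               for _ in range(n_hidden)]
    layers += [keras.layers.Dense(1, activation="linear")]  # pseudovalue output
    model = keras.Sequential(layers)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="mse",
                  metrics=[keras.metrics.RootMeanSquaredError()])
    return model

callbacks = [
    # halve the learning rate after a 3-epoch plateau ...
    keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3),
    # ... and stop early after 5 epochs without a 0.0001 improvement
    keras.callbacks.EarlyStopping(min_delta=0.0001, patience=5),
]

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 26)).astype("float32")
y = rng.normal(0.3, 0.2, size=(1000, 1)).astype("float32")  # pseudovalue-like target
build_model().fit(X, y, validation_split=0.2, epochs=100,
                  callbacks=callbacks, verbose=0)
```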
After identification of the optimal hyperparameter configurations, we fit the models accordingly to the entirety of the cohort data. We then assessed the performance of these models using the internal-external cross validation strategy; this resembled that for the regression models but with the addition of a hyperparameter tuning component (fig 1). During each iteration of internal-external cross validation, we used bayesian optimisation with fivefold cross validation to identify the optimal hyperparameters for the model fitted to the development data from period 1, which we then tested on the held-out period 2 data. This therefore constituted a form of nested cross validation.59
As the XGBoost and neural network models do not constitute a linear set of parameters and do not have standard errors (and therefore cannot be pooled using Rubin's rules), we used a stacked imputation strategy. The 50 imputed datasets were stacked to form a single, long dataset, which enabled us to use the same full data as for the regression models, avoiding suboptimal approaches such as complete case analysis or single imputation. For model evaluation after internal-external cross validation, we used approaches based on Rubin's rules,55 with performance estimates calculated in each separate imputed dataset using the internal-external cross validation generated individual predictions, and then the estimates were pooled.
Predicted risks when using the Cox model can be derived by combining the linear predictor with the baseline hazard function using the equation: predicted event probability = 1 − S0(t)^exp(Xβ), where S0(t) is the baseline survival function calculated at 10 years, and Xβ is the individual's linear predictor. For internal-external cross validation, we estimated baseline survival functions separately in each imputation in the period 1 data (continuous predictors centred at the mean, binary predictors set to zero), with results pooled across imputations in accordance with Rubin's rules.55 We estimated the final model's baseline function similarly but using the full cohort data.
Probabilistic predictions for the competing risks regression model were directly calculated using the following transformation of the linear predictor (Xβ, which included a constant term): predicted event probability = 1 − exp(−exp(Xβ)).
As the XGBoost and neural network approaches modelled the pseudovalues directly, we handled the generated predictions as probabilities (conditional on the predictor values). As pseudovalues are not restricted to lie between 0 and 1, we clipped the XGBoost and neural network model predictions to be between 0 and 1 to represent predicted probabilities for model evaluation.
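Taken together, the three transformations might look like this in code; s0_10y (baseline survival at 10 years) and the linear predictors are assumed inputs produced elsewhere.

```python
# Sketch: converting each model's output to a 10-year risk, per the equations above.
import numpy as np

def cox_risk(lp, s0_10y):
    """Cox: 1 - S0(t)^exp(X*beta), with predictors centred as described."""
    return 1.0 - s0_10y ** np.exp(lp)

def competing_risks_risk(lp):
    """Competing risks GLM: 1 - exp(-exp(X*beta)); lp includes the constant term."""
    return 1.0 - np.exp(-np.exp(lp))

def ml_risk(pseudovalue_pred):
    """Pseudovalue regressions may stray outside [0, 1]; clip for evaluation."""
    return np.clip(pseudovalue_pred, 0.0, 1.0)

lp = np.array([-1.2, 0.0, 0.8])
print(cox_risk(lp, s0_10y=0.85))               # ~[0.048, 0.150, 0.304]
print(competing_risks_risk(lp))
print(ml_risk(np.array([-0.05, 0.31, 1.02])))  # -> [0.0, 0.31, 1.0]
```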
Discrimination was assessed using Harrell's C index,60 calculated at 10 years and taking censoring into account; this used inverse probability of censoring weights for competing risks regression, XGBoost, and neural networks given their competing risks formulation.61 Calibration was summarised in terms of the calibration slope and calibration-in-the-large.6263 Region level results for these metrics were computed during internal-external cross validation and pooled using random effects meta-analysis20 with the Hartung-Knapp-Sidik-Jonkman method64 to provide an estimate of each metric with a 95% confidence interval, and with a 95% prediction interval. The prediction interval estimates the range of model performance on application to a distinct dataset.20 We also computed these metrics by ethnicity, 10 year age groups, and cancer stage (I-IV) using the pooled, individual level predictions.
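For intuition, here is a compact sketch of Hartung-Knapp-Sidik-Jonkman-style random effects pooling with a 95% prediction interval. A DerSimonian-Laird tau² is used for simplicity (the paper's exact estimator may differ), and the region-level estimates are made up.

```python
# Sketch: HKSJ random effects pooling of region-level performance metrics.
import numpy as np
from scipy import stats

def hksj_pool(theta, se):
    k = len(theta)
    w = 1 / se**2
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # DerSimonian-Laird tau^2
    w_star = 1 / (se**2 + tau2)
    pooled = np.sum(w_star * theta) / np.sum(w_star)
    # HKSJ variance, with a t(k-1) reference distribution for the CI
    var_hksj = np.sum(w_star * (theta - pooled) ** 2) / ((k - 1) * np.sum(w_star))
    t_ci = stats.t.ppf(0.975, k - 1)
    ci = (pooled - t_ci * np.sqrt(var_hksj), pooled + t_ci * np.sqrt(var_hksj))
    # approximate 95% prediction interval for a new region/dataset (t with k-2 df)
    pi_sd = np.sqrt(tau2 + var_hksj)
    t_pi = stats.t.ppf(0.975, k - 2)
    pi = (pooled - t_pi * pi_sd, pooled + t_pi * pi_sd)
    return pooled, ci, pi

theta = np.array([0.86, 0.85, 0.87, 0.84, 0.86, 0.85, 0.88, 0.85, 0.86, 0.87])
se = np.full(10, 0.01)  # made-up region-level C indices and standard errors
print(hksj_pool(theta, se))
```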
Using the individual level predictions from all models, we generated smoothed calibration plots to assess alignment of observed and predicted risks across the spectrum of predicted risks. We generated these using a running smoother through individual risk predictions, and observed individual pseudovalues65 for the Kaplan-Meier failure function (Cox model) or cumulative incidence function (all other models).
Meta-regression following Hartung-Knapp-Sidik-Jonkman random effects models was used to calculate measures of I2 and R2 to assess the extent to which inter-regional heterogeneity in discrimination and calibration metrics could be attributable to regional variation in age, BMI (standard deviation thereof), mean deprivation score, and ethnic diversity (percentage of people of non-white ethnicity).20 These region level characteristics were estimated using the data from period 2.
We compared the models for clinical utility using decision curve analysis.66 This analysis assesses the trade-off between the benefits of true positives (breast cancer deaths) and the potential harms that may arise from false positives across a range of threshold probabilities. Each model was compared using the two default scenarios of treat all or treat none, with the mean model prediction used for each individual across all imputations. This approach implicitly takes into account both discrimination and calibration and also extends model evaluation to consider the ramifications on clinical decision making.67 The competing risk of other, non-breast-cancer death was taken into account. Decision curves were plotted overall, and by cancer stage to explore potential utility for all breast cancers.
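The net benefit calculation underlying a decision curve is short; below is a sketch without the competing risks adjustment that the paper applied, on simulated data with illustrative names.

```python
# Sketch: net benefit = TP/n - FP/n * pt/(1 - pt), across threshold probabilities.
import numpy as np

def net_benefit(risk, death, thresholds):
    n = len(risk)
    out = []
    for pt in thresholds:
        treat = risk >= pt                 # "treat" anyone at or above threshold
        tp = np.sum(treat & (death == 1))  # true positives (breast cancer deaths)
        fp = np.sum(treat & (death == 0))  # false positives
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

thresholds = np.linspace(0.05, 0.5, 10)
rng = np.random.default_rng(5)
risk = rng.random(1000)                          # mean predicted 10-year risks
death = (rng.random(1000) < risk).astype(int)    # simulated outcomes
nb_model = net_benefit(risk, death, thresholds)
nb_treat_all = net_benefit(np.ones(1000), death, thresholds)  # "treat all" reference
```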
Predictions generated from the Cox proportional hazards model and other, competing risks approaches have different interpretations, owing to their differential handling of competing events and their modelling of hazard functions with distinct statistical properties.
Data processing, multiple imputation, regression modelling, and evaluation of internal-external cross validation results utilised Stata (version 17). Machine learning modelling was performed in R 4.0.1 (xgboost, keras, and ParBayesianOptimization packages), with an NVIDIA Tesla V100 used for graphical processing unit support. Analysis code is available in repository https://github.com/AshDF91/Breast-cancer-prognosis.
Two people who survived breast cancer were involved in discussions about the scope of the project, candidate predictors, importance of research questions, and co-creation of lay summaries before submitting the project for approval. This project was also presented at an Oxfordshire based breast cancer support group to obtain qualitative feedback on the study's aims and face validity or plausibility of candidate predictors, and to discuss the acceptability of clinical risk models to guide stratified breast cancer care.
A total of 141765 women aged between 20 and 97 years at date of breast cancer diagnosis were included in the study. During the entirety of follow-up (median 4.16 (interquartile range 1.76-8.26) years), there were 21688 breast cancer related deaths and 11454 deaths from other causes. Restricting to 10 years maximum follow-up from breast cancer diagnosis, 20367 breast cancer related deaths occurred during a total of 688564.81 person years. The crude mortality rate was 295.79 per 10000 person years (95% confidence interval 291.75 to 299.88). Supplementary figure 1 presents ethnic group specific mortality curves. Table 1 shows the baseline characteristics of the cohort overall and separately by decade defined subcohort.
Summary characteristics of final study cohort overall and separated into temporally distinct subcohorts used in internal-external cross validation. Values are number (column percentage) unless stated otherwise
After the cohort was split by decade of cohort entry and follow-up was truncated for the purposes of internal-external cross validation, 7551 breast cancer related deaths occurred in period 1 during a total of 211006.95 person years of follow-up (crude mortality rate 357.96 per 10000 person years (95% confidence interval 349.87 to 366.02)). In the period 2 data, 8808 breast cancer related deaths occurred during a total of 297066.74 person years of follow-up, with a lower crude mortality rate of 296.50 per 10000 person years (290.37 to 302.76) observed.
We selected non-linear fractional polynomial terms for age and BMI (see supplementary figure 2). The final Cox model after predictor selection is presented as exponentiated coefficients in figure 2 for transparency, with the full model detailed in supplementary table 3. Model performance across all ethnic groups is summarised in supplementary table 4: discrimination ranged between a Harrell's C index of 0.794 (95% confidence interval 0.691 to 0.896) in Bangladeshi women to 0.931 (0.839 to 1.000) in Chinese women, but the low numbers of event counts in smaller ethnic groups (eg, Chinese) meant that overall calibration indices were imprecisely estimated for some.
Final Cox proportional hazards model predicting 10 year risk of breast cancer mortality, presented as its exponentiated coefficients (hazard ratios with 95% confidence intervals). Model contains fractional polynomial terms for age (0.5, 2) and body mass index (2, 2), but these are not plotted owing to reasons of scale. Model also includes a baseline survival term (not plotted; the full model as coefficients is presented in the supplementary file). ACE=angiotensin converting enzyme; CI=confidence interval; CKD=chronic kidney disease; ER=oestrogen receptor; GP=general practitioner; HER2=human epidermal growth factor receptor 2; HRT=hormone replacement therapy; PR=progesterone receptor; RAA=renin-angiotensin aldosterone; SSRI=selective serotonin reuptake inhibitor
Overall, the Cox model's random effects meta-analysis pooled estimate for Harrell's C index was the highest of any model, at 0.858 (95% confidence interval 0.853 to 0.864, 95% prediction interval 0.843 to 0.873). A small degree of miscalibration occurred on summary metrics, with a meta-analysis pooled estimate for the calibration slope of 1.108 (95% confidence interval 1.079 to 1.138, 95% prediction interval 1.034 to 1.182) (table 2). Figure 3, figure 4, and figure 5 show the meta-analysis pooling of performance metrics across regions. Smoothed calibration plots showed generally good alignment of observed and predicted risks across the entire spectrum of predicted risks, albeit with some minor over-prediction (fig 6).
Summary performance metrics for all four models, estimated using random effects meta-analysis after internal-external cross validation.
Results from internal-external cross validation of Cox proportional hazards model for Harrell's C index. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval
Results from internal-external cross validation of Cox proportional hazards model for calibration slope. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval
Results from internal-external cross validation of Cox proportional hazards model for calibration-in-the-large. Plots display region level performance metric estimates and 95% confidence intervals (diamonds with lines), and an overall pooled estimate obtained using random effects meta-analysis and 95% confidence interval (lowest diamond) and 95% prediction interval (line through lowest diamond). CI=confidence interval
Calibration of the four models tested. Top row shows the alignment between predicted and observed risks for all models with smoothed calibration plots. Bottom row summarises the distribution of predicted risks from each model as histograms
Regional differences in Harrell's C index were relatively slight. None of the inter-region heterogeneity observed for discrimination (I2=53.14%) and calibration (I2=42.35%) appeared to be attributable to regional variation in any of the sociodemographic factors examined (table 3). The model discriminated well across cancer stages, but discriminative capability decreased with increasing stage; moderate variation was observed in calibration across cancer stage groups (supplementary table 9).
Random effects meta-regression of relative contributions of regional variation in age, body mass index, deprivation, and non-white ethnicity on inter-regional differences in performance metrics after internal-external cross validation
Similar fractional polynomial terms were selected for age and BMI in the competing risks regression model (see supplementary figure 2), and predictor selection yielded a model with fewer predictors than the Cox model. The competing risks regression model is presented as exponentiated coefficients in figure 7, with the full model (including constant term) detailed in supplementary table 5. Ethnic group specific discrimination and overall calibration metrics are detailed in supplementary table 4; the model generally performed well across ethnic groups, with similar discrimination, but there was some overt miscalibration on summary metrics, although some metrics were estimated imprecisely owing to small event counts in some ethnic groups.
Final competing risks regression model predicting 10 year risk of breast cancer mortality, presented as its exponentiated coefficients (subdistribution hazard ratios with 95% confidence intervals). Model contains fractional polynomial terms for age (1, 2) and body mass index (2, 2), but these are not plotted owing to reasons of scale. Model also includes an intercept term (not plotted; see supplementary file for the full model as coefficients). CI=confidence interval; ER=oestrogen receptor; GP=general practitioner; HER2=human epidermal growth factor receptor 2; HRT=hormone replacement therapy; PR=progesterone receptor
The random effects meta-analysis pooled Harrell's C index was 0.849 (95% confidence interval 0.839 to 0.859, 95% prediction interval 0.821 to 0.876). Some evidence suggested systematic miscalibration overall; that is, a pooled calibration slope of 1.160 (95% confidence interval 1.064 to 1.255, 95% prediction interval 0.872 to 1.447). Smoothed calibration plots showed underestimation of risk at the highest predicted values (eg, predicted risk >40%, fig 6). Supplementary figure 3 displays regional performance metrics.
An estimated 41.33% of the regional variation in Harrell's C index for the competing risks regression model was attributable to inter-regional case mix (table 3); ethnic diversity was the leading sociodemographic factor associated therewith (table 3). For calibration, the I2 from the full meta-regression model was 56.68%, with regional variation in age, deprivation, and ethnic diversity associated therewith. Similar to the Cox model, discrimination tended to decrease with increasing cancer stage (supplementary table 9).
Table 4 summarises the selected hyperparameter configuration for the final XGBoost model. The discrimination of this model appeared acceptable overall,68 albeit lower than for both regression models (table 2; supplementary figure 4), with a meta-analysis pooled Harrell's C index of 0.821 (95% confidence interval 0.813 to 0.828, 95% prediction interval 0.805 to 0.837). Pooled calibration metrics suggested some mild systematic miscalibration; for example, the meta-analysis pooled calibration slope was 1.084 (95% confidence interval 1.003 to 1.165, 95% prediction interval 0.842 to 1.326). Calibration plots showed miscalibration across much of the predicted risk spectrum (fig 6), with overestimation in those with predicted risks <0.4 (most of the individuals) before mixed underestimation and overestimation in the patients at highest risk. Discrimination and calibration were poor for stage IV tumours (see supplementary table 9). Regarding regional variation in performance metrics as a result of differences between regions, most of the variation in calibration was attributable to ethnic diversity, followed by regional differences in age (table 3).
Description of machine learning model architectures and hyperparameter tuning performed
Table 4 summarises the selected hyperparameter configuration for the final neural network. This model performed better than XGBoost for overall discrimination; the meta-analysis pooled Harrell's C index was 0.847 (95% confidence interval 0.835 to 0.858, 95% prediction interval 0.816 to 0.878, table 2 and supplementary figure 5). Post-internal-external cross validation pooled estimates of summary calibration metrics suggested no systematic miscalibration overall, such as a calibration slope of 1.037 (95% confidence interval 0.910 to 1.165), but heterogeneity was more noticeable across regions, manifesting in the wide 95% prediction interval (slope: 0.624 to 1.451), and smoothed calibration plots showed a complex pattern of miscalibration (fig 6). Meta-regression estimated that the leading factor associated with inter-regional variation in discrimination and calibration metrics was regional differences in ethnic diversity (table 3).
Both the XGBoost and neural network approaches showed erratic calibration across cancer stage groups, especially major miscalibration in stage III and IV tumours, such as a slope for the neural network of 0.126 (95% confidence interval 0.005 to 0.247) in stage IV tumours (see supplementary table 9). Overall decision curves showed that when accounting for competing risks, net benefit was generally better for the regression models, and the neural network had the lowest clinical utility; when not accounting for competing risks, the regression models had higher net benefit across the threshold probabilities examined (fig 8). Lastly, the clinical utility of the machine learning models was variable across tumour stages, with null or negative net benefit compared with the "treat all" scenario for stage IV tumours (see supplementary figure 6).
Decision curves to assess clinical utility (net benefit) of using each model. Top plot accounts for the competing risk of other cause mortality. Bottom plot does not account for competing risks
Table 5 illustrates the predictions obtained using the Cox and competing risks regression models for different sample scenarios. When relevant, these are compared with predictions for the same clinical scenarios from PREDICT Breast and the Adjutorium model (obtained using their web calculators: https://breast.predict.nhs.uk/ and https://adjutorium-breastcancer.herokuapp.com).
Risk predictions from Cox and competing risks regression models developed in this study for illustrative clinical scenarios, compared where relevant with PREDICT and Adjutorium*
This study developed and evaluated four models to estimate 10 year risk of breast cancer death after diagnosis of invasive breast cancer of any stage. Although the regression approaches yielded models that discriminated well and were associated with favourable net benefit overall, the machine learning approaches yielded models that performed less uniformly. For example, the XGBoost and neural network models were associated with negative net benefit at some thresholds in stage I tumours, were miscalibrated in stage III and IV tumours, and exhibited complex miscalibration across the spectrum of predicted risks.
Study strengths include the use of linked primary and secondary healthcare datasets for case ascertainment, identification of clinical diagnoses using accurately coded data, and avoidance of selection and recall biases. Use of centralised national mortality registries was beneficial for ascertainment of the endpoint and competing events. Our methodology enabled the adaptation of machine learning models to handle time-to-event data with competing risks and inclusion of multiple imputation so that all models benefitted from maximal available information, and the internal-external cross validation framework28 permitted robust assessment of model performance and heterogeneity across time, place, and population groups.
Novel machine learning tool IDs early biomarkers of Parkinson’s |… – Parkinson’s News Today
A novel machine learning tool, called CRANK-MS, was able to identify, with high accuracy, people who would go on to develop Parkinson's disease, based on an analysis of blood molecules.
The algorithm identified several molecules that may serve as early biomarkers of Parkinson's.
These findings show the potential of artificial intelligence (AI) to improve healthcare, according to researchers from the University of New South Wales (UNSW), in Australia, who are developing the machine learning tool with colleagues from Boston University, in the U.S.
"The application of CRANK-MS to detect Parkinson's disease is just one example of how AI can improve the way we diagnose and monitor diseases," Diana Zhang, a study co-author from UNSW, said in a press release.
The study, "Interpretable Machine Learning on Metabolomics Data Reveals Biomarkers for Parkinson's Disease," was published in ACS Central Science.
Parkinson's disease is currently diagnosed based on the symptoms a person is experiencing; there isn't a biological test that can definitively identify the disease. Many researchers are working to identify biomarkers of Parkinson's, which might be measured to help identify the neurodegenerative disorder or predict the risk of developing it.
Here, the international team of researchers used machine learning to analyze metabolomic data, that is, large-scale analyses of the levels of thousands of different molecules detected in patients' blood, to identify Parkinson's biomarkers.
The analysis used blood samples collected from the Spanish European Prospective Investigation into Cancer and Nutrition (EPIC). There were 39 samples from people who would go on to develop Parkinson's after up to 15 years of follow-up, and another 39 samples from people who did not develop the disorder over follow-up. The metabolomic makeup of the samples was assessed with a chemical analysis technique called mass spectrometry.
In the simplest terms, machine learning involves feeding a computer a bunch of data, alongside a set of goals and mathematical rules called algorithms. Based on the rules and algorithms, the computer determines or learns how to make sense of the data.
This study specifically used a form of machine learning algorithm called a neural network. As the name implies, the algorithm is structured with a similar logical flow to how data is processed by nerve cells in the brain.
Machine learning has been used to analyze metabolomic data before. However, previous studies have generally not used wide-scale metabolomic data; instead, scientists selected specific markers of interest to include, while not including data for other markers.
Such limits were used because wide-scale metabolomic data typically covers thousands of different molecules, and there's a lot of variation, so-called noise, in the data. Prior machine learning algorithms have generally had poor results when using such noisy data, because it's hard for the computer to detect meaningful patterns amidst all the random variation.
The researchers' new algorithm, CRANK-MS (short for Classification and Ranking Analysis using Neural network generates Knowledge from Mass Spectrometry), has a better ability to sort through the noise, and was able to provide high-accuracy results using full metabolomic data.
"Typically, researchers using machine learning to examine correlations between metabolites and disease reduce the number of chemical features first, before they feed it into the algorithm," said W. Alexander Donald, PhD, a study co-author from UNSW, in Sydney.
But here, Donald said, "we feed all the information into CRANK-MS without any data reduction right at the start. And from that, we can get the model prediction and identify which metabolites are driving the prediction the most, all in one step."
"Including all molecules available in the dataset means that if there are metabolites [molecules] which may potentially have been missed using conventional approaches, we can now pick those up," Donald said.
The researchers stressed that further validation is needed to test the algorithm. But in their preliminary tests, CRANK-MS was able to differentiate between Parkinsons and non-Parkinsons individuals with an accuracy of up to about 96%.
In further analyses, the researchers determined which molecules were picked up by the algorithm as the most important for identifying Parkinsons.
There were several noteworthy findings: for example, patients who went on to develop Parkinson's tended to have lower levels of a triterpenoid chemical known to have nerve-protecting properties. That substance is found at high levels in foods like apples, olives, and tomatoes.
Further, these patients also often had high levels of polyfluorinated alkyl substances (PFAS), which may be a marker of exposure to industrial chemicals.
"These data indicate that these metabolites are potential early indicators for PD [Parkinson's disease] that predate clinical PD diagnosis and are consistent with specific food diets (such as the Mediterranean diet) for PD prevention and that exposure to [PFASs] may contribute to the development of PD," the researchers wrote. The team noted a need for further research into these potential biomarkers.
The scientists have made the CRANK-MS algorithm publicly available for other researchers to use. The team says this algorithm likely has applications far beyond Parkinsons.
"We've built the model in such a way that it's fit for purpose," Zhang said. "What's exciting is that CRANK-MS can be readily applied to other diseases to identify new biomarkers of interest. The tool is user-friendly, where, on average, results can be generated in less than 10 minutes on a conventional laptop."
Study Finds Four Predictive Lupus Disease Profiles Using Machine … – Lupus Foundation of America
A new study using machine learning (ML) identified four distinct lupus disease profiles or autoantibody clusters that are predictive of long-term disease, treatment requirements, organ involvement, and risk of death. Machine learning refers to the process by which a machine or computer can imitate human behavior to learn and optimize complicated tasks such as statistical analysis and predictive modeling using large datasets. Autoantibodies are antibodies produced by the immune system and directed against the body's own proteins; they are often a cause or marker of many autoimmune diseases, including lupus.
Researchers observed 805 people with lupus, looking at demographic, clinical, and laboratory data within 15 months of their diagnosis, then again at three years and five years with the disease. After analyzing the data, the researchers used predictive ML, which revealed four distinct clusters or lupus disease profiles associated with important lupus outcomes.
Further studies are needed to determine other lupus biomarkers and understand disease pathogenesis through ML approaches. The researchers suggest ML studies can also help to inform diagnosis and treatment strategies for people with lupus. Learn more about lupus research.
Machine learning-guided determination of Acinetobacter density in … – Nature.com
A descriptive summary of the physicochemical variables and Acinetobacter density of the waterbodies is presented in Table 1. The mean pH, EC, TDS, and SAL of the waterbodies was 7.76 ± 0.02, 218.66 ± 4.76 µS/cm, 110.53 ± 2.36 mg/L, and 0.10 ± 0.00 PSU, respectively. While the average TEMP, TSS, TBS, and DO of the rivers was 17.29 ± 0.21 °C, 80.17 ± 5.09 mg/L, 87.51 ± 5.41 NTU, and 8.82 ± 0.04 mg/L, respectively, the corresponding DO5, BOD, and AD was 4.82 ± 0.11 mg/L, 4.00 ± 0.10 mg/L, and 3.19 ± 0.03 log CFU/100 mL, respectively.
The bivariate correlation between paired PVs varied significantly from very weak to perfect/very strong positive or negative correlation (Table 2). In the same manner, the correlation between various PVs and AD varies. For instance, a negligible but positive very weak correlation exists between AD and pH (r=0.03, p=0.422) and SAL (r=0.06, p=0.184), as well as very weak inverse (negative) correlations between AD and TDS (r=−0.05, p=0.243) and EC (r=−0.04, p=0.339). A significantly positive but weak correlation occurs between AD and BOD (r=0.26, p=4.21E−10), TSS (r=0.26, p=1.09E−09), and TBS (r=0.26, p=1.71E−09), whereas AD had a weak inverse correlation with DO5 (r=−0.39, p=1.31E−21). While there was a moderate positive correlation between TEMP and AD (r=0.43, p=3.19E−26), a moderate but inverse correlation occurred between AD and DO (r=−0.46, p=1.26E−29).
The predicted AD by the 18 ML regression models varied both in average value and coverage (range), as shown in Fig. 1. The average predicted AD ranged from −0.0056 log units by M5P to 3.2112 log units by SVR. The average AD prediction declined from SVR [3.2112 (1.4646-4.4399)], DTR [3.1842 (2.2312-4.3036)], ENR [3.1842 (2.1233-4.8208)], NNT [3.1836 (1.1399-4.2936)], BRT [3.1833 (1.6890-4.3103)], RF [3.1795 (1.3563-4.4514)], XGB [3.1792 (1.1040-4.5828)], MARS [3.1790 (1.1901-4.5000)], LR [3.1786 (2.1895-4.7951)], LRSS [3.1786 (2.1622-4.7911)], GBM [3.1738 (1.4328-4.3036)], Cubist [3.1736 (1.1012-4.5300)], ELM [3.1714 (2.2236-4.9017)], KNN [3.1657 (1.4988-4.5001)], ANET6 [0.6077 (0.0419-1.1504)], ANET33 [0.6077 (0.0950-0.8568)], ANET42 [0.6077 (0.0692-0.8568)], and M5P [−0.0056 (−0.6024 to 0.6916)]. However, in terms of range coverage, XGB [3.1792 (1.1040-4.5828)] and Cubist [3.1736 (1.1012-4.5300)] outshone the other models, because those models overestimated and underestimated AD at lower and higher values, respectively, when compared with the raw data [3.1865 (1-4.5611)].
Comparison of ML model-predicted AD in the waterbodies. RAW = raw/empirical AD value.
Figure 2 represents the explanatory contributions of PVs to AD prediction by the models. The subplots A-R give the absolute magnitude (representing parameter importance) by which a PV instance changes AD prediction by each model from its mean value presented on the vertical axis. In LR, an absolute change from the mean value of pH, BOD, TSS, DO, SAL, and TEMP corresponded to an absolute change of 0.143, 0.108, 0.069, 0.0045, 0.04, and 0.004 units in the LR's AD prediction response/value. Also, an absolute response flux of 0.135, 0.116, 0.069, 0.057, 0.043, and 0.0001 in AD prediction value was attributed to pH, BOD, TSS, DO, SAL, and TEMP changes, respectively, by LRSS. Similarly, absolute changes in DO, BOD, TEMP, TSS, pH, and SAL would achieve 0.155, 0.061, 0.099, 0.144, and 0.297 AD prediction response changes by KNN. In addition, the most important PV, whose change most influenced the AD prediction response, was TEMP (increasing or decreasing the response by up to 0.218) in RF. In summary, AD prediction response changes were highest and most significantly influenced by BOD (0.209), pH (0.332), TSS (0.265), TEMP (0.6), TSS (0.233), SAL (0.198), BOD (0.127), BOD (0.11), DO (0.028), pH (0.114), pH (0.14), SAL (0.91), and pH (0.427) in XGB, BRT, NNT, DTR, SVR, M5P, ENR, ANET33, ANET64, ANET6, ELM, MARS, and Cubist, respectively.
PV-specific contributions to the eighteen ML models' capability to forecast AD in MHWE-receiving waterbodies. The average baseline value of each PV in the ML model is presented on the y-axis. The green/red bars represent the absolute value of each PV's contribution in predicting AD.
Table 4 presents the performance of the eighteen regression algorithms in predicting AD from the waterbodies' PVs. In terms of MSE, RMSE, and R2, XGB (MSE=0.0059, RMSE=0.0770, R2=0.9912) and Cubist (MSE=0.0117, RMSE=0.1081, R2=0.9827) ranked first and second, respectively, outperforming the other models in predicting AD. While the MSE and RMSE metrics ranked ANET6 (MSE=0.0172, RMSE=0.1310), ANET42 (MSE=0.0220, RMSE=0.1483), ANET33 (MSE=0.0253, RMSE=0.1590), M5P (MSE=0.0275, RMSE=0.1657), and RF (MSE=0.0282, RMSE=0.1679) in the 3rd, 4th, 5th, 6th, and 7th positions among the MLs in predicting AD, M5P (R2=0.9589) and RF (R2=0.9584) recorded better performance among these five models in terms of the R-squared metric, as did ANET6 (MAD=0.0856) and M5P (MAD=0.0863) in terms of the MAD metric. Overall, however, Cubist (MAD=0.0437) and XGB (MAD=0.0440) led in terms of the MAD metric.
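The scores in Table 4 are standard regression metrics; a small helper like the following computes them for any model's held-out predictions. This is a sketch that treats MAD as the mean absolute deviation of residuals (i.e., MAE); the paper may define it slightly differently:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error

def report(name, y_true, y_pred):
    """Print MSE, RMSE, R2, and MAD for one model's predictions."""
    mse = mean_squared_error(y_true, y_pred)
    print(f"{name}: MSE={mse:.4f}, RMSE={np.sqrt(mse):.4f}, "
          f"R2={r2_score(y_true, y_pred):.4f}, "
          f"MAD={mean_absolute_error(y_true, y_pred):.4f}")
```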
The feature importance of each PV, assessed via permutational resampling of the ML models' capability to predict AD in the waterbodies, is presented in Table 3 and Fig. S1. The identified important variables ranked differently from one model to another, with temperature ranking first in 10 of the 18 models. In these 10 models, temperature was responsible for the highest mean RMSE dropout loss, accounting for 0.4222 (45.90%), 0.4588 (43.00%), 0.5294 (50.82%), 0.3044 (44.87%), and 0.2424 (68.77%) in RF, XGB, Cubist, BRT, and NNT, respectively, while RMSE dropout losses of 0.1143 (82.31%), 0.1384 (83.30%), 0.1059 (57.00%), 0.4656 (50.58%), and 0.2682 (57.58%) were attributed to temperature in ANET42, ANET6, ELM, M5P, and DTR, respectively. Temperature also ranked second in 2 of the 18 models, namely ANET33 (0.0559, 45.86%) and GBM (0.0793, 21.84%). BOD was another important variable in forecasting AD in the waterbodies, ranking first in 3 of 18 and second in 8 of 18 models. While BOD ranked as the most important variable for AD prediction in MARS (0.9343, 182.96%), LR (0.0584, 27.42%), and GBM (0.0812, 22.35%), it ranked second in KNN (0.2660, 42.69%), XGB (0.4119, 38.60%), BRT (0.2206, 32.51%), ELM (0.0430, 23.17%), SVR (0.1869, 35.77%), DTR (0.1636, 35.13%), ENR (0.0469, 21.84%), and LRSS (0.0669, 31.65%). SAL ranked first in 2 of 18 (KNN: 0.2799; ANET33: 0.0633) and second in 3 of 18 (Cubist: 0.3795; ANET42: 0.0946; ANET6: 0.1359) models. DO ranked first in 2 of 18 (ENR [0.0562, 26.19%] and LRSS [0.0899, 42.51%]) and second in 3 of 18 (RF [0.3240, 35.23%], M5P [0.3704, 40.23%], LR [0.0584, 27.41%]) models.
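Permutational feature importance of this kind (the increase in RMSE when a single PV is shuffled) can be approximated with scikit-learn, continuing the fitted `model`, `X_te`, `y_te`, and `PV_COLUMNS` from the sketch above:

```python
from sklearn.inspection import permutation_importance

# Mean RMSE dropout loss per PV over repeated permutations.
result = permutation_importance(
    model, X_te, y_te,
    scoring="neg_root_mean_squared_error",
    n_repeats=30, random_state=42,
)
for pv, loss in sorted(zip(PV_COLUMNS, result.importances_mean),
                       key=lambda t: -t[1]):
    print(f"{pv}: mean RMSE dropout loss = {loss:.4f}")
```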
Figure 3 shows the residual diagnostic plots comparing actual AD and model-forecasted AD values. The actual and predicted AD values were skewed, and the smoothed trends did not overlap, in the cases of LR (A), LRSS (B), KNN (C), BRT (F), GBM (G), NNT (H), DTR (I), SVR (J), ENR (L), ANET33 (M), ANET42 (N), ANET6 (O), ELM (P), and MARS (Q). However, actual and predicted AD values aligned more closely, with an approximately overlapping smoothed trend, in RF (D), XGB (E), M5P (K), and Cubist (R). Among the models, RF (D) and M5P (K) overestimated predicted AD at lower values and underestimated it at higher values, whereas XGB and Cubist both overestimated AD at lower values, with XGB closer to the smoothed trend than Cubist. Generally, a smoothed trend overlapping the gradient line is desirable, as it shows that a model fits all values accurately.
Comparison between actual and predicted AD by the eighteen ML models.
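Residual diagnostics like Fig. 3 can be sketched by plotting predicted against actual AD with a 1:1 gradient line and a LOWESS-smoothed trend. This sketch assumes seaborn with statsmodels installed (for the LOWESS fit) and reuses `y_te` and `pred` from the sketches above:

```python
import matplotlib.pyplot as plt
import seaborn as sns

fig, ax = plt.subplots()
# Scatter of actual vs predicted AD with a LOWESS-smoothed trend.
sns.regplot(x=y_te, y=pred, lowess=True, ax=ax,
            scatter_kws={"alpha": 0.4}, line_kws={"color": "red"})
# The 1:1 "gradient line": a perfectly fitting model lies on it.
lims = [min(y_te.min(), pred.min()), max(y_te.max(), pred.max())]
ax.plot(lims, lims, "k--", label="1:1 line")
ax.set_xlabel("Actual AD (log CFU/100 mL)")
ax.set_ylabel("Predicted AD (log CFU/100 mL)")
ax.legend()
plt.show()
```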
The partial-dependence profiles of the PVs on AD prediction by the 18 models, presented one model per PV panel for clarity, are compared in Figs. S2–S7. The profiles took one of four forms: (i) an upward trend, where an average increase in AD prediction accompanied an increase in a PV; (ii) an inverse trend, where an increase in a PV resulted in a decline in AD prediction; (iii) a horizontal trend, where an increase/decrease in a PV had no effect on AD prediction; and (iv) a mixed trend, where the shape switched between two or more of (i)–(iii). The models' responses varied with changes in any of the PVs, especially changes beyond breakpoints, which could decrease or increase the AD prediction response.
The partial-dependence profile (PDP) of DO had a downward trend in all models, either from the start or after breakpoint(s) (forms ii and iv), except for ELM, which had an upward trend (i; Fig. S2). The TEMP PDP had an upward trend (i and iv), in most cases punctuated by one or more breakpoints, but was horizontal in LRSS (Fig. S3). SAL had a PDP with a typical downward trend (ii and iv) across all the models (Fig. S4). While pH displayed a typical downward-trend PDP in LR, LRSS, NNT, ENR, and ANET6, a downward trend punctuated by breakpoint(s) was seen in RF, M5P, and SVR; the other models showed a typical upward trend (i and iv) punctuated by breakpoint(s) (Fig. S5). The PDP of TSS showed an upward trend that either plateaued after a final breakpoint (DTR, ANET33, M5P, GBM, RF, XGB, BRT) or declined (ANET6, SVR; Fig. S6). The BOD PDP generally had an upward trend punctuated by breakpoint(s) in most models (Fig. S7).
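One-way partial-dependence profiles like those in Figs. S2–S7 can be generated directly in scikit-learn for any fitted regressor; here is a sketch reusing `model`, `X_tr`, and `PV_COLUMNS` from the earlier sketches:

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# One panel per PV: the average predicted AD as each PV is varied
# while the other PVs are held at their observed values.
PartialDependenceDisplay.from_estimator(model, X_tr, features=PV_COLUMNS)
plt.show()
```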
See the original post:
Machine learning-guided determination of Acinetobacter density in ... - Nature.com
AI and Machine Learning will help to Build Metaverse Claims Exec – The Coin Republic
According to one of the executives at Facebook, reports of the Metaverse's demise have been greatly exaggerated.
Meta hosted a press event in New York on 11 May announcing a new generative AI Sandbox tool for advertisers. Nicola Mendelsohn, Meta's Head of Global Business, said that the company is still very much interested in the Metaverse and reiterated that Mark Zuckerberg is very clear about that.
Responding to various media reports suggesting that Meta has lost interest in the Metaverse, Mendelsohn insisted that the company remains committed to it. She told attendees that realizing the Metaverse vision could take 5–10 years.
Mendelsohn's comments come as a defense against growing speculation in recent months that Meta is focusing more on artificial intelligence than on the Metaverse, the vision that prompted the social media giant, Facebook Inc., to rebrand itself as Meta and talk about little else.
The recent surge in reports suggesting Meta is moving away from the Metaverse stems from AI tools dominating headlines. Attention to Meta's rebranding and Metaverse announcements faded as soon as artificial intelligence started making news, leading some analysts and critics to conclude that Meta is chasing the latest buzz trend and drifting away from the Metaverse.
Mendelsohn's stance comes despite the fact that Meta's Reality Labs lost $3.9 billion in the first quarter of 2023, $1 billion more than in the first quarter of 2022.
Meta explained that generative AI will play a huge part in building the Metaverse and the Quest virtual reality headsets, and that it will be used by brands and creators.
The newly launched AI Sandbox will leverage generative AI to create text for ad copy aimed at different demographics, automatically crop photos and videos, and turn text prompts into background images for ads on Facebook and Instagram. Andrew Bosworth, CTO of Meta, previewed the first incoming tools in March.
Mendelsohn explained that building a virtual world is very difficult for a company to do on its own, but said that with the help of machine learning and generative AI, it can be done. John Hegeman, VP of Monetization at Meta, said that AI will help the company build the Metaverse more effectively. He added, "The Metaverse will be another great opportunity to create value for folks with AI."
Oncyber, a 3D world-building platform, launched an AI tool powered by OpenAI's ChatGPT that lets users customize their digital environments via text commands. Mendelsohn feels that the company's full Metaverse vision could be challenged by Apple's mixed reality headset, which is set to be announced soon.
Nancy J. Allen is a crypto enthusiast and believes that cryptocurrencies inspire people to be their own banks and step aside from traditional monetary exchange systems. She is also intrigued by blockchain technology and its functioning.
See the original post here:
AI and Machine Learning will help to Build Metaverse Claims Exec - The Coin Republic
Machine learning market size to grow by USD 56,493.47 million between 2022 and 2027; Alibaba Group Holding Ltd., Alphabet Inc., among others…
NEW YORK, May 11, 2023 /PRNewswire/ -- The machine learning market is forecast to grow by USD 56,493.47 million at a CAGR of 47.81% from 2022 to 2027, according to the latest research report from Technavio. The research report focuses on top companies and crucial drivers, current growth dynamics, futuristic opportunities, and new product/project launches. Discover the IT Consulting & Other Services industry's potential and make informed business decisions based on the qualitative and quantitative evidence highlighted in Technavio reports. View Sample Report
Technavio has announced its latest market research report titled Global Machine Learning Market
Vendor Landscape
The machine learning market is fragmented. Many local and global vendors offer machine learning with limited product differentiation. However, due to the significant growth of the market, vendors are continuously adopting the latest innovations. Therefore, the threat of rivalry was low, and it is expected to remain the same during the forecast period. Some of the key vendors covered in the report include:
Alibaba Group Holding Ltd. - The company offers machine learning platforms for AI that rely on Alibaba Cloud distributed computing clusters.
Alphabet Inc. - The company offers innovative machine learning products and services that will help build, deploy, and scale more effective AI models.
Amazon.com Inc. - The company offers SageMaker which is a machine learning service enabling data scientists, data engineers, MLOps engineers, and business analysts to build, train, and deploy ML models for any use case, regardless of ML expertise.
BigML Inc. - The company offers machine learning which provides predictive applications across industries including aerospace, automotive, energy, entertainment, financial services, food, healthcare, IoT, pharmaceutical, transportation, and telecommunications.
Altair Engineering Inc.
Alteryx Inc.
Cisco Systems Inc.
Fair Isaac Corp.
H2O.ai Inc.
Hewlett Packard Enterprise Co.
Iflowsoft Solutions Inc.
Intel Corp.
International Business Machines Corp.
Microsoft Corp.
Netguru S.A
Salesforce.com Inc.
SAP SE
SAS Institute Inc.
TIBCO Software Inc.
Yottamine Analytics LLC
For the market's vendor landscape highlights with a comprehensive list of vendors and their offerings - View Sample Report
Key Market Segmentation
The market is segmented by end-user (BFSI, retail, telecommunications, healthcare, and automotive and others), deployment (cloud-based and on-premise), and geography (North America, Europe, APAC, Middle East and Africa, and South America).
By end-user, the market will observe significant growth in the BFSI segment during the forecast period. BFSI companies are increasingly adopting machine learning solutions to understand their customers and provide customized solutions. The adoption of machine learning solutions is helping BFSI companies achieve automated processing, data-driven insights about customers, and personalized customer outreach. In addition, ongoing digital transformation initiatives in the BFSI sector have been driving the growth of the segment.
View Sample Report for more highlights into the market segments.
Regional Market Outlook
North America will account for 36% of the market growth during the forecast period. The growth of the regional market is driven by the increase in data generation from industries such as telecommunications, manufacturing, retail, and energy. Also, the increasing need to ensure data consistency and accuracy, improve data quality, identify data patterns, detect anomalies, and develop predictions among enterprises is another major factor driving the growth of the machine learning market in North America.
For more key highlights on the regional market shares of the above-mentioned countries - View Sample Report
The machine learning market covers the following areas:
Market Dynamics
Driver: The market is driven by the increasing adoption of cloud-based offerings. Cloud-based solutions offer various benefits such as minimal cost for computing, network, and storage infrastructure, scalability, reliability, and high resource availability. Cloud-based solutions also eliminate the need for dedicated IT support teams and hence reduce operating costs. These benefits are increasing the adoption of cloud-based offerings among enterprises. Machine learning solutions help enterprises scale up the production workloads of their projects over the cloud as data grows. Thus, the increased adoption of cloud-based offerings will drive the growth of the market during the forecast period.
Trend: The growing number of acquisitions and partnerships is identified as the key trend in the market. Vendors operating in the market are focused on forming strategic alliances with other players to gain a competitive advantage. These growth strategies are allowing vendors to gain access to new clients. Strategic partnerships also provide access to a larger customer base and technologies to help them improve their product portfolio. Moreover, partnerships and acquisitions help vendors to expand their presence in new markets. This trend among vendors will have a positive influence on the market growth over the forecast period.
Challenge: The shortage of skilled professionals is identified as the major challenge hindering market growth. Most enterprises lack a workforce with the right mix of AI and machine learning skills. This has led enterprises to invest more time and money in retaining and training their existing employees. Companies working and investing in AI and machine learning also face challenges in finding skilled workers. Such challenges are restricting the growth of the market in focus.
Why Buy?
Add credibility to strategy
Analyze competitors' offerings
Get a holistic view of the market
Grow your profit margin with Technavio - Buy the Report
Related Reports:
The deep learning market is estimated to grow at a CAGR of 29.79% between 2022 and 2027. The size of the market is forecast to increase by USD 11,113.13 million. The market is segmented by application (image recognition, voice recognition, video surveillance and diagnostics, and data mining), type (software, services, and hardware), and geography (APAC, North America, Europe, Middle East and Africa, and South America).
The cloud analytics market is estimated to grow at a CAGR of 20.69% between 2022 and 2027. The size of the market is forecast to increase by USD 49,051.7 million. The market is segmented by solution (hosted data warehouse solutions, cloud BI tools, complex event processing, and others), deployment (public cloud, hybrid cloud, and private cloud), and geography (North America, Europe, APAC, Middle East and Africa, and South America).
Gain instant access to 17,000+ market research reports. Technavio's SUBSCRIPTION platform
Machine Learning Market Scope
Base year: 2022
Historic period: 2017-2021
Forecast period: 2023-2027
Growth momentum & CAGR: Accelerate at a CAGR of 47.81%
Market growth 2023-2027: USD 56,493.47 million
Market structure: Fragmented
YoY growth 2022-2023 (%): 42.74
Regional analysis: North America, Europe, APAC, Middle East and Africa, and South America
Performing market contribution: North America at 36%
Key countries: US, China, Japan, UK, and Germany
Competitive landscape: Leading vendors, market positioning of vendors, competitive strategies, and industry risks
Key companies profiled: Alibaba Group Holding Ltd., Alphabet Inc., Altair Engineering Inc., Alteryx Inc., Amazon.com Inc., BigML Inc., Cisco Systems Inc., Fair Isaac Corp., H2O.ai Inc., Hewlett Packard Enterprise Co., Iflowsoft Solutions Inc., Intel Corp., International Business Machines Corp., Microsoft Corp., Netguru S.A, Salesforce.com Inc., SAP SE, SAS Institute Inc., TIBCO Software Inc., and Yottamine Analytics LLC
Market dynamics: Parent market analysis, market growth inducers and obstacles, fast-growing and slow-growing segment analysis, COVID-19 impact and recovery analysis and future consumer dynamics, and market condition analysis for the forecast period
Customization purview: If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized
Browse through Technavio's Information Technology Market Reports
Key Topics Covered:
1 Executive Summary
2 Market Landscape
3 Market Sizing
4 Historic Market Size
5 Five Forces Analysis
6 Market Segmentation by End-user
7 Market Segmentation by Deployment
8 Customer Landscape
9 Geographic Landscape
10 Drivers, Challenges, and Trends
11 Vendor Landscape
12 Vendor Analysis
13 Appendix
About Us
Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.
Contact
Technavio Research
Jesse Maida
Media & Marketing Executive
US: +1 844 364 1100
UK: +44 203 893 3200
Email: media@technavio.com
Website: http://www.technavio.com/
Global Machine Learning Market
Cision
View original content to download multimedia:https://www.prnewswire.com/news-releases/machine-learning-market-size-to-grow-by-usd-56-493-47-million-between-2022-and-2027-alibaba-group-holding-ltd-alphabet-inc-among-others-identified-as-key-vendors---technavio-301820931.html
SOURCE Technavio
Read more here:
Machine learning market size to grow by USD 56,493.47 million between 2022 and 2027; Alibaba Group Holding Ltd., Alphabet Inc., among others...
Humans in the Loop: AI & Machine Learning in the Bloomberg … – AccessWire
Originally published on bloomberg.com
NORTHAMPTON, MA / ACCESSWIRE / May 12, 2023 / The Bloomberg Terminal provides access to more than 35 million financial instruments across all asset classes. That's a lot of data, and to make it useful, AI and machine learning (ML) are playing an increasingly central role in the Terminal's ongoing evolution.
Machine learning is about scouring data at speed and scale that is far beyond what human analysts can do. Then, the patterns or anomalies that are discovered can be used to derive powerful insights and guide the automation of all kinds of arduous or tedious tasks that humans used to have to perform manually.
While AI continues to fall short of human intelligence in many applications, there are areas where it vastly outshines the performance of human agents. Machines can identify trends and patterns hidden across millions of documents, and this ability improves over time. Machines also behave consistently, in an unbiased fashion, without committing the kinds of mistakes that humans inevitably make.
"Humans are good at doing things deliberately, but when we make a decision, we start from whole cloth," says Gideon Mann, Head of ML Product & Research in Bloomberg's CTO Office. "Machines execute the same way every time, so even if they make a mistake, they do so with the same error characteristic."
The Bloomberg Terminal currently employs AI and ML techniques in several exciting ways, and we can expect this practice to expand rapidly in the coming years. The story begins some 20 years ago.
Keeping Humans in the Loop
When we started in the '80s, data extraction was a manual process. Today, our engineers and data analysts build, train, and use AI to process unstructured data at massive speeds and scale - so our customers are in the know faster.
The rise of the machines
Prior to the 2000s, all tasks related to data collection, analysis, and distribution at Bloomberg were performed manually, because the technology did not yet exist to automate them. The new millennium brought some low-level automation to the company's workflows, with the emergence of primitive models operating by a series of if-then rules coded by humans. As the decade came to a close, true ML took flight within the company. Under this new approach, humans annotate data in order to train a machine to make various associations based on their labels. The machine "learns" how to make decisions, guided by this training data, and produces ever more accurate results over time. This approach can scale dramatically beyond traditional rules-based programming.
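The contrast the article draws can be illustrated with a toy example: a hand-coded if-then rule versus a classifier that learns the same decision from human-annotated examples. The texts, labels, and model choice below are invented for illustration and are not Bloomberg's actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Old style: a brittle, hand-coded if-then rule.
def rule_based_label(text: str) -> str:
    return "M&A" if "acquisition" in text.lower() else "other"

# New style: learn the decision from human-annotated examples (toy data).
texts = [
    "Acme agrees to acquire Beta Corp for $2B",
    "Gamma Inc announces quarterly dividend",
    "Delta completes merger with Epsilon",
    "Zeta reports record advertising revenue",
]
labels = ["M&A", "other", "M&A", "other"]  # human annotations

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
# Generalizes beyond the literal keyword the rule depended on.
print(clf.predict(["Theta to buy Iota in all-stock deal"]))
```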
In the last decade, there has been an explosive growth in the use of ML applications within Bloomberg. According to James Hook, Head of the company's Data department, there are a number of broad applications for AI/ML and data science within Bloomberg.
One is information extraction, where computer vision and/or natural language processing (NLP) algorithms are used to read unstructured documents - data that's arranged in a format that's typically difficult for machines to read - in order to extract semantic meaning from them. With these techniques, the Terminal can present insights to users that are drawn from video, audio, blog posts, tweets, and more.
Anju Kambadur, Head of Bloomberg's AI Engineering group, explains how this works:
"It typically starts by asking questions of every document. Let's say we have a press release. What are the entities mentioned in the document? Who are the executives involved? Who are the other companies they're doing business with? Are there any supply chain relationships exposed in the document? Then, once you've determined the entities, you need to measure the salience of the relationships between them and associate the content with specific topics. A document might be about electric vehicles, it might be about oil, it might be relevant to the U.S., it might be relevant to the APAC region - all of these are called topic codes' and they're assigned using machine learning."
All of this information, and much more, can be extracted from unstructured documents using natural language processing models.
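As a rough illustration of this kind of entity extraction, an off-the-shelf NLP library such as spaCy can pull companies, people, and places out of free text. This is a generic stand-in, not Bloomberg's proprietary models, and the sentence is invented:

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp signed a supply agreement with Beta Industries, "
          "CEO Jane Doe said, expanding its electric-vehicle business "
          "in the U.S.")

# Named entities with their types (e.g., ORG, PERSON, GPE).
for ent in doc.ents:
    print(ent.text, ent.label_)
```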
Another area is quality control, where techniques like anomaly detection are used to spot problems with dataset accuracy, among other areas. Using anomaly detection methods, the Terminal can spot the potential for a hidden investment opportunity, or flag suspicious market activity. For example, if a financial analyst was to change their rating of a particular stock following the company's quarterly earnings announcement, anomaly detection would be able to provide context around whether this is considered a typical behavior, or whether this action is worthy of being presented to Bloomberg clients as a data point worth considering in an investment decision.
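Here is a minimal sketch of such anomaly detection, using an isolation forest over invented features for analyst rating changes; the features, data, and model choice are assumptions, not Bloomberg's actual method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: one row per rating change, with illustrative
# features such as rating delta and days since the earnings release.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
X[-1] = [6.0, 6.0]  # an unusually large, unusually timed change

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 marks anomalies worth surfacing
print(np.where(flags == -1)[0])
```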
And then there's insight generation, where AI/ML is used to analyze large datasets and unlock investment signals that might not otherwise be observed. One example of this is using highly correlated data like credit card transactions to gain visibility into recent company performance and consumer trends. Another is analyzing and summarizing the millions of news stories that are ingested into the Bloomberg Terminal each day to understand the key questions and themes that are driving specific markets or economic sectors or trading volume in a specific company's securities.
Humans in the loop
When we think of machine intelligence, we imagine an unfeeling autonomous machine, cold and impartial. In reality, however, the practice of ML is very much a team effort between humans and machines. Humans, for now at least, still define ontologies and methodologies, and perform annotations and quality assurance tasks. Bloomberg has moved quickly to increase staff capacity to perform these tasks at scale. In this scenario, the machines aren't replacing human workers; they are simply shifting their workflows away from more tedious, repetitive tasks toward higher level strategic oversight.
"It's really a transfer of human skill from manually extracting data points to thinking about defining and creating workflows," says Mann.
Ketevan Tsereteli, a Senior Researcher in Bloomberg Engineering's Artificial Intelligence (AI) group, explains how this transfer works in practice.
"Previously, in the manual workflow, you might have a team of data analysts that would be trained to find mergers and acquisition news in press releases and to extract the relevant information. They would have a lot of domain expertise on how this information is reported across different regions. Today, these same people are instrumental in collecting and labeling this information, and providing feedback on an ML model's performance, pointing out where it made correct and incorrect assumptions. In this way, that domain expertise is gradually transferred from human to machine."
Humans are required at every step to ensure the models are performing optimally and improving over time. It's a collaborative effort involving ML engineers who build the learning systems and underlying infrastructure, AI researchers and data scientists who design and implement workflows, and annotators - journalists and other subject matter experts - who collect and label training data and perform quality assurance.
"We have thousands of analysts in our Data department who have deep subject matter expertise in areas that matter most to our clients, like finance, law, and government," explains ML/AI Data Strategist Tina Tseng. "They not only understand the data in these areas, but also how the data is used by our customers. They work very closely with our engineers and data scientists to develop our automation solutions."
Annotation is critical, not just for training models, but also for evaluating their performance.
"We'll annotate data as a truth set - what they call a "golden" copy of the data," says Tseng. "The model's outputs can be automatically compared to that evaluation set so that we can calculate statistics to quantify how well the model is performing. Evaluation sets are used in both supervised and unsupervised learning."
Check out "Best Practices for Managing Data Annotation Projects," a practical guide published by Bloomberg's CTO Office and Data department about planning and implementing data annotation initiatives.
READ NOW
View additional multimedia and more ESG storytelling from Bloomberg on 3blmedia.com.
Contact Info:
Spokesperson: Bloomberg
Website: https://www.3blmedia.com/profiles/bloomberg
Email: [emailprotected]
SOURCE: Bloomberg
Link:
Humans in the Loop: AI & Machine Learning in the Bloomberg ... - AccessWire
The Yin and Yang of A.I. and Machine Learning: A Force of Good … – Becoming Human: Artificial Intelligence Magazine
Photo by Andrea De Santis on Unsplash
As Artificial Intelligence (AI) and Machine Learning (ML) technologies have become more sophisticated, they've permeated almost every aspect of our lives. These advancements hold incredible potential to transform society for the better, but they also come with a dark side. AI hype has surged this year, spurred by the introduction of OpenAI's ChatGPT. However, AI and ML have been around for a while, kicking into full gear in the 2010s. We are only now seeing the outcomes of those developments.
In fact, the 2020s will be defined by advancements in AI and ML. We are just scratching the surface of these technologies' potential. At their core, though, stand human intention and intervention. AI and ML can serve as a force for good or a force for evil; they undoubtedly have the potential to revolutionize industries, while also posing serious threats if misused.
The rise of AI and ML presents a double-edged sword. On one hand, these technologies have the potential to revolutionize industries, improve lives, and protect the environment. On the other hand, they can also lead to job displacement, loss of privacy, and perpetuation of biases.
It is up to us as a society to ensure that we harness the power of AI and ML for good while mitigating their potential for harm. By implementing thoughtful regulation, fostering ethical AI practices, and prioritizing transparency, we can harness the benefits of these technologies while minimizing the risks.
Original post:
The Yin and Yang of A.I. and Machine Learning: A Force of Good ... - Becoming Human: Artificial Intelligence Magazine
Financial Leaders Investing in Analytics, AI and Machine Learning … – CPAPracticeAdvisor.com
A new survey shows that continued inflation and economic disruptions are the top concerns for more than half of organizations in 2023. Despite this, most organizations expect their revenues to either increase or stay the same this year. As a result, three quarters of organizations plan to resume business travel in 2023 and half of organizations surveyed plan to invest in analytic technologies that can help navigate uncertain economic conditions.
The Enterprise Financial Decision-Makers Outlook April 2023 semi-annual survey was published by OneStream Software, a leader in corporate performance management (CPM) solutions for the world's leading enterprises. Conducted by Hanover Research, the survey targeted finance leaders across North America to identify trends and investment priorities in response to economic challenges and other forces in the upcoming year.
When asked about current business drivers and plans for 2023, financial leaders are focused on the following factors:
COVID is still prevalent, but the business impact is shrinking
As the world returns to some type of normal following the pandemic, organizations are planning to reintroduce business travel but are still wary of supply chains. More than half of financial leaders expect COVID-related supply chain disruptions to continue into 2024 (54%) or beyond, down 18% from the Spring 2022 survey. Business travel is poised for a comeback this year, as 75% of organizations plan to resume the practice in 2023. In the Spring 2022 survey, most organizations (80%) had planned to resume business travel, but the study shows very few actually implemented the plan (10%), citing the costs of flights, hotels, and food, and the lack of necessity.
Analytic technology is gaining focus to help navigate uncertainty
Trends in the survey foreshadow increased usage of analytic technology that improves productivity and supports more agile decision-making across the enterprise. Cloud-based planning and reporting solutions remain the most used data analysis tool (91%); however, most organizations also use predictive analytics (85%), business intelligence (84%), and ML/AI (75%) tools at least intermittently. About half of organizations are planning to invest more in each of these tools this year, compared to 2022.
Adoption momentum for these tools started during the pandemic, with no sign of slowing down. According to the Spring 2021 survey, organizations said that, compared with pre-pandemic levels, they were increasing investments in artificial intelligence (59%), predictive analytics (58%), cloud-based planning and reporting solutions (57%), and machine learning (54%).
Organizations are realizing the value of AI
According to the survey, two-thirds of organizations (68%) have adopted an automated machine learning (AutoML) solution to supplement some of their workforce needs, a significant uptick compared to Spring 2022 (56%). In the Fall 2022 survey, 48% of respondents planned to investigate an AutoML solution in the future, which suggests respondents stayed true to their word and dove into the technology in the last six months.
Finance leaders see opportunities for improvement in many areas with the help of AI/ML technologies, including ChatGPT. The tasks and processes they believe these technologies will be most useful for include financial reporting, financial planning & analysis, sales/revenue forecasting, sales & marketing and customer service.
Along with investing in new technology, almost all organizations (91%) are investing or planning to invest in new solutions that specifically support finance functions. The most common solutions are cloud-based applications (52%), AI/ML (43%), advanced predictive analytics (42%) and budgeting/planning systems (42%).
"The current economic headwinds have finance leaders acutely aware of their investment decisions and weighing the benefits vs. the costs," said Bill Koefoed, Chief Financial Officer, OneStream. "With revenue growth through economic uncertainty in mind, financial leaders are looking to invest in solutions that can support more agile decision-making while delivering a fast return on investment. AutoAI and other AI innovations coming to light in the last couple of years have the potential to improve the speed and accuracy of forecasting and support more informed, confident decision-making. OneStream is a proud innovator in this space and partners with organizations around the globe to help them navigate these challenging times."
Read more here:
Financial Leaders Investing in Analytics, AI and Machine Learning ... - CPAPracticeAdvisor.com