Category Archives: Machine Learning

Machine-Learning Tool Sorts Tics From Non-Tics on Video – Medscape

COPENHAGEN – A novel machine-learning tool that can distinguish between tics in patients with tic disorders and non-tic movements in healthy controls could potentially save clinicians time and improve the accuracy of tic identification, German researchers suggest.

Videos of more than 60 people with tic disorders were assessed manually to provide a set of clinical features related to facial tics. These were then fed into a machine-learning tool that was trained on nearly 290 videos of patients and controls, and then tested on a further 100 videos.

The resulting tool is "useful to detect tics and distinguish between tics and other movements in healthy controls," said lead author Leonie F. Becker, MD, Institute of Systems Motor Science, University of Lübeck, Lübeck, Germany, and colleagues.

The findings were presented here at the International Congress of Parkinson's Disease and Movement Disorders (MDS) 2023.

The applications of the machine-learning algorithm could eventually extend well beyond analyzing videos of patients recorded in the doctor's office, said Becker.

"Having patients in our clinic is really artificial because they may suppress their tics," she told Medscape Medical News. It is "a really different situation at home or at school."

She hopes that in the future, patients could record themselves on video sitting at home and have that video analyzed by the machine-learning tool. The tool could even be used longitudinally to assess the impact of medication, she said.

For the moment, however, Becker stressed that they have a tool that can simply differentiate between tics and normal movements.

Before it can be released as a clinical application, the tool needs to be able to distinguish between "tics and functional tics, and between tics and myoclonus and every other hyperkinetic movement," and it needs to be validated, she said.

"I think it's years before we have this as an app for your patient."

Becker explained that their group recently conducted a study of healthy individuals, demonstrating that "even people without a tic disorder sometimes move a little bit," although all participants had been asked to sit still.

The team, therefore, wanted to develop a means of reliably distinguishing between these "extra movements" in healthy control participants and tics in people with tic disorders.

One challenge of this task is that rating tics on video recordings is time-consuming and cumbersome; the team reasoned that an automated, machine-learning system could be a more efficient means of assessment, as well as potentially improving classification accuracy.

The researchers used a dataset of 63 videos of people with tic disorders to train a Random Forest classifier to detect tics per second of video.

They first identified 170 facial landmarks and manually tracked the features of tics to indicate whether a tic greater than or equal to a predefined threshold for severity had occurred within 1 second. The severity threshold was chosen as a score of 3 on the Yale Global Tic Severity Scale, which Becker said is a tic which "everybody who looks at it would recognize."

This information was fed into the machine-learning tool to train it to predict the presence of tics in each second. These per-second predictions were aggregated over the whole video to calculate a series of clinical "meta-features," including the number of tics per minute, the maximum duration of a continuous tic, the average duration of tic-free segments, the average size of a tic cluster, and the number of clusters per minute.

The features were then combined into a logistic regression model, which was trained on a dataset of 124 videos of individuals with tic disorders and 162 videos of healthy controls.
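
The pipeline described above can be pictured as a two-stage model. The following is a minimal, hypothetical sketch of that structure, not the authors' code; the feature extraction, variable names, and array shapes are illustrative assumptions.

```python
# Stage 1: a Random Forest labels each second of video as tic / no tic.
# Stage 2: per-second predictions are summarized into video-level "meta-features"
# (e.g., tics per minute, longest continuous tic) and passed to a logistic regression.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def meta_features(per_second: np.ndarray) -> np.ndarray:
    """Aggregate a binary per-second tic sequence into video-level features."""
    tics_per_minute = per_second.sum() / (len(per_second) / 60.0)
    longest, run = 0, 0
    for s in per_second:                 # longest run of consecutive tic seconds
        run = run + 1 if s else 0
        longest = max(longest, run)
    return np.array([tics_per_minute, longest])

per_second_model = RandomForestClassifier(n_estimators=200, random_state=0)
video_model = LogisticRegression()
# per_second_model.fit(landmark_features, per_second_labels)   # stage 1 training
# video_model.fit(np.vstack([meta_features(v) for v in per_video_predictions]),
#                 patient_vs_control_labels)                   # stage 2 training
```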

To determine the accuracy of the model, it was then tested on a dataset of 50 videos of patients with tic disorders and 50 videos of healthy controls.

The machine-learning tool was able to identify severe tics with a test accuracy of 84%, and an F-1 score, which combines the positive predictive value with the sensitivity, of 83%.
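
For readers unfamiliar with these metrics, here is a small worked example with made-up counts (not the study's actual confusion matrix) showing how accuracy and F1 are computed:

```python
# Hypothetical results on 100 test videos; the study's actual counts are not reported here.
tp, fp, fn, tn = 42, 8, 9, 41
precision = tp / (tp + fp)                          # positive predictive value
recall = tp / (tp + fn)                             # sensitivity
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"accuracy={accuracy:.2f}, F1={f1:.2f}")      # roughly 0.83 for both here
```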

The area under the receiver operating characteristic curve was 0.896, and the authors report that the tool revealed significant differences in all meta-features between patients and healthy controls.

Approached for comment, Christos Ganos, MD, Department of Neurology, Charité University Medicine Berlin, Germany, said that the current study is one of several looking at ways of "automatically classifying patterns of behavior."

He told Medscape Medical News that it has the potential not only to "reinforce our clinical decision-making" by demonstrating that "the way we classify phenomena has been correct all along," but also to show ways of improving it.

He noted that a new classification of facial tics is being developed, and the phenomenological aspect is "so broad" that machine-learning models could help with some aspects of this, although it will take some time to have useful information from current efforts.

He emphasized, however, that there are "several caveats" to the use of artificial intelligence in this manner, the first being the quality of the data that is fed into the machine-learning tools in the first place.

The information needs to be "correctly labelled," said Ganos, and he is convinced that there will, initially, be a "lot of white noise" from studies that have trained tools using poorly classified data.

Another fundamental aspect, and one that is "going to be talked about a lot" in the future, is that of data protection, he added.

"I worry increasingly" over stories in the media of "videos being re-circulated and re-posted," he said. "Many of these datalabeled and fed into certain algorithms will exist forever."

"Forever means a long time," he stressed, "and it has many implications for generations to come, so we should be aware of that."

"Of course, [machine learning] has great possibilities to be used in therapeutic trials, to monitor symptoms over the large scale, and all of this is very positive," Ganos told Medscape Medical News. "But our role, in many ways, is to make sense of the data, and of what data we feed into these type of approaches, and of how best to leverage it."

Davide Martino, MD, associate professor of neurology in the Department of Clinical Neurosciences at the University of Calgary in Canada, commented in a press release that "an algorithm that measures frequency and clustering of tics from video recordings has strong translational value in routine clinical practice and clinical research."

This is because "it would likely optimize reliability and efficiency of these measurements," he explained.

"Although limited to facial/head tics, the same approach can be extended to other body regions and phonic tics," he added.

"It is also important to point out that video recording-based measures will inevitably still need to be integrated with other domains of tic severity," such as interference with daily routines and functional impact, "in order to achieve a truly comprehensive assessment of tics," Martino underlined.

The study had no specific funding. The investigators report no relevant financial relationships.

International Congress of Parkinson's Disease and Movement Disorders (MDS) 2023: Abstract 951. Presented August 29, 2023.


Machine learning and thought, climate impact on health, Alzheimer’s … – Virginia Tech

One of the world's leaders in computational psychiatry will kick off the upcoming Maury Strauss Distinguished Public Lecture Series at the Fralin Biomedical Research Institute at VTC in September.

The public lectures bring innovators and thought leaders in science, medicine, and health from around the globe to the Health Sciences and Technology campus in Roanoke.

Leading the series with a discussion of machine learning and human thought is Read Montague, the Virginia Tech Carilion Vernon Mountcastle Research Professor and director of the Center for Human Neuroscience Research at the Fralin Biomedical Research Institute at VTC.

Montague's research led to the development of the reward prediction error hypothesis, among the most influential ideas about the basis of human decision-making in health and in neuropsychiatric disorders, and recently to first-of-their-kind observations in the human brain of how the neurochemicals dopamine and serotonin shape people's perceptions of the world around them.

He will share details of his data-driven neuroscience applications to machine learning to better identify and treat diseases of the brain at 5:30 p.m. on Sept. 28 at the institute.

Montague, who is working with clinicians and research centers worldwide to gather data on brain signaling, is also a professor in the Department of Physics at Virginia Tech's College of Science.

Next in the series is J. Marshall Shepherd, who started his career as a meteorologist and became a leading international expert in weather and climate. He is an elected member of three of the nation's influential scientific academies: the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

How is his work part of a series on health? The World Health Organization recognizes climate change as the single biggest health threat facing humanity. Shepherd will address the intersection of climate, risk and perception.

Bookending the series in May 2024 is Rick Woychik, director of the National Institute of Environmental Health Sciences at the National Institutes of Health. The molecular geneticist oversees federal funding for biomedical research related to environmental influences, including climate change, on human health and disease.

Other lectures in the series address Alzheimer's disease, infant nutrition, dementia, COVID-19 and cardiovascular outcomes, and locomotor learning in children with brain injury.

"We look forward to joining with members of the wider community to better understand these exciting new innovations and insights that are germane to health," said Michael Friedlander, Virginia Tech's vice president for health sciences and technology and executive director of the Fralin Biomedical Research Institute. "This is an incredible collection of speakers who represent some of the best thinking in science, medicine, and policy in the context of improving health. We are also proud that our own Read Montague is among them, and we look forward to sharing this research with the wider community."

The free public lectures are named for Maury Strauss, a Roanoke businessman and longtime community benefactor who recognized the value of welcoming leaders in science, medicine, and health to share their work. The 2023-24 series, part of a tradition that began in 2011, highlights the research institute's commitment to the community.

The full 2023-24 Maury Strauss Distinguished Public lectures include:

The public is invited to attend the lectures, which begin with a 5 p.m. reception. Presentations begin at 5:30 p.m. in 2 Riverside at the Fralin Biomedical Research Institute. All are free, in person, and open to the public. Community attendance is encouraged. To make the lectures accessible to a wider audience, most are streamed live via Zoom and archived.

In addition to the Maury Strauss Distinguished Public Lectures, the Fralin Biomedical Research Institute also hosts Pioneers in Biomedical Research Seminars, the Timothy A. Johnson Medical Scholar Lecture Series, as well as other conferences, programs, lectures, and special events.


Machine Learning Regularization Explained With Examples – TechTarget

What is regularization in machine learning?

Regularization in machine learning is a set of techniques used to ensure that a machine learning model can generalize to new data within the same data set. These techniques can help reduce the impact of noisy data that falls outside the expected range of patterns. Regularization can also improve the model by making it easier to detect relevant edge cases within a classification task.

Consider an algorithm specifically trained to identify spam emails. In this scenario, the algorithm is trained to classify emails that appear to be from a well-known U.S. drugstore chain and contain only a single image as likely to be spam. However, this narrow approach runs the risk of disappointing loyal customers of the chain, who were looking forward to being notified about the store's latest sales. A more effective algorithm would consider other factors, such as the timing of the emails, the use of images and the types of links embedded in the emails to accurately label the emails as spam.

This more complex model, however, would also have to account for the impact that each of these measures added to the algorithm. Without regularization, the new algorithm risks being overly complex, subject to bias and unable to detect variance. We will elaborate on these concepts below.

"In short, regularization pushes the model to reduce its complexity as it is being trained," explained Bret Greenstein, data, AI and analytics leader at PwC.

"Regularization acts as a type of penalty that gets added to the loss function or the value that is used to help assign importance to model features," Greenstein said. "This penalty inhibits the model from finding parameters that may over-assign importance to its features."

As such, regularization is an important tool that can be used by data scientists to improve model training to achieve better generalization, or to improve the odds that the model will perform well when exposed to unknown examples.
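
As a rough illustration of Greenstein's description, the training objective becomes the usual data-fit loss plus a weight penalty. The snippet below is a generic sketch of that idea, not any particular library's or PwC's implementation:

```python
import numpy as np

def regularized_loss(w, X, y, lam=0.1):
    """Mean squared error plus an L2 penalty that discourages large weights."""
    data_fit = np.mean((X @ w - y) ** 2)   # how well the model fits the data
    penalty = lam * np.sum(w ** 2)         # grows when weights get large
    return data_fit + penalty              # larger lam -> stronger regularization
```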

Adnan Masood, chief architect of AI and machine learning at digital transformation consultancy UST, said his firm regularly uses regularization to strike a balance between model complexity and performance, adeptly steering clear of both underfitting and overfitting.

Overfitting, as described above, occurs when a model is too complex and learns noise in the training data. Underfitting occurs when a model is too simple to capture underlying data patterns.

"Regularization provides a means to find the optimal balance between these two extremes," Masood said.

Consider another example of the use of regularization in retail. In this scenario, the business wants to develop a model that can predict when a certain product might be out of stock. To do this, the business has developed a training data set with many features, such as past sales data, seasonality, promotional events, and external factors like weather or holidays.

This, however, could lead to overfitting, where the model becomes too closely tied to specific patterns in the training data and, as a result, may be less effective at predicting stockouts based on new, unseen data.

"Without regularization, our machine learning model could potentially learn the training data too well and become overly sensitive to noise or fluctuations in the historical data," Masood said.

In this case, a data scientist might apply a regularized linear regression model, such as ridge regression, which minimizes the sum of the squared differences between actual and predicted stockout instances plus a penalty on the size of the model weights. The penalty discourages the model from assigning too much importance to any one feature.

In addition, they might assign a lambda parameter to determine the strength of regularization. Higher values of this parameter increase regularization and lower the model coefficients (weights of the model).

When this regularized model is trained, it will balance fitting the training data and keeping the model weights small. The result is a model that is potentially less accurate on the training data and more accurate when predicting stockouts on new, unseen data.
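
A hedged sketch of that stockout example follows, using scikit-learn's ridge regression, where the lambda-like strength parameter is called alpha; the features and data are synthetic stand-ins, not a real retail data set.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # stand-ins for past sales, seasonality, promotions, weather
y = X @ np.array([2.0, 0.5, 0.0, 0.1]) + rng.normal(scale=0.5, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in (0.01, 1.0, 100.0):            # higher alpha = stronger regularization
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(alpha, model.coef_.round(2), round(model.score(X_test, y_test), 3))
```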

"In this way, regularization helps us build a robust model, better generalizes to new data and more effectively predicts stockouts, thereby enabling the business to manage its inventory better and prevent loss of sales," Masood said.

He finds that regularization is vital in managing overfitting and underfitting. It also indirectly helps control bias (error from faulty assumptions) and variance (error from sensitivity to small fluctuations in a training data set), facilitating a balanced model that generalizes well on unseen data.

Niels Bantilan, chief ML engineer at Union.ai, a machine learning orchestration platform, finds it useful to think of regularization as a way to prevent a machine learning model from memorizing the data during training.

For example, a home automation robot trained on making coffee in one kitchen might inadvertently memorize the quirks and layouts of that specific kitchen. It will likely break when presented with a new kitchen where ingredients and equipment differ from the one it memorized.

In this case, regularization forces the model to learn higher-level concepts like "coffee mugs tend to be stored in overhead cabinets" rather than learning specific quirks of the first kitchen, such as "the coffee mugs are stored in the top left-most shelf."

In business, regularization is important to operationalizing machine learning, as it can mitigate errors and save cost, since it is expensive to constantly retrain models on the latest data.

"Therefore, it makes sense to ensure they have some generalization capacity beyond their training data, so the models can handle new situations up to a certain point without having to retrain them on expensive hardware or cloud infrastructure," Bantilan said.

The term overfitting is used to describe a model that has learned too much from the training data. This can include noise, such as inaccurate data accidentally read by a sensor or a human deliberately inputting bad data to evade a spam filter or fraud algorithm. It can also include data specific to that particular situation but not relevant to other use cases, such as a store shelf layout in one store that might not be relevant to different stores in a stockout predictor.

Underfitting occurs when a model has not learned to map features to an accurate prediction for new data. Greenstein said that regularization can sometimes lead to underfitting. In that case, it is important to change the influence that regularization has during model training. Underfitting also relates to bias and variance.

Bantilan described bias in machine learning as the degree to which a model's predictions agree with the actual ground truth. For example, a spam filter that perfectly predicts the spam/not-spam labels in training data would be a low-bias model. It could be considered high-bias if it was wrong all the time.

Variance characterizes the degree to which the model's predictions can handle small perturbations in the training data. One good test is removing a few records to see what happens, Bantilan said. If the model's predictions remain the same, then the model is considered low-variance. If the predictions change wildly, then it is considered high-variance.
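
That perturbation test is easy to sketch in code; the data set and model below are placeholders chosen only to show the idea, not anything Bantilan's team uses.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
full = DecisionTreeClassifier(random_state=0).fit(X, y)
perturbed = DecisionTreeClassifier(random_state=0).fit(X[5:], y[5:])   # drop a few records

changed = np.mean(full.predict(X) != perturbed.predict(X))
print(f"predictions changed on {changed:.1%} of examples")   # a large value suggests high variance
```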

Greenstein observed that high variance could be present when a model trained on multiple variations of data appears to learn a solution but struggles to perform on test data. This is one form of overfitting, and regularization can assist with addressing the issue.

Bharath Thota, partner in the advanced analytics practice of Kearney, a global strategy and management consulting firm, said that some of the common use cases in industry include the following:

Regularization needs to be considered as a handy technique in the process of improving ML models rather than a specific use case. Greenstein has found it most useful when problems are high-dimensional, which means they contain many and sometimes complex features. These types of problems are prone to overfitting, as a model may fail to identify simplified patterns to map features to objectives.

Regularization is also helpful with noisy data sets, such as high-dimensional data, where examples vary a lot and are subject to overfitting. In these cases, the models may learn the noise rather than a generalized way of representing the data.

It is also good for nonlinear problems since problems that require nonlinear algorithms can often lead to overfitting. These kinds of algorithms uncover complex boundaries for classifying data that map well to the training data but are only partially applicable to real-world data.

Greenstein noted that regularization is one of many tools that can assist with resolving challenges with an overfit model. Other techniques, such as bagging, reduced learning rates and data sampling methods, can complement or replace regularization, depending on the problem.

There is a range of different regularization techniques. The most common approaches rely on statistical methods such as Lasso regularization (also called L1 regularization), Ridge regularization (L2 regularization) and Elastic Net regularization, which combines both Lasso and Ridge techniques. Various other regularization techniques use different principles, such as ensembling, neural network dropout, pruning decision tree-based models and data augmentation.

Masood said the choice of regularization method and tuning for the regularization strength parameter (lambda) largely depends on the specific use case and the nature of the data set.

"The right regularization can significantly improve model performance, but the wrong choice could lead to underperformance or even harm the model's predictive power," Masood cautioned. Consequently, it is important to approach regularization with a solid understanding of both the data and the problem at hand.

Here are brief descriptions of the common regularization techniques.

Lasso regression AKA L1 regularization. The Lasso regularization technique, an acronym for least absolute shrinkage and selection operator, is derived from calculating the median of the data. A median is a value in the middle of a data set. It calculates a penalty function using absolute weights. Kearney's Thota said this regularization technique encourages sparsity in the model, meaning it can set some coefficients to exactly zero, effectively performing feature selection.

Ridge regression AKA L2 regularization. Ridge regularization is derived from calculating the mean of the data, which is the average of a set of numbers. It calculates a penalty function using a square or other exponent of each variable. Thota said this technique is useful for reducing the impact of irrelevant or correlated features and helps in stabilizing the model's behavior.

Elastic Net (L1 + L2) regularization. Elastic Net combines both L1 and L2 techniques to improve the results for certain problems.
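
The difference between the three penalties is easiest to see side by side. The sketch below fits each one to the same synthetic data and counts how many coefficients are driven to exactly zero; only Lasso and, partially, Elastic Net do this.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge, ElasticNet

X, y = make_regression(n_samples=200, n_features=10, n_informative=3, noise=5.0, random_state=0)
for name, model in [("Lasso (L1)", Lasso(alpha=1.0)),
                    ("Ridge (L2)", Ridge(alpha=1.0)),
                    ("Elastic Net (L1+L2)", ElasticNet(alpha=1.0, l1_ratio=0.5))]:
    model.fit(X, y)
    zeros = int(np.sum(np.isclose(model.coef_, 0.0)))
    print(f"{name}: {zeros} of 10 coefficients set to zero")
```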

Ensembling. This set of techniques combines the predictions from a suite of models, thus reducing the reliance on any one model for prediction.

Neural network dropout. This process is sometimes used in deep learning algorithms composed of multiple layers of neural networks. It involves randomly dropping out some neurons, setting their outputs to zero during training. Bantilan said this forces the deep learning algorithm to learn an ensemble of sub-networks to achieve the task effectively.
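
A minimal dropout layer in a small feed-forward network might look like the following; PyTorch is assumed here purely for illustration, and the layer sizes are arbitrary.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # zeroes half of the hidden activations at random during training
    nn.Linear(64, 2),
)
# model.train() enables dropout; model.eval() turns it off for prediction.
```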

Pruning decision tree-based models. This is used in tree-based models like decision trees. The process of pruning branches can simplify the decision rules of a particular tree to prevent it from relying on the quirks of the training data.

Data augmentation. This family of techniques uses prior knowledge about the data distribution to prevent the model from learning the quirks of the data set. For example, in an image classification use case, you might flip an image horizontally, add noise or blur, or crop the image. "As long as the data corruption or modification is something we might find in the real world, the model should learn how to handle those situations," Bantilan said.
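
A simple version of the image example, written with plain NumPy as an assumption for the sketch (a real pipeline would more likely use a library such as torchvision or albumentations), could look like this:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip an image horizontally and add mild Gaussian noise."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    noisy = image + rng.normal(scale=0.05, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)   # assumes pixel values scaled to [0, 1]
```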


Advanced Space-led Team Applying Machine Learning to Detect … – Space Ref

Advanced Space LLC, a leading space tech solutions company, is pleased to announce that an Advanced Space-led team has been chosen to apply Machine Learning (ML) capabilities to detect, track and characterize space debris for the IARPA Space Debris Identification and Tracking (SINTRA) program.

Space debris, items due to human activity in space, presents a major hazard to space operations. Advanced Space and its teammates Orion Space Solutions and ExoAnalytic Solutions are applying advanced ML techniques to finding and identifying small debris (0.1-10 cm) under a new Space Debris Identification and Tracking (SINTRA) contract from the Intelligence Advanced Research Projects Activity (IARPA).

"Space debris is an exponentially growing problem that threatens all activity in space, which Congress is now recognizing as critical infrastructure," said Principal Investigator Nathan R. "The well-known Kessler syndrome will inevitably make Earth orbit unusable unless we mitigate it, and the first step is developing the capability to maintain persistent knowledge of the debris population. Through our participation in the SINTRA program, our team aims to revolutionize the global space community's knowledge of the space debris problem."

Currently, there are over 100 million objects greater than 1 mm orbiting the Earth; however, less than 1 percent of the debris that could cause mission-ending damage is currently tracked. The Advanced Space team's solution, the Multi-source Extended-Range Mega-scale AI Debris (MERMAID) system, will feature a sensing system to gather data; ground data processing incorporating ML models to observe, detect, and characterize debris below the threshold of traditional methods; and a catalog of this information. A key component of this solution is that the team will use ML methods to decrease the signal-to-noise ratio (SNR) required for detecting debris signatures in traditional optical and radar data.

Advanced Space CEO Bradley Cheetham said, "Monitoring orbital debris is critical to the sustainable exploration, development and settlement of space. We are proud of the work the team is doing to advance the state of the art by bringing scale and automation to this challenge."

ABOUT ADVANCED SPACE:

Advanced Space (https://advancedspace.com/) supports the sustainable exploration, development, and settlement of space through software and services that leverage unique subject matter expertise to improve the fundamentals of spaceflight. Advanced Space is dedicated to improving flight dynamics, technology development, and expedited turn-key missions to the Moon, Mars, and beyond.



Machine Learning, Numerical Simulation Integrated To Estimate … – Society of Petroleum Engineers

In the complete paper, the authors analyzed a robust, well-distributed parent/child well data set of the Delaware Basin Wolfcamp formation using a combination of available empirical data and numerical simulation outputs, which was used to develop a predictive machine-learning model (consisting of a multiple linear regression model and a simple neural network). This model has been implemented successfully in field developments to optimize child-well placement and has enabled improvements in performance predictions and net present value.

Pervasive parent/child well pairs have complicated the development of the Delaware Basin Wolfcamp formation by introducing the need to forecast child-well performance reliably. This problem is made more difficult by the complex nature of the physical processes involved in parent/child well interactions and the variety of geometrical configurations that can be realized. In broad terms, the following three classifications of child wells can be recognized based on their spatial relationship to the associated parent well and other offset wells (Fig. 1 above):

To narrow the range of complexities in the study, the authors focused on Type 2 child wells because this configuration will be used most often in future development activities and because it had the most existing field examples.

The principal objective of this assessment was to generate accurate quantitative predictions of the diminished production performance of child wells because of pre-existing parent wells.

In this work, a novel, hybrid approach is detailed involving a combination of machine-learning techniques and numerical simulations.


3 Up-and-Coming Machine Learning Stocks to Put on Your Must … – InvestorPlace


Stocks connected to machine learning are synonymous with those connected to artificial intelligence. Machine learning falls under the umbrella of AI and relates to the use of data and algorithms to imitate human learning to improve accuracy. Kinda scary? Sure. However, machine learning is also proving to be revolutionary in 2023. The emergence of generative AI and its promise to improve our world has created a lot of value. This has led to the rise of machine learning stocks to buy.

While the companies discussed in this article might not be truly up-and-coming as they are established, they certainly are improving. That makes them must-buy stocks that any investor ought to consider.


There are 13.5 billion reasons why Nvidia (NASDAQ:NVDA) stock should be on every investor's list. I'm of course referring to Nvidia's $13.5 billion in second-quarter revenues. That far exceeded the $11 billion mark, perceived as incredibly ambitious, that Nvidia had given as guidance.

Those blowout earnings lend credence to the notion that AI and machine learning will be much more than a bubble. Instead, it is crystal clear that companies are clamoring for Nvidias leading AI chips and that the pace of things is increasing, not slowing.

Nvidia's data center revenues alone, at $10.32 billion, nearly reached that $11 billion figure. Cloud firms are scrambling to secure their supply of chips that are used for machine learning purposes, among other things.

NVDA shares can absolutely run higher from their current position. Their price-to-earnings ratio has temporarily fallen given how unexpectedly high earnings were. Nvidia is predicting $16 billion in revenues for the coming quarter. I don't believe there's any real reason to back off from its shares currently.


Crowdstrike (NASDAQ:CRWD) is another machine learning stock to consider. The company utilizes machine learning to help it better understand how to stop breaches before they can occur. It's an AI-powered cybersecurity firm that is strongly rated on Wall Street and offers a lot of upside on that basis.

Crowdstrike is getting better and better at thwarting cyber attacks probably by the second. Machine learning allows the company to more intelligently prevent cyber attacks with each piece of data it gathers from an attack.

The company has been growing at a rapid pace over the last few years and has seen year-over-year increases above 40% in each of those periods. However, it has simultaneously struggled to find profitability, which likely explains the disconnect between current prices and expected prices.

Crowdstrike has several opportunities in front of it. First, if it can address profitability concerns, it's certain to appreciate in price. Second, there's a general rush toward securing systems that also benefits the company and should provide it fertile ground for future gains.


AMD (NASDAQ:AMD) is the runner-up in the battle for machine learning supremacy at this point.

The stock has boomed in 2023 alongside Nvidia but not to the same degree. It is going to continue to crop up in the machine learning/AI conversation and absolutely makes sense as an investment now.

Let's try to understand AMD in relation to machine learning and its strengths and weaknesses vis-a-vis Nvidia. By now, everyone knows that Nvidia wins the overall battle hands down. When it comes to CPUs, AMD has a lot to offer. Its CPUs, along with those from Intel (NASDAQ:INTC), are the highest rated for machine learning purposes.

However, GPUs outperform CPUs when it comes to machine learning and Nvidia is the king of GPU. It has the highest-rated machine learning GPUs for at least the top five spots according to this source.

As bad as that sounds, AMD is roughly 80% as capable as Nvidia overall in relation to AI and machine learning. Therefore, it has a massive opportunity at hand in closing that gap. It's also one of those machine learning stocks to buy.

On the date of publication, Alex Sirois did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.


UW-Madison: Cancer diagnosis and treatment could get a boost … – University of Wisconsin System

Thanks to machine learning algorithms, short pieces of DNA floating in the bloodstream of cancer patients can help doctors diagnose specific types of cancer and choose the most effective treatment for a patient.

The new analysis technique, created by University of Wisconsin–Madison researchers and published recently in Annals of Oncology, is compatible with liquid biopsy testing equipment already approved in the United States and in use in cancer clinics. This could speed the new method's path to helping patients.

Liquid biopsies rely on simple blood draws instead of taking a piece of cancerous tissue from a tumor with a needle.


"Liquid biopsies are much less invasive than a tissue biopsy, which may even be impossible to do in some cases, depending on where a patient's tumor is," says Marina Sharifi, a professor of medicine and oncologist in UW–Madison's School of Medicine and Public Health. "It's much easier to do them multiple times over the course of a patient's disease to monitor the status of cancer and its response to treatment."

Cancerous tumors shed genetic material, called cell-free DNA, into the bloodstream as they grow. But not all parts of a cancer cell's DNA are equally likely to tumble away. Cells store some of their DNA by coiling it up in protective balls called histones. They unwrap sections to access parts of the genetic code as needed.

Kyle Helzer, a UW–Madison bioinformatics scientist, says that parts of the DNA containing the genes that cancer cells use often are uncoiled more frequently and thus are more likely to fragment.

"We're exploiting that larger distribution of those regions among cell-free DNA to identify cancer types," adds Helzer, who is also a co-lead author of the study along with Sharifi and scientist Jamie Sperger.


The research team, led by UW–Madison senior authors Shuang (George) Zhao, professor of human oncology, and Joshua Lang, professor of medicine, used DNA fragments found in blood samples from a past study of nearly 200 patients (some with, some without cancer), and new samples collected from more than 300 patients treated for breast, lung, prostate or bladder cancers at UW–Madison and other research hospitals in the Big Ten Cancer Research Consortium.

The scientists divided each group of samples into two. One portion was used to train a machine-learning algorithm to identify patterns among the fragments of cell-free DNA, relatively unique fingerprints specific to different types of cancers. They used the other portion to test the trained algorithm. The algorithm topped 80 percent accuracy in translating the results of a liquid biopsy into both a cancer diagnosis and the specific type of cancer afflicting a patient.
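
The overall train/test workflow the team describes resembles the generic sketch below; the data here is synthetic and the model choice is an assumption for illustration, not the algorithm used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier   # placeholder model choice
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))    # stand-in for per-sample cell-free DNA fragment features
y = rng.integers(0, 4, size=500)  # stand-in for cancer-type labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)                  # train on one portion
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # test on the other
```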

In addition, the machine learning approach was able to tell apart two subtypes of prostate cancer: the most common version, adenocarcinoma, and a swift-progressing variant called neuroendocrine prostate cancer (NEPC) that is resistant to standard treatment approaches. Because NEPC is often difficult to distinguish from adenocarcinoma, but requires aggressive action, it puts oncologists like Lang and Sharifi in a bind.


"Currently, the only way to diagnose NEPC is via a needle biopsy of a tumor site, and it can be difficult to get a conclusive answer from this approach, even if we have a high clinical suspicion for NEPC," Sharifi says.

Liquid biopsies have advantages, Sperger adds, in that you don't have to know which tumor site to biopsy at, and it is much easier for the patient to get a standard blood draw.

The blood samples were processed using cell-free DNA sequencing technology marketed by Iowa-based Integrated DNA Technologies. Using standard panels like those currently in the clinic is a departure, one that can reduce the time and cost of testing, from other methods of fragmentomic analysis of cancer DNA in blood samples.

"Most commercial panels have been developed around the most important cancer genes that indicate certain drugs for treatment, and they sequence those select genes," says Zhao. "What we've shown is that we can use those same panels and same targeted genes to look at the fragmentomics of the cell-free DNA in a blood sample and identify the type of cancer a patient has."

The UW Carbone Cancer Center's Circulating Biomarker Core and Biospecimen Disease-Oriented Team contributed to the collection of the study's hundreds of patient samples.

This research was funded in part by grants from the National Institutes of Health (DP2 OD030734, 1UH2CA260389 and R01CA247479) and the Department of Defense (PC190039, PC200334 and PC180469).

Written by Chris Barncard

Link to original story: https://news.wisc.edu/algorithmic-blood-test-analysis-will-ease-diagnosis-of-cancer-types-guide-treatment/


Seismologists use deep learning to forecast earthquakes – University of California, Santa Cruz

For more than 30 years, the models that researchers and government agencies use to forecast earthquake aftershocks have remained largely unchanged. While these older models work well with limited data, they struggle with the huge seismology datasets that are now available.

To address this limitation, a team of researchers at the University of California, Santa Cruz and the Technical University of Munich created a new model that uses deep learning to forecast aftershocks: the Recurrent Earthquake foreCAST (RECAST). In a paper published today in Geophysical Research Letters, the scientists show how the deep learning model is more flexible and scalable than the earthquake forecasting models currently used.

The new model outperformed the current model, known as the Epidemic Type Aftershock Sequence (ETAS) model, for earthquake catalogs of about 10,000 events and greater.
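
For orientation, a common textbook form of the ETAS model's temporal conditional intensity is shown below; the exact parameterization used in the paper may differ. Here mu is the background seismicity rate, the sum runs over past events of magnitude M_i at times t_i above a cutoff magnitude M_c, and K, alpha, c and p are fitted parameters.

```latex
\lambda(t \mid \mathcal{H}_t) = \mu + \sum_{i:\, t_i < t} \frac{K \, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}}
```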

"The ETAS model approach was designed for the observations that we had in the '80s and '90s when we were trying to build reliable forecasts based on very few observations," said Kelian Dascher-Cousineau, the lead author of the paper who recently completed his PhD at UC Santa Cruz. "It's a very different landscape today. Now, with more sensitive equipment and larger data storage capabilities, earthquake catalogs are much larger and more detailed."

"We've started to have million-earthquake catalogs, and the old model simply couldn't handle that amount of data," said Emily Brodsky, a professor of earth and planetary sciences at UC Santa Cruz and co-author on the paper. In fact, one of the main challenges of the study was not designing the new RECAST model itself but getting the older ETAS model to work on huge data sets in order to compare the two.

"The ETAS model is kind of brittle, and it has a lot of very subtle and finicky ways in which it can fail," said Dascher-Cousineau. "So, we spent a lot of time making sure we weren't messing up our benchmark compared to actual model development."

To continue applying deep learning models to aftershock forecasting, Dascher-Cousineau says the field needs a better system for benchmarking. In order to demonstrate the capabilities of the RECAST model, the group first used an ETAS model to simulate an earthquake catalog. After working with the synthetic data, the researchers tested the RECAST model using real data from the Southern California earthquake catalog.

They found that the RECAST model, which can, essentially, learn how to learn, performed slightly better than the ETAS model at forecasting aftershocks, particularly as the amount of data increased. The computational effort and time were also significantly better for larger catalogs.

This is not the first time scientists have tried using machine learning to forecast earthquakes, but until recently, the technology was not quite ready, said Dascher-Cousineau. New advances in machine learning make the RECAST model more accurate and easily adaptable to different earthquake catalogs.

The model's flexibility could open up new possibilities for earthquake forecasting. With the ability to adapt to large amounts of new data, models that use deep learning could potentially incorporate information from multiple regions at once to make better forecasts about poorly studied areas.

"We might be able to train on New Zealand, Japan, California and have a model that's actually quite good for forecasting somewhere where the data might not be as abundant," said Dascher-Cousineau.

Using deep-learning models will also eventually allow researchers to expand the type of data they use to forecast seismicity.

"We're recording ground motion all the time," said Brodsky. "So the next level is to actually use all of that information, not worry about whether we're calling it an earthquake or not an earthquake, but to use everything."

In the meantime, the researchers hope the model sparks discussions about the possibilities of the new technology.

"It has all of this potential associated with it," said Dascher-Cousineau. "Because it is designed that way."


How Can Hybrid Machine Learning Techniques Help With Effective … – Dataconomy

As in many other areas of our lives, hybrid machine learning techniques can help us with effective heart disease prediction. So how can the technology of our time, machine learning, be used to improve the quality and length of human life?

Heart disease stands as one of the foremost global causes of mortality today, presenting a critical challenge in clinical data analysis. Machine learning, a field highly effective at processing vast volumes of healthcare data, is increasingly promising for effective heart disease prediction, particularly when hybrid techniques are leveraged.

According to the World Health Organization, heart disease takes an estimated 17.9 million lives each year. Although many developments in the field of medicine have succeeded in reducing the death rate of heart diseases in recent years, we are failing in the early diagnosis of these diseases. The time has come for us to treat ML and AI algorithms as more than simple trends.

However, effective heart disease prediction proves complex due to various contributing risk factors such as diabetes, high blood pressure, and abnormal pulse rates. Several data mining and neural network techniques have been employed to gauge the severity of heart disease, but predicting it in advance is a different matter.

This ailment is often subclinical, and that's why experts recommend check-ups twice a year for anyone over the age of 30. But let's face it: human beings are lazy and look for the simplest way to do things. How hard can it be to accept an effective technological medical innovation into our lives at a time when we can do our weekly shopping at home with a single voice command?

Heart disease is one of the leading causes of death worldwide and is a significant public health concern. The deadliness of heart disease depends on various factors, including the type of heart disease, its severity, and the individual's overall health. But does that mean we are left without any preventative method? Is there any way to find out before it happens to us?

The speed of technological development has reached a level we never could have imagined, especially in the last three years. This technological journey of humanity, which started with the slow integration of IoT systems such as Alexa into our lives, peaked in the last quarter of 2022 with the rise in the prevalence and use of ChatGPT and other LLM models. We are no longer far from the concepts of AI and ML, and these products are preparing to become the hidden power behind medical prediction and diagnostics.

Hybrid machine learning techniques can help with effective heart disease prediction by combining the strengths of different machine learning algorithms and utilizing them in a way that maximizes their predictive power.

Hybrid techniques can help in feature engineering, which is an essential step in machine learning-based predictive modeling. Feature engineering involves selecting and transforming relevant variables from raw data into features that can be used by machine learning algorithms. By combining different techniques, such as feature selection, feature extraction, and feature transformation, hybrid machine learning techniques can help identify the most informative features that contribute to effective heart disease prediction.
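
To make this concrete, here is a minimal scikit-learn sketch of such a combination: scaling as the transformation step, univariate selection as the selection step, and PCA as the extraction step, ahead of a simple classifier. The synthetic data and the particular components are illustrative assumptions, not taken from any study cited in this article.

```python
# Sketch: combining feature transformation, selection, and extraction
# in one scikit-learn pipeline. The dataset and components are
# illustrative placeholders, not taken from the cited studies.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a clinical table (age, blood pressure, cholesterol, ...)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                # feature transformation
    ("select", SelectKBest(f_classif, k=12)),   # feature selection
    ("extract", PCA(n_components=6)),           # feature extraction
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```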

The choice of an appropriate model is critical in predictive modeling. Hybrid machine learning techniques excel in model selection by amalgamating the strengths of multiple models. By combining, for example, a decision tree with a support vector machine (SVM), these hybrid models leverage the interpretability of decision trees and the robustness of SVMs to yield superior predictions in medicine.
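
One plausible reading of such a decision tree plus SVM hybrid is a stacked ensemble in which both base learners feed a simple meta-learner. The sketch below assumes that interpretation and uses synthetic data; it is not the construction of any specific study.

```python
# Sketch: a decision tree + SVM hybrid built as a stacking ensemble.
# One possible reading of the hybrid described in the text, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

hybrid = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner combines both views
)
hybrid.fit(X_train, y_train)
print(f"Test accuracy: {hybrid.score(X_test, y_test):.3f}")
```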

Model ensembles, formed by merging predictions from multiple models, are another avenue where hybrid techniques shine. The synergy of diverse models often surpasses individual model performance, resulting in more accurate heart disease predictions. For instance, a hybrid ensemble uniting a random forest with a gradient-boosting machine leverages both models' strengths to increase prediction accuracy for heart disease.
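
A soft-voting ensemble is one straightforward way to merge the two models' predictions; the sketch below assumes that design and runs on synthetic data.

```python
# Sketch: soft-voting ensemble of a random forest and a gradient-boosting
# machine. The data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities from both models
)
print(f"Mean CV accuracy: {cross_val_score(ensemble, X, y, cv=5).mean():.3f}")
```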

Dealing with missing values is a common challenge in medical data analysis. Hybrid machine learning techniques prove beneficial by combining imputation strategies like mean imputation, median imputation, and statistical model-based imputation. This amalgamation helps mitigate the impact of missing values on predictive accuracy.
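
As an illustration, different imputers can be routed to different columns before classification; the column split, data, and imputer choices below are assumptions made for the sake of the example.

```python
# Sketch: combining simple and model-based imputation for different
# groups of columns before classification. Column indices are illustrative.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan  # inject missing values

imputer = ColumnTransformer([
    ("median", SimpleImputer(strategy="median"), [0, 1, 2]),  # e.g. skewed labs
    ("model", IterativeImputer(random_state=0), [3, 4, 5]),   # e.g. correlated vitals
])

pipe = Pipeline([("impute", imputer), ("clf", LogisticRegression())])
pipe.fit(X, y)
print(f"Training accuracy: {pipe.score(X, y):.3f}")
```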

The proliferation of large datasets poses challenges related to high-dimensional data. Hybrid approaches address this challenge by fusing dimensionality reduction techniques like principal component analysis (PCA), independent component analysis (ICA), and singular value decomposition (SVD) with machine learning algorithms. This results in reduced data dimensionality, enhancing model interpretability and prediction accuracy.
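
One way to pair such reduction methods with a learning algorithm is to fuse their projections into a single feature space before classification; the sketch below assumes that design, using PCA and ICA on synthetic high-dimensional data.

```python
# Sketch: fusing PCA and ICA projections before a classifier, as one way
# to combine dimensionality reduction with a learning algorithm.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           random_state=0)

reduce = FeatureUnion([
    ("pca", PCA(n_components=10)),
    ("ica", FastICA(n_components=10, random_state=0, max_iter=1000)),
])

pipe = Pipeline([("scale", StandardScaler()), ("reduce", reduce),
                 ("clf", SVC())])
print(f"Mean CV accuracy: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```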

Traditional machine learning algorithms may falter when dealing with non-linear relationships between variables. Hybrid techniques tackle this issue effectively by amalgamating methods such as polynomial feature engineering, interaction term generation, and the application of recursive neural networks. This amalgamation captures non-linear relationships, thus improving predictive accuracy.
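
For example, polynomial and interaction features can let an otherwise linear classifier capture curved decision boundaries; the sketch below is a toy illustration of that idea, not a clinical model.

```python
# Sketch: capturing non-linear structure with polynomial and interaction
# features feeding a linear classifier. Synthetic data stand in for
# real clinical variables.
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

pipe = Pipeline([
    ("poly", PolynomialFeatures(degree=3, include_bias=False)),  # x1*x2, x1**2, ...
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(f"Mean CV accuracy: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```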

Hybrid machine learning techniques enhance model interpretability by combining methodologies that shed light on the model's decision-making process. For example, a hybrid model coupling a decision tree with a linear model offers interpretability akin to decision trees alongside the statistical significance provided by linear models. This comprehensive insight aids in better understanding and trustworthiness of heart disease predictions.
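
As a rough illustration of that pairing, the sketch below fits a shallow decision tree (readable rules) and a logistic regression (coefficient-based evidence) side by side on the same standardized features; how the two views are combined in a real hybrid is a design choice, and everything here is synthetic.

```python
# Sketch: inspecting the same data through a shallow decision tree
# (human-readable rules) and a logistic regression (coefficients).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
X_std = StandardScaler().fit_transform(X)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_std, y)
logit = LogisticRegression().fit(X_std, y)

print(export_text(tree, feature_names=[f"f{i}" for i in range(5)]))
print("Logistic coefficients:", logit.coef_.round(2))
```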

Multiple studies have explored heart disease prediction using hybrid machine learning techniques. One such novel method, designed to enhance prediction accuracy, incorporates a combination of hybrid machine learning techniques to identify significant features for cardiovascular disease prediction.

Mohan, Thirumalai, and Srivastava propose a novel method for heart disease prediction that uses a hybrid of machine learning techniques. The method first uses a decision tree algorithm to select the most significant features from a set of patient data.

The researchers compared their method to other machine learning methods for heart disease prediction, such as logistic regression and naive Bayes. They found that their method outperformed these other methods in terms of accuracy.

The decision tree algorithm used to select features is called the C4.5 algorithm. This algorithm is a popular choice for feature selection because it is relatively simple to understand and implement, and it has been shown to be effective in a variety of applications, including effective heart disease prediction.

The SVM classifier used to predict heart disease is a type of machine learning algorithm that is known for its accuracy and robustness. SVM classifiers work by finding a hyperplane that separates the data points into two classes. In the case of heart disease prediction, the two classes are patients with heart disease and patients without heart disease.
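
The select-then-classify structure described above can be approximated in scikit-learn, with the caveat that its trees implement CART rather than C4.5, so the sketch below is an analogy to the authors' pipeline rather than a reproduction of their method, and it runs on synthetic data.

```python
# Sketch: tree-based feature selection followed by an SVM classifier.
# Note: scikit-learn's trees use CART, not C4.5, so this only approximates
# the structure of the method described above.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=13, n_informative=6,
                           random_state=0)

pipe = Pipeline([
    ("select", SelectFromModel(DecisionTreeClassifier(random_state=0))),
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf")),
])
print(f"Mean CV accuracy: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```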


The researchers suggest that their method could be used to develop a clinical decision support system for the early detection of heart disease. Such a system could help doctors to identify patients who are at high risk of heart disease and to provide them with preventive care.

The authors' method has several advantages over other machine learning methods for effective heart disease prediction. First, it is more accurate. Second, it is more robust to noise in the data. Third, it is more efficient to train and deploy.

The authors' method is still under development, but it has the potential to be a valuable tool for the early detection of heart disease. The authors plan to further evaluate their method on larger datasets and to explore ways to improve its accuracy.

In addition to the advantages mentioned by the authors, their method also has the following advantages:

The authors evaluated their method on a dataset of 13,000 patients. The dataset included information about the patients' age, sex, race, smoking status, blood pressure, cholesterol levels, and other medical history. The authors found that their method was able to predict heart disease with an accuracy of 87.2%.

In another study, conducted by Bhatt, Patel, Ghetia, and Mazzero in 2023 to investigate the use of machine learning (ML) techniques for effective heart disease prediction, the researchers used a dataset of 1,000 patients with heart disease and 1,000 patients without heart disease. They used four different ML techniques: decision trees, support vector machines, random forests, and neural networks.

The researchers found that all four ML techniques were able to predict heart disease with a high degree of accuracy. The decision tree algorithm had the highest accuracy, followed by the support vector machines, random forests, and neural networks.

The researchers also found that the accuracy of the ML techniques was improved when they were used in combination with each other. For example, the decision tree algorithm combined with the support vector machines had the highest accuracy of all the models.

The study's findings suggest that ML techniques can be used as an effective tool for predicting heart disease. The researchers believe that these techniques could be used to develop early detection and prevention strategies for heart disease.

In addition to the findings mentioned above, the study also found that the following factors were associated with an increased risk of heart disease:

The study's findings highlight the importance of early detection and prevention of heart disease. By identifying people who are at risk for heart disease, we can take steps to prevent them from developing the disease.

The study is limited by its small sample size. However, the findings are promising and warrant further research. Future studies should be conducted with larger sample sizes to confirm the findings of this study.

Predicting heart disease using hybrid machine learning techniques is an evolving field with several challenges and promising future directions.

One of the primary challenges is obtaining high-quality and sufficiently large datasets for training hybrid models. This involves collecting diverse patient data, including clinical, genetic, and lifestyle factors. Choosing the most relevant features from a large pool is crucial. Hybrid techniques aim to combine different feature selection methods to enhance prediction accuracy.

Deciding which machine learning algorithms to use in hybrid models is critical. Researchers often experiment with various algorithms like random forest, K-nearest neighbor, and logistic regression to find the best combination. Interpreting hybrid model predictions can be challenging due to their complexity. Ensuring transparency and interpretability is essential for clinical acceptance.

The class distribution in heart disease datasets can be imbalanced, with fewer positive cases. Addressing this imbalance is vital for accurate predictions. Ensuring that hybrid models also generalize well to unseen data is a constant concern. Techniques like cross-validation and robust evaluation methods are crucial.
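
A minimal sketch of those two concerns, assuming a synthetic 9:1 class imbalance: class weights counteract the skew during training, and stratified cross-validation keeps the rare positive class represented in every fold.

```python
# Sketch: handling class imbalance with class weights and evaluating with
# stratified cross-validation. The 9:1 imbalance here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, n_features=13, weights=[0.9, 0.1],
                           random_state=0)

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Mean ROC AUC: {scores.mean():.3f}")
```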

Future directions in effective heart disease prediction using hybrid machine learning techniques encompass several key areas.

A prominent trajectory in the field involves the customization of treatment plans based on individual patient profiles, a trend that continues to gain momentum. Hybrid machine learning models are poised to play a pivotal role in this endeavor by furnishing personalized risk assessments. This approach holds great promise for tailoring interventions to patients' unique needs and characteristics, potentially improving treatment outcomes.

The integration of multi-omics data, including genomics, proteomics, and metabolomics, with clinical information represents a compelling avenue for advancing effective heart disease prediction. By amalgamating these diverse data sources, hybrid model techniques can generate more accurate predictions. This holistic approach has the potential to provide deeper insights into the underlying mechanisms of heart disease and enhance predictive accuracy.

As the complexity of hybrid machine learning models increases, ensuring that these models are interpretable and provide transparent explanations for their predictions becomes paramount. The development of hybrid models that offer interpretable explanations can significantly enhance their clinical utility. Healthcare professionals can better trust and utilize these models in decision-making processes, ultimately benefiting patient care.

Another promising direction involves the integration of real-time patient data streams with hybrid models. This approach enables continuous monitoring of patients, facilitating early detection and intervention in cases of heart disease. By leveraging real-time data, hybrid models can provide timely insights, potentially preventing adverse cardiac events and improving patient outcomes.

Collaboration stands as a cornerstone for future progress in effective heart disease prediction using hybrid machine learning techniques. Effective collaboration between medical experts, data scientists, and machine learning researchers is instrumental in driving innovation. Combining domain expertise with advanced computational methods can lead to breakthroughs in hybrid models' accuracy and clinical applicability for heart disease prediction.

While heart disease prediction using hybrid machine learning techniques faces data, model complexity, and interpretability challenges, it holds promise for personalized medicine and improving patient outcomes through early detection and intervention. Collaboration and advancements in data collection and analysis methods will continue to shape the future of this field and perhaps humanity.


Excerpt from:
How Can Hybrid Machine Learning Techniques Help With Effective ... - Dataconomy