Category Archives: Machine Learning
Machine Learning in CT Imaging: Predicting COPD Progression in High-Risk Individuals – Physician’s Weekly
The following is a summary of "CT Imaging With Machine Learning for Predicting Progression to COPD in Individuals at Risk," published in the November 2023 issue of Pulmonology by Kirby et al.
People with long-term lung diseases can get high-resolution pictures of their lungs with a CT scan. Over the last few decades, a lot of work has gone into creating new quantitative CT airway measures that show when the shape of the airways isn't normal. Many observational studies have shown links between CT airway measurements and clinically important outcomes such as illness, death, and loss of lung function.
However, only a few quantitative CT airway measurements are used in clinical practice. The study reviewed the key methodological issues in quantitative CT airway analysis and examined the scientific literature on quantitative CT airway measurements used in human clinical trials, randomized trials, and observational studies.
It also discussed emerging evidence that quantitative CT imaging of the airways can be clinically useful, and what needs to happen to move the research into clinical use. CT measures of airways continue to deepen understanding of how diseases work, how to diagnose them, and how they progress. The literature review showed, however, that more research is needed to determine whether quantitative CT is helpful in therapeutic settings. High-quality evidence of clinical benefit from treatment guided by quantitative CT airway imaging, along with technical guidelines for such imaging, is still needed.
Source: sciencedirect.com/science/article/abs/pii/S0012369223003148
See original here:
Machine Learning in CT Imaging: Predicting COPD Progression in High-Risk Individuals - Physician's Weekly
Google unveils MedLM generative AI models for healthcare with HCA, Augmedix and BenchSci as early testers – FierceHealthcare
Google continues to advance its generative AI models designed specifically for healthcare use cases. This week, the tech giant unveiled MedLM, a family of foundation models designed for healthcare industry use cases and available through Google Cloud.
Google's work on generative AI models in healthcare has advanced rapidly since it rolled out Med-PaLM, a large language model designed to provide answers to medical questions, just a year ago.
The company developed two models under MedLM, built on Med-PaLM 2. The first MedLM model is larger, designed for complex tasks. The second is a medium model that can be fine-tuned and is best for scaling across tasks, according to a company blog post. The first two models are now available to U.S. Google Cloud customers via the company's Vertex AI platform.
"In the coming months, we're planning to bring Gemini-based models into the MedLM suite to offer even more capabilities," wrote Yossi Matias, vice president of engineering and research at Google, and Aashima Gupta, global director of healthcare strategy and solutions at Google Cloud, in the blog post.
Gemini is Google's newest large language model, a competitor to OpenAI and Microsoft's GPT-4.
Google says it has been working with companies to test MedLM, and those companies are now moving it into production in their solutions or broadening their testing.
For the past several months, HCA Healthcare has been piloting a solution to help physicians with their medical notes in four emergency department hospital sites. Physicians use an app developed by tech company Augmedix on a hands-free device to create accurate medical notes from clinician-patient conversations.
Augmedix, which developed technology for ambient medical documentation, was piloting Google Cloud's Med-PaLM 2 and will now integrate MedLM into its technology stack.
"Generative AI solutions for use in healthcare delivery require a more tailored and precise approach than general-purpose LLMs, which is why we value our strategic partnership with Google Cloud," said Ian Shakil, Augmedix founder, director, and chief strategy officer. "Google Cloud has established its leadership as an AI innovator with solutions specifically designed to address the needs of healthcare providers."
Augmedix uses Google Cloud's Vertex AI platform to fine-tune some models using training data created by the company's existing technology, which generates 70,000 notes per week and spans more than 30 specialties.
The company anticipates that integrating MedLM into its ambient medical documentation products will improve the quality of medical note output and provide faster turnaround time. Augmedix also plans to rapidly expand into more sub-specialties through 2024.
BenchSci, a company that uses AI to hasten drug discovery, is integrating MedLM into its ASCEND platform to further improve the speed and quality of pre-clinical research and development.
Google is also working with Deloitte to use generative AI to improve provider search, and with Accenture to leverage the tech to improve patient access, experience and outcomes.
Go here to see the original:
Google unveils MedLM generative AI models for healthcare with HCA, Augmedix and BenchSci as early testers - FierceHealthcare
How AI is expanding art history – Nature.com
The colours of Gustav Klimt's lost 1901 work Medicine were recovered by artificial intelligence. Credit: Ian Dagnall Computing/Alamy
Artificial intelligence (AI), machine learning and computer vision are revolutionizing research from medicine and biology to Earth and space sciences. Now, it's art history's turn.
For decades, conventionally trained art scholars have been slow to take up computational analysis, dismissing it as too limited and simplistic. But, as I describe in my book Pixels and Paintings, out this month, algorithms are advancing fast, and dozens of studies are now proving the power of AI to shed new light on fine-art paintings and drawings.
For example, by analysing brush strokes, colour and style, AI-driven tools are revealing how artists' understanding of the science of optics has helped them to convey light and perspective. Programs are recovering the appearance of lost or hidden artworks and even computing the meanings of some paintings, by identifying symbols, for example.
It's challenging. Artworks are complicated compositionally and materially, and are replete with human meaning: nuances that algorithms find hard to fathom.
Most art historians still rely on their individual expertise when judging artists' techniques by eye, backed up with laboratory, library and leg work to pin down dates, materials and provenance. Computer scientists, meanwhile, find it easier to analyse 2D photographs or digital images than layers of oil pigments styled with a brush or palette knife. Yet collaborations are springing up between computer scientists and art scholars.
Early successes of such computer-assisted connoisseurship fall into three categories: automating conventional "by eye" analyses; processing subtleties in images beyond what is possible through normal human perception; and introducing new approaches and classes of question to art scholarship. Such methods, especially when enhanced by digital processing of large quantities of images and text about art, are beginning to empower art scholars, just as microscopes and telescopes have done for biologists and astronomers.
Consider pose, an important property that portraitists exploit for formal, expressive and even metaphorical ends. Some artists and art movements favour specific poses. For example, during the Renaissance period in the fifteenth and sixteenth centuries, royals, political leaders and betrothed people were often painted in profile, to convey solemnity and clarity.
Primitivist artists (those lacking formal art training, such as nineteenth-century French painter Henri Rousseau, or those who deliberately emulate an untutored simplicity, such as French artist Henri Matisse in the early twentieth century) often paint everyday people face-on, to support a direct, unaffected style. Rotated or tipped poses can be powerful: Japanese masters of ukiyo-e ("pictures of the floating world"), a genre that flourished from the seventeenth to nineteenth centuries, often showed kabuki actors and geishas in twisted or contorted poses, evoking drama, dynamism, unease or sensuality.
Using AI methods, computers can analyse such poses in tens of thousands of portraits in as little as an hour, much more quickly than an art scholar can. Deep neural networks (machine-learning systems that mimic biological neural networks in brains) can detect the locations of key points, such as the tip of the nose or the corners of the eyes, in a painting. They can then accurately infer the angles of a subject's pose around three perpendicular axes, for both realistic and highly stylized portraits.
For example, earlier this year, researchers used deep neural networks to analyse poses and gender across more than 20,000 portraits, spanning a wide range of periods and styles, to help art scholars group works by era and art movement. There were some surprises: the tilts of faces and bodies in self-portraits vary with the stance of the artist, and the algorithms could tell whether the self-portraitists were right- or left-handed (J.-P. Chou and D. G. Stork Electron. Imag. 35, 211-1–211-13; 2023).
Similarly, AI tools can reveal trends in the compositions of landscapes, colour schemes, brush strokes, perspective and more across major art movements. The models are most accurate when they incorporate an art historians knowledge of factors such as social norms, costumes and artistic styles.
By-eye art analysis can vary depending on how different scholars perceive an artwork. For example, lighting is an expressive feature, from the exaggerated light–dark contrast (chiaroscuro) and gloomy style (tenebrism) of sixteenth-century Italian painter Caravaggio to the flat, graphic lighting in twentieth-century works by US artist Alex Katz. Many experiments have shown that even careful viewers are poor at estimating the overall direction of, or inconsistencies in, illumination throughout a scene. That's why the human eye is often fooled by photographs doctored by cutting and pasting a figure from one image into another, for example.
Computer methods can do better. For example, one source of information about lighting is the pattern of brightness along the outer boundary (or occluding contour) of an object, such as a face. Leonardo da Vinci understood in the fifteenth century that this contour will be bright where the light strikes it perpendicularly but darker where the light strikes it at a sharp angle. Whereas he used his optical analysis to improve his painting, shape-from-shading and occluding-contour algorithms use this rule in reverse, to infer the direction of illumination from the pattern of brightness along a contour.
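To make the inverted rule concrete, here is a minimal numpy sketch on simulated data (an illustration of the underlying Lambertian relationship, not the published algorithm): brightness along the contour is modelled as the dot product of the local surface normal and an unknown light direction, which a least-squares solve then recovers.

```python
import numpy as np

# Illustrative sketch: recover a 2D light direction from brightness
# samples along an occluding contour, assuming Lambertian shading
# (brightness ~ max(0, normal . light)). Not the published algorithm.

rng = np.random.default_rng(0)

# Surface normals along the contour (unit vectors in the image plane).
angles = np.linspace(0, np.pi, 50)
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Simulate brightness for a "true" light direction, plus noise.
true_light = np.array([0.8, 0.6])  # hypothetical illumination
brightness = np.clip(normals @ true_light, 0, None)
brightness += rng.normal(0, 0.01, size=brightness.shape)

# Least-squares inversion: solve normals @ light ~= brightness,
# using only the lit points (where brightness is clearly positive).
lit = brightness > 0.05
light, *_ = np.linalg.lstsq(normals[lit], brightness[lit], rcond=None)
light /= np.linalg.norm(light)

print("estimated light direction:", light)  # close to [0.8, 0.6]
```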
Leonardo da Vinci understood that an object will appear bright where light strikes it perpendicularly, and dim where rays fall at a glancing angle.Credit: Alamy
Take Johannes Vermeer's 1665 painting Girl with a Pearl Earring, for example. Illumination analysis considers highlights in the girl's eyes, reflection from the pearl and the shadow cast by her nose and across the face. The occluding-contour algorithm gives a more complete understanding of lighting in this tableau, revealing Vermeer's extraordinary consistency in lighting and proving that this character study was executed with a model present (M. K. Johnson et al. Proc. SPIE 6810, 68100I; 2008).
Similarly, advanced computer methods can spot deliberate lighting inconsistencies in works such as those by twentieth-century Belgian surrealist René Magritte. They have also proved their worth in debunking theories, such as UK artist David Hockney's bold hypothesis from 2000 that some painters as early as Jan van Eyck (roughly 1390–1441) secretly used optical projections for their works, a quarter of a millennium earlier than most scholars think optics were used in this way (see Nature 412, 860; 2001). Occluding-contour analysis, homographic analysis (quantification of differences in 3D shapes at various sizes and pose angles), optical-ray tracing and other computational techniques have systematically overturned Hockney's theory much more conclusively than have arguments put forth by other scholars using conventional art-historical methods.
Computer methods have also recovered missing attributes or portions of incomplete artworks, such as the probable style and colours of "ghost paintings": works that have been painted over and are later revealed by X-ray or infrared imaging, such as Two Wrestlers by Vincent van Gogh. This painting, from before 1886, was mentioned by the artist in a letter but considered lost until it was found beneath another in 2012.
Neural networks, trained on images and text data, have also been used to recover the probable colours of parts of Gustav Klimt's lost ceiling painting, Medicine (see go.nature.com/47rx8c2). The original, a representation of the interweaving of life and death presented to the University of Vienna in 1901, was lost during the Second World War, when the castle in which it was kept for safety was burnt down by Nazis to prevent the work from falling into the hands of Allied powers. Only preparatory sketches and photographs remain.
Even more complex was the digital recovery of missing parts of Rembrandt's The Night Watch (1642), which was trimmed to fit into a space in Amsterdam's city hall, on the basis of a contemporary copy by Gerrit Lundens in oil on an oak panel. The algorithms learnt how Lundens's copy deviated slightly from Rembrandt's original, and corrected for those deviations to recreate the missing parts of the original (see go.nature.com/46wvzmj).
Algorithms have inferred the direction of lighting in Johannes Vermeer's painting Girl with a Pearl Earring (1665) from the bright edge of the girl's face. Credit: Pictures From History/UIG/Getty
To realize the full power of AI in the study of art, we will need the same foundations as other domains: access to immense data sets and computing power. Museums are placing ever more art images and supporting information online, and enlightened funding could accelerate ongoing efforts to collect and organize such data for research.
Scholars anticipate that much recorded information about artworks will one day be available for computation: ultra-high-resolution images of every major artwork (and innumerable lesser ones), images taken using the extended electromagnetic spectrum (X-ray, ultraviolet, infrared), chemical and physical measurements of pigments, and every word written and lecture video recorded about art in every language. After all, AI advances such as the chatbot ChatGPT and the image generator DALL-E have been trained with nearly a terabyte of text and almost one billion images from the web, and extensions under way will use data sets many times larger.
But how will art scholars use existing and future computational tools? Here is one suggestion. Known artworks from the Western canon alone that have been lost to fire, flood, earthquakes or war would fill the walls of every public museum in the world. Some of them, such as Diego Velázquez's Expulsion of the Moriscos (1627), were considered the pinnacle of artistic achievement before they were destroyed. Tens of thousands of paintings were lost in the Second World War, and a similar number of Chinese masterpieces in Mao Zedong's Cultural Revolution, to mention just two examples. The global cultural heritage is impoverished and incomplete as a result.
Computation allows art historians to view the task of recovering the appearance of lost artworks as a problem of information retrieval and integration, in which the data on a lost work lie in surviving preparatory sketches, copies by the artist and their followers, and written descriptions. The first tentative steps in recovering lost artworks have shown promise, although much work lies ahead.
Art scholarship has expanded over centuries, through the introduction of new tools. Computation and AI seem poised to be the next step in the never-ending intellectual adventure of understanding and interpreting our immense cultural heritage.
See more here:
How AI is expanding art history - Nature.com
Google’s DeepMind AI can make better weather forecasts than supercomputers – Livescience.com
Google DeepMind has developed a machine learning algorithm that it claims can predict the weather more accurately than current forecasting methods that use supercomputers.
Google's model, dubbed GraphCast, generated a more accurate 10-day forecast than the High Resolution Forecast (HRES) system run by the European Centre for Medium-Range Weather Forecasts (ECMWF), making predictions in minutes rather than hours. Google DeepMind brands HRES the current "gold standard" weather simulation system.
GraphCast, which can run on a desktop computer, outperformed the ECMWF on more than 99% of weather variables in 90% of the 1,300 test regions, according to findings published Nov. 14 in the journal Science.
But researchers say it is not flawless, because results are generated in a "black box" (meaning the AI cannot explain how it found a pattern or show its workings), and that it should be used to complement rather than replace established tools.
Forecasting today relies on plugging data into complex physical models and using supercomputers to run simulations. The accuracy of these predictions relies on granular details within the models, and they are energy-intensive and expensive to run.
But machine learning weather models can operate more cheaply because they need less computing power and work faster. For the new AI model, researchers trained GraphCast on 38 years' worth of global weather readings up to 2017. The algorithm established patterns between variables such as air pressure, temperature, wind and humidity that not even the researchers understood.
After this training, the model extrapolated forecasts from global weather estimates made in 2018 to make 10-day forecasts in less than a minute. Running GraphCast alongside the ECMWF's high-resolution forecast, which uses more conventional physical models to make predictions, the scientists found that GraphCast gave more accurate predictions on more than 90% of the 12,000 data points used.
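The forecasts are produced autoregressively: each prediction is fed back as the input for the next step. Here is a toy sketch of that rollout loop (the "model" below is a stand-in function, not GraphCast; the real system advances a global weather state in six-hour increments):

```python
import numpy as np

# Toy autoregressive rollout, illustrating how a learned one-step
# weather model can be chained into a 10-day forecast. The model_step
# function is a stand-in for the trained network, not GraphCast itself.

def model_step(state: np.ndarray) -> np.ndarray:
    """Stand-in for a learned six-hour update (hypothetical dynamics)."""
    return 0.99 * state + 0.01 * np.roll(state, 1)

state = np.random.default_rng(0).normal(size=128)  # initial analysis
forecast = []
for _ in range(40):              # 40 six-hour steps = 10 days
    state = model_step(state)    # prediction becomes the next input
    forecast.append(state)

print(len(forecast), "steps rolled out")
```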
GraphCast can also predict extreme weather events, such as heatwaves, cold spells and tropical storms. When Earth's upper atmospheric layers were removed, leaving only the lowest layer, the troposphere, where the weather events that affect humans are most prominent, the accuracy shot up to more than 99%.
"In September, a live version of our publicly available GraphCast model, deployed on the ECMWF website, accurately predicted about nine days in advance that Hurricane Lee would make landfall in Nova Scotia," Rmi Lam, a research engineer at DeepMind, wrote in a statement. "By contrast, traditional forecasts had greater variability in where and when landfall would occur, and only locked in on Nova Scotia about six days in advance."
Despite the model's impressive performance, scientists don't see it supplanting currently used tools anytime soon. Regular forecasts are still needed to verify and set the starting data for any prediction, and as machine learning algorithms produce results they cannot explain, they can be prone to errors or "hallucinations."
Instead, AI models could complement other forecast methods and generate faster predictions, the researchers said. They can also help scientists see shifts in climate patterns over time and get a clearer view of the bigger picture.
"Pioneering the use of AI in weather forecasting will benefit billions of people in their everyday lives. But our wider research is not just about anticipating weather it's about understanding the broader patterns of our climate," Lam wrote. "By developing new tools and accelerating research, we hope AI can empower the global community to tackle our greatest environmental challenges."
Read more:
Google's DeepMind AI can make better weather forecasts than supercomputers - Livescience.com
Machine Learning Methods May Improve Brain Tumor … – HealthITAnalytics.com
November 21, 2023 - Researchers from University of Florida (UF) Health have demonstrated that a combination of machine learning (ML) and liquid chromatography-high resolution mass spectrometry (LC-HRMS) can help make brain tumor evaluations more efficient.
The research was published in the Journal of the American Society for Mass Spectrometry, detailing how these tools can refine the metabolomic and lipidomic characterization of meningioma tumors. While these are a common type of brain tumor, accurately assessing them is critical to prevent adverse outcomes.
Meningioma tumors are classified into three categories: grade I, grade II, and grade III. Grade I tumors are typically slow-growing and less threatening, so treatment focuses on tumor removal and follow-up monitoring for the patient. Grade III tumors are more aggressive, requiring both removal and radiation treatment.
Grade II tumors present a challenge for clinicians.
"Grade II tumors are the gray zone," said study co-author Jesse L. Kresak, MD, a clinical associate professor in the UF College of Medicine's department of pathology, immunology and laboratory medicine, in the press release. "Do we take [the tumor] out and watch to see if it comes back? Or do we also irradiate the area with the idea of preventing a recurrence?"
This dilemma led the researchers to pursue an approach that improves meningioma tumor evaluation and better guides clinicians' treatment decisions.
To achieve this, the research team analyzed 85 meningioma samples, obtaining chemical profiles of each tumor's small molecules and fats. Doing so allowed the researchers to more precisely characterize differences between grades of tumors and identify potential biomarkers that would be helpful for diagnosis.
Initially, the research team had not planned to incorporate ML into their study, instead opting to analyze the byproducts of metabolism within the tumor cells, which would have yielded a chemical fingerprint that would help differentiate between benign and malignant tumors.
However, the researchers realized that the incorporation of ML could help provide additional insights.
"After talking about it, we knew that machine learning could be a good opportunity to find things that we wouldn't be able to find ourselves," explained Timothy J. Garrett, PhD, a co-author of the paper, an associate professor in the College of Medicine's department of pathology, immunology and laboratory medicine, and a UF Health Cancer Center member.
Using ML made the tumor evaluation process significantly more efficient. Kresak noted that when she is diagnosing a meningioma tumor, she can assess approximately 20 data points in ten minutes. With ML, 17,000 data points were analyzed in less than a second.
Incorporating ML did not lead to significant dips in accuracy. Of the models tested, one classified the grades of tumors with 87 percent initial accuracy, which the researchers indicated could be improved with the addition and analysis of more samples.
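As a generic illustration of the kind of model involved (a sketch on synthetic stand-in data, not the study's actual pipeline), a classifier can be trained on a matrix of per-sample chemical features, with the article's 85 samples and roughly 17,000 data points setting the scale:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Generic sketch: tumor-grade classification from high-dimensional
# metabolomic features. Synthetic stand-in data, not the UF dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 17_000))   # 85 samples x 17,000 features
y = rng.integers(0, 3, size=85)     # grades I-III (random labels here)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```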
The research team noted that their findings may be useful for meningioma diagnosis and treatment, as tumors can be reclassified after initial pathologist assessment based on new information about the samples genetic makeup.
"We are further understanding different tumors by using these tools. It's a way to help us get the right treatment for our patients," Kresak said.
The research is just one example of how health systems are investigating the use of data analytics and artificial intelligence (AI) to bolster oncology.
This month, the University of Texas MD Anderson Cancer Center established its Institute for Data Science in Oncology (IDSO), designed to transform cancer care through the application of clinical expertise and data science.
IDSO is set to focus on collaboration among stakeholders in medicine, science, academia, and industry in an effort to tackle cancer patients' most urgent needs.
The institute will support enhanced data generation, collection, and management at MD Anderson, leading to advances in personalized care and patient experience.
See the original post here:
Machine Learning Methods May Improve Brain Tumor ... - HealthITAnalytics.com
Who said what: using machine learning to correctly attribute quotes – The Guardian
Engineering blog
Today's blog does not come to you from any developer in product and engineering but from our talented colleagues in data and insight. Here, the Guardian's data scientists share how they have teamed up with PhD students from University College London to train a machine learning model to accurately attribute quotes. Below, the two teams explain how they've been teaching a machine to understand "who said what?"
Alice Morris, Michel Schammel, Anna Vissens, Paul Nathan, Alicja Polanska and Tara Tahseen
Tue 21 Nov 2023 06.11 EST
Why do we care so much about quotes?
As we discussed in "Talking sense: using machine learning to understand quotes", there are many good reasons for identifying quotes. Quotes enable direct transmission of information from a source, capturing precisely the intended sentiment and meaning. They are not only a vital piece of accurate reporting but can also bring a story to life. The information extracted from them can be used for fact checking and allow us to gain insights into public views. For instance, accurately attributed quotes can be used for tracking shifting opinions on the same subject over time, or to explore those opinions as a function of identity, e.g. gender or race. Having a comprehensive set of quotes and their sources is thus a rich data asset that can be used to explore demographic and socioeconomic trends and shifts.
We had already used AI to help with accurate quote extraction from the Guardian's extensive archive, and thought it could help us again for the next step of accurate quote attribution. This time, we turned to students from UCL's Centre for Doctoral Training in Data Intensive Science. As part of their PhD programme, which involves working on industry projects, we asked these students to explore deep learning options that could help with quote attribution. In particular, they looked at machine learning tools to perform a method known as coreference resolution.
What is coreference resolution?
In everyday language, when we mention the same entity multiple times, we tend to use different expressions to refer to it. The task of coreference resolution is to group together all mentions in a piece of text which refer back to the same entity. We call the original entity the "antecedent" and subsequent mentions "anaphora". In the simple example below:
Sarah enjoys a nice cup of tea in the morning. She likes it with milk.
"Sarah" is the antecedent for the anaphoric mention "She". The antecedent, the mention, or both can also be a group of words rather than a single one. So, in the example there is another group consisting of the phrase "cup of tea" and the word "it" as coreferring entities.
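As a concrete illustration, an off-the-shelf coreference model (here the open-source FastCoref library, one of the models we fine-tuned later in the project; assuming it is installed via pip) can resolve this example directly:

```python
from fastcoref import FCoref

# Minimal sketch: group coreferent mentions in the example sentence
# using a pretrained FastCoref model (downloaded on first use).
model = FCoref()

preds = model.predict(
    texts=["Sarah enjoys a nice cup of tea in the morning. She likes it with milk."]
)
print(preds[0].get_clusters())
# Expected grouping, roughly: [['Sarah', 'She'], ['a nice cup of tea', 'it']]
```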
Why is coreference resolution so hard?
You might think grouping together mentions of the same entity is a trivial task in machine learning; however, there are many layers of complexity to this problem. The task requires linking ambiguous anaphora (e.g. "she" or "the former First Lady") to an unambiguous antecedent (e.g. "Michelle Obama") which may be many sentences, or even paragraphs, prior to the occurrence of the quote in question. Depending on the writing style, there may be many other entities interwoven into the text that don't refer to any mentions of interest. The fact that mentions can be several words long makes the task more difficult still.
In addition, sentiment conveyed through language is highly sensitive to the choice of words we employ. For example, look how the antecedent of the word "they" shifts in the following sentences because of the change in verb following it:
The city councilmen refused the demonstrators a permit because they feared violence.
The city councilmen refused the demonstrators a permit because they advocated violence.
(These two subtly different sentences are actually part of the Winograd schema challenge, a recognized test of machine intelligence, which was proposed as an extension of the Turing Test, a test to show whether or not a computer is capable of thinking like a human being.)
The example shows us that grammar alone cannot be relied on to solve this task; comprehending the semantics is essential. This means that rules-based methods cannot (without prohibitive difficulty) be devised to perfectly address this task. This is what prompted us to look into using machine learning to tackle the problem of coreference resolution.
Artificial Intelligence to the rescue
A typical machine learning heuristic for coreference resolution would follow steps like these (a minimal sketch follows the list):
Extract a series of mentions which relate to real-world entities
For each mention, compute a set of features
Based on those features, find the most likely antecedent for each mention
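In code, that heuristic might look like the following skeleton, where `extract_mentions`, `embed`, and `score_pair` are placeholders for the learned components described next:

```python
from typing import Callable, Sequence

# Skeleton of a mention-ranking coreference heuristic. The callables
# passed in stand for the learned components a real system provides.

def resolve(text: str,
            extract_mentions: Callable[[str], Sequence[str]],
            embed: Callable[[str], list[float]],
            score_pair: Callable[[list[float], list[float]], float]):
    mentions = extract_mentions(text)          # step 1: find mentions
    features = [embed(m) for m in mentions]    # step 2: compute features
    links = {}
    for i in range(1, len(mentions)):          # step 3: best antecedent
        scores = [score_pair(features[j], features[i]) for j in range(i)]
        best = max(range(i), key=lambda j: scores[j])
        links[mentions[i]] = mentions[best]
    return links
```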
The AI workhorse to carry out those steps is a language model. In essence, a language model is a probability distribution over a sequence of words. Many of you have probably come across OpenAI's ChatGPT, which is powered by a large language model.
In order to analyse language and make predictions, language models create and use word embeddings. Word embeddings are essentially mappings of words to points in a semantic space, where words with similar meaning are placed close together. For example, the points corresponding to "cat" and "lion" would be closer together than the points corresponding to "cat" and "piano".
Identical words with different meanings ("[river] bank" vs "bank [financial institution]", for example) are used in different contexts and will thus occupy different locations in the semantic space. This distinction is crucial in more sophisticated examples, such as the Winograd schema. These embeddings are the "features" mentioned in the recipe above.
Language models use word embeddings to represent a set of text as numbers, which encapsulate contextual meaning. We can use this numeric representation to conduct analytical tasks; in our case, coreference resolution. We show the language model lots of labelled examples (see later) which, in conjunction with the word embeddings, train the model to identify coreferent mentions when it is shown text it hasn't seen before, based on the meaning of that text.
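For instance, here is a minimal sketch using spaCy (assuming the `en_core_web_md` model, which ships with word vectors, is installed):

```python
import spacy

# Word-embedding similarity: semantically related words sit closer
# together in the vector space than unrelated ones.
nlp = spacy.load("en_core_web_md")  # medium model includes vectors

cat, lion, piano = nlp("cat"), nlp("lion"), nlp("piano")
print("cat ~ lion: ", cat.similarity(lion))    # higher
print("cat ~ piano:", cat.similarity(piano))   # lower
```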
For this task, we chose language models built by ExplosionAI as they fitted well with the Guardian's current data science pipeline. To use them, however, they needed to be properly trained, and to do that we needed the right data.
Training the model using labelled data
An AI model can be taught by presenting it with numerous labelled examples illustrating the task we would like it to complete. In our case, this involved first manually labelling over a hundred Guardian articles, drawing links between ambiguous mentions/anaphora and their antecedent.
Though this may not seem the most glamorous task, the performance of any model is bottlenecked by the quality of the data it is given, and hence the data-labelling stage is crucial to the value of the final product. Due to the complex nature of language and the resulting subjectivity of the labelling, there were many intricacies to this task which required a rule set to be devised to standardise the data across human annotators. So, a lot of time was spent with Anna, Michel and Alice on this stage of the project; and we were all thankful when it was complete!
Although tremendously information-rich and time-consuming to produce, one hundred annotated articles were still insufficient to fully capture the variability of language that a chosen model would encounter. So, to maximise the utility of our small dataset, we chose three off-the-shelf language models, namely Coreferee, spaCy's coreference model and FastCoref, that had already been trained on hundreds of thousands of generic examples. Then we fine-tuned them to adapt to our specific requirements by using our annotated data.
This approach enabled us to produce models that achieved greater precision on the Guardian-specific data compared with using the models straight out of the box.
These models should allow matching of quotes with sources from Guardian articles on a highly automated basis with a greater precision than ever before. The next step is to run a large-scale test on the Guardian archive and to see what journalistic questions this approach can help us answer.
Continued here:
Who said what: using machine learning to correctly attribute quotes - The Guardian
Revolutionizing Diagnostics: Machine Learning Unleashes the … – Spectroscopy Online
A recent study published in Applied Spectroscopy presents a new approach to biomedical diagnosis: surface-enhanced Raman spectroscopy (SERS)-based detection of micro-RNA (miRNA) biomarkers, evaluated through a comparative study of interpretable machine learning (ML) algorithms (1). Led by Joy Q. Li of Duke University, the research team introduced a multiplexed SERS-based nanosensor, named the inverse molecular sentinel (iMS), for miRNA detection. As machine learning increasingly becomes a vital tool in spectral analysis, the researchers grappled with the high dimensionality of SERS data, a challenge for traditional ML techniques prone to overfitting and poor generalization (1).
The team explored the performance of ML methods, including a convolutional neural network (CNN), support vector regression, and extreme gradient boosting, both with and without non-negative matrix factorization (NMF) for spectral unmixing of four-way multiplexed SERS spectra from iMS assays (1). The CNN stood out for achieving high accuracy in spectral unmixing, and incorporating NMF before the CNN drastically reduced memory and training demands without compromising model performance (1).
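A rough sketch of that dimensionality-reduction step, using generic scikit-learn NMF on synthetic stand-in spectra (not the study's pipeline): the factorization yields a small set of non-negative component spectra, and each spectrum's component weights become a compact input for a downstream model such as a CNN.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for SERS spectra: 200 spectra x 1,000 wavenumbers,
# non-negative intensities (NMF requires non-negative input).
rng = np.random.default_rng(0)
spectra = rng.random((200, 1000))

# Unmix into 4 components (e.g., one per multiplexed label).
nmf = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
weights = nmf.fit_transform(spectra)   # (200, 4): per-spectrum abundances
components = nmf.components_           # (4, 1000): component spectra

print(weights.shape)  # compact features for a downstream classifier
```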
The study also used these ML models to analyze clinical SERS data from single-plexed iMS in RNA extracted from 17 endoscopic tissue biopsies. CNN and CNN-NMF, trained on multiplexed data, emerged as the top performers, demonstrating high accuracy in spectral unmixing (1).
To enhance transparency and understanding, the researchers employed gradient class activation maps and partial dependency plots to interpret the predictions. This approach not only showcases the potential of CNN-based ML in spectral unmixing of multiplexed SERS spectra, but it also underscores the significant impact of dimensionality reduction on performance and training speed (1).
This research highlights the intersection of spectroscopy and machine learning, providing new opportunities for precise and efficient diagnostics that could enhance biomedical applications and improve patient outcomes.
This article was written with the help of artificial intelligence and has been edited to ensure accuracy and clarity. You can read more about our policy for using AI here.
(1) Li, J. Q., Neng-Wang, H., Canning, A. J., et al. Surface-Enhanced Raman Spectroscopy-Based Detection of Micro-RNA Biomarkers for Biomedical Diagnosis Using a Comparative Study of Interpretable Machine Learning Algorithms. Appl. Spectrosc. 2023, ASAP. DOI: 10.1177/0037028231209053
View original post here:
Revolutionizing Diagnostics: Machine Learning Unleashes the ... - Spectroscopy Online
Tackle computer science problems using both fundamental and … – KDnuggets
Sponsored Content
The ability to use algorithms to solve real-world problems is a must-have skill for any developer or programmer. But a major challenge is diving into the vast pool of algorithms and finding the most relevant ones.
This book (50 Algorithms Every Programmer Should Know) will help you not only to develop the skills to select and use an algorithm to tackle problems in the real world but also to understand how it works.
You'll start with an introduction to algorithms and discover various algorithm design techniques before exploring how to implement different types of algorithms, with the help of practical examples. As you advance, you'll learn about linear programming, page ranking, and graphs, and will then work with machine learning algorithms to understand the math and logic behind them. Additionally, the book will delve into modern deep learning techniques, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Networks (RNNs), providing insights into their applications. The expansive realm of Generative AI and Large Language Models (LLMs) such as ChatGPT will also be explored, unraveling the algorithms, methodologies, and architectures that drive their implementation.
Case studies will show you how to apply these algorithms optimally before you focus on deep learning algorithms and learn about different types of deep learning models along with their practical use. Finally, you'll become well-versed in techniques that enable parallel processing, giving you the ability to use these algorithms for compute-intensive tasks.
By the end of this programming book, you'll have become adept at solving real-world computational problems by using a wide range of algorithms, including modern deep learning techniques.
Hurry up and grab your copy from: https://packt.link/wAk8W
Read the rest here:
Tackle computer science problems using both fundamental and ... - KDnuggets
Hyper Oracle Introduces opML, a Game-Changing Approach to Machine Learning on Ethereum – Decrypt
San Francisco, California, November 21st, 2023, Chainwire
Hyper Oracle has successfully launched its Optimistic Machine Learning (opML) in the first open-source implementation of the technology. This solution will provide a flexible and performant approach for running large machine learning (ML) models on the Ethereum blockchain. In the meantime, the protocol continues to work on introducing elements of zero-knowledge technology, such as reducing challenge period time and unlocking privacy use cases.
The current age is marked by rapid advancements in artificial intelligence (AI) and machine learning (ML). When brought onchain, AI and ML will make smart contracts smarter and enable use cases previously thought impossible. At the same time, ML will benefit from fairness, transparency, decentralization, and other advantages of onchain validity.
Current implementations face the challenge of proving the validity of computation while addressing key challenges in cost, security, and performance. Both opML and zkML have emerged as methods of verifiably proving the model used to generate a specific output.
zkML utilizes zk proofs to verify ML models. While this leverages mathematics and cryptography to provide the highest levels of security, it also limits performance.
zkML suffers from limitations in memory usage, quantization, circuit size limits, and more, which means that only small models can be implemented. Thus, onchain ML and AI computing demand another solution for the practical implementation of large ML models like GPT-3.5.
opML ports AI model inference using an optimistic verification mechanism. This allows it to offer much more enhanced performance and flexibility than zkML. This makes it capable of running a variety of ML models on the mainnet, including extremely large ones.
One of its key advantages is its low cost and high efficiency: opML does not require extensive resources for proof generation and can run a large language model (LLM) on a laptop without requiring a GPU.
Thus, it represents a promising approach to onchain AI and machine learning. It offers efficiency, scalability, and decentralization while maintaining high standards of transparency and security.
Note that while Hyper Oracle's primary focus lies on the inference of ML models, to allow for secure and efficient model computations, its current opML framework will also support the fine-tuning and training process. This makes it a versatile solution for various ML tasks.
About Hyper Oracle
Hyper Oracle is a programmable zkOracle protocol developed to make smart contracts smarter with richer data sources and more compute, including onchain AI and machine learning. Its goal is to enable a new wave of decentralized applications (dApps) by addressing the limitations of smart contracts and existing middle-layer solutions.
Hyper Oracle makes historical onchain data and compressed compute useful and verifiable with fast finality while preserving blockchain security and decentralization. All this is done with the hope of empowering developers to interact with blockchains in new and different ways.
Hyper Oracle is trusted by the Ethereum Foundation, Compound, and Uniswap. Get more information on Hyper Oracle's opML and use Stable Diffusion and LLaMa 2 in opML live.
Socials: Discord | Twitter | GitHub
COO: Jamie, The PR Genius, j.kingsley@theprgenius.com
See the original post here:
Hyper Oracle Introduces opML, a Game-Changing Approach to Machine Learning on Ethereum - Decrypt
Machine learning could improve efficiency of X-ray-guided pelvic fracture surgery – Medical Xpress
Researchers at Johns Hopkins University are leveraging the power of machine learning to improve X-ray-guided pelvic fracture surgery, an operation to treat an injury commonly sustained during car crashes.
A team of researchers from the university's Whiting School of Engineering and the School of Medicine plan to increase the efficiency of this surgery by applying the benefits of surgical phase recognition, or SPR, a cutting-edge machine learning application that involves identifying the different steps in a surgical procedure to extract valuable insights into workflow efficiency, the proficiency of surgical teams, error rates, and more.
The team presented its X-ray-based SPR-driven approach, called Pelphix, last month at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention in Vancouver.
"Our approach paves the way for surgical assistance systems that will allow surgeons to reduce radiation exposure and shorten procedure length for optimized pelvic fracture surgeries," said research team member Benjamin Killeen, a doctoral candidate in the Department of Computer Science and a member of the Advanced Robotics and Computationally AugmenteD Environments (ARCADE) Lab.
SPR lays the foundation for automated surgical assistance and skill analysis systems that promise to maximize operating room efficiency. While SPR typically analyzes full-color endoscopic videos taken during surgery, it has to date ignored X-ray imaging, the only imaging available for many procedures, such as orthopedic surgery, interventional radiology, and angiology, leaving these procedures unable to reap the benefits of SPR-enabled advancements.
Despite the rise of modern machine learning algorithms, X-ray images are still not routinely saved or analyzed because of the human hours required to process them. So to begin applying SPR to X-ray-guided procedures, the researchers first had to create their own training dataset, harnessing the power of synthetic data and deep neural networks to simulate surgical workflows and X-ray sequences based on a preexisting database of annotated CT scan images. They simulated enough data to successfully train their own machine learning-powered SPR algorithm specifically for X-ray sequences.
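A toy sketch of that sim-to-real recipe (synthetic stand-in features throughout; the actual Pelphix pipeline renders X-ray images from annotated CT scans): generate auto-labeled data from a simulated workflow, then train a phase classifier on it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sim-to-real sketch: train a surgical-phase classifier purely on
# simulated, auto-labeled data. Stand-in features, not real X-rays.
rng = np.random.default_rng(0)
N_PHASES, DIM = 4, 32

# Each simulated phase yields features around a distinct prototype,
# mimicking auto-labeled frames from a workflow simulator.
prototypes = rng.normal(size=(N_PHASES, DIM))
X_sim = np.concatenate([p + 0.5 * rng.normal(size=(500, DIM)) for p in prototypes])
y_sim = np.repeat(np.arange(N_PHASES), 500)

clf = LogisticRegression(max_iter=1000).fit(X_sim, y_sim)

# Real frames would come from cadaver or patient procedures; here we
# just check that the classifier recovers phases on held-out sim data.
X_test = np.concatenate([p + 0.5 * rng.normal(size=(50, DIM)) for p in prototypes])
y_test = np.repeat(np.arange(N_PHASES), 50)
print("held-out accuracy:", clf.score(X_test, y_test))
```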
"We simulated not only the visual appearance of images but also the dynamics of surgical workflows in X-ray to provide a viable alternative to real image sourcesand then we set out to show that this approach transfers to the real world," Killeen said.
The researchers validated their novel approach in cadaver experiments and successfully demonstrated that the Pelphix workflow can be applied to real-world X-ray-based SPR algorithms. They suggest that future algorithms use Pelphix's simulations for pretraining before fine-tuning on real image sequences from actual human patients.
The team is now collecting patient data for a large-scale validation effort.
"The next step in this research is to refine the workflow structure based on our initial results and deploy more advanced algorithms on large-scale datasets of X-ray images collected from patient procedures," Killeen said. "In the long term, this work is a first step toward obtaining insights into the science of orthopedic surgery from a big data perspective."
The researchers hope that Pelphix's success will motivate the routine collection and interpretation of X-ray data to enable further advances in surgical data science, ultimately improving the standard of care for patients.
"In some ways, modern operating theaters and the surgeries happening within them are much like the expanding universe, in that 95% of it is dark or unobservable," says senior co-author Mathias Unberath, an assistant professor of computer science, the principal investigator of the ARCADE Lab, and Killeen's advisor.
"That is, many complex processes happen during surgeries: tissue is manipulated, instruments are placed, and sometimes, errors are made andhopefullycorrected swiftly. However, none of these processes is documented precisely. Surgical data science and surgical phase recognitionand approaches like Pelphixare working to make that inscrutable 95% of surgery data observable, to patients' benefit."
More information: Benjamin D. Killeen et al, Pelphix: Surgical Phase Recognition from X-Ray Images in Percutaneous Pelvic Fixation, Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (2023). DOI: 10.1007/978-3-031-43996-4_13
Continued here:
Machine learning could improve efficiency of X-ray-guided pelvic fracture surgery - Medical Xpress