Category Archives: Machine Learning
Revolutionizing Food Analysis: The Synergy of Artificial Instruments … – Food Safety Magazine
More:
Revolutionizing Food Analysis: The Synergy of Artificial Instruments ... - Food Safety Magazine
Ironscales targets image-based email security threats with new … – SiliconANGLE News
Israeli phishing protection startup Ironscales Ltd. today announced an update to its platform capabilities aimed at bolstering defenses against the surge in image-based phishing attacks, including those using QR codes.
The Ironscales Fall 2023 release introduces sophisticated machine learning protections tailored to counter image-based threats, as well as automation features for phishing simulation testing. The update adds protection against quishing, or QR code phishing, business email compromise, and image-based attacks that bypass conventional language-processing defenses.
The updates are aimed at addressing the rapid advancement in generative artificial intelligence technology, which has significantly expanded the tools available to cyber criminals. Ironscales data analysts observed an alarming 215% increase in image-centric phishing emails in the third quarter of 2023, with the use of malicious QR codes a particular standout.
Ironscales' platform now employs optical character recognition and deep text and image processing to identify and thwart such attacks before they reach end users. The new features integrate enhanced image recognition and analysis into the company's behavioral analysis framework.
A new autonomous phishing simulation testing functionality reduces processing time for information technology and security teams by creating timely and relevant simulation campaigns. Ironscales customers can skip the manual setup process and put their phishing simulation testing on autopilot, ensuring they deliver phishing simulations based on real-world examples of email attacks.
The release also delivers enhanced reporting for organizational visibility and improved employee awareness. According to the company, this includes metrics and a comprehensive summary of simulation testing campaign results that can be compared against industry benchmarks to identify training gaps, measure effectiveness and improve future campaign strategy.
"Phishing threats are rapidly evolving in sophistication and it's more crucial than ever for organizations to ensure their employees are trained and prepared so they can be a vital layer of defense against these attacks," Chief Executive Eyal Benishti said. "Our job is to take the burden off security teams for threat detection and training of their employees. We think that our new Fall '23 release is going to do just that."
Ironscales was last in the news in June when it launched an artificial intelligence tool for Microsoft Outlook designed to empower users in threat detection and reporting. Called Themis Co-pilot, the service gives users the necessary tools to detect and report emerging threats, regardless of their role or cybersecurity expertise.
Continued here:
Ironscales targets image-based email security threats with new ... - SiliconANGLE News
How AI could lead to a better understanding of the brain – Nature.com
Can a computer be programmed to simulate a brain? It's a question mathematicians, theoreticians and experimentalists have long been asking, whether spurred by a desire to create artificial intelligence (AI) or by the idea that a complex system such as the brain can be understood only when mathematics or a computer can reproduce its behaviour. To try to answer it, investigators have been developing simplified models of brain neural networks since the 1940s¹. In fact, today's explosion in machine learning can be traced back to early work inspired by biological systems.
However, the fruits of these efforts are now enabling investigators to ask a slightly different question: could machine learning be used to build computational models that simulate the activity of brains?
At the heart of these developments is a growing body of data on brains. Starting in the 1970s, but more intensively since the mid-2000s, neuroscientists have been producing connectomes: maps of the connectivity and morphology of neurons that capture a static representation of a brain at a particular moment. Alongside such advances have been improvements in researchers' abilities to make functional recordings, which measure neural activity over time at the resolution of a single cell. Meanwhile, the field of transcriptomics is enabling investigators to measure the gene activity in a tissue sample, and even to map when and where that activity is occurring.
So far, few efforts have been made to connect these different data sources or collect them simultaneously from the whole brain of the same specimen. But as the level of detail, size and number of data sets increases, particularly for the brains of relatively simple model organisms, machine-learning systems are making a new approach to brain modelling feasible. This involves training AI programs on connectomes and other data to reproduce the neural activity you would expect to find in biological systems.
Several challenges will need to be addressed for computational neuroscientists and others to start using machine learning to build simulations of entire brains. But a hybrid approach that combines information from conventional brain-modelling techniques with machine-learning systems that are trained on diverse data sets could make the whole endeavour both more rigorous and more informative.
The quest to map a brain began nearly half a century ago, with a painstaking 15-year effort in the roundworm Caenorhabditis elegans². Over the past two decades, developments in automated tissue sectioning and imaging have made it much easier for researchers to obtain anatomical data, while advances in computing and automated image analysis have transformed the analysis of these data sets².
Connectomes have now been produced for the entire brain of C. elegans³, larval⁴ and adult⁵ Drosophila melanogaster flies, and for tiny portions of the mouse and human brain (one thousandth and one millionth, respectively)².
The anatomical maps produced so far have major holes. Imaging methods are not yet able to map electrical connections at scale alongside the chemical synaptic ones. Researchers have focused mainly on neurons, even though non-neuronal glial cells, which provide support to neurons, seem to play a crucial part in the flow of information through nervous systems⁶. And much remains unknown about what genes are expressed and what proteins are present in the neurons and other cells being mapped.
Still, such maps are already yielding insights. In D. melanogaster, for example, connectomics has enabled investigators to identify the mechanisms behind the neural circuits responsible for behaviours such as aggression⁷. Brain mapping has also revealed how information is computed within the circuits responsible for the flies' knowing where they are and how they can get from one place to another⁸. In zebrafish (Danio rerio) larvae, connectomics has helped to uncover the workings of the synaptic circuitry underlying the classification of odours⁹, the control of the position and movement of the eyeball¹⁰ and navigation¹¹.
Efforts that might ultimately lead to a whole mouse brain connectome are under way, although, using current approaches, this would probably take a decade or more. A mouse brain is almost 1,000 times bigger than the brain of D. melanogaster, which consists of roughly 150,000 neurons.
Alongside all this progress in connectomics, investigators have been capturing patterns of gene expression with increasing levels of accuracy and specificity using single-cell and spatial transcriptomics. Various technologies are also allowing researchers to make recordings of neural activity across entire brains in vertebrates for hours at a time. In the case of the larval zebrafish brain, that means making recordings across nearly 100,000 neurons¹². These technologies include proteins with fluorescent properties that change in response to shifts in voltage or calcium levels, and microscopy techniques that can image living brains in 3D at the resolution of a single cell. (Recordings of neural activity made in this way provide a less accurate picture than electrophysiology recordings, but a much better one than non-invasive methods such as functional magnetic resonance imaging.)
When trying to model patterns of brain activity, scientists have mainly used a physics-based approach. This entails generating simulations of nervous systems or portions of nervous systems using mathematical descriptions of the behaviour of real neurons, or of parts of real nervous systems. It also entails making informed guesses about aspects of the circuit, such as the network connectivity, that have not yet been verified by observations.
In some cases, the guesswork has been extensive (see Mystery models). But in others, anatomical maps at the resolution of single cells and individual synapses have helped researchers to refute and generate hypotheses4.
A lack of data makes it difficult to evaluate whether some neural-network models capture what happens in real systems.
The original aim of the controversial European Human Brain Project, which wrapped up in September, was to computationally simulate the entire human brain. Although that goal was abandoned, the project did produce simulations of portions of rodent and human brains (including tens of thousands of neurons in a model of a rodent hippocampus), on the basis of limited biological measures and various synthetic data-generation procedures.
A major problem with such approaches is that in the absence of detailed anatomical or functional maps, it is hard to assess to what degree the resulting simulations accurately capture what is happening in biological systems²⁰.
Neuroscientists have been refining theoretical descriptions of the circuit that enables D. melanogaster to compute motion for around seven decades. Since it was completed in 2013¹³, the motion-detection-circuit connectome, along with subsequent larger fly connectomes, has provided a detailed circuit diagram that has favoured some hypotheses about how the circuit works over others.
Yet data collected from real neural networks have also highlighted the limits of an anatomy-driven approach.
A neural-circuit model completed in the 1990s, for example, contained a detailed analysis of the connectivity and physiology of the roughly 30 neurons comprising the crab (Cancer borealis) stomatogastric ganglion, a structure that controls the animal's stomach movements¹⁴. By measuring the activity of the neurons in various situations, researchers discovered that even for a relatively small collection of neurons, seemingly subtle changes, such as the introduction of a neuromodulator, a substance that alters properties of neurons and synapses, completely change the circuit's behaviour. This suggests that even when connectomes and other rich data sets are used to guide and constrain hypotheses about neural circuits, today's data might be insufficiently detailed for modellers to be able to capture what is going on in biological systems¹⁵.
This is an area in which machine learning could provide a way forward.
Guided by connectomic and other data to optimize thousands or even billions of parameters, machine-learning models could be trained to produce neural-network behaviour that is consistent with the behaviour of real neural networks measured using cellular-resolution functional recordings.
Such machine-learning models could combine information from conventional brain-modelling techniques, such as the Hodgkin-Huxley model, which describes how action potentials (a change in voltage across a membrane) in neurons are initiated and propagated, with parameters that are optimized using connectivity maps, functional-activity recordings or other data sets obtained for entire brains. Or machine-learning models could comprise black-box architectures that contain little explicitly specified biological knowledge but billions or hundreds of billions of parameters, all empirically optimized.
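For reference, the Hodgkin-Huxley formalism mentioned above is usually written as a set of coupled differential equations for the membrane voltage V and the gating variables m, h and n (standard textbook form, shown here only as background, not as part of the article):

```latex
\[
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
                    - \bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \quad x \in \{m, h, n\}
\]
```

In a hybrid model, quantities such as the conductances and the connectivity between such units are the kinds of parameters that connectomes and functional recordings could constrain.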
Researchers could evaluate such models, for instance, by comparing their predictions about the neural activity of a system with recordings from the actual biological system. Crucially, they would assess how the models' predictions compare when the machine-learning program is given data that it wasn't trained on, as is standard practice in the evaluation of machine-learning systems.
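As a rough illustration of that held-out evaluation step (a minimal sketch with synthetic data, not code from any of the studies cited), a model trained to predict each neuron's next activity from the current population state can be scored on recordings it never saw during training:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical cell-resolution recording: rows are time points, columns are neurons.
rng = np.random.default_rng(0)
activity = rng.normal(size=(5000, 200))

X = activity[:-1]  # population state at time t
y = activity[1:]   # population state at time t + 1

# Hold out recordings the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# Compare predicted activity against the held-out recordings.
pred = model.predict(X_test)
corr = np.corrcoef(pred.ravel(), y_test.ravel())[0, 1]
print(f"Correlation between predicted and recorded activity: {corr:.3f}")
```

A real brain model would of course use far richer architectures and data, but the train/held-out split and the prediction-versus-recording comparison are the same.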
Axonal projections of neurons in a mouse brain. Credit: Adam Glaser, Jayaram Chandrashekar, Karel Svoboda, Allen Institute for Neural Dynamics
This approach would make brain modelling that encompasses thousands or more neurons more rigorous. Investigators would be able to assess, for instance, whether simpler models that are easier to compute do a better job of simulating neural networks than do more complex ones that are fed more detailed biophysical information, or vice versa.
Machine learning is already being harnessed in this way to improve understanding of other hugely complex systems. Since the 1950s, for example, weather-prediction systems have generally relied on carefully constructed mathematical models of meteorological phenomena, with modern systems resulting from iterative refinements of such models by hundreds of researchers. Yet, over the past five years or so, researchers have developed several weather-prediction systems using machine learning. These contain fewer assumptions in relation to how pressure gradients drive changes in wind velocity, for example, and how that in turn moves moisture through the atmosphere. Instead, millions of parameters are optimized by machine learning to produce simulated weather behaviour that is consistent with databases of past weather patterns¹⁶.
This way of doing things does present some challenges. Even if a model makes accurate predictions, it can be difficult to explain how it does so. Also, models are often unable to make predictions about scenarios that were not included in the data they were trained on. A weather model trained to make predictions for the days ahead has trouble extrapolating that forecast weeks or months into the future. But in some cases, such as predictions of rainfall over the next several hours, machine-learning approaches are already outperforming classical ones¹⁷. Machine-learning models offer practical advantages, too; they use simpler underlying code, and scientists with less specialist meteorological knowledge can use them.
On the one hand, for brain modelling, this kind of approach could help to fill in some of the gaps in current data sets and reduce the need for ever-more detailed measurements of individual biological components, such as single neurons. On the other hand, as more comprehensive data sets become available, it would be straightforward to incorporate the data into the models.
To pursue this idea, several challenges will need to be addressed.
Machine-learning programs will only ever be as good as the data used to train and evaluate them. Neuroscientists should therefore aim to acquire data sets from the whole brain of specimens, or even from the entire body, should that become more feasible. Although it is easier to collect data from portions of brains, modelling a highly interconnected system such as a neural network using machine learning is much less likely to generate useful information if many parts of the system are absent from the underlying data.
Researchers should also strive to obtain anatomical maps of neural connections and functional recordings (and perhaps, in the future, maps of gene expression) from whole brains of the same specimen. Currently, any one group tends to focus on obtaining only one of these, not on acquiring both simultaneously.
With only 302 neurons, the C. elegans nervous system might be sufficiently hard-wired for researchers to be able to assume that a connectivity map obtained from one specimen would be the same for any other, although some studies suggest otherwise¹⁸. But for larger nervous systems, such as those of D. melanogaster and zebrafish larvae, connectome variability between specimens is significant enough that brain models should be trained on structure and function data acquired from the same specimen.
Currently, this can be achieved only in two common model organisms. The bodies of C. elegans and larval zebrafish are transparent, which means researchers can make functional recordings across the organisms' entire brains and pinpoint activity to individual neurons. Immediately after such recordings are made, the animal can be killed, embedded in resin and sectioned, and anatomical measurements of the neural connections mapped. In the future, however, researchers could expand the set of organisms for which such combined data acquisitions are possible, for instance by developing new non-invasive ways to record neural activity at high resolution, perhaps using ultrasound.
Obtaining such multimodal data sets in the same specimen will require extensive collaboration between researchers, investment in big-team science and increased funding-agency support for more holistic endeavours¹⁹. But there are precedents for this type of approach, such as the US Intelligence Advanced Research Projects Activity's MICrONS project, which between 2016 and 2021 obtained functional and anatomical data for one cubic millimetre of mouse brain.
Besides acquiring these data, neuroscientists would need to agree on the key modelling targets and the quantitative metrics by which to measure progress. Should a model aim to predict the behaviour of a single neuron on the basis of a past state or of an entire brain? Should the activity of an individual neuron be the key metric, or should it be the percentage of hundreds of thousands of neurons that are active? Likewise, what constitutes an accurate reproduction of the neural activity seen in a biological system? Formal, agreed benchmarks will be crucial to comparing modelling approaches and tracking progress over time.
Lastly, to open up brain-modelling challenges to diverse communities, including computational neuroscientists and specialists in machine learning, investigators would need to articulate to the broader scientific community what modelling tasks are the highest priority and which metrics should be used to evaluate a model's performance. WeatherBench, an online platform that provides a framework for evaluating and comparing weather forecasting models, provides a useful template¹⁶.
Some will question, and rightly so, whether a machine-learning approach to brain modelling will be scientifically useful. Could the problem of trying to understand how brains work simply be traded for the problem of trying to understand how a large artificial network works?
Yet, the use of a similar approach in a branch of neuroscience concerned with establishing how sensory stimuli (for example, sights and smells) are processed and encoded by the brain is encouraging. Researchers are increasingly using classically modelled neural networks, in which some of the biological details are specified, in combination with machine-learning systems. The latter are trained on massive visual or audio data sets to reproduce the visual or auditory capabilities of nervous systems, such as image recognition. The resulting networks demonstrate surprising similarities to their biological counterparts, but are easier to analyse and interrogate than the real neural networks.
For now, perhaps it's enough to ask whether the data from current brain mapping and other efforts can train machine-learning models to reproduce neural activity that corresponds to what would be seen in biological systems. Here, even failure would be interesting, a signal that mapping efforts must go even deeper.
See original here:
How AI could lead to a better understanding of the brain - Nature.com
Machine Learning Could Be Used to Better Predict Floods – IEEE Spectrum
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
As the frequency of extreme weather events has risen in recent years, there is a growing need for accurate and precise hydrological knowledge in order to anticipate catastrophic flooding. Hydrology, the study of the Earth's water cycle, has played a big role in human civilization for thousands of years. However, in a recent paper, a team of researchers argues that hydrology's outdated methodology is holding the field back, and that it is time for the field to move on from complex theoretical models to predictive models built using machine learning algorithms.
Hydrologists and computer network researchers collaborated on a proof-of-concept machine learning model that can make hydrological predictions. Hydrology models already exist, said Andrea Zanella, professor of information engineering at the University of Padova in Italy, but those traditional models are mathematically complex and require too many input parameters to be feasible.
Using machine learning techniques, researchers were able to train a model that could, using the first 30 minutes of a storm, predict occurrences of water runoff or flooding up to an hour before they might happen. Zanella, who is also coauthor on the study, said that the study was only the first step towards building a model that would ideally predict the occurrence of water runoff with a few hours of lead time, which would give people more time to prepare or evacuate an area if necessary.
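As a rough sketch of the kind of supervised task being described, one could train a regression model to map early-storm sensor readings to later runoff. The feature names, data and model below are invented placeholders (the team's actual model and inputs differ):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training table: each row summarizes the first 30 minutes of a storm;
# the target is the runoff volume observed roughly an hour later.
rng = np.random.default_rng(42)
n_storms = 500
storms = pd.DataFrame({
    "rainfall_mm_first_30min": rng.gamma(2.0, 5.0, n_storms),
    "pressure_hpa": rng.normal(1005, 8, n_storms),
    "soil_moisture_pct": rng.uniform(10, 60, n_storms),
})
runoff = (0.6 * storms["rainfall_mm_first_30min"]
          + 0.2 * storms["soil_moisture_pct"]
          + rng.normal(0, 2, n_storms))

X_train, X_test, y_train, y_test = train_test_split(
    storms, runoff, test_size=0.2, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 3))
```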
"The work towards reaching that goal is not simple at all," Zanella said. "But the methodology that we propose seems to be a first step towards that."
Researchers trained their machine learning model on input parameters like rainfall and atmospheric pressure obtained from sensors at weather stations. Their output parameters, like soil absorption and runoff volume, were a combination of data they collected and additional synthetic data generated using traditional theoretical models. Synthetic data was necessary, Zanella said, because there is a lack of the kind of data needed to build dependable machine learning models for hydrology.
The lack of data is the result of current data collection practices. Currently, hydrological data is collected using sensors at predetermined time intervals, usually every few hours or even days. This method of data collection is inefficient because only a small proportion of the collected data is useful for modeling. Precipitation like rain or snow happens relatively infrequently, so sensors may not record any data at all during a downpour. And when they do, they usually won't have enough data points to capture a storm's progression in much detail.
In their study, researchers suggest that more sensors and a variable rate of data collection may help solve the problem. Ideally, sensors would significantly ramp up data collection when there's precipitation and slow down collection when conditions are fair.
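A minimal sketch of that variable-rate idea, with made-up thresholds and interval lengths (the study does not prescribe specific values):

```python
def next_sampling_interval_minutes(rainfall_rate_mm_per_h: float) -> int:
    """Shorten the sampling interval when precipitation is detected.

    The thresholds and intervals below are illustrative, not from the study.
    """
    if rainfall_rate_mm_per_h >= 10.0:   # heavy rain: sample every minute
        return 1
    if rainfall_rate_mm_per_h > 0.0:     # light rain: sample every 5 minutes
        return 5
    return 180                           # fair conditions: sample every 3 hours


print(next_sampling_interval_minutes(0.0))   # 180
print(next_sampling_interval_minutes(12.5))  # 1
```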
Output data like the absorption of water by the soil is especially difficult to come by, even though it is important for building machine learning models by matching observations with predictions about runoff effects. The difficulty is in the need to take soil samples and analyze those samples, which is both labor intensive and time consuming.
Zanella said that weather sensors should also incorporate some form of data preprocessing. Currently, researchers downloading data from sensors must sift through a large amount of data to find useful precipitation data. That's not only time consuming but also uses space that could instead store more relevant data. If data processing were to occur automatically at weather stations, it could help clean up the data and make data storage more efficient.
The study also stressed the importance of improving data visualization tools. As a field with important practical applications, hydrological information should be easy to understand for a wide audience from diverse technical backgrounds, but that currently isn't the case. For example, graphs that show the intensity of rainfall over time, called hyetographs, are especially notorious for being difficult to understand.
"In most cases, when you look at the management of water resources, these people who are in charge are not [technical] experts," Zanella said. "So we need to also develop some visualization tools that help these people to understand."
Zanella said researchers from different disciplines will need to collaborate to significantly advance the field of hydrology. He hoped more researchers with wireless communications and networking backgrounds would work in the field to help tackle its challenges.
The researchers published their work on 25 September in IEEE Access.
More here:
Machine Learning Could Be Used to Better Predict Floods - IEEE Spectrum
Ethical Machine Learning with Explainable AI and Impact Analysis – InfoQ.com
As more decisions are made or influenced by machines, there's a growing need for a code of ethics for artificial intelligence. The main question is, "I can build it, but should I?" Explainable AI can provide checks and balances for fairness and explainability, and engineers can analyze the systems' impact on people's lives and mental health.
Kesha Williams spoke about ethical machine learning at NDC Oslo 2023.
In the pre-machine learning world, humans made hiring, advertising, lending, and criminal sentencing decisions, and these decisions were often governed by laws that regulated the decision-making processes regarding fairness, transparency, and equity, Williams said. But now machines make or heavily influence a lot of these decisions.
A code of ethics is needed because machines can not only imitate and enhance human decision-making, but they can also amplify human prejudices, Williams said.
When people discuss ethical AI, you'll hear several terms: fairness, transparency, responsibility, and human rights, Williams mentioned. The overall goals are to not perpetuate bias, to consider the potential consequences, and to mitigate negative impacts.
According to Williams, ethical AI boils down to one question:
I can build it, but should I? And if I do build it, what guardrails are in place to protect the person that's the subject of the AI?
This is at the heart of ethics in AI, Williams said.
According to Williams, ethics and risks can be incorporated using explainable AI, which would help us understand how the models make decisions:
Explainable AI seeks to bake in checks and balances for fairness and explainability during each stage of the machine learning lifecycle: problem formation, dataset construction, algorithm selection, training, testing, deployment, monitoring, and feedback.
We all have a duty as engineers to look at the AI/ML systems we're developing from a moral and ethical standpoint, Williams said. Given the broad societal impact, mindlessly implementing these systems is no longer acceptable.
As engineers, we must first analyze these systems' impact on people's lives and mental health and incorporate bias checks and balances at every stage of the machine learning lifecycle, Williams concluded.
InfoQ interviewed Kesha Williams about ethical machine learning.
InfoQ: How does machine learning differ from traditional software development?
Kesha Williams: In traditional software development, developers write code to tell the machine what to do, line-by-line, using programming languages like Java, C#, JavaScript, Python, etc. The software spits out the data, which we use to solve a problem.
Machine learning differs from traditional software development in that we give the machine the data first, and it writes the code (i.e., the model) to solve the problem we need to solve. It's the complete reverse to start with the data, which is very cool!
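A toy sketch of that contrast (illustrative only, not from the interview): instead of hand-coding a rule, the program is fitted from example data.

```python
from sklearn.linear_model import LogisticRegression

# Traditional software: the developer writes the decision rule explicitly.
def approve_loan_rule(income: float, debt: float) -> bool:
    return income > 3 * debt

# Machine learning: the data comes first, and the "rule" (the model)
# is learned from labelled examples.
X = [[60_000, 10_000], [30_000, 25_000], [80_000, 5_000], [20_000, 15_000]]
y = [1, 0, 1, 0]  # past approval decisions
model = LogisticRegression().fit(X, y)

print(approve_loan_rule(50_000, 12_000))   # rule written by a person
print(model.predict([[50_000, 12_000]]))   # rule learned from data
```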
InfoQ: How does bias in AI surface?
Williams: Bias shows up in your data if your dataset is imbalanced or doesn't accurately represent the environment the model will be deployed in.
Bias can also be introduced by the ML algorithm itself even with a well-balanced training dataset, the outcomes might favor certain subsets of the data compared to others.
Bias can show up in your model (once it's deployed to production) because of drift. Drift indicates that the relationship between the target variable and the other variables changes over time and degrades the predictive power of the model.
Bias can also show up in your people, strategy, and the action taken based on model predictions.
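One common, hedged way to surface the drift Williams describes is to track a model's performance over successive time windows after deployment and flag sustained degradation; a minimal sketch with illustrative thresholds:

```python
def flag_drift(window_accuracies, baseline_accuracy, tolerance=0.05, patience=3):
    """Flag drift when accuracy stays below the baseline for several consecutive windows.

    The tolerance and patience values here are illustrative, not prescriptive.
    """
    run = 0
    for acc in window_accuracies:
        run = run + 1 if acc < baseline_accuracy - tolerance else 0
        if run >= patience:
            return True
    return False


weekly_accuracy = [0.91, 0.90, 0.84, 0.83, 0.82]  # hypothetical production metrics
print(flag_drift(weekly_accuracy, baseline_accuracy=0.90))  # True
```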
InfoQ: What can we do to mitigate bias?
Williams: There are several ways to mitigate bias:
See the original post:
Ethical Machine Learning with Explainable AI and Impact Analysis - InfoQ.com
Working with Non-IID data part2(Machine Learning 2023) – Medium
Authors: Yeachan Kim, Bonggun Shin
Abstract: Federated learning algorithms perform reasonably well on independent and identically distributed (IID) data. They, on the other hand, suffer greatly from heterogeneous environments, i.e., Non-IID data. Despite the fact that many research projects have been done to address this issue, recent findings indicate that they are still sub-optimal when compared to training on IID data. In this work, we carefully analyze the existing methods in heterogeneous environments. Interestingly, we find that regularizing the classifier's outputs is quite effective in preventing performance degradation on Non-IID data. Motivated by this, we propose Learning from Drift (LfD), a novel method for effectively training the model in heterogeneous settings. Our scheme encapsulates two key components: drift estimation and drift regularization. Specifically, LfD first estimates how different the local model is from the global model (i.e., drift). The local model is then regularized such that it does not fall in the direction of the estimated drift. In the experiment, we evaluate each method through the lens of the five aspects of federated learning, i.e., Generalization, Heterogeneity, Scalability, Forgetting, and Efficiency. Comprehensive evaluation results clearly support the superiority of LfD in federated learning with Non-IID data.
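The abstract does not give the exact formulation, but a rough, hypothetical sketch of the general idea it describes (estimate the local-global drift, then penalize local updates that move further along that direction) could look like the proximal-style update below. This is only one plausible reading; the paper's actual drift estimation and regularization differ.

```python
import numpy as np

def drift_regularized_update(local_w, global_w, grad, lr=0.1, lam=0.5):
    """One hypothetical local update with a drift penalty.

    drift = local_w - global_w estimates how far the client has drifted from
    the global model; the penalty term discourages steps that push further
    along that direction. Illustrative only, not the LfD algorithm itself.
    """
    drift = local_w - global_w
    penalty_grad = lam * drift  # gradient of (lam / 2) * ||local_w - global_w||^2
    return local_w - lr * (grad + penalty_grad)


# Toy example with three parameters
global_w = np.zeros(3)
local_w = np.array([0.4, -0.2, 0.1])
grad = np.array([0.05, 0.02, -0.03])
print(drift_regularized_update(local_w, global_w, grad))
```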
2. Federated PAC-Bayesian Learning on Non-IID data. (arXiv)
Authors: Zihao Zhao, Yang Liu, Wenbo Ding, Xiao-Ping Zhang
Abstract: Existing research has either adapted the Probably Approximately Correct (PAC) Bayesian framework for federated learning (FL) or used information-theoretic PAC-Bayesian bounds while introducing their theorems, but few consider the non-IID challenges in FL. Our work presents the first non-vacuous federated PAC-Bayesian bound tailored for non-IID local data. This bound assumes unique prior knowledge for each client and variable aggregation weights. We also introduce an objective function and an innovative Gibbs-based algorithm for the optimization of the derived bound. The results are validated on real-world datasets.
Read more here:
Working with Non-IID data part2(Machine Learning 2023) - Medium
UW scientists and NFL player create new MRI machine-learning … – Spectrum News 1
MADISON, Wis. - University of Wisconsin-Madison researchers said they were proud to publish a groundbreaking paper on a new MRI machine-learning network.
They determined how brightly colored scans can help surgeons recognize, and accurately remove, an intracerebral hemorrhage (ICH), or bleeding in the brain.
Walter Block, a professor of medical physics and biomedical engineering, leads the research team that developed a special algorithm to support doctors who must act quickly and with precision to extract a brain bleed.
"The trick is to visualize it and quantify it so that the surgeon has the information they need," Block said.
Tom Lilieholm, a PhD candidate and lead author of the research, created the specific algorithm for the new color-coded MRI machine-learning network.
"We got pretty high accurate segmentations out of the machine here, 96% accurate clot, 81% accurate edema," he said, showing off one of the study's MRI slides.
Lilieholm said it can show a surgeon in less than a minute just how much of the hemorrhage they can safely remove.
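Segmentation accuracy of this kind is commonly summarized with overlap scores such as the Dice coefficient; the small generic sketch below shows how such a score is computed (it is not the team's code, and the article does not state which metric they used):

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Overlap between a predicted and a reference binary segmentation mask."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0


# Toy 2D masks standing in for clot or edema segmentations on one MRI slice
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
true = np.zeros((4, 4), dtype=int); true[1:3, 1:4] = 1
print(round(dice_coefficient(pred, true), 2))  # 0.8
```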
"It's really kind of useful to have that, and to have robust data to compare against," Lilieholm said. "That's where Matt kind of came in."
The "Matt" Lilieholm was referring to is NFL player Matt Henningsen.
Henningsen is from Menomonee Falls. Before becoming a Denver Bronco, he attended UW-Madison, where he excelled on the football field and in the classroom. He earned a bachelor's and a master's degree from the university.
"My task would be to identify the location of the intracerebral hemorrhage and segment both the clot and the edema surrounding the clot, and then move on to every single layer of that image," Henningsen said.
Henningsen spent more than 100 hours gathering data for this new research on brain bleeds. He said he was excited and grateful for the opportunity to be part of this collaboration.
The UW-trained bioengineer and football player said he hopes this project can eventually support and improve something his football profession fears: traumatic brain injury.
"You can't diagnose concussion with an MRI currently," he said. "But I mean, maybe in the future, if you're able to, you can use machine-learning to potentially detect certain abnormalities that the human eye couldn't necessarily detect or things of that sort. Maybe we could get somewhere."
Originally posted here:
UW scientists and NFL player create new MRI machine-learning ... - Spectrum News 1
Amity University Online Collaborates with TCS iON to Offer Machine Learning and Gen AI Certification Program – DATAQUEST
Amity University Online, India's pioneering online degree program provider authorized by the UGC, has unveiled a Certificate Program in Machine Learning and Generative AI. This initiative is a collaborative effort of Amity University with TCS iON, the strategic arm of Tata Consultancy Services dedicated to Manufacturing Industries (SMB), Educational Institutions, and Examination Boards. The program spans eight months and is designed to empower learners with knowledge and skills in the domains of Machine Learning and Generative AI.
The Certificate Program in Machine Learning and Generative AI harnesses the collaboration between Amity University Online's e-learning expertise and the industry insights and experienced instructors of TCS iON. Participants of this program will not only engage closely with TCS instructors but will also gain invaluable hands-on experience through TCS projects, allowing them to apply their knowledge to practical scenarios.
This program distinguishes itself through a range of features offered by TCS iON, including master classes, weekend sessions, live interactions with industry experts, and the completion of capstone projects. These elements are designed to provide learners with profound insights into the latest trends and advancements in Machine Learning and Generative AI, backed by updated study materials and live projects.
The curriculum of this certification program ensures that learners master machine learning pipelines, including the deployment of AWS Cloud. Moreover, participants will acquire proficiency in core Python programming for ML and Generative AI, and they will gain advanced skills in Computer Vision and NLP through deep learning techniques. During the course, special focus will be given to topics such as CV & NLP models, ChatGPT, and Dall-E. Learners will also benefit from expert-led live sessions, allowing them to gain real-time insights into industry practices. Additionally, they will also delve into model interpretability using tools like LIME & SHAP, which will enhance their understanding of machine learning algorithms.
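As a small, hedged illustration of the model-interpretability topic mentioned above, the sketch below shows generic scikit-learn and shap usage on a public dataset; it is not course material and makes no claims about the program's curriculum beyond what is described.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model, then explain its predictions with SHAP values.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])  # one value per feature per sample

# Rank features by mean absolute SHAP value.
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:5]
print([data.feature_names[i] for i in top])
```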
"The AI landscape is continuously evolving. By 2030, it is estimated to add up to $15.7 trillion to the global economy, unlocking potential job opportunities for skilled professionals. Launching the Machine Learning and Generative AI certification program in collaboration with TCS iON positions us at the forefront of this transformative industry, empowering learners with knowledge and expertise in AI. With TCS iON's industry insights, learners will gain access to real-world and capstone projects, ensuring they are well-equipped to navigate the AI-driven world of tomorrow," said Ajit Chauhan, Spokesperson of Amity University Online.
"We are excited to partner with Amity University Online in launching this innovative program in Machine Learning and Generative AI. TCS iON is committed to promoting the skill development of the nation's youth, and this collaboration is a testament to our dedication to empowering learners with the latest advancements in technology. To ensure a strong industry focus, we will bring SMEs and domain experts from TCS for every module of the course apart from enabling learning content and projects. Students will also be given an opportunity to appear in the TCS iON National Proficiency Test on the subject to prove their expertise and be job-ready," said Venguswamy Ramaswamy, Global Head of TCS iON, while commenting on the collaboration.
The collaboration between Amity University Online and TCS iON represents a significant step towards equipping learners with advanced skills in Machine Learning and Generative AI. Participants of this program can stay at the forefront of the rapidly evolving digital world with an immersive eight-month program, hands-on experience, and insights from industry experts.
Read the original post:
Amity University Online Collaborates with TCS iON to Offer Machine Learning and Gen AI Certification Program - DATAQUEST
Introduction – Rethinking Clinical Trials
Artificial intelligence (AI) is the theory and practice of designing computer systems to simulate actual processes of human intelligence. AI-powered systems rely on computers that embed machine learning (ML) to analyze large datasets and discover patterns across them. AI/ML thus provide powerful computing tools for pragmatic clinical trial (PCT) investigators. These tools support multimodal data analytics (e.g., data from electronic health records, wearables, and social media), advanced prediction, and large-scale modeling that far exceed the analytic capacities of many existing trial designs. Using AI/ML, a new class of digital PCTs has emerged (Inan et al 2020). It is anticipated that modern digital PCTs will increasingly serve as testbeds for AI/ML systems in clinical decision support (Yao et al 2021). Application of AI/ML could help find new ways to contain healthcare costs and facilitate longitudinal health surveillance. Among their strengths, AI/ML-enabled digital PCTs can help investigators to:
However, AI/ML systems are only as accurate as the data on which they are trained. Multimodal linkages allow researchers to triangulate many sources of data that collectively improve how algorithms iteratively learn and begin to discover patterns indicative of health or hospital trends. Health-related data used for medical AI/ML research and development are often from various sources, including:
Both over- and under-representation of patient populations in these AI/ML training data sources can yield biased results that in turn harm real patients and exacerbate existing health inequities. For example, one study (Obermeyer et al 2019) found that Black patients were given a lower risk score than equally sick White patients because the algorithm's input data reflected the fact that more healthcare dollars are spent on White patients than on Black patients. The algorithm misinterpreted that signal, assuming that Black patients needed less healthcare rather than that they had less access to it.
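A toy simulation (entirely synthetic and for illustration only, not data from the cited study) of the mechanism described above: when spending is used as a proxy label for health need, a group with less access to care is systematically scored as lower risk even though its underlying need is identical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic groups with identical underlying health need,
# but group B has less access to care and therefore lower spending.
need = rng.normal(50, 10, n)                    # true health need (not seen by the model)
group_b = rng.random(n) < 0.5
spending = need * np.where(group_b, 0.6, 1.0)   # healthcare cost used as the proxy label

# A "risk score" built on spending simply reproduces the access gap.
risk_score = (spending - spending.mean()) / spending.std()
print("Mean true need,  A vs B:", round(need[~group_b].mean(), 1), round(need[group_b].mean(), 1))
print("Mean risk score, A vs B:", round(risk_score[~group_b].mean(), 2), round(risk_score[group_b].mean(), 2))
```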
These and other technical limitations of AI/ML have ethical consequences that digital PCT investigators should anticipate and can proactively address at every stage in the research.
See the article here:
Introduction - Rethinking Clinical Trials
Inexpensive water-treatment monitoring process powered by machine learning – Tech Xplore
Small, rural drinking water treatment (DWT) plants typically use only chlorine to implement the disinfection process. For these plants, free chlorine residual (FCR) serves as a key performance measure for disinfection. FCR is defined as the concentration of free chlorine remaining in the water after the chlorine has oxidized the target contaminants.
In practice, the FCR is determined by plant operators based on their experience. Specifically, operators choose a dose of chlorine to achieve a satisfactory FCR concentration, but often have to make an estimate of the chlorine requirements.
The challenge of determining an accurate FCR has led to the use of advanced FCR prediction techniques. In particular, machine learning (ML) algorithms have proven effective in achieving this goal. By identifying correlations among numerous variables in complex systems, successful ML implementation could accurately predict FCR, even from cost-effective, low-tech monitoring data.
In a new study published in Frontiers of Environmental Science & Engineering, the authors implemented a gradient boosting (GB) ML model with categorical boosting (CatBoost) to predict FCR. GB algorithms, including CatBoost, accumulate decision trees to generate the prediction function.
The input data was collected from a DWT plant in Georgia in the U.S., and included a wide variety of DWT monitoring records and operational process parameters. Four iterations of a generalized modeling approach were developed, including (1) base case, (2) rolling average, (3) parameter consolidation, and (4) intuitive parameters.
The research team also applied the SHapley Additive exPlanations (SHAP) method in this study. SHAP is an open-source software package for interpreting ML models with many input parameters, which allows users to visually understand how each parameter affects the prediction function. The influence of each parameter on the predicted output can be studied by calculating its corresponding SHAP value. For example, the SHAP analysis ranks the channel Cl2 as the most influential parameter.
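A hedged sketch of the overall workflow described here, combining CatBoost regression with SHAP attribution. The feature names and data below are invented placeholders, not the plant's actual monitoring parameters:

```python
import numpy as np
import pandas as pd
import shap
from catboost import CatBoostRegressor

# Hypothetical operational records; column names are placeholders only.
rng = np.random.default_rng(1)
n = 2000
records = pd.DataFrame({
    "channel_cl2_dose_mg_l": rng.uniform(0.5, 3.0, n),
    "raw_water_turbidity_ntu": rng.uniform(0.1, 5.0, n),
    "water_temperature_c": rng.uniform(5, 30, n),
    "flow_rate_m3_h": rng.uniform(50, 200, n),
})
fcr = (0.7 * records["channel_cl2_dose_mg_l"]
       - 0.05 * records["raw_water_turbidity_ntu"]
       + rng.normal(0, 0.05, n))

# Gradient boosting with CatBoost to predict free chlorine residual.
model = CatBoostRegressor(iterations=300, verbose=False).fit(records, fcr)

# SHAP values show how much each parameter pushes an individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(records)
mean_abs = np.abs(shap_values).mean(axis=0)
print(dict(zip(records.columns, np.round(mean_abs, 3))))
```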
Of all four iterations, the fourth and final iteration considered only intuitive, physical relationships and water quality measured downstream from filtration. The authors summarized the comparative performance of the four ML modeling iterations. According to them, the key findings are: 1) with a sufficient number of related input parameters, ML models can produce accurate prediction results; 2) ML models can be driven by correlations that may or may not have a physical basis; 3) ML models can be analogous to operator experience.
Looking forward, the research team suggests that future studies should explore expanding the applicability domain. For example, the data set analyzed was limited to only one full year. Therefore, greater data availability is expected to broaden the applicability domain and improve the predictivity.
More information: Wiley Helm et al, Development of gradient boosting-assisted machine learning data-driven model for free chlorine residual prediction, Frontiers of Environmental Science & Engineering (2023). DOI: 10.1007/s11783-024-1777-6. journal.hep.com.cn/fese/EN/10.1007/s11783-024-1777-6
Provided by Frontiers Journals
Read the original:
Inexpensive water-treatment monitoring process powered by machine learning - Tech Xplore