Category Archives: Machine Learning
Machine learning identifies ‘heart roundness’ as a new tool for diagnosing cardiovascular conditions – Medical Xpress
This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:
fact-checked
peer-reviewed publication
trusted source
proofread
Deep learning-enabled analysis of medical images identifies cardiac sphericity as an early marker of cardiomyopathy and related outcomes. Credit: Med/Vukadinovic et al.
Physicians currently use assessments like heart chamber size and systolic function to diagnose and monitor cardiomyopathy and other related heart conditions. A paper published in the journal Med on March 29 suggests that another measurement, cardiac sphericity (roundness of the heart), may one day be a useful addition to the diagnostic toolkit.
"Roundness of the heart isn't necessarily the problem per se; it's a marker of the problem," says co-corresponding author Shoa L. Clarke, a preventive cardiologist and an instructor at Stanford University School of Medicine. "People with rounder hearts may have underlying cardiomyopathy or underlying dysfunction with the molecular and cellular functions of the heart muscle. It could be reasonable to ask whether there is any utility in incorporating measurements of sphericity into clinical decision-making."
This proof-of-concept study used big data and machine learning to look at whether other anatomical changes in the heart could improve the understanding of cardiovascular risk and pathophysiology. The investigators chose to focus on sphericity because clinical experience had suggested it is associated with heart problems. Prior research had primarily focused on sphericity after the onset of heart disease, and they hypothesized that sphericity may increase even before the onset of clinical heart disease.
"We have established traditional ways of evaluating the heart, which have been important for how we diagnose and treat heart disease," Clarke says. "Now with the ability to use deep-learning techniques to look at medical images at scale, we have the opportunity to identify new ways of evaluating the heart that maybe we haven't considered much in the past."
"They say a picture is worth a thousand words, and we show that this is very true for medical imaging," says co-corresponding author David Ouyang, a cardiologist and researcher at the Smidt Heart Institute of Cedars-Sinai. "There's a lot more information available than what physicians are currently using. And just as we've previously known that a bigger heart isn't always better, we're learning that a rounder heart is also not better."
This research employed data from the UK Biobank, which includes genetic and clinical information on 500,000 people. As part of that study, a subset of volunteers had MRI imaging of their hearts performed. The California-based team used data from a subset of about 38,000 UK Biobank study participants who had MRIs that were considered normal at the time of the scans. Subsequent medical records from the volunteers indicated which of them later went on to develop diseases like cardiomyopathy, atrial fibrillation, or heart failure and which did not.
The researchers then used deep-learning techniques to automate the measurement of sphericity. Increased cardiac sphericity appeared to be linked to future heart troubles.
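The underlying measurement is simpler than the deep-learning pipeline that automates it. As a rough illustration only (not the study's actual method), a sphericity index can be sketched as the ratio of the left ventricle's short axis to its long axis, where the axis lengths are assumed to come from an upstream image-segmentation step:

```python
def sphericity_index(short_axis_mm: float, long_axis_mm: float) -> float:
    """Ratio of the left ventricle's short axis to its long axis.

    A value near 1.0 indicates a rounder (more spherical) chamber.
    The axis lengths are assumed to be produced by an automated
    segmentation of the cardiac MRI; this function is illustrative only.
    """
    if long_axis_mm <= 0:
        raise ValueError("long axis must be positive")
    return short_axis_mm / long_axis_mm

# Hypothetical numbers: an elongated ventricle vs. a rounder one
print(sphericity_index(45.0, 90.0))  # 0.5
print(sphericity_index(70.0, 80.0))  # 0.875
```

A perfectly spherical chamber would score 1.0; which thresholds matter clinically is precisely what the authors say remains to be established.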
The investigators also looked at genetic drivers for cardiac sphericity and found overlap with the genetic drivers for cardiomyopathy. Using Mendelian randomization, they were able to infer that intrinsic disease of the heart muscle (meaning defects not caused by heart attacks) caused cardiac sphericity.
"There are two ways that these findings could add value," Ouyang says. "First, they might allow physicians to gain greater clinical intuition on how patients are likely to do at a very rapid glance. In the broader picture, this research suggests there are probably many useful measurements that clinicians still don't understand or haven't discovered. We hope to identify other ways to use imaging to help us predict what will happen next."
The researchers emphasize that much more research is needed before the findings from this study can be translated to clinical practice. For one thing, the connection is still speculative and would need to be confirmed with additional data. If the link is confirmed, a threshold would need to be established to indicate what degree of sphericity might suggest that clinical interventions are needed. The team is sharing all the data from this work and making them available to other investigators to begin answering some of these questions.
Additionally, ultrasound is more commonly used than MRI to image the heart. To further advance this research, replicating these findings using ultrasound images will be useful, they note.
More information: Shoa L. Clarke, Deep learning-enabled analysis of medical images identifies cardiac sphericity as an early marker of cardiomyopathy and related outcomes, Med (2023). DOI: 10.1016/j.medj.2023.02.009. http://www.cell.com/med/fulltext/S2666-6340(23)00069-7
Journal information: Med
See the original post:
Machine learning identifies 'heart roundness' as a new tool for diagnosing cardiovascular conditions - Medical Xpress
Where nature meets technology: Machine learning as a tool for … – McGill Tribune
With the dangers of continued fossil fuel use and environmental mismanagement unfolding before our eyes in the form of intense heat waves, droughts, and wildfires, it's obvious that dramatic, transformative action must be taken.
Throughout the pessimistic debate about the effectiveness of climate change policy and methods of pollution mitigation, almost every solution under the sun has been proposed. Some have suggested the widespread use of carbon capture technology, while others, like Boyan Slat, have developed ways to remove garbage from our oceans. But one technology has the potential to revolutionize climate action: Artificial intelligence (AI).
In a recent paper spearheaded by professor David Rolnick of the Department of Computer Science, researchers studied the application of machine learning to climate science in great detail. Each section of the article explored a specific sector, including electricity, industry, and infrastructure, and explained the ways machine learning could be used to reduce that sector's impact on the climate.
Machine learning is an offshoot of AI. While the aim of AI is to develop computers that can think like a human, machine learning is more about training computers on experiences and data to recognize patterns and make decisions.
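That distinction can be made concrete with a toy example. The nearest-neighbour classifier below (a minimal, hypothetical sketch, not drawn from Rolnick's paper) "learns" a decision rule purely from labelled examples rather than from explicitly programmed rules:

```python
import math

def nearest_neighbor_predict(train, query):
    """Predict the label of `query` from labelled examples.

    `train` is a list of ((x, y), label) pairs; the prediction is simply
    the label of the closest training point. The decision rule is never
    written by hand; it emerges from the data.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Invented example data: two clusters with made-up labels
examples = [((0, 0), "cold"), ((1, 1), "cold"), ((8, 9), "hot"), ((9, 8), "hot")]
print(nearest_neighbor_predict(examples, (8.5, 8.5)))  # hot
```

Real climate applications replace this toy rule with deep networks, but the principle is the same: patterns are extracted from data rather than coded explicitly.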
"Machine learning is looking at large amounts of data, finding the patterns that are common across that data and linking those to what the algorithm is asked to do," Rolnick said in an interview with The McGill Tribune.
Uses for machine learning fall into a few categories, according to Rolnick: Monitoring, optimization, simulation, and forecasting. Take, for example, how forecasting can be applied to the study of electricity.
"Machine learning is used to predict the amount of electricity that will be in demand at a given point in time so there is enough supply to meet that but not more than there needs to be," Rolnick explained. "Understanding how much power is needed and how much power is available is important to make sure the grid is running effectively and without waste."
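As a deliberately simplified illustration of demand forecasting (grid operators use far richer models than this), a seasonal-naive baseline just repeats the most recent daily cycle; the demand figures below are invented:

```python
def seasonal_naive_forecast(demand_history, period=24, horizon=24):
    """Forecast the next `horizon` values by repeating the last full period.

    A deliberately simple baseline: even this captures the daily cycle in
    electricity demand, which more sophisticated models then refine.
    """
    if len(demand_history) < period:
        raise ValueError("need at least one full period of history")
    last_cycle = demand_history[-period:]
    return [last_cycle[h % period] for h in range(horizon)]

# Hypothetical hourly demand (MW) with a day/night pattern, two days of history
history = [30, 28, 27, 35, 50, 60, 55, 40] * 2
print(seasonal_naive_forecast(history, period=8, horizon=8))
# [30, 28, 27, 35, 50, 60, 55, 40]
```

Matching supply to forecasts like these, rather than over-provisioning, is where the waste reduction Rolnick describes comes from.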
Since AI cannot plant trees or pass legislation, its practical application may seem abstract. However, its effects are tangible: AI has been used to increase crop yields in India, improve electricity efficiency on wind farms by planning for weather, and improve data centres' efficiency.
"Most of the technologies that I am talking about are at some level of deployment. For example, the U.K.'s national grid has already integrated deep learning models into forecasting supply and demand of electricity and has greatly increased efficiency as a result," Rolnick said. "The UN uses AI to guide interventions in flooded areas […]. These are not just research projects, and it's fundamentally important."
Although AI is an incredibly promising technology, there are a couple of drawbacks to be addressed. One of these drawbacks is human bias: since humans write the algorithms and supply the human-collected data used to train machine learning, these tools can replicate human biases. To prevent these biases, then, human bias itself needs to be corrected; there is no software fix.
"We cannot technology our way out of most biases," Rolnick said. "The solutions to biases in technology are the same as solutions to biases in any other part of human endeavour. That means they are hard, but they are solvable via human choices."
This technology also requires enormous quantities of energy for algorithms to be trained and maintained, but the energy can be minimized by designing efficient algorithms and planning applications carefully.
"It's also worth noting that most of the negative climate impacts of AI globally come from how it is used, not the direct energy consumption," Rolnick wrote in a follow-up email.
Although machine learning models can be quite energy-hungry, the models Rolnick uses are not exceedingly energy-intensive. With careful planning, scientists hope that the emissions benefits from these models outweigh their energy consumption.
Read the original here:
Where nature meets technology: Machine learning as a tool for ... - McGill Tribune
Machine-learning-powered extraction of molecular diffusivity from … – Nature.com
Go here to see the original:
Machine-learning-powered extraction of molecular diffusivity from ... - Nature.com
Top 9 Ways Ethical Hackers Will Use Machine Learning to Launch … – Analytics Insight
The top 9 ways ethical hackers will use machine learning to launch attacks are listed here
Several threat detection and response platforms are using machine learning and artificial intelligence (AI) as essential technologies. Security teams benefit from being able to learn on the go and automatically adjust to evolving cyber threats.
Yet, certain ethical hackers are also evading security measures, finding new vulnerabilities, and scaling up their cyberattacks at an unprecedented rate and with fatal outcomes by utilizing machine learning and AI. Below are the top 9 ways ethical hackers will use machine learning to launch attacks.
Machine learning has been used by defenders for decades to identify spam. The attacker can alter their behavior if the spam filter they are using offers explanations for why an email message was rejected or creates a score of some sort. They would be utilizing lawful technology to boost the effectiveness of their attacks.
Ethical hackers will use machine learning to creatively alter phishing emails so that they don't appear in bulk email lists and are designed to encourage interaction and clicks. They go beyond simply reading the email's text. AI can produce realistic-looking images, social media profiles, and other content to give communication the best possible legitimacy.
Machine learning is also being used by criminals to improve their password-guessing skills. Moreover, they use machine learning to recognize security measures so they can guess better passwords with fewer attempts, increasing the likelihood that they will succeed in gaining access to a system.
The most ominous use of artificial intelligence is the creation of deep fake technologies that can produce audio or video that is difficult to differentiate from actual human speech. To make their messages seem more credible, fraudsters are now leveraging AI to create realistic-looking user profiles, photographs, and phishing emails. It's a huge industry.
Nowadays, a lot of widely used security technologies come equipped with artificial intelligence or machine learning. For instance, anti-virus technologies are increasingly searching for suspicious activities outside the fundamental signs. Attackers might use these tools to modify their malware so that it can avoid detection rather than defend against attacks.
Attackers can employ machine learning for reconnaissance to examine the traffic patterns, defenses, and possible weaknesses of their target. It's unlikely that the typical cybercriminal would take on anything like this because it's difficult to do. It may, however, become more publicly available if, at some time, the technology is marketed and offered as a service through the criminal underworld.
Malware may not be able to link back to its command-and-control servers for instructions if a business recognizes that it is under assault and disables internet connectivity for impacted computers.
A machine learning model can be deceived by an attacker by being fed fresh data. For instance, a compromised user account may log into a system every day at 2 a.m. to perform unimportant tasks, fooling the system into thinking that working at that hour is normal, and reducing the number of security checks the user must complete.
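A minimal sketch of this poisoning effect, with invented numbers: a z-score anomaly detector flags a 2 a.m. login against a clean baseline, but stops flagging it once an attacker has seeded the training data with enough 2 a.m. logins:

```python
import statistics

def is_anomalous(login_hours, new_hour, z_threshold=2.0):
    """Flag a login hour that deviates strongly from the learned baseline."""
    mean = statistics.mean(login_hours)
    stdev = statistics.pstdev(login_hours)
    if stdev == 0:
        return new_hour != mean
    return abs(new_hour - mean) / stdev > z_threshold

# Clean baseline (hypothetical): logins cluster around 9-11 a.m.,
# so a 2 a.m. login stands out...
clean = [9, 10, 9, 11, 10, 9, 10, 11]
print(is_anomalous(clean, 2))  # True

# ...but after the attacker steadily injects 2 a.m. logins into the
# training data, the learned notion of "normal" drifts and 2 a.m.
# no longer triggers an alert.
poisoned = clean + [2] * 8
print(is_anomalous(poisoned, 2))  # False
```

The lesson generalises: any model that retrains on data an adversary can influence inherits that adversary's inputs as "normal".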
Fuzzing software is used by reputable software engineers and penetration testers to create random sample inputs to crash a program or discover a vulnerability. The most advanced versions of this software prioritize inputs such as text strings most likely to create issues using machine learning to generate inputs that are more targeted and ordered. Because of this, fuzzing technologies are not only more effective for businesses but also more lethal in the hands of attackers.
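Machine-learning guidance aside, the core loop of a fuzzer is simple. This stdlib-only sketch (the target function and its bug are invented for illustration) throws random strings at a parser until one crashes it:

```python
import random
import string

def fragile_parse(s):
    """A toy parser with a planted bug: it crashes on inputs containing '%'."""
    if "%" in s:
        raise ValueError("unhandled escape sequence")
    return s.upper()

def fuzz(target, trials=10_000, seed=42):
    """Throw random strings at `target`; return the first crashing input."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + string.punctuation
    for _ in range(trials):
        candidate = "".join(
            rng.choice(alphabet) for _ in range(rng.randint(0, 12))
        )
        try:
            target(candidate)
        except Exception:
            return candidate  # found a crashing input
    return None

crash = fuzz(fragile_parse)
print(crash is not None and "%" in crash)  # True
```

ML-driven fuzzers replace the uniform random generator here with a model that learns which input shapes are most likely to trigger failures, which is exactly why the same tooling cuts both ways.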
Read more:
Top 9 Ways Ethical Hackers Will Use Machine Learning to Launch ... - Analytics Insight
Hear from the experts: using machine learning and AI to grow your … – SmartCompany
Simon Johnston, artificial intelligence and machine learning practice lead at AWS.
Across two events on February 9 and 16, SmartCompany and AWS dived into all things AI and machine learning. In his introduction speech, host Simon Crerar referenced the wide-eyed futurism of science fiction films like Blade Runner. According to keynote speaker Simon Johnston of AWS, that sci-fi future is already here. "If you can take one thing away from today's presentation from myself, it would be that the technology is good to go. It's not about the technology, in my opinion, it's purely the application, the integration, the process."
Sharing their insights were speakers from AWS, Deloitte, Carsales and Nearmap. Let's take a closer look at a few of the key themes that emerged over the two days.
One big talking point of both sessions was AI/ML democratisation: the idea that the technology is accessible to more businesses, budgets and skill levels than ever. Simon Johnston touched on how AWS uses a platform called Canvas for accessible education. "If you're a business and you've got business analysts that know their part of the business really well, know their data and use cases but don't know machine learning, they shouldn't be prevented from developing these capabilities. That's what Canvas allows."
Augustinus Nalwan, GM of AI, Data Science and Data Platform at Carsales, showed how that business is putting AI in the hands of just about every employee. Carsales began its AI journey in 2016, added a data science team in 2018 and, when the project brought success, the workload increased. The problem, according to Nalwan, is that hiring more data scientists and machine learning experts is extremely expensive. Instead, Carsales used existing Metaflow and SageMaker ecosystems to automate workflows and upskill employees. "At Carsales, 70% of AI models can be built using this platform's algorithm, which does not require data scientists," said Nalwan. "Anyone with good practice and guided by data scientists can perform this job." Carsales has even gone further, using AWS tools like Rekognition and Comprehend to allow those with no programming skills (such as marketing and finance teams) to train models such as spam message recognition.
Simon Johnston noted the recent, rapid growth of AI, talking about themes of data growth and increases in model sophistication. "In the space of two years we've had a 1600x growth in the number of parameters. When you talk about ChatGPT and OpenAI-type algorithms, they're sitting around 175 billion parameters and it'll continue to grow."
With such rapid growth has come both extreme complexity and, as Michael Bewley of aerial imaging company Nearmap has found, heavy processing requirements. Nearmap uses deep learning models to create incredibly detailed geospatial images which now total over 25 petabytes of data. Bewley says that, for businesses similarly leaning on ML, it's wise to use cloud AI like AWS SageMaker rather than taking everything on in-house. "At some point there's a break point where local machines really start to suffer. They're great for freedom early on, but then there's size and scale limitations. Cloud computing is really important. Probably the most important thing is, don't bring your legacy baggage with you on the transition to cloud."
In Melbourne, Simon Johnston asked the audience how many had implemented or would implement machine learning in their business, and about a quarter raised their hands. The question for those attendees, then, is: how do we get started?
In the discussion panel, Alon Ellis of Deloitte pointed to a classic model of technological adoption, the Gartner hype cycle. Ellis says that, for businesses looking to effectively wield AI and ML tech, they need to avoid the distraction of hype and focus on practically applying these technologies. "It's bringing it back to that business problem, getting really clear on how that's going to work, how you're going to alter the business going forward, what that might mean for different teams, different ways of working, and capitalising on that so you can go from the hype through to the pragmatic, implemented outcome."
For AWS chief technologist Rada Stanic, jumping into AI and ML means getting your business's data ready to go. "The success of the project will rely on the quality and breadth of the data that you have. If the quality data is there and it's ready, I've seen proof of concepts happen in a couple of days, a week, to demonstrate that there is value in pursuing the project."
Learn about the 6 key trends driving Machine Learning innovation across Australian and New Zealand industries inclusive of improvements to Model Sophistication, Data Growth, ML Industrialisation, ML Powered Use Cases, Responsible AI and ML democratisation.
On-Demand Keynote Recording: View Here
See the original post:
Hear from the experts: using machine learning and AI to grow your ... - SmartCompany
Top 10 Concepts and Technologies in Machine learning in 2023 – Analytics Insight
Machine learning is the process of teaching computers to learn from data without being explicitly programmed. It is a field that is continuously evolving, with new ideas and technologies being created all the time. To stay ahead of the curve, data scientists should follow resources like this one to keep up to speed on the newest developments. The top 10 concepts and technologies in machine learning in 2023, listed below, will help you understand how these technologies can be used in practice and give you ideas for possible applications in your own business or area of work.
Deep Neural Networks (DNN): Deep neural networks build on neural-network ideas that date back to the 1950s. DNNs are capable of image identification, voice recognition, and natural language processing. They are made up of numerous hidden layers of neurons, each of which learns a representation of the incoming data; these learned representations are then used to predict the output.
Generative Adversarial Networks: GANs are a form of generative model in which two competing neural networks are trained against each other. One network attempts to create samples that appear genuine, while the other network determines whether those samples come from real or generated data. GANs have demonstrated tremendous success in generating pictures and videos. They are used to generate new data that resembles existing data but is entirely new; for example, GANs can generate new images in the style of masterpieces by renowned artists, sometimes called contemporary AI art.
Deep Learning: Deep learning is a type of machine learning that learns representations of data through many processing layers (sometimes hundreds). This enables computers to accomplish jobs that humans find challenging. Deep learning has been used in a wide range of applications, including computer vision, voice recognition, natural language processing, automation, and reinforcement learning.
COVID-19: Machine Learning and Artificial Intelligence: Since January 2020, artificial intelligence (AI) has been used to identify COVID-19 cases in China. Wuhan University experts created an AI system built on a deep learning algorithm capable of analyzing data from phone calls, text messages, social media entries, and other sources.
Conversational AI or Conversational Bots: This is a technology in which we talk to a chatbot; it processes the voice or text input and then carries out a specific task or returns an answer.
Machine Learning in Cybersecurity: Cybersecurity is the area in which it is ensured that an organization, or anyone for that matter, is secure from all security-related dangers on the Internet or in any network. An organization deals with a lot of complex data that needs to be protected from malicious dangers such as anyone attempting to breach into your computer or gain access to your data or unauthorized access, which is what cyber security is all about.
Machine Learning and IoT: The various IoT processes that we use in businesses are prone to errors; after all, machines are involved. If the system is not correctly designed or has flaws, it is destined to fail at some point. With machine learning, however, maintenance becomes much easier: the factors that may lead to a failure in an IoT process can be identified ahead of time and a plan of action prepared, allowing companies to save a significant amount of money by lowering maintenance costs.
Augmented Reality: Augmented reality (AR) is part of the future of AI, and many real-life applications stand to benefit from its promise.
Automated Machine Learning: Traditional machine learning model creation required extensive subject expertise, along with the time to build and compare hundreds of models, making it time-consuming, resource-intensive, and difficult. Automated machine learning aids in the rapid development of production-ready ML models.
Time-Series Forecasting: Forecasting is an essential component of any business, whether the target is sales, customer demand, revenue, or inventory. When combined with automated ML, high-quality suggested time-series forecasts can be obtained.
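The adversarial setup described in the GAN entry above can be sketched with a toy one-dimensional example: a linear generator tries to fool a logistic discriminator into accepting its samples as draws from the real distribution. All distributions, learning rates, and parameter choices here are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) (an arbitrary target distribution)
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: g(z) = a*z + b, a learned affine map of Gaussian noise
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), a logistic classifier
w, c = 0.0, 0.0

lr = 0.05
for step in range(3000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator update: push d(real) -> 1, d(fake) -> 0
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update (non-saturating loss): push d(fake) -> 1
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples should cluster near the real mean
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean: {np.mean(samples):.2f}")
```

The generator never sees real data directly; it improves only through the gradient signal passed back from the discriminator, which is the defining feature of the adversarial setup.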
Originally posted here:
Top 10 Concepts and Technologies in Machine learning in 2023 - Analytics Insight
Known Medicine is using machine learning to cure cancer – Utah Business
According to statistics from the National Cancer Institute, one in five men and one in six women in the United States are likely to die from cancer during their lifetime.
This year alone, the American Cancer Society estimates we're in for a rough ride. In 2023, more than 1.9 million new cancer cases are expected to be diagnosed in the U.S., and nearly 610,000 people are expected to die from cancer (about 1,670 deaths per day). Only heart disease outranks cancer, making cancer the second most common cause of death in the nation.
These chilling statistics make it clear why something must be done to save more lives from this hydra of a disease.
The founders of Known Medicine couldn't agree more. In 2020, the team launched the startup as a company dedicated to expediting the development of cancer treatments. As explained on its website, Known Medicine's "machine learning-based sensitivity assay, paired with -omics data, allows [the company] to identify predictive biomarkers and the most likely responders for any new drug."
Put in simpler terms, Katie-Rose Skelly, co-founder and CTO of Known Medicine, says the company is essentially trying to find out beforehand which patients will respond to which drug. This can help pharmaceutical companies design better clinical trials and improve a drug's chances for success.
To conduct its research, Known Medicine works closely with about 10 different cancer research centers that provide samples from patients who have authorized their use for research. The company breaks these tissue samples down into thousands of microtumors and doses each microtumor with a panel of over 100 drugs to see what works for each individual patient. The drugs include some that have already been approved, some that have failed previous clinical trials, some that pharmaceutical partners are interested in and some that Known Medicine might consider for in-licensing.
The Known Medicine team is currently working toward its first peer-reviewed journal publication, which will essentially provide proof of concept for the innovative work the company is doing. "What we'll be able to show is that we can look at the patient who donated the tissue, see what drug they were given, see how they responded, and identify whether that matches what we would have expected," Skelly says.
In other words, the initial publication will prove that Known Medicine can replicate patient responses and that its microtumors are a faithful representation of what the patients cells will do in the body.
From there, Known Medicine's goal will be to aid in the drug development process. Skelly explains that most drugs currently fail in clinical trials, with just 3.4 percent of oncology drugs making it to market, often due to ineffective patient population selection.
"If you try running a new anticancer drug on 100 patients, maybe 20 or 30 respond well, better than they would have to any other drug," Skelly says. "But if you can't identify that 20 percent to 30 percent upfront, your drug is going to fail clinical trials."
Known Medicine's platform will enable drug companies to identify trial candidates who are more likely to respond to their drugs. "We can look at what kind of genetic signatures [patients] have, what kind of RNA expression levels they have," Skelly says. "We can see if there is anything they can use to separate the patients who will respond from the patients who won't, and only enroll the people who will respond in the clinical trials."
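The patient-selection idea Skelly describes can be illustrated with a toy calculation. The biomarker, threshold, and response probabilities below are entirely made up; the sketch only shows how enrolling biomarker-selected patients raises the observed response rate compared with an all-comers trial.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
# Hypothetical biomarker: expression level of a single gene (illustrative)
expression = rng.normal(0.0, 1.0, n)

# Assume responders are concentrated among high expressers:
# response probability 0.8 above the threshold, 0.05 below (made-up numbers)
responds = rng.random(n) < np.where(expression > 1.0, 0.8, 0.05)

# All-comers trial: enroll everyone, regardless of biomarker status
all_comers_rate = responds.mean()

# Biomarker-enriched trial: enroll only patients above the threshold
selected = expression > 1.0
enriched_rate = responds[selected].mean()

print(f"all-comers response rate:         {all_comers_rate:.2f}")
print(f"biomarker-selected response rate: {enriched_rate:.2f}")
```

With the same underlying drug effect, the enriched trial shows a far higher response rate simply because non-responders were screened out up front, which is exactly why a drug that "fails" in all-comers can succeed in a selected population.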
The concept for the company came out of a collaboration between Skelly and Dr. Andrea Mazzocchi, who serves as the company's CEO. The co-founders initially met by chance: Skelly was working as a data scientist at Recursion, a Utah-based digital biology drug discovery company, while Mazzocchi was pursuing her doctorate at Wake Forest and happened to be dating Skelly's best friend at Recursion.
Machine Vision: MERLIC 5.3 makes deep learning technologies … – Robotics Tomorrow
Object detection as a new deep learning feature in MERLIC 5.3. Expansion of easy-to-use functionalities. Available from April 20, 2023.
Munich, March 30, 2023 - With MVTec MERLIC, a software for machine vision, it is possible to both develop and solve complete machine vision applications even without any programming knowledge. On April 20, 2023, MVTec Software GmbH (www.mvtec.com), a leading international software manufacturer for machine vision, will launch the latest version 5.3 of its easy-to-use machine vision software MERLIC. First, customers can look forward to a new deep learning feature; second, user-friendliness has been further enhanced. These improvements are in line with the company's goal of offering powerful machine vision software for beginners as well.
New tool for deep-learning-based object detection
The deep learning technology for object detection is now also available in MERLIC. The "Find Object" tool locates trained object classes and identifies them with a surrounding rectangle (bounding box). Touching or partially overlapping objects are also separated, which allows counting of objects. Labeling and training are possible without programming knowledge using the free MVTec Deep Learning Tool. The trained network can then be loaded into MERLIC and used with a single click.
Plug-in for Mitsubishi Electric MELSEC PLCs
With MERLIC 5.3 it is possible to communicate directly with the widespread Mitsubishi Electric PLCs via the MC/SLMP protocol. This is made possible by a newly developed plug-in included in MERLIC, which supports the Mitsubishi Electric iQ-R, iQ-F, L, and Q series. MERLIC thus offers significant added value for customers working with Mitsubishi Electric PLCs.
Training functionalities in the MERLIC Frontend
With the new version 5.3 it is now possible to use training functionalities in the MERLIC Frontend even during runtime. For example, new matching models or code reading parameters can be trained. This means that the end customer can also perform training for other products directly on the production line in the MERLIC Frontend, which significantly increases flexibility and the range of possible applications.
Tool grouping for clearer workflows
MERLIC helps solve complex machine vision applications, even without programming knowledge, and the visual Tool Flow supports this. To maintain an overview even with complex applications, it is now possible to group several tools into a virtual tool inside the Tool Flow.
Concise startup dialog for easy access to functions and machine vision applications
Usability is one of MERLIC's unique selling points. To further strengthen this ease-of-use approach, a start dialog with thumbnails has been integrated into the MERLIC Creator. It allows users to get an overview of their most recently opened MVApps, and all standard examples are clearly displayed. Especially for new users, these offer an orientation for reliably creating their own applications. In addition, helpful introductory material as well as the documentation can be easily accessed via quick links.
About MVTec Software GmbH
MVTec is a leading manufacturer of standard software for machine vision. MVTec products are used in all demanding areas of imaging: semiconductor industry, surface inspection, automatic optical inspection systems, quality control, metrology, as well as medicine and surveillance. By providing modern technologies such as 3D vision, deep learning, and embedded vision, software by MVTec also enables new automation solutions for the Industrial Internet of Things aka Industry 4.0. With locations in Germany, the USA, and China, as well as an established network of international distributors, MVTec is represented in more than 35 countries worldwide. http://www.mvtec.com
Machine learning combined with multispectral infrared imaging to guide cancer surgery – Medical Xpress
Scientists employ multispectral emission profiles instead of the conventional fluorescence intensity profile to train machine learning models for accurately identifying tumor boundaries. Credit: Waterhouse et al, DOI: 10.1117/1.JBO.28.9.094804
Surgical tumor removal remains one of the most common procedures during cancer treatment, with about 45% of cancer patients undergoing this surgery at some point. Thanks to recent progress in imaging and biochemical technologies, surgeons are now better able to tell tumors apart from healthy tissue. Specifically, this is enabled by a technique called "fluorescence-guided surgery" (FGS).
In FGS, the patient's tissue is stained with a dye that emits infrared light when irradiated with a special light source. The dye preferentially binds to the surface of tumor cells, so that its light-wave emissions provide information on the location and extent of the tumor. In most FGS-based approaches, the absolute intensity of the infrared emissions is used as the main criterion for discerning the pixels corresponding to tumors. However, it turns out that the intensity is sensitive to lighting conditions, the camera setup, the amount of dye used, and the time elapsed after staining. As a result, the intensity-based classification is prone to erroneous interpretation.
But what if we could instead use an intensity-independent approach to classify healthy and tumor cells? A recent study published in the Journal of Biomedical Optics and led by Dale J. Waterhouse from University College London, U.K., has now proposed such an approach. The research team has developed a new technique that combines machine learning with short-wave infrared (SWIR) fluorescence imaging to detect precise boundaries of tumors.
Their method relies on capturing multispectral SWIR images of the dyed tissue rather than simply measuring the total intensity over one particular wavelength. Put simply, the team sequentially placed six different wavelength (color) filters in front of their SWIR optical system and registered six measurements for each pixel. This allowed the researchers to create spectral profiles for each type of pixel (background, healthy, or tumor). Next, they trained seven machine learning models to identify these profiles accurately in multispectral SWIR images.
The researchers trained and validated the models in vivo, using SWIR images with a lab model for an aggressive type of neuroblastoma. They also compared different normalization approaches aimed at making the classification of pixels independent of the absolute intensity such that it was governed by the pixel's spectral profile only.
Out of the seven tested models, the best performing model achieved a remarkable per-pixel classification accuracy of 97.5% (the accuracies for tumor, healthy, and background pixels were 97.1%, 93.5%, and 99.2%, respectively). Moreover, thanks to the normalization of the spectral profiles, the results of the model were far more robust against changes in imaging conditions. This is a particularly desirable feature for clinical applications since the ideal conditions under which new imaging technologies are usually tested are not representative of the real-world clinical environment.
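The core idea of classifying pixels by the shape of their six-band spectrum, independent of absolute brightness, can be sketched as follows. The spectral profiles, noise levels, and simple nearest-centroid classifier below are illustrative stand-ins, not the models or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 6-band spectral shapes for the three pixel classes
# (made-up profiles, not the study's measured spectra)
shapes = {
    "background": np.array([1, 1, 1, 1, 1, 1], float),
    "healthy":    np.array([1, 2, 4, 3, 2, 1], float),
    "tumor":      np.array([1, 2, 3, 5, 6, 4], float),
}

def simulate(label, n):
    """Simulate n pixels of a class with random per-pixel brightness."""
    base = shapes[label] / shapes[label].sum()
    # Random scale mimics varying lighting, dye concentration, and timing
    scale = rng.uniform(0.5, 5.0, (n, 1))
    noise = rng.normal(0.0, 0.005, (n, 6))
    return np.clip(scale * base + noise, 1e-6, None)

def normalize(x):
    # Unit-sum normalization discards absolute intensity, keeping only
    # the spectral shape, so classification is intensity-independent
    return x / x.sum(axis=1, keepdims=True)

labels = list(shapes)
train = {lab: normalize(simulate(lab, 200)) for lab in labels}
centroids = np.stack([train[lab].mean(axis=0) for lab in labels])

def classify(pixels):
    d = ((normalize(pixels)[:, None, :] - centroids[None]) ** 2).sum(-1)
    return [labels[i] for i in d.argmin(axis=1)]

# Despite wildly varying brightness, normalized spectra stay separable
preds = classify(simulate("tumor", 100))
acc = sum(p == "tumor" for p in preds) / len(preds)
print(f"tumor pixel accuracy: {acc:.2f}")
```

Without the normalization step, the same classifier would confuse a dimly lit tumor pixel with a bright healthy one, which mirrors the paper's point that raw intensity is sensitive to lighting, camera setup, and dye dose.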
Based on their findings, the team has high hopes for the proposed methodology. They anticipate that a pilot study on its implementation in human patients could help revolutionize the field of FGS. Additionally, multispectral FGS could be extended beyond the scope of the present study. For example, it could be used to remove surgical or background lights from images, remove unwanted reflections, and provide noninvasive ways for measuring lipid content and oxygen saturation. Moreover, multispectral systems enable the use of multiple fluorescent dyes with different emission characteristics simultaneously, since the signals from each dye can be untangled from the total measurements based on their spectral profile. These multiple dyes can be used to target multiple aspects of disease, providing surgeons with even greater information.
Future studies will surely unlock the full potential of multispectral FGS, opening doors to more effective surgical procedures for treating cancer and other diseases.
More information: Dale J. Waterhouse et al, Enhancing intraoperative tumor delineation with multispectral short-wave infrared fluorescence imaging and machine learning, Journal of Biomedical Optics (2023). DOI: 10.1117/1.JBO.28.9.094804
Journal information: Journal of Biomedical Optics
Will the Raspberry Pi 5 CPU Have Built-in Machine Learning? – MUO – MakeUseOf
Raspberry Pi has been at the forefront of single-board computers (SBCs) for quite some time. However, nearly four years after the launch of Raspberry Pi 4, a new model is on the horizon.
Previous Raspberry Pi iterations generally brought faster processors, more RAM, and, with the Pi 4, improved IO. However, many Pis are used for AI (artificial intelligence) and ML (machine learning) purposes, leading to plenty of speculation from DIY enthusiasts about the Raspberry Pi 5's built-in machine learning capabilities.
Whether the Raspberry Pi 5 gets built-in machine learning capabilities depends a lot on what CPU the board is based around. Raspberry Pi co-founder Eben Upton teased the future of custom Pi silicon back at the tinyML Summit 2021. Since then, an imminent Raspberry Pi 5 release with massive improvements to ML is looking very likely.
Up until Raspberry Pi 4, the development team had been using ARM's Cortex processors. However, with the release of the Raspberry Pi Pico in 2021 came the RP2040, the company's first in-house SoC (system-on-chip). While it doesn't have the same power as the Raspberry Pi Zero 2 W, one of the cheapest SBCs on the market, it does provide microcontroller capabilities similar to that of an Arduino.
The Raspberry Pi 2, Pi 3, and Pi 4 have used ARM's Cortex-A7, Cortex-A53, and Cortex-A72 processors respectively. These have increased the Pi's processing capabilities over each generation, giving each progressive Pi more ML prowess. So does that mean we'll see built-in machine learning on the Raspberry Pi 5's CPU?
While there's no official word on what processor will power the Pi 5, you can be pretty sure it'll be the most ML-capable SBC in the Raspberry Pi lineup and will most likely have built-in ML support. The company's Application Specific Integrated Circuit (ASIC) team has been working since then on the next iteration, which seems to be focused on lightweight accelerators for ultra-low-power ML applications.
Upton's talk at tinyML Summit 2021 suggests that it might come in the form of lightweight accelerators likely running four to eight multiply-accumulates (MACs) per clock cycle. The company has also worked with ArduCam on the ArduCam Pico4ML, which brings together ML, a camera, microphones, and a screen into a Pico-sized package.
While all the details about the Raspberry Pi 5 aren't yet confirmed, if Raspberry Pi sticks to its trend of incrementally upgrading its boards, the upcoming SBC could be a rather useful board that checks a lot of boxes for ML enthusiasts and developers looking for cheap hardware for their ML projects.
The Raspberry Pi 5 could come with built-in machine learning support, which opens up a plethora of opportunities for just about anyone to build their own ML applications with hardware that's finally able to keep up with the technology without breaking the bank.
You can already run anything from a large language model (LLM) to a Minecraft server on existing Raspberry Pis. As the SBC becomes more capable (and accessible), the possibilities of what you can do with a single credit-card-sized computer will also increase.