Category Archives: Machine Learning
Artificial Intelligence: Transforming Healthcare, Cybersecurity, and Communications – Forbes
Globally, a new era of rapidly developing and interconnected technologies that combine engineering, computer algorithms, and culture is already beginning. The digital transformation, or convergence, we will experience in the coming years will alter the basic ways we live, work, and connect.
More remarkably, the advent of artificial intelligence (AI) and machine learning-based computers in the next century may alter how we relate to ourselves.
The digital ecosystem's networked computer components, which are made possible by machine learning and artificial intelligence, will have a significant impact on practically every sector of the economy. These integrated AI and computing capabilities could pave the way for new frontiers in fields as diverse as genetic engineering, augmented reality, robotics, renewable energy, big data, and more.
Three important verticals in this digital transformation are already being impacted by AI: 1) Healthcare, 2) Cybersecurity, and 3) Communications.
Artificial intelligence: What is it?
AI is a "technology that appears to emulate human performance typically by learning, coming to conclusions, seeming to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance, or replacing people on execution of non-routine tasks," according to Gartner.
Artificial intelligence (AI) systems aim to reproduce human characteristics and processing power in a machine and outperform human speed and constraints. Machine learning and natural language processing, which are already commonplace in our daily lives, are components of the advent of AI. Today's AI can comprehend, identify, and resolve issues from both structured and unstructured data - and in some situations, without being explicitly trained.
AI has the potential to significantly alter cognitive processes and generate economic gains. According to McKinsey & Company, the automation of knowledge work by intelligent software systems that can carry out knowledge tasks from unstructured commands could have a $5 to $7 trillion economic impact by 2025. "These technologies hold many interesting possibilities," says Dave Choplin, chief envisioning officer at Microsoft UK, who has called artificial intelligence "the most important technology that anyone on the planet is working on right now." Research and development spending and investments are reliable indicators of upcoming technological advancements. Goldman Sachs, a financial services company, predicts that by 2025, investments in artificial intelligence will reach $200 billion globally.
AI-enabled computers are designed to automate fundamental tasks such as speech recognition, learning, planning, and problem-solving. By prioritizing and acting on data, these technologies can facilitate more effective decision-making, particularly across larger networks with many users and variables.
AI and Healthcare
AI is already transforming the healthcare industry in medication discovery, where it is used to evaluate combinations of substances and procedures that can improve human health and thwart pandemics. AI was crucial in helping medical personnel respond to the COVID-19 outbreak and in the development of COVID-19 vaccines.
Predictive analytics is one of the most fascinating applications of AI in healthcare. To forecast future outcomes based on a patient's current health or symptoms, predictive analytics leverages past data on their diseases and treatments. This enables doctors to choose the best course of action for treating individuals with persistent diseases or other health problems. Google's DeepMind team recently developed models that can predict the structures of many proteins, which is very advantageous for science and medical research.
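To make the workflow concrete, here is a minimal sketch of predictive analytics on patient records, written in Python with scikit-learn. The file name, column names, and "readmitted" outcome are hypothetical stand-ins, not a reference to any particular clinical system or dataset.

```python
# A minimal sketch of predictive analytics on patient records, assuming a
# hypothetical CSV with demographic/vital-sign columns and a binary outcome.
# Purely illustrative; not any specific clinical product or dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical historical data: one row per patient, "readmitted" is the outcome.
df = pd.read_csv("patient_history.csv")
features = ["age", "systolic_bp", "hba1c", "prior_admissions"]  # assumed columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["readmitted"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Forecast risk for a new patient so clinicians can prioritize follow-up.
new_patient = pd.DataFrame([[67, 148, 8.1, 2]], columns=features)
print("Predicted readmission risk:", model.predict_proba(new_patient)[0, 1])
```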
AI will advance in predicting health outcomes, offering individualized care plans, and even treating illness as it continues to develop. Healthcare professionals will be able to treat patients more effectively at home, in charitable or religious settings, and in the office thanks to this power.
AI and Cybersecurity
AI in cybersecurity can offer a quicker way to recognize and detect online threats. Cybersecurity companies have developed AI-powered software and platforms that scan data and files to detect, in real time, the use of abnormal or malicious credentials, brute-force login attempts, unusual data movement, and data exfiltration. This enables companies to make statistical judgments and guard against anomalies before they have to be reported and patched.
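As a rough illustration of the anomaly-detection principle described above, the sketch below trains an isolation forest on synthetic "normal" login activity and scores a suspicious event. The features and numbers are invented for the example; commercial security platforms are far more sophisticated.

```python
# A minimal sketch of ML-based anomaly detection on login/transfer events.
# Features and values are synthetic; this only illustrates the statistical idea.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per event: failed logins in the last hour, bytes uploaded,
# and hour of day. Normal traffic dominates the training window.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.poisson(1, 5000),            # few failed logins
    rng.normal(2e6, 5e5, 5000),      # typical upload volume
    rng.integers(8, 18, 5000),       # business hours
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A brute-force login burst plus a huge off-hours transfer scores as anomalous.
suspicious = np.array([[40, 5e8, 3]])
print(detector.decision_function(suspicious))   # strongly negative = anomalous
print(detector.predict(suspicious))             # -1 flags a likely anomaly
```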
To assist cybersecurity professionals, AI also improves network monitoring and threat detection technologies by minimizing noise, delivering priority warnings, utilizing contextual data backed by proof, and using automated analysis based on correlation indices from cyber threat intelligence reports.
Automation is undoubtedly important in the cybersecurity world. "There are too many things happening - too much data, too many attackers, too much of an attack surface to defend - that without those automated capabilities that you get with artificial intelligence and machine learning, you don't have a prayer of being able to defend yourself," said Art Coviello, a partner at Rally Ventures and the former chairman of RSA.
Although AI and ML can be useful tools for cyber defense, they also have drawbacks. Threat actors can use the same capabilities, and malicious governments and criminal hackers already employ AI and ML to identify and exploit gaps in threat-detection models. They use a variety of techniques to do this; their preferred methods frequently involve automated, human-impersonating phishing attacks and self-modifying malware designed to trick or even defeat cyber-defense systems and programs.
Cybercriminals are already probing and attacking their victims' networks using AI and ML capabilities. The most at risk are small firms, organizations, and in particular healthcare facilities that cannot afford substantial expenditures on emerging defensive cybersecurity technologies like AI. Ransomware-based extortion by hackers who demand payment in cryptocurrency poses a persistent and growing threat.
Communications & Customer Service (CX)
AI is also changing the way our society communicates. Businesses are already using robotic process automation (RPA), a type of artificial intelligence, to automate more routine tasks and reduce manual labor. By applying technology to routine, repeatable tasks, RPA improves service operations and frees up human talent for more difficult, complicated problems. It is scalable and adaptable to performance requirements. In the private sector, RPA is frequently used in contact centers, insurance enrollment and billing, claims processing, and medical coding, among other applications.
Chatbots, voice assistants, and other messaging apps that use conversational AI help a variety of sectors by automating customer service and providing round-the-clock support. Conversational AI and chatbots continue to advance, introducing new forms of human-like communication through facial expressions and contextual awareness. These applications are already widespread in the healthcare, retail, and travel sectors.
A wide range of business sectors, in the media and on social media, have used AI technologies to produce news stories, social media posts, legal filings, and banking reports. The potential of AI and its human-like correlations, especially in textual analysis, has recently come to light thanks to a conversational chatbot called ChatGPT. Another OpenAI program, DALL-E, has demonstrated the capacity to generate graphics from simple instructions. Both AI systems accomplish this by mimicking human speech and language and synthesizing data.
AI and Our Future
We need to consider any potential ethical concerns with artificial intelligence in the future. We need to consider what might occur if we use this technology and who will oversee it.
Algorithm bias is a serious problem, and it has been demonstrated repeatedly. A recent MIT project examined several programming approaches for embedded viewpoints and found that many of the programs carried harmful biases. We need to account for bias when working with human variables in programming: technology is made by humans, and humans have prejudices.
This is where technology can go wrong, and human oversight of its development and application is essential. We must ensure that the people writing the code and the algorithms are as diverse as possible. With responsible oversight of the data going in and the responses coming out, technology can be shaped to be more balanced.
Understanding AI's contextual limits is another issue. A programmed algorithm sees only Xs and Os; it does not capture the interactions or behavior between people. Interactivity and behavior may eventually be encoded into software, but that time has not yet come.
The genuine hope is that we will be able to guide these incredible technologies we are creating in the proper direction for good. If we use them properly, each of them has applications that could help our civilization. It must be done by the entire world community. To keep things in check, we need collective research, ethics, transparent strategies, and proper industry incentives to keep AI on the right track.
Chuck Brooks, President of Brooks Consulting International, is a globally recognized thought leader and subject matter expert in Cybersecurity and Emerging Technologies. LinkedIn named Chuck one of The Top 5 Tech People to Follow on LinkedIn. He was named Cybersecurity Person of the Year by Cyber Express, one of the world's 10 Best Cyber Security and Technology Experts by Best Rated, a Top 50 Global Influencer in Risk and Compliance by Thomson Reuters, Best of The World in Security by CISO Platform, by IFSEC, and by Thinkers 360 as the #2 Global Cybersecurity Influencer. He was featured in the 2020 and 2021 Onalytica "Who's Who in Cybersecurity" as one of the top influencers for cybersecurity and risk management issues. He was also named one of the Top 5 Executives to Follow on Cybersecurity by Executive Mosaic. He is also a Cybersecurity Expert for The Network at the Washington Post, Visiting Editor at Homeland Security Today, Expert for Executive Mosaic/GovCon, and a Contributor to FORBES.
In government, Chuck has received two senior Presidential appointments. Under President George W. Bush, he was appointed to the Department of Homeland Security (DHS) as the first Legislative Director of the Science & Technology Directorate. Under President Reagan, he was appointed Special Assistant to the Director of Voice of America. He also served as a top advisor to the late Senator Arlen Specter on Capitol Hill, covering security and technology issues. Currently, Chuck serves DHS CISA on a working group exploring space and satellite cybersecurity.
In industry, Chuck has served in senior executive roles: at General Dynamics as Principal Market Growth Strategist for Cyber Systems, at Xerox as Vice President & Client Executive for Homeland Security, at Rapiscan as Vice President of R&D, at SRA as Vice President of Government Relations, and at Sutherland as Vice President of Marketing and Government Relations. He currently sits on several corporate and not-for-profit boards in advisory roles.
In academia, Chuck is Adjunct Faculty at Georgetown University's Graduate Applied Intelligence Program and Graduate Cybersecurity Programs, where he teaches courses on risk management, homeland security, and cybersecurity. He designed and taught a popular course called Disruptive Technologies and Organizational Management. He was an Adjunct Faculty Member at Johns Hopkins University, where he taught a graduate course on homeland security for two years. He has an MA in International Relations from the University of Chicago, a BA in Political Science from DePauw University, and a Certificate in International Law from The Hague Academy of International Law.
In the media, Chuck has been a featured speaker at dozens of conferences, events, podcasts, and webinars and has published more than 250 articles and blogs on cybersecurity, homeland security, and technology issues. Recently, Chuck briefed the G-20 Energy Conference on operating systems cybersecurity. He has also presented on the need for global cooperation in cybersecurity to the Holy See and the US Embassy to the Holy See in Rome. His writings have appeared on AT&T, IBM, Intel, Microsoft, General Dynamics, Xerox, Juniper Networks, NetScout, Human, Beyond Trust, Cylance, Ivanti, Checkpoint, and many other blogs. He has more than 104,000 followers on LinkedIn and runs a dozen LinkedIn groups, including the two largest in homeland security. He has his own newsletter, Security & Tech Trends, which has 48,000 subscribers. He also has a wide following on Twitter (more than 19,000 followers) and Facebook (5,000 friends).
Some of Chuck's other activities include serving as a Subject Matter Expert to the Homeland Defense and Security Information Analysis Center (HDIAC), a Department of Defense (DoD)-sponsored organization through the Defense Technical Information Center (DTIC); as a featured presenter at USTRANSCOM on cybersecurity threats to transportation; and as a featured presenter to the FBI and the National Academy of Sciences on life sciences cybersecurity. He also served on a working group with the National Academy of Sciences on digital transformation for the United States Air Force. He is an Advisory Board Member for the Quantum Security Alliance.
Follow Chuck on social media:
LinkedIn: https://www.linkedin.com/in/chuckbrooks/
Twitter: @ChuckDBrooks
Read more here:
Artificial Intelligence: Transforming Healthcare, Cybersecurity, and Communications - Forbes
Machine learning for chemistry: Basics and applications – Phys.org
In a review published in Engineering, scientists explore the burgeoning field of machine learning (ML) and its applications in chemistry. Titled "Machine Learning for Chemistry: Basics and Applications," this comprehensive review aims to bridge the gap between chemists and modern ML algorithms, providing insights into the potential of ML in revolutionizing chemical research.
Over the past decade, ML and artificial intelligence (AI) have made remarkable strides, bringing us closer to the realization of intelligent machines. The advent of deep learning methods and enhanced data storage capabilities has played a pivotal role in this progress. ML has already demonstrated success in domains such as image and speech recognition, and now it is gaining significant attention in the field of chemistry, which is characterized by complex data and diverse organic molecules.
However, chemists often face challenges in adopting ML applications due to a lack of familiarity with modern ML algorithms. Chemistry datasets typically exhibit a bias towards successful experiments, while a balanced perspective necessitates the inclusion of both successful and failed experiments. Furthermore, incomplete documentation of synthetic conditions in literature poses additional challenges.
Computational chemistry, where datasets can be reliably constructed from quantum mechanics calculations, has embraced ML applications more readily. Nonetheless, chemists need a basic understanding of ML to harness the potential of data recording and ML-guided experiments.
This review serves as an introductory guide to popular chemistry databases, two-dimensional (2D) and three-dimensional (3D) features used in ML models, and popular ML algorithms. It delves into three specific chemistry fields where ML has made significant progress: retrosynthesis in organic chemistry, ML-potential-based atomic simulation, and ML for heterogeneous catalysis.
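For readers unfamiliar with what such 2D features look like in practice, here is a minimal sketch using the open-source RDKit and scikit-learn libraries: molecules are encoded as Morgan fingerprints plus a couple of descriptors and fed to a simple regressor. The SMILES strings and property values are made up for illustration and are not drawn from the review's datasets.

```python
# A minimal sketch of 2D molecular featurization for an ML model, assuming
# RDKit and scikit-learn are installed. Molecules and target values are invented.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]   # ethanol, benzene, aspirin
fake_property = [0.2, 1.5, 1.2]                          # hypothetical target values

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    # 2D circular fingerprint plus two whole-molecule descriptors
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.concatenate([np.array(list(fp)),
                           [Descriptors.MolWt(mol), Descriptors.MolLogP(mol)]])

X = np.array([featurize(s) for s in smiles])
model = RandomForestRegressor(random_state=0).fit(X, fake_property)
print(model.predict([featurize("CCN")]))   # predict for a new molecule
```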
These applications have either accelerated research or provided innovative solutions to complex problems. The review concludes with a discussion of future challenges in the field.
The rapid advancement of computing facilities and the development of new ML algorithms indicate that even more exciting ML applications are on the horizon, promising to reshape the landscape of chemical research in the ML era. While the future is difficult to predict in such a fast-evolving field, it is undeniable that the development of ML models will lead to enhanced accessibility, generality, accuracy, intelligence, and ultimately, higher productivity.
The integration of ML models with the Internet offers a promising avenue for sharing ML predictions worldwide.
However, the transferability of ML models in chemistry poses a common challenge due to the diverse element types and complex materials involved. Predictions often remain limited to local datasets, resulting in decreased accuracy beyond the dataset.
To address this issue, new techniques such as the global neural network (G-NN) potential and improved ML models with more fitting parameters are being explored. While ML competitions in data science have produced exceptional algorithms, there is a need for more open ML contests in chemistry to nurture young talent.
Excitingly, end-to-end learning, which generates final output from raw input rather than designed descriptors, holds promise for more intelligent ML applications. AlphaFold2, for example, utilizes the one-dimensional (1D) structure of a protein to predict its 3D structure. Similarly, in the field of heterogeneous catalysis, an end-to-end AI model has successfully resolved reaction pathways. These advanced ML models can also contribute to the development of intelligent experimental robots for high-throughput experiments.
As the field of ML continues to evolve rapidly, it is crucial for chemists and researchers to stay informed about its applications in chemistry. This review serves as a valuable resource, providing a comprehensive overview of the basics of ML and its potential in various chemistry domains. With the integration of ML models and the collective efforts of the scientific community, the future of chemical research holds immense promise.
More information: Yun-Fei Shi et al, Machine Learning for Chemistry: Basics and Applications, Engineering (2023). DOI: 10.1016/j.eng.2023.04.013
See more here:
Machine learning for chemistry: Basics and applications - Phys.org
Harnessing deep learning for population genetic inference – Nature.com
See the original post here:
Harnessing deep learning for population genetic inference - Nature.com
How Apple is already using machine learning and AI in iOS – AppleInsider
Apple may not be as flashy as other companies in adopting artificial intelligence features. Still, the company already has a lot of smarts scattered throughout iOS.
Apple does not go out of its way to specifically name-drop "artificial intelligence" or AI meaningfully, but the company isn't avoiding the technology. Machine learning has become Apple's catch-all for its AI initiatives.
Apple uses artificial intelligence and machine learning in iOS in several noticeable ways. Here is a quick breakdown of where you'll find it.
It has been several years since Apple started using machine learning in iOS and other platforms. The first real use case was Apple's software keyboard on the iPhone.
Apple utilized predictive machine learning to understand which letter a user was hitting, which boosted accuracy. The algorithm also aimed to predict what word the user would type next.
Machine learning, or ML, is a system that can learn and adapt without explicit instructions. It is often used to identify patterns in data and provide specific results.
This technology has become a popular subfield of artificial intelligence. Apple has also been incorporating these features for several years.
In 2023, Apple is using machine learning in just about every nook and cranny of iOS. It is present in how users search for photos, interact with Siri, see suggestions for events, and much, much more.
On-device machine learning systems benefit the end user regarding data security and privacy. This allows Apple to keep important information on the device rather than relying on the cloud.
To help boost machine learning and all of the other key automated processes in iPhones, Apple made the Neural Engine. It launched with the iPhone's A11 Bionic processor to help with some camera functions, as well as Face ID.
Siri isn't technically artificial intelligence, but it does rely on AI systems to function. Siri taps into the on-device Deep Neural Network, or DNN, and machine learning to parse queries and offer responses.
Siri can handle various voice- and text-based queries, ranging from simple questions to controlling built-in apps. Users can ask Siri to play music, set a timer, check the weather, and much more.
Apple introduced the TrueDepth camera and Face ID with the launch of the iPhone X. The hardware system can project 30,000 infrared dots to create a depth map of the user's face. The dot projection is paired with a 2D infrared scan as well.
That information is stored on-device, and the iPhone uses machine learning and the DNN to parse every single scan of the user's face when they unlock their device.
This goes beyond iOS, as the stock Photos app is available on macOS and iPadOS as well. This app uses several machine learning algorithms to help with key built-in features, including photo and video curation.
Apple's Photos app using machine learning
Facial recognition in images is possible thanks to machine learning. The People album allows searching for identified people and curating images.
An on-device knowledge graph powered by machine learning can learn a person's frequently visited places, associated people, events, and more. It can use this gathered data to automatically create curated collections of photos and videos called "Memories."
Apple works to improve the camera experience for iPhone users regularly. Part of that goal is met with software and machine learning.
Apple's Deep Fusion optimizes for detail and low noise in photos
The Neural Engine boosts the camera's capabilities with features like Deep Fusion. It launched with the iPhone 11 and is present in newer iPhones.
Deep Fusion is a type of neural image processing. When taking a photo, the camera captures a total of nine shots. There are two sets of four shots taken just before the shutter button is pressed, followed by one longer exposure shot when the button is pressed.
The machine learning process, powered by the Neural Engine, will kick in and find the best possible shots. The result leans more towards sharpness and color accuracy.
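As a toy illustration only, the sketch below scores a burst of frames by sharpness and averages the best ones. This is not Apple's Deep Fusion pipeline, which is proprietary and runs on the Neural Engine; it merely hints at the general idea of keeping the most detailed frames. The frame file names are hypothetical.

```python
# A toy sketch of one ingredient of multi-frame processing: rank frames by a
# simple sharpness measure and blend the best. NOT Apple's actual algorithm.
import cv2
import numpy as np

# Assume nine captured frames saved as frame_0.jpg ... frame_8.jpg (hypothetical files).
frames = [cv2.imread(f"frame_{i}.jpg", cv2.IMREAD_GRAYSCALE) for i in range(9)]

def sharpness(img):
    # Variance of the Laplacian is a common, simple focus/detail measure.
    return cv2.Laplacian(img, cv2.CV_64F).var()

scores = [sharpness(f) for f in frames]
best = np.argsort(scores)[-4:]          # keep the four most detailed frames
fused = np.mean([frames[i] for i in best], axis=0).astype(np.uint8)  # naive average
cv2.imwrite("fused.jpg", fused)
```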
Portrait mode also utilizes machine learning. While high-end iPhone models rely on hardware elements to help separate the user from the background, the iPhone SE of 2020 relied solely on machine learning to get a proper portrait blur effect.
Machine learning algorithms help customers automate their general tasks as well. ML makes it possible to get smart suggestions regarding potential events the user might be interested in.
For instance, if someone sends an iMessage that includes a date, or even just the suggestion of doing something, then iOS can offer up an event to add to the Calendar app. All it takes is a few taps to add the event to the app to make it easy to remember.
There are more machine learning-based features coming to iOS 17:
One of Apple's first use cases with machine learning was the keyboard and autocorrect, and it's getting improved with iOS 17. Apple announced in 2023 that the stock keyboard will now utilize a "transformer language model," significantly boosting word prediction.
The transformer language model is a machine learning system that improves predictive accuracy as the user types. The software keyboard also learns frequently typed words, including swear words.
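Apple's on-device keyboard model is not public, but the general class of model can be sketched with the open-source Hugging Face transformers library and a small public model such as distilgpt2, assumed here purely for illustration.

```python
# A minimal sketch of transformer-based next-word prediction with a small
# open-source model. This is not Apple's keyboard model, only the same idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompt = "I'll meet you at the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]         # scores for the next token
top = torch.topk(logits, 5).indices
print([tokenizer.decode(t).strip() for t in top])  # a few likely next words
```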
Apple introduced a brand-new Journal app when it announced iOS 17 at WWDC 2023. This new app will allow users to reflect on past events and journal as much as they want in a proprietary app.
Apple's stock Journal app
Apple is using machine learning to help inspire users as they add entries. These suggestions can be pulled from various resources, including the Photos app, recent activity, recent workouts, people, places, and more.
This feature is expected to arrive with the launch of iOS 17.1.
Apple will improve dictation and language translation with machine learning as well.
Machine learning is also present in watchOS with features that help track sleep, hand washing, heart health, and more.
As mentioned above, Apple has been using machine learning for years, which means the company has technically been using artificial intelligence for years as well.
People who think Apple is lagging behind Google and Microsoft are only considering ChatGPT and other similar systems. The forefront of public perception regarding AI in 2023 is occupied by Microsoft's AI-powered Bing and Google's Bard.
Apple is going to continue to rely on machine learning for the foreseeable future. It will find new ways to implement the system and boost user features in the future.
It is also rumored that Apple is developing its own ChatGPT-like experience, which could boost Siri in a big way at some point in the future. In February 2023, Apple held a summit focusing entirely on artificial intelligence, a clear sign it's not moving away from the technology.
Apple can rely on systems it's introducing with iOS 17, like the transformer language model for autocorrect, expanding functionality beyond the keyboard. Siri is just one avenue where Apple's continued work with machine learning can have user-facing value.
Apple's work in artificial intelligence is likely leading to the Apple Car. Whether or not the company actually releases a vehicle, the autonomous system designed for automobiles will need a brain.
See the original post here:
How Apple is already using machine learning and AI in iOS - AppleInsider
Some Experiences Integrating Machine Learning with Vision and … – Quality Magazine
See the rest here:
Some Experiences Integrating Machine Learning with Vision and ... - Quality Magazine
Here’s Why GPUs Are Deep Learning’s Best Friend – Hackaday
If you have a curiosity about how fancy graphics cards actually work, and why they are so well-suited to AI-type applications, then take a few minutes to read [Tim Dettmers] explain why this is so. It's not a terribly long read, but while it does get technical there are also car analogies, so there's something for everyone!
He starts off by saying that most people know that GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them most useful is their ability to work with large amounts of memory very efficiently.
Essentially, a CPU is a latency-optimized device while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can't carry much. A truck is slower, but far better at moving a lot at once.
To extend the analogy, a GPU isn't actually just a truck; it is more like a fleet of trucks working in parallel. When applied correctly, this can effectively hide latency in much the same way as an assembly line. It takes a while for the first truck to arrive, but once it does, there's an unbroken line of loaded trucks waiting to be unloaded. No matter how quickly and efficiently one unloads each truck, the next one is right there, waiting. Of course, GPUs don't just shuttle memory around, they can do work on it as well.
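The bandwidth-versus-latency point is easy to see for yourself. The sketch below, assuming PyTorch and a CUDA-capable card, times the same large matrix multiplication on the CPU and the GPU; exact numbers depend entirely on your hardware.

```python
# A minimal sketch of the CPU-vs-GPU contrast, assuming PyTorch with CUDA available.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
a @ b                                     # matrix multiply on the CPU
cpu_s = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()     # "load the trucks": move data to GPU memory
    torch.cuda.synchronize()
    t0 = time.time()
    a_gpu @ b_gpu
    torch.cuda.synchronize()              # wait for the asynchronous kernel to finish
    gpu_s = time.time() - t0
    print(f"CPU {cpu_s:.3f}s vs GPU {gpu_s:.3f}s")
```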
The usual configuration for deep learning applications is a desktop computer with one or more high-end graphics cards wedged into it, but there are other (and smaller) ways to enjoy some of the same computational advantages without eating a ton of power and gaining a bunch of unused extra HDMI and DisplayPort jacks as a side effect. NVIDIA's line of Jetson development boards incorporates the right technology in an integrated way. While it might lack the raw horsepower (and power bill) of a desktop machine laden with GPUs, they're no slouch for their size.
Read more:
Here's Why GPUs Are Deep Learning's Best Friend - Hackaday
UW–Madison part of effort to advance fusion energy with machine … – University of Wisconsin-Madison
Steffi Diem (middle) participating in a panel at the White House Summit on Developing a Bold Decadal Vision for Commercial Fusion Energy. Diem has joined a collaboration across multiple institutions that will use machine learning to better understand magnetic fusion energy.
Researchers at the University of Wisconsin–Madison are taking part in a new collaboration built on open-science principles that will use machine learning to advance our knowledge of promising sources of magnetic fusion energy.
The U.S. Department of Energy has selected the collaboration, led by researchers at the Massachusetts Institute of Technology, to receive nearly $5 million over three years. The team, which includes researchers at UW–Madison, William & Mary, Auburn University and the HDF Group (a non-profit data management technology organization), is tasked with creating a platform to publicly share data they glean from several unique fusion devices and optimize that data for analysis using artificial intelligence tools. Student researchers from each institution will also have an opportunity to participate in a subsidized summer program that will focus on applying data science and machine learning to fusion energy.
The data sources will include UW–Madison's Pegasus-III experiment, which is centered around a fusion device known as a spherical tokamak. Pegasus-III is a new Department of Energy-funded experiment that began operations in summer 2023 and represents the latest generation in a long-running set of tokamak experiments at UW–Madison. A primary goal of the experiment is to study innovative ways to start up future fusion power plants.
"I'm incredibly excited to be a part of projects like this one as we continue to push innovation both in the analysis and development of experimental devices and diverse workforce development initiatives," says Steffi Diem, a professor of nuclear engineering and engineering physics, who leads the Pegasus-III experiment.
Diem is an emerging leader in the fusion research world. In 2022, she was invited to present at the White House's Bold Decadal Vision for Commercial Fusion Energy that launched several efforts focused on commercializing fusion energy. In a field traditionally dominated by men, Diem is also one of four women leading the new collaboration.
UW–Madison researchers are using the new Pegasus-III experiment to study innovative techniques for starting a plasma. Photo: Joel Hallberg
"Throughout much of my career, I have often been one of the few women in the room, so it is great to be a part of a collaboration where four out of the five principal investigators are women," Diem says.
The collaboration is based around the principles of open science: Diem and her colleagues will make the wealth of data coming from Pegasus-III and other fusion experiments more accessible and usable to others, particularly for machine learning platforms.
While this approach is designed to accelerate knowledge of magnetic fusion devices, it's also aimed at providing a more accessible path into fusion research programs for students with wider skillsets and backgrounds, particularly in data sciences. Building a more diverse fusion workforce will be paramount going forward, says Diem.
"Fusion isn't just plasma physicists anymore," she says. "As fusion moves out of the lab and toward the goal of providing clean energy to communities, it requires an interdisciplinary approach with engineers, data scientists, skilled technical staff, community members and more."
UW–Madison is supporting a broader push to diversify the fusion field. Some of the student researchers who will be participating in the new collaboration are part of the student-led Solis group, which provides gender-inclusive support for students studying plasma physics on campus.
The new collaboration fits well with Diem's other research, funded through the Wisconsin Alumni Research Foundation, focused on reimagining fusion energy system design. That work centers energy equity and environmental justice early in the design phase to support a just and equitable energy transition.
"While there are still many challenges that lie ahead for fusion, the potential benefits are huge as we drive towards a cleaner, more sustainable, equitable and just future," says Diem.
Continued here:
UWMadison part of effort to advance fusion energy with machine ... - University of Wisconsin-Madison
Machine learning tool simplifies one of the most widely used reactions in the pharmaceutical industry – Phys.org
In the past two decades, the carbon-nitrogen bond forming reaction, known as the Buchwald-Hartwig reaction, has become one of the most widely used tools in organic synthesis, particularly in the pharmaceutical industry given the prevalence of nitrogen in natural products and pharmaceuticals.
This powerful reaction has revolutionized the way nitrogen-containing compounds are made in academic and industrial laboratories, but it requires lengthy, time-consuming experimentation to determine the best conditions for a highly effective reaction.
Now, Illinois researchers, in collaboration with chemists at Hoffmann-La Roche, a pharmaceutical company in Switzerland, have developed a machine learning tool that predicts in a matter of minutes the best conditions for a high-yielding reaction, with no lengthy experimentation.
In a recently published article in Science, Illinois chemistry professor Scott Denmark and Ian Rinehart, a recent Ph.D. graduate in the Denmark lab, describe how they developed, trained, and tested their machine learning model to drastically accelerate the identification of substrate-adaptive conditions for this palladium-catalyzed carbon-nitrogen bond forming reaction.
Denmark said this reaction is a very general transformation so there is much structural diversity among reactant pairings and a lot of "levers to pull" to make it work.
"And that's what we have figured out," Denmark said.
User guides and cheat sheets have evolved in the nearly 30 years since this reaction was discovered, and they can provide some direction, Rinehart explained, but experimentation is often necessary. Basically, a trial-and-error process in a lab.
"It's a problem that everyone in the pharmaceutical industry recognized was ripe for intervention by informatics methods," Denmark said. "Lots of people have tried to use the US Patent and Trademark Office or Chemical Abstracts or other huge databases to try to model to make predictive tools for this one very important reaction. But they haven't been able to do very well because the information in the literature is just not very reliable."
The design and construction of their machine learning tool required the generation of an experimental dataset that explores a diverse network of reactant pairings across a set of reaction conditions. A large scope of C–N couplings was actively learned by neural network models using a systematic process to design experiments.
The challenge for a project like this, Denmark said, was the amount of potential data to collect and the thousands and thousands of experiments required to build a database of information for modeling.
"One of Ian's biggest contributions was figuring out the workflow to decide what experiments to do to get a valid predictive model with about 3,500 experiments and still be able to make predictions without an enormous database," Denmark said.
They also experimentally validated the predictions from the machine learning tool.
"We tested them and found with pretty good statistics that the conditions were producing compounds when we expected," Denmark said.
The researchers report that their models showed good performance in experimental validation: Ten products were isolated in more than 85% yield from a range of couplings with out-of-sample reactants designed to challenge the models.
Rinehart said they taught machine learning models to have a kind of chemical intuition like what an expert has.
"So, we have now run or talked about so many of these couplings that we have a good intuition about what's going to happen, but someone who hadn't run hundreds or thousands of these might not have a good first guess. We have taught a model at a much more granular level [than user guides] to have an intuition. It's not perfect. But that's kind of the point. It doesn't have to be. It just has to get you to the answer faster," Rinehart said.
And the coolest part, Rinehart explained, is that intuition gets honed over time as more people use the machine learning tool. The developed workflow continually improves the prediction capability of the tool as the corpus of data grows.
"It's an exciting time as data science merges with chemistry," Denmark said. "And this is the perfect marriage. A lot of people recognized this, but no one has done it, at least not in a meaningful way that is experimentally validated."
The Denmark group is creating a cloud-based version of the workflow to enable scientists around the world to use this tool which will continuously add data to improve the model as more structurally diverse substrates are tested and different catalysts and conditions are added to the database.
Rinehart said the code is public and on an open-source license, so anyone can download and use it. Also, he is currently working on a more user-friendly interface that will allow someone to draw the two molecules they want to react, copy and paste them into the program, and get predictions in minutes instead of hours, depending on the complexity of the molecules.
"I think it's really exciting to do something like that," Rinehart said. "We don't often publish a paper and put out a tool in the public domain that people can use in the field. People in academic labs like ours could use this tool and get an answer faster in their own research."
More information: N. Ian Rinehart et al, A machine-learning tool to predict substrate-adaptive conditions for Pd-catalyzed C–N couplings, Science (2023). DOI: 10.1126/science.adg2114
Journal information: Science
Go here to see the original:
Machine learning tool simplifies one of the most widely used reactions in the pharmaceutical industry - Phys.org
Revolutionizing Drug Development Through Artificial Intelligence … – Pharmacy Times
The field of drug development stands at a pivotal crossroads, where the convergence of technological advancements and medical innovation is transforming traditional paradigms. At the forefront of this transformation lies artificial intelligence (AI) and machine learning (ML), powerful tools that are revolutionizing the drug discovery and development processes. The seamless integration of AI/ML has the potential to accelerate research and enhance efficiency in a new era of personalized medicine.
Image credit: Tierney | stock.adobe.com
The FDA acknowledges the growing adoption of AI/ML across various stages of the drug development process and across diverse therapeutic domains. There has been a noticeable surge in the inclusion of AI/ML components in drug and biologic application submissions in recent years.
Moreover, these submissions encompass a broad spectrum of drug development activities, spanning from initial drug discovery and clinical investigations to post-market safety monitoring and advanced pharmaceutical manufacturing.1 In a recent reflection paper, the European Medicine Agency acknowledges the rapid evolution of AI and the need for a regulatory process to support the safe and effective development, regulation, and use of human and veterinary medicines.2
AI and ML tools possess the capability to proficiently aid in data acquisition, transformation, analysis, and interpretation throughout the lifecycle of medicinal products. Their utility spans various aspects, including substituting, minimizing, and improving the use of animal models in preclinical development through AI/ML modeling approaches. During clinical trials, AI/ML systems can assist in identifying patients based on specific disease traits or clinical factors, while also supporting data collection and analysis that will subsequently be provided to regulatory bodies as part of marketing authorization procedures.
AI/ML technologies offer unprecedented capabilities in deciphering complex biological data, predicting molecular interactions, and identifying potential drug candidates. These technologies empower researchers to analyze vast datasets with greater speed and precision than ever before. For example, AI algorithms can sift through enormous databases of chemical compounds to identify molecules with the desired properties, significantly expediting the early stages of drug discovery.
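As a small, hypothetical example of that kind of automated triage, the sketch below filters a compound library with the open-source RDKit toolkit using simple rule-of-five style cut-offs. Real discovery pipelines combine trained models with far richer criteria; the SMILES strings here are arbitrary.

```python
# A minimal sketch of screening a compound library for drug-like properties.
# Purely illustrative thresholds; not a production screening pipeline.
from rdkit import Chem
from rdkit.Chem import Descriptors

library = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "CCCCCCCCCCCCCCCCCC(=O)O"]

def drug_like(smi):
    mol = Chem.MolFromSmiles(smi)
    return (Descriptors.MolWt(mol) < 500
            and Descriptors.MolLogP(mol) < 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

hits = [s for s in library if drug_like(s)]
print(hits)   # candidates that pass the filter move on to predictive models
```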
One of the critical challenges in drug development is the identification and validation of suitable drug targets. AI/ML algorithms can analyze genetic, genomic, and proteomic data to pinpoint potential disease targets. By recognizing patterns and relationships in biological information, AI can predict the likelihood of a target's efficacy, enabling researchers to make informed decisions before embarking on laborious and costly experimental processes.
The process of screening potential drug candidates involves evaluating their impact on biological systems. AI/ML models can predict the behavior of compounds within complex cellular environments, streamlining the selection of compounds for further testing. This predictive approach saves time and resources, as only the most promising candidates advance to the next stages of development.
AI/ML-driven computational simulations are transforming drug design by predicting the interaction between molecules and target proteins. These simulations aid in designing drugs with enhanced specificity, potency, and minimal adverse effects. Consequently, AI-guided rational drug design expedites the optimization of lead compounds, fostering precision medicine initiatives.
The utilization of AI/ML in clinical trials has immense potential to improve patient recruitment, predict patient responses, and optimize trial designs. These technologies can analyze patient data to identify potential participants, forecast patient outcomes, and tailor treatment regimens for individual subjects. This leads to more efficient trials, reduced costs, and improved success rates.
Although the integration of AI/ML technologies into drug development has the potential to revolutionize the field, it also comes with several inherent risks and challenges that must be carefully considered.
AI and ML are reshaping the drug development landscape, from target identification to clinical trial optimization. Their ability to analyze complex biological data, predict molecular interactions, and expedite decision-making has the potential to accelerate drug discovery, reduce costs, and improve patient outcomes.
As AI/ML continues to evolve, it will undoubtedly play an increasingly pivotal role in driving innovation and transforming the pharmaceutical industry, leading us toward a more efficient and personalized approach to drug development and health care. Although AI and ML hold immense promise in revolutionizing drug development, their adoption is not without risks.
Careful consideration of these challenges, along with robust validation, regulation, and transparent reporting, are essential to harness the benefits of AI/ML while mitigating potential pitfalls in advancing pharmaceutical innovation.
Read more:
Revolutionizing Drug Development Through Artificial Intelligence ... - Pharmacy Times
Open source in machine learning: experts weigh in on the future – CryptoTvplus
In a recent event hosted by the University of California, Berkeley, focused on "Open Source vs. Closed Source: Will Open Source Win? Challenges & Future in Open Source LLM," experts in the field of machine learning shared their insights into the role of open source technology in shaping the future of this dynamic industry.
In the world of AI development, the acronym LLM stands for Large Language Model. These sophisticated AI models are designed to understand and generate human language.
Through rigorous training using vast amounts of data, they acquire the remarkable ability to tackle diverse tasks like natural language understanding, text generation, and translation, among others. An exemplary illustration of such language models is GPT-3 (Generative Pre-trained Transformer 3).
Recently, the use of AI has become a huge topic for discussion around the world. An integral part of the conversation is whether open source will be the future of machine learning as it relates to AI or a closed system.
Ion Stoica, Professor of Computer Science at the University of California, Berkeley, outlined three key reasons why open-source technology will play a pivotal role in the future of machine learning.
Firstly, he said that the current limitation in the availability of high-quality data for training machine learning models is a challenge for further development.
However, with the use of open-source systems, more quality data becomes accessible, and the cost of training models will decrease, making larger models more effective.
Secondly, Ion pointed out that machine-learning technology is becoming increasingly strategic for many countries.
Unlike search technology which still requires human intervention, machine learning models can make autonomous decisions, rendering them highly valuable for specific applications.
He also added that there is a need for fine-tuning machine learning models for particular tasks rather than creating general-purpose models.
He believes that experts should focus on developing models that excel in specific use cases. And an open-source model will make this even easier to implement.
Another speaker at the event, Nazneen Rajani, Research Lead at Hugging Face, said that open-source technology is essential for crafting smaller, more specialized language models tailored for specific use cases.
She revealed that most companies and consumers do not require large, general intelligence models; instead, they need models that excel at specific tasks.
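A minimal sketch of what "a small model that excels at one task" can look like in practice is shown below, assuming the open-source Hugging Face transformers and datasets libraries. The IMDB dataset and the hyperparameters are placeholders standing in for a domain-specific corpus, not a recommended recipe.

```python
# A minimal sketch of fine-tuning a small open-source model for one narrow task.
# Dataset and hyperparameters are stand-ins, chosen only to keep the example short.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")                      # stand-in for a domain-specific corpus
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="specialized-model", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=0).select(range(2000)),  # small slice for the sketch
)
trainer.train()   # the result is a compact model that does one thing well
```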
The researcher also expressed excitement about Meta's entry into the open-source arena, anticipating increased funding and resources for open-source projects, paving the way for further innovation and development.
In support of the first two speakers, Tatsunori Hashimoto, an Assistant Professor at Stanford University, proposed that language models could become a public good and serve as a foundational layer for intelligent agents.
He cited initiatives like the UK's Brit-GPT, government-run language models available to everyone, as examples. Once these models are open and accessible, they can form the basis of an open-source innovation ecosystem.
Tatsunori also noted that the future of open source depends on who provides the base layer and how much innovation is generated atop it.
More here:
Open source in machine learning: experts weigh in on the future - CryptoTvplus