Category Archives: Machine Learning

Mapping Methane Emissions in California Using Precision Instruments and Machine-Learning – SciTechDaily


Using precision instruments and new mapping and machine-learning tools, a research team has been pinpointing sources of the greenhouse gas.

In October 2016, an aircraft equipped with NASA's Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) instrument detected multiple plumes of methane arising from the Sunshine Canyon landfill near Santa Clarita, California. The plumes were large enough that researchers from the Jet Propulsion Laboratory (JPL) notified facility operators and local enforcement agencies. It was an important step in a process of better accounting for local emissions of the gas.

Methane is a short-lived but powerful greenhouse gas that has been responsible for about 20 percent of global warming since the Industrial Revolution. Dairy cows and beef cattle produce methane through their guts and release it in burps. Their manure also produces methane, and when it is stored in manure lagoons it can be a major source of emissions. Oil and natural gas production releases methane from underground, and the infrastructure to store and transport it can leak. And landfills are a source of methane when organic materials are broken down by bacteria in anaerobic conditions.

The state of California aims to reduce such methane emissions to 40 percent below 2013 levels by the end of this decade. But in order to reduce emissions, the state needs to get a better handle on the sources.

The California Air Resources Board (CARB), the state agency that oversees air pollution control efforts, traditionally estimated greenhouse gas emissions by taking inventory of known emitting activities. But this approach can miss leaks or other fugitive emissions, so CARB staff became interested in measuring emissions from the air to improve greenhouse gas accounting and to pinpoint mitigation opportunities.

The images above show methane measurements made by the AVIRIS-NG instrument during October 2016 and 2017 flights over Santa Clarita, California. Methane emissions from the Sunshine Canyon landfill are shown in a yellow to red gradient, with red representing the highest concentrations. The right image shows the reduction in methane concentrations after landfill improvements were implemented.

The flights were part of the California Methane Survey, an ongoing project to map sources of methane emissions around the state. But before any flights took off, climate scientist Francesca Hopkins of the University of California, Riverside, and Riley Duren of JPL (now at the University of Arizona) set out to map all potential sources of methane around the state in order to better focus limited flight time and prioritize observations.

They decided to use a GIS-based approach, assimilating many publicly available geospatial datasets to develop a map that could help them quickly match methane plumes to likely sources. The research team organized potential methane-emitting infrastructure in California into three sectors: energy, agriculture, and waste. The dataset, called Sources of Methane Emissions (Vista-CA), includes more than 900,000 entries and is available at NASA's Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC).

From August 2016 to November 2017, a JPL-based team flew aircraft equipped with the AVIRIS-NG instrument over 22,000 square miles of the state. "Currently there is no methane observing system that can efficiently survey the entire land surface at high resolution," said Duren. "We had to focus on high-priority areas." The flight paths were planned so that they would cover at least 60 percent of methane point-source infrastructure in California.

To speed up the data analysis, Duren and colleagues then used machine learning techniques (such as neural networks) to automatically identify plumes detected during the flights. In parallel, graduate student Talha Rafiq from UC Riverside developed an algorithm to attribute methane plume observations to the most likely Vista-CA source. The technologies allowed the team to share their findings within weeks with facility operators and regulators in California to alert them of fugitive methane emissions and to help accelerate remediation.
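
As a rough illustration of the attribution idea, not the published algorithm, the sketch below matches a detected plume to the nearest facility in a Vista-CA-style table. The field names, the distance-only matching rule, the 1 km cutoff, and the example coordinates are all assumptions for illustration.

```python
# Hypothetical sketch: attribute a methane plume to the nearest known facility.
# The matching rule and all data below are illustrative assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def attribute_plume(plume, facilities, max_km=1.0):
    """Return the closest facility within max_km of the plume origin, or None."""
    best, best_d = None, float("inf")
    for fac in facilities:
        d = haversine_km(plume["lat"], plume["lon"], fac["lat"], fac["lon"])
        if d < best_d:
            best, best_d = fac, d
    return (best, best_d) if best_d <= max_km else (None, best_d)

facilities = [
    # Approximate, illustrative coordinates only
    {"name": "Sunshine Canyon landfill", "sector": "waste", "lat": 34.327, "lon": -118.504},
    {"name": "Example dairy", "sector": "agriculture", "lat": 36.10, "lon": -119.55},
]
plume = {"lat": 34.330, "lon": -118.500}
print(attribute_plume(plume, facilities))
```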

More than 272,000 individual facilities and equipment components were surveyed. Emissions from less than 0.2 percent of that infrastructure were responsible for at least one third of California's methane inventory. Landfills and composting facilities were responsible for 41 percent of the emissions measured. Duren, Hopkins, and others published their findings in Nature in 2019.

In the case of Sunshine Canyon, the landfill operator confirmed the methane emissions and determined that they were due to problems with surface cover and with gas capture systems. Over the next year the operator instituted a number of changes that dramatically reduced emissions. Subsequent flyovers with AVIRIS-NG confirmed a reduction in methane. These findings were documented by Duren, Daniel Cusworth (project scientist at the University of Arizona), and others in Environmental Research Letters in 2020.

Data from the survey can be viewed on the Methane Source Finder portal. Some of the funding for the research came from NASA's Advancing Collaborative Connections for Earth System Science program and from the Prototype Methane Monitoring System for California in NASA's Carbon Monitoring System.

NASA Earth Observatory image by Lauren Dauphin, using data from Cusworth, Daniel, et al. (2020), Landsat data from the U.S. Geological Survey, and topographic data from the National Elevation Dataset (NED). Story by Emily Cassidy, NASA Earthdata.


Microsoft partnership to explore how cloud, AI and machine learning can be used in space – IT Brief Australia

Microsoft has signed a Memorandum of Understanding with the University of Adelaide's Australian Institute for Machine Learning, to jointly explore how advanced cloud computing, AI, computer vision and machine learning can be applied in space, beyond Earth's surface.

Project AI Off Earth will focus on the cutting edge of innovation in space. It will conduct modelling, emulation and simulation of complex space operations and systems; build algorithms for on-board satellite data processing; develop solutions for the remote operation and optimisation of satellites, constellations and swarms; and address space domain awareness and debris monitoring.

The University of Adelaide's Australian Institute for Machine Learning is ranked among world leaders in the application of AI, computer vision and machine learning to real world problems. Microsoft has deep experience in advanced cloud computing and cognitive systems and is building Azure Space, a set of cloud offerings which allow organisations to leverage geospatial data, access anywhere bandwidth, digitally engineer space systems, and engage in remote edge computing including in space.

The University of Adelaide's Professor Tat-Jun Chin is the SmartSat CRC Professorial Chair of Sentient Satellites at the Australian Institute for Machine Learning.

"The relationship with Microsoft will give us access to cloud-based platforms that will allow us to focus on the investigation on the performance of algorithms used to analyse large amounts of earth-observation data from satellites, without needing to be concerned about gaining access to space at the onset," he says.

"Our work on these algorithms has the potential to contribute to many applications, including agricultural land management, water management, mining practices and understanding of economic activity among many other applications."

Chin says AIML's vision is to be a global leader in machine learning research and high-impact research translation.

"To penetrate the global market we need to collaborate with international partners and this relationship with Microsoft presents the opportunity to do that," he says.

Nicholas Moretti, Azure Space Engineer, Microsoft Australia adds, "I first got exposed to the space industry while I was studying for my undergraduate degree at the University of Adelaide and crossed paths with Professor Chin.

"We are delighted to be working with AIML and believe this will help identify important opportunities to use these technologies and capabilities to support agriculture and ecology, economics and financial systems as well as the burgeoning space sector itself," he says.

Although focused on in-space technologies, Project AI Off Earth will explore how space-related technologies, data, and cognitive systems can be used to support automation across multiple industries, help establish smart cities, and address sustainability and important environmental challenges.

AIML and Microsoft are already collaborating using Microsoft Azure Orbital Emulator, a cloud-native space emulation environment that enables massive satellite constellation simulations. Using Azure Orbital Emulator, AIML and Project AI Off Earth can quickly develop, evaluate, and train algorithms, machine learning models, and AI intended for space without needing to launch a single satellite.

"The University of Adelaide undertakes world-leading research in the space sector, as well as many other fields, which aims to find solutions to the challenges facing society," says the University of Adelaide's Deputy Vice-Chancellor (Research) Professor Anton Middelberg.

"This exciting new relationship between the Australian Institute for Machine Learning and Microsoft will help AIMLs expertise to have an impact on a truly global scale."

The collaboration comes at a time of soaring interest in the space-related economy; the Australian Space Agency's goal is to triple the space sector's contribution to GDP to $12 billion and create an additional 20,000 jobs by 2030.

South Australian Minister for Trade and Investment, Stephen Patterson adds, "Adelaide has established itself as the very heart of Australia's space industry.

"This agreement between AIML and Microsoft, which is building a space team, is a signal of whats to come. Australia has the opportunity to be a leading player in the global space industry and this sort of international collaboration centred on Adelaide but with a truly global focus will strengthen the local industry, help build skills in this important area and reinforce Adelaides reputation as the epicentre of space activity in this part of the world," he says.

AIML and Microsoft Azure Space also intend to use Project AI Off Earth to advocate for STEM careers, to advise on structuring of STEM traineeships and scholarships, and to encourage greater participation by women, underprivileged groups, and underrepresented groups.


Facebook developing machine learning chip – The Information – Reuters

A 3D-printed Facebook logo is seen placed on a keyboard in this illustration taken March 25, 2020. REUTERS/Dado Ruvic/Illustration

Sept 9 (Reuters) - Facebook Inc (FB.O) is developing a machine learning chip to handle tasks such as content recommendation to users, The Information reported on Thursday, citing two people familiar with the project.

The company has developed another chip for video transcoding to improve the experience of watching recorded and live-streamed videos on its apps, according to the report.

Facebook's move comes as major technology firms, including Apple Inc (AAPL.O), Amazon.com Inc (AMZN.O) and Alphabet Inc's (GOOGL.O) Google, are increasingly ditching traditional silicon providers to design their own chips to cut costs and boost performance. (https://reut.rs/3E0NlVN)

In a 2019 blog post, Facebook said it was building custom chip designs specially meant to handle AI inference and video transcoding to improve the performance, power, and efficiency of its infrastructure, which at that time served 2.7 billion people across all its platforms.

The company had also said it would work with semiconductor players such as Qualcomm Inc (QCOM.O), Intel Corp (INTC.O) and Marvell Technology (MRVL.O) to build these custom chips as general-purpose processors alone would not be enough to manage the volume of workload Facebook's systems handled.

However, The Information's report suggests that Facebook is designing these chips completely in-house and without the help of these firms.

"Facebook is always exploring ways to drive greater levels of compute performance and power efficiency with our silicon partners and through our own internal efforts," a company spokesperson said.

Reporting by Chavi Mehta in Bengaluru; Editing by Anil D'Silva



Prediction of arrhythmia susceptibility through mathematical modeling and machine learning – pnas.org

Significance

Despite our understanding of the many factors that promote ventricular arrhythmias, it remains difficult to predict which specific individuals within a population will be especially susceptible to these events. We present a computational framework that combines supervised machine learning algorithms with population-based cellular mathematical modeling. Using this approach, we identify electrophysiological signatures that classify how myocytes respond to three arrhythmic triggers. Our predictors significantly outperform the standard myocyte-level metrics, and we show that the approach provides insight into the complex mechanisms that differentiate susceptible from resistant cells. Overall, our pipeline improves on current methods and suggests a proof of concept at the cellular level that can be translated to the clinical level.

At present, the QT interval on the electrocardiographic (ECG) waveform is the most common metric for assessing an individual's susceptibility to ventricular arrhythmias, with a long QT, or, at the cellular level, a long action potential duration (APD) considered high risk. However, the limitations of this simple approach have long been recognized. Here, we sought to improve prediction of arrhythmia susceptibility by combining mechanistic mathematical modeling with machine learning (ML). Simulations with a model of the ventricular myocyte were performed to develop a large heterogenous population of cardiomyocytes (n = 10,586), and we tested each variant's ability to withstand three arrhythmogenic triggers: 1) block of the rapid delayed rectifier potassium current (IKr Block), 2) augmentation of the L-type calcium current (ICaL Increase), and 3) injection of inward current (Current Injection). Eight ML algorithms were trained to predict, based on simulated AP features in preperturbed cells, whether each cell would develop arrhythmic dynamics in response to each trigger. We found that APD can accurately predict how cells respond to the simple Current Injection trigger but cannot effectively predict the response to IKr Block or ICaL Increase. ML predictive performance could be improved by incorporating additional AP features and simulations of additional experimental protocols. Importantly, we discovered that the most relevant features and experimental protocols were trigger specific, which shed light on the mechanisms that promoted arrhythmia formation in response to the triggers. Overall, our quantitative approach provides a means to understand and predict differences between individuals in arrhythmia susceptibility.
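
The study's own pipeline is not reproduced here, but the general workflow it describes, training a supervised classifier on pre-perturbation action potential features to predict whether a simulated cell becomes arrhythmic under a given trigger, can be sketched as follows. The feature names, labels, and data below are synthetic placeholders, and only one of the eight algorithm families is shown.

```python
# Minimal sketch (not the authors' pipeline): predict arrhythmic vs. resistant
# cells from pre-perturbation AP features using one example classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cells = 1000
# Placeholder AP features, e.g. APD90, resting potential, peak voltage, Ca amplitude
X = rng.normal(size=(n_cells, 4))
# Placeholder labels: 1 = arrhythmic response to the trigger, 0 = resistant
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_cells) > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", scores.mean().round(3))
```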

Author contributions: M.V. and E.A.S. designed research; M.V., X.M., and E.A.S. performed research; M.V., X.M., and E.A.S. analyzed data; and M.V. and E.A.S. wrote the paper.

The authors declare no competing interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2104019118/-/DCSupplemental.


Government and Industry May Miss Health Care’s Machine Learning Moment | Opinion – Newsweek

The public has lost confidence in our public health agencies and officials, and also our political leaders, thanks to confusing and contradictory pronouncements and policies related to the coronavirus pandemic.

But don't think for a moment that leadership failures in health care are confined to COVID-19. Two other recent stories highlight how both government and industry are missing an opportunity to use new machine learning technology to deliver better health care at a lower cost.

In the government's case, Lina Khan's activist Federal Trade Commission (FTC) is opposing a merger between Illumina and GRAIL, two medical technology companies. The latter company, which spun off from Illumina in 2015, has developed a test capable of providing early detection of 50 different types of cancer. Bringing GRAIL back under the Illumina framework would get this life-saving technology to market faster and more efficiently, but the FTC couldn't let a little thing like saving lives get in the way of its anti-business agenda. To its credit, Illumina closed the deal in August without waiting for approval from the FTC or European regulators.

Unfortunately, the government isn't the only entity that too often makes decisions without fully understanding how disruptive innovation really works. The frustrating case of Epic Systems' Early Detection of Sepsis model shows the industry itself can fall prey to a Luddite mindset.

Epic's model is designed to detect and prevent sepsis, a leading cause of death that also accounts for 5 percent of U.S. hospitalization costs. Many of these deaths and the associated costs can be prevented with early diagnosis. Epic is so committed to helping hospitals reduce sepsis-related deaths that it developed, and gives away for free, an AI-powered early-warning model that helps alert doctors and nurses when a patient might need a second look.

The sepsis algorithm has produced encouraging results with customers, but is facing criticism in the health care industry press. In one peer-reviewed study, Prisma reported a 22 percent decrease in mortality, which could translate to millions of lives saved if it were implemented globally. More recently, in a controlled clinical trial, MetroHealth found a meaningful reduction in mortality and length of stay, and reduced the time to antibiotic treatment of septic patients in the ED by almost an hour.

Critics of Epic's algorithm have claimed that it is not yet good enough at detecting sepsis cases. But this accusation reveals a misunderstanding of the technology involved. Both the GRAIL and Epic models utilize machine learning, a form of artificial intelligence that compares information from a test or a patient's medical record against vast amounts of data about previous cases with known outcomes. By its nature, machine learning gets better as it acquires new information. Failure to account for this fact has led many in government and media to dismiss promising innovations.

Machine learning cannot replace the expertise of a human doctor or nurse, but it offers those human health care providers a powerful new tool to see patterns and evidence they could never notice on their own. These cutting-edge technologies have the potential to save millions of lives and drastically reduce the cost of health care, if we let them.

Getting these complex algorithms right takes time, but we cannot allow the perfect to be the enemy of the already excellent. In the GRAIL case, the government needs to do what it always needs to do, and just get out of the way. In Epic's case, the industry needs to develop a deeper appreciation for how disruptive innovation works and work closely with bold creators to bring revolutionary technologies to life.

Steve Forbes is Chairman and Editor-in-Chief of Forbes Media.

The views expressed in this article are the writer's own.


NCAR will collaborate on new initiative to integrate AI with climate modeling | NCAR & UCAR News – UCAR

Sep 10, 2021 - by Laura Snider

The National Center for Atmospheric Research (NCAR) is a collaborator on a new $25 million initiative that will use artificial intelligence to improve traditional Earth system models with the goal of advancing climate research to better inform decision makers with more actionable information.

The Center for Learning the Earth with Artificial Intelligence and Physics (LEAP) is one of six new Science and Technology Centers announced by the National Science Foundation to work on transformative science that will broadly benefit society. LEAP will be led by Columbia University in collaboration with several other universities as well as NCAR and NASA's Goddard Institute for Space Studies.

The goals of LEAP support NCAR's Strategic Plan, which emphasizes the importance of actionable Earth system science.

"LEAP is a tremendous opportunity for a multidisciplinary team to explore the potential of using machine learning to improve our complex Earth system models, all for the long-term benefit of society," said NCAR scientist David Lawrence, who is the NCAR lead on the project. "NCAR's models have always been developed in collaboration with the community, and we're excited to work with skilled data scientists to develop new and innovative ways to further advance our models."

LEAP will focus its efforts on the NCAR-based Community Earth System Model. CESM is an incredibly sophisticated collection of component models that, when connected, can simulate atmosphere, ocean, land, sea ice, and ice sheet processes that interact with and influence each other, which is critical to accurately project how the climate will change in the future. The result is a model that produces a comprehensive and high-quality representation of the Earth system.

Despite this, CESM is still limited in its ability to represent certain complex physical processes in the Earth system that are difficult to simulate. Some of these processes, like the formation and evolution of clouds, happen at such a fine scale that the model cannot resolve them. (Global Earth system models are typically run at relatively low spatial resolution because they need to simulate decades or centuries of time and computing resources are limited.) Other processes, including land ecology, are so complicated that scientists struggle to identify equations that accurately capture what is happening in the real world.

In both cases, scientists have created simplified subcomponents known as parameterizations to approximate these physical processes in the model. A major goal of LEAP is to improve on these parameterizations with the help of machine learning, which can leverage the incredible wealth of Earth system observations and high-resolution model data that has become available.

By training the machine learning model on these data sets, and then collaborating with Earth system modelers to incorporate these subcomponents into CESM, the researchers expect to improve the accuracy and detail of the resulting simulations.
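
A minimal sketch of the general concept, which is not LEAP's design, is to fit a regression model on high-resolution output so it can stand in for a subgrid process inside the coarse model. The input variables, network size, and data below are placeholders.

```python
# Illustrative sketch of a learned parameterization: map coarse-grid state
# variables to a subgrid tendency, training on (synthetic) high-resolution data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n = 5000
# Placeholder coarse-grid inputs: temperature, humidity, vertical velocity
X = rng.normal(size=(n, 3))
# Placeholder "true" subgrid cloud tendency diagnosed from high-res output
y = np.tanh(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=n)

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
emulator.fit(X[:4000], y[:4000])
print("Held-out R^2:", round(emulator.score(X[4000:], y[4000:]), 3))
```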

"Our goal is to harness data from observations and simulations to better represent the underlying physics, chemistry, and biology of Earth's climate system," said Galen McKinley, a professor of earth and environmental sciences at Columbia. "More accurate models will help give us a clearer vision of the future."

To learn more, read the NSF announcement and the Columbia news release.



Computer vision and deep learning provide new ways to detect cyber threats – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

The last decade's growing interest in deep learning was triggered by the proven capacity of neural networks in computer vision tasks. If you train a neural network with enough labeled photos of cats and dogs, it will be able to find recurring patterns in each category and classify unseen images with decent accuracy.

What else can you do with an image classifier?

In 2019, a group of cybersecurity researchers wondered if they could treat security threat detection as an image classification problem. Their intuition proved to be well-placed, and they were able to create a machine learning model that could detect malware based on images created from the content of application files. A year later, the same technique was used to develop a machine learning system that detects phishing websites.

The combination of binary visualization and machine learning is a powerful technique that can provide new solutions to old problems. It is showing promise in cybersecurity, but it could also be applied to other domains.

The traditional way to detect malware is to search files for known signatures of malicious payloads. Malware detectors maintain a database of virus definitions which include opcode sequences or code snippets, and they search new files for the presence of these signatures. Unfortunately, malware developers can easily circumvent such detection methods using different techniques such as obfuscating their code or using polymorphism techniques to mutate their code at runtime.

Dynamic analysis tools try to detect malicious behavior during runtime, but they are slow and require the setup of a sandbox environment to test suspicious programs.

In recent years, researchers have also tried a range of machine learning techniques to detect malware. These ML models have managed to make progress on some of the challenges of malware detection, including code obfuscation. But they present new challenges of their own, including the need to learn too many features and the need for a virtual environment to analyze the target samples.

Binary visualization can redefine malware detection by turning it into a computer vision problem. In this methodology, files are run through algorithms that transform binary and ASCII values to color codes.
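
A simplified sketch of that idea is shown below: read a file's bytes, map each byte to a color, and tile the result into a square image. The specific byte-to-color mapping here is an assumption for illustration; the papers use their own mapping schemes.

```python
# Simplified binary-visualization sketch: bytes -> colors -> square RGB image.
# The color mapping below is an illustrative assumption, not the papers' scheme.
import math
import numpy as np
from PIL import Image

def file_to_image(path, out_path="binary_vis.png"):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    side = math.ceil(math.sqrt(len(data)))
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[: len(data)] = data
    grid = padded.reshape(side, side)
    rgb = np.zeros((side, side, 3), dtype=np.uint8)
    rgb[..., 0] = grid                                            # byte value -> red channel
    rgb[..., 1] = np.where((grid >= 32) & (grid <= 126), 255, 0)  # printable ASCII -> green
    rgb[..., 2] = np.where((grid == 0) | (grid == 255), 255, 0)   # null / 0xFF bytes -> blue
    Image.fromarray(rgb).save(out_path)
    return rgb

# Example usage: file_to_image("sample.exe")
```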

In a paper published in 2019, researchers at the University of Plymouth and the University of Peloponnese showed that when benign and malicious files were visualized using this method, new patterns emerge that separate malicious and safe files. These differences would have gone unnoticed using classic malware detection methods.

According to the paper, "Malicious files have a tendency for often including ASCII characters of various categories, presenting a colorful image, while benign files have a cleaner picture and distribution of values."

When you have such detectable patterns, you can train an artificial neural network to tell the difference between malicious and safe files. The researchers created a dataset of visualized binary files that included both benign and malign files. The dataset contained a variety of malicious payloads (viruses, worms, trojans, rootkits, etc.) and file types (.exe, .doc, .pdf, .txt, etc.).

The researchers then used the images to train a classifier neural network. The architecture they used is the self-organizing incremental neural network (SOINN), which is fast and is especially good at dealing with noisy data. They also used an image preprocessing technique to shrink the binary images into 1,024-dimension feature vectors, which makes it much easier and compute-efficient to learn patterns in the input data.

The resulting neural network was efficient enough to compute a training dataset with 4,000 samples in 15 seconds on a personal workstation with an Intel Core i5 processor.

Experiments by the researchers showed that the deep learning model was especially good at detecting malware in .doc and .pdf files, which are the preferred medium for ransomware attacks. The researchers suggested that the model's performance can be improved if it is adjusted to take the filetype as one of its learning dimensions. Overall, the algorithm achieved an average detection rate of around 74 percent.

Phishing attacks are becoming a growing problem for organizations and individuals. Many phishing attacks trick the victims into clicking on a link to a malicious website that poses as a legitimate service, where they end up entering sensitive information such as credentials or financial information.

Traditional approaches for detecting phishing websites revolve around blacklisting malicious domains or whitelisting safe domains. The former method misses new phishing websites until someone falls victim, and the latter is too restrictive and requires extensive efforts to provide access to all safe domains.

Other detection methods rely on heuristics. These methods are more accurate than blacklists, but they still fall short of providing optimal detection.

In 2020, a group of researchers at the University of Plymouth and the University of Portsmouth used binary visualization and deep learning to develop a novel method for detecting phishing websites.

The technique uses binary visualization libraries to transform website markup and source code into color values.

As is the case with benign and malign application files, when visualizing websites, unique patterns emerge that separate safe and malicious websites. The researchers write, "The legitimate site has a more detailed RGB value because it would be constructed from additional characters sourced from licenses, hyperlinks, and detailed data entry forms. Whereas the phishing counterpart would generally contain a single or no CSS reference, multiple images rather than forms and a single login form with no security scripts. This would create a smaller data input string when scraped."

The example below shows the visual representation of the code of the legitimate PayPal login compared to a fake phishing PayPal website.

The researchers created a dataset of images representing the code of legitimate and malicious websites and used it to train a classification machine learning model.

The architecture they used is MobileNet, a lightweight convolutional neural network (CNN) that is optimized to run on user devices instead of high-capacity cloud servers. CNNs are especially suited for computer vision tasks including image classification and object detection.

Once the model is trained, it is plugged into a phishing detection tool. When the user stumbles on a new website, it first checks whether the URL is included in its database of malicious domains. If it's a new domain, then it is transformed through the visualization algorithm and run through the neural network to check if it has the patterns of malicious websites. This two-step architecture makes sure the system uses the speed of blacklist databases and the smart detection of the neural network-based phishing detection technique.
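
A hedged sketch of that two-step flow might look like the following. The helper `visualize_fn`, the `model` interface, and the threshold are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Two-step check sketch: blacklist lookup first, then visualize + classify.
# visualize_fn and model are assumed, hypothetical components.
from urllib.parse import urlparse

def is_phishing(url, blacklist, visualize_fn, model, threshold=0.5):
    domain = urlparse(url).netloc.lower()
    # Step 1: fast lookup against known-malicious domains
    if domain in blacklist:
        return True
    # Step 2: turn the page's markup/source into an image and classify it
    image = visualize_fn(url)                 # e.g. binary-visualized HTML as an array
    probs = model.predict(image[None, ...])   # e.g. a MobileNet-style classifier
    score = float(probs.ravel()[0])           # probability the page is phishing
    return score >= threshold
```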

The researchers' experiments showed that the technique could detect phishing websites with 94 percent accuracy. "Using visual representation techniques allows to obtain an insight into the structural differences between legitimate and phishing web pages. From our initial experimental results, the method seems promising and being able to fast detection of phishing attacker with high accuracy. Moreover, the method learns from the misclassifications and improves its efficiency," the researchers wrote.


I recently spoke to Stavros Shiaeles, cybersecurity lecturer at the University of Portsmouth and co-author of both papers. According to Shiaeles, the researchers are now in the process of preparing the technique for adoption in real-world applications.

Shiaeles is also exploring the use of binary visualization and machine learning to detect malware traffic in IoT networks.

As machine learning continues to make progress, it will provide scientists new tools to address cybersecurity challenges. Binary visualization shows that with enough creativity and rigor, we can find novel solutions to old problems.


Artificial Intelligence: Should You Teach It To Your Employees? – Forbes

Back view of a senior professor lecturing to a large group of students.

AI is becoming strategic for many companies across the world. The technology can be transformative for just about any part of a business.

But AI is not easy to implement. Even top-notch companies have challenges and failures.

So what can be done? Well, one strategy is to provide AI education to the workforce.

"If more people are AI literate and can start to participate and contribute to the process, more problems, both big and small, across the organization can be tackled," said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. "We call this the Democratization of AI and Analytics. A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few."

Just look at Levi Strauss & Co. Last year the company implemented a full portfolio of enterprise training programs, for all employees at all levels, focused on data and AI for business applications. For example, there is the Machine Learning Bootcamp, an eight-week program for learning Python coding, neural networks, and machine learning, with an emphasis on real-world scenarios.

"Our goal is to democratize this skill set and embed data scientists and machine learning practitioners throughout the organization," said Louis DeCesari, who is the Global Head of Data, Analytics, and AI at Levi Strauss & Co. "In order to achieve our vision of becoming the world's best digital apparel company, we need to integrate digital into all areas of the enterprise."

Granted, corporate training programs can easily become a waste. This is especially the case when there is not enough buy-in at the senior levels of management.

It is also important to have a training program that is more than just a bunch of lectures. "You need to have outcomes-based training," said Kathleen Featheringham, who is the Director of Artificial Intelligence Strategy at Booz Allen. "Focus on how AI can be used to push forward the mission of the organization, not just training for the sake of learning about AI. Also, there should be roles-based training. There is no one-size-fits-all approach to training, and different personas within an organization will have different training needs."

AI training can definitely be daunting because of the many topics and the complex concepts. In fact, it might be better to start with basic topics.

"A statistics course can be very helpful," said Wilson Pang, who is the Chief Technology Officer at Appen. "This will help employees understand how to interpret data and how to make sense of data. It will equip the company to make data-driven decisions."

There also should be coverage of how AI can go off the rails. "There needs to be training on ethics," said Aswini Thota, who is a Principal Data Scientist at Bose Corporation. "Bad and biased data only exacerbate the issues with AI systems."

For the most part, effective AI is a team sport. So it should really involve everyone in an organization.

"The acceleration of AI adoption is inescapable; most of us experience AI on a daily basis whether we realize it or not," said Alex Spinelli, who is the Chief Technology Officer at LivePerson. "The more companies educate employees about AI, the more opportunities they'll provide to help them stay up-to-date as the economy increasingly depends on AI-inflected roles. At the same time, nurturing a workforce that's ahead of the curve when it comes to understanding and managing AI will be invaluable to driving the company's overall efficiency and productivity."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems, and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as one for the COBOL programming language.


Leveraging Artificial Intelligence and Machine Learning to Solve Scientific Problems in the U.S. – OpenGov Asia

The U.S. Department of Energy's (DOE) advanced Computational and Data Infrastructures (CDIs), such as supercomputers, edge systems at experimental facilities, massive data storage, and high-speed networks, are brought to bear to solve the nation's most pressing scientific problems.

The problems include assisting in astrophysics research, delivering new materials, designing new drugs, creating more efficient engines and turbines, and making more accurate and timely weather forecasts and climate change predictions.

Increasingly, computational science campaigns are leveraging distributed, heterogeneous scientific infrastructures that span multiple locations connected by high-performance networks, resulting in scientific data being pulled from instruments to computing, storage, and visualisation facilities.

However, since these federated services infrastructures tend to be complex and managed by different organisations, domains, and communities, both the operators of the infrastructures and the scientists that use them have limited global visibility, which results in an incomplete understanding of the behaviour of the entire set of resources that science workflows span.

Although scientific workflow systems increase scientists productivity to a great extent by managing and orchestrating computational campaigns, the intricate nature of the CDIs, including resource heterogeneity and the deployment of complex system software stacks, pose several challenges in predicting the behaviour of the science workflows and in steering them past system and application anomalies.

"Our new project will provide an integrated platform consisting of algorithms, methods, tools, and services that will help DOE facility operators and scientists to address these challenges and improve the overall end-to-end science workflow," said a research professor of computer science and research director at the University of Southern California.

Under a new DOE grant, the project aims to advance the knowledge of how simulation and machine learning (ML) methodologies can be harnessed and amplified to improve the DOE's computational and data science.

The project will add three important capabilities to current scientific workflow systems: (1) predicting the performance of complex workflows; (2) detecting and classifying infrastructure and workflow anomalies and explaining the sources of these anomalies; and (3) suggesting performance optimisations. To accomplish these tasks, the project will explore the use of novel simulation, ML, and hybrid methods to predict, understand, and optimise the behaviour of complex DOE science workflows on DOE CDIs.

The assistant director for network research and infrastructure at RENCI stated that, in addition to creating a more efficient timeline for researchers, the team would like to provide CDI operators with the tools to detect, pinpoint, and efficiently address anomalies as they occur in the complex DOE facilities landscape.

To detect anomalies, the project will explore real-time ML models that sense and classify anomalies by leveraging underlying spatial and temporal correlations and expert knowledge, combine heterogeneous information sources, and generate real-time predictions.
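
As a purely illustrative example of this kind of real-time anomaly detection, and not the project's implementation, an unsupervised detector can be fit on telemetry from normal runs and then used to flag deviating samples. The metric names and numbers below are placeholders.

```python
# Illustrative workflow-telemetry anomaly detection sketch (placeholder data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Placeholder telemetry: [transfer_rate_MBps, cpu_util, io_wait, task_runtime_s]
normal = rng.normal(loc=[800, 0.6, 0.05, 120],
                    scale=[50, 0.05, 0.01, 10],
                    size=(2000, 4))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_samples = np.array([
    [790, 0.62, 0.05, 118],   # looks like a normal run
    [150, 0.95, 0.40, 600],   # degraded transfer rate and long runtime
])
print(detector.predict(new_samples))  # 1 = normal, -1 = anomaly
```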

Successful solutions will be incorporated into a prototype system with a dashboard that will be used for evaluation by DOE scientists and CDI operators. The project will enable scientists working on the frontier of DOE science to efficiently and reliably run complex workflows on a broad spectrum of DOE resources and accelerate time to discovery.

Furthermore, the project will develop ML methods that can self-learn corrective behaviours and optimise workflow performance, with a focus on explainability in its optimisation methods. Working together, the researchers behind Poseidon will break down the barriers between complex CDIs, accelerate the scientific discovery timeline, and transform the way that computational and data science are done.

As reported by OpenGov Asia, the U.S. Department of Energy's (DOE) Argonne National Laboratory is leading efforts to couple Artificial Intelligence (AI) and cutting-edge simulation workflows to better understand biological observations and accelerate drug discovery.

Argonne collaborated with academic and commercial research partners to achieve near real-time feedback between simulation and AI approaches to understand how two proteins in the SARS-CoV-2 viral genome interact to help the virus replicate and elude the host's immune system.


Machine Learning augmented docking studies of aminothioureas at the SARS-CoV-2-ACE2 interface – DocWire News


PLoS One. 2021 Sep 9;16(9):e0256834. doi: 10.1371/journal.pone.0256834. eCollection 2021.

ABSTRACT

The current pandemic outbreak clearly indicated the urgent need for tools allowing fast predictions of bioactivity of a large number of compounds, either available or at least synthesizable. In the computational chemistry toolbox, several such tools are available, with the main ones being docking and structure-activity relationship modeling either by classical linear QSAR or Machine Learning techniques. In this contribution, we focus on the comparison of the results obtained using different docking protocols on the example of the search for bioactivity of compounds containing N-N-C(S)-N scaffold at the S-protein of SARS-CoV-2 virus with ACE2 human receptor interface. Based on over 1800 structures in the training set we have predicted binding properties of the complete set of nearly 600000 structures from the same class using the Machine Learning Random Forest Regressor approach.
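
The modeling step described in the abstract can be sketched roughly as follows, using placeholder molecular descriptors and placeholder docking scores rather than the authors' data or code.

```python
# Minimal sketch: fit a random forest regressor on descriptors of the training
# structures, then score a larger candidate library. All data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_train_structures = 1800
X = rng.normal(size=(n_train_structures, 16))   # placeholder molecular descriptors
y = X @ rng.normal(size=16) + rng.normal(scale=0.3, size=n_train_structures)  # placeholder docking scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("Held-out R^2:", round(model.score(X_te, y_te), 3))

# The fitted model would then score descriptors computed for the larger
# candidate library, e.g. model.predict(candidate_descriptors)
```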

PMID:34499662 | DOI:10.1371/journal.pone.0256834
