Category Archives: Machine Learning

What is the Difference Between The Learning Curve of Machine Learning and Artificial Intelligence? – BBN Times

Machine learning (ML) is about statistical patterns in artificial data sets, while artificial intelligence (AI) is about causal patterns in real-world data sets.

Source: Medium

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Source: SAS

Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Artificial intelligence is important because it automates repetitive learning and discovery through data. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks. And it does so reliably and without fatigue. Of course, humans are still essential to set up the system and ask the right questions.

Machine learning is a subset of artificial intelligence that automates analytical model building, based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Using statistical learning technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing correlations and patterns in the data.

There are plenty of examples of how easy it is to break the leading pattern-recognition technology in ML/DL, known as deep neural networks (DNNs). These have proved incredibly successful at correctly classifying all kinds of input, including images, speech and data on consumer preferences. But DNNs are fundamentally brittle: taken into unfamiliar territory, they break in unpredictable ways. DNNs do not actually understand the world. Loosely modelled on the architecture of the brain, they are software structures made up of large numbers of digital neurons arranged in many layers. Each neuron is connected to others in the layers above and below it.
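The attack pattern behind many of these failures can be sketched in a few lines. The toy model below uses a single logistic "neuron" with made-up weights rather than a real DNN, but the gradient-sign perturbation is the same basic idea as published adversarial attacks: nudge each input dimension slightly in the direction that most increases the model's loss.

```python
import numpy as np

# Toy stand-in for a classifier: one logistic "neuron" with fixed,
# illustrative weights (a real attack would target a trained DNN the same way).
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial(x, true_label, eps=0.5):
    """Gradient-sign perturbation: a small nudge that maximally hurts the model."""
    p = predict(x)
    grad_x = (p - true_label) * w   # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2])            # clean input, confidently class 1
x_adv = adversarial(x, true_label=1.0)

print(round(predict(x), 2))     # 0.87 -- correct and confident
print(round(predict(x_adv), 2)) # 0.35 -- slightly perturbed input, now misclassified
```

The perturbation is small per dimension, yet it flips the decision, which is the brittleness described above.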

That could lead to substantial problems. Deep-learning systems are increasingly moving out of the lab into the real world, from piloting self-driving cars to mapping crime and diagnosing disease.

An AI footballer in a simulated penalty-shootout is confused when the AI goalkeeper enacts an adversarial policy: falling to the floor (right) | Credit: Adam Gleave

It was possible to use adversarial examples not only to fool a DNN, but also to reprogram it entirely, effectively repurposing an AI trained on one task to do another.

There are no quick fixes for the fundamental brittleness of DNNs fooled by noise or altered pixels; the deeper remedy is to make real AIs that can model, explore and exploit the world for themselves, write their own code and retain memories.

Deep learning is a specific class of machine learning algorithms that use complex neural networks. The building block of the brain is the neuron, while the basic building block of an artificial neural network is the perceptron, which accomplishes signal processing. Perceptrons are then connected into a large mesh network. The neural network is taught how to perform a task by having it process and analyze examples that have been previously labeled. For example, in an object recognition task, the neural network is presented with a large number of objects of a certain type (e.g., a dog or a car). The neural network learns to categorize new images by having been trained on recurring patterns. This approach combines advances in computing power and neural networks to learn complex patterns in large amounts of data.
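The perceptron described above can be written down directly. The sketch below trains a single perceptron on a tiny labeled data set (logical AND) using the classic perceptron learning rule; it is a minimal illustration of learning from previously labeled examples, not a deep network.

```python
import numpy as np

# A single perceptron, the basic building block described above:
# it weighs its inputs, sums them, and fires if the sum crosses a threshold.
def step(z):
    return 1 if z > 0 else 0

# Labeled examples: learn logical AND (output 1 only when both inputs are 1).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

# Classic perceptron rule: nudge the weights toward examples it gets wrong.
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = step(w @ xi + b)
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([step(w @ xi + b) for xi in X])  # [0, 0, 0, 1]
```

After a handful of passes over the labeled examples, the learned weights reproduce every label, which is the "trained on recurring patterns" idea in miniature.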

Source: Forbes & IBM

Contrary to popular assumptions, the biggest challenge facing companies with artificial intelligence (AI) isn't a lack of data scientists but rather data itself. Companies need more of it in every form: structured, unstructured and otherwise.

Source: Nature

Artificial-intelligence researchers are trying to fix the flaws of neural networks.

These kinds of systems will form the story of the coming decade in AI research, emerging as real, true or causal AI with a deep understanding of the structure of the world.

Real AI enables machines or software applications to effectively interact with any environment, understanding the world, learning from experience, and performing human-like tasks and beyond.

Of many known definitions, just a few are close to real AI systems: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. [OECD/LEGAL/0449]

Sectoral applications across various industries, from agriculture and forestry to manufacturing, healthcare, education and government, imply real-world AI systems.

The most advanced use case of real AI in the agricultural sector is known as precision agriculture, where AI-enabled processing of data allows farmers to make temporally and spatially tailored management decisions, leading to a more efficient use of agricultural inputs such as fertilisers and pesticides. The required data is generated through remote sensing technologies using satellites, planes and unmanned aerial vehicles (drones), and through on-the-ground sensors in combination with IoT technology.

But even blind pattern recognition with predictive ML algorithms is so extremely powerful that it has been good enough to make companies such as Apple, Microsoft, Amazon, Facebook, Google, Alibaba and Tencent the most valuable in the world.

But there's a much bigger wave coming. And this will be about superintelligent machines that manipulate the world and create their own data through their own actions. [Jürgen Schmidhuber at the Dalle Molle Institute for Artificial Intelligence Research in Manno, Switzerland]

We talk about the next generation of machine intelligence: Real-World AI and Machine Learning. Its universe of discourse is the whole world with all its sub-worlds.

Such a Universe is modeled as consisting of four major parts: the universe of Nature (World I), the domain of Mind (World II), the domain of Society and Human Culture (World III), and the realm of Technology, Engineering and Industry (World IV).

Science and technology, the arts and philosophy are unified as a web of intellectual learning, scientific knowledge and engineering sciences: a union of human knowledge defined as wisdom science (or scientific wisdom).

It affords a framework for the most life-critical innovations and breakthroughs, from the Internet of Everything to the Theory of Everything, and from Emerging Technologies to Intelligent Cities and the Connected Smart World, all integrated by Real-World AI and ML.

Companies that don't adopt machine learning and AI technologies are destined to be left behind. Most industries are already being changed by the emergence of AI. 2021 has shown a growing confidence in artificial intelligence and its predictive technology. However, for it to achieve its full potential, AI needs to be trusted by companies.

More here:
What is the Difference Between The Learning Curve of Machine Learning and Artificial Intelligence? - BBN Times

Harvard researchers part of new NSF AI research institute – Harvard School of Engineering and Applied Sciences

Harvard University researchers will take leading roles in a new National Science Foundation (NSF) artificial intelligence research institute housed at the University of Washington (UW). The UW-led AI Institute for Dynamic Systems is among 11 new AI research institutes announced today by the NSF.

Na Li, the Gordon McKay Professor of Electrical Engineering and Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a co-principal investigator at the institute and will lead one of the main research thrusts. Michael Brenner, the Michael F. Cronin Professor of Applied Mathematics and Applied Physics and Professor of Physics at SEAS, and Lucas Janson, Assistant Professor of Statistics and Affiliate in Computer Science, will also be part of the institute's research team.

The AI Institute for Dynamic Systems will focus on fundamental AI and machine learning theory, algorithms and applications for real-time learning and control of complex dynamic systems, which describe chaotic situations where conditions are constantly shifting and hard to predict. In addition to research, the institute will be focused on training future researchers in this field throughout the education pipeline.

"The engineering sciences are undergoing a revolution that is aided by machine learning and AI algorithms," said institute director J. Nathan Kutz, a UW professor of applied mathematics. "This institute brings together a world-class team of engineers, scientists and mathematicians who aim to integrate fundamental developments in AI with applications in critical and emerging technological applications."

The overall goal of this institute is to integrate physics-based models with AI and machine learning approaches to develop data-enabled, efficient and explainable solutions for challenges across science and engineering. The research will be divided into three main thrusts: modeling, control and optimization, and sensors.

Li will lead the control research thrust. Li, along with Janson and the rest of the control research team, will leverage the successes of machine learning towards the control of modern complex dynamical systems. Specifically, they will be focused on several challenges pertaining to reinforcement learning (RL), a class of machine learning that addresses the problem of learning to control physical systems by explicitly considering their inherent dynamical structure and feedback loop.

The AI for control team will figure out how to develop scalable learning-based control methods for large-scale dynamical systems; maintain the performance of the learned policies even when there is a model class mismatch; and guarantee that the systems maintain stability and stay within a safety constraint while still learning efficiently.
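The feedback loop at the heart of reinforcement learning can be illustrated at the smallest possible scale. The sketch below is plain tabular Q-learning on an invented five-state toy system, nowhere near the scalable, safety-constrained methods the institute will develop, but it shows the act-observe-update cycle the text describes; the environment, rewards and hyperparameters are all assumptions for illustration.

```python
import random

# Tabular Q-learning on a toy 1-D "plant": states 0..4, actions move
# left/right, reward only for reaching state 4. The agent learns a
# control policy purely from trial-and-error feedback.
random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2000):
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == 4 else 0.0
        # Bellman update: move Q toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The greedy policy should now march right toward the goal from every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)]
print(policy)  # [1, 1, 1, 1]
```

Real dynamical systems have continuous states, model mismatch and hard safety constraints, which is exactly why the scalability and safety challenges named above are open research problems.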

"To date, the successes of RL have been limited to very structured or simulated environments," said Li. "Applying RL to real-world systems, like energy systems, advanced manufacturing, and robot autonomy, faces many critical challenges such as scalability, robustness, and safety, to name a few. We will develop critically enabling mathematical, computational, and engineering architectures to overcome these challenges to bring the success of AI/ML to our real-world systems."

"One particular focus of ours will be to quantify the statistical uncertainty of what AI learns, enabling us to develop algorithms with rigorous safeguards that prevent them from harming anyone or anything while they explore and learn from their environment," said Janson.

Brenner, a leader in the field of physics-informed machine-learning methods for complex systems, will be part of the AI for modeling research team. That team will be focused on learning physically interpretable models of dynamical systems from off-line and/or on-line streaming data.

"Our research will explore how we can develop better machine-learning technologies by baking in and enforcing known physics, such as conservation laws, symmetries, etc.," said institute associate director Steve Brunton, a UW associate professor of mechanical engineering. "Similarly, in complex systems where we only have partially known or unknown physics, such as neuroscience or epidemiology, can we use machine learning to learn the 'physics' of these systems?"

Harvard is among several partner institutions, including the University of Hawaii at Mānoa, Montana State University, the University of Nevada, Reno, Boise State University, the University of Alaska Anchorage, Portland State University and Columbia University. The institute will also partner with high school programs that focus on AI-related projects and create a post-baccalaureate program that will actively recruit and support recent college graduates from underrepresented groups, United States veterans and first-generation college students, with the goal of helping them attend graduate school. The institute will receive about $20 million over five years.

Continue reading here:
Harvard researchers part of new NSF AI research institute - Harvard School of Engineering and Applied Sciences

Machine Learning May Solve the Cube Conundrum – Journal of Petroleum Technology

Optimal well spacing is the question. Well interactions are the problem. And cube drilling was supposed to be the answer. But it didn't turn out that way.

"There was this idea that operators could avoid parent/child interactions by codeveloping their wells," said Ted Cross, a technical adviser with Novi Labs, during a recent presentation. "They could develop many, many zones and maximize the recovery from a three-dimensional volume of rock."

This was cube drilling.

"They could get a lot of operational efficiencies by having multiple frac crews on site," Cross said, "building these megapads and saving on pad-construction costs."

The practice was tried, and, when the results were released, production was underwhelming. Stocks fell. Clearly, the cube was not the answer.

Nonetheless, much was learned from the venture into this dense drilling, which saw 50, 60, maybe 70 wells per section, within a given square mile, which is incredibly dense, Cross said. Just because the idea of a 70-well superdevelopment is dead doesn't mean that the concept can't still be useful.

While the concept of megapads has faded, it is not gone. Cross presented development maps and analysis that show people are still going to town on dense development, even if they're not 60 wells per section. The industry has taken a little bit of time to figure out what geology supports these.

Consequently, well spacing remains important. "It's still the key to driving net asset value and cash flow," said Novi's president and cofounder, Jon Ludwig. "If you go too aggressive, too many wells per section, obviously you lose cash flow, subtract net asset value, and, if you're public, you can subtract a good amount of company value as well. But, if you're not aggressive enough, you leave value on the table. So, it's still critical to get this right."

Getting it right takes data, something the oil and gas industry has never lacked and something that cube drilling has produced in great quantities. "Courtesy of all this cube development that has occurred, there's a lot of data," Ludwig said. "That's a huge advantage. We know now what good and bad look like. Every single cube that's been developed has left a signature."

Of course, the data doesn't help if it isn't used properly. "We can all benefit from that if we know how to use the data well," Ludwig said. This is where machine learning comes in.

"Machine learning models can tease out these subtle warnings from the past," Ludwig said.

One technique that benefits from the lessons of cube drilling is what Ludwig calls the surgical strike.

"Getting cubes right is not all about a codeveloped cube in greenfield acreage," Ludwig said. "A surgical strike, as we've defined it, is: What if I put a lease-line well between these existing developments? Or, what if I've just acquired acreage in a very developed play like the Eagle Ford or the Bakken? How do I improve asset value? How do I bring in learnings, completion designs and so on, and actually improve net asset value by figuring out where you could still develop?"

The machine-learning models help, Ludwig said, but the data must be dynamic. "If you've built any kind of data-driven model, you want to use that model then to actually make forecasts and run scenarios for various ways you might develop your acreage. In order to do that, you need to have dynamic parent/child calculations for these hypothetical developments. If you're going to plan a cube where you're going to come in under an existing development, you need data that gets generated on the fly that describes distances, timing, etc., and allows whatever method you're using for modeling to change the forecast based on those factors."
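Data "generated on the fly" of this kind can be pictured as a small feature-engineering step. The sketch below is a hypothetical illustration, not Novi's software: given existing parent wells and a proposed child well, it derives the spacing and timing features a forecasting model might consume. The class, field names and units are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical schema for the dynamic parent/child calculation described
# above: derive spacing and timing features for a proposed infill well.

@dataclass
class Well:
    name: str
    x_ft: float           # lateral position along the section, in feet
    first_prod_year: int  # year production started (planned, for the child)

def parent_child_features(existing, child):
    """Distance to the nearest parent and its age when the child comes online."""
    nearest = min(existing, key=lambda w: abs(w.x_ft - child.x_ft))
    return {
        "nearest_parent": nearest.name,
        "spacing_ft": abs(nearest.x_ft - child.x_ft),
        "parent_age_years": child.first_prod_year - nearest.first_prod_year,
    }

parents = [Well("A-1", 0.0, 2016), Well("A-2", 1320.0, 2018)]
infill = Well("A-3", 880.0, 2022)

print(parent_child_features(parents, infill))
# {'nearest_parent': 'A-2', 'spacing_ft': 440.0, 'parent_age_years': 4}
```

Because the features are computed from the hypothetical development plan rather than stored in a table, the same model can re-forecast each scenario as the planned spacing and timing change.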

This, Ludwig added, must be presented as a time series. "We learned early on that making a point prediction is valuable and useful, but it's not nearly as useful as showing the shape of the curve and how the production rates change over time."

A cube, however, will not thrive in a black box. "You really need to have the model not only output a forecast but also output something that explains why that forecast was made, what variables are driving that forecast," Ludwig said. He said that the models, if applied correctly, can explain their work.

"What I mean by explain their work is: If a model forecasts X or Y, two different forms of a particular cube design, can it tell me also why? Because answering why is important when you're making the kinds of investment decisions that the industry is being asked to make. The sophistication of the models is not just the ability to make accurate forecasts, it is also the ability to explain their work. These two things together are critical for the financial case to continue to develop cubes."

See original here:
Machine Learning May Solve the Cube Conundrum - Journal of Petroleum Technology

LG CNS Recognized by Google Cloud with Machine Learning Specialization – The Korea Bizwire

This photo provided by LG CNS Co. on July 29, 2021, shows LG CNS workers displaying Google Cloud's Machine Learning Specialization distinction and TensorFlow Developer certificate.

SEOUL, July 29 (Korea Bizwire) LG CNS Co., a major IT service provider under LG Group, said Thursday it has earned a Machine Learning Specialization distinction from Google Cloud as the company strives to expand its presence in the artificial intelligence (AI) sector.

LG CNS said it became the first South Korean company to achieve the machine learning distinction in the Google Cloud Partner Advantage program for its expertise in the sector.

Google Cloud has 17 types of specialization certification programs that are awarded to partner companies that prove their expertise in certain technology sectors.

To earn a Machine Learning Specialization distinction, a company has to meet Google Clouds requirements in 33 categories across six fields, in which a firm is assessed on areas ranging from machine learning models to investment plans.

Leveraging Google Cloud's technology, LG CNS said it has established AI-powered services for LG Electronics Inc. and AEON Corp., one of Japan's largest chains of language learning institutions.

Seven LG CNS professional machine learning engineers were certified by Google, and some 170 of its workers were also recognized with TensorFlow Developer Certificates from the company.

To beef up its AI capabilities, LG CNS said it has 35 working teams dedicated to its AI business.


Go here to see the original:
LG CNS Recognized by Google Cloud with Machine Learning Specialization - The Korea Bizwire

Cassie the bipedal robot uses machine learning to complete a 5km jog – New Atlas

Four years is a long time in robotics, especially so for a bipedal robot developed at Oregon State University (OSU) named Cassie. Dreamt up as an agile machine to carry packages from delivery vans to doorsteps, Cassie has recently developed an ability to run, something its developers have now shown off by having it complete what they say is the first 5-km (3.1-mi) jog by a bipedal robot.

We first took a look at Cassie the bipedal robot back in 2017, when OSU researchers revealed an ostrich-like machine capable of waddling along at a steady pace. It was based on the team's previously developed ATRIAS bipedal robot, but featured steering feet and sealed electronics in order to function in the rain and snow and navigate outdoor terrain.

The team has since used machine learning to equip Cassie with an impressive new skill: the ability to run. This involved what they call a deep reinforcement learning algorithm, which Cassie combines with its unique biomechanics, including knees that bend like an ostrich's, to make fine adjustments that keep it upright on the move.

"Deep reinforcement learning is a powerful method in AI that opens up skills like running, skipping and walking up and down stairs," says team member Yesh Godse.

Running robots are of course nothing new. Honda's ASIMO robot has been jogging along at speeds of up to 6 km/h (3.7 mph) since 2004, and in 2011 we looked at a machine called MABEL with a peak pace of 10.9 km/h (6.8 mph), which was billed as the world's fastest bipedal robot with knees. More recently, the Atlas humanoid robot from Boston Dynamics has wowed us not just by running through the woods, but by performing backflips and parkour.

The OSU team were keen to show off the endurance capabilities of Cassie, by having it use its machine learning algorithms to maintain balance across a 5-km run around the university campus, while untethered and on a single charge of its batteries. It wasn't all smooth sailing, with Cassie falling down twice due to an overheated computer and a high-speed turn gone wrong. But following a couple of resets, the run was completed in a total time of 53 minutes and 3 seconds.

"Cassie is a very efficient robot because of how it has been designed and built, and we were really able to reach the limits of the hardware and show what it can do," said Jeremy Dao, a Ph.D. student in the Dynamic Robotics Laboratory.

According to the researchers, this is the first time a bipedal robot has finished a 5-km run, albeit at walking speed and with a little help along the way. It is possible that other bipedal robots may be capable of covering such distances, but it is also possible that no one has thought to try. Either way, the run is an impressive demonstration of the progress being made by the team. Check it out below.

OSU Bipedal Robot First to Run 5K

Source: Oregon State University

See original here:
Cassie the bipedal robot uses machine learning to complete a 5km jog - New Atlas

Global AI in Information and Communications Technology (ICT) Report 2021: AI and Cognitive Computing in Communications, Applications, Content, and…

DUBLIN--(BUSINESS WIRE)--The "AI in Information and Communications Technology 2021-2026: AI and Cognitive Computing in Communications, Applications, Content, and Commerce" report has been added to the publisher's offering.

This report assesses the AI in the ICT ecosystem including technologies, solutions and players. Application areas covered include marketing and business decision making, workplace automation, predictive analysis and forecasting, fraud detection and mitigation.

The report provides detailed forecasts globally, regionally, and across market segments from 2021 to 2026. The report also covers AI subset technologies, embedded in other technologies, and cognitive computing in key industry verticals.

While the opportunities for artificial intelligence in the information and communications technology industry are virtually limitless, we focus on a few key opportunities, including AI in big data, chatbots, chipsets, cybersecurity, IoT, smart machines and robotics. AI is poised to fundamentally shift the Information and Communications Technology (ICT) industry through technologies such as Machine Learning, Natural Language Processing, Deep Learning and others.

AI will dramatically enhance the performance of communications, apps, content, and digital commerce. AI will also drive new business models and create entirely new business opportunities as interfaces and efficiencies facilitate engagement that was heretofore inconceivable.

Many other industry verticals will be transformed through this evolution as ICT and digital technologies support many aspects of industry operations including supply chains, sales and marketing processes, product and service delivery and support models.

For example, we see particularly substantial impacts on the medical and bioinformatics as well as financial services segments. Workforce automation is an area that will affect many different industry verticals as AI greatly enhances workflow, processes, and accelerates the ROI for smart workplace investments.

Key Topics Covered:

1 Executive Summary

2 Introduction

2.1 Artificial Intelligence Overview

2.1.1 Intelligent Software Agent

2.1.2 Problem Solving

2.1.3 Practical Approaches to AI

2.2 Machine Learning

2.2.1 Supervised Learning

2.2.2 Unsupervised Learning

2.2.3 Semi-Supervised Learning

2.2.4 Reinforcement Learning

2.3 Deep Learning

2.3.1 Artificial Neural Networks

2.3.2 Artificial Neural Network Deployment

2.4 Cognitive Computing

2.5 AI Algorithms in Applications

2.5.1 Natural Language Processing

2.5.2 Machine Perception

2.5.3 Data Mining

2.5.4 Motion and Manipulation

2.6 Limitations and Challenges for AI Expansion

2.7 AI in Information and Communications Technology Industry

2.7.1 AI Market Drivers in ICT

2.7.2 Key AI Opportunities in ICT

Artificial Intelligence and Big Data

Artificial Intelligence in Chatbots and Virtual Private Assistants

Artificial Intelligence in Chipsets and Microelectronics

Artificial Intelligence and Cybersecurity

Artificial Intelligence and Internet of Things

Artificial Intelligence in Network Management and Optimization

Artificial Intelligence in Smart Machines and Robotics

3 AI Intellectual Property Leadership by Country and Company

3.1 Global AI Patents

3.2 AI Patents by Leading Countries

3.3 Global Machine Learning Patents

3.4 Machine Learning Patents by Leading Countries

3.5 Machine Learning Patents by Leading Companies

3.6 Global Deep Learning Patents

3.7 Deep Learning Patents by Leading Countries

3.8 Global Cognitive Computing Patents

3.9 Cognitive Computing Patents by Leading Countries

3.10 AI and Cognitive Computing Innovation Leadership

4 AI in ICT Market Analysis and Forecasts 2021-2026

4.1 Global Markets for AI 2021-2026

4.2 Global Market for AI by Segment 2021-2026

4.3 Regional Markets for AI 2021-2026

4.4 AI Market by Key Application Area 2021-2026

4.4.1 AI Markets for Predictive Analysis and Forecasting 2021-2026

4.4.2 AI Market for Marketing and Business Decision Making 2021-2026

4.4.3 AI Market for Fraud Detection and Classification 2021-2026

4.4.4 AI Market for Workplace Automation 2021-2026

5 AI in Select Industry Verticals

5.1 Market for AI by Key Industry Verticals 2021-2026

5.1.1 AI Market for Internet-related Services and Products 2021-2026

5.1.2 AI Market for Telecommunications 2021-2026

5.1.3 AI Market for Medical and Bioinformatics 2021-2026

5.1.4 AI Market for Financial Services 2021-2026

5.1.5 AI Market for Manufacturing and Heavy Industries 2021-2026

5.2 AI in other Industry Verticals

6 AI in Major Market Segments

6.1 AI Market by Product Segment 2021-2026

6.2 Market for Embedded AI within other Technologies 2021-2026

6.2.1 AI Algorithms in Data Mining 2021-2026

6.2.2 AI in Machine Perception Technology 2021-2026

6.2.3 Market for AI Algorithms in Pattern Recognition Technology 2021-2026

6.2.4 Market for AI Algorithm in Intelligent Decision Support Systems Technology 2021-2026

6.2.5 Market for AI Algorithms in Natural Language Processing Technology 2021-2026

7 Important Corporate AI M&A

7.1 Apple Inc.

7.2 Facebook

7.3 Google

7.4 IBM

7.5 Microsoft

8 AI in ICT Use Cases

8.1 Verizon Uses AI and Machine Learning To Improve Performance

8.2 Deutsche Telekom Uses AI

8.3 Use-cases in Telecommunications powered by AI

8.4 KDDI R&D Laboratories Inc., AI-assisted Automated Network Operation System

8.5 Telefonica AI Use Cases

8.6 Brighterion AI, Worldpay Use cases

9 AI in ICT Vendor Analysis

9.1 IBM Corporation

9.1.1 Company Overview

9.1.2 Recent Developments

9.2 Intel Corporation

9.3 Microsoft Corporation

9.4 Google Inc.

9.5 Baidu Inc.


9.7 Hewlett Packard Enterprise

9.8 Apple Inc.

9.9 General Electric

Read more:
Global AI in Information and Communications Technology (ICT) Report 2021: AI and Cognitive Computing in Communications, Applications, Content, and...

Machine Learning for Cardiovascular Disease Improves When Social, Environmental Factors Are Included – NYU News

Research emphasizes the need for algorithms that incorporate community-level data, studies that include more diverse populations

Machine learning can accurately predict cardiovascular disease and guide treatment, but models that incorporate social determinants of health better capture risk and outcomes for diverse groups, finds a new study by researchers at New York University's School of Global Public Health and Tandon School of Engineering. The article, published in the American Journal of Preventive Medicine, also points to opportunities to improve how social and environmental variables are factored into machine learning algorithms.

Cardiovascular disease is responsible for nearly a third of all deaths worldwide and disproportionately affects lower socioeconomic groups. Increases in cardiovascular disease and deaths are attributed, in part, to social and environmental conditions, also known as social determinants of health, that influence diet and exercise.

"Cardiovascular disease is increasing, particularly in low- and middle-income countries and among communities of color in places like the United States," said Rumi Chunara, associate professor of biostatistics at NYU School of Global Public Health and of computer science and engineering at NYU Tandon School of Engineering, as well as the study's senior author. "Because these changes are happening over such a short period of time, it is well known that our changing social and environmental factors, such as increased processed foods, are driving this change, as opposed to genetic factors, which would change over much longer time scales."

Machine learning, a type of artificial intelligence used to detect patterns in data, is being rapidly developed in cardiovascular research and care to predict disease risk, incidence, and outcomes. Already, statistical methods are central in assessing cardiovascular disease risk and U.S. prevention guidelines. Developing predictive models gives health professionals actionable information by quantifying a patient's risk and guiding the prescription of drugs or other preventive measures.

Cardiovascular disease risk is typically computed using clinical information, such as blood pressure and cholesterol levels, but rarely takes social determinants, such as neighborhood-level factors, into account. Chunara and her colleagues sought to better understand how social and environmental factors are beginning to be integrated into machine learning algorithms for cardiovascular disease: what factors are considered, how they are being analyzed, and what methods improve these models.

"Social and environmental factors have complex, non-linear interactions with cardiovascular disease," said Chunara. "Machine learning can be particularly useful in capturing these intricate relationships."
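Mechanically, "incorporating social determinants" means joining community-level variables onto clinical features before a model is fit. The sketch below shows that step on synthetic data; the variable names, the four-patient data set and the tiny gradient-descent logistic model are illustrative stand-ins, not the study's methods.

```python
import numpy as np

# Per-patient clinical features: [systolic BP, total cholesterol] (synthetic).
clinical = np.array([[140, 220], [120, 180], [160, 260], [110, 170]])
# Community-level social determinants joined by neighborhood:
# [walkability score, grocery access] (synthetic).
community = np.array([[0.2, 0.1], [0.8, 0.9], [0.1, 0.2], [0.9, 0.8]])
y = np.array([1, 0, 1, 0])  # cardiovascular event label (synthetic)

# Concatenate, then standardize so clinical and community features
# contribute on a comparable scale.
X = np.hstack([clinical, community])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Minimal logistic regression fit by gradient descent on the joint features.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

preds = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)
print(preds.tolist())  # [1, 0, 1, 0] on this separable toy data
```

In a real study, the same join-then-fit pattern would use electronic health records plus census-tract or neighborhood data, and model quality would be judged on held-out patients rather than training fit.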

The researchers analyzed existing research on machine learning and cardiovascular disease risk, screening more than 1,600 articles and ultimately focusing on 48 peer-reviewed studies published in journals between 1995 and 2020.

They found that including social determinants of health in machine learning models improved the ability to predict cardiovascular outcomes like rehospitalization, heart failure, and stroke. However, these models did not typically include the full list of community-level or environmental variables that are important in cardiovascular disease risk. Some studies did include additional factors such as income, marital status, social isolation, pollution, and health insurance, but only five studies considered environmental factors such as the walkability of a community and the availability of resources like grocery stores.

The researchers also noted the lack of geographic diversity in the studies, as the majority used data from the United States, countries in Europe, and China, neglecting many parts of the world experiencing increases in cardiovascular disease.

"If you only do research in places like the United States or Europe, you'll miss how social determinants and other environmental factors related to cardiovascular risk interact in different settings, and the knowledge generated will be limited," said Chunara.

"Our study shows that there is room to more systematically and comprehensively incorporate social determinants of health into cardiovascular disease statistical risk prediction models," said Stephanie Cook, assistant professor of biostatistics at NYU School of Global Public Health and a study author. "In recent years, there has been a growing emphasis on capturing data on social determinants of health, such as employment, education, food, and social support, in electronic health records, which creates an opportunity to use these variables in machine learning studies and further improve the performance of risk prediction, particularly for vulnerable groups."

"Including social determinants of health in machine learning models can help us to disentangle where disparities are rooted and bring attention to where in the risk structure we should intervene," added Chunara. "For example, it can improve clinical practice by helping health professionals identify patients in need of referral to community resources like housing services, and it broadly reinforces the intricate synergy between the health of individuals and our environmental resources."

In addition to Chunara and Cook, study authors include Yuan Zhao, Erica Wood, and Nicholas Mirin, students at the NYU School of Global Public Health. The research was supported by funding from the National Science Foundation (IIS-1845487).

About the NYU School of Global Public Health: At the NYU School of Global Public Health (NYU GPH), we are preparing the next generation of public health pioneers with the critical thinking skills, acumen, and entrepreneurial approaches necessary to reinvent the public health paradigm. Devoted to employing a nontraditional, interdisciplinary model, NYU GPH aims to improve health worldwide through a unique blend of global public health studies, research, and practice. The School is located in the heart of New York City and extends to NYU's global network on six continents. Innovation is at the core of our ambitious approach, thinking and teaching. For more, visit

About the New York University Tandon School of Engineering: The NYU Tandon School of Engineering dates to 1854, the founding date for both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute (widely known as Brooklyn Poly). A January 2014 merger created a comprehensive school of education and research in engineering and applied sciences, rooted in a tradition of invention and entrepreneurship and dedicated to furthering technology in service to society. In addition to its main location in Brooklyn, NYU Tandon collaborates with other schools within NYU, one of the country's foremost private research universities, and is closely connected to engineering programs at NYU Abu Dhabi and NYU Shanghai. It operates Future Labs focused on start-up businesses in downtown Manhattan and Brooklyn and an award-winning online graduate program. For more information, visit

Here is the original post:
Machine Learning for Cardiovascular Disease Improves When Social, Environmental Factors Are Included - NYU News

How Olympic Surfing Is Trying to Ride the Machine Learning Wave – The Wall Street Journal

TOKYO - South African surfer Bianca Buitendag uses some apps and websites to gauge wind and wave conditions before she competes, but she doesn't consider surfing a high-tech sport. "It's mostly about trying to gauge the weather."

"That's about it," she said this week.

Carissa Moore, who on Tuesday faced off with Buitendag for the sport's first-ever Olympic gold medal, takes a different approach. She loads up on performance analytics, wave pools and science. The American, who beat Buitendag by nearly 6.5 points to win the gold medal on Tuesday, has competed on artificial waves and uses technology such as a wearable ring that tracks her sleep and other vitals to help her coaches fine-tune her training and recovery.

Their different approaches go to the heart of a long-running tension in surfing: dueling images of the spiritual, naturalist wave rider versus the modern, techie athlete.

"There's this illusion that you're trying to sustain, even if you're aware of all the stuff that's gone into [surfing]," said Peter Westwick, a University of Southern California surfing historian. He's talking about the use of advanced polymer chemistry-enabled products in surfboards and wetsuits, and the complex weather modeling that helps govern where and how competitions like this Olympic event are held. The tech has roots in military research and development, he said.

See the rest here:
How Olympic Surfing Is Trying to Ride the Machine Learning Wave - The Wall Street Journal

Holly Herndon on the power of machine learning and developing her digital twin Holly+ – The FADER

The FADER: Holly Herndon, thank you for joining us today for The FADER interview.

Holly Herndon: Thanks for having me.

So Holly+ has been live for about 24 hours. How have you felt about its reception so far?

Honestly, I've been really super pleased with it. I think at one point there were 10 hits to the server a second. So that means people were kind of going insane uploading stuff and that's basically what I wanted to happen. So I've been really, really happy with it. I also am happy with people kind of understanding that it's like, this is still kind of a nascent tech, so it's not a perfect rendition of my voice, but it's still, I think, a really interesting and powerful tool. And I think most people really got that. So I've been really pleased.

That's one of the things that drew me to Holly+ when I first read about it in a press release was that it seems like the technology is being developed specifically for this time and it is nascent and it is sort of still growing, but it feels like an attempt to get in on the ground floor of something that is already happening in a lot of different sectors of technology.

I mean, I've been working with, I like to say machine learning rather than artificial intelligence, because I feel like artificial intelligence is just such a loaded term. People imagine kind of like Skynet, it's kind of sentient. So I'm going to use machine learning for this conversation. But I've been working with machine learning for several years now. I mean the last album that I made, PROTO, I was creating kind of early models of my voice and also the voices of my ensemble members and trying to create kind of a weird hybrid ensemble where people are singing with models of themselves. So it's been going on for a while and of course machine learning has been around for decades, but there have been some really interesting kind of breakthroughs in the last several years that I think is why you see so much activity in this space now.

It's just so much more powerful now, I think, than it was a couple decades back. We had some really interesting style-transfer white papers that were released. And so I think it's an exciting time to be involved with it. And I was really wanting to release a public version of kind of a similar technique that I was using on my album that people could just play with and have fun with. And I was actually just kind of reaching out to people on social media. And Yotam sent me back a video of one of my videos, but he had translated it into kind of an orchestral passage. And he was like, "I'm working on this exact thing right now." So it was perfect timing. And so we kind of linked up and started working on this specific Holly+ model.

Talk to me a little bit about some of those really powerful developments in machine learning that have informed Holly+.

Oh gosh. I mean, there's a whole history to go into. But I guess a lot of the research that was happening previously is a lot of people were using MIDI. Basically kind of trying to analyze MIDI scores to create automatic compositions in the style of an existing composer or a combination of existing composers. And I found that to be not so interesting. I'm really interested in kind of the sensual quality of audio itself. And I feel like so much is lost in a MIDI file. So much is lost in a score even. It's like the actual audio is material. I find really interesting. And so when some of the style transfers started to happen and we could start to deal with audio material rather than kind of a representation of audio through MIDI or through score material, that's when things I think got to be really interesting.

So you could imagine if you could do a style transfer of kind of any instrument onto any other instrument. Some of the really unique musical decisions that one might make as a vocalist or as a trombonist or as a guitarist, they're very different kind of musical decisions that you make depending on the kind of physical instrument that you're playing. And if you can kind of translate that to another instrument that would never make those kinds of same decisions, and I'm giving the instrument sentience here, but you know what I mean. Some of the kinds of decisions that a musician would make playing a specific instrument, if you can translate that onto others, I find that a really interesting kind of new way to make music and to find expression through sound generation. And I do think it is actually new for that reason.
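Herndon's point that "so much is lost in a MIDI file" can be pictured with a toy example (an editorial sketch, not from the interview): a MIDI-style event stores only pitch and timing, while the raw waveform of the same note also carries timbre, such as the relative strength of its harmonics, which is exactly the material audio-domain style transfer works on:

```python
# Illustrative sketch: the same one-second A4 note as a MIDI-style event
# versus raw audio. The event keeps only pitch/timing; the waveform also
# encodes timbre (here, the amplitudes of the note's harmonics).
import numpy as np

midi_event = {"pitch": 69, "start": 0.0, "duration": 1.0}  # A4 = MIDI note 69

sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of time samples
f0 = 440.0                      # fundamental frequency of A4
# A toy "instrument": fundamental plus two progressively weaker harmonics.
audio = (np.sin(2 * np.pi * f0 * t)
         + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
         + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), 1 / sr)
# Energy at each partial -- information a MIDI event simply does not encode.
partials = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in (1, 2, 3)]
print([round(p / partials[0], 2) for p in partials])  # → [1.0, 0.5, 0.25]
```

The recovered harmonic ratios are the instrument's "fingerprint"; two instruments playing the identical MIDI event would produce different ratios, which is the information audio-domain models can transfer.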

I also wanted to talk a little bit about some of the ethical discussions around machine learning and some of the developments that have happened over the past year. Of course, the last time we spoke, it was about an AI-generated version of a Travis Scott song, created using his music without his consent. And over the past year as well, a Korean company has managed to develop an AI based on a deceased popular singer, and an entire reality show was created around that. It was something called AI vs. Human. So I was wondering if these sorts of developments in this sphere informed how you approached Holly+ and the more managerial aspects of how you wanted to present it to the world.

This is something that I think about quite a lot. I think that voice models, or even kind of physical likeness models or kind of style emulation, I think it opens up a whole new kind of question for how we deal with IP. I mean, we've been able to kind of re-animate our dead through moving pictures or through samples, but this is kind of a brand new field in that you can have the person do something that they never did. It's not just kind of replaying something that they've done in the past. You can kind of re-animate them and give them entirely new phrases that they may not have approved of in their lifetime, or even for living artists that they might not approve of. So I think it opens up a kind of Pandora's box.

And I think we're kind of already there. I mean, if you saw the Jukebox project, which was super impressive. I mean, they could start with a kind of known song and then just kind of continue the song with new phrases and new passages in a way that kind of fit the original style. It's really powerful. And we see some of the really convincing Tom Cruise deepfakes and things. These are kind of part of, I think, our new reality. So I kind of wanted to jump in front of that a little bit. There's kind of different ways that you could take it. You could try to be really protective over your self and your likeness. And we could get into this kind of IP war where you're just kind of doing takedowns all the time and trying to hyper-control what happens with your voice or with your likeness.

And I think that that is going to be a really difficult thing for most people to do, unless you kind of have a team of lawyers, which I'm sure that's probably already happening with people who do have teams of lawyers. But I think the more interesting way to do it is to kind of open it up and let people play with it and have fun with it and experiment. But then if people want to have kind of an officially approved version of something, then that would go through myself and my collective, which is represented through a DAO. And we can kind of vote together on the stewardship of the voice and of the likeness. And I think it really goes back to kind of really fundamental questions like who owns a voice? What does vocal sovereignty mean?

These are kind of huge questions because in a way a voice is inherently communal. I learned how to use my voice by mimicking the people around me through language, through centuries of evolution on that, or even vocal styles. A pop music vocal is often you're kind of emulating something that came before and then performing your individuality through that kind of communal voice. So I wanted to find a way to kind of reflect that communal ownership and that's why we decided to set up the DAO to kind of steward it as a community, essentially.

I saw on Twitter Professor Aaron Wright, he described DAOs as, "Subreddits with bank accounts and governance that can encourage coordination rather than shit posting and mobs." So how did you choose the different stewards that make up the DAO?

That's a really good question. And it's kind of an ongoing thing that's evolving. It's easy to say, "We figured out the DAO and it's all set up and ready to go." It's actually this thing that's kind of in process and we're working through the mechanics of that as we're going. It's also something that's kind of in real-time unfolding in terms of legal structures around that. I mean, Aaron, who you mentioned, he was part of the open law team that passed legislation in Wyoming recently to allow DAOs to be legally recognized entities, kind of like an LLC, because there's all kinds of, really boring to most people probably, complications around if a group of people ingest funds, who is kind of liable for tax for the XYZ? So there's all kinds of kind of regulatory frameworks that have to come together in order to make this kind of a viable thing.

And Aaron's done a lot of the really heavy lifting on making some of that stuff come about. In terms of our specific DAO, we're starting it out with just me and Matt. We started the project together, and we've also invited in our management team from RVNG and also Chris and Yotam from Never Before Heard Sounds, who created the voice model with us. As well, we plan on having a kind of gallery that we're working on with Zuora. And so the idea is that people can make works with Holly+ and they can submit those works to the gallery. For the works that are approved or selected, there's kind of a split between the artist and the gallery, the gallery being actually the DAO. And then any artist who presents in that way will also be invited into the DAO. So it's kind of ongoing. There will probably be other ways to onboard onto the DAO as we go, but we're wanting to keep it really simple as we start and not try to put the cart before the horse.

Now, of course, Holly+ is free to use right now for anyone who wants to visit the website. I was hoping you could explain to me how the average listener or consumer of art can discern the difference between an official artwork that's been certified by the DAO versus something that was just uploaded to the website and put into a track or a piece of art?

This is something we had to think about for a long time. It was like, "Do we want to ask people to ask for permission to use it in their tracks to release on Spotify or to upload?" And actually we came to the conclusion that we actually just wanted people to use it. It's not about trying to collect any kind of royalties in that way. I just want people to have fun with it and use it. So in terms of creating works and publishing them, it's completely free and open for anyone to use. We're kind of treating it almost like a VST, like a free VST at this point. So you can use it on anything and it's yours and what you make with it is yours. And you can publish that. And that is 100% yours.

We do have this gallery that we're launching on Zuora. That space is a little bit different in that you can propose a work to the DAO and then the DAO votes on which works we want to include in the gallery. And then those works, there would be a kind of profit split between the DAO and the artists. And basically the funds that are ingested from that, if those works do sell, are basically to go back to producing more tools for Holly+. It's not about trying to make any kind of financial gain, really. It's about trying to continue the development of this project.

Do you have any idea of what those future tools could look like right now?

Well, I don't want to give too much away, but there will be future iterations. So there might be some real-time situations. There might be some plugin situations. There's all kinds of things that we're working on. I mean, I think right now, with this first version, Chris and Yotam have been able to figure out how to transfer polyphonic audio into a model, which is... I'm a major machine learning nerd. So for me, I'm like, "Oh my God, I can't believe you all figured that out." That's been such a difficult thing for people to figure out. Usually people are doing monophonic, just simple one-instrument, monophonic lines. But you can just put in a full track and it will translate it back. And what you get back still does have that kind of machine learning, scratchy kind of neural-net sound to it.

I think because it has that kind of quality it's easier for me to just open up and allow anyone to use that freely. I think as the tools evolve and speech and a more kind of maybe naturalistic likeness to my voice becomes possible, I think that that opens up a whole new set of questions around how that IP should be treated. And I certainly don't have all of the answers. It's definitely something that I'm kind of learning in public, doing and figuring out along the way. But I just see this kind of coming along the horizon and I wanted to try to find, I don't know, cool and interesting and somehow fair ways to try to work this out along the way.

Follow this link:
Holly Herndon on the power of machine learning and developing her digital twin Holly+ - The FADER

Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets – DocWire News

This article was originally published here

Sci Rep. 2021 Jul 23;11(1):15107. doi: 10.1038/s41598-021-94501-0.


COVID-19 outbreak brings intense pressure on healthcare systems, with an urgent demand for effective diagnostic, prognostic and therapeutic procedures. Here, we employed Automated Machine Learning (AutoML) to analyze three publicly available high throughput COVID-19 datasets, including proteomic, metabolomic and transcriptomic measurements. Pathway analysis of the selected features was also performed. Analysis of a combined proteomic and metabolomic dataset led to 10 equivalent signatures of two features each, with AUC 0.840 (CI 0.723-0.941) in discriminating severe from non-severe COVID-19 patients. A transcriptomic dataset led to two equivalent signatures of eight features each, with AUC 0.914 (CI 0.865-0.955) in identifying COVID-19 patients from those with a different acute respiratory illness. Another transcriptomic dataset led to two equivalent signatures of nine features each, with AUC 0.967 (CI 0.899-0.996) in identifying COVID-19 patients from virus-free individuals. Signature predictive performance remained high upon validation. Multiple new features emerged and pathway analysis revealed biological relevance by implication in Viral mRNA Translation, Interferon gamma signaling and Innate Immune System pathways. In conclusion, AutoML analysis led to multiple biosignatures of high predictive performance, with reduced features and large choice of alternative predictors. These favorable characteristics are eminent for development of cost-effective assays to contribute to better disease management.
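The abstract's compact "signatures of two features each" can be pictured with a heavily simplified sketch (synthetic data and a plain logistic regression, not the paper's AutoML tooling): exhaustively score small feature subsets by cross-validated AUC and keep the best-performing pair:

```python
# Illustrative sketch of compact biosignature search: score every two-feature
# subset by cross-validated AUC and keep the best. Synthetic data stands in
# for the proteomic/metabolomic measurements.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
# Only features 3 and 7 carry signal for the (synthetic) severe/non-severe label.
y = ((X[:, 3] + X[:, 7] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

scores = {}
for pair in combinations(range(p), 2):
    # Mean AUC over 5 cross-validation folds, using only this feature pair.
    auc = cross_val_score(LogisticRegression(), X[:, pair], y,
                          cv=5, scoring="roc_auc").mean()
    scores[pair] = auc

best_pair, best_auc = max(scores.items(), key=lambda kv: kv[1])
print(best_pair, round(best_auc, 3))
```

The search correctly recovers the two informative features. Real AutoML systems like the one in the paper automate far more than this (preprocessing, model selection, hyperparameters, and statistically equivalent alternative signatures), but the payoff is the same: a high-AUC predictor built from very few features, which is what makes cheap assays feasible.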

PMID:34302024 | DOI:10.1038/s41598-021-94501-0

View original post here:
Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets - DocWire News