Category Archives: Machine Learning

Cassie the bipedal robot uses machine learning to complete a 5km jog – New Atlas

Four years is a long time in robotics, especially so for a bipedal robot developed at Oregon State University (OSU) named Cassie. Dreamt up as an agile machine to carry packages from delivery vans to doorsteps, Cassie has recently developed an ability to run, something its developers have now shown off by having it complete what they say is the first 5-km (3.1-mi) jog by a bipedal robot.

We first took a look at Cassie the bipedal robot back in 2017, when OSU researchers revealed an ostrich-like machine capable of waddling along at a steady pace. It was based on the team's previously developed Atrias bipedal robot, but featured steering feet and sealed electronics so it could function in rain and snow and navigate outdoor terrain.

The team has since used machine learning to equip Cassie with an impressive new skill: the ability to run. This involved what they call a deep reinforcement learning algorithm, which Cassie combines with its unique biomechanics and knees that bend like an ostrich's to make fine adjustments to keep itself upright when on the move.

"Deep reinforcement learning is a powerful method in AI that opens up skills like running, skipping and walking up and down stairs," says team member Yesh Godse.
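The article doesn't detail the OSU algorithm, but the reinforcement-learning idea it describes (a controller rewarded for every moment it stays upright) can be sketched on a toy balance problem. Everything below is illustrative: the pendulum dynamics, the linear policy, and the random-search optimizer standing in for deep RL are all assumptions, not the team's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(angle, velocity, torque, dt=0.02):
    # Simplified inverted-pendulum dynamics: gravity tips the body, torque corrects.
    velocity += (9.8 * np.sin(angle) + torque) * dt
    angle += velocity * dt
    return angle, velocity

def run_episode(w, max_steps=200):
    # Reward is +1 for every step the body stays within 0.5 rad of upright.
    angle, velocity = rng.normal(0.0, 0.05), 0.0
    reward = 0
    for _ in range(max_steps):
        torque = -(w[0] * angle + w[1] * velocity)  # linear feedback policy
        angle, velocity = step(angle, velocity, torque)
        if abs(angle) > 0.5:  # fell over
            break
        reward += 1
    return reward

# Crude "training" loop: keep random parameter perturbations that balance longer.
# Deep RL replaces this with gradient updates to a neural-network policy.
w, best = np.zeros(2), 0
for _ in range(200):
    candidate = w + rng.normal(0.0, 1.0, size=2)
    score = run_episode(candidate)
    if score > best:
        w, best = candidate, score

print("best episode length:", best)
```

In the real system the linear policy and random search give way to a deep neural network trained with reinforcement learning in simulation, but the reward-driven loop has the same shape.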

Running robots are of course nothing new. Honda's ASIMO robot has been jogging along at speeds of up to 6 km/h (3.7 mph) since 2004, and in 2011 we looked at a machine called Mabel with a peak pace of 10.9 km/h (6.8 mph), which was billed as the world's fastest bipedal robot with knees. More recently, the Atlas humanoid robot from Boston Dynamics has wowed us not just by running through the woods, but also by performing backflips and parkour.

The OSU team was keen to show off Cassie's endurance by having it use its machine learning algorithms to maintain balance across a 5-km run around the university campus, untethered and on a single battery charge. It wasn't all smooth sailing, with Cassie falling down twice due to an overheated computer and a high-speed turn gone wrong. But following a couple of resets, the run was completed in a total time of 53 minutes and 3 seconds.

"Cassie is a very efficient robot because of how it has been designed and built, and we were really able to reach the limits of the hardware and show what it can do," said Jeremy Dao, a Ph.D. student in the Dynamic Robotics Laboratory.

According to the researchers, this is the first time a bipedal robot has finished a 5-km run, albeit at walking speed and with a little help along the way. It is possible that other bipedal robots are capable of covering such distances, but it is also possible that no one has thought to try. Either way, the run is an impressive demonstration of the progress being made by the team. Check it out below.

OSU Bipedal Robot First to Run 5K

Source: Oregon State University

See original here:
Cassie the bipedal robot uses machine learning to complete a 5km jog - New Atlas

Global AI in Information and Communications Technology (ICT) Report 2021: AI and Cognitive Computing in Communications, Applications, Content, and…

DUBLIN--(BUSINESS WIRE)--The "AI in Information and Communications Technology 2021-2026: AI and Cognitive Computing in Communications, Applications, Content, and Commerce" report has been added to ResearchAndMarkets.com's offering.

This report assesses AI in the ICT ecosystem, including technologies, solutions, and players. Application areas covered include marketing and business decision making, workplace automation, predictive analysis and forecasting, and fraud detection and mitigation.

The report provides detailed forecasts globally, regionally, and across market segments from 2021 to 2026. It also covers AI subset technologies, AI embedded in other technologies, and cognitive computing in key industry verticals.

While the opportunities for artificial intelligence in the information and communications technology industry are virtually limitless, we focus on a few key opportunities, including AI in big data, chatbots, chipsets, cybersecurity, IoT, and smart machines and robotics. AI is poised to fundamentally reshape the ICT industry as technologies such as machine learning, natural language processing, and deep learning mature.

AI will dramatically enhance the performance of communications, apps, content, and digital commerce. It will also drive new business models and create entirely new business opportunities as interfaces and efficiencies facilitate engagement that was heretofore unattainable.

Many other industry verticals will be transformed through this evolution, as ICT and digital technologies support many aspects of industry operations, including supply chains, sales and marketing processes, product and service delivery, and support models.

For example, we see particularly substantial impacts on the medical and bioinformatics as well as financial services segments. Workforce automation will affect many different industry verticals as AI enhances workflows and processes and accelerates the ROI for smart workplace investments.

Key Topics Covered:

1 Executive Summary

2 Introduction

2.1 Artificial Intelligence Overview

2.1.1 Intelligent Software Agent

2.1.2 Problem Solving

2.1.3 Practical Approaches to AI

2.2 Machine Learning

2.2.1 Supervised Learning

2.2.2 Unsupervised Learning

2.2.3 Semi-Supervised Learning

2.2.4 Reinforcement Learning

2.3 Deep Learning

2.3.1 Artificial Neural Networks

2.3.2 Artificial Neural Network Deployment

2.4 Cognitive Computing

2.5 AI Algorithms in Applications

2.5.1 Natural Language Processing

2.5.2 Machine Perception

2.5.3 Data Mining

2.5.4 Motion and Manipulation

2.6 Limitations and Challenges for AI Expansion

2.7 AI in Information and Communications Technology Industry

2.7.1 AI Market Drivers in ICT

2.7.2 Key AI Opportunities in ICT

2.7.2.1 Artificial Intelligence and Big Data

2.7.2.2 Artificial Intelligence in Chatbots and Virtual Private Assistants

2.7.2.3 Artificial Intelligence in Chipsets and Microelectronics

2.7.2.4 Artificial Intelligence and Cybersecurity

2.7.2.5 Artificial Intelligence and Internet of Things

2.7.2.6 Artificial Intelligence in Network Management and Optimization

2.7.2.7 Artificial Intelligence in Smart Machines and Robotics

3 AI Intellectual Property Leadership by Country and Company

3.1 Global AI Patents

3.2 AI Patents by Leading Countries

3.3 Global Machine Learning Patents

3.4 Machine Learning Patents by Leading Countries

3.5 Machine Learning Patents by Leading Companies

3.6 Global Deep Learning Patents

3.7 Deep Learning Patents by Leading Countries

3.8 Global Cognitive Computing Patents

3.9 Cognitive Computing Patents by Leading Countries

3.10 AI and Cognitive Computing Innovation Leadership

4 AI in ICT Market Analysis and Forecasts 2021-2026

4.1 Global Markets for AI 2021-2026

4.2 Global Market for AI by Segment 2021-2026

4.3 Regional Markets for AI 2021-2026

4.4 AI Market by Key Application Area 2021-2026

4.4.1 AI Markets for Predictive Analysis and Forecasting 2021-2026

4.4.2 AI Market for Marketing and Business Decision Making 2021-2026

4.4.3 AI Market for Fraud Detection and Classification 2021-2026

4.4.4 AI Market for Workplace Automation 2021-2026

5 AI in Select Industry Verticals

5.1 Market for AI by Key Industry Verticals 2021-2026

5.1.1 AI Market for Internet-related Services and Products 2021-2026

5.1.2 AI Market for Telecommunications 2021-2026

5.1.3 AI Market for Medical and Bioinformatics 2021-2026

5.1.4 AI Market for Financial Services 2021-2026

5.1.5 AI Market for Manufacturing and Heavy Industries 2021-2026

5.2 AI in other Industry Verticals

6 AI in Major Market Segments

6.1 AI Market by Product Segment 2021-2026

6.2 Market for Embedded AI within other Technologies 2021-2026

6.2.1 AI Algorithms in Data Mining 2021-2026

6.2.2 AI in Machine Perception Technology 2021-2026

6.2.3 Market for AI Algorithms in Pattern Recognition Technology 2021-2026

6.2.4 Market for AI Algorithms in Intelligent Decision Support Systems Technology 2021-2026

6.2.5 Market for AI Algorithms in Natural Language Processing Technology 2021-2026

7 Important Corporate AI M&A

7.1 Apple Inc.

7.2 Facebook

7.3 Google

7.4 IBM

7.5 Microsoft

8 AI in ICT Use Cases

8.1 Verizon Uses AI and Machine Learning To Improve Performance

8.2 Deutsche Telekom Uses AI

8.3 H2O.ai Use-cases in Telecommunications powered by AI

8.4 KDDI R&D Laboratories Inc., AI-assisted Automated Network Operation System

8.5 Telefonica AI Use Cases

8.6 Brighterion AI, Worldpay Use cases

9 AI in ICT Vendor Analysis

9.1 IBM Corporation

9.1.1 Company Overview

9.1.2 Recent Developments

9.2 Intel Corporation

9.3 Microsoft Corporation

9.4 Google Inc.

9.5 Baidu Inc.

9.6 H2O.ai

9.7 Hewlett Packard Enterprise

9.8 Apple Inc.

9.9 General Electric

Read more:
Global AI in Information and Communications Technology (ICT) Report 2021: AI and Cognitive Computing in Communications, Applications, Content, and...

Machine Learning for Cardiovascular Disease Improves When Social, Environmental Factors Are Included – NYU News

Research emphasizes the need for algorithms that incorporate community-level data, studies that include more diverse populations

Machine learning can accurately predict cardiovascular disease and guide treatment, but models that incorporate social determinants of health better capture risk and outcomes for diverse groups, finds a new study by researchers at New York University's School of Global Public Health and Tandon School of Engineering. The article, published in the American Journal of Preventive Medicine, also points to opportunities to improve how social and environmental variables are factored into machine learning algorithms.

Cardiovascular disease is responsible for nearly a third of all deaths worldwide and disproportionately affects lower socioeconomic groups. Increases in cardiovascular disease and deaths are attributed, in part, to social and environmental conditions, also known as social determinants of health, that influence diet and exercise.

"Cardiovascular disease is increasing, particularly in low- and middle-income countries and among communities of color in places like the United States," said Rumi Chunara, associate professor of biostatistics at NYU School of Global Public Health and of computer science and engineering at NYU Tandon School of Engineering, as well as the study's senior author. "Because these changes are happening over such a short period of time, it is well known that our changing social and environmental factors, such as increased processed foods, are driving this change, as opposed to genetic factors, which would change over much longer time scales."

Machine learning, a type of artificial intelligence used to detect patterns in data, is being rapidly developed in cardiovascular research and care to predict disease risk, incidence, and outcomes. Already, statistical methods are central in assessing cardiovascular disease risk and U.S. prevention guidelines. Developing predictive models gives health professionals actionable information by quantifying a patient's risk and guiding the prescription of drugs or other preventive measures.

Cardiovascular disease risk is typically computed using clinical information, such as blood pressure and cholesterol levels, but these calculations rarely take social determinants, such as neighborhood-level factors, into account. Chunara and her colleagues sought to better understand how social and environmental factors are beginning to be integrated into machine learning algorithms for cardiovascular disease: what factors are considered, how they are being analyzed, and what methods improve these models.

"Social and environmental factors have complex, non-linear interactions with cardiovascular disease," said Chunara. "Machine learning can be particularly useful in capturing these intricate relationships."

The researchers analyzed existing research on machine learning and cardiovascular disease risk, screening more than 1,600 articles and ultimately focusing on 48 peer-reviewed studies published in journals between 1995 and 2020.

They found that including social determinants of health in machine learning models improved the ability to predict cardiovascular outcomes like rehospitalization, heart failure, and stroke. However, these models did not typically include the full list of community-level or environmental variables that are important in cardiovascular disease risk. Some studies did include additional factors such as income, marital status, social isolation, pollution, and health insurance, but only five studies considered environmental factors such as the walkability of a community and the availability of resources like grocery stores.
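The improvement the reviewed studies report can be illustrated with a minimal sketch: fit the same classifier with and without community-level features and compare AUC. The data below is synthetic, generated so that the social-determinant columns carry real signal, so the exact numbers are meaningless; only the evaluation pattern mirrors the studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
clinical = rng.normal(size=(n, 3))  # stand-ins for blood pressure, cholesterol, age
social = rng.normal(size=(n, 2))    # stand-ins for neighborhood income, walkability

# Outcome depends on both clinical and social columns, plus noise.
logits = clinical @ np.array([0.8, 0.5, 0.3]) + social @ np.array([0.9, 0.7])
y = (logits + rng.normal(size=n) > 0).astype(int)

X_full = np.hstack([clinical, social])
Xc_tr, Xc_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    clinical, X_full, y, test_size=0.3, random_state=0)

auc_clinical = roc_auc_score(
    y_te, LogisticRegression().fit(Xc_tr, y_tr).predict_proba(Xc_te)[:, 1])
auc_full = roc_auc_score(
    y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])

print(f"clinical only:            AUC {auc_clinical:.3f}")
print(f"with social determinants: AUC {auc_full:.3f}")
```

On data where community-level variables genuinely drive outcomes, the clinical-only model leaves that signal on the table, which is the gap the study's authors argue real models are missing.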

The researchers also noted the lack of geographic diversity in the studies, as the majority used data from the United States, countries in Europe, and China, neglecting many parts of the world experiencing increases in cardiovascular disease.

"If you only do research in places like the United States or Europe, you'll miss how social determinants and other environmental factors related to cardiovascular risk interact in different settings, and the knowledge generated will be limited," said Chunara.

"Our study shows that there is room to more systematically and comprehensively incorporate social determinants of health into cardiovascular disease statistical risk prediction models," said Stephanie Cook, assistant professor of biostatistics at NYU School of Global Public Health and a study author. "In recent years, there has been a growing emphasis on capturing data on social determinants of health, such as employment, education, food, and social support, in electronic health records, which creates an opportunity to use these variables in machine learning studies and further improve the performance of risk prediction, particularly for vulnerable groups."

"Including social determinants of health in machine learning models can help us to disentangle where disparities are rooted and bring attention to where in the risk structure we should intervene," added Chunara. "For example, it can improve clinical practice by helping health professionals identify patients in need of referral to community resources like housing services, and broadly reinforces the intricate synergy between the health of individuals and our environmental resources."

In addition to Chunara and Cook, study authors include Yuan Zhao, Erica Wood, and Nicholas Mirin, students at the NYU School of Global Public Health. The research was supported by funding from the National Science Foundation (IIS-1845487).

About the NYU School of Global Public Health

At the NYU School of Global Public Health (NYU GPH), we are preparing the next generation of public health pioneers with the critical thinking skills, acumen, and entrepreneurial approaches necessary to reinvent the public health paradigm. Devoted to employing a nontraditional, interdisciplinary model, NYU GPH aims to improve health worldwide through a unique blend of global public health studies, research, and practice. The School is located in the heart of New York City and extends to NYU's global network on six continents. Innovation is at the core of our ambitious approach, thinking and teaching. For more, visit http://publichealth.nyu.edu/.

About the New York University Tandon School of Engineering

The NYU Tandon School of Engineering dates to 1854, the founding date for both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute (widely known as Brooklyn Poly). A January 2014 merger created a comprehensive school of education and research in engineering and applied sciences, rooted in a tradition of invention and entrepreneurship and dedicated to furthering technology in service to society. In addition to its main location in Brooklyn, NYU Tandon collaborates with other schools within NYU, one of the country's foremost private research universities, and is closely connected to engineering programs at NYU Abu Dhabi and NYU Shanghai. It operates Future Labs focused on start-up businesses in downtown Manhattan and Brooklyn and an award-winning online graduate program. For more information, visit http://engineering.nyu.edu.

Here is the original post:
Machine Learning for Cardiovascular Disease Improves When Social, Environmental Factors Are Included - NYU News

Holly Herndon on the power of machine learning and developing her digital twin Holly+ – The FADER

The FADER: Holly Herndon, thank you for joining us today for The FADER interview.

Holly Herndon: Thanks for having me.

So Holly+ has been live for about 24 hours. How have you felt about its reception so far?

Honestly, I've been really super pleased with it. I think at one point there were 10 hits to the server a second. So that means people were kind of going insane uploading stuff and that's basically what I wanted to happen. So I've been really, really happy with it. I also am happy with people kind of understanding that it's like, this is still kind of a nascent tech, so it's not a perfect rendition of my voice, but it's still, I think, a really interesting and powerful tool. And I think most people really got that. So I've been really pleased.

That's one of the things that drew me to Holly+ when I first read about it in a press release was that it seems like the technology is being developed specifically for this time and it is nascent and it is sort of still growing, but it feels like an attempt to get in on the ground floor of something that is already happening in a lot of different sectors of technology.

I mean, I've been working with, I like to say machine learning rather than artificial intelligence, because I feel like artificial intelligence is just such a loaded term. People imagine kind of like Skynet, it's kind of sentient. So I'm going to use machine learning for this conversation. But I've been working with machine learning for several years now. I mean the last album that I made, PROTO, I was creating kind of early models of my voice and also the voices of my ensemble members and trying to create kind of a weird hybrid ensemble where people are singing with models of themselves. So it's been going on for a while and of course machine learning has been around for decades, but there have been some really interesting kind of breakthroughs in the last several years that I think is why you see so much activity in this space now.

It's just so much more powerful now, I think, than it was a couple decades back. We had some really interesting style-transfer white papers that were released. And so I think it's an exciting time to be involved with it. And I was really wanting to release a public version of kind of a similar technique that I was using on my album that people could just play with and have fun with. And I was actually just kind of reaching out to people on social media. And Yotam sent me back a video of one of my videos, but he had translated it into kind of an orchestral passage. And he was like, "I'm working on this exact thing right now." So it was perfect timing. And so we kind of linked up and started working on this specific Holly+ model.

Talk to me a little bit about some of those really powerful developments in machine learning that have informed Holly+.

Oh gosh. I mean, there's a whole history to go into. But I guess a lot of the research that was happening previously is a lot of people were using MIDI. Basically kind of trying to analyze MIDI scores to create automatic compositions in the style of an existing composer or a combination of existing composers. And I found that to be not so interesting. I'm really interested in kind of the sensual quality of audio itself. And I feel like so much is lost in a MIDI file. So much is lost in a score even. It's like the actual audio is material. I find really interesting. And so when some of the style transfers started to happen and we could start to deal with audio material rather than kind of a representation of audio through MIDI or through score material, that's when things I think got to be really interesting.

So you could imagine if you could do a style transfer of kind of any instrument onto any other instrument. Some of the really unique musical decisions that one might make as a vocalist or as a trombonist or as a guitarist, they're very different kind of musical decisions that you make depending on the kind of physical instrument that you're playing. And if you can kind of translate that to another instrument that would never make those kinds of same decisions, and I'm giving the instrument sentience here, but you know what I mean. Some of the kinds of decisions that a musician would make playing a specific instrument, if you can translate that onto others, I find that a really interesting kind of new way to make music and to find expression through sound generation. And I do think it is actually new for that reason.
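Herndon's contrast between MIDI and raw audio is easy to make concrete: a MIDI note is a handful of symbolic values, while the same one-second note as audio is tens of thousands of samples carrying timbre, attack, and breath, the "material" an audio-domain model can transform. A toy comparison (the sine wave is just a placeholder for a real recording):

```python
import numpy as np

# The note A4 as MIDI-style symbolic data: three numbers.
midi_note = {"pitch": 69, "velocity": 90, "duration_s": 1.0}

# The same one-second A4 as raw audio: tens of thousands of samples.
sr = 44100                                   # audio sample rate (Hz)
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # a pure tone; a real voice adds timbre

print(len(midi_note), "symbolic fields vs", audio.size, "audio samples")
```

A model operating on the audio itself has access to everything a score or MIDI file throws away, which is why the style-transfer breakthroughs Herndon mentions only became interesting once they moved to raw audio.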

I also wanted to talk a little bit about some of the ethical discussions around machine learning and some of the developments that have happened over the past year. Of course, the last time we spoke, it was about an AI-generated version of a Travis Scott song, created from his music without his consent. And over the past year as well, a Korean company has managed to develop an AI based on a deceased popular singer, and an entire reality show was created around that, something called AI vs. Human. So I was wondering if these sorts of developments in this sphere informed how you approached Holly+ and the more managerial aspects of how you wanted to present it to the world.

This is something that I think about quite a lot. I think that voice models, or even kind of physical likeness models or kind of style emulation, I think it opens up a whole new kind of question for how we deal with IP. I mean, we've been able to kind of re-animate our dead through moving picture or through samples, but this is kind of a brand new kind of field in that you can have the person do something that they never did. It's not just kind of replaying something that they've done in the past. You can kind of re-animate them in and give them entirely new phrases that they may not have approved of in their lifetime or even for living artists that they might not approve of. So I think it opens up a kind of Pandora's box.

And I think we're kind of already there. I mean if you saw the Jukebox project, which was super impressive. I mean, they could start with a kind of known song and then just kind of continue the song with new phrases and new passages and in a way that kind of fit the original style. It's really powerful. And we see some of the really convincing Tom Cruise deep fakes and things. These are kind of part of, I think, our new reality. So I kind of wanted to jump in front of that a little bit. There's kind of different ways that you could take it. You could try to be really protective over yourself and your likeness. And we could get into this kind of IP war where you're just kind of doing take downs all the time and trying to hyper control what happens with your voice or with your likeness.

And I think that that is going to be a really difficult thing for most people to do, unless you kind of have a team of lawyers, which I'm sure that's probably already happening with people who do have teams of lawyers. But I think the more interesting way to do it is to kind of open it up and let people play with it and have fun with it and experiment. But then if people want to have kind of an officially approved version of something, then that would go through myself and my collective, which is represented through a DAO. And we can kind of vote together on the stewardship of the voice and of the likeness. And I think it really goes back to kind of really fundamental questions like who owns a voice? What does vocal sovereignty mean?

These are kind of huge questions because in a way a voice is inherently communal. I learned how to use my voice by mimicking the people around me through language, through centuries of evolution on that, or even vocal styles. A pop music vocal is often you're kind of emulating something that came before and then performing your individuality through that kind of communal voice. So I wanted to find a way to kind of reflect that communal ownership and that's why we decided to set up the DAO to kind of steward it as a community, essentially.

I saw on Twitter that Professor Aaron Wright described DAOs as, "Subreddits with bank accounts and governance that can encourage coordination rather than shit posting and mobs." So how did you choose the different stewards that make up the DAO?

That's a really good question. And it's kind of an ongoing thing that's evolving. It's easy to say, "We figured out the DAO and it's all set up and ready to go." It's actually this thing that's kind of in process and we're working through the mechanics of that as we're going. It's also something that's kind of in real-time unfolding in terms of legal structures around that. I mean, Aaron, who you mentioned, he was part of the open law team that passed legislation in Wyoming recently to allow DAOs to be legally recognized entities, kind of like an LLC, because there's all kinds of, really boring to most people probably, complications around if a group of people ingest funds, who is kind of liable for tax for the XYZ? So there's all kinds of kind of regulatory frameworks that have to come together in order to make this kind of a viable thing.

And Aaron's done a lot of the really heavy lifting on making some of that stuff come about. In terms of our specific DAO, we're starting it out me and Matt. We started the project together and we've also invited in our management team from RVNG and also Chris and Yotam from Never Before Heard Sounds who created the voice model with us. And as well, we plan on having a kind of gallery that we're working on with Zora. And so the idea is that people can make works with Holly+ and they can submit those works to the gallery. And the works that are approved or selected, then there's kind of a split between the artist and the gallery, the gallery being actually the DAO. And then any artist who presents in that way will also be invited into the DAO. So it's kind of ongoing. There will probably be other ways to onboard onto the DAO as we go, but we're wanting to keep it really simple as we start and not try to put the cart before the horse.

Now, of course, Holly+ is free to use right now for anyone who wants to visit the website. I was hoping you could explain to me how the average listener or a consumer of art can discern the difference between an official artwork that's been certified by the DAO versus something that was just uploaded to the website and turned into a track or a piece of art?

This is something we had to think about for a long time. It was like, "Do we want to ask people to ask for permission to use it in their tracks to release on Spotify or to upload?" And actually we came to the conclusion that we actually just wanted people to use it. It's not about trying to collect any kind of royalties in that way. I just want people to have fun with it and use it. So in terms of creating works and publishing them, it's completely free and open for anyone to use. We're kind of treating it almost like a VST, like a free VST at this point. So you can use it on anything and it's yours and what you make with it is yours. And you can publish that. And that is 100% yours.

We do have this gallery that we're launching on Zora. That space is a little bit different in that you can propose a work to the DAO and then the DAO votes on which works we want to include in the gallery. And then those works, there would be a kind of profit split between the DAO and the artists. And basically the funds that are ingested from that, if those works do sell, are basically to go back to producing more tools for Holly+. It's not about trying to make any kind of financial gain, really. It's about trying to continue the development of this project.

Do you have any idea of what those future tools could look like right now?

Well, I don't want to give too much away, but there will be future iterations. So there might be some real-time situations. There might be some plugin situations. There's all kinds of things that we're working on. I mean, I think right now this first version, Chris and Yotam have been able to figure out how to transfer polyphonic audio into a model, which is... I'm a major machine learning nerd. So for me, I'm like, "Oh my God, I can't believe you all figured that out." That's been such a difficult thing for people to figure out. Usually people are doing monophonic, just simple one instrument, monophonic lines. But you can just put in a full track and it will translate it back. And what you get back still does have that kind of machine learning, scratchy kind of neural net sound to it.

I think because it has that kind of quality it's easier for me to just open up and allow anyone to use that freely. I think as the tools evolve and speech and a more kind of maybe naturalistic likeness to my voice becomes possible, I think that that opens up a whole new set of questions around how that IP should be treated. And I certainly don't have all of the answers. It's definitely something that I'm kind of learning in public, doing and figuring out along the way. But I just see this kind of coming along the horizon and I wanted to try to find, I don't know, cool and interesting and somehow fair ways to try to work this out along the way.

Follow this link:
Holly Herndon on the power of machine learning and developing her digital twin Holly+ - The FADER

How Olympic Surfing Is Trying to Ride the Machine Learning Wave – The Wall Street Journal

TOKYO – South African surfer Bianca Buitendag uses some apps and websites to gauge wind and wave conditions before she competes, but she doesn't consider surfing a high-tech sport. It's mostly about trying to gauge the weather.

"That's about it," she said this week.

Carissa Moore, who on Tuesday faced off with Buitendag for the sport's first-ever Olympic gold medal, takes a different approach. She loads up on performance analytics, wave pools and science. The American, who beat Buitendag by nearly 6.5 points to win the gold medal on Tuesday, has competed on artificial waves and uses technology such as a wearable ring that tracks her sleep and other vitals to help her coaches fine-tune her training and recovery.

Their different approaches go to the heart of a long-running tension in surfing: dueling images of the spiritual, naturalist wave rider versus the modern, techie athlete.

"There's this illusion that you're trying to sustain, even if you're aware of all the stuff that's gone into [surfing]," said Peter Westwick, a University of Southern California surfing historian. He's talking about the use of advanced polymer-chemistry-enabled products in surfboards and wetsuits, and complex weather modeling that helps govern where and how competitions like this Olympic event are held. The tech has roots in military research and development, he said.

See the rest here:
How Olympic Surfing Is Trying to Ride the Machine Learning Wave - The Wall Street Journal

Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets – DocWire News


Sci Rep. 2021 Jul 23;11(1):15107. doi: 10.1038/s41598-021-94501-0.

ABSTRACT

The COVID-19 outbreak brings intense pressure on healthcare systems, with an urgent demand for effective diagnostic, prognostic and therapeutic procedures. Here, we employed Automated Machine Learning (AutoML) to analyze three publicly available high throughput COVID-19 datasets, including proteomic, metabolomic and transcriptomic measurements. Pathway analysis of the selected features was also performed. Analysis of a combined proteomic and metabolomic dataset led to 10 equivalent signatures of two features each, with AUC 0.840 (CI 0.723-0.941) in discriminating severe from non-severe COVID-19 patients. A transcriptomic dataset led to two equivalent signatures of eight features each, with AUC 0.914 (CI 0.865-0.955) in identifying COVID-19 patients from those with a different acute respiratory illness. Another transcriptomic dataset led to two equivalent signatures of nine features each, with AUC 0.967 (CI 0.899-0.996) in identifying COVID-19 patients from virus-free individuals. Signature predictive performance remained high upon validation. Multiple new features emerged and pathway analysis revealed biological relevance by implication in Viral mRNA Translation, Interferon gamma signaling and Innate Immune System pathways. In conclusion, AutoML analysis led to multiple biosignatures of high predictive performance, with reduced features and a large choice of alternative predictors. These favorable characteristics make them well suited to the development of cost-effective assays that contribute to better disease management.
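The paper's two-feature "signatures" suggest the general shape of the search: score every small feature subset by cross-validated AUC and keep the best. The sketch below does this exhaustively over feature pairs on synthetic data; it illustrates the signature idea only, not the authors' AutoML pipeline, and the feature indices and effect sizes are made up.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_features = 120, 20
X = rng.normal(size=(n_samples, n_features))  # stand-in for omics measurements

# Make features 3 and 7 genuinely predictive of "severity"; the rest are noise.
y = (X[:, 3] - 0.8 * X[:, 7] + rng.normal(scale=0.8, size=n_samples) > 0).astype(int)

def signature_auc(idx):
    # Cross-validated AUC of a simple classifier restricted to this signature.
    clf = LogisticRegression()
    return cross_val_score(clf, X[:, idx], y, cv=5, scoring="roc_auc").mean()

# Exhaustive scan over all two-feature signatures.
scores = {pair: signature_auc(list(pair)) for pair in combinations(range(n_features), 2)}
best_pair, best_auc = max(scores.items(), key=lambda kv: kv[1])

print("best signature:", best_pair, "AUC:", round(best_auc, 3))
```

Real AutoML systems search far larger signature and model spaces with smarter heuristics, but the output has the same character the abstract reports: a small set of features with a cross-validated AUC attached.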

PMID:34302024 | DOI:10.1038/s41598-021-94501-0
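The abstract reports discrimination performance as AUC values with confidence intervals. As a minimal, hedged sketch (this is not the authors' AutoML pipeline, and all data below are toy values), the AUC of a candidate signature's risk scores can be computed via the Mann-Whitney pairwise formulation, with a percentile-bootstrap confidence interval:

```python
import random

def auc(scores_pos, scores_neg):
    """Probability that a positive case scores higher than a negative one
    (the Mann-Whitney formulation of the ROC AUC); ties count as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_ci(scores_pos, scores_neg, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = [rng.choice(scores_pos) for _ in scores_pos]
        bn = [rng.choice(scores_neg) for _ in scores_neg]
        stats.append(auc(bp, bn))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy model scores for severe (positive) vs. non-severe (negative) patients.
severe = [0.9, 0.8, 0.85, 0.6, 0.7, 0.95]
non_severe = [0.3, 0.5, 0.4, 0.65, 0.2, 0.35]
point = auc(severe, non_severe)
lo, hi = bootstrap_ci(severe, non_severe)
print(f"AUC = {point:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With only a dozen samples the bootstrap interval is wide, which is why the paper's reported CIs, derived from much larger cohorts, are the more meaningful figure.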

View original post here:
Automated machine learning optimizes and accelerates predictive modeling from COVID-19 high throughput datasets - DocWire News

Will Roche's Stock Rebound After A 4% Fall Following Its H1 Results? – Forbes

UKRAINE - 2019/03/28: In this photo illustration, a Roche Holding AG logo is seen displayed on a smartphone. (Photo illustration by Igor Golovniov/SOPA Images/LightRocket via Getty Images)

The stock price of Roche Holding's ADR reached an all-time high of $48 just last week, before a recent sell-off that led to an over 4% drop, to current levels around $46. Much of this fall came late last week after the company announced its H1 results. Roche reported sales of 30.7 billion Swiss francs, beating street estimates of 29.9 billion Swiss francs, led by continued growth in its diagnostics business, courtesy of the company's Covid-19 tests. However, the company also cautioned that diagnostics sales will decline as Covid-19 testing demand falls going forward. Furthermore, despite a solid H1, the company didn't revise its guidance upward. All of these factors drove the stock price move for Roche.

Now, after a 4% fall in a week, will RHHBY stock continue its downward trajectory over the coming weeks, or is a recovery imminent? According to the Trefis Machine Learning Engine, which identifies trends in the company's stock price using ten years of historical data, returns for RHHBY stock average 3% in the next one-month (twenty-one trading day) period after the stock experiences a 4% drop over the previous week (five trading days), implying that RHHBY stock could see higher levels from here. Also, Gantenerumab, Roche's treatment for Alzheimer's disease, remains an important trigger for the company going forward. Roche is in discussions with the U.S. FDA, and if the drug is approved, it will be set to become yet another blockbuster for Roche.

But how would these numbers change if you are interested in holding RHHBY stock for a shorter or longer time period? You can test the answer, and many other combinations, on the Trefis Machine Learning Engine to gauge Roche stock's chances of a rise after a fall. You can test the chance of recovery over different time intervals: a quarter, a month, or even just one day.

MACHINE LEARNING ENGINE: try it yourself

IF RHHBY stock moved by -5% over five trading days, THEN over the next twenty-one trading days RHHBY stock moves an average of 3%, with a good 67% probability of a positive return over this period.
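A conditional-return rule like the one above can be expressed as a short backtest. The sketch below is an illustration, not the Trefis engine: the 5-day window, 21-day horizon, and -5% trigger mirror the stated rule, but the price series is a toy example.

```python
def conditional_avg_return(prices, window=5, horizon=21, trigger=-0.05):
    """Average forward return over `horizon` trading days, measured only at
    points where the trailing `window`-day return has fallen below `trigger`."""
    forward = []
    for t in range(window, len(prices) - horizon):
        past = prices[t] / prices[t - window] - 1.0
        if past <= trigger:
            forward.append(prices[t + horizon] / prices[t] - 1.0)
    return sum(forward) / len(forward) if forward else None

# Toy daily closes: a sharp one-week drop followed by a partial recovery.
prices = [100.0] * 10 + [94.0] * 22 + [97.0] * 10
avg = conditional_avg_return(prices)
print(f"Average 21-day return after a 5-day drop of 5%+: {avg:.2%}")
```

On real data the interesting question is whether this conditional average differs meaningfully from the unconditional one, which is essentially the comparison the article's Case 1 / Case 2 analysis makes.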

Some Fun Scenarios, FAQs & Making Sense of Roche Stock Movements:

Question 1: Is the average return for Roche stock higher after a drop?

Answer: Consider two situations,

Case 1: Roche stock drops by -5% or more in a week

Case 2: Roche stock rises by 5% or more in a week

Is the average return for Roche stock higher over the subsequent month after Case 1 or Case 2?

RHHBY stock fares better after Case 1, with an average return of 2.9% over the next month (21 trading days) under Case 1 (where the stock has just suffered a 5% loss over the previous week), versus an average return of 0.1% for Case 2.

In comparison, the S&P 500 has an average return of 3.1% over the next 21 trading days under Case 1, and an average return of just 0.5% for Case 2, as shown in our dashboard detailing the average return for the S&P 500 after a fall or rise.

Try the Trefis machine learning engine above to see for yourself how Roche stock is likely to behave after any specific gain or loss over a period.

Question 2: Does patience pay?

Answer: If you buy and hold Roche stock, the expectation is that over time the near-term fluctuations will cancel out and the long-term positive trend will favor you, at least if the company is otherwise strong.

Overall, according to data and the Trefis machine learning engine's calculations, patience absolutely pays for most stocks!

For RHHBY stock, the returns over the next N days after a -5% change over the last 5 trading days are detailed in the table below, along with the returns for the S&P 500:

RHHBY Average Return

You can try the engine to see what this table looks like for Roche after a larger loss over the last week, month, or quarter.

Question 3: What about the average return after a rise if you wait for a while?

Answer: The average return after a rise is understandably lower than after a fall, as detailed in the previous question. Interestingly, though, if a stock has gained over the last few days, you would do better to avoid short-term bets for most stocks.

It's pretty powerful to test the trend for yourself for Roche stock by changing the inputs in the charts above.

While Roche stock looks like it can gain more, 2020 created many pricing discontinuities that can offer attractive trading opportunities. For example, you'll be surprised how counter-intuitive the stock valuation is for Mettler vs. Abbott.

See all Trefis Featured Analyses and Download Trefis Data here

Here is the original post:
Will Roche's Stock Rebound After A 4% Fall Following Its H1 Results? - Forbes

AI Machine Learning Could be Latest Healthcare Innovation – The National Law Review

As we mentioned in the early days of the pandemic, COVID-19 has been accompanied by a rise in cyberattacks worldwide. At the same time, the global response to the pandemic has accelerated interest in the collection, analysis, and sharing of data, specifically patient data, to address urgent issues such as population management in hospitals, diagnosis and detection of medical conditions, and vaccine development, all through the use of artificial intelligence (AI) and machine learning (ML). Typically, AI/ML churns through huge amounts of real-world data to deliver useful results. The collection and use of that data, however, give rise to legal and practical challenges. Numerous and increasingly strict regulations protect the personal information needed to feed AI solutions. The response has been to anonymize patient health data in time-consuming and expensive processes (HIPAA alone requires the removal of 18 types of identifying information). But anonymization is not foolproof, and after stripping data of personally identifiable information, the remaining data may be of limited utility. This is where synthetic data comes in.

A synthetic dataset comprises artificial information that can be used as a stand-in for real data. The artificial dataset can be derived in different ways. One approach starts with real patient data. Algorithms process the real patient data and learn patterns, trends, and individual behaviors. The algorithms then replicate those patterns, trends, and behaviors in a dataset of artificial patients, such that, if done properly, the synthetic dataset has virtually the same statistical properties as the real dataset. Importantly, the synthetic data cannot be linked back to the original patients, unlike some de-identified or anonymized data, which have been vulnerable to re-identification attacks. Other approaches use existing AI models to generate synthetic data from scratch, or combine existing models with real patient data.
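The fit-then-replicate approach described above can be sketched for the simplest case of two jointly Gaussian features: estimate each feature's mean and standard deviation plus their correlation, then draw new records through the Cholesky factor of the 2x2 correlation matrix. This is a minimal illustration under those assumptions, not a production synthetic-data generator (which typically uses richer models such as GANs or copulas), and the "patient" data below are toy values.

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def synthesize(real, n_synth, seed=1):
    """Fit per-feature means/standard deviations and the pairwise correlation
    of a two-feature dataset, then draw synthetic records that reproduce those
    statistics without copying any individual real record."""
    xs = [r[0] for r in real]
    ys = [r[1] for r in real]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    rho = pearson(xs, ys)
    rng = random.Random(seed)
    synth = []
    for _ in range(n_synth):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        # Correlate the second feature via the Cholesky factor of the
        # correlation matrix [[1, rho], [rho, 1]].
        synth.append((mx + sx * z1,
                      my + sy * (rho * z1 + math.sqrt(1 - rho * rho) * z2)))
    return synth

# Toy "real patient" records: two correlated lab measurements.
real = [(1.0, 2.1), (2.0, 2.9), (3.0, 5.2), (4.0, 6.8), (5.0, 8.1)]
synth = synthesize(real, 20000)
```

Each synthetic record is a fresh random draw, so none maps back to a real patient, yet the means, spreads, and correlation of the synthetic set track those of the original, which is precisely the property the article describes.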

While the concept of synthetic data is not new, it has recently been described as a promising solution for healthcare innovation, particularly at a time when secure sharing of patient data has been challenged by lab and office closures. Synthetic data in the healthcare space can be applied flexibly to fit different use cases, and it can be expanded to create more voluminous datasets.

Synthetic data's other reported benefits include the elimination of human bias and the democratization of AI (i.e., making AI technology and the underlying data more accessible). Critically too, regulations governing personal information, such as HIPAA, the EU General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA), may be read to permit the sharing and processing of original patient data (subject to certain obligations) such that the resulting synthetic datasets may carry less privacy risk.

Despite the potential benefits, the creation and use of synthetic data has its own challenges. First, there is the risk that AI-generated data is so similar to the underlying real data that real patients' privacy is compromised. Additionally, the reliability of synthetic data is not yet firmly established. For example, it is reported that no drug developer has yet relied on synthetic data for a submission to the U.S. Food and Drug Administration, because it is not known whether that type of data will be accepted by the FDA. Perhaps most importantly, synthetic data is susceptible to adjustment, for good or ill. On the one hand, dataset adjustments can be used to correct for biases embedded in real datasets. On the other, adjustments can also undermine trust in healthcare and medical research.

As synthetic data platforms proliferate and companies increasingly engage those services to develop innovative solutions, care should be exercised to guard against the potential privacy and reliability risks.

2021 Proskauer Rose LLP. National Law Review, Volume XI, Number 201

Visit link:
AI Machine Learning Could be Latest Healthcare Innovation - The National Law Review

Machine Learning could solve biggest challenges in the world, AWS executive says – The Hindu

AWS executive Kanishka Agiwal shared his thoughts on Machine Learning: its applications, growing adoption in different sectors, and role in the future, as well as AWS's role in building and supporting the ML ecosystem.

Machine Learning (ML) is now powering a wide range of applications in organisations across various industries. ML is accelerating digital transformation and catalysing business processes, and Amazon Web Services (AWS) is one of the leading firms selling automated ML methods and pre-trained models to businesses and developers. In an exclusive interview with The Hindu, Kanishka Agiwal, Head of Service Lines, AISPL for AWS India & South Asia, shared his thoughts on ML: its applications, growing adoption in different sectors, and role in the future, as well as AWS's role in building and supporting the ML ecosystem.

The following transcript has been edited for clarity and brevity.

Earlier, ML technology was limited to a few major tech companies and academic researchers. Things began to change when cloud computing entered the mainstream. Compute power and data became more available, and ML is now making an impact across every industry, be it finance, retail, fashion, real estate, or healthcare. It is moving from the periphery to becoming a core part of every business and industry.

ML is already helping companies make better and faster decisions. When deployed with the right strategies, ML increases agility, streamlines processes, boosts revenue by creating new products and improving existing ones, and enables better, faster decision making. There's no doubt ML and artificial intelligence (AI) can help companies achieve more.

As often happens in a crisis, companies tend to step back and think more strategically about their future operations. We've seen healthcare organisations lean on technology and the cloud to get accurate, trusted information to patients and direct them to the appropriate level of care. Organisations of every size worldwide have been quick to apply their ML expertise in several areas, whether it's scaling customer communications, understanding how COVID-19 spreads, or speeding up research and treatment.

ML is being applied in several areas: ML-enabled chatbots for contactless screening of COVID-19 symptoms and answering queries; ML models that analyse large volumes of data to provide an early warning system for disease outbreaks and identify vulnerable populations; and ML in medical imaging to recognise patterns, derive contextual relationships between genes, diseases, and drugs, and accelerate the discovery of drugs to help treat COVID-19.

I'm inspired and encouraged by the speed at which these organisations are applying ML to address COVID-19. At AWS, we have always believed in the potential of ML to help solve the biggest challenges in our world - and that promise is now coming to fruition as organisations respond to this crisis.

If you take some of the largest sectors, such as agriculture, healthcare, citizen services, and financial inclusion, you'll notice ML at play. In agriculture, ML is playing a part in farm advisory, crop assaying, pest management, and traceability of crops.

Healthcare and life sciences organisations across the globe, from the largest healthcare providers, payers, and IT vendors to niche ISVs, are applying AWS ML services to improve patient outcomes and accelerate decision making. Some of the use cases we are seeing include using ML to accelerate the diagnosis of diseases, improve operational efficiency and the delivery of care, power population health analytics, and aid scientific discovery. In India, Common Service Centres (CSCs) are deploying ML to accelerate the delivery of citizen services.

Increasingly, industrial customers across asset intensive industries such as manufacturing, energy, mining, and automotive are using ML to drive faster and better decisions to help improve operational efficiency, quality, and agility. ML services purpose-built for low latency requirements of industrial environments further remove barriers to industrial digital transformation.

Recently, we announced a collaboration with NITI Aayog to establish a Frontier Technologies Cloud Innovation Centre in India. This will bring together public sector stakeholders, startups, and academia to solve critical societal challenges.

Last year, Atal Innovation Mission, NITI Aayog, collaborated with NASSCOM to launch the ATL AI Step Up Module, with a focus on driving AI education among school students in India. Through AWS Educate, students will be able to gain hands-on practical experience on AWS ML services, including Amazon SageMaker.

In addition, the Indian Chamber of Food and Agriculture adopted AWS Educate to introduce certificate courses for agricultural engineering students. In March this year, we announced the AWS DeepRacer Women's League 2021 in India to help foster community learning, which aims to bring together women to gain hands-on experience with ML.

To meet our customers where they are on their ML journey and help them achieve specific business outcomes, we provide the broadest set of ML and AI services for builders of all levels of expertise. AWS launched more than 250 new capabilities for ML and AI in 2020 alone.

We are building AI Services that allow developers to easily add intelligence to any application without needing ML skills. These services provide ready-made intelligence and workflows to personalise the customer experience, identify and triage anomalies in business metrics, recognise images, and automatically extract meaning from documents.

AWS has also launched end-to-end solutions, which don't require teams to stitch together multiple services themselves.

ML represents a unique opportunity for governments and organisations to leverage public data for social good. From chatbots supporting municipal services to contactless tracing of COVID-19, governments can harness ML to stay close to their citizens through deeper experiences. Organisations can better navigate and utilise data for more strategic and timely decisions if they are equipped with the appropriate technology capabilities. Leveraging ML can result in fewer errors and more timely decision making, enabling organisations to execute initiatives with accuracy and speed.

Public safety can be improved over a range of possibilities, from ensuring safe roads, to preventing cyber-attacks and responding to natural disasters. ML will provide government organisations with the ability to improve safety across and beyond this spectrum. ML can dramatically streamline operational processes. As a result, it can save time, costs, and other resources so that organisations can focus on what is more important.

With ML, organisations can leverage data to develop and scale revolutionary ideas that result in cutting-edge research, far beyond human capabilities.

Link:
Machine Learning could solve biggest challenges in the world, AWS executive says - The Hindu

IoT Machine Learning and Artificial Intelligence Services to Reach US$3.6 Billion in Revenue in 2026 – PRNewswire

LONDON, July 13, 2021 /PRNewswire/ -- The next wave of Internet of Things (IoT) analytics development will fully converge with the Big Data domain. Simultaneously, the value in the technology stack is shifting beyond the hardware and middleware to analytics and value-added services, such as Machine Learning (ML) and Artificial Intelligence (AI). According to global tech market advisory firm ABI Research, ML and AI services within the IoT domain are estimated to grow at a CAGR of nearly 40%, reaching US$3.6 billion in 2026.
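The forecast arithmetic can be checked directly: given an ending value and a CAGR, compound growth implies the starting market size. The 2021 base year and five-year horizon below are assumptions for illustration, since the excerpt does not state ABI's base year.

```python
def implied_base(value_end, cagr, years):
    """Back out the starting market size implied by an ending value and a CAGR."""
    return value_end / (1.0 + cagr) ** years

# ABI's forecast: US$3.6 billion in 2026 at ~40% CAGR. Assuming the CAGR runs
# from a 2021 base over five years (an assumption, not stated in the excerpt),
# the implied 2021 market is roughly US$0.67 billion.
base_2021 = implied_base(3.6e9, 0.40, 5)
print(f"Implied 2021 market: ${base_2021 / 1e9:.2f}B")
```

Put another way, a near-40% CAGR means the market more than quintuples over five years, which is why the dollar figure in 2026 dwarfs today's.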

While COVID-19 impacted many industries, the IoT data analytics market has been less affected. In fact, many newly emerging cloud-native data-enabled analytics vendors have benefited from COVID-19. "Since industries are transitioning to 'remote everything,' out-of-the-box solutions for remote monitoring, asset management, asset visibility, and predictive maintenance are in high demand and exemplify market acceleration. Vendors, such as DataRobot, are now easing access to ML and AI tool sets through different deployment options at the edge, on-premises, and in the cloud, and through consumption via Platform as a Service (PaaS) and Software as a Service (SaaS)," explains Kateryna Dubrova, Research Analyst at ABI Research. "All in all, the COVID-19 pandemic highlighted the importance of rapid-deployment solutions, such as hardware-agnostic SaaS."

Companies like AWS, C3, and Google have also been successful in promoting their products and analytics capabilities (tool sets and environments) by creating centralized repositories for COVID-19 data. Currently, these data lakes are public and are not monetized. However, it is expected that those companies will attempt to use the data lakes to create products for sale to the healthcare market in the future. From a technology perspective, the data lakes could be the first step toward creating and testing data visibility and streaming analytics services. COVID-19 has showcased the public cloud providers' healthcare ambitions, expanding into pharmaceuticals, biomedicine, and telemedicine.

Big data and data analytics might not have a remedy for the virus, but IoT data-enabled technologies proved essential for lessening public anxiety, monitoring patients, and preparing infrastructure for new outbreaks. "AI and ML usage has accelerated during the pandemic; however, greenfield AI projects have seen a significant slowdown. AI and ML in the IoT are at an early adoption stage, and the lack of data-enabled infrastructure prevented rapid adoption of machine learning at the operational level even as COVID-19 accelerated demand," Dubrova concludes.

These findings are from ABI Research's IoT Data-Enabled Services: Value Chain, Companies to Watch, and Cloud Wars application analysis report. This report is part of the company's M2M, IoT & IoE research service, which includes research, data, and analyst insights. Application Analysis reports present an in-depth analysis of key market trends and factors for a specific technology.

About ABI Research

ABI Research provides strategic guidance to visionaries, delivering actionable intelligence on the transformative technologies that are dramatically reshaping industries, economies, and workforces across the world. ABI Research's global team of analysts publish groundbreaking studies often years ahead of other technology advisory firms, empowering our clients to stay ahead of their markets and their competitors.

ABI Research was founded in 1990.

For more information about ABI Research's services, contact us at +1.516.624.2500 in the Americas, +44.203.326.0140 in Europe, +65.6592.0290 in Asia-Pacific or visit http://www.abiresearch.com.

Contact Info:

Global Deborah Petrara Tel: +1.516.624.2558 [emailprotected]

SOURCE ABI Research

http://www.abiresearch.com

Original post:
IoT Machine Learning and Artificial Intelligence Services to Reach US$3.6 Billion in Revenue in 2026 - PRNewswire