Category Archives: Machine Learning

ClearBuds: First wireless earbuds that clear up calls using deep learning – University of Washington

Engineering | News releases | Research | Technology

July 11, 2022

ClearBuds use a novel microphone system and are one of the first machine-learning systems to operate in real time and run on a smartphone. (Raymond Smith/University of Washington)

As meetings shifted online during the COVID-19 lockdown, many people found that chattering roommates, garbage trucks and other loud sounds disrupted important conversations.

This experience inspired three University of Washington researchers, who were roommates during the pandemic, to develop better earbuds. To enhance the speaker's voice and reduce background noise, ClearBuds use a novel microphone system and one of the first machine-learning systems to operate in real time and run on a smartphone.

The researchers presented this project June 30 at the ACM International Conference on Mobile Systems, Applications, and Services.

"ClearBuds differentiate themselves from other wireless earbuds in two key ways," said co-lead author Maruchi Kim, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. "First, ClearBuds use a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker's voice."

While most commercial earbuds also have microphones on each earbud, only one earbud is actively sending audio to a phone at a time. With ClearBuds, each earbud sends a stream of audio to the phone. The researchers designed Bluetooth networking protocols to allow these streams to be synchronized within 70 microseconds of each other.

The team's neural network algorithm runs on the phone to process the audio streams. First it suppresses any non-voice sounds. Then it isolates and enhances the sound that arrives at both earbuds at the same time: the speaker's voice.

"Because the speaker's voice is close by and approximately equidistant from the two earbuds, the neural network can be trained to focus on just their speech and eliminate background sounds, including other voices," said co-lead author Ishan Chatterjee, a doctoral student in the Allen School. "This method is quite similar to how your own ears work. They use the time difference between sounds coming to your left and right ears to determine which direction a sound came from."
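
The researchers' code isn't reproduced in the article, but the binaural cue Chatterjee describes is easy to illustrate. Below is a minimal Python sketch of the time-difference-of-arrival idea, estimating the delay between two microphone channels from the peak of their cross-correlation; the function name, the toy signal and the 16 kHz sample rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def estimate_tdoa(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Estimate the time difference of arrival (seconds) between two
    microphone channels from the peak of their cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    # Convert the argmax index to a lag in samples (zero lag sits at
    # index len(right) - 1 in "full" mode).
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / sample_rate

# Toy usage: the right channel is a copy of the left, delayed 5 samples.
rng = np.random.default_rng(0)
left = rng.standard_normal(1000)
right = np.roll(left, 5)
print(estimate_tdoa(left, right, sample_rate=16_000))  # ~ -0.0003 s (left leads)
```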

Shown here: the ClearBuds hardware (round disk) in front of the 3D-printed earbud enclosures. (Raymond Smith/University of Washington)

When the researchers compared ClearBuds with Apple AirPods Pro, ClearBuds performed better, achieving a higher signal-to-distortion ratio across all tests.
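
Signal-to-distortion ratio is a standard speech-enhancement metric. As a hedged sketch (the paper may use a more elaborate variant, such as scale-invariant SDR), the basic computation compares the energy of a clean reference signal against the energy of the residual error in the enhanced estimate:

```python
import numpy as np

def sdr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Basic signal-to-distortion ratio in dB: reference energy over
    the energy of the estimate's residual error. Higher is better."""
    error = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(error ** 2))
```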

"It's extraordinary when you consider the fact that our neural network has to run in less than 20 milliseconds on an iPhone that has a fraction of the computing power compared to a large commercial graphics card, which is typically used to run neural networks," said co-lead author Vivek Jayaram, a doctoral student in the Allen School. "That's part of the challenge we had to address in this paper: How do we take a traditional neural network and reduce its size while preserving the quality of the output?"
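
The article doesn't say how the team shrank their network, so the following is only a generic illustration of one common size-reduction technique, PyTorch dynamic quantization, applied to a stand-in model rather than the actual ClearBuds network:

```python
import torch
import torch.nn as nn

# A stand-in model; the actual ClearBuds architecture is not given here.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 4))

# Dynamic quantization stores Linear weights as 8-bit integers,
# shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```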

The team also tested ClearBuds in the wild by recording eight people reading from Project Gutenberg in noisy environments, such as a coffee shop or on a busy street. The researchers then had 37 people rate 10- to 60-second clips of these recordings. Participants rated clips that were processed through ClearBuds' neural network as having the best noise suppression and the best overall listening experience.

One limitation of ClearBuds is that people have to wear both earbuds to get the noise suppression experience, the researchers said.

But the real-time communication system developed here can be useful for a variety of other applications, the team said, including smart-home speakers, tracking robot locations or search and rescue missions.

The team is currently working on making the neural network algorithms even more efficient so that they can run on the earbuds.

Additional co-authors are Ira Kemelmacher-Shlizerman, an associate professor in the Allen School; Shwetak Patel, a professor in both the Allen School and the electrical and computer engineering department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School. This research was funded by the National Science Foundation and the University of Washington's Reality Lab.

For more information, contact the team at clearbuds@cs.washington.edu.

Here is the original post:
ClearBuds: First wireless earbuds that clear up calls using deep learning - University of Washington

C3 AI Named a Leader in AI and Machine Learning Platforms – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--C3 AI (NYSE: AI), the Enterprise AI application software company, today announced that Forrester Research has named it a Leader in AI and Machine Learning Platforms in its July 2022 report, The Forrester Wave: AI/ML Platforms, Q3 2022.

"Ahead of its time, C3 AI's strategy is to make AI application-centric by building a growing library of industry solutions, forging deep industry partnerships, running in every cloud, and facilitating extreme reuse through common data models," the report states.

"We are pleased to be recognized as a leader in AI and ML platforms," said Thomas Siebel, C3 AI CEO. "I'm delighted to see C3 AI's significant investments in enterprise AI software be acknowledged. I believe that Forrester Research has made an important contribution, having published the first professional comprehensive analysis of enterprise AI and machine learning platforms," Siebel continued, "changing the dialogue from a focus on disjointed tools to the importance of cohesive enterprise AI platforms. This is certain to accelerate the market adoption of enterprise AI and simplify often protracted decision processes."

Of the 15 vendors in the report, C3 AI received the top ranking in the Strategy category. For the following criteria, C3 AI received:

Download The Forrester Wave: AI and Machine Learning Platforms, Q3 2022 report here.

About C3 AI

C3 AI is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products, including the C3 AI Suite, an end-to-end platform for developing, deploying, and operating enterprise AI applications, and C3 AI Applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally.

Read more from the original source:
C3 AI Named a Leader in AI and Machine Learning Platforms - Business Wire

Automated identification of hip arthroplasty implants using artificial intelligence | Scientific Reports – Nature.com

Study design and radiograph acquisition

After institutional review board approval, we retrospectively collected all radiographs taken between June 1, 2011 and Dec 1, 2020 at one university hospital. The images were collected with Neusoft PACS/RIS Version 5.5 on a personal computer running Windows 10. We confirm that all methods were performed in accordance with the relevant guidelines and regulations. Images were collected from surgeries performed by 3 fellowship-trained arthroplasty surgeons to ensure a variety of implant manufacturers and implant designs. At the time of collection, images had all identifying information removed and were thus de-identified. Implant type was identified through the primary surgery operative note and cross-checked with implant sheets. Implant designs were only included in our analysis if more than 30 images per model were identified [14].

From the medical records of 313 patients, a total of 357 images were included in this analysis.

Although Zimmer and Biomet merged (Zimmer Biomet), these were treated as two distinct manufacturers. The following four designs (each a stem and cup combination) from the four industry-leading manufacturers were included: Biomet Echo Bi-Metric (Zimmer Biomet), Biomet Universal RingLoc (Zimmer Biomet), Depuy Corail (Depuy Synthes), Depuy Pinnacle (Depuy Synthes), LINK Lubinus SP II, LINK Vario cup, and Zimmer Versys FMT and Trilogy (Zimmer Biomet). Implant designs that did not meet the 30-implant threshold were not included. Figure 1 shows an example of cup and stem anteroposterior (AP) radiographs of each included implant design. The four types of implants are denoted as type A, type B, type C, and type D respectively in this paper.

Figure 1. Cup and stem radiographs of each included implant design.

We used convolutional neural network (CNN) algorithms for the classification of hip implants. Our training data consist of anteroposterior (AP) view images of the hips. For each image, we manually cut the image into two parts: the stem and the cup. We trained four CNN models: the first using stem images (stem network), the second using cup images (cup network), the third using the original uncut images (combined network), and the fourth an integration of the stem and cup networks (joint network).

Since the models involve millions of parameters while our data set contained fewer than one thousand images, it was infeasible to train a CNN model from scratch using our data. Therefore, we adopted the transfer learning framework to train our networks [17]. Transfer learning is a paradigm in the machine learning literature that is widely applied in scenarios where the training data is scarce relative to the scale of the model [18]. Under this framework, the model is first initialized to a model pretrained on other data sets that contain enough data for a different but related task. Then, we tune the model on our data set by performing gradient descent (backpropagation) only on the last two layers of the network. Because the number of parameters in the last two layers is comparable with the size of our data set (for the target task), and the parameters in the earlier layers carry over from the pre-trained model, the resulting network can perform satisfactorily on the target task.
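
In PyTorch, this setup might look like the sketch below, which loads torchvision's ImageNet-pretrained ResNet50, freezes the pre-trained weights, and leaves only the final layers trainable. Exactly which layers the paper counts as "the last two" is not specified in this excerpt, so the choice of the final residual block plus the classifier head is an assumption.

```python
import torch.nn as nn
from torchvision import models

# ResNet50 pre-trained on ImageNet, per the paper's setup.
model = models.resnet50(pretrained=True)

# Freeze every pre-trained parameter.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for the 4 implant types; the new layer's
# parameters require gradients by default.
model.fc = nn.Linear(model.fc.in_features, 4)

# Unfreeze the last residual block so gradient descent touches only the
# final portion of the network (an illustrative reading of "the last
# two layers").
for param in model.layer4.parameters():
    param.requires_grad = True
```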

Our CNN models are based on the established ResNet50 network pre-trained on the ImageNet data set [19]. The target task and training data sets correspond to the AP-view images of the hips (stem, cup, and combined).

Figure 2 shows an overview of the framework of our deep learning-based method.

Figure 2. Overview of the framework of our deep learning-based method.

Our dataset contained 714 images from 4 different kinds of implants.

We followed standard procedures to pre-process our training data so that it could work with a network trained on ImageNet. We rescaled each image to a size of 224 × 224 and normalized it according to ImageNet statistics. We also performed data augmentation, i.e., random rotation, horizontal flips, etc., to increase the amount of training data and make our algorithm robust to the orientation of the images.
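
With torchvision, such a pipeline could be sketched as follows; the rotation angle and the exact augmentation list are assumptions, since the text only says "random rotation, horizontal flips, etc.":

```python
from torchvision import transforms

# Standard ImageNet channel statistics used for normalization.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=10),  # illustrative angle
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```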

We first divided the set of patients into three groups of approximately 60% (group 1), 30% (group 2), and 10% (group 3). This split was done on a per-design basis to ensure the ratio of each implant remained constant across groups. Next, we used the cup and stem images of patients in group 1 for training, those of patients in group 2 for validation, and those of patients in group 3 for testing. The validation set was used to compute cross-validation loss for hyper-parameter tuning and early stopping determination.
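
A scikit-learn sketch of such a per-design (stratified) 60/30/10 patient split follows; the function and variable names are ours, not the paper's:

```python
from sklearn.model_selection import train_test_split

def split_patients(patient_ids, implant_labels, seed=0):
    """60/30/10 split, stratified by implant design so the ratio of
    each design stays constant across the three groups."""
    train_ids, rest_ids, _, rest_labels = train_test_split(
        patient_ids, implant_labels,
        train_size=0.6, stratify=implant_labels, random_state=seed)
    # 30/10 of the whole set is a 75/25 split of the remaining 40%.
    val_ids, test_ids = train_test_split(
        rest_ids, train_size=0.75, stratify=rest_labels, random_state=seed)
    return train_ids, val_ids, test_ids
```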

We adopted the adaptive gradient method ADAM [20] to train our models. Based on the cross-validation loss, we chose the hyper-parameters for ADAM as: learning rate α = 0.001, β₁ = 0.9, β₂ = 0.99, ε = 10⁻⁸, and weight decay = 0. The maximum number of epochs was 1000 and the batch size was 16. The early stopping threshold was set to 8. During the training process of each network, the early stopping threshold was hit after around 50 epochs. As mentioned above, we trained four networks in total.
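
Expressed in PyTorch, that configuration corresponds roughly to the following training skeleton; train_one_epoch and evaluate are hypothetical helpers standing in for the usual loops, and batch size 16 would be set on the DataLoader inside them:

```python
import torch

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.001, betas=(0.9, 0.99), eps=1e-8, weight_decay=0)

best_val_loss, patience, bad_epochs = float("inf"), 8, 0
for epoch in range(1000):                 # maximum number of epochs
    train_one_epoch(model, optimizer)     # hypothetical helper
    val_loss = evaluate(model)            # hypothetical helper
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:        # early stopping threshold of 8
            break
```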

The first network is trained with the stem images and the second with the cup images. The third network is trained with the original uncut images, which is one way we propose to combine the power of stem images and cup images. We further integrate the first and second networks as an alternative way of jointly utilizing stem and cup images. The integration was done via the following logistic-regression-based method. We collected the outputs of the stem network and the cup network (both 4-dimensional vectors, with each element corresponding to the classification weight the network gives to that category of implants), fed them as input to a two-layer feed-forward neural network, and trained that network with data from the validation set. The integration is similar to a weighted-voting procedure among the outputs of the stem network and the cup network, with the voting weights computed from the validation data set. Note that this construction relied on our dataset division procedure, in which the training, validation, and testing sets each contained the stem and cup images of the same set of patients. We refer to the resulting network constructed from the outputs of the stem and cup networks as the joint network.
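
A minimal PyTorch sketch of such an integration head follows; the hidden width of 16 is an assumption, as the excerpt does not give the layer sizes of the two-layer feed-forward network:

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Combines the 4-dim outputs of the stem and cup networks,
    acting like a learned weighted vote between the two."""

    def __init__(self, num_classes: int = 4, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, stem_scores: torch.Tensor, cup_scores: torch.Tensor) -> torch.Tensor:
        # Concatenate the two 4-dim score vectors and map to final scores.
        return self.net(torch.cat([stem_scores, cup_scores], dim=-1))
```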

We tested our models (stem, cup, joint) using the testing set. The prediction result for each testing image was a 4-dimensional vector, with each coordinate representing the classification confidence for the corresponding category of implants.

Since we were studying a multi-class classification problem, we directly present the confusion matrices of our methods on the testing data and compute the operating characteristics generalized for multi-class classification.
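
With scikit-learn, deriving the confusion matrix and per-class operating characteristics from the 4-dimensional confidence vectors could look like the following; test_confidences and y_true are hypothetical arrays holding the model outputs and the ground-truth implant types:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Predicted class = argmax over each 4-dim confidence vector.
y_pred = np.argmax(test_confidences, axis=1)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=["type A", "type B", "type C", "type D"]))
```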

The institutional review board approved the study with a waiver of informed consent because all images were anonymized before the time of the study.

Read more:
Automated identification of hip arthroplasty implants using artificial intelligence | Scientific Reports - Nature.com

Harnessing the power of artificial intelligence – UofSC News & Events – SC.edu

On an early visit to the University of South Carolina, Amit Sheth was surprised when 10 deans showed up for a meeting with him about artificial intelligence.

Sheth, the incoming director of the university's Artificial Intelligence Institute at the time, thought he would need to sell the deans on the idea. Instead, they were the ones pitching the importance of artificial intelligence to him.

"All of them were telling me why they are interested in AI, rather than me telling them why they should be interested in AI," Sheth said in a 2020 interview with the university's Breakthrough research magazine. "The awareness of AI was already there, and the desire to incorporate AI into the activities that their faculty and students do was already on the campus."

Since the university announced the institute in 2019, that interest has only grown. There are now dozens of researchers throughout campus exploring how artificial intelligence and machine learning can be used to advance fields from health care and education to manufacturing and transportation. On Oct. 6, faculty will gather at the Darla Moore School of Business for a panel discussion on artificial intelligence led by Julius Fridriksson, vice president for research.

South Carolina's efforts stand out in several ways: the collaborative nature of research, which involves researchers from many different colleges and schools; a commitment to harnessing the power of AI in an ethical way; and the university's commitment to projects that will have a direct, real-world impact.

This week, as the Southeastern Conference marks AI in the SEC Day, we look at some of the remarkable efforts of South Carolina researchers in the area of artificial intelligence.

See the original post:
Harnessing the power of artificial intelligence - UofSC News & Events - SC.edu

UF partners with CIA on improving cybersecurity – News – University of Florida – University of Florida

From the shutdown of an oil pipeline to disrupted access to government, business and healthcare system databases, high-profile cyberattacks in 2021 prompted heightened interest in improving the nation's cybersecurity.

Answers on how to do that may come from a collaboration between the University of Florida and the U.S. Central Intelligence Agency, the first of its kind in the nation.

The university and the CIA have entered an agreement to study how artificial intelligence and machine learning applications (AIML) can be used to detect and deter malicious agents that infiltrate computer networks. The work will be carried out by researchers associated with UF's Florida Institute for National Security.

"If you're operating retroactively in cybersecurity, oftentimes you are too late," said Damon Woodard, principal researcher and newly appointed director of the Florida Institute for National Security. "This collaboration will accelerate our ability to understand and expand the research on AI applications of AIML to cybersecurity."

One area of research will be reinforcement learning, which attempts to mimic how humans learn through trial and error. Woodard said little work has been done on applying this method of machine learning to cybersecurity problems. Researchers will explore this technology on simple problems and then see if solutions can be scaled up.

"In terms of a cyberattack, you are trying to figure out what the person attacking you is trying to do so you can anticipate and make adjustments on your side to stop them," Woodard said.

The Identity Theft Resource Center reported in January there were 1,603 cyberattack-related data breaches in 2021, an increase of about 500 over the previous year. Ransomware attacks are also on the rise, doubling in each of the past two years, the nationally recognized nonprofit organization said.

The hope, Woodard said, is that the work will revolutionize the way the world thinks about cybersecurity and provide insights and technologies that can better protect data and strengthen security across both the government and private sectors. The team also includes two UF graduate students.

"I'm excited to see the ramifications of this project in the security domain as well as in other domains, such as biomedical and business," said Olivia Dizon-Paradis, a doctoral student in Electrical and Computer Engineering. "I'm hoping my involvement in this project will help jumpstart my research career in lifelong machine learning."

Stephen Wormald, also a doctoral student in Electrical and Computer Engineering, said he was excited about being able to work with leading researchers to develop state-of-the-art technology.

"My involvement will develop personal skills in research, writing and mathematics that I can use long-term in industry," Wormald said. "I hope to apply my skills to develop technology and study basic research problems that improve individuals' quality of life."

The Florida Institute for National Security was launched in May with the goal of taking a leading role in multidisciplinary research on national security through long-term partnerships with industry, academe and government that lead to commercial products and spin-off companies.

The project is the latest initiative in UF's sweeping focus on artificial intelligence, a $1 billion effort to advance AI across the curriculum and in research and industry. The university's initiative -- and the work of the institute -- is aided by access to the HiPerGator supercomputer.

Woodard said working with the CIA offers the opportunity to share project expertise and provides exposure to many diverse challenges.

"Working with the CIA is a major benefit because they present interesting constraints in cybersecurity," Woodard said. "You're dealing with worst-case scenarios to prepare for everything from low-quality data to low-resolution images. This level of research allows us to reach our full capacity for understanding potential shortcomings."

Excerpt from:
UF partners with CIA on improving cybersecurity - News - University of Florida - University of Florida

In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past – CNET

Apple's iOS 16 will include a lot of new iPhone features like editable Messages and a customizable lock screen. But there was one feature that truly grabbed my attention during WWDC 2022, despite taking up less than 15 seconds of the event.

The feature hasn't been given a name, but here's how it works: You tap and hold on a photo to separate a picture's subject, like a person, from the background. And if you keep holding, you can then "lift" the cutout from the photo and drag it into another app to post, share or make a collage, for example.

Technically, the tap-and-lift photo feature is part of Visual Lookup, which first launched with iOS 15 and can recognize objects in your photos such as plants, food, landmarks and even pets. In iOS 16, Visual Lookup lets you lift that object out of a photo or PDF by doing nothing more than tapping and holding.

During WWDC, Apple showed someone tapping and holding on the dog in a photo to lift it from the background and share it in a message.

Robby Walker, Apple senior director of Siri Language and Technologies, demonstrated the new tap-and-lift tool on a photo of a French bulldog. The dog was "cut out" of the photo and then dragged and dropped into the text field of a message.

"It feels like magic," Walker said.

Sometimes Apple overuses the word "magic," but this tool does seem impressive. Walker was quick to point out that the effect was the result of an advanced machine-learning model, which is accelerated by Core ML and Apple's Neural Engine to perform 40 billion operations in a second.

Knowing the amount of processing and machine learning required to cut a dog out of a photo thrills me to no end. New phone features often need to be revolutionary or solve a serious problem. I guess you could say that the tap-and-hold tool solves the problem of removing the background of a photo, which to at least some people could be a serious matter.

I couldn't help noticing the similarity to another photo feature in iOS 16. On the lock screen, the photo editor separates the foreground subject from the background of the photo used for your wallpaper. This lets lock screen elements like the time and date be layered behind the subject of your wallpaper but in front of the photo's background, making it look like the cover of a magazine.

I tried the new Visual Lookup feature in the public beta for iOS 16, and I am still impressed by how quickly and reliably it works. If you have a spare iPhone to try it on, a developer beta for iOS 16 is already available, and a public beta version of iOS 16 will be out in July.

For more, check out everything that Apple announced at WWDC 2022, including the new M2 MacBook Air.

See the original post here:
In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past - CNET

New study to probe machine learning role in treating depression – The Indian Express

In one of the first studies of its kind, a machine learning approach will be used to determine optimal treatments for patients suffering from depression, especially in the Indian context. If successful, this technological tool can then be used in low and middle-income countries too.

The US National Institute of Mental Health-funded study will be a collaborative effort between Sangath, a 26-year-old mental health research organisation based in Goa with regional hubs in Pune, Bhopal and New Delhi, and AIIMS Bhopal.

Dr Vikram Patel from Harvard Medical School and co-founder of Sangath said that this precision medicine approach to treating depression will also examine whether polygenic risk scores can predict response to either anti-depressant medication or psychological counselling. "It is a four-year project and will be implemented closely with AIIMS Bhopal. The study will have a sample size of 1,500 patients," he said. He and Dr Steve Hollon from Vanderbilt University will lead the study as project investigators.

The machine learning approach will take into consideration various data points like specific genetic factors, family information, medical and clinical history that will predict treatment outcomes in patients with depression. The research study is based on the assumption that using a machine learning approach to select the optimal treatment for each individual patient will prove to be more effective than leaving things to chance.

Depression is a major contributor to the global disease burden. Recently, WHO chief scientist Dr Soumya Swaminathan tweeted that one billion people live with a mental health disorder. Suicide accounts for one in 100 deaths, especially among adolescents. Still, governments spend only two per cent of their health budgets on mental health care. At WHO, the pandemic has sparked a push for global mental health transformation, Dr Swaminathan tweeted.

Study researchers said, "In the case of moderate to severe depression, a patient is either offered medicines (antidepressant medication) or counselling or both. However, which is the right treatment for each patient is a difficult decision to make, and the protocol involves trying out various alternatives. The research study aims to improve the outcomes of treatment for patients with depression by personalising the treatment options."

The study is being conducted in collaboration with the National Health Mission, Madhya Pradesh, the Madhya Pradesh health department and AIIMS Bhopal for improving depression care in low-resource, primary healthcare settings.

Follow this link:
New study to probe machine learning role in treating depression - The Indian Express

How companies can benefit from upskilling their employees with AI and Machine learning – The Financial Express

By Glenn Campbell

The Future of Work has arrived much sooner than many of us anticipated. Today, we are living in a world that is tech-driven, and ever-developing new technologies like AI, automation, and big data have brought a paradigm shift in the job markets by bringing in powerful opportunities.

Businesses across the world are responding to a high-tech future of work by upgrading their existing skills and building new capabilities to stay relevant with the times. They are increasingly adopting new technologies to grow and scale deep thinking and analysis.

Not only this, with upskilling being the new trend, millions of employees today want to learn on the job, and companies are investing heavily in learning and development programmes for their employees. Additionally, organizations expect their employees to constantly evolve and be multi-skilled while on the job. AI technologies in the future will help increase demand for skills insulated from automation, such as creativity, leadership, and organisational and interpersonal communication skills. AI and automation-based solutions are already contributing to the transition from analogue to digital vocational education and training (VET) systems. Keeping abreast of this new technology and its application within a business is paramount for today's leaders.

Some common benefits of AI and machine learning include:

Better, faster decision-making: Companies are harnessing the potential of AI and machine learning to identify their gaps and optimise these to support their growth. These capabilities are fostering a culture of new-age development within companies, where employees are encouraged to solve problems with critical thinking and pursue new ideas for the overall growth of the company. All these factors play a vital role in ensuring better and faster decision-making.

Increased operational efficiency: The new advancements in AI and machine learning promise continuous development in the operational excellence of new-age companies. Today, companies are at the forefront of using training and development to support the implementation and adoption of new technologies. With the help of expert teams, companies are able to identify gaps and adopt effective, customised solutions, or a mix of workplace solutions and skills products, to intensify their growth.

How can companies benefit from upskilling their workforce?

All these future-oriented training and upskilling programs are utterly essential for organisations fighting the skill gap. An upskilled and trained set of employees creates a more cross-trained workforce, which automatically translates into enhanced team productivity. For organisations to stay relevant with the changing times and ensure productivity, keeping their employees happy and providing them with a self-improving environment is very important. Training and upskilling programs are an investment and show that companies care about their employees' future. This plays a vital role in increasing employee loyalty and ensures a high retention rate. Upskilling also delivers a substantial return on a smaller investment (ROI): it wins the trust of employees and saves organisations the time and money they would otherwise spend replacing them. Further, it helps organisations avoid the tedious process of hiring new talent, as upskilled employees may recommend the organization to others. New-age learning and development strategies that address the skill gap will help companies build cognitive capabilities, social skills, increased adaptability, and resilience in the longer run.

The author is the executive director of Deakin University.

More:
How companies can benefit from upskilling their employees with AI and Machine learning - The Financial Express

Collaboration will advance cardiac health through AI – EurekAlert

ITHACA, N.Y. -- Employing artificial intelligence to help improve outcomes for people with cardiovascular disease is the focus of a three-year, $15 million collaboration among Cornell Tech, the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS) and NewYork-Presbyterian, with physicians from its affiliated medical schools, Weill Cornell Medicine and Columbia University Vagelos College of Physicians and Surgeons (Columbia University VP&S).

The Cardiovascular AI Initiative, to be funded by NewYork-Presbyterian, was launched this summer in a virtual meeting featuring approximately 40 representatives from the institutions.

"AI is poised to fundamentally transform outcomes in cardiovascular health care by providing doctors with better models for diagnosis and risk prediction in heart disease," said Kavita Bala, professor of computer science and dean of Cornell Bowers CIS. "This unique collaboration between Cornell's world-leading experts in machine learning and AI and outstanding cardiologists and clinicians from NewYork-Presbyterian, Weill Cornell Medicine and Columbia will drive this next wave of innovation for long-lasting impact on cardiovascular health care."

"NewYork-Presbyterian is thrilled to be joining forces with Cornell Tech and Cornell Bowers CIS to harness advanced technology and develop insights into the prediction and prevention of heart disease to benefit our patients," said Dr. Steven J. Corwin, president and chief executive officer of NewYork-Presbyterian. "Together with our world-class physicians from Weill Cornell Medicine and Columbia, we can transform the way health care is delivered."

The collaboration aims to improve heart failure treatment, as well as predict and prevent heart failure. Researchers from Cornell Tech and Cornell Bowers CIS, along with physicians from Weill Cornell Medicine and Columbia University VP&S, will use AI and machine learning to examine data from NewYork-Presbyterian in an effort to detect patterns that will help physicians predict who will develop heart failure, inform care decisions and tailor treatments for their patients.

"Artificial intelligence and technology are changing our society and the way we practice medicine," said Dr. Nir Uriel, director of advanced heart failure and cardiac transplantation at NewYork-Presbyterian, an adjunct professor of medicine in the Greenberg Division of Cardiology at Weill Cornell Medicine and a professor of medicine in the Division of Cardiology at Columbia University Vagelos College of Physicians and Surgeons. "We look forward to building a bridge into the future of medicine, and using advanced technology to provide tools to enhance care for our heart failure patients."

The Cardiovascular AI Initiative will develop advanced machine-learning techniques to learn and discover interactions across a broad range of cardiac signals, with the goals of improving recognition accuracy for heart failure and extending the state of care beyond current codified clinical decision-making rules. It will also use AI techniques to analyze raw time-series (EKG) and imaging data.

"Major algorithmic advances are needed to derive precise and reliable clinical insights from complex medical data," said Deborah Estrin, the Robert V. Tishman '37 Professor of Computer Science, associate dean for impact at Cornell Tech and a professor of population health science at Weill Cornell Medicine. "We are excited about the opportunity to partner with leading cardiologists to advance the state of the art in caring for heart failure and other challenging cardiovascular conditions."

Researchers and clinicians anticipate the data will help answer questions around heart failure prediction, diagnosis, prognosis, risk and treatment, and guide physicians as they make decisions related to heart transplants and left ventricular assist devices (pumps for patients who have reached end-stage heart failure).

Future research will tackle the important task of heart failure and disease prediction, to facilitate earlier intervention for those most likely to experience heart failure, and preempt progression and damaging events. Ultimately this would also include informing the specific therapeutic decisions most likely to work for individuals.

At the initiative launch, Bala spoke of Cornell's Radical Collaboration initiative in AI and the key areas in which she sees AI, a discipline in which Cornell ranks near the top of U.S. universities, playing a major role in the future.

"We identified health and medicine as one of Cornell's key impact areas in AI," she said, "so the timing of this collaboration could not have been more perfect. We are excited for this partnership as we consider high-risk, high-reward, long-term impact in this space."

See the rest here:
Collaboration will advance cardiac health through AI - EurekAlert

Statistics and Machine Learning Toolbox – MATLAB

Statistics and Machine Learning Toolbox provides functions and apps to describe, analyze, and model data. You can use descriptive statistics, visualizations, and clustering for exploratory data analysis; fit probability distributions to data; generate random numbers for Monte Carlo simulations; and perform hypothesis tests. Regression and classification algorithms let you draw inferences from data and build predictive models, either interactively, using the Classification and Regression Learner apps, or programmatically, using AutoML.

For multidimensional data analysis and feature extraction, the toolbox provides principal component analysis (PCA), regularization, dimensionality reduction, and feature selection methods that let you identify variables with the best predictive power.

The toolbox provides supervised, semi-supervised, and unsupervised machine learning algorithms, including support vector machines (SVMs), boosted decision trees, shallow neural nets, k-means, and other clustering methods. You can apply interpretability techniques such as partial dependence plots, Shapley values, and LIME, and automatically generate C/C++ code for embedded deployment. Native Simulink blocks let you use predictive models with simulations and Model-Based Design. Many toolbox algorithms can be used on data sets that are too big to be stored in memory.

Read the original here:
Statistics and Machine Learning Toolbox - MATLAB