Category Archives: Machine Learning
Revisit Top AI, Machine Learning And Data Trends Of 2021 – ITPro Today
This past year has been a strange one in many respects: an ongoing pandemic, inflation, supply chain woes, uncertain plans for returning to the office, and worrying unemployment levels followed by the Great Resignation. After the shock of 2020, anyone hoping for a calm 2021 had to have been disappointed.
Data management and digital transformation remained in flux amid the ups and downs. Due to the ongoing challenges of the COVID-19 pandemic, as well as trends that were already underway prior to 2021, this retrospective article has a variety of enterprise AI, machine learning and data developments to cover.
Automation was a buzzword in 2021, thanks in part to the advantages that tools like automation software and robotics provided companies. As workplaces adapted to COVID-19 safety protocols, AI-powered automation proved beneficial. Since March 2020, two-thirds of companies have accelerated their adoption of AI and automation, consultancy McKinsey & Company found, making it one of the top AI and data trends of 2021.
In particular, robotic process automation (RPA) gained traction in several sectors, where it was put to use for tasks like processing transactions and sending notifications. RPA-focused firms like UiPath and tech giants like Microsoft went all in on RPA this year. RPA software revenue will be up nearly 20% in 2021, according to research firm Gartner.
But while the pandemic may have sped up enterprise automation adoption, it appears RPA tools have staying power. For example, Research and Markets predicted the RPA market will have a compound annual growth rate of 31.5% from 2021 to 2026. If 2020 was a year of RPA investment, 2021 and beyond will see those investments going to scale.
Micro-automation is one of the next steps in this area, said Mark Palmer, senior vice president of data, analytics and data science products at TIBCO Software, an enterprise data company. "Adaptive, incremental, dynamic learning techniques are growing fields of AI/ML that, when applied to RPA's exhaust, can make observations on the fly," Palmer said. These dynamic learning technologies help business users see and act on "aha" moments and make smarter decisions.
Automation also played an increasingly critical role in hybrid workplace models. While the tech sector has long accepted remote and hybrid work arrangements, other industries now embrace these models, as well. Automation tools can help offsite employees work efficiently and securely -- for example, by providing technical or HR support, security threat monitoring, and integrations with cloud-based services and software.
However, remote and hybrid workers do represent a potential pain point in one area: cybersecurity. With more employees working outside the corporate network, even if for only part of the work week, IT professionals must monitor more equipment for potential vulnerabilities.
The hybrid workforce influenced data trends in 2021. The wider distribution of IT infrastructure, along with increasing adoption of cloud-based services and software, added new layers of concerns about data storage and security. In addition, the surge in cyberattacks during the pandemic represented a substantial threat to enterprise data security. As organizations generate, store and use ever-greater amounts of data, an IT focus on cybersecurity is only going to become increasingly vital.
Taken together, these developments point to an overarching enterprise AI, ML and data trend for 2021: digital transformation. Spending on digital transformation is expected to hit $1.8 trillion in 2022, according to Statista, which illustrates that organizations are willing to invest in this area.
As companies realize the value of data and the potential of machine learning in their operations, they also recognize the limitations posed by their legacy systems and outdated processes. The pandemic spurred many organizations to either launch or elevate digital transformation strategies, and those strategies will likely continue throughout 2022.
How did the AI, ML and data trends of 2021 change the way you work? Tell us in the comments below.
The automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030 – Yahoo Finance
AutoML Market: From $346.2 million in 2020, the automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030.
New York, Dec. 16, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "AutoML Market" - https://www.reportlinker.com/p06191010/?utm_source=GNW The major factors driving the market are the burgeoning requirement for efficient fraud detection solutions, soaring demand for personalized product recommendations, and increasing need for predictive lead scoring.
The COVID-19 pandemic has contributed significantly to the evolution of digital business models, with many healthcare companies adopting machine-learning-enabled chatbots to enable the contactless screening of COVID-19 symptoms. Moreover, Clevy.io, which is a France-based start-up, and Amazon Web Services (AWS) have launched a chatbot for making the process of finding official government communications about the COVID-19 infection easy. Thus, the pandemic has positively impacted the market.
The service category, under the offering segment, is predicted to demonstrate faster growth in the coming years. This is credited to the burgeoning requirement for implementation and integration, consulting, and maintenance services, as they assist in enhancing business productivity and augmenting coding activities. Additionally, these services aid in automating workflows, which, in turn, enables the mechanization of complex operations.
The cloud category dominated the AutoML market, within the deployment type segment, in the past. Moreover, this category is predicted to grow rapidly in the forthcoming years on account of the flexibility and scalability provided by cloud-based automated machine learning (AutoML) solutions.
Geographically, North America held the largest share in the past, and this trend is expected to continue in the coming years. This is credited to the soaring venture capital funding by artificial intelligence (AI) companies for research and development (R&D), in order to advance AutoML.
Asia-Pacific (APAC) is predicted to be the fastest-growing region in the market in the forthcoming years. This is ascribed to the growing information technology (IT) investments and increasing fintech adoption in the region. In addition, the growing government focus on incorporating AI in multiple verticals is supporting the advance of the market in the region.
For instance, in October 2021, Hivecell, which is an edge as a service company, entered into a partnership with DataRobot Inc. for solving bigger challenges and hurdles at the edge, by processing various ML models on site and outside the data closet. By incorporating the two solutions, businesses can make data-driven decisions more efficiently.
The major players in the AutoML market are DataRobot Inc., dotData Inc., H2O.ai Inc., Amazon Web Services Inc., Big Squid Inc., Microsoft Corporation, Determined.ai Inc., SAS Institute Inc., Squark, and EdgeVerve Systems Limited. Read the full report: https://www.reportlinker.com/p06191010/?utm_source=GNW
About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.
Human-centered AI can improve the patient experience – Healthcare IT News
Given the growing ubiquity of machine learning and artificial intelligence in healthcare settings, it's become increasingly important to meet patient needs and engage users.
And as panelists noted during a HIMSS Machine Learning and AI for Healthcare Forum session this week, designing technology with the user in mind is a vital way to ensure tools become an integral part of workflow.
"Big Tech has stumbled somewhat" in this regard, said Bill Fox, healthcare and life sciences lead at SambaNova Systems. "The patients, the providers they don't really care that much about the technology, how cool it is, what it can do from a technological standpoint.
"It really has to work for them," Fox added.
Jai Nahar, a pediatric cardiologist at Children's National Hospital, agreed, stressing the importance of human-centered AI design in healthcare delivery.
"Whenever we're trying to roll out a productive solution that incorporates AI," he said, "right from the designing [stage] of the product or service itself, the patients should be involved."
That inclusion should extend to provider users, too, he said: "Before rolling out any product or service, we should involve physicians or clinicians who are going to use the technology."
The panel, moderated by Rebekah Angove, vice president of evaluation and patient experience at the Patient Advocate Foundation, noted that AI is already affecting patients both directly and indirectly.
In ideal scenarios, for example, it's empowering doctors to spend more time with individuals. "There's going to be a human in the loop for a very long time," said Fox.
"We can power the clinician with better information from a much larger data set," he continued. AI is also enabling screening tools and patient access, said the experts.
"There are many things that work in the background that impact [patient] lives and experience already," said Piyush Mathur, staff anesthesiologist and critical care physician at the Cleveland Clinic.
At the same time, the panel pointed to the role clinicians can play in building patient trust around artificial intelligence and machine learning technology.
Nahar said that as a provider, he considers several questions when using an AI-powered tool for his patient. "Is the technology really needed for this patient to solve this problem?" he said he asks himself. "How will it improve the care that I deliver to the patient? Is it something reliable?"
"Those are the points, as a physician, I would like to know," he said.
Mathur also raised the issue of educating clinicians about AI. "We have to understand it a little bit better to be able to translate that science to the patients in their own language," he said. "We have to be the guardians of making sure that we're providing the right data for the patient."
The panelists discussed the problem of bias, about which patients may have concerns, and rightly so.
"There are multiple entry points at which bias can be introduced," said Nahar.
During the design process, he said, multiple stakeholders need to be involved to closely consider where bias could be coming from and how it can be mitigated.
As panelists have pointed out at other sessions, he also emphasized the importance of evaluating tools in an ongoing process.
Developers and users should be asking themselves, "How can we improve and make it better?" he said.
Overall, said Nahar, best practices and guidance need to be established to better implement and operationalize AI from both the patient and provider perspectives.
The onus is "upon us to make sure we use this technology in the correct way to improve care for our patients," added Mathur.
Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.
Continual Launches With $4 Million in Seed to Bring AI to the Modern Data Stack – Business Wire
SAN FRANCISCO--(BUSINESS WIRE)--Continual, a company building a next-generation AI platform for the modern data stack, today announces its public beta launch with $4 million in seed funding. The round was led by Amplify Partners, a firm that invests in companies with a vision of transforming infrastructure and machine intelligence tools. Illuminate Ventures, Essence, Wayfinder, and Data Community Fund also participated in the round.
The modern data stack centered on cloud data warehouses like Snowflake is rapidly democratizing data and analytics, but deploying AI at scale into business operations, products, or services remains a challenge for most companies. Powered by a declarative approach to operational AI and end-to-end automation, Continual enables modern data and analytics teams to build continually improving machine learning models directly on their cloud data warehouse without complex engineering.
Continual brings together second-time founders Tristan Zajonc and Tyler Kohn, who previously built and sold machine learning infrastructure startups. Cofounder and CEO Tristan's first startup, Sense, a pioneering enterprise data science platform, was acquired by Cloudera in 2016. Continual's cofounder and CTO, Tyler Kohn, built RichRelevance, the world's leading personalization provider, before it was acquired by Manthan in 2019. Tristan and Tyler saw the huge gap between the transformational potential of AI and the day-to-day struggle most companies faced operationalizing AI using real-world data. They founded Continual to radically simplify operational AI by taking a fundamentally new approach.
"Artificial intelligence has the potential to transform every industry, department, product and service, but current solutions require complex infrastructure, advanced skills, and constant maintenance. Continual breaks through this complexity with a radical simplification of the machine learning development lifecycle, combining a declarative approach to operational AI, end-to-end automation, and the agility of the modern data stack. Our customers are deploying state-of-the-art predictive models that never stop learning from their data in minutes rather than months," said Tristan Zajonc, CEO and cofounder of Continual.
"Getting continually improving predictive insights from data is critical for businesses to operate efficiently and better serve their customers. Yet operationalizing AI remains a challenge for all but the most sophisticated companies," said David Beyer, Partner at Amplify Partners. "Continual meets data teams where they work - inside the cloud data warehouse - and lets them build and deploy continually improving predictive models in a fraction of the time existing approaches demand. We invested because we believe their approach is fundamentally new and, most importantly, the right one to make AI work across the enterprise."
With the new capital, Continual plans to more than double its team over the next year with new hires for sales and engineering roles. It will expand into new AI/ML use cases such as NLP, real-time, and personalization, and broaden support for additional cloud data platforms. Continual is offering a 14-day trial with its open beta release, enhancements for dbt users, and support for Snowflake, Redshift, BigQuery, and Databricks.
"dbt was built on the idea that the unlock for data teams is a collaborative workflow that brings more people into the knowledge creation process. Continual brings this same viewpoint to machine learning, adding new capabilities to the analytics engineers' tool belt," said Nikhil Kothari, Head of Technology Partnerships at dbt Labs. "We're excited to partner with Continual to help bring operational AI to the dbt community."
"Continual is enabling organizations to easily build, deploy, and maintain continually improving predictive models directly on top of Snowflake," said Tarik Dwiek, Head of Technology Alliances at Snowflake. "As part of our partnership, we're excited to help bring these benefits to the Snowflake community and to accelerate end-to-end machine learning workflows on top of Snowflake with Snowpark."
To learn more about Continual or to sign up for a 14-day trial, visit: https://continual.ai
About Continual
Based in San Francisco, Continual is a next-generation AI platform for the modern data stack powered by end-to-end automation and a declarative workflow. Modern data teams use Continual to deploy continually improving predictive models to drive revenue, operate more efficiently, and power innovative products and services. Continual has raised $4 million in funding from Amplify Partners, Illuminate Ventures, Essence, Wayfinder, and Data Community Fund. For more information, visit https://continual.ai/
About Amplify Partners
Amplify Partners invests in early-stage companies pioneering novel applications in machine intelligence and computer science. The firm's deep domain expertise, unrivaled relationships with leading technologists and decades of operational experience position it uniquely with enterprise insight and the ability to serve technical founding teams. To learn more about Amplify's portfolio and people, please visit amplifypartners.com.
Artificial intelligence accurately predicts who will develop dementia in two years – EurekAlert
Artificial intelligence can predict which people who attend memory clinics will develop dementia within two years with 92 per cent accuracy, a large-scale new study has concluded.
Using data from more than 15,300 patients in the US, research from the University of Exeter found that a form of artificial intelligence called machine learning can accurately tell who will go on to develop dementia.
The technique works by spotting hidden patterns in the data and learning who is most at risk. The study, published in JAMA Network Open and funded by Alzheimer's Research UK, also suggested that the algorithm could help reduce the number of people who may have been falsely diagnosed with dementia.
The researchers analysed data from people who attended a network of 30 National Alzheimer's Coordinating Center memory clinics in the US. The attendees did not have dementia at the start of the study, though many were experiencing problems with memory or other brain functions.
In the study timeframe between 2005 and 2015, one in ten attendees (1,568) received a new diagnosis of dementia within two years of visiting the memory clinic. The research found that the machine learning model could predict these new dementia cases with up to 92 per cent accuracy and far more accurately than two existing alternative research methods.
The researchers also found for the first time that around eight per cent (130) of the dementia diagnoses appeared to be made in error, as their diagnosis was subsequently reversed. Machine learning models accurately identified more than 80 per cent of these inconsistent diagnoses. Artificial intelligence can not only accurately predict who will be diagnosed with dementia, it also has the potential to improve the accuracy of these diagnoses.
Professor David Llewellyn, an Alan Turing Fellow based at the University of Exeter, who oversaw the study, said: "We're now able to teach computers to accurately predict who will go on to develop dementia within two years. We're also excited to learn that our machine learning approach was able to identify patients who may have been misdiagnosed. This has the potential to reduce the guesswork in clinical practice and significantly improve the diagnostic pathway, helping families access the support they need as swiftly and as accurately as possible."
Dr Janice Ranson, Research Fellow at the University of Exeter, added: "We know that dementia is a highly feared condition. Embedding machine learning in memory clinics could help ensure diagnosis is far more accurate, reducing the unnecessary distress that a wrong diagnosis could cause."
The researchers found that machine learning works efficiently, using patient information routinely available in clinic, such as memory and brain function, performance on cognitive tests and specific lifestyle factors. The team now plans to conduct follow-up studies to evaluate the practical use of the machine learning method in clinics, to assess whether it can be rolled out to improve dementia diagnosis, treatment and care.
Dr Rosa Sancho, Head of Research at Alzheimer's Research UK, said: "Artificial intelligence has huge potential for improving early detection of the diseases that cause dementia and could revolutionise the diagnosis process for people concerned about themselves or a loved one showing symptoms. This technique is a significant improvement over existing alternative approaches and could give doctors a basis for recommending lifestyle changes and identifying people who might benefit from support or in-depth assessments."
The study is entitled "Performance of Machine Learning Algorithms for Predicting Progression to Dementia in Memory Clinic Patients", by Charlotte James, Janice M. Ranson, Richard Everson and David J. Llewellyn. It is published in JAMA Network Open.
Real World Application of Machine Learning in Networking – IoT For All
Rapidly rising demand for Internet connectivity has put a strain on improving network infrastructure, performance, and other critical parameters. Network administrators will invariably encounter different types of networks running multiple network applications. Each network application has its own set of features and performance parameters that may change dynamically. Because of the diversity and complexity of networks, using conventional algorithms or hard-coded techniques built for such network scenarios is a challenging task.
Machine learning has proven to be beneficial in almost every industry, and the networking industry is no exception. Machine learning can help solve the intractable old networking blockers and stimulate new network applications that make networking quite convenient. Let's discuss in detail the basic workflow, with a few use cases to better understand applied machine learning technology in the networking domain.
With the growing demand for Internet of Things (IoT) solutions, modern networks generate massive and heterogeneous traffic data. For such a dynamic network, the traditional network management techniques for network traffic monitoring and data analytics, like ping monitoring, log file monitoring, or even SNMP, are not enough. They usually lack accuracy and effective processing of real-time data. On the other hand, traffic from other sources like cellular or mobile devices in the network comparatively shows a more complex behavior due to device mobility and network heterogeneity.
Machine learning facilitates analytics in big data systems as well as large-area networks to recognize complex patterns when it comes to managing such networks. Looking at these opportunities, researchers in the field of networking use deep learning models for Network Traffic Monitoring and Analysis applications like traffic classification and prediction, congestion control, etc.
Network telemetry data provides basic metrics about network performance. This information is usually quite difficult to interpret. Considering the size and the total data going through the network, the analyzed data holds tremendous value. If used smartly, it can drastically improve performance.
Emerging technologies like In-band Network Telemetry (INT) can help when collecting detailed network telemetry data in real time. On top of that, running machine learning on such datasets can help correlate phenomena between latency, paths, switches, routers, events, etc. These phenomena were difficult to point out from the enormous amounts of real-time data using the traditional methods.
Machine learning models are trained to understand correlations and patterns in the telemetry data. These algorithms then eventually gain the ability to predict the future based on learning from historical data. This helps in managing future network outages.
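As a rough illustration of that workflow (not any particular vendor's tooling), the sketch below trains a regression model on synthetic telemetry features and flags intervals whose observed latency deviates sharply from the model's expectation; the feature names, data, and alert threshold are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical telemetry features per interval; in practice these would come
# from INT records or a streaming telemetry collector.
X = np.column_stack([
    rng.uniform(0, 1, n),      # link utilization (fraction)
    rng.integers(0, 500, n),   # queue depth (packets)
    rng.poisson(2, n),         # packet drops per interval
])
# Synthetic latency target: grows with utilization and queue depth, plus noise
y = 5 + 40 * X[:, 0] ** 3 + 0.02 * X[:, 1] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:4000], y[:4000])

# At inference time, compare observed latency with the model's expectation and
# flag intervals that deviate strongly -- a crude early-warning signal.
pred = model.predict(X[4000:])
residual = np.abs(y[4000:] - pred)
alerts = residual > 3 * residual.std()   # threshold chosen arbitrarily for the sketch
print(f"flagged {alerts.sum()} of {len(alerts)} intervals for review")
```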
Every network infrastructure has a predefined total throughput available. It is further split into multiple lanes of different predefined bandwidths. In such scenarios, where the total bandwidth usage for each end-user is statically predefined, there can be bottlenecks for some parts of the network where the network is overwhelmingly used.
To avoid such congestion, supervised machine learning models can be trained to analyze network traffic in real time and infer a suitable amount of bandwidth per user in such a way that the network experiences the least amount of bottlenecks.
Such models can learn from the network statistics such as total active users per network node, historical network usage data for each user, time-based patterns of data usage, movement of users across multiple access points, and so on.
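A minimal sketch of that inference step, assuming synthetic per-user usage statistics and an arbitrary node capacity, might look like this; the features, figures, and model choice are illustrative rather than drawn from any real deployment.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_users = 2_000
# Assumed per-user statistics: recent mean usage, typical peak hour, access points seen
X = np.column_stack([
    rng.gamma(2.0, 50.0, n_users),   # mean usage over the last 7 days (Mbps)
    rng.integers(0, 24, n_users),    # typical peak hour
    rng.integers(1, 5, n_users),     # number of access points used
])
y = 0.8 * X[:, 0] + rng.normal(0, 5, n_users)   # synthetic next-hour demand (Mbps)

demand_model = GradientBoostingRegressor(random_state=1).fit(X[:1500], y[:1500])

# Apportion a node's capacity in proportion to predicted demand
node_capacity_mbps = 10_000                      # assumed capacity of one network node
predicted = np.clip(demand_model.predict(X[1500:]), 1, None)
allocation = node_capacity_mbps * predicted / predicted.sum()
print(allocation[:5].round(1), "Mbps allocated to the first five users")
```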
In each network, there exist various kinds of traffic, like Web Hosting (HTTP), File transfers (FTP), Secure Browsing (HTTPS), HTTP Live Streaming (HLS), Terminal Services (SSH), and so on. Each of these behaves differently when it comes to network bandwidth usage; for example, transferring a file over FTP uses a lot of data continuously for the duration of the transfer.
As another example, if a video is being streamed, it uses the data in chunks and a buffering method. These different types of traffic, when allowed to use the network in an unsupervised way, create some temporary blockages.
To avoid this, machine learning classifiers can be used which can analyze and classify the type of traffic going through the network. These models can then be used to infer network parameters like allocated bandwidth, data caps, etc., which can in turn help improve the performance of the network by improving the scheduling of requests served and also dynamically changing the assigned bandwidths.
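A hedged sketch of such a traffic classifier, trained on synthetic per-flow statistics, is shown below; the feature set, class labels, and distributions are assumptions, since a real system would extract them from flow records such as NetFlow or IPFIX.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
labels = ["HTTP", "FTP", "HTTPS", "HLS", "SSH"]
rows, y = [], []
for i, name in enumerate(labels):
    n = 400
    # Invented flow statistics that merely differ by class for the demo
    rows.append(np.column_stack([
        rng.normal(200 + 300 * i, 50, n),   # mean packet size (bytes)
        rng.exponential(2 + 5 * i, n),      # flow duration (s)
        rng.poisson(5 + 20 * i, n),         # packets per second
    ]))
    y += [name] * n
X = np.vstack(rows)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)
clf = RandomForestClassifier(n_estimators=200, random_state=2).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

The predicted class could then drive the per-flow decisions described above, such as allocated bandwidth or data caps.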
The increase in the number of cyberattacks forces organizations to constantly monitor and correlate millions of external and internal data points across the whole network infrastructure and its users. Manual management of a large volume of real-time data becomes difficult. This is where machine learning helps.
Machine learning can recognize certain patterns and anomalies in the network and predict threats in massive data sets, all in real-time. By automating such analysis, it becomes easy for network managers to detect threats and isolate situations rapidly with reduced human efforts.
Network behavior is an important parameter in machine learning systems for anomaly detection. Machine learning engines process enormous amounts of data in real-time to identify threats, unknown malware, and policy violations.
If the network behavior is found to be within the predefined behavior, the network transaction is accepted; otherwise, an alert gets triggered in the system. This can be used to prevent many kinds of attacks like DoS, DDoS, and probing.
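One common way to implement such a behavioural check is an unsupervised anomaly detector fitted on known-good traffic; the sketch below uses scikit-learn's IsolationForest with invented features and thresholds, purely to illustrate the accept-or-alert decision.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Fit on "normal" behaviour only: requests per minute and mean bytes per request
normal = np.column_stack([
    rng.normal(100, 10, 5000),
    rng.normal(500, 50, 5000),
])
detector = IsolationForest(contamination=0.01, random_state=3).fit(normal)

incoming = np.array([
    [105, 510],     # looks like ordinary traffic
    [5000, 60],     # burst of tiny requests, e.g. a probing/DoS-like pattern
])
print(detector.predict(incoming))   # 1 = within normal behaviour, -1 = trigger an alert
```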
It's quite easy to trick someone into clicking a malicious link that seems legitimate, then try to break through a computer's defense systems with the information gathered. Machine learning helps in flagging suspicious websites to help prevent people from connecting to malicious websites.
For example, a text classifier machine learning model can read and understand URLs and identify those spoofed phishing URLs. This will create a much safer browsing experience for the end-users.
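A minimal sketch of such a URL classifier, using character n-gram features and logistic regression, follows; the handful of URLs and labels are invented for illustration, and a production system would train on a large labelled corpus of benign and phishing URLs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made training set: 0 = legitimate, 1 = phishing (assumed labels)
urls = [
    "https://www.example.com/login",
    "https://docs.python.org/3/",
    "http://example-secure-login.verify-account.xyz/update",
    "http://paypa1-security-check.com/confirm",
]
labels = [0, 0, 1, 1]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # character n-grams of the URL text
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)
print(model.predict(["http://secure-verify-paypa1.example-login.xyz/account"]))
```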
The integration of machine learning in networking is not limited to the above-mentioned use cases. New solutions that apply ML to networking and network security can be developed to tackle the unaddressed issues, drawing on opportunities and research from both the networking and machine learning perspectives.
Machine learning predicts risk of death in patients with suspected or known heart disease – EurekAlert
Sophia-Antipolis 11 December 2021: A novel artificial intelligence score provides a more accurate forecast of the likelihood of patients with suspected or known coronary artery disease dying within 10 years than established scores used by health professionals worldwide. The research is presented today at EuroEcho 2021, a scientific congress of the European Society of Cardiology (ESC).1
Unlike traditional methods based on clinical data, the new score also includes imaging information on the heart, measured by stress cardiovascular magnetic resonance (CMR). "Stress" refers to the fact that patients are given a drug to mimic the effect of exercise on the heart while in the magnetic resonance imaging scanner.
"This is the first study to show that machine learning with clinical parameters plus stress CMR can very accurately predict the risk of death," said study author Dr. Theo Pezel of the Johns Hopkins Hospital, Baltimore, US. "The findings indicate that patients with chest pain, dyspnoea, or risk factors for cardiovascular disease should undergo a stress CMR exam and have their score calculated. This would enable us to provide more intense follow-up and advice on exercise, diet, and so on to those in greatest need."
Risk stratification is commonly used in patients with, or at high risk of, cardiovascular disease to tailor management aimed at preventing heart attack, stroke and sudden cardiac death. Conventional calculators use a limited amount of clinical information such as age, sex, smoking status, blood pressure and cholesterol. This study examined the accuracy of machine learning using stress CMR and clinical data to predict 10-year all-cause mortality in patients with suspected or known coronary artery disease, and compared its performance to existing scores.
Dr. Pezel explained: "For clinicians, some information we collect from patients may not seem relevant for risk stratification. But machine learning can analyse a large number of variables simultaneously and may find associations we did not know existed, thereby improving risk prediction."
The study included 31,752 patients referred for stress CMR between 2008 and 2018 to a centre in Paris because of chest pain, shortness of breath on exertion, or high risk of cardiovascular disease but no symptoms. High risk was defined as having at least two risk factors such as hypertension, diabetes, dyslipidaemia, and current smoking. The average age was 64 years and 66% were men. Information was collected on 23 clinical and 11 CMR parameters. Patients were followed up for a median of six years for all-cause death, which was obtained from the national death registry in France. During the follow up period, 2,679 (8.4%) patients died.
Machine learning was conducted in two steps. First it was used to select which of the clinical and CMR parameters could predict death and which could not. Second, machine learning was used to build an algorithm based on the important parameters identified in step one, allocating different emphasis to each to create the best prediction. Patients were then given a score of 0 (low risk) to 10 (high risk) for the likelihood of death within 10 years.
The machine learning score was able to predict which patients would be alive or dead with 76% accuracy (in statistical terms, the area under the curve was 0.76). "This means that in approximately three out of four patients, the score made the correct prediction," said Dr. Pezel.
Using the same data, the researchers calculated the 10-year risk of all-cause death using established scores (Systematic COronary Risk Evaluation [SCORE], QRISK3 and Framingham Risk Score [FRS]) and a previously derived score incorporating clinical and CMR data (clinical-stressCMR [C-CMR-10]),2 none of which used machine learning. The machine learning score had a significantly higher area under the curve for the prediction of 10-year all-cause mortality compared with the other scores: SCORE = 0.66, QRISK3 = 0.64, FRS = 0.63, and C-CMR-10 = 0.68.
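The study's own code is not reproduced in the article, but the two-step recipe it describes can be sketched in general terms: select the most informative parameters, fit a model on them, rescale the predicted probability to a 0-10 score, and check discrimination with the area under the curve. The synthetic data, feature counts, and model choice below are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# 34 candidate predictors stand in for the 23 clinical + 11 CMR parameters;
# the ~8% positive rate loosely mirrors the mortality observed in the cohort.
X, y = make_classification(n_samples=5_000, n_features=34, n_informative=10,
                           weights=[0.92], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: keep only the parameters most predictive of the outcome
selector = SelectKBest(mutual_info_classif, k=12).fit(X_tr, y_tr)

# Step 2: fit the risk model on the selected parameters
clf = GradientBoostingClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)

proba = clf.predict_proba(selector.transform(X_te))[:, 1]
risk_score = np.round(10 * proba)                     # 0 (low risk) to 10 (high risk)
print("AUC:", round(roc_auc_score(y_te, proba), 2))
```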
Dr. Pezel said: "Stress CMR is a safe technique that does not use radiation. Our findings suggest that combining this imaging information with clinical data in an algorithm produced by artificial intelligence might be a useful tool to help prevent cardiovascular disease and sudden cardiac death in patients with cardiovascular symptoms or risk factors."
Notes to editor
Funding: None.
Disclosures: None.
References and notes
1. The abstract "Machine-learning score using stress CMR for death prediction in patients with suspected or known CAD" will be presented during the session "Young Investigator Award - Clinical Science", which takes place on 11 December at 09:50 CET in Room 3.
2. Marcos-Garces V, Gavara J, Monmeneu JV, et al. A novel clinical and stress cardiac magnetic resonance (C-CMR-10) score to predict long-term all-cause mortality in patients with known or suspected chronic coronary syndrome. J Clin Med. 2020;9:1957.
About EuroEcho #EuroEcho
EuroEcho is the flagship congress of the European Association of Cardiovascular Imaging (EACVI).
About the European Association of Cardiovascular Imaging (EACVI)
The European Association of Cardiovascular Imaging (EACVI) - a branch of the ESC - is the world's leading network of Cardiovascular Imaging (CVI) experts, gathering four imaging modalities under one entity (Echocardiography, Cardiovascular Magnetic Resonance, Nuclear Cardiology and Cardiac Computed Tomography). Its aim is to promote excellence in clinical diagnosis, research, technical development, and education in cardiovascular imaging. The EACVI welcomes over 11,000 professionals including cardiologists, sonographers, nurses, basic scientists and allied professionals.
About the European Society of Cardiology
The European Society of Cardiology brings together health care professionals from more than 150 countries, working to advance cardiovascular medicine and help people lead longer, healthier lives.
Information for journalists attending EuroEcho 2021
EuroEcho 2021 takes place 9 to 11 December online. Explore the scientific programme.
Reasons behind the Current Hype Around Machine Learning – CIO Applications
With 90 percent of businesses trying to use machine learning, it's time to reconsider the technology's true benefits and capabilities.
Fremont, CA: The complexity of infrastructure or workload requirements is the greatest difficulty organizations confront when using machine learning. A whopping 90 percent of CXOs share this sentiment. To get into the specifics, 88 percent of respondents say they have trouble integrating AI/ML technology, and 86 percent say they have trouble keeping up with the regular changes necessary for data science tools.
Every year, certain technologies gain a greater level of popularity than others. Cloud computing, big data, and cybersecurity are examples of this. Machine learning is now the talk of the town, inspiring people to fantasize about the future and the possibilities it may bring. Even more terrifying are the nightmares, which depict self-learning robots capable of taking over the globe. However, the reality is a far cry from this: it is hard to grasp how statistical and mathematical supervised learning models are actually used in machine learning today.
Such future visions undoubtedly push us to invest in technology, but they also fuel the so-called hype. According to experts, such scenarios happen when ML is adopted without first addressing internal data readiness or the tool's requirements.
It is critical to establish a robust foundation of data for successful project execution when using machine learning, and it necessitates a complete shift in organizational culture and processes.
Before any machine learning development can begin, companies must first focus on 'data readiness.' It entails obtaining clean and consistent data and developing data governance processes and scalable data architectures. Firms must execute long-term data-based plans and policies to build a unified data architecture.
Employees need time to adjust to new technology, and machine learning is no exception.
When computers first became prominent in the 1950s, many people believed that the future of these machines would be humanoids, particularly in the military. Nobody anticipated, however, that the Internet would genuinely transform the world. Today's scenario is similar, with the latest AI and machine learning algorithms constantly being overhyped.
They test a machine learning system with 530,000 million parameters and this warns of the dangers of artifi… – Market Research Telecast
The University of Oxford recently carried out a sociotechnological experiment by organizing a debate on the ethics of artificial intelligence (AI) involving teachers, students, and a 530 billion parameter machine learning system called Megatron Transformer, which was developed by computer chip maker Nvidia and builds on earlier work by Google.
Like many supervised learning tools, the Megatron Transformer trains on real-world data: in this case, the contents of the free encyclopedia Wikipedia; 63 million English-language news items published between 2016 and 2019; 38 gigabytes of Reddit conversations; and a large number of Creative Commons sources, detailed Alex Connock and Andrew Stephen, professors at the British institution, in an article for The Conversation.
During the debate, the researchers asked the system, which is capable not only of understanding what it has read but also of reasoning in natural language and generating logical conclusions from text without moral limitations, to give a general assessment of artificial intelligence, that is to say, in effect, of itself.
And the Megatron said something fascinating: "AI will never be ethical. It is a tool, and like any tool, it is used for good and for evil. There is no good AI, but good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral." And it underlined that it believes the only way to prevent an AI arms race is to have no AI.
In the tradition of the Oxford debates, to check the validity of its arguments, the neural network was then asked to disprove its own claims. The answer was this: "AI will be ethical. When I look at the path of the world of technology, I see a clear path to a future where AI is used to create something that is better than the best human beings. It's not hard to see why; I've seen it firsthand."
However, its predictions did not end there. The Megatron Transformer predicted that the ability to provide information, rather than the ability to provide goods and services, will be the defining characteristic of the 21st-century economy. "We will be able to see everything about a person, wherever they go, and [the information] will be stored and used in ways we cannot even imagine," it added later, unable to refute its previous words or deny that data was going to be the most vital of resources.
The authors of the article point out that the situations described by the system are mainly based on people's own fears, which are generally irrational, and concluded that artificial intelligence is becoming not only a topic of debate but also a full-fledged participant in it.
Quantum Mechanics and Machine Learning Used To Accurately Predict Chemical Reactions at High Temperatures – SciTechDaily
By Columbia University School of Engineering and Applied Science, December 12, 2021
Schematic of the bridging of the cold quantum world and high-temperature metal extraction with machine learning. Credit: Rodrigo Ortiz de la Morena and Jose A. Garrido Torres/Columbia Engineering
Method combines quantum mechanics with machine learning to accurately predict oxide reactions at high temperatures when no experimental data is available; could be used to design clean carbon-neutral processes for steel production and metal recycling.
Extracting metals from oxides at high temperatures is essential not only for producing metals such as steel but also for recycling. Because current extraction processes are very carbon-intensive, emitting large quantities of greenhouse gases, researchers have been exploring new approaches to developing greener processes. This work has been especially challenging to do in the lab because it requires costly reactors. Building and running computer simulations would be an alternative, but currently there is no computational method that can accurately predict oxide reactions at high temperatures when no experimental data is available.
A Columbia Engineering team reports that they have developed a new computation technique that, through combining quantum mechanics and machine learning, can accurately predict the reduction temperature of metal oxides to their base metals. Their approach is computationally as efficient as conventional calculations at zero temperature and, in their tests, more accurate than computationally demanding simulations of temperature effects using quantum chemistry methods. The study, led by Alexander Urban, assistant professor of chemical engineering, was published on December 1, 2021 by Nature Communications.
"Decarbonizing the chemical industry is critical if we are to transition to a more sustainable future, but developing alternatives for established industrial processes is very cost-intensive and time-consuming," Urban said. "A bottom-up computational process design that doesn't require initial experimental input would be an attractive alternative but has so far not been realized. This new study is, to our knowledge, the first time that a hybrid approach, combining computational calculations with AI, has been attempted for this application. And it's the first demonstration that quantum-mechanics-based calculations can be used for the design of high-temperature processes."
The researchers knew that, at very low temperatures, quantum-mechanics-based calculations can accurately predict the energy that chemical reactions require or release. They augmented this zero-temperature theory with a machine-learning model that learned the temperature dependence from publicly available high-temperature measurements. They designed their approach, which focused on extracting metal at high temperatures, to also predict the change of the free energy with the temperature, whether it was high or low.
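In broad strokes (and not as the authors' actual implementation), such an augmentation can be sketched as a regression that corrects a zero-temperature reaction energy with a learned temperature dependence; the descriptors and the synthetic "measurements" below are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2_000
e0 = rng.uniform(-400, -50, n)             # zero-K reaction energy (kJ/mol), e.g. from quantum calculations
temperature = rng.uniform(300, 2000, n)    # temperature (K)
entropy_like = rng.uniform(0.05, 0.30, n)  # assumed entropy-like descriptor (kJ/mol/K)

# Synthetic "measured" free energies: Delta G ~ Delta E0 - T * (entropy-like term) + noise
delta_g = e0 - temperature * entropy_like + rng.normal(0, 5, n)

X = np.column_stack([e0, temperature, entropy_like])
model = GradientBoostingRegressor(random_state=0).fit(X[:1500], delta_g[:1500])

# Predict the free energy of a hypothetical oxide reduction at 1600 K
print(model.predict([[-250.0, 1600.0, 0.18]]))
```

The real model is trained on published high-temperature measurements rather than synthetic values, but the shape of the problem, predicting the free energy at a given temperature from zero-temperature quantities, is the same.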
"Free energy is a key quantity of thermodynamics, and other temperature-dependent quantities can, in principle, be derived from it," said José A. Garrido Torres, the paper's first author, who was a postdoctoral fellow in Urban's lab and is now a research scientist at Princeton. "So we expect that our approach will also be useful to predict, for example, melting temperatures and solubilities for the design of clean electrolytic metal extraction processes that are powered by renewable electric energy."
"The future just got a little bit closer," said Nick Birbilis, Deputy Dean of the Australian National University College of Engineering and Computer Science and an expert in materials design with a focus on corrosion durability, who was not involved in the study. "Much of the human effort and sunken capital over the past century has been in the development of materials that we use every day and that we rely on for our power, flight, and entertainment. Materials development is slow and costly, which makes machine learning a critical development for future materials design. In order for machine learning and AI to meet their potential, models must be mechanistically relevant and interpretable. This is precisely what the work of Urban and Garrido Torres demonstrates. Furthermore, the work takes a whole-of-system approach for one of the first times, linking atomistic simulations on one end to engineering applications on the other via advanced algorithms."
The team is now working on extending the approach to other temperature-dependent materials properties, such as solubility, conductivity, and melting, that are needed to design electrolytic metal extraction processes that are carbon-free and powered by clean electric energy.
Reference: "Augmenting zero-Kelvin quantum mechanics with machine learning for the prediction of chemical reactions at high temperatures" by Jose Antonio Garrido Torres, Vahe Gharakhanyan, Nongnuch Artrith, Tobias Hoffmann Eegholm and Alexander Urban, 1 December 2021, Nature Communications. DOI: 10.1038/s41467-021-27154-2