
Atos announces hybridisation projects at its 8th Quantum Advisory Board – Scientific Computing World

At the meeting of its 8th Quantum Advisory Board, a group of internationally recognised experts, mathematicians and physicists who are authorities in their fields, Atos announced investments, along with partner start-ups Pasqal and IQM, in two major quantum hybridisation projects in France and Germany.

Held at Atos' R&D centre in Clayes-sous-Bois, dedicated to research in quantum computing and high-performance computing, and attended by Atos' next CEO, Rodolphe Belmer, this meeting of the Quantum Advisory Board was an opportunity to review Atos' recent work and to take stock of future prospects.

Artur Ekert, professor of quantum physics at the Mathematical Institute, University of Oxford, founding director of the Centre for Quantum Technologies in Singapore and member of the Quantum Advisory Board, said: "We are truly impressed by the work and the progress that Atos has made over the past year. The company takes quantum computing seriously and it gives us great pleasure to see it becoming one of the key players in the field. It is a natural progression for Atos. As a world leader in High Performance Computing (HPC), Atos is in a unique position to combine its existing, extensive expertise in HPC with quantum technology and take both fields to new heights. We are confident that Atos will shape the quantum landscape in years to come, both with research and applications that have a long-lasting impact."

In the field of quantum hybridisation, Atos is enabling several applications in areas such as chemistry (for example, catalysis design for nitrogen fixation) and the optimisation of smart grids. Atos is also involved in two additional quantum hybridisation projects, which are currently being launched:

The European HPC-QS (Quantum Simulation) project, which started in December 2021, aims to build the first European hybrid supercomputer with an integrated quantum accelerator by the end of 2023.

Atos is involved in this project alongside national partners including the CEA, GENCI, Pasqal and the Jülich Supercomputing Centre. Pasqal will provide its analog quantum accelerator, and Atos, with its quantum simulator, the Quantum Learning Machine (QLM), will ensure the hybridisation with the HPC systems at the two datacentres at GENCI and Jülich.

The Q-EXA project, part of the German governmental quantum plan, will see a consortium of partners, including Atos, work together to integrate a German quantum computer into an HPC supercomputer for the first time. Atos' QLM will be instrumental in connecting the quantum computer, from start-up IQM (also part of the Atos Scaler program), to the Leibniz Supercomputing Centre.

The European Organization for Nuclear Research (CERN), one of the world's largest and most respected research centres, based in Geneva, has recently acquired an Atos Quantum Learning Machine (QLM) appliance and joined the Atos User Club. The Atos QLM, delivered to CERN in October, will be made available to the CERN scientific community to support research activities in the framework of the CERN Quantum Technology Initiative (CERN QTI), thus accelerating the investigation of quantum advantage for high-energy physics (HEP) and beyond.

Alberto Di Meglio, coordinator of the CERN Quantum Technology Initiative, comments: "Building on CERN's unique expertise and strong collaborative culture, co-development efforts are at the core of CERN QTI. As we explore the fast-evolving field of quantum technologies, access to the Atos Quantum Learning Machine and Atos' expertise can play an important role in our quantum developments roadmap in support of the high-energy physics community and beyond. A dedicated training workshop is being organized with Atos to investigate the full functionality and potential of the quantum appliance, as well as its future application for some of the CERN QTI activities."

Pierre Barnabé, interim co-CEO and head of Big Data and Cybersecurity at Atos, added: "Atos is the world leader in the convergence of supercomputing and quantum computing, as shown by these two major and strategic projects in France and Germany in which we are involved. At a time when the French government is expected to announce its plan for quantum computing, the durability of our Quantum Board, the quality of the work carried out and the concrete applications of this research in major projects reinforce this position."

The Quantum Advisory Board is made up of universally recognised quantum physicists and includes:

As a result of Atos' programme to anticipate the future of quantum computing and to be prepared for the opportunities and challenges that come with it (Atos Quantum), Atos was the first organization to offer a quantum noisy simulation module that can simulate real qubits, the Atos QLM, and to propose Q-score, the only universal metric to assess quantum performance and superiority. Atos is also the first European patent holder in quantum computing.


IonQ Stock Is an Investment in Cutting Edge, Global Solutions – InvestorPlace

IonQ (NYSE:IONQ) seeks to lead the way in a very specific market: quantum computing. Fortunately, you don't have to be a mathematician or computer scientist to invest in IONQ stock.


It is important to understand what the company does, though. To put it simply, IonQ develops quantum computers designed to solve the world's most complex problems.

This niche industry has vast moneymaking potential. According to IonQ, experts predict that the total addressable market for quantum computing will reach around $65 billion by 2030.

IonQ got in fairly early and aggressively: the company has been around since 2015 and has produced six generations of quantum computers. There's a terrific investment opportunity here, yet the share price is down, and if you ask me, this just doesn't compute.

Going back to the beginning, IonQ offered its shares for public trading on the New York Stock Exchange on Oct. 1, 2021, after reverse-merging with dMY Technology Group III.

The stock started off at around $10 but sank to the low $7s in just a few days' time. However, that turned out to be a great time to start a long position.

Amazingly, IONQ stock staged a swift turnaround and soared to nearly $36 in November. In hindsight, however, this rally went too fast and too far.

Inevitably, a retracement ensued and the early investors had to cough up some of their gains. By early December, the share price had declined to $18 and change.

Sure, you could wait and hope that IONQ stock falls further before considering a position. Yet, you might miss out on a buy-the-dip opportunity with an ambitious, future-facing tech business.

In case I didn't make it abundantly clear already, IonQ is serious about advancing quantum-computing technology.

Case in point: in order to cement its leadership position in this niche, IonQ recently revealed its plans to use barium ions as qubits in its systems, thereby bringing about a wave of advantages it believes will enable advanced quantum computing architectures.

A qubit, or quantum bit, is the basic unit of information in quantum computing, the quantum-mechanical counterpart of the classical bit.

It's perfectly fine if you don't fully understand the scientific minutiae, as IonQ President and CEO Peter Chapman and his team have the necessary know-how and experience.

"We believe the advanced architectures enabled by barium qubits will be even more powerful and more scalable than the systems we have been able to build so far, opening the door to broader applications of quantum computing," Chapman assured.

Apparently, the advantages of using barium ions as qubits include lower error rates, higher gate fidelity, better state detection, more easily networked quantum systems and iterable, more reliable hardware, with more uptime for customers.

Thankfully, now I can leave the science to the scientists and focus on what I do best: breaking down financial data. After all, I'd be hard-pressed to recommend any company if it didn't at least have a decent capital position.

CFO Thomas Kramer was evidently glad to report that, as of Sept. 30, IonQ had cash and cash equivalents of $587 million. The company's strong balance sheet, according to Kramer, will allow IonQ to "accelerate [the] scaling of all business functions and continue attracting the industry's best and brightest."

Since IonQ is well-capitalized, the company should be well-positioned to benefit from Capitol Hill's interest in quantum, as shown by the infrastructure bill, the CFO added.

It's also worth noting that IonQ generated $223,000 in revenues during 2021's third quarter, bringing the year-to-date total to $451,000.

Hopefully, the company can parlay its quantum-computing know-how into seven-figure revenues in the near future.

IonQ's loyal investors don't need to understand everything about qubits. They only need to envision a robust future for the quantum-computing market.

We can't claim that IonQ is generating massive revenues at this point. Therefore, it requires patience and foresight to invest in this company with confidence.

Yet, an early stake could offer vast rewards in the long run. After all, when it comes to deep-level, next-gen quantum computing, IonQ clearly has it down to a science.

On the date of publication, David Moadel did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

David Moadel has provided compelling content and crossed the occasional line on behalf of Crush the Street, Market Realist, TalkMarkets, Finom Group, Benzinga, and (of course) InvestorPlace.com. He also serves as the chief analyst and market researcher for Portfolio Wealth Global and hosts the popular financial YouTube channel Looking at the Markets.


Another setback for ‘Majorana’ particle as Science paper earns an expression of concern – Retraction Watch

You might say that the third time is not the charm for a paper on some elusive fermions.

For the third time this year, a leading science journal has raised concerns about a paper on the Majorana particle, which, if it exists, would hold promise for building a quantum computer.

In March, Nature retracted a paper on the particle, and in July, Science placed an expression of concern on a different paper that purported to find "a relatively easy route to creating and controlling [Majorana zero modes] MZMs in hybrid materials."

Today, Science is slapping an expression of concern on another Majorana paper:

On 21 July, 2017, Science published the Report "Chiral Majorana fermion modes in a quantum anomalous Hall insulator-superconductor structure" by Q. L. He et al. Since that time, raw data files were offered by the authors in response to queries from readers who had failed to reproduce the findings. Those data files did not clarify the underlying issues, and now their provenance has come into question. While the authors' institutions investigate further, Science is alerting readers to these concerns.

The article has been cited 355 times, according to Clarivate Analytics Web of Science, earning it a Highly Cited Paper designation.

None of the authors could be reached for comment, and a few of their emails bounced because they had left their employers since 2017. He Qinglin and Wang Kanglong, two of the authors, defended the findings in a 2020 blog post.

Vincent Mourik, who along with Sergey Frolov had raised concerns about the retracted Nature paper and the other Science paper subjected to an expression of concern, said he and Frolov had not spoken publicly about the newly flagged paper, nor formally reached out to Science about it. Mourik told Retraction Watch:

First, upon reading this expression of concern carefully, it appears there are significant problems with the raw data itself. To me, the usage of the word "provenance" suggests that it is now unclear where the data came from, after repeated reader requests.

Second, this paper has been controversial from the start. Already the paper's figures raised many questions; simply put, they seem to violate some very basic rules of electrical circuits called Kirchhoff's rules.

Third, an extensive reproduction study carried out at Penn State failed at finding the same signatures.

He added:

Frankly speaking, I am happy to see other researchers in our field also take on this challenging and thankless task of investigating suspicious papers. If more people would do it, one day it may be tolerable again to do science.


US is risking APOCALYPSE with millions lining up for food & water if there's a cyberattack on power grid,… – The Sun

THE US is risking an "apocalypse" with millions lined up for basic needs if there is a cyberattack on the power grid, experts have warned.

Experts have been worried for years that the national power grid is vulnerable to cyberattacks from outside countries should they wish to target the US.


A study being carried out by researchers at Hudson Institute's Quantum Alliance Initiative is looking into how destructive a hypothetical quantum cyberattack on the US power grid would be, and preliminary results are bleak, to say the least.

The early results suggest that the protection of the country's power grids should be an urgent priority, much more so than it already is.

"The study's preliminary results offer important clues as to the areas on which policymakers should focus, not only to secure our power grid from a large-scale quantum computer attack but also, in the event this were to be unsuccessful, to mitigate such an attack's impact on our infrastructure, both in terms of economic and national security," an introduction to the Hudson Institute's study says.

The study authors gave an example of the disastrous effects when the power grid went down in Texas after a storm earlier this year.

"Millions without power; stores and banks shut down; vital services running on emergency generators, if at all; lines of hapless people awaiting food and water.

"The experience that the state of Texas underwent during February 2021 is only a preview of what we would all face should the United States ever-vulnerable energy grid be subject to a major cyberattack," the study introduction says.

A task force within the US Department of Energy, the North American Energy Resiliency Model (NAERM), is already tasked with considering how to best protect the country's energy grid from both natural disasters and terrorism or cyber assaults.

However, study authors warn that NAERM is focused on known, existing cyber threats and not on the possibility of quantum computer attacks.

"NAERMs purview ... encompasses only existing, conventional cyber threats and does not extend to quantum computer attacks, whose effects would be far more protracted and far worse than those of a conventional cyberattack," the study says.

"Indeed, the 'smarter' a grid is, that is, the greater the extent to which it relies on computer supervision and control, the more vulnerable it would be to such an attack."

The authors warn that a quantum computer attack could cause "catastrophic harm" to both the economy and society as a whole unless steps are taken now to mitigate the risk.


Revisit Top AI, Machine Learning And Data Trends Of 2021 – ITPro Today

This past year has been a strange one in many respects: an ongoing pandemic, inflation, supply chain woes, uncertain plans for returning to the office, and worrying unemployment levels followed by the Great Resignation. After the shock of 2020, anyone hoping for a calm 2021 had to have been disappointed.

Data management and digital transformation remained in flux amid the ups and downs. Due to the ongoing challenges of the COVID-19 pandemic, as well as trends that were already underway prior to 2021, this retrospective article has a variety of enterprise AI, machine learning and data developments to cover.

Automation was a buzzword in 2021, thanks in part to the advantages that tools like automation software and robotics provided companies. As workplaces adapted to COVID-19 safety protocols, AI-powered automation proved beneficial. Since March 2020, two-thirds of companies have accelerated their adoption of AI and automation, consultancy McKinsey & Company found, making it one of the top AI and data trends of 2021.

In particular, robotic process automation (RPA) gained traction in several sectors, where it was put to use for tasks like processing transactions and sending notifications. RPA-focused firms like UiPath and tech giants like Microsoft went in on RPA this year. RPA software revenue will be up nearly 20% in 2021, according to research firm Gartner.

But while the pandemic may have sped up enterprise automation adoption, it appears RPA tools have lasting power. For example, Research and Markets predicted the RPA market will have a compound annual growth rate of 31.5% from 2021 to 2026. If 2020 was a year of RPA investment, 2021 and beyond will see those investments going to scale.

"Micro-automation is one of the next steps in this area," said Mark Palmer, senior vice president of data, analytics and data science products at TIBCO Software, an enterprise data company. "Adaptive, incremental, dynamic learning techniques are growing fields of AI/ML that, when applied to the RPA's exhaust, can make observations on the fly," Palmer said. "These dynamic learning technologies help business users see and act on aha moments and make smarter decisions."

Automation also played an increasingly critical role in hybrid workplace models. While the tech sector has long accepted remote and hybrid work arrangements, other industries now embrace these models, as well. Automation tools can help offsite employees work efficiently and securely -- for example, by providing technical or HR support, security threat monitoring, and integrations with cloud-based services and software.

However, remote and hybrid workers do represent a potential pain point in one area: cybersecurity. With more employees working outside the corporate network, even if for only part of the work week, IT professionals must monitor more equipment for potential vulnerabilities.

The hybrid workforce influenced data trends in 2021. The wider distribution of IT infrastructure, along with increasing adoption of cloud-based services and software, added new layers of concerns about data storage and security. In addition, the surge in cyberattacks during the pandemic represented a substantial threat to enterprise data security. As organizations generate, store and use ever-greater amounts of data, an IT focus on cybersecurity is only going to become increasingly vital.

Altogether, these developments point to an overarching enterprise AI, ML and data trend for 2021: digital transformation. Spending on digital transformation is expected to hit $1.8 trillion in 2022, according to Statista, which illustrates that organizations are willing to invest in this area.

As companies realize the value of data and the potential of machine learning in their operations, they also recognize the limitations posed by their legacy systems and outdated processes. The pandemic spurred many organizations to either launch or elevate digital transformation strategies, and those strategies will likely continue throughout 2022.

How did the AI, ML and data trends of 2021 change the way you work? Tell us in the comments below.


The automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030 – Yahoo Finance

AutoML Market: From $346.2 million in 2020, the automated machine learning market is predicted to reach $14,830.8 million by 2030, demonstrating a CAGR of 45.6% from 2020 to 2030.

New York, Dec. 16, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "AutoML Market" - https://www.reportlinker.com/p06191010/?utm_source=GNW The major factors driving the market are the burgeoning requirement for efficient fraud detection solutions, soaring demand for personalized product recommendations, and increasing need for predictive lead scoring.

The COVID-19 pandemic has contributed significantly to the evolution of digital business models, with many healthcare companies adopting machine-learning-enabled chatbots to enable the contactless screening of COVID-19 symptoms. Moreover, Clevy.io, which is a France-based start-up, and Amazon Web Services (AWS) have launched a chatbot for making the process of finding official government communications about the COVID-19 infection easy. Thus, the pandemic has positively impacted the market.

The service category, under the offering segment, is predicted to demonstrate the fastest growth in the coming years. This is credited to the burgeoning requirement for implementation and integration, consulting, and maintenance services, as they assist in enhancing business productivity and augmenting coding activities. Additionally, these services aid in automating workflows, which, in turn, enables the mechanization of complex operations.

The cloud category dominated the AutoML market, within the deployment type segment, in the past. Moreover, this category is predicted to grow rapidly in the forthcoming years on account of the flexibility and scalability provided by cloud-based automated machine learning (AutoML) solutions.

Geographically, North America held the largest share in the past, and this trend is expected to continue in the coming years. This is credited to the soaring venture capital funding by artificial intelligence (AI) companies for research and development (R&D), in order to advance AutoML.

Asia-Pacific (APAC) is predicted to be the fastest-growing region in the market in the forthcoming years. This is ascribed to the growing information technology (IT) investments and increasing fintech adoption in the region. In addition, the growing government focus on incorporating AI in multiple verticals is supporting the advance of the market in the region.

For instance, in October 2021, Hivecell, which is an edge as a service company, entered into a partnership with DataRobot Inc. for solving bigger challenges and hurdles at the edge, by processing various ML models on site and outside the data closet. By incorporating the two solutions, businesses can make data-driven decisions more efficiently.

The major players in the AutoML market are DataRobot Inc., dotData Inc., H2O.ai Inc., Amazon Web Services Inc., Big Squid Inc., Microsoft Corporation, Determined.ai Inc., SAS Institute Inc., Squark, and EdgeVerve Systems Limited.

Read the full report: https://www.reportlinker.com/p06191010/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Human-centered AI can improve the patient experience – Healthcare IT News

Given the growing ubiquity of machine learning and artificial intelligence in healthcare settings, it's become increasingly important to meet patient needs and engage users.

And as panelists noted during a HIMSS Machine Learning and AI for Healthcare Forum session this week, designing technology with the user in mind is a vital way to ensure tools become an integral part of workflow.

"Big Tech has stumbled somewhat" in this regard, said Bill Fox, healthcare and life sciences lead at SambaNova Systems. "The patients, the providers they don't really care that much about the technology, how cool it is, what it can do from a technological standpoint.

"It really has to work for them," Fox added.

Jai Nahar, a pediatric cardiologist at Children's National Hospital, agreed, stressing the importance of human-centered AI design in healthcare delivery.

"Whenever we're trying to roll out a productive solution that incorporates AI," he said, "right from the designing [stage] of the product or service itself, the patients should be involved."

That inclusion should also extend to provider users, he said: "Before rolling out any product or service, we should involve physicians or clinicians who are going to use the technology."

The panel, moderated by Rebekah Angove, vice president of evaluation and patient experience at the Patient Advocate Foundation, noted that AI is already affecting patients both directly and indirectly.

In ideal scenarios, for example, it's empowering doctors to spend more time with individuals. "There's going to be a human in the loop for a very long time," said Fox.

"We can power the clinician with better information from a much larger data set," he continued. AI is also enabling screening tools and patient access, said the experts.

"There are many things that work in the background that impact [patient] lives and experience already," said Piyush Mathur, staff anesthesiologist and critical care physician at the Cleveland Clinic.

At the same time, the panel pointed to the role clinicians can play in building patient trust around artificial intelligence and machine learning technology.

Nahar said that as a provider, he considers several questions when using an AI-powered tool for his patient. "Is the technology really needed for this patient to solve this problem?" he said he asks himself. "How will it improve the care that I deliver to the patient? Is it something reliable?"

"Those are the points, as a physician, I would like to know," he said.

Mathur also raised the issue of educating clinicians about AI. "We have to understand it a little bit better to be able to translate that science to the patients in their own language," he said. "We have to be the guardians of making sure that we're providing the right data for the patient."

The panelists discussed the problem of bias, about which patients may have concerns, and rightly so.

"There are multiple entry points at which bias can be introduced," said Nahar.

During the design process, he said, multiple stakeholders need to be involved to closely consider where bias could be coming from and how it can be mitigated.

As panelists have pointed out at other sessions, he also emphasized the importance of evaluating tools in an ongoing process.

Developers and users should be asking themselves, "How can we improve and make it better?" he said.

Overall, said Nahar, best practices and guidelines need to be established to better implement and operationalize AI, from both the patient perspective and the provider perspective.

The onus is "upon us to make sure we use this technology in the correct way to improve care for our patients," added Mathur.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.


Continual Launches With $4 Million in Seed to Bring AI to the Modern Data Stack – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Continual, a company building a next-generation AI platform for the modern data stack, today announces its public beta launch with $4 million in seed funding. The round was led by Amplify Partners, a firm that invests in companies with a vision of transforming infrastructure and machine intelligence tools. Illuminate Ventures, Essence, Wayfinder, and Data Community Fund also participated in the round.

The modern data stack centered on cloud data warehouses like Snowflake is rapidly democratizing data and analytics, but deploying AI at scale into business operations, products, or services remains a challenge for most companies. Powered by a declarative approach to operational AI and end-to-end automation, Continual enables modern data and analytics teams to build continually improving machine learning models directly on their cloud data warehouse without complex engineering.

Continual brings together second-time founders Tristan Zajonc and Tyler Kohn, who previously built and sold machine learning infrastructure startups. Cofounder and CEO Tristan's first startup, Sense, a pioneering enterprise data science platform, was acquired by Cloudera in 2016. Continual's cofounder and CTO, Tyler Kohn, built RichRelevance, the world's leading personalization provider, before it was acquired by Manthan in 2019. Tristan and Tyler saw the huge gap between the transformational potential of AI and the day-to-day struggle most companies faced operationalizing AI using real-world data. They founded Continual to radically simplify operational AI by taking a fundamentally new approach.

"Artificial intelligence has the potential to transform every industry, department, product and service, but current solutions require complex infrastructure, advanced skills, and constant maintenance. Continual breaks through this complexity with a radical simplification of the machine learning development lifecycle, combining a declarative approach to operational AI, end-to-end automation, and the agility of the modern data stack. Our customers are deploying state-of-the-art predictive models that never stop learning from their data in minutes rather than months," said Tristan Zajonc, CEO and cofounder of Continual.

"Getting continually improving predictive insights from data is critical for businesses to operate efficiently and better serve their customers. Yet operationalizing AI remains a challenge for all but the most sophisticated companies," said David Beyer, Partner at Amplify Partners. "Continual meets data teams where they work - inside the cloud data warehouse - and lets them build and deploy continually improving predictive models in a fraction of the time existing approaches demand. We invested because we believe their approach is fundamentally new and, most importantly, the right one to make AI work across the enterprise."

With the new capital, Continual plans to more than double its team over the next year with new hires for sales and engineering roles. It will expand into new AI/ML use cases such as NLP, real-time, and personalization, and broaden support for additional cloud data platforms. Continual is offering a 14-day trial with its open beta release, enhancements for dbt users, and support for Snowflake, Redshift, BigQuery, and Databricks.

"dbt was built on the idea that the unlock for data teams is a collaborative workflow that brings more people into the knowledge creation process. Continual brings this same viewpoint to machine learning, adding new capabilities to the analytics engineers' tool belt," said Nikhil Kothari, Head of Technology Partnerships at dbt Labs. "We're excited to partner with Continual to help bring operational AI to the dbt community."

"Continual is enabling organizations to easily build, deploy, and maintain continually improving predictive models directly on top of Snowflake," said Tarik Dwiek, Head of Technology Alliances at Snowflake. "As part of our partnership, we're excited to help bring these benefits to the Snowflake community and to accelerate end-to-end machine learning workflows on top of Snowflake with Snowpark."

To learn more about Continual or to sign up for a 14-day trial, visit: https://continual.ai

About Continual

Based in San Francisco, Continual is a next-generation AI platform for the modern data stack powered by end-to-end automation and a declarative workflow. Modern data teams use Continual to deploy continually improving predictive models to drive revenue, operate more efficiently, and power innovative products and services. Continual has raised $4 million in funding from Amplify Partners, Illuminate Ventures, Essence, Wayfinder, and Data Community Fund. For more information, visit https://continual.ai/

About Amplify Partners

Amplify Partners invests in early-stage companies pioneering novel applications in machine intelligence and computer science. The firm's deep domain expertise, unrivaled relationships with leading technologists and decades of operational experience, positions it uniquely with enterprise insight and the ability to serve technical founding teams. To learn more about Amplify's portfolio and people, please visit amplifypartners.com.


Artificial intelligence accurately predicts who will develop dementia in two years – EurekAlert

Artificial intelligence can predict which people who attend memory clinics will develop dementia within two years with 92 per cent accuracy, a large-scale new study has concluded.

Using data from more than 15,300 patients in the US, research from the University of Exeter found that a form of artificial intelligence called machine learning can accurately tell who will go on to develop dementia.

The technique works by spotting hidden patterns in the data and learning who is most at risk. The study, published in JAMA Network Open and funded by Alzheimer's Research UK, also suggested that the algorithm could help reduce the number of people who may have been falsely diagnosed with dementia.

The researchers analysed data from people who attended a network of 30 National Alzheimer's Coordinating Center memory clinics in the US. The attendees did not have dementia at the start of the study, though many were experiencing problems with memory or other brain functions.

In the study timeframe between 2005 and 2015, one in ten attendees (1,568) received a new diagnosis of dementia within two years of visiting the memory clinic. The research found that the machine learning model could predict these new dementia cases with up to 92 per cent accuracy and far more accurately than two existing alternative research methods.

The researchers also found for the first time that around eight per cent (130) of the dementia diagnoses appeared to be made in error, as their diagnosis was subsequently reversed. Machine learning models accurately identified more than 80 per cent of these inconsistent diagnoses. Artificial intelligence can not only accurately predict who will be diagnosed with dementia; it also has the potential to improve the accuracy of these diagnoses.

Professor David Llewellyn, an Alan Turing Fellow based at the University of Exeter, who oversaw the study, said: "We're now able to teach computers to accurately predict who will go on to develop dementia within two years. We're also excited to learn that our machine learning approach was able to identify patients who may have been misdiagnosed. This has the potential to reduce the guesswork in clinical practice and significantly improve the diagnostic pathway, helping families access the support they need as swiftly and as accurately as possible."

Dr Janice Ranson, Research Fellow at the University of Exeter, added: "We know that dementia is a highly feared condition. Embedding machine learning in memory clinics could help ensure diagnosis is far more accurate, reducing the unnecessary distress that a wrong diagnosis could cause."

The researchers found that machine learning works efficiently, using patient information routinely available in clinic, such as memory and brain function, performance on cognitive tests and specific lifestyle factors. The team now plans to conduct follow-up studies to evaluate the practical use of the machine learning method in clinics, to assess whether it can be rolled out to improve dementia diagnosis, treatment and care.
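For readers curious what such a model looks like mechanically, here is a purely illustrative sketch in Python. It is not the study's actual model, feature set, or data; the clinic variables, labeling rule, and numbers below are synthetic stand-ins for the kind of routinely collected information described above.

# Illustrative only: a classifier over synthetic stand-ins for clinic
# features, predicting a two-year dementia diagnosis label.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Hypothetical features: [age, memory_test_score, cognitive_test_score]
X = np.column_stack([
    rng.uniform(55.0, 90.0, 1000),
    rng.uniform(0.0, 30.0, 1000),
    rng.uniform(0.0, 30.0, 1000),
])
# Synthetic labels: older attendees with low memory scores more often positive
y = ((X[:, 0] > 75) & (X[:, 1] < 12)).astype(int)

clf = GradientBoostingClassifier()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())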

Dr Rosa Sancho, Head of Research at Alzheimer's Research UK, said: "Artificial intelligence has huge potential for improving early detection of the diseases that cause dementia and could revolutionise the diagnosis process for people concerned about themselves or a loved one showing symptoms. This technique is a significant improvement over existing alternative approaches and could give doctors a basis for recommending lifestyle changes and identifying people who might benefit from support or in-depth assessments."

The study is entitled "Performance of Machine Learning Algorithms for Predicting Progression to Dementia in Memory Clinic Patients," by Charlotte James, Janice M. Ranson, Richard Everson and David J. Llewellyn. It is published in JAMA Network Open.


Real World Application of Machine Learning in Networking – IoT For All

Rapidly rising demand for Internet connectivity has put a strain on improving network infrastructure, performance, and other critical parameters. Network administrators will invariably encounter different types of networks running multiple network applications. Each network application has its own set of features and performance parameters that may change dynamically. Because of the diversity and complexity of networks, using conventional algorithms or hard-coded techniques built for such network scenarios is a challenging task.

Machine learning has proven to be beneficial in almost every industry, and the networking industry is no exception. Machine learning can help solve intractable old networking blockers and stimulate new network applications that make networking quite convenient. Let's discuss in detail the basic workflow, with a few use cases to better understand applied machine learning technology in the networking domain.

With the growing demand for Internet of Things (IoT) solutions, modern networks generate massive and heterogeneous traffic data. For such dynamic networks, traditional network management techniques for traffic monitoring and data analytics, like ping monitoring, log file monitoring, or even SNMP, are not enough. They usually lack accuracy and effective processing of real-time data. On the other hand, traffic from other sources, like cellular or mobile devices in the network, shows comparatively more complex behavior due to device mobility and network heterogeneity.

Machine learning facilitates analytics in big data systems as well as large-area networks to recognize complex patterns when it comes to managing such networks. Looking at these opportunities, researchers in the field of networking use deep learning models for Network Traffic Monitoring and Analysis applications like traffic classification and prediction, congestion control, etc.

Network telemetry data provides basic metrics about network performance. This information is usually quite difficult to interpret. Considering the size of the network and the total data going through it, the analyzed telemetry holds tremendous value; if used smartly, it can drastically improve performance.

Emerging technologies like In-band Network Telemetry (INT) can help when collecting detailed network telemetry data in real time. On top of that, running machine learning on such datasets can help correlate phenomena between latency, paths, switches, routers, events, and so on. These phenomena were difficult to pinpoint in the enormous amounts of real-time data using traditional methods.

Machine learning models are trained to understand correlations and patterns in the telemetry data. These algorithms then eventually gain the ability to predict the future based on learning from historical data. This helps in managing future network outages.
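To make this concrete, here is a minimal sketch in Python with scikit-learn, assuming the telemetry has already been reduced to per-window tabular features. The feature names (average latency, packet loss, queue depth), the synthetic data, and the labeling rule are all invented for illustration; a real pipeline would derive them from its own telemetry and outage history.

# Illustrative sketch only: a classifier over hypothetical telemetry
# features, trained to flag windows resembling those that preceded outages.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [avg_latency_ms, packet_loss_pct, switch_queue_depth]
X = rng.normal(loc=[20.0, 0.5, 100.0], scale=[5.0, 0.3, 40.0], size=(1000, 3))
# Label 1 if the window preceded an outage (synthetic rule, demo only)
y = ((X[:, 0] > 25) & (X[:, 2] > 130)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

Trained on real historical windows instead of this synthetic set, the same shape of model could score incoming telemetry and raise an early warning before an outage develops.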

Every network infrastructure has a predefined total throughput available. It is further split into multiple lanes of different predefined bandwidths. In such scenarios, where the total bandwidth usage for each end-user is statically predefined, there can be bottlenecks for some parts of the network where the network is overwhelmingly used.

To avoid such congestion, supervised machine learning models can be trained to analyze network traffic in real time and infer a suitable amount of bandwidth per user in such a way that the network experiences the fewest bottlenecks.

Such models can learn from the network statistics such as total active users per network node, historical network usage data for each user, time-based patterns of data usage, movement of users across multiple access points, and so on.
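A hedged sketch of that inference step follows, reducing the statistics listed above to three invented features and using a simple regression to suggest a per-user allocation; production systems would add many more features plus hard constraints on the total available throughput.

# Minimal sketch, assuming per-user statistics are available as features.
# Feature names, synthetic data, and the target rule are invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Features: [active_users_on_node, historical_avg_mbps, hour_of_day]
X = np.column_stack([
    rng.integers(1, 50, 500),      # active users on the node
    rng.uniform(1.0, 100.0, 500),  # user's historical average Mbps
    rng.integers(0, 24, 500),      # hour of day
]).astype(float)
# Target: Mbps to allocate (synthetic ground truth for the demo)
y = 0.8 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(0.0, 2.0, 500)

model = Ridge(alpha=1.0).fit(X, y)
# 30 active users, 40 Mbps average history, 8 pm
suggested = model.predict([[30.0, 40.0, 20.0]])
print(f"suggested allocation: {suggested[0]:.1f} Mbps")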

In each network, there exist various kinds of traffic, like web hosting (HTTP), file transfers (FTP), secure browsing (HTTPS), HTTP live video streaming (HLS), terminal services (SSH), and so on. Each of these behaves differently when it comes to network bandwidth usage; for example, transferring a file over FTP uses a lot of data continuously for the duration of the transfer.

As another example, if a video is being streamed, it uses the data in chunks and a buffering method. These different types of traffic, when allowed to use the network in an unsupervised way, create some temporary blockages.

To avoid this, machine learning classifiers can be used which can analyze and classify the type of traffic going through the network. These models can then be used to infer network parameters like allocated bandwidth, data caps, etc., which can in turn help improve the performance of the network by improving the scheduling of requests served and also dynamically changing the assigned bandwidths.
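As an illustration, the sketch below trains such a classifier on invented per-flow features (mean packet size, mean inter-arrival time, flow duration) with placeholder labels; in practice the features would come from flow records and the labels from labeled traffic traces.

# Hedged sketch of a traffic-type classifier; all data here is synthetic,
# so the fitted model is only a demonstration of the workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
classes = ["HTTP", "FTP", "HLS", "SSH"]

# Synthetic flows: [mean_packet_bytes, mean_interarrival_ms, duration_s]
X = rng.uniform([60.0, 0.1, 0.5], [1500.0, 50.0, 600.0], size=(800, 3))
y = rng.integers(0, len(classes), 800)  # placeholder labels, demo only

clf = GradientBoostingClassifier().fit(X, y)
flow = [[1400.0, 2.0, 120.0]]  # a long flow of large packets
print("predicted class:", classes[clf.predict(flow)[0]])

The predicted class can then drive the scheduling and bandwidth decisions described above, for instance by giving buffered video a different allocation profile than a bulk FTP transfer.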

The increase in the number of cyberattacks forces organizations to constantly monitor and correlate millions of external and internal data points across the whole network infrastructure and its users. Manual management of a large volume of real-time data becomes difficult. This is where machine learning helps.

Machine learning can recognize certain patterns and anomalies in the network and predict threats in massive data sets, all in real-time. By automating such analysis, it becomes easy for network managers to detect threats and isolate situations rapidly with reduced human efforts.

Network behavior is an important parameter in machine learning systems for anomaly detection. Machine learning engines process enormous amounts of data in real-time to identify threats, unknown malware, and policy violations.

If the network behavior is found to be within the predefined behavior, the network transaction is accepted; otherwise, an alert gets triggered in the system. This can be used to prevent many kinds of attacks like DoS, DDoS, and probing.
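One common way to implement such a behavior check is an unsupervised detector fit only on traffic considered normal; the sketch below uses scikit-learn's IsolationForest on invented features, with a -1 prediction standing in for the alert described above.

# Sketch of behavior-based anomaly detection on synthetic "normal" traffic.
# Features and thresholds are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Normal traffic: [requests_per_sec, bytes_per_request, distinct_dest_ports]
normal = rng.normal(loc=[50.0, 2000.0, 3.0], scale=[10.0, 400.0, 1.0],
                    size=(2000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A probing burst: many tiny requests across hundreds of ports
probe = [[900.0, 60.0, 800.0]]
print("verdict:", "alert" if detector.predict(probe)[0] == -1 else "accept")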

It's quite easy to trick someone into clicking a malicious link that seems legitimate, then try to break through a computer's defense systems with the information gathered. Machine learning helps by flagging suspicious websites to prevent people from connecting to malicious sites.

For example, a text classifier machine learning model can read and understand URLs and identify those spoofed phishing URLs. This will create a much safer browsing experience for the end-users.
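A toy version of that text classifier could use character n-grams over the URL string, which capture tell-tale tricks like digit substitution ("examp1e") and odd domains; the URLs and labels below are invented for the demo.

# Illustrative phishing-URL classifier: char n-gram TF-IDF features
# feeding a logistic regression. Toy data, not a vetted blocklist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "https://accounts.example.com/login",       # legitimate
    "https://example.com/help",                 # legitimate
    "http://examp1e-login.xyz/verify-account",  # spoofed
    "http://secure-examplecom.top/update",      # spoofed
] * 25  # repeat the toy set so the model has something to fit
labels = [0, 0, 1, 1] * 25  # 1 = phishing

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)
print(model.predict(["http://examp1e-verify.xyz/login"]))  # expect [1]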

The integration of machine learning in networking is not limited to the above-mentioned use cases. Solutions that use ML for networking and network security can be developed to address still-open issues, drawing on the opportunities and research from both the networking and machine learning perspectives.
