
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO of Northern Light; Sid Probstein, CTO of Keeeb; and Tom Barfield, chief solution architect of Keeeb, discussed machine learning and KM.

KMWorld Connect, November 16-19, and its co-located events cover future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss walked through an analysis of the company's conference presentations. Judging by the topics, diabetes had sunk lower as a focus of AstraZeneca's attention.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
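As a rough illustration of the semantic-graph idea (not Northern Light's actual implementation), the sketch below builds a tiny graph from term co-occurrence across documents and ranks each idea by the number and strength of its relationships; the sample documents and the weighted-degree scoring are assumptions made for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sample "documents" (e.g., key terms pulled from conference abstracts).
docs = [
    ["immunotherapy", "oncology", "checkpoint", "trial"],
    ["oncology", "checkpoint", "biomarker"],
    ["covid-19", "vaccine", "trial"],
    ["oncology", "biomarker", "trial"],
]

# Edge weight = number of documents in which two terms co-occur.
edges = Counter()
for terms in docs:
    for a, b in combinations(sorted(set(terms)), 2):
        edges[(a, b)] += 1

# Rank each term by the total strength of its relationships (weighted degree).
strength = Counter()
for (a, b), weight in edges.items():
    strength[a] += weight
    strength[b] += weight

for term, score in strength.most_common():
    print(f"{term:15s} {score}")
```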

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrating knowledge sharing into work. They looked at collaborative information curation, in which end users help identify the best content so KM teams can focus on the most strategic knowledge challenges, as well as the pragmatic application of AI through text analytics to improve curation, findability, and performance.

"The super silo is on the rise," Probstein said. It stores files, logs, and customer/sales data, and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. Companies should give end users simple approaches to organize content, augmented with AI, benefitting both themselves and others, Barfield said.

More here:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

Read More..

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune system's attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and the drugs are expensive and often cause side effects.

"While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity," said corresponding study author Iman Osman, medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

"An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity."


Researchers set out to develop a machine learning model that could help predict a melanoma patients response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million portions of pixels, the small bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether a patient responded to the treatment.
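The study's exact architecture and preprocessing are not described here, but a minimal sketch of this kind of patch-based pipeline, combining image patches with per-patient clinical variables to predict response, might look like the following; the array sizes, patch size, stand-in data, and choice of a simple classifier are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def extract_patches(slide, patch=32):
    """Split a slide (H x W x 3 array) into non-overlapping square patches."""
    h, w, _ = slide.shape
    return [slide[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

# Stand-in data: 20 "slides", a 2-value clinical vector per patient
# (e.g., disease severity, regimen), and an arbitrary response label.
slides = [rng.random((128, 128, 3)) for _ in range(20)]
clinical = rng.random((20, 2))
response = np.array([0, 1] * 10)

# Represent each patient by the mean patch feature vector plus the clinical data.
X = np.stack([
    np.concatenate([
        np.mean([p.mean(axis=(0, 1)) for p in extract_patches(s)], axis=0),
        c,
    ])
    for s, c in zip(slides, clinical)
])

model = LogisticRegression().fit(X, response)
print("training accuracy:", model.score(X, response))
```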

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.

"Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy," said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.


"A key advantage of our artificial intelligence program over other approaches such as genetic or blood analysis is that it does not require any special equipment," said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

"Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis," said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

"Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting," said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.


"Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables."

However, researchers also noted that the algorithm will not be ready for clinical use until they can boost its accuracy from 80 percent to 90 percent and test it at more institutions. The research team plans to collect more data to improve the performance of the model.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

"There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable," said Tsirigos.

There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance.

See the original post:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

Read More..

This New Machine Learning Tool Might Stop Misinformation – Digital Information World

Misinformation has always been a problem, but the combination of widespread social media and a loose definition of what counts as factual truth has led to a veritable explosion of misinformation over the past few years. The problem is so dire that in many cases websites are created specifically to help false stories spread more easily, and this is a problem that might just have been addressed by a new machine learning tool.

This machine learning tool, developed by researchers at UCL, Berkeley, and Cornell, analyzes domain registration data to ascertain whether a URL is legitimate or whether it was created specifically to legitimize a certain piece of information that people might be trying to spread around. A couple of other factors also come into play. For example, if the identity of the person who registered the domain is private, this might be a sign that the site is not legitimate. The timing of the domain registration matters too: if it was done around the time a major news event broke, such as the recent US presidential election, this is also a negative sign.
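The researchers' actual model and feature set are not detailed in this piece, but the general approach of classifying a domain from its registration metadata can be sketched as follows; the feature names, sample records, and choice of classifier are assumptions made for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical records: [registrant_is_private, days_from_major_news_event, domain_age_days]
# Labels: 1 = known fake-news domain, 0 = legitimate domain.
train_features = [
    [1, 2, 10],      # private registrant, created 2 days after a major event, 10 days old
    [1, 1, 5],
    [0, 200, 3650],
    [0, 400, 1800],
    [1, 3, 20],
    [0, 150, 900],
]
train_labels = [1, 1, 0, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_features, train_labels)

# Score a newly observed domain registered anonymously right before an election.
candidate = [[1, 1, 7]]
print("probability of being a fake domain:", clf.predict_proba(candidate)[0][1])
```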

That said, it is important to note that this new machine learning tool has a pretty impressive success rate of about 92%, the proportion of fake domains it was able to discover. Being able to tell whether a news source is legitimate or direct propaganda is useful because it can reduce the likelihood that people end up taking the misinformation seriously.

Original post:
This New Machine Learning Tool Might Stop Misinformation - Digital Information World

Read More..

Fujitsu, AIST and RIKEN Achieve Unparalleled Speed on MLPerf HPC Machine Learning Processing Benchmark – HPCwire

TOKYO, Nov 19, 2020 Fujitsu, the National Institute of Advanced Industrial Science and Technology (AIST), and RIKEN today announced a performance milestone in supercomputing, achieving the highest performance and claiming the top ranking positions on the MLPerf HPC benchmark. The MLPerf HPC benchmark measures large-scale machine learning processing on a level requiring supercomputers, and the parties achieved these outcomes leveraging approximately half of the AI-Bridging Cloud Infrastructure (ABCI) supercomputer system, operated by AIST, and about 1/10 of the resources of the supercomputer Fugaku, which is currently under joint development by RIKEN and Fujitsu.

Utilizing about half the computing resources of its system, ABCI achieved processing speeds 20 times faster than other GPU-type systems. That is the highest performance among supercomputers based on GPUs, computing devices specialized in deep learning. Similarly, about 1/10 of Fugaku was utilized to set a record for CPU-type supercomputers consisting of general-purpose computing devices only, achieving a processing speed 14 times faster than that of other CPU-type systems.

The results were presented as MLPerf HPC v0.7 on November 18th (November 19th Japan Time) at the 2020 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC20) event, which is currently being held online.

Background

MLPerf HPC is a performance competition in two benchmark programs: CosmoFlow, which predicts cosmological parameters, and DeepCAM, which identifies abnormal weather phenomena. ABCI ranked first among all registered systems in the CosmoFlow benchmark program, using about half of the whole ABCI system, and Fugaku ranked second, measured with about 1/10 of its whole system. The ABCI system delivered 20 times the performance of the other GPU-type systems, while Fugaku delivered 14 times the performance of the other CPU-type systems. ABCI achieved first place among all registered systems in the DeepCAM benchmark program as well, also with about half of the system. In this way, ABCI and Fugaku overwhelmingly dominated the top positions, demonstrating the superior technological capabilities of Japanese supercomputers in the field of machine learning.

Fujitsu, AIST, RIKEN and Fujitsu Laboratories Limited will publicly release the software stacks, including the library and the AI framework that accelerate the large-scale machine learning processing developed for this measurement. This move will make it easier to use large-scale machine learning with supercomputers, while its use in analyzing simulation results is anticipated to contribute to the detection of abnormal weather phenomena and to new discoveries in astrophysics. As a core platform for building Society 5.0, it will also contribute to solving social and scientific issues, as it is expected to expand to applications such as the creation of general-purpose language models that require enormous computational performance.

About MLPerf HPC

MLPerf is a machine learning benchmark community established in May 2018 for the purpose of creating a performance list of systems running machine learning applications. MLPerf developed MLPerf HPC as a new machine learning benchmark to evaluate the performance of machine learning calculations using supercomputers. It is used for supercomputers around the world and is expected to become a new industry standard. MLPerf HPC v0.7 evaluated performance on two real applications, CosmoFlow and DeepCAM, to measure large-scale machine learning performance requiring the use of a supercomputer.

All measurement data are available on the following website: https://mlperf.org/

Comments from the Partners

Fujitsu, Executive Director, Naoki Shinjo: The successful construction and optimization of the software stack for large-scale deep learning processing, executed in close collaboration with AIST, RIKEN, and many other stakeholders, made this achievement a reality, helping us to claim the top position in the MLPerf HPC benchmark in an important milestone for the HPC community. I would like to express my heartfelt gratitude to all concerned for their great cooperation and support. We are confident that these results will pave the way for the use of supercomputers for increasingly large-scale machine learning processing tasks and contribute to many research and development projects in the future, and we are proud that Japan's research and development capabilities will help lead global efforts in this field.

Hirotaka Ogawa, Principal Research Manager, Artificial Intelligence Research Center, AIST: ABCI was launched on August 1, 2018 as an open, advanced, and high-performance computing infrastructure for the development of artificial intelligence technologies in Japan. Since then, it has been used in industry-academia-government collaboration and by a diverse range of businesses to accelerate R&D and verification of AI technologies that utilize high computing power, and to advance social utilization of AI technologies. The overwhelming results on MLPerf HPC, the benchmark for large-scale machine learning processing, showed the world the high level of technological capability of Japan's industry-academia-government collaboration. AIST's Artificial Intelligence Research Center is promoting the construction of large-scale machine learning models with high versatility and the development of their application technologies, with the aim of realizing easily constructable AI. We expect that these results will be utilized in such technological development.

Satoshi Matsuoka, Director General, RIKEN Center for Computational Science: In this memorable first MLPerf HPC, Fugaku, Japan's top CPU supercomputer, along with AIST's ABCI, Japan's top GPU supercomputer, exhibited extraordinary performance and results, serving as a testament to Japan's ability to compete at an exceptional level on the global stage in the area of AI research and development. I only regret that we couldn't achieve the same overwhelming performance as we did for HPL-AI while complying with the inaugural regulations of the MLPerf HPC benchmark. In the future, as we continue to further improve the performance of Fugaku, we will make ongoing efforts to take advantage of Fugaku's super large-scale environment in the area of high-performance deep learning in cooperation with various stakeholders.

About Fujitsu

Fujitsu is a leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 130,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.9 trillion yen (US$35 billion) for the fiscal year ended March 31, 2020. For more information, please see http://www.fujitsu.com.

About National Institute of Advanced Industrial Science & Technology (AIST)

AIST is the largest public research institute in Japan, established in 1882. The research fields of AIST cover all industrial sciences, e.g., electronics, material science, life science, metrology, etc. Our missions are bridging the gap between basic science and industrialization and solving social problems facing the world. We operate several open innovation platforms to contribute to these missions, where researchers from companies, university professors, graduate students, and AIST researchers come together to achieve our missions. The open innovation platform established most recently is the Global Zero Emission Research Center, which contributes to achieving a zero-emission society in collaboration with foreign researchers. https://www.aist.go.jp/index_en.html

About RIKEN Center for Computational Science

RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan, including the RIKEN Center for Computational Science (R-CCS), the home of the supercomputer Fugaku. As the leadership center of high-performance computing, the R-CCS explores the science of computing, by computing, and for computing. The outcomes of this exploration, technologies such as open source software, are its core competence. The R-CCS strives to enhance this core competence and to promote the technologies throughout the world.

Source: Fujitsu

View post:
Fujitsu, AIST and RIKEN Achieve Unparalleled Speed on MLPerf HPC Machine Learning Processing Benchmark - HPCwire

Read More..

SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD – Sports Video Group

This fall SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Fall SportsTech Journal HERE.

Following the height of the 2020 global pandemic, live sports are starting to re-emerge worldwide albeit predominantly behind closed doors. For the majority of sports fans, video is the only way they can watch and engage with their favorite teams or players. This means the quality of the viewing experience itself has become even more critical.

With UHD being adopted by both households and broadcasters around the world, there is a marked expectation around visual quality. To realize these expectations in the immediate term, it will be necessary for some years to up-convert from HD to UHD when creating 4K UHD sports channels and content.

This is not so different from the early days of HD, where SD sports-related content had to be up-converted to HD. In the intervening years, however, machine learning as a technology has progressed sufficiently to be a serious contender for performing better up-conversions than more conventional techniques, specifically designed to work for TV content.

Ideally, we want to process HD content into UHD with a simple black box arrangement.

The problem with conventional up-conversion, though, is that it does not offer an improved resolution, so does not fully meet the expectations of the viewer at home watching on a UHD TV. The question, therefore, becomes: can we do better for the sports fan? If so, how?

UHD is a progressive scan format, with the native TV formats being 3840×2160, known as 2160p59.94 (usually abbreviated to 2160p60) or 2160p50. The corresponding HD formats, with the frame/field rates set by region, are either progressive 1280×720 (720p60 or 720p50) or interlaced 1920×1080 (1080i30 or 1080i25).

Conversion from HD to UHD for progressive images at the same rate is fairly simple. It can be achieved using spatial processing only. Traditionally, this might use a bi-cubic interpolation filter (a 2-dimensional interpolation commonly used for photographic image scaling). This uses a grid of 4×4 source pixels and interpolates intermediate locations in the center of the grid. The conversion from 1280×720 to 3840×2160 requires a 3x scaling factor in each dimension and is almost the ideal case for an upsampling filter.

These types of filters can only interpolate, producing an image that is better than nearest-neighbor or bi-linear interpolation but does not have the appearance of higher resolution.
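For reference, the conventional baseline described above can be reproduced in a few lines; the snippet below performs a 3x bicubic up-scale from 1280×720 to 3840×2160 using OpenCV, which is one common implementation choice rather than anything specific to this article, and the file names are placeholders.

```python
import cv2

# Load an HD frame (placeholder path) and up-convert 720p -> 2160p with bicubic interpolation.
hd_frame = cv2.imread("frame_1280x720.png")
uhd_frame = cv2.resize(hd_frame, (3840, 2160), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("frame_3840x2160_bicubic.png", uhd_frame)
```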

Machine Learning (ML) is a technique whereby a neural network learns patterns from a set of training data. Images are large, and it becomes unfeasible to create neural networks that process this data as a complete set. So, a different structure is used for image processing, known as Convolutional Neural Networks (CNNs). CNNs are structured to extract features from the images by successively processing subsets of the source image and then processing the features rather than the raw pixels.

Up-conversion process with neural network processing

The inbuilt non-linearity, in combination with feature-based processing, means CNNs can invent data not present in the original image. In the case of up-conversion, we are interested in the ability to create plausible new content that was not present in the original image, but that doesn't modify the nature of the image too much. The CNN used to create the UHD data from the HD source is known as the Generator CNN.

When input source data needs to be propagated through the whole chain, possibly with scaling involved, then a specific variant of a CNN known as a Residual Network (ResNet) is used. A ResNet has a number of stages, each of which includes a contribution from a bypass path that carries the input data. For this study, a ResNet with scaling stages towards the end of the chain was used as the Generator CNN.
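The network used in MediaKind's research is not published here, but a minimal PyTorch sketch in the same spirit, residual blocks followed by a scaling stage towards the end of the chain, could look like this; the block count, channel width, and PixelShuffle upscaling are assumptions for the example.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv with a bypass path carrying the input forward."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class GeneratorResNet(nn.Module):
    """HD -> UHD generator: residual feature stages, then 3x spatial upscaling at the end."""
    def __init__(self, blocks=4, channels=64, scale=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.upscale = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a 3x larger image
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.head(x)
        return self.upscale(feat + self.body(feat))

# A 3x up-scale: a (1, 3, 240, 320) crop becomes (1, 3, 720, 960);
# a full 1280x720 frame would likewise become 3840x2160.
with torch.no_grad():
    print(GeneratorResNet()(torch.rand(1, 3, 240, 320)).shape)
```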

For the Generator CNN to do its job, it must be trained with a set of known data, patches of reference images, with a comparison made between the output and the original. For training, the originals are a set of high-resolution UHD images, down-sampled to produce HD source images, then up-converted and finally compared to the originals.

The difference between the original and synthesized UHD images is calculated by the compare function with the error signal fed back to the Generator CNN. Progressively, the Generator CNN learns to create an image with features more similar to original UHD images.

The training process is dependent on the data set used for training, and the neural network tries to fit the characteristics seen during training onto the current image. This is intriguingly illustrated in Google's AI Blog [1], where a neural network presented with a random noise pattern introduces shapes like the ones used during training. It is important that a diverse, representative content set is used for training. Patches from about 800 different images were used for training during the course of MediaKind's research.

The compare function affects the way the Generator CNN learns to process the HD source data. It is easy to calculate a sum of absolute differences between the original and the synthesized image. This causes an issue due to training set imbalance; in this case, the imbalance is that real pictures have large proportions with relatively little fine detail, so the data set is biased towards regenerating a result very similar to what a bicubic interpolation filter would produce.

This doesn't really achieve the objective of creating plausible fine detail.

Generative Adversarial Networks (GANs) are a relatively new concept [2], in which a second neural network, known as the Discriminator CNN, is used and is itself trained during the training process of the Generator CNN. The Discriminator CNN learns to detect the difference between features that are characteristic of original UHD images and of synthesized UHD images. During training, the Discriminator CNN sees either an original UHD image or a synthesized UHD image, with the detection correctness fed back to the discriminator and, if the image was a synthesized one, also fed back to the Generator CNN.

Each CNN is attempting to beat the other: the Generator by creating images that have characteristics more like originals, while the Discriminator becomes better at detecting synthesized images.

The result is the synthesis of feature details that are characteristic of original UHD images.

With a GAN approach, there is no real constraint to the ability of the Generator CNN to create new detail everywhere. This means the Generator CNN can create images that diverge from the original image in more general ways. A combination of both compare functions can offer a better balance, retaining the detail regeneration, but also limiting divergence. This produces results that are subjectively better than conventional up-conversion.
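A heavily simplified training-step sketch of this combined objective, a pixel-level compare function plus an adversarial term from a Discriminator CNN, is shown below; the tiny stand-in networks, loss weighting, and patch sizes are assumptions, and in practice the generator would be the residual network with upscaling stages described earlier.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks, kept small so the step runs quickly.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=3, mode="bicubic", align_corners=False),
    nn.Conv2d(32, 3, 3, padding=1),
)
discriminator = nn.Sequential(            # learns to spot synthesized UHD patches
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

def training_step(hd_patch, uhd_patch, adv_weight=1e-3):
    # Discriminator step: learn to tell original UHD patches from synthesized ones.
    fake = generator(hd_patch)
    d_real, d_fake = discriminator(uhd_patch), discriminator(fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: the pixel-level compare function limits divergence from the
    # original, while the adversarial term rewards detail that looks like real UHD.
    d_fake = discriminator(fake)
    loss_g = l1(fake, uhd_patch) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy call: 64x64 HD crops paired with their 192x192 UHD originals (3x scale).
print(training_step(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 192, 192)))
```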

Conversion from 1080i60 to 2160p60 is necessarily more complex than from 720p60. Starting from 1080i, there are three basic approaches to up-conversion:

Training data is required here, which must come from 2160p video sequences. This enables a set of fields to be created, which are then downsampled, with each field coming from one frame in the original 2160p sequence, so the fields are not temporally co-located.

Surprisingly, results from field-based up-conversion tended to be better than using de-interlaced frame conversion, despite using sophisticated motion-compensated de-interlacing: the frame-based conversion being dominated by the artifacts from the de-interlacing process. However, it is clear that potentially useful data from the opposite fields did not contribute to the result, and the field-based approach missed data that could produce a better result.

A solution to this is to use multiple fields data as the source data directly into a modified Generator CNN, letting the GAN learn how best to perform the deinterlacing function. This approach was adopted and re-trained with a new set of video-based data, where adjacent fields were also provided.

This led to both high visual spatial resolution and good temporal stability. These are, of course, best viewed as a video sequence; however, an example of one frame from a test sequence shows the comparison:

Comparison of a sample frame from different up-conversion techniques against original UHD

Up-conversion using a hybrid GAN with multiple fields was effective across a range of content, but is especially relevant for the visual sports experience to the consumer. This offers a realistic means by which content that has more of the appearance of UHD can be created from both progressive and interlaced HD source, which in turn can enable an improved experience for the fan at home when watching a sports UHD channel.

[1] A. Mordvintsev, C. Olah and M. Tyka, "Inceptionism: Going Deeper into Neural Networks," 2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

[2] I. Goodfellow et al., "Generative Adversarial Nets," Neural Information Processing Systems Proceedings, vol. 27, 2014.

Continued here:
SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD - Sports Video Group

Read More..

SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge – Design and Reuse

Licensing agreement enables machine learning intelligence with best-in-class performance and power for robotics, surveillance, autonomous, and automotive applications

SAN JOSE, Calif.-- November 18, 2020 -- SiMa.ai, the machine learning company enabling high performance compute at the lowest power, today announced the adoption of low-power Arm compute technology to build its purpose-built Machine Learning SoC (MLSoC) platform. The licensing of this technology brings machine learning intelligence with best-in-class performance and power to a broad set of embedded edge applications including robotics, surveillance, autonomous, and automotive.

SiMa.ai is adopting Arm Cortex-A and Cortex-M processors optimized for power, throughput efficiency, and safety-critical tasks. In addition, SiMa.ai is leveraging a combination of widely used open-source machine learning frameworks from Arm's vast ecosystem to allow software to seamlessly enable machine learning for legacy applications at the embedded edge.

"Arm is the industry leader in energy-efficient processor design and advanced computing," said Krishna Rangasayee, founder and CEO of SiMa.ai. "The integration of SiMa.ai's high performance and low power machine learning accelerator with Arm technology accelerates our progress in bringing our MLSoC to the market, creating new solutions underpinned by industry-leading IP, the broad Arm ecosystem, and world-class support from its field and development teams."

"From autonomous systems to smart cities, the applications enabled by ML at the edge are delivering increased functionality, leading to more complex device requirements," said Dipti Vachani, senior vice president and general manager, Automotive and IoT Line of Business at Arm. "SiMa.ai is innovating on top of Arm's foundational IP to create a unique low power ML SoC that will provide intelligence to the next generation of embedded edge use cases."

SiMa.ai is strategically leveraging Arm technology to deliver its unique Machine Learning SoC. This includes:

About SiMa.ai

SiMa.ai is a machine learning company enabling high performance compute at the lowest power. Initially focused on solutions for computer vision applications at the embedded edge, the company is led by a team of technology experts committed to delivering the industry's highest frames per second per watt solution to its customers. To learn more, visit http://www.sima.ai.

See the rest here:
SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge - Design and Reuse

Read More..

Quantum computer race intensifies as alternative technology gains steam – Nature.com

Read the original here:
Quantum computer race intensifies as alternative technology gains steam - Nature.com

Read More..

Quantum computing now is a bit like SQL was in the late 80s: Wild and wooly and full of promise – ZDNet

Quantum computing is bright and shiny, with demonstrations by Google suggesting a kind of transcendent ability to scale beyond the heights of known problems.

But there's a real bummer in store for anyone with their head in the clouds: All that glitters is not gold, and there's a lot of hard work to be done on the way to someday computing NP-hard problems.

"ETL, if you get that wrong in this flow-based programming, if you get the data frame wrong, it's garbage in, garbage out," according to Christopher Savoie, who is the CEO and a co-founder of a three-year-old startup Zapata Computing of Boston, Mass.

"There's this naive idea you're going to show up with this beautiful quantum computer, and just drop it in your data center, and everything is going to be solved; it's not going to work that way," said Savoie, in a video interview with ZDNet. "You really have to solve these basic problems."

Zapata sells a programming tool for quantum computing, called Orquestra. It can let developers invent algorithms to be run on real quantum hardware, such as Honeywell's trapped-ion computer.

But most of the work of quantum today is not writing pretty algorithms, it's just making sure data is not junk.

"Ninety-five percent of the problem is data cleaning," Savoie told ZDNet in a video interview. "There wasn't any great toolset out there, so that's why we created Orquestra to do this."

The company on Thursday announced it has received a Series B round of investment totaling $38 million from large investors that include Honeywell's venture capital outfit and returning Series A investors Comcast Ventures, Pitango, and Prelude Ventures, among others. The company has now raised $64.4 million.


Zapata was spun out of Harvard University in 2017 by scholars including Alán Aspuru-Guzik, who has done fundamental work on quantum. But a lot of what is coming up is the mundane matter of data prep and other gotchas that can be a nightmare in a bold new world of only partially understood hardware.

Things such as extract, transform, load, or ETL, become maddening when prepping a quantum workload.

"We had a customer who thought they had a compute problem because they had a job that was taking a long time; it turned out, when we dug in, just parallelizing the workflow, the ETL, gave them a compute advantage," recalled Savoie.

Such pitfalls are things, said Savoie, that companies don't know are an issue until they get ready to spend valuable time on a quantum computer and code doesn't run as expected.

"That's why we think it's critical for companies to start now," he said, even though today's noisy intermediate-scale quantum, or NISQ, machines have only a handful of qubits.

"You have to solve all these basic problems we really haven't even solved yet in classical computing," said Savoie.

The present moment in the young field of quantum sounds a bit like the early days of microcomputer-based relational databases. And, in fact, Savoie likes to make an analogy to the era of the 1980s and 1990s, when Oracle database was taking over workloads from IBM's DB2.


"Oracle is a really good analogy, he said. "Recall when SQL wasn't even a thing, and databases had to be turned on a per-on-premises, as-a-solution basis; how do I use a database versus storage, and there weren't a lot of tools for those things, and every installment was an engagement, really," recalled Savoie.

"There are a lot of close analogies to that" with today's quantum, said Savoie. "It's enterprise, it's tough problems, it's a lot of big data, it's a lot of big compute problems, and we are the software company sitting in the middle of all that with a lot of tools that aren't there yet."

Mind you, Savoie is a big believer in quantum's potential, despite pointing out all the challenges. He has seen how technologies can get stymied, but also how they ultimately triumph. He helped found startup Dejima, one of the companies that became a component of Apple's Siri voice assistant, in 1998. Dejima didn't produce an AI wave; it sold out to database giant Sybase.

"We invented this natural language understanding engine, but we didn't have the great SpeechWorks engine, we didn't have 3G, never mind 4G cell phones or OLED displays," he recalled. "It took ten years from 1998 till it was a product, till it was Siri, so I've seen this movie before I've been in that movie."

But the technology of NLP did survive and is now thriving. Similarly, the basic science of quantum, as with the basic science of NLP, is real, is validated. "Somebody is going to be the iPhone" of quantum, he said, although along the way there may be a couple Apple Newtons, too, he quipped.

Even an Apple Newton of quantum will be a breakthrough. "It will be solving real problems," he said.


In the meantime, handling the complexity that's cropping up now, with things like ETL, suggests there's a role for a young company that can be for quantum what Oracle was for structured query language.

"You build that out, and you have best practices, and you can become a great company, and that's what we aspire to," he said.

Zapata has fifty-eight employees and has had contract revenue since its first year of operations, and has doubled each year, said Savoie.

Originally posted here:
Quantum computing now is a bit like SQL was in the late 80s: Wild and wooly and full of promise - ZDNet

Read More..

Construction begins for Duke University’s new quantum computing center – WRAL Tech Wire

DURHAM Construction is currently underway on a 10,000-square-foot expansion of Duke's existing quantum computing center in the Chesterfield Building, a former cigarette factory in downtown Durham.

The new space will house what is envisioned to be a world-beating team of quantum computing scientists. The DQC, Duke Quantum Center, is expected to be online in March 2021 and is one of five new quantum research centers to be supported by a recently announced $115 million grant from the U.S. Department of Energy.

The Error-corrected Universal Reconfigurable Ion-trap Quantum Archetype, or EURIQA, is the first generation of an evolving line of quantum computers that will be available to users in Duke's Scalable Quantum Computing Laboratory, or SQLab. The machine was built with funding from IARPA, the U.S. government's Intelligence Advanced Research Projects Activity. The SQLab intends to offer programmable, reconfigurable quantum computing capability to engineers, physicists, chemists, mathematicians or anyone who comes forward with a complex optimization problem they'd like to try on a 20-qubit system.

Unlike the quantum systems that are now accessible in the cloud, the renamed Duke Quantum Archetype, DQA, will be customized for each research problem, and users will have open access to its guts, a more academic approach to solving quantum riddles.

(C) Duke University

See the original post here:
Construction begins for Duke University's new quantum computing center - WRAL Tech Wire

Read More..

Is Now the Time to Start Protecting Government Data from Quantum Hacking? – Nextgov

My previous column about the possibility of pairing artificial intelligence with quantum computing to supercharge both technologies generated a storm of feedback via Twitter and email. Quantum computing is a science that is still somewhat misunderstood, even by scientists working on it, but might one day be extremely powerful. And artificial intelligence has some scary undertones with quite a few trust issues. So I understand the reluctance that people have when considering this marriage of technologies.

Unfortunately, we don't really get a say in this. The avalanche has already started, so it's too late for all of us pebbles to vote against it. All we can do now is deal with the practical ramifications of these recent developments. The most critical right now is protecting government encryption from the possibility of quantum hacking.

Two years ago I warned that government data would soon be vulnerable to quantum hacking, whereby a quantum machine could easily shred the current AES encryption used to protect our most sensitive information. Government agencies like NIST have been working for years on developing quantum-resistant encryption schemes. But adding AI to a quantum computer might be the tipping point needed to give quantum the edge, while most of the quantum-resistant encryption protections are still being slowly developed. At least, that is what I thought.

One of the people who contacted me after my last article was Andrew Cheung, the CEO of 01 Communique Laboratory and IronCAP. They have a product available right now which can add quantum-resistant encryption to any email. Called IronCAP X, it's available for free for individual users, so anyone can start protecting their email from the threat of quantum hacking right away. In addition to downloading the program to test, I spent about an hour interviewing Cheung about how quantum-resistant encryption works, and how agencies can keep their data protection one step ahead of some of the very same quantum computers they are helping to develop.

For Cheung, the road to quantum-resistant encryption began over 10 years ago, long before anyone was seriously engineering a quantum computer. "It almost felt like we were developing a bulletproof vest before anyone had created a gun," Cheung said.

But the science of quantum-resistant encryption has actually been around for over 40 years, Cheung said. It was just never specifically called that. "People would ask how we could develop encryption that would survive hacking by a really fast computer," he said. "At first, nobody said the word quantum, but that is what we were ultimately working against."

According to Cheung, the key to creating quantum-resistant encryption is to get away from the core strength of computers in general, which is mathematics. He explained that RSA encryption used by the government today is fundamentally based on prime number factorization, where if you multiply two prime numbers together, the result is a number that can only be broken down into those primes. Breaking encryption involves trying to find those primes by trial and error.

So if you have a number like 21, then almost anyone can use factorization to quickly break it down and find its prime numbers, which are three and seven. If you have a number like 221, then it takes a little bit longer for a human to come up with 13 and 17 as its primes, though a computer can still do that almost instantaneously. But if you have something like a 500 digit number, then it would take a supercomputer more than a century to find its primes and break the related encryption. The fear is that quantum computers, because of the strange way they operate, could one day do that a lot more quickly.
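The scaling argument can be made concrete with a few lines of trial division; the snippet mirrors the article's 21 and 221 examples, and of course this naive loop says nothing about the far smarter algorithms (classical or quantum) that the timing claims refer to.

```python
def prime_factors(n):
    """Factor n by trial division, the brute-force idea behind attacking factorization-based keys."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(21))    # [3, 7]
print(prime_factors(221))   # [13, 17]
# A 500-digit semiprime would be far beyond this loop, which is the point of the scheme.
```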

To make it more difficult for quantum machines, or any other kind of fast computer, Cheung and his company developed an encryption method based on binary Goppa code. The code was named for the renowned Russian mathematician who invented it, Valerii Denisovich Goppa, and was originally intended to be used as an error-correcting code to improve the reliability of information being transmitted over noisy channels. The IronCAP program intentionally introduces errors into the information it's protecting, and then authorized users can employ a special algorithm to decrypt it, but only if they have the private key so that the numerous errors can be removed and corrected.

What makes encryption based on binary Goppa code so powerful against quantum hacking is that you can't use math to guess at where or how the errors have been induced into the protected information. Unlike encryption based on prime number factorization, there isn't a discernible pattern, and there's no way to brute-force guess at how to remove the errors. According to Cheung, a quantum machine, or any other fast system like a traditional supercomputer, can't be programmed to break the encryption because there is no system for it to use to begin its guesswork.
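Binary Goppa codes and the full key-scrambling used in McEliece-style systems are well beyond a short example, but the core idea described above, encode the message, deliberately inject errors, and let only the key holder correct them, can be illustrated with a toy Hamming(7,4) code; everything below is a simplified stand-in for illustration, not IronCAP's actual scheme.

```python
import numpy as np

# Toy Hamming(7,4) code standing in for a binary Goppa code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # generator matrix (systematic form)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])   # parity-check matrix

def encrypt(message_bits, rng):
    codeword = message_bits @ G % 2
    error = np.zeros(7, dtype=int)
    error[rng.integers(7)] = 1           # deliberately inject one error
    return (codeword + error) % 2

def decrypt(received):
    syndrome = H @ received % 2
    if syndrome.any():                   # locate the injected error via the syndrome
        pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        received = received.copy()
        received[pos] ^= 1
    return received[:4]                  # systematic code: the message is the first 4 bits

rng = np.random.default_rng(1)
msg = np.array([1, 0, 1, 1])
ciphertext = encrypt(msg, rng)
print("ciphertext:", ciphertext, "recovered:", decrypt(ciphertext))
```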

A negative aspect to binary Goppa code encryption, and also one of the reasons why Cheung says the protection method is not more popular today, is the size of the encryption key. Whether you are encrypting a single character or a terabyte of information, the key size is going to be about 250 kilobytes, which is huge compared with the typical 4 kilobyte key size for AES encryption. Even ten years ago, that might have posed a problem for many computers and communication methods, though it seems tiny compared with file sizes today. Still, it's one of the main reasons why AES won out as the standard encryption format, Cheung says.

I downloaded the free IronCAP X application and easily integrated it into Microsoft Outlook. Using the application was extremely easy, and the encryption process itself when employing it to protect an email is almost instantaneous, even utilizing the limited power of an average desktop. And while I don't have access to a quantum computer to test its resilience against quantum hacking, I did try to extract the information using traditional methods. I can confirm that the data is just unreadable gibberish with no discernible pattern to unauthorized users.

Cheung says that binary Goppa code encryption that can resist quantum hacking can be deployed right now on the same servers and infrastructure that agencies are already using. It would just be a matter of switching things over to the new method. With quantum computers evolving and improving so rapidly these days, Cheung believes that there is little time to waste.

"Yes, making the switch in encryption methods will be a little bit of a chore," he said. "But with new developments in quantum computing coming every day, the question is whether you want to maybe deploy quantum-resistant encryption two years too early, or risk installing it two years too late."

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Read this article:
Is Now the Time to Start Protecting Government Data from Quantum Hacking? - Nextgov

Read More..