Category Archives: Quantum Computing
Nvidia Declares That It Is A Full-Stack Platform – The Next Platform
In a decade and a half, Nvidia has come a long way from its early days as a provider of graphics chips for personal computers and other consumer devices.
Jensen Huang, Nvidia co-founder and chief executive officer, set his sights on the datacenter, pushing GPUs as a way of accelerating HPC applications and the CUDA software development environment as a way of making that happen. Five years later, Huang declared artificial intelligence the future of computing and said that Nvidia would not only enable it, but would bet the company on this being the future of software development. That AI-enhanced everything would be, in fact, the next platform.
The company has continued to evolve, with Nvidia expanding its hardware and software capabilities to meet the demands of an ever-changing IT landscape that now includes multiple clouds, the fast-growing edge and, Huang expects, a virtual world of digital twins and avatars, all of it dependent on the company's technologies.
Nvidia has not been a point-product provider for some time, but is now a full-stack platform vendor for this new computing world.
"Accelerated computing starts with Nvidia CUDA general-purpose programmable GPUs," Huang said during his keynote address at the company's virtual GTC 2021 event this week. "The magic of accelerated computing comes from the combination of CUDA, the acceleration libraries of algorithms that speed up applications, and the distributed computing systems and software that scale processing across an entire data center."
Nvidia has been advancing CUDA and expanding the surrounding ecosystem for it for more than fifteen years.
"We optimize across the full stack, iterating between GPU, acceleration libraries, systems, and applications continuously, all the while expanding the reach of our platform by adding new application domains that we accelerate," he said. "With our approach, end users experience speedups through the life of the product. It is not unusual for us to increase application performance by many X-factors on the same chip over several years. As we accelerate more applications, our network of partners sees growing demand for Nvidia platforms. Starting from computer graphics, the reach of our architecture has reached deep into the world's largest industries. We start with amazing chips, but for each field of science, industry and application, we create a full stack."
To illustrate that, Huang pointed to the more than 150 software development kits that target a broad range of industries, from design to life sciences, and at GTC announced 65 new or updated SDKs touching on such areas as quantum computing, cybersecurity, and robotics. The number of developers using Nvidia technologies has grown to almost 3 million, increasing six-fold over the past five years. In addition, CUDA has been downloaded 30 million times over 15 years, including 7 million times last year.
"Our expertise in full-stack acceleration and datacenter-scale architectures lets us help researchers and developers solve problems at the largest scales," he said. "Our approach to computing is highly energy-efficient. The versatility of the architecture lets us contribute to fields ranging from AI to quantum physics to digital biology to climate science."
That said, Nvidia is not without its challenges. The company's $40 billion bid for Arm is no sure thing, with regulators from the UK and Europe saying they want to take a deeper look at the possible market impacts the deal would create, and Qualcomm leading opposition to the proposed acquisition. In addition, the competition in GPU-accelerated computing is heating up, with AMD advancing its capabilities (we recently wrote about the company's "Aldebaran" Instinct MI200 GPU accelerator) and Intel last week saying that it expects the upcoming Aurora supercomputer to scale beyond 2 exaflops, due in large part to better-than-expected performance by its Ponte Vecchio Xe HPC GPUs.
Still, Nvidia sees its future in creating the accelerated-computing foundation for the expansion of AI, machine learning and deep learning into a broad array of industries, as illustrated by the usual avalanche of announcements coming out of GTC. Among the new libraries was ReOpt, which is aimed at finding the shortest and most efficient routes for getting products and services to their destinations, which can save companies time and money in last-mile delivery efforts.
cuQuantum is another library, for creating quantum simulators to validate research in the field while the industry builds the first useful quantum computers. Nvidia has built a cuQuantum DGX appliance for speeding up quantum circuit simulations, with the first accelerated quantum simulator for Google's Cirq framework coming in the first quarter of 2022. Meanwhile, cuNumeric is aimed at accelerating NumPy workloads, scaling from one GPU to multi-node clusters.
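cuNumeric is positioned as a drop-in replacement for NumPy. A minimal sketch of that idea is below; it assumes the cunumeric package is installed and falls back to plain NumPy otherwise.

```python
# Minimal sketch of the drop-in idea behind cuNumeric: the same NumPy-style
# code, with only the import changed, can run on one or more GPUs.
# Assumes the cunumeric package is installed; otherwise fall back to NumPy.
try:
    import cunumeric as np   # GPU/distributed execution via Legate
except ImportError:
    import numpy as np       # plain CPU NumPy as a fallback

a = np.random.rand(4096, 4096)
b = np.random.rand(4096, 4096)
c = a @ b                    # dense matmul, accelerated when cuNumeric is active
print(float(c.sum()))
```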
Nvidia's new Quantum-2 interconnect (which has nothing to do with quantum computing) is a 400 Gb/sec InfiniBand platform that comprises the Quantum-2 switch, the ConnectX-7 SmartNIC, the BlueField 3 DPU, and features like performance isolation, a telemetry-based congestion-control system and 32X higher in-switch processing for AI training. In addition, nanosecond timing will enable cloud datacenters to get into the telco space by hosting software-defined 5G radio services.
"Quantum-2 is the first networking platform to offer the performance of a supercomputer and the share-ability of cloud computing," Huang said. "This has never been possible before. Until Quantum-2, you get either bare-metal high performance or secure multi-tenancy. Never both. With Quantum-2, your valuable supercomputer will be cloud-native and far better utilized."
The 7 nanometer InfiniBand switch chip holds 57 billion transistors, similar to Nvidia's A100 GPU, and has 64 ports running at 400 Gb/sec or 128 ports running at 200 Gb/sec. A Quantum-2 system can connect up to 2,048 ports, compared with the 800 ports of Quantum-1. The switch is sampling now and comes with options for the ConnectX-7 SmartNIC, sampling in January, or the BlueField 3 DPU, which will sample in May.
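A quick back-of-the-envelope check of those port figures (a reader's arithmetic, not an Nvidia spec sheet) shows that both port configurations expose the same aggregate bandwidth per switch chip.

```python
# Aggregate switch bandwidth implied by the two port configurations quoted
# above, plus the port-count scaling versus Quantum-1.
configs = {"64 x 400 Gb/s": 64 * 400, "128 x 200 Gb/s": 128 * 200}
for name, agg in configs.items():
    print(f"{name}: {agg / 1000:.1f} Tb/s aggregate")   # 25.6 Tb/s either way

print(f"Port scaling vs Quantum-1: {2048 / 800:.1f}x")  # ~2.6x more ports
```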
BlueField DOCA 1.2 is a suite of cybersecurity capabilities that Huang said will make BlueField an even more attractive platform for building a zero-trust architecture, by offloading infrastructure software that is eating up as much as 30 percent of CPU capacity. In addition, Nvidia's Morpheus deep-learning cybersecurity platform uses AI to monitor and analyze data from users, machines and services to detect anomalies and abnormal transactions.
"Cloud computing and machine learning are driving a reinvention of the datacenter," Huang said. "Container-based applications give hyperscalers incredible abilities to scale out, allowing millions to use their services concurrently. The ease of scale-out and orchestration comes at a cost: east-west network traffic increased incredibly with machine-to-machine message passing, and these disaggregated applications open many ports inside the datacenter that need to be secured from cyberattack."
Nvidia has bolstered its Triton Inference Server with new support for the Arm architecture; the system already supported Nvidia GPUs and x86 chips from Intel and AMD. In addition, version 2.15 of Triton can also run multi-GPU and multi-node inference workloads, which Huang called "arguably one of the most technically challenging runtime engines the world has ever seen."
"As these models are growing exponentially, particularly in new use cases, they're often getting too big for you to run on a single CPU or even a single server," Ian Buck, vice president and general manager of Nvidia's Tesla datacenter business, said during a briefing with journalists. "Yet the demands [and] the opportunities for these large models want to be delivered in real-time. The new version of Triton actually supports distributed inference. We take the model and we split it across multiple GPUs and multiple servers to optimize the computing and deliver the fastest possible performance of these incredibly large models."
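For a sense of what serving such a model looks like from the client side, here is a minimal sketch using Triton's standard Python HTTP client. The model name and tensor names are hypothetical; the point is that when the server partitions a model across GPUs and nodes, the client call itself is unchanged.

```python
# Minimal Triton Inference Server client sketch (HTTP). Model name "large_lm"
# and tensor names/shapes are illustrative placeholders, not a shipped model.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

tokens = np.random.randint(0, 50000, size=(1, 128), dtype=np.int32)
inp = httpclient.InferInput("input_ids", list(tokens.shape), "INT32")
inp.set_data_from_numpy(tokens)

# The server handles any multi-GPU/multi-node partitioning behind this call.
result = client.infer(model_name="large_lm", inputs=[inp])
print(result.as_numpy("logits").shape)
```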
Nvidia also unveiled NeMo Megatron, a framework for training large language models (LLMs) that have trillions of parameters. NeMo Megatron can be used for such jobs as language translation and computer program writing, and it leverages the Triton Inference Server. Nvidia last month unveiled Megatron 530B, a language model with 530 billion parameters.
"The recent breakthrough of large language models is one of the great achievements in computer science," Huang said. "There's exciting work being done in self-supervised multi-modal learning and models that can do tasks they were never trained on, called zero-shot learning. Ten new models were announced last year alone. Training LLMs is not for the faint of heart. Hundred-million-dollar systems, training trillion-parameter models on petabytes of data for months, requires conviction, deep expertise, and an optimized stack."
A lot of time at the event was spent on Nvidia's Omniverse platform, the virtual environment introduced last year that the company believes will be a critical enterprise tool in the future. Skeptics point to avatars and the like in suggesting that Omniverse is little more than a second coming of Second Life. In responding to a question, Buck said there are two areas where Omniverse is catching on in the enterprise.
The first is digital twins, virtual representations of machines or systems that recreate an environment, "like the work we're doing in embedded and robotics and other places to be able to simulate virtual worlds, actually simulate the products that are being built in a virtual environment and be able to prototype them entirely with Omniverse." A virtual setting allows product development to happen in a way that was not possible before: remotely and virtually, around the world.
The other is the commercial use of virtual agents; this is where AI-based avatars can come in to help with call centers and similar customer-facing tasks.
ANET: Add These 3 Soaring Computer Hardware Stocks to Your Watchlist – StockNews.com
Because several companies are expected to extend remote or hybrid working arrangements for the foreseeable future, and the digitization of business processes continues, computer hardware companies are expected to see robust demand. Indeed, the computer hardware industry's sales increased 23.4% year-over-year in the third quarter.
The growing adoption of the internet of things (IoT), artificial intelligence (AI), and cloud-based products and services should increase the need for computer hardware. The global computer hardware market is expected to grow at a 9.4% CAGR this year.
Given this backdrop, we think it could be wise to add quality computer hardware stocks Arista Networks, Inc. (ANET), IonQ, Inc. (IONQ), and Velo3D, Inc. (VLD) to one's watchlist.
Arista Networks, Inc. (ANET)
Santa Clara, Calif.-based ANET develops, markets, and sells cloud networking solutions around the globe. Its cloud networking solutions consist of extensible operating systems, a set of network applications, and gigabit Ethernet switching and routing platforms.
On November 2, 2021, ANET announced the next major expansion of its Arista EOS network stack, introducing the EOS Network Data Lake. Ken Duda, Founder and CTO at Arista Networks, said, "Arista is entering the third generation of its flagship software stack. Developing a network-based data lake foundation from the ground up on our existing network state database makes Arista EOS NetDL a differentiated network and data-centric operating system."
ANET's total revenue increased 23.7% year-over-year to $748.70 million for its fiscal third quarter, ended September 30, 2021. The company's gross profit came in at $478.62 million, up 24.3% year-over-year. In addition, its income from operations was $233.29 million, up 23.8% year-over-year.
Analysts expect ANET's revenue to be $3.70 billion in its fiscal year 2022, representing a 27.3% year-over-year rise. In addition, the company's EPS is expected to increase 23.5% year-over-year to $13.68 in its fiscal year 2022. It surpassed the consensus EPS estimates in each of the trailing four quarters. The stock has gained 41.5% in price over the past month to close yesterday's trading session at $526.42.
IonQ, Inc. (IONQ)
IONQ develops general-purpose quantum computing systems. It sells access to quantum computers with 11 qubits. It is a leader in quantum computing, with a proven record of innovation and deployment. The company is headquartered in College Park, Md.
On September 23, 2021, IONQ and GE Research formed a partnership to explore the impact of quantum computing and IonQ's quantum computers in the pivotal field of risk analysis. Peter Chapman, CEO and President of IONQ, said, "As we explore how quantum computing could help us calculate, and correct for, these risks, we're proud to partner with GE, whose forward-thinking team sees that the rise of data availability pairs naturally with quantum computers to find new solutions to these management challenges."
IONQ's net intangible assets were $5.11 million for the period ended June 30, 2021, compared to $2.69 million for the period ended December 31, 2020. Its accounts receivable came in at $420,000, compared to $390,000 for the same period. Moreover, its net property and equipment was $15.56 million, versus $11.99 million, also for the same period.
Analysts expect IONQ's revenue to be $14.57 million in its fiscal year 2022, representing a 148.2% year-over-year rise. In addition, the company's EPS is expected to increase 11.1% in the next year and 20% per annum over the next five years. Over the past month, the stock has gained 153.9% in price to close yesterday's trading session at $21.35.
Velo3D, Inc. (VLD)
VLD produces metal additive three-dimensional printers. The Campbell, Calif.-based companys printers enable the production of components for space rockets, jet engines, fuel delivery systems, and other high-value metal parts, which it sells or leases to customers for use in their businesses.
On September 29, 2021, VLD completed its merger with JAWS Spitfire Acquisition Corporation. Barry Sternlicht, Chairman of JAWS Spitfire Acquisition, said, "What Velo3D has done for its customers, most of whom are at the forefront of innovation in their industries, is nothing less than transformative. We're proud to be affiliated with Benny and the rest of the Velo3D team."
On September 30, 2021, VLDs common stock began trading on the New York Stock Exchange under the ticker symbol VLD.
Analysts expect VLD's revenue to grow 238.6% year-over-year to $84.69 million in its fiscal year 2022. Its EPS is estimated to grow at 16.7% in the next year. Over the past month, the stock has gained 22.9% in price to close yesterday's trading session at $12.28.
ANET shares were trading at $534.00 per share on Tuesday morning, up $7.58 (+1.44%). Year-to-date, ANET has gained 83.78%, versus a 26.09% rise in the benchmark S&P 500 index during the same period.
Rigetti and Oxford Instruments Participate and Sponsor The City – Marketscreener.com
5 November 2021
Rigetti Computing, a pioneer in full-stack quantum computing, and Oxford Instruments are gold sponsors of the upcoming City Quantum Summit London 2021, which is taking place on Wednesday 10th November at Mansion House. The Summit will bring together founders and CEOs of quantum computing companies with the aim of clearly demonstrating the need for quantum computing in all sectors and industries, from sustainable development to medical revelations. The event is being hosted by William Russell, Lord Mayor of the City of London, in collaboration with Robinson Hambro.
"The City Quantum Summit is a great example of the level of collaboration required to secure ongoing quantum commercialisation here in the UK and we're pleased to be actively participating in the discussions and working closely with other leading players like Rigetti to accelerate real applications in quantum computing today," says Stuart Woods, Managing Director at Oxford Instruments NanoScience. Rigetti and Oxford Instruments are partners in a public-private consortium to deliver the first commercial quantum computer in the U.K.
Woods and Marco Paini, Technology Partnerships Director for Europe at Rigetti, will both be participating in the event as part of a panel discussion focused on financial modelling. The discussion will cover quantum computing applications for the financial sector and a project with Standard Chartered to analyse use cases including, for example, synthetic financial data generation and classification for implied volatility.
"Some of the most promising use cases for quantum computing are based in the financial sector. The ability to develop practical applications, like financial modelling, on real hardware puts us in a strong position to accelerate the commercialisation of quantum computing in the UK. Our Innovate UK consortium brings together industry experts and full-stack quantum expertise to make practical quantum computing a reality."
The Summit will be conducted as a hybrid event with the opportunity to join virtually as well as in person. You can find out more about how to register for the event here.
Lost in Space-Time newsletter: Will a twisted universe save cosmology? – New Scientist
By Richard Webb
Albert Einstein's general theory of relativity didn't have to be
Hello, and welcome to November's Lost in Space-Time, the monthly physics newsletter that unpicks the fabric of the universe and attempts to stitch it back together in a slightly different way. To receive this free, monthly newsletter in your inbox, sign up here.
There's a kind of inevitability about the fact that, if you write a regular newsletter about fundamental physics, you'll regularly find yourself banging on about Albert Einstein. As much as it comes with the job, I also make no apology for it: he is a towering figure in the history of not just fundamental physics, but science generally.
A point that historians of science sometimes make about his most monumental achievement, the general theory of relativity, is that, pretty much uniquely, it was a theory that didn't have to be. When you look at the origins of something like Charles Darwin's theory of evolution by natural selection, for example (not to diminish his magisterial accomplishment in any way), you'll find that other people had been scratching around similar ideas surrounding the origin and change of species for some time as a response to the burgeoning fossil record, among other discoveries.
Even Einstein's special relativity, the precursor to general relativity that first introduced the idea of warping space and time, responded to a clear need (first distinctly identified with the advent of James Clerk Maxwell's laws of electromagnetism in the 1860s) to explain why the speed of light appeared to be an absolute constant.
When Einstein presented general relativity to the world in 1915, there was nothing like that. We had a perfectly good working theory of gravity, the one developed by Isaac Newton more than two centuries earlier. True, there was a tiny problem in that it couldn't explain some small wobbles in the orbit of Mercury, but they weren't of the size that demanded we tear up our whole understanding of space, time, matter and the relationship between them. But pretty much everything we know (and don't know) about the wider universe today stems from general relativity: the expanding big bang universe and the standard model of cosmology, dark matter and energy, black holes, gravitational waves, you name it.
So why am I banging on about this? Principally because, boy, do we need a new idea in cosmology now, and in a weird twist of history, it might just be Einstein who supplies it. I'm talking about an intriguing feature by astrophysicist Paul M. Sutter in the magazine last month. It deals with perhaps general relativity's greatest (perceived, at least) weakness: the way it doesn't mesh with other bits of physics, which are all explained by quantum theory these days. The mismatch exercised Einstein a great deal, and he spent much of his later years engaged in a fruitless quest to unify all of physics.
Perhaps his most promising attempt came with a twist, literally, on general relativity that Einstein played about with early on. By developing a mathematical language not just for how space-time bends (which is the basis of how gravity is created within relativity) but for how it twists, he hoped to create a theory that also explained the electromagnetic force. He succeeded in the first bit, creating a description of how massive, charged objects might twist space-time into mini-cyclones around them. But it didn't create a convincing description of electromagnetism, and Einstein quietly dropped the theory.
Well, the really exciting bit, as Sutter describes, is that this "teleparallel gravity" seems to be back in a big way. Many cosmologists now think it could be a silver bullet to explain away some of the most mysterious features of today's universe, such as the nature of dark matter and dark energy and the troublesome period of faster-than-light inflation right at the moment of the big bang that is invoked to explain features of today's universe, such as its extraordinary smoothness. Not only that, but there could be a way to test the theory soon. I'd recommend reading the feature to get all the details, but in the meantime, it's about as exciting a development as you'll get in cosmology these days.
Let's take just a quick dip into the physics arXiv preprint server, where the latest research is put up. One paper that caught my eye recently has the inviting title "Life, the universe and the hidden meaning of everything". It's by Zhi-Wei Wang at the College of Physics in China and Samuel L. Braunstein at the University of York in the UK, and it deals with a question that's been bugging a lot of physicists and cosmologists ever since we started making detailed measurements of the universe and developing cogent theories to explain what we see: why does everything in the universe (the strengths of the various forces, the masses of fundamental particles, etc.) seem so perfectly tuned to allow the existence of observers like us to ask the question?
This has tended to take cosmologists and physicists down one of two avenues. The first says things are how they are because that's how they're made. For some, that sails very close to an argument via intelligent design, aka the existence of god. The other avenue tends to be some form of multiverse argument: our universe is as it is because we are here to observe it (we could hardly be here to observe it if it weren't), but it is one of a random subset of many possible universes that happen to be conducive to intelligent life arising.
This paper examines more closely a hypothesis from British physicist Dennis Sciama (doctoral supervisor to the stars: among his students in the 1960s and 1970s were Stephen Hawking, quantum computing pioneer David Deutsch and the UK's astronomer royal, Martin Rees) that if ours were a random universe, there would be a statistical pattern in its fundamental parameters that would give us evidence of that. In this paper, the researchers argue that the logic is actually reversed. In their words: "Were our universe random, it could give the false impression of being intelligently designed, with the fundamental constants appearing to be fine-tuned to a strong probability for life to emerge and be maintained."
Full disclosure: I'm writing something on this very subject for New Scientist's 65th-anniversary issue, due out on 20 November. Read more there!
While I'm banging on about Einstein, I stumbled across one of my favourite features I've worked on while at the magazine the other day, and thought it was worth sharing. Called "Reality check: Closing the quantum loopholes", it's from 2011, a full 10 years ago, but the idea it deals with stretches back way before that and is still a very live one.
The basic question is: is quantum theory a true description of reality, or are its various weirdnesses, not least the entanglement of quantum objects over vast distances, indications of goings-on in an underlying layer of reality not described by quantum theory (or indeed any other theory to date)? I talked about entanglement quite a bit in last month's newsletter, so I won't go into its workings here.
The alternative idea of "hidden variables" explaining the workings of the quantum world goes back to a famous paper published by Einstein and two collaborators, Nathan Rosen and Boris Podolsky, back in 1935. It led Einstein into a long-drawn-out debate about the nature of quantum theory with another of its pioneers, Niels Bohr, that continued decorously right until Einstein's death in 1955. It wasn't until the 1980s that we began to have the theoretical and experimental capabilities to actually pit the two pictures against one another.
The observatories atop the volcano Teide on Tenerife were one scene of a bold test of quantum reality (Image: Phil Crean A/Alamy)
I love the story not just for this rich history, but also for the way that, after each iteration of the experiments (every time showing that quantum theory, and entanglement, are the right explanation for what is going on, whatever they might mean), the physicists found another loophole in the experiments that might allow Einstein's hidden-variable idea back into the frame again.
That led them to some pretty impressive feats of experimental derring-do to close the loopholes again; the feature opens with a group of modern physicists shooting single photons between observatories on Tenerife and La Palma in the Canary Islands. In an update to the story that we published in 2018 (with the rather explicit title "Einstein was wrong: Why normal physics can't explain reality"), they even reproduced the result with photons coming at us from galaxies billions of light years away, proving that, if not the whole universe, then a goodly proportion of it follows quantum rules. You can't win 'em all, Einstein.
One reason I've been thinking particularly frequently about Einstein and his work lately is that I've been putting together the latest New Scientist Essential Guide, called Einstein's Universe. It's a survey of his theories of relativity and all those things that came out of it: the big bang universe and the standard model of cosmology, dark matter and energy, gravitational waves, black holes and, of course, the search for that elusive unifying theory of physics. I've just been putting the finishing touches to the Essential Guide with my left hand as I type this, and I think it's a fair expectation that you'll find me banging on about that (and Einstein) a lot more next month.
1. Talking of fine-tuned universes, if you haven't done so already, you can still catch up with Brian Clegg's New Scientist Event talk, The Patterns That Explain the Universe, from last month, available on demand.
2. If you're a fan of big ideas (I hope that's why you're here) and like casting your net a little wider than just physics, then a ticket to our Big Thinkers series of live events gives you access to 10 talks from top researchers from across the board, including Harvard astronomer Avi Loeb on the search for extraterrestrial life and Michelle Simmons and John Martinis on quantum computing.
3. It happened just after my last newsletter, but it would be remiss not to mention the awarding of this year's Nobel prize to three researchers who played a leading role in advancing our understanding of chaotic systems, notably the climate. You can find out more about what they did here.
IonQ Is First Quantum Startup to Go Public; Will It be First to Deliver Profits? – HPCwire
On October 1 of this year, IonQ became the first pure-play quantum computing start-up to go public. At this writing, the stock (NYSE: IONQ) was around $15 and its market capitalization was roughly $2.89 billion. Co-founder and chief scientist Chris Monroe says it was fun to have a few of the company's roughly 100 employees travel to New York to ring the opening bell of the New York Stock Exchange. It will also be interesting to listen to IonQ's first scheduled financial results call (Q3) on November 15.
IonQ is in the big leagues now. Wall Street can be brutal as well as rewarding, although these are certainly early days for IonQ as a public company. Founded in 2015 by Monroe and Duke researcher Jungsang Kim, who is the company CTO, IonQ now finds itself under a new magnifying glass.
How soon quantum computing will become a practical tool is a matter of debate, although there's growing consensus that it will, in fact, become such a tool. There are several competing flavors (qubit modalities) of quantum computing being pursued. IonQ has bet that trapped ion technology will be the big winner. So confident is Monroe that he suggests other players with big bets on other approaches (think superconducting, for example) are waking up to ion traps' advantages and are likely to jump into ion trap technology as direct competitors.
In a wide-ranging discussion with HPCwire, Monroe talked about ion technology and IonQ's (roughly) three-step plan to scale up quickly; roadblocks facing other approaches (superconducting and photonic); how an IonQ system with about 1,200 physical qubits and home-grown error correction will be able to tackle some applications; and why IonQ is becoming a software company and why that's a good thing.
In ion trap quantum computing, ions are held in position by electromagnetic forces, where they can be manipulated by laser beams. IonQ uses ytterbium (Yb) atoms. Once the atoms are turned into ions by stripping off one valence electron, IonQ uses a specialized chip called a linear ion trap to hold the ions precisely in 3D space. Literally, they sort of float above the surface. This small trap features around 100 tiny electrodes precisely designed, lithographed, and controlled to produce electromagnetic forces that hold the ions in place, isolated from the environment to minimize environmental noise and decoherence, as described by IonQ.
It turns out ions have naturally longer coherence times, and therefore require somewhat less error correction and are suitable for longer operations. This is the starting point for IonQ's advantage. Another plus is that the system requirements themselves are less complicated and less intrusive (noise-producing) than systems for semiconductor-based, superconducting qubits; think of the need to cram control cables into a dilution refrigerator to control superconducting qubits. That said, all of the quantum computing paradigms are plenty complicated.
For the moment, an ion trap using lasers to interact with the qubits is one of the most straightforward approaches. It has its own scaling challenge, but Monroe contends modular scaling will solve that problem and leverage ion traps' other strengths.
"Repeatability [in manufacturing superconducting qubits] is wonderful, but we don't need atomic-scale deposition, like you hear of with five nanometer feature sizes on the latest silicon chips," said Monroe. "The atoms themselves are far away from the chips, they're 100 microns, i.e. a 10th of a millimeter, away, which is miles atomically speaking, so they don't really see all the little imperfections in the chip. I don't want to say it doesn't matter. We put a lot of care into the design and the fab of these chips. The glass trap has certain features; [for example] it's actually a wonderful material for holding off high voltage compared to silicon."
IonQ started with silicon-based traps and is now moving to evaporated glass traps.
"What is interesting is that we've built the trap to have several zones. This is one of our strategies for scale. Right now, at IonQ, we have exactly one chain of atoms, these are the qubits, and we typically have a template of about 32 qubits. That's as many as we control. You might ask, how come you're not doing 3200 qubits? The reason is, if you have that many qubits, you'd better be able to perform lots and lots of operations, and you need very high quality operations to get there. Right now, the quality of our operation is approaching 99.9%. That is a part-per-1000 error," said Monroe.
"This is sort of a back-of-the-envelope calculation, but that would mean that you can do about 1000 ops. There's an intuition here [that] if you have n qubits, you really want to do about n² ops. The reason is, you want these pairwise operations, and you want to entangle all possible pairs. So if you have 30 qubits, you should be able to get to about 1000 ops. That's sort of where we are now. The reason we don't have 3200 yet is that if you have 3200 qubits, you should be able to do 10 million ops and that means your noise should be one part in 10⁷. We're not there yet. We have a strategy to get there," said Monroe.
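A small sketch of that back-of-the-envelope scaling, as a reader's arithmetic rather than IonQ's own model: the number of operations you want grows roughly as the square of the qubit count, and the operations you can run before errors dominate is roughly the inverse of the per-gate error rate.

```python
# Rough rule of thumb from the quote above: n qubits -> ~n^2 pairwise ops,
# and the tolerable per-gate error is ~1 / (ops needed).
def required_gate_error(n_qubits: int) -> float:
    ops_needed = n_qubits ** 2          # want roughly n^2 two-qubit operations
    return 1.0 / ops_needed             # error budget per operation

for n in (32, 3200):
    err = required_gate_error(n)
    print(f"{n:>5} qubits -> ~{n**2:.0e} ops -> per-gate error ~{err:.0e}")
# 32 qubits   -> ~1e+03 ops -> error ~1e-03  (the ~99.9% fidelity quoted)
# 3200 qubits -> ~1e+07 ops -> error ~1e-07  (the "one part in 10^7" target)
```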
While you could put more ions in a trap, controlling them becomes more difficult. "Long chains of ions become soft and squishy. A smaller chain is really stiff [and] much less noisy. So 32 is a good number. 16 might be a good number. 64 is a good number, but it's going to be somewhere probably under 100 ions," said Monroe.
The first part of the strategy for scaling is to have multiple chains on a chip that are separated by a millimeter or so, which prevents crosstalk and permits local operations. "It's sort of like a multi-core classical architecture, like the multi-core Pentium or something like that. This may sound exotic, but we actually physically move the atoms, we bring them together, the multiple chains, to connect them. There's no real wires. This is sort of the first [step] in rolling out a modular scale-up," said Monroe.
In proof-of-concept work, IonQ announced the ability to arbitrarily move four chains of 16 atoms around in a trap, bringing them together and separating them without losing any of the atoms. "It wasn't a surprise we were able to do that," said Monroe. "But it does take some design in laying out the electrodes. It's exactly like surfing, you know, the atoms are actually surfing on an electric field wave, and you have to design and implement that wave. That was the main result there. In 2022, we're going to use that architecture in one of our new systems to actually do quantum computations."
There are two more critical steps in IonQs plan for scaling. Error correction is one. Clustering the chips together into larger systems is the other. Monroe tackled the latter first.
"Think about modern datacenters, where you have a bunch of computers that are hooked together by optical fibers. That's truly modular, because we can kind of plug and play with optical fibers," said Monroe. He envisions something similar for trapped ion quantum computers. Frankly, everyone in the quantum computing community is looking at clustering approaches and how to use them effectively to scale smaller systems into larger ones.
"This interface between individual atom qubits and photonic qubits has been done. In fact, my lab at the University of Maryland did this for the first time in 2007. That was 14 years ago. We know how to do this, how to move memory quantum bits of an atom onto a propagating photon, and actually, you do it twice. If you have a chip over here and a chip over here, you bring two fibers together, and they interfere and you detect the photons. That basically makes these two photons entangled. We know how to do that.
"Once we get to that level, then we're sort of in manufacturing mode," said Monroe. "We can stamp out chips. We imagine having rack-mounted chips, probably multicore. Maybe we'll have several hundred atoms on that chip, and a few of the atoms on the chip will be connected to optical conduits, and that allows us to connect to the next rack-mounted system," he said.
The key enabler, said Monroe, is a nonblocking optical switch. "Think of it as an old telephone operator. They have, let's say they have 100 input ports and 100 output ports. And the operator connects any input to any output. Now, there are a lot of connections, a lot of possibilities there. But these things exist, these automatic operators using mirrors, and so forth. They're called n-by-n, nonblocking optical switches and you can reconfigure them," he said.
"What's cool about that is you can imagine having several hundred rack-mounted, multi-core quantum computers, and you feed them into this optical switch, and you can then connect any multi-core chip to any other multi-core chip. The software can tell you exactly how you want to network. That's very powerful as an architecture because we have a so-called full connection there. We won't have to move information to a nearest neighbor and shuttle it around to swap; we can just do it directly, no matter where you are," said Monroe.
The third leg is error correction, which without question is a daunting challenge throughout quantum computing. The relative unreliability of qubits means you need many redundant physical qubits (estimates vary widely on how many) to have a single reliable logical qubit. Ions are among the better-behaving qubits. For starters, all the ions are literally identical and not subject to manufacturing defects. A slight downside is that ion qubit switching speed is slower than other modalities, which some observers say may hamper efficient error correction.
Said Monroe, "The nice thing about trapped ion qubits is their errors are already pretty good natively. Passively, without any fancy stuff, we can get to three or four nines[i] before we run into problems."
What are those problems? "I don't want to say they're fundamental, but there are brick walls that require a totally different architecture to get around," said Monroe. "But we don't need to get better than three or four nines because of error correction. This is sort of a software encoding. The price you pay for error correction, just like in classical error correction encoding, is you need a lot more bits to redundantly encode. The same is true in quantum. Unfortunately, with quantum there are many more ways you can have an error."
Just how many physical qubits are needed for a logical qubit is something of an open question.
"It depends what you mean by logical qubit. There's a difference in philosophy in the way we're going forward compared to many other platforms. Some people have this idea of fault-tolerant quantum computing, which means that you can compute infinitely long if you want. It's a beautiful theoretical result. If you encode in a certain way, with enough overhead, you can actually run gates as long as you want. But to get to that level, the overhead is something like 100,000 to one, [and] in some cases a million to one, but that logical qubit is perfect, and you get to go as far as you want [in terms of number of gate operations]," he said.
IonQ is taking a different tack that leverages software more than hardware, thanks to the ions' stability and the less noisy overall support system [ion trap]. He likens improving qubit quality to buying a nine in the commonly used five-nines vernacular of reliability: 99.999 percent (five nines) is used to describe availability, or put another way, time between shutdowns because of error.
"We're going to gradually leak in error correction only as needed. So we're going to buy a nine with an overhead of about 16 physical qubits to one logical qubit. With another overhead of 32 to one, we can buy another nine. By then we will have five nines and several hundred logical qubits. This is where things are going to get interesting, because then we can do algorithms that you can't simulate classically, [such] as some of these financial models we're doing now. This is optimizing some function, but it's doing better than the classical result. That's where we think we will be at that point," he said.
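A sketch of that "buy a nine" accounting, under our own simplifying assumption (not IonQ's published math) that each quoted overhead stage adds roughly one nine to a native gate fidelity of about three nines, and that the usable operation count is roughly the inverse of the error rate.

```python
import math

def nines(fidelity: float) -> float:
    """How many nines a fidelity corresponds to, e.g. 0.999 -> 3."""
    return -math.log10(1.0 - fidelity)

# overhead (physical qubits per logical qubit) -> assumed logical fidelity
stages = {1: 0.999, 16: 0.9999, 32: 0.99999}
for overhead, fid in stages.items():
    print(f"{overhead:>2}:1 overhead -> ~{nines(fid):.0f} nines "
          f"-> ~{1 / (1 - fid):,.0f} ops before an error")
```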
Monroe didn't go into detail about exactly how IonQ does this, but he emphasized that software is the big driver now at IonQ. "Our whole approach at IonQ is to throw everything up to software as much as we can. That's because we have these perfectly replicable atomic qubits, and we don't have manufacturing errors, we don't have to worry about a yield or anything like that; everything is a control problem."
So how big a system do you need to run practical applications?
"That's a really good question, because I can safely say we don't exactly know the answer to that. What we do know is if you get to about 100 qubits, maybe 72, or something like that, and these qubits are good enough, meaning that you can do tens of thousands of ops. Remember, with 100 qubits you want to do about 10,000 ops to something you can't simulate classically. This is where you might deploy some machine learning techniques that you would never be able to do classically. That's probably where the lowest hanging fruit are," said Monroe.
"Now for us to get to 100 [good] qubits and, say, 50,000 ops, that requires about 1,000 physical qubits, maybe 1,500 physical qubits. We're looking at 1,200 physical qubits, and this might be 16 cores with 64 ions in each core before we have to go to photonic connections. But the photonic connection is the key because [it's] where you start to have a truly modular data center. You can stamp these things out. At that point, we're just going to be making these things like crazy, and wiring them together. I think we'll be able to do interesting things before we get to that stage, and it will be important if we can show some kind of value (application results/progress) and that we have the recipe for scaling indefinitely; that's a big deal," he said.
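A quick cross-check of the figures in that quote, as a reader's arithmetic rather than an IonQ specification: 16 cores of 64 ions apiece, against the target of roughly 100 usable qubits and about 50,000 operations.

```python
# Layout arithmetic implied by the quote above.
cores, ions_per_core = 16, 64
physical = cores * ions_per_core
usable_target, ops_target = 100, 50_000

print(f"physical qubits: {physical}")                  # 1024, close to the ~1,200 quoted
print(f"implied overhead: ~{physical / usable_target:.0f} physical per usable qubit")
print(f"n^2 rule of thumb: {usable_target**2:,} ops vs the {ops_target:,} quoted goal")
```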
It is probably going too far to say that Monroe believes scaling up IonQ's quantum computer is now just a straightforward engineering task, but it sometimes sounds that way. The biggest technical challenges, he suggests, are largely solved. Presumably, IonQ will successfully demonstrate its modular architecture in 2022. He said competing approaches (superconducting and all-photonics, for example) won't be able to scale. "They are stuck," he said.
"I think they will see atomic systems as being less exotic than they once thought. I mean, we think of computers as built from silicon and as solid state. For better or for worse, you have companies that forgot that they're supposed to build computers, not silicon or superconductors. I think we're going to see a lot more fierce competition on our own turf," said Monroe. There are ion trap rivals. Honeywell is one such rival (Honeywell has announced plans to merge with Cambridge Quantum), said Monroe.
His view of the long term is interesting. As science and hardware issues are solved, software will become the driver. IonQ already has a substantial software team. The company uses machine learning now to program its control system elements such as the laser pulses and connectivity. "We're going to be a software company in the long haul, and I'm pretty happy with that," said Monroe.
IonQ has already integrated with the three big cloud providers' (AWS, Google, Microsoft) quantum offerings, embraced the growing ecosystem of software and tools providers, and has APIs for use with a variety of tools. Monroe, like many in the quantum community, is optimistic but not especially precise about when practical applications will appear. Sometime in the next three years is a good guess, he suggests. As for which application area will be first, it may not matter, in the sense that he thinks as soon as one domain shows benefit (e.g. finance or ML) other domains will rush in.
These are heady times at IonQ, as they are throughout quantum computing. Stay tuned.
[i] He likens improving qubit quality to buying a nine in the commonly used five-nines vernacular of reliability: 99.999 percent (five nines) is used to describe availability, or put another way, time between shutdowns because of error.
QUANTUM COMPUTING INC. Management’s Discussion and Analysis of Financial Condition and Results of Operations, (form 10-Q) – marketscreener.com
Management's discussion and analysis of results of operations and financial condition ("MD&A") is a supplement to the accompanying condensed financial statements and provides additional information on Quantum Computing Inc.'s ("Quantum" or the "Company") business, current developments, financial condition, cash flows and results of operations.
When we say "we," "us," "our," "Company," or "Quantum," we mean Quantum Computing Inc.
This section should be read in conjunction with other sections of this Quarterly Report, specifically, Selected Financial Statements and Supplementary Data.
Products and Products in Development
Qatalyst is integrated with the Amazon Cloud Braket API, offering access to multiple Quantum Processing Units ("QPUs") including D-Wave, Rigetti, and IonQ. Qatalyst also integrates directly with IBM's QPUs.
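The filing does not describe Qatalyst's own API, but the Braket integration it references can be illustrated with the standard Amazon Braket Python SDK. A minimal sketch follows; the IonQ device ARN is a placeholder that varies by region and account, and AWS credentials plus a results bucket are assumed to be configured.

```python
# Minimal sketch of submitting a circuit to an ion-trap QPU through the
# Amazon Braket SDK (the service referenced above). This is not Qatalyst's
# API; the device ARN is an illustrative placeholder.
from braket.circuits import Circuit
from braket.aws import AwsDevice

device = AwsDevice("arn:aws:braket:::device/qpu/ionq/ionQdevice")  # placeholder ARN
bell = Circuit().h(0).cnot(0, 1)          # simple Bell-pair circuit

task = device.run(bell, shots=100)        # queued on the managed QPU
print(task.result().measurement_counts)   # e.g. {'00': 52, '11': 48}
```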
In addition to commercial markets, the Company is pursuing a number of US government funded opportunities.
Three Months Ended September 30, 2021 vs. September 30, 2020
Nine Months Ended September 30, 2021 vs. September 30, 2020
Liquidity and Capital Resources
The following table summarizes total current assets, liabilities and working capital at September 30, 2021, compared to December 31, 2020:
On a long-term basis, our liquidity is dependent on continuation and expansion of operations and receipt of revenues.
Critical Accounting Policies and Estimates
We have identified the accounting policies below as critical to our business operations and the understanding of our results of operations.
The Company's policy is to present bank balances under cash and cash equivalents, which at times, may exceed federally insured limits. The Company has not experienced any losses in such accounts.
Lease expense for operating leases consists of the lease payments plus any initial direct costs, primarily brokerage commissions, and is recognized on a straight-line basis over the lease term.
Net loss per share is based on the weighted average number of common shares and common share equivalents outstanding during the period.
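As a small illustration of that per-share calculation, here is a minimal sketch; the figures are placeholders, not the Company's reported numbers.

```python
# Net loss per share = net loss / weighted-average shares outstanding.
def net_loss_per_share(net_loss: float, weighted_avg_shares: float) -> float:
    return net_loss / weighted_avg_shares

# Placeholder example figures for illustration only.
print(f"{net_loss_per_share(-5_000_000, 28_000_000):.2f}")   # -0.18 per share
```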
Off Balance Sheet Arrangements
Pasqal named startup of the year by L’Usine Nouvelle – EurekAlert
PARIS, Nov. 4, 2021 -- Pasqal, developers of neutral atom-based quantum technology, today announced it was named Startup of the Year by L'Usine Nouvelle, a leading French business news site covering economic and industrial news across industries. L'Usine Nouvelle's awards programs honor innovations, individuals and projects that aim to solve society's biggest challenges.
Founded in 2019 as a spin-off from Institut d'Optique, Pasqal was the first startup dedicated to quantum computing in France. The company is on an accelerated growth track and expects to grow from 40 employees to 100 by the end of 2022. Pasqal raised a €25 million Series A funding round in June 2021, one of the largest Series A rounds in Europe for a deep tech startup. This award comes on the heels of a momentous year for Pasqal. In 2021, the company grew its employee base by 300%, adding 30 new team members from eight different regions.
This award recognizes Pasqal's tremendous contributions to the quantum ecosystem. Pasqal's initiatives are aligned with the French quantum national plan and the France 2030 investment plan, which identified deep tech and quantum computing as critical industries for France's success. Pasqal aims to solve real-world challenges through quantum technology and believes it will deliver a 1000-qubit quantum processor to the market in 2023, faster than the quantum development roadmaps of the tech giants in the field. Capable of operating at room temperature, Pasqal's full-stack, software-agnostic quantum processing units have the potential to address complex problems in medicine, finance and sustainability more efficiently than classical computers. Pasqal's open-source library, Pulser, enables the control of neutral atom-based processors at the level of laser pulses.
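As a concrete illustration of that pulse-level control, here is a minimal sketch using the open-source Pulser library's documented building blocks (register, channel, pulse); the atom coordinates and pulse parameters are arbitrary values for illustration, not a Pasqal benchmark.

```python
# Minimal Pulser sketch: place two atoms, declare a global Rydberg channel,
# and schedule one constant pulse. Values are illustrative only.
import numpy as np
from pulser import Pulse, Register, Sequence
from pulser.devices import MockDevice

reg = Register.from_coordinates([(0, 0), (0, 8)], prefix="atom")  # positions in um
seq = Sequence(reg, MockDevice)
seq.declare_channel("rydberg", "rydberg_global")

pulse = Pulse.ConstantPulse(duration=200,         # ns
                            amplitude=2 * np.pi,  # rad/us
                            detuning=0.0,
                            phase=0.0)
seq.add(pulse, "rydberg")
print(seq)    # human-readable schedule of the programmed pulses
```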
Pasqal is already exploring specific use cases across industries. The company is working with a leading French utility company, EDF, to optimize charging schedules for electric vehicles to combat climate change. Pasqal is also working with Crédit Agricole CIB and Multiverse Computing to design and implement new approaches running on classical and quantum computers to outperform state-of-the-art algorithms for capital markets and risk management. In addition, the company is working with Qubit Pharmaceuticals to accelerate drug discovery through quantum technology. Pasqal hopes these initial use cases will open the door to additional applications in carbon capture, energy and sustainability.
With the company's recent funding, Pasqal plans to continue innovating and developing new solutions. By the end of 2022, Pasqal plans to provide cloud access to its quantum computing services and hopes to deliver a full quantum computing device operating on the cloud by 2023.
Georges-Olivier Reymond, CEO and founder of Pasqal, accepted the award at L'Usine Nouvelle's Foundations of the Industry event today in Paris. The event was attended by more than 150 industry leaders and decision-makers, uniting various industry sectors in France and Europe.
"We're honored to be named Startup of the Year among the many French technology startups aiming to solve the world's biggest challenges," said Reymond. "We're proud to be part of France's hub for technology innovation, supported by the French government, and we look forward to putting our quantum technology to real-world use throughout the region."
To learn more about Pasqal and its award-winning solutions, please visit: http://www.pasqal.io.
About Pasqal
Pasqal is building quantum processors out of neutral atoms (atoms possessing an equal number of electrons and protons) through the use of optical tweezers using laser light, enabling the engineering of full-stack processors with high connectivity and scalability.
The company is dedicated to delivering a 1000-qubit quantum processor by 2023 to help customers achieve quantum advantage in the fields of quantum simulation and optimization across several vertical sectors, including finance, energy and supercomputing.
For more information, please contact the company: contact@pasqal.io
Quantum Xchange Joins the Hudson Institute’s Quantum Alliance Initiative – PRNewswire
BETHESDA, Md., Nov. 3, 2021 /PRNewswire/ -- Quantum Xchange, delivering the future of encryption with its leading-edge key distribution platform, today announced its membership with the Hudson Institute's Quantum Alliance Initiative (QAI), a consortium of companies, institutions, and universities whose mission is to raise awareness and develop policies that promote the critical importance of U.S. leadership in quantum technology, while simultaneously working to ensure that the nation's commercial businesses, government agencies, and digital infrastructure will be safe from a future quantum computer cyberattack by 2025.
The arrival of quantum computers is expected to break popular encryption methods, e.g., Public Key Encryption (PKE), widely used to protect nearly every aspect of digital life. Earlier this month, the U.S. Department of Homeland Security released guidance to help organizations prepare for the largest cryptographic transition in the history of computing with Secretary Mayorkas stating, "We must prepare now to protect the confidentiality of data that already exists today and remains sensitive in the future." Despite these early warnings, most U.S. businesses and federal agencies have taken a lax position, waiting for NIST to publish its post-quantum cryptography (PQC) standard before any action is taken.
"Government and business leaders don't fully recognize the urgency of the quantum threat or magnitude of the multi-year crypto migration problem it will require after NIST publishes the PQC standard," said Eddy Zervigon, CEO of Quantum Xchange. "As a quantum security trailblazer, with an enterprise-ready solution, we believe it's our duty to help raise awareness and arm cybersecurity professionals, and lawmakers, with the information needed to become stewards of change within their organizations conveying to leadership and the public the severity and immediacy of the quantum security threat. We are pleased to be a member of QAI and to advance this common agenda."
Quantum Xchange's radically reimagined approach to data encryption addresses the weaknesses of legacy encryption systems and the quantum threat at once. Using the company's groundbreaking out-of-band symmetric key delivery technology, Phio Trusted Xchange, leading businesses and government agencies can simply and affordably future-proof the security of their data and communications networks, overcome the vulnerabilities of present-day encryption techniques, and better protect against known and future attacks.
"Hudson's Quantum Alliance Initiative aims to transform how we think about quantum, the science and technology that will dominate the world's economies, security, and prospects for freedom," said QAI Director Arthur Herman. "Having Quantum Xchange as a member is a welcome addition to the international coalition we are building, to make sure America is quantum ready for the 21st century."
About Quantum Xchange
Quantum Xchange gives commercial enterprises and government agencies the ultimate solution for protecting data in motion today and in the quantum future. Its award-winning out-of-band symmetric key distribution system, Phio Trusted Xchange (TX), is uniquely capable of making existing encryption environments quantum safe and supports both post-quantum crypto (PQC) and Quantum Key Distribution (QKD). Only by decoupling key generation and delivery from data transmissions can organizations achieve true crypto agility and quantum readiness with no interruptions to underlying infrastructure or business operations. To learn more about future-proofing your data from whatever threat awaits, visit QuantumXC.com or follow us on Twitter @Quantum_Xchange #BeQuantumSafe.
Quantum Blockchain Technologies Plc – Update on FPGA and ASIC Development – Yahoo Finance UK
Early internal calculations show a final chip that could perform 24% quicker than current best available ASIC
OVERVIEW
The goal of the Company is to develop disruptive Bitcoin mining technology, to mine both faster and with less overall energy consumption than current practices. A number of advanced technologies are being used by QBT to achieve this goal; namely, quantum computing, AI Neural Networks - Deep Learning, Algebraic-Boolean reductions, Very Big Data, Cryptography and custom chip programming and design - using GPU, FPGA and ASIC chips.
The current technique used by producers of Bitcoin mining technology to achieve the fastest performance on dedicated computers is to manufacture single-purpose, customised ASIC chips, which can perform only one hard-wired function, i.e., the computation of the double hash of the SHA-256 cryptographic algorithm used to extract Bitcoins. The simple reality is that the faster the algorithms are computed and the more ASIC chips deployed, the more chances a miner has to extract Bitcoins.
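For readers unfamiliar with the workload, the computation every mining chip repeats is a double application of SHA-256 to a block header, compared against a difficulty target. A minimal sketch is below; the header bytes and target are illustrative, not a real Bitcoin block.

```python
# Double SHA-256 as evaluated by Bitcoin mining hardware: hash an 80-byte
# block header twice, then compare the digest against a difficulty target.
import hashlib

def double_sha256(header: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

header = bytes(76) + (12345).to_bytes(4, "little")   # 76 dummy header bytes + nonce
digest = double_sha256(header)
target = int("0000ffff" + "f" * 56, 16)              # illustrative difficulty target

print(digest[::-1].hex())                            # displayed block-hash form
print(int.from_bytes(digest, "little") < target)     # "valid share" check
```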
Before manufacturing an ASIC chip, which is an expensive operation, there are usually two initial steps; firstly, to develop the logic gates architecture which will be used by the final ASIC chip this is performed on a cheaper but slower chip, called an FPGA, which already contains some pre-defined functions - and secondly, by customising the design to take advantage of the greater freedom offered by ASIC technology, initially by manufacturing a prototype in a small batch, to keep costs low. The final stage, manufacturing the completed ASIC chip, is an expensive process, but the end result is a very small scale (currently up to 5nm) processing chip, which is significantly quicker, leading to greater results when mining Bitcoin.
QBT has now completed the FPGA development phase and is moving on to develop its ASIC prototype.
Initial estimates derived from the FPGA performance obtained in our internal testing indicate that, when the final industrial ASIC prototype design is completed, it could outperform the fastest ASIC chip currently being used to mine Bitcoin by at least 24%.
Moreover, early experimental evidence suggests that using AI techniques to multiply the speed of an FPGA computing the Bitcoin mining algorithm by several factors would make even an FPGA a competitive Bitcoin mining tool. The same principle would apply to ASICs and other existing commercial mining tools. Tests on this innovative approach will continue over the next three months.
DETAILED VIEW
Following testing of a variety of design options, an unrolled SHA-256 architecture has now been implemented for QBT's FPGA chip prototype, with a number of existing optimisations coded.
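As a rough illustration of what "unrolled" means here, the snippet below writes out a single SHA-256 compression round in Python using the standard FIPS 180-4 definition; an unrolled hardware design instantiates all 64 of these rounds as separate pipeline stages rather than looping over one round circuit, trading chip area for throughput. This is the generic textbook round, not QBT's implementation.

    MASK = 0xFFFFFFFF  # keep everything in 32-bit arithmetic

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & MASK

    def sha256_round(state, k_t, w_t):
        # One of the 64 SHA-256 compression rounds (FIPS 180-4 notation).
        a, b, c, d, e, f, g, h = state
        s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
        ch = (e & f) ^ (~e & g)
        t1 = (h + s1 + ch + k_t + w_t) & MASK
        s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
        maj = (a & b) ^ (a & c) ^ (b & c)
        t2 = (s0 + maj) & MASK
        return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)

    # Software iterates this 64 times per block; a fully unrolled FPGA or ASIC
    # design replicates the round logic 64 times so that, once the pipeline is
    # full, a new hash result can emerge every clock cycle.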
The Company's patent-pending ASIC ULTRA Boost improvements will be added in the next few weeks, as a result of the close cooperation between the in-house cryptography expert and the Company's FPGA designer.
Current performance of the Bitcoin mining architecture developed by QBT on the FPGA (based on 16nm technology, at a 600 MHz basic cycle and using the average general-purpose coding area available) is 2.8 gigahashes per second (GH/s), with an estimated 50W energy consumption. The ASIC ULTRA Boost optimisation should improve this performance by 7%, as previously reported.
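As a back-of-envelope check of how such figures fit together (using only the numbers quoted above; the one-hash-per-cycle pipeline assumption is ours, not stated in the announcement), a fully unrolled pipeline can finish one double hash per clock cycle, so the hash rate is roughly clock frequency times the number of parallel pipelines:

    clock_hz = 600e6      # 600 MHz basic cycle, as stated above
    hash_rate = 2.8e9     # 2.8 GH/s reported on the FPGA
    power_w = 50          # estimated consumption, as stated above

    pipelines = hash_rate / clock_hz
    print(f"implied parallel pipelines: {pipelines:.1f}")                  # ~4.7
    print(f"energy per gigahash: {power_w / (hash_rate / 1e9):.1f} J/GH")  # ~17.9 J/GH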
To put this into context, our in-house expert has calculated that, as of today, a top-of-the-range FPGA running QBT's implementation of the algorithm (approximately 15 times faster than a standard FPGA) is still approximately 23.4 times slower than the best-in-class existing 7nm ASIC Bitcoin mining chip, and much less energy efficient.
It was never the Company's intention to compete with an ASIC chip on speed and energy efficiency using an FPGA but, in order to keep testing costs significantly lower, it has been a necessary step for QBT to take in this phase of its development. As a result, the Company is now in a much better position to assess the performance projection of its SHA-256 Bitcoin mining architecture, and the team is therefore confident that it can now transfer this solution over to an ASIC chip.
Preliminary approximate calculations indicate that on a 12nm ASIC chip (extrapolated by comparison with a commercial mid-range 16nm ASIC Bitcoin mining chip with circa 300 million gates), our ASIC could achieve a double-hash rate of 392 GH/s. With the ASIC ULTRA Boost optimisation adding an extra 7%, as previously announced, this reaches 419 GH/s, which would still be 2.26 times slower than the fastest ASIC commercially available.
However, the Company strongly believes that an industrial production run of QBT's ASIC at 7nm would deliver a double-hash rate of 1.19 TH/s, 24% faster than the best commercial 7nm ASIC chip for Bitcoin mining available on the market today.
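The projected figures above compose as follows; this is simply arithmetic on the numbers quoted in the announcement, not an independent benchmark:

    base_12nm_ghs = 392.0                        # projected 12nm ASIC double-hash rate
    boosted_12nm_ghs = base_12nm_ghs * 1.07      # +7% ASIC ULTRA Boost -> ~419 GH/s
    ref_from_ratio_ghs = boosted_12nm_ghs * 2.26 # "2.26 times slower" -> ~948 GH/s reference
    ref_from_claim_ghs = 1190.0 / 1.24           # "24% faster" at 7nm -> ~960 GH/s reference
    print(round(boosted_12nm_ghs), round(ref_from_ratio_ghs), round(ref_from_claim_ghs))
    # The two implied reference hash rates agree to within roughly 1%, i.e.
    # within the rounding of the quoted figures.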
This performance of QBT's architecture on the 7nm ASIC does not yet include the 7% efficiency gain from the optimisation in the patent application filed in September, which is still to be implemented. The current work on a second patent by the Company's cryptography expert will hopefully lead to further material optimisations.
Detailed simulations on energy consumption will be run as soon as our ASIC gate design layout is completed.
ASIC programming will start this month, and we estimate that by the end of Q1 2022 we will be able to announce when the first batch of prototype chips will be available for in-house testing. Following the completion of testing, the Company envisages that chip production, for QBTs own use, will commence by the end of 2022.
Concurrently, QBT's R&D team is also considering an alternative approach to computing SHA-256 (the basic Bitcoin mining algorithm), which will be tested within the next three months. The joint effort of our AI team and the Company's FPGA expert could improve the current FPGA hash rate of 2.8 GH/s by a highly material multiple, making mining on the slower FPGA chip potentially competitive against the best-in-class ASIC chip. Should this route be successful, mining via this method could commence as early as Q2 2022.
These AI techniques, if successful, could also improve the performance of current commercial ASIC Bitcoin miners.
The Company's new R&D IT infrastructure, which will also allow Bitcoin mining tests to be carried out, will be operational in five weeks' time. The delay has been caused by the serious worldwide shortage of silicon chips, which is holding up the expected delivery of the hardware. However, the Company has made heavy use of cloud resources in order to avoid any interruption to the R&D activities of the various groups.
The Company remains very confident in the R&D strategy it has adopted, which it believes could result in disruptive Bitcoin mining technology. It is worth noting that the Company's R&D programme is fully funded until the end of 2022.
Francesco Gardin, CEO and Chairman of QBT, commented, "Our R&D has delivered some very impressive results in a very short time. In only four months since the programme commenced, we have filed a patent application for the ASIC ULTRA Boost, which we believe improves the standard mining algorithm after five years of little or no progress following the publication of the ASIC Boost paper in 2016.
We will soon be ready to start the design of our ASIC Bitcoin mining chip which, on paper, already outperforms in speed the current best-in-class commercial ASIC Bitcoin mining solution. This significant improvement comes before the implementation of the new optimisation from ASIC ULTRA Boost, and we are confident that our second patent application, which is under development by our cryptography expert, will add a further radical improvement to the process, including a reduction in energy consumption.
All our other teams are working extremely hard on the other R&D fronts: quantum computing, AI neural networks (deep learning) and algebraic-Boolean optimisation. Meanwhile, an AI accelerator will be tested within the next three months, which we believe could radically improve the performance of existing commercial miners, as well as our GPU, FPGA and, in the near future, our ASIC chip. We consider the R&D activity undertaken by our group of 15 experts to be unbelievably exciting and, if successful, potentially radically innovative for the industry."
Read the rest here:
Quantum Blockchain Technologies Plc - Update on FPGA and ASIC Development - Yahoo Finance UK
Is This the Right Time for a Cryptography Risk Assessment? – Security Boulevard
If you're having trouble getting a handle on your cryptographic instances, you're not alone. According to the Ponemon Institute's most recent Global Encryption Trends Study, "Discovering where sensitive data resides is the number one challenge."[i] And it's no surprise given the surge in cryptographic use cases spawned by modern IT practices such as DevOps, machine identity, cloud, and multi-cloud environments.
Discussions at the DHS (Department of Homeland Security) and NIST (National Institute of Standards and Technology) are urgently raising awareness among public and private organizations of the need for tools and methods that give them visibility into their cryptographic instances so they can monitor them.
"Many information technology (IT) and operational technology (OT) systems are dependent on public-key cryptography, but many organizations have no inventory of where that cryptography is used. This makes it difficult to determine where and with what priority post-quantum algorithms will need to replace the current public-key systems. Tools are urgently needed to facilitate the discovery of where and how public-key cryptography is being used in existing technology infrastructures."[1] This concern was raised by NIST in a recent report on adopting and using post-quantum algorithms.
DHS recently partnered with NIST to create a roadmap designed to reduce the risks expected with advancements in technology, particularly quantum computing. The roadmap provides a guide for chief information officers on how to mitigate risks, advising them to stay on top of changing standards, inventory and prioritize systems and datasets, audit vulnerabilities, and use the gathered information for transition planning. In the statement, Homeland Security Secretary Alejandro N. Mayorkas advised, "Now is the time for organizations to assess and mitigate their related risk exposure. As we continue responding to urgent cyber challenges, we must also stay ahead of the curve by focusing on strategic, long-term goals."
The roadmap ostensibly advises organizations to embark on what industry analyst Gartner refers to as a Cryptographic Center of Excellence (CryptoCoE), which is a group within an organization that takes ownership of an enterprise-wide strategy for crypto and PKI: discovering, inventorying, monitoring, and executing.
By organizing the people, protocols, processes, and technology needed to prepare for quantum resilience, CIOs are laying the foundation for a strong crypto strategy and building a CryptoCoE within their organization to enforce governance and compliance and bring crypto agility.
Crypto agility describes an approach to implementing cryptography that shouldn't be limited to preparations for post-quantum computing. Crypto agility means that cryptographic updates can be made without causing business disruption: algorithm replacement is relatively straightforward and can happen without changing the function of an application. This means being prepared to transition easily to new requirements as they are updated by standards groups and regulatory bodies. Requirements and regulations change to keep up with a threat climate that is always in motion, necessitating stronger algorithms and longer key lengths.
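One way to picture crypto agility in practice: keep the algorithm choice in configuration behind a thin abstraction, so swapping in a stronger digest (or, eventually, a post-quantum scheme) is a policy change rather than an application rewrite. The sketch below is a minimal illustration using Python's standard library; the configuration names and function are assumptions for the example, not a reference to any particular product.

    import hmac

    # Algorithm names live in configuration/policy, not in application code.
    CRYPTO_CONFIG = {"digest": "sha256"}   # e.g. flip to "sha3_256" later

    def sign_message(key: bytes, message: bytes) -> bytes:
        # HMAC the message with whatever digest the current policy names.
        return hmac.new(key, message, CRYPTO_CONFIG["digest"]).digest()

    tag = sign_message(b"demo-key", b"payload")
    # Rotating to a new algorithm means updating CRYPTO_CONFIG (and re-keying
    # where required); callers of sign_message() do not change.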
Another driver for having an accurate picture of your cryptographic inventory is knowing what certificates are in use throughout the organization, whether they are in compliance, and when they expire. Certificate expiry causes outages that make business applications unavailable. Outages can be costly, cause potential breaches of service-level agreements, and damage brand reputation.
The sooner an organization can gain visibility into all of its cryptographic instances, which means going behind the endpoints to uncover SSH keys, crypto libraries, and hardcoded cryptography hidden inside hosts and applications, the better prepared it will be to avoid data breaches and maintain compliance as new key lengths and algorithms are required to defend against known threats. If you're wondering whether or not it's time to perform an enterprise-wide cryptography risk assessment, the time is now.
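As a small, hedged illustration of the kind of discovery tooling the article calls for, the script below walks a directory tree looking for PEM-encoded certificates and private-key files and reports certificate expiry using the widely used pyca/cryptography package. A real enterprise inventory goes much further (SSH agents, crypto libraries linked into binaries, hardcoded keys inside applications); the file extensions and starting path here are assumptions for the example.

    import pathlib
    from datetime import datetime
    from cryptography import x509   # pyca/cryptography package

    def scan_for_crypto(root: str):
        # Report PEM certificates (with expiry) and private-key files under root.
        for path in pathlib.Path(root).rglob("*"):
            if path.suffix not in {".pem", ".crt", ".cer", ".key"} or not path.is_file():
                continue
            data = path.read_bytes()
            if b"BEGIN CERTIFICATE" in data:
                try:
                    cert = x509.load_pem_x509_certificate(data)
                except ValueError:
                    continue  # skip malformed PEM blocks
                days_left = (cert.not_valid_after - datetime.utcnow()).days
                print(f"CERT {path}: expires {cert.not_valid_after:%Y-%m-%d} ({days_left} days)")
            elif b"PRIVATE KEY" in data:
                print(f"KEY  {path}: private key material found")

    scan_for_crypto("/etc/ssl")   # example starting point; pick your own roots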
Other Resources:
DHS releases roadmap to post-quantum cryptography
Getting Ready for Post-Quantum Cryptography: Exploring Challenges Associated with Adopting and Using Post-Quantum Cryptographic Algorithms, NIST, April 28, 2021, https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04282021.pdf
[1] The National Institute of Standards and Technology (NIST), https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04282021.pdf
[i] 2021 Global Encryption Trends Study, Ponemon Institute
The post Is This the Right Time for a Cryptography Risk Assessment? appeared first on Entrust Blog.
*** This is a Security Bloggers Network syndicated blog from Entrust Blog authored by Diana Gruhn. Read the original post at: https://www.entrust.com/blog/2021/11/is-this-the-right-time-for-a-cryptography-risk-assessment/
See the original post:
Is This the Right Time for a Cryptography Risk Assessment? - Security Boulevard