Quantum Computing Will Breach Your Data Security – BRINK
Researchers talk to each other next to the IBM Q System One quantum computer at IBM's research facility in Yorktown Heights, New York. Quantum computing's speed and efficacy represent one of the biggest threats to data security in the future.
Photo: Misha Friedman/Getty Images
Quantum computing (QC) represents the biggest threat to data security in the medium term, since it can make attacks against cryptography much more efficient. With quantum computing capabilities having advanced from the realm of academic exploration to tangible commercial opportunities, now is the time to take steps to secure everything from power grids and IoT infrastructures to the burgeoning cloud-based information-sharing platforms that we are all increasingly dependent upon.
Although encrypted data appears random, encryption algorithms follow logical rules and can be vulnerable to certain kinds of attacks. All algorithms are inherently vulnerable to brute-force attacks, in which every possible combination of the encryption key is tried.
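To see why key length is what defeats brute force, consider the toy sketch below. The XOR "cipher" and its deliberately tiny 16-bit key are hypothetical stand-ins (real systems use AES with 128- or 256-bit keys); the point is that a 16-bit key space falls in milliseconds while a 128-bit key space does not:

```python
# Toy illustration of brute-forcing an encryption key.
# The XOR "cipher" and 16-bit key below are hypothetical stand-ins;
# real systems use AES with 128- or 256-bit keys.

def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    """XOR each byte with a byte of a 16-bit key (NOT secure; XOR is its own inverse)."""
    return bytes(b ^ ((key >> (8 * (i % 2))) & 0xFF) for i, b in enumerate(plaintext))

def brute_force(ciphertext: bytes, known_plaintext: bytes) -> int:
    """Try every 16-bit key until decryption matches the known plaintext."""
    for key in range(2**16):
        if toy_encrypt(ciphertext, key) == known_plaintext:
            return key
    raise ValueError("key not found")

secret_key = 0xBEEF
msg = b"attack at dawn"
ct = toy_encrypt(msg, secret_key)
recovered = brute_force(ct, msg)

# 65,536 candidate keys fall instantly; ~3.4e38 candidates (128-bit) do not.
print(f"16-bit keys: {2**16:,}")
print(f"128-bit keys: {2**128:.3e}")
```

The same exhaustive loop run against a 128-bit key would take longer than the age of the universe on classical hardware, which is the baseline the quantum threat is measured against.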
According to Verizon's 2021 Data Breach Investigations Report, 85% of breaches caused by hacking involve brute force or the use of credentials that have been lost or stolen. Moreover, cybercrime costs the U.S. economy $100 billion a year and costs the global economy $450 billion annually.
Traditionally, a 128-bit encryption key has been regarded as a secure theoretical limit against brute-force attacks; it is also the bare-minimum key size for the Advanced Encryption Standard (AES), currently the default symmetric encryption cipher for public and commercial use.
These are considered computationally infeasible to crack, and most experts consider today's 128-bit and 256-bit encryption keys to be generally secure. However, within the next 20 years, sufficiently large quantum computers will be able to break essentially all public-key schemes currently in use in a matter of seconds.
Quantum computing speeds up prime number factorization, so a quantum computer can recover secret keys far more efficiently than an exhaustive classical search. A task thought to be computationally infeasible on conventional architectures becomes tractable, compromising existing cryptographic algorithms and shortening the time needed to break public-key cryptography from years to hours.
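The reduction behind that speedup can be sketched classically: factoring N comes down to finding the multiplicative order r of a random base a modulo N, and order finding is the only step that needs a quantum computer. The sketch below (the modulus 15 is a toy example) brute-forces that step, which is precisely the part Shor's quantum subroutine makes exponentially faster:

```python
import math
import random

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). Brute force is exponential in the
    bit length of n; Shor's quantum subroutine finds r in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int) -> int:
    """Return a nontrivial factor of an odd composite n via the order-finding reduction."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                      # lucky guess already shared a factor
        r = find_order(a, n)
        # If r is even and a^(r/2) != -1 (mod n), gcd(a^(r/2) - 1, n) is a factor.
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            return math.gcd(pow(a, r // 2) - 1, n)

factor = shor_classical(15)
print(factor, 15 // factor)  # 3 and 5, in either order
```

Everything here except `find_order` runs in polynomial time classically, which is why replacing that one routine with a quantum order finder breaks RSA-style factoring assumptions.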
Quantum computers outperform conventional computers for specific problems by leveraging complex phenomena such as quantum entanglement and the probabilities associated with superpositions (when quantum bits [qubits] exist in several states at the same time) to perform a series of operations in such a way that favorable probabilities are enhanced. When a quantum algorithm is applied, the probability of measuring the correct answer is maximized.
Algorithms such as RSA, AES, and Blowfish remain worldwide standards in cybersecurity. The security of public-key schemes like RSA rests mainly on two mathematical problems, the integer factorization problem and the discrete logarithm problem, whose hardness makes it difficult to recover the key, preserving the system's security.
Two algorithms for quantum computers challenge current cryptography systems. Shor's algorithm greatly speeds up solving the integer factorization problem. Grover's quantum search algorithm, while not as dramatic a speedup, still significantly accelerates searches for decryption keys that, with traditional computing technologies, would take time on the order of quintillions of years.
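Grover's quadratic speedup can be simulated classically for tiny cases. The NumPy sketch below (the 16-item search space and marked index are illustrative) shows amplitude amplification concentrating measurement probability on a marked item in about (π/4)·√N iterations, versus N/2 expected guesses classically:

```python
import numpy as np

# Classical simulation of Grover's search over N = 16 items.
N = 16
marked = 11                              # the "secret" index the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition over all items

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # 3 iterations for N = 16
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state     # diffusion: reflect amplitudes about the mean

probs = state**2
print(f"P(marked) after {iterations} Grover iterations: {probs[marked]:.3f}")
```

For a 128-bit key, √N is still 2^64 evaluations, which is why Grover's algorithm weakens symmetric ciphers (halving effective key length) rather than breaking them outright.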
All widely used public-key cryptographic algorithms are theoretically vulnerable to attacks based on Shor's algorithm, but the algorithm depends on operations that can only be achieved by a large-scale quantum computer (>7,000 qubits). Quantum computers are thus likely to make encryption systems based on RSA and discrete logarithm assumptions (DSA, ECDSA) obsolete. Companies like D-Wave Systems promise to deliver 7,000-plus-qubit machines by 2023-2024, although D-Wave's quantum annealers are not the gate-model computers that Shor's algorithm requires.
Quantum technologies are expected to bring about disruption in multiple sectors. Cybersecurity will be one of the main industries to feel this disruption; and although there are already several players preparing for and developing novel approaches to cybersecurity in a post-quantum world, it is vital for corporations, governments, and cybersecurity supply-chain stakeholders to understand the impact of quantum adoption and learn about some of the key players working on overcoming the challenges that this adoption brings about.
Businesses can implement quantum-safe cybersecurity solutions that range from developing risk management plans to harnessing quantum mechanics itself to fight the threats QC poses.
Replacing encryption algorithms generally requires several steps: replacing cryptographic libraries, implementing validation tools, deploying hardware required by the new algorithm, updating dependent operating systems and communications devices, and replacing security standards and protocols. Hence, post-quantum cryptography needs to be prepared for eventual threats as many years in advance as is practical, even though quantum attacks are not yet available to cyberattackers.
Quantum computing has the potential for both disrupting and augmenting cybersecurity. There are techniques that leverage quantum physics to protect from quantum-computing related threats, and industries that adopt these technologies will find themselves significantly ahead of the curve as the gap between quantum-secure and quantum-vulnerable systems grows.
This Week’s Awesome Tech Stories From Around the Web (Through July 9) – Singularity Hub
SPACE
Rocket Lab Offers Next-Day Shipping to Space | Devin Coldewey | TechCrunch
"It wasn't long ago that orbital launches were something that took years of planning and months of tests and careful preparation. But Rocket Lab's new program will enable customers to show up at the launch site with their payload in the boot and have it in orbit 24 hours later. Premium next-day rates will apply, of course."
FIFA Will Track Players' Bodies Using AI to Make Offside Calls at 2022 World Cup | James Vincent | The Verge
"The semi-automated system consists of a sensor in the ball that relays its position on the field 500 times a second, and 12 tracking cameras mounted underneath the roof of stadiums, which use machine learning to track 29 points in players' bodies. Software will combine this data to generate automated alerts when players commit offside offenses."
MIT Proposes Brazil-Sized Space Bubbles to Cool Earth | Kristin Houser | Big Think
"Instead of injecting particles into Earth's atmosphere to cool the planet, an interdisciplinary team of MIT researchers proposes we take solar geoengineering to space. The proposed shield would be about the size of Brazil, and the bubbles for it could be manufactured and deployed in space, possibly out of silicon. The group has already experimented with creating these space bubbles in the lab."
The Robot Guerrilla Campaign to Recreate the Elgin Marbles | Franz Lidz | The New York Times
"While security staff [at the British Museum] looked on, the two used standard iPhones and iPads, as many of the latest models are equipped with lidar sensors and photogrammetry software, to create 3D digital images. The 3D images of the marble horse head were uploaded into the carving robot, which shaved the prototype over four days."
Bacteria Could Produce Powerful and Cleaner Rocket Fuel | Kevin Hurler | Gizmodo
"Because the carbon geometry in a POP-FAME is more compact than those found in preexisting fuels, it allows a greater number of molecules to fill the same amount of space. What's more, acute angles within POP-FAMEs place stress on the carbon bonds, and this stress, the researchers surmised, could be a major source of potential energy, and with a cleaner production process."
Autonomous Drones Challenge Human Champions in First Fair Race | Evan Ackerman | IEEE Spectrum
"Here are some preliminary clips from one of the vision-based autonomous drones flying head-to-head with a human; the human-piloted drone is red, while the autonomous drone is blue. With a top speed of 80 km/h, the vision-based autonomous drone outraced the fastest human by 0.5 second during a three-lap race, where just one or two tenths of a second is frequently the difference between a win and a loss."
3D Printing Grows Beyond Its Novelty Roots | Steve Lohr | The New York Times
"They say 3D printing, also called additive manufacturing, is no longer a novelty technology for a few consumer and industrial products, or for making prototype design concepts. 'It is now a technology that is beginning to deliver industrial-grade product quality and printing in volume,' said Jörg Bromberger, a manufacturing expert at McKinsey & Company."
Crypto's Free Rein May Be Coming to a Close | Gian M. Volpicelli | Wired
"Regulation is coming for crypto. After more than a decade in which cryptocurrencies and related technologies have surged, boomed, and busted in a regulatory vacuum, lawmakers in both the US and Europe are writing new rules for a sector that has grown dangerously large in both value and reach, touching $2.9 trillion at its peak in November 2021. The ongoing crash in crypto markets has only strengthened rule-makers' resolve."
Cruise's Robot Car Outages Are Jamming Up San Francisco | Aarian Marshall | Wired
"For [MIT roboticist and entrepreneur Rodney Brooks], robotaxis getting stuck in and blocking traffic is evidence of the challenges faced by Cruise and its competitors as they try to turn promising prototype autonomous vehicles into large-scale commercial services. 'A lot of technologists think if you do a demo, then that's it. But scaling is what kills you,' he says. 'You run into all sorts of things that didn't happen at a smaller scale.'"
Will These Algorithms Save You From Quantum Threats? | Amit Katwala | Wired
"For the last six years, the National Institute of Standards and Technology (NIST) has been running a competition to find the algorithms that it hopes will secure our data against quantum computers. This week, it published the results. 'People have to understand the threat that quantum computers can pose to cryptography,' says Dustin Moody, who leads the post-quantum cryptography project at NIST. 'We need to have new algorithms to replace the ones that are vulnerable, and the first step is to standardize them.'"
Image Credit: Bit Cloud / Unsplash
Colorado’s quantum revolution: How scientists exploring a universe of tiny things are transforming the state into a new Silicon Valley – CU Boulder…
Artist's depiction of an atomic clock. (Credit: Steven Burrows/JILA)
Researchers at CU Boulder and LongPath Technologies are using quantum sensors to detect methane leaks from oil and gas sites. (Credit: CU Boulder)
That new quantum revolution began, in many ways, with a clock: not a wristwatch or a grandfather clock, but a device that can do a lot more with the help of atoms.
Today, scientists at JILA and NIST are developing some of the world's most precise and accurate atomic clocks. They build off decades of work by Nobel laureates Jan Hall, Dave Wineland, Eric Cornell and Carl Wieman.
First, researchers collect clouds of atoms and chill them down, then trap those atoms in an artificial crystal made of laser light. Next, they hit the atoms with yet another laser. Like pushing a pendulum, that laser beam starts the atoms ticking, causing them to oscillate between energy levels at a rate of quadrillions of times per second.
Child wears a helmet made up of more than 100 OPM sensors. (Credit: FieldLine)
These clocks are also incredibly sensitive. Ye, for example, demonstrated an atomic clock that can register the difference in Earth's gravity if you lift it up by just a millimeter. Ye, who's also the director of CUbit, leads an on-campus center funded by the National Science Foundation called Quantum Systems through Entangled Science and Engineering (Q-SEnSE).
He imagines using such devices to, for example, predict when a volcano is about to erupt by sensing the flow of magma miles below Earth's surface.
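The millimeter-scale sensitivity Ye demonstrated follows from general relativity's gravitational redshift: a clock raised by a height Δh runs faster by a fraction Δf/f = gΔh/c². A quick back-of-the-envelope calculation, using only standard constants, shows the tiny effect such clocks must resolve:

```python
# Fractional frequency shift from gravitational time dilation over a height change:
# delta_f / f = g * delta_h / c^2.

G_ACCEL = 9.81           # Earth's surface gravity, m/s^2
C = 299_792_458.0        # speed of light, m/s

def redshift(delta_h_m: float) -> float:
    """Fractional clock-frequency shift between two heights delta_h_m apart."""
    return G_ACCEL * delta_h_m / C**2

shift = redshift(1e-3)   # raise the clock by 1 mm
print(f"{shift:.2e}")    # ~1e-19, resolvable only by today's best optical clocks
```

A shift of roughly one part in 10^19 is why only the latest generation of optical lattice clocks can register a millimeter of elevation.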
"For me, one of the most promising technological avenues is quantum sensors," said Rey, who has worked with Ye over the years to take atomic clocks to greater and greater levels of precision. "We've already seen that quantum can help us do better measurements."
A team of engineers at CU Boulder is using different quantum sensors to detect methane leaking from natural gas operations in the West.
Meanwhile, CU Boulder's Svenja Knappe and her colleagues employ a quantum sensor called an optically pumped magnetometer, or OPM, to dive into the complex territory of the human brain.
"The first time I went to a neuroscience meeting, the neuroscientists there looked at me like I was from Mars," said Knappe, associate professor in the Paul M. Rady Department of Mechanical Engineering.
Knappe's OPMs are each about the size of two sugar cubes. They contain a group of atoms that change the orientation of their "spins," a strange property of atoms and particles, in response to the magnetic fields around them. It's a bit like how the needle in a compass always points north. She and her colleagues are employing the sensors to measure the tiny blips of energy that neurons emit when humans move, think or even just breathe.
Neuroscientists are already using helmets embedded with 128 of these sensors to collect maps of the brain's activity, or magnetoencephalograms (MEGs). These are important tools for studying or diagnosing illnesses like schizophrenia and Parkinson's disease. To date, Knappe and her colleagues have sold sensors to about a dozen clients through a Boulder-based company called FieldLine.
"This is not a technology that's 20 years out. Quantum sensors can make an impact on your life now," she said.
Artist's depiction of an Earth-sized planet orbiting a star roughly 100 light-years from our own. (Credit:NASA/Goddard Space Flight Center)
By going smaller and ever more precise, scientists might also pursue questions that have eluded them for decades.
Jun Ye shares his team's new atomic clock, the world's most precise yet. (Credit: NIST)
Learn more about the power of frequency combs. (Credit: NIST)
For instance: What is dark matter?
This mysterious substance constitutes about 84% of the mass in the universe, but scientists have yet to identify what type of particle it's made of. Dark matter is, as far as physicists can tell, completely invisible and rarely interacts with normal matter. But Ye suspects that some candidates for dark matter could bump into the atoms in his atomic clock: not very often, but often enough that he and his colleagues could, theoretically, detect the disturbance.
"Say you have a clock here in the U.S. and another clock somewhere near the North Pole," Ye said. "If one clock was speeding up while the other was slowing down, and if you have accounted for all other known effects, that might indicate that we're seeing different fields pass Earth as it moves through the universe."
Other researchers in Colorado are narrowing the search for dark matter using a different set of quantum technologies.
Quantum researcher Scott Diddams is turning quantum sensors toward space to look for planets circling stars far away from Earth.
Diddams is a former physicist at NIST who recently joined CU Boulder as a professor in the Department of Electrical, Computer and Energy Engineering. He explained that when exoplanets orbit their home stars, they tug on those stars a little bit, causing them to wobble.
Telescopes on the ground can spot those wobbles, but the changes are very, very faint.
"As a star is being pulled away from us, the colors of its light look ever so slightly more red. As it's being pulled toward us, its light will look slightly more blue," he said. "And by slight, I mean less than one part in 1 billion."
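To put that one-part-in-a-billion figure in perspective, the non-relativistic Doppler relation Δλ/λ ≈ v/c converts a fractional wavelength shift directly into the star's line-of-sight velocity (the numbers below are illustrative):

```python
# Radial-velocity scale implied by a one-part-in-1-billion wavelength shift,
# using the non-relativistic Doppler approximation: delta_lambda / lambda = v / c.

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(fractional_shift: float) -> float:
    """Line-of-sight stellar velocity (m/s) for a given fractional wavelength shift."""
    return fractional_shift * C

# A shift of 1 part in 1e9 corresponds to a stellar wobble of ~0.3 m/s,
# roughly walking pace, which is the scale of the tug an Earth-like planet
# exerts on a Sun-like star.
v = radial_velocity(1e-9)
print(f"{v:.3f} m/s")
```

Resolving a color change equivalent to a 0.3 m/s wobble is what makes the frequency comb's "ruler for light waves" necessary.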
He uses a powerful type of laser called a frequency comb to help zero in on that slight shift. More than 20 years ago, the physicist was part of the team that invented these lasers when he was a postdoctoral scientist at JILA; at first, researchers used them to count out the ticking in atomic clocks. But if you also install one of these tools in a telescope on the ground, Diddams said, it can act almost like a ruler for light waves. Astronomers can deploy these rulers to more precisely measure the color of light coming from distant stars, potentially finding planets hiding just out of view.
Diddams' colleagues are already doing just that every night from two observatories on the ground. Teams led by Penn State have installed frequency combs at the McDonald Observatory in Texas and the Kitt Peak National Observatory in Arizona, and the researchers have plans for more.
"It's a really interesting example of transitioning technology first developed at JILA out of the lab and into real experiments," he said.
Artist's depiction of a laser heating up bars of silicon many times thinner than the width of a human hair. (Credit: Steven Burrows/JILA)
When Antony van Leeuwenhoek first observed the green cells belonging to algae from lakes in the Netherlands, he was seeing the world at about 200 times its normal size.
Physicists Margaret Murnane and Henry Kapteyn have spent their careers trying to look deeper than that, roughly 100 to 1,000 times deeper.
Margaret Murnane and Henry Kapteyn in their lab on campus. (Credit: Glenn Asakawa/CU Boulder)
A little more than a decade ago, the duo built the worlds first X-ray laser that could fit on a tabletop. And they did it by tapping into the quantum nature of electrons and atoms.
The team uses a laser to pluck electrons in atoms, essentially making them vibrate violently, akin to what happens if you pluck a guitar string really hard. In the process, the atoms, like guitar strings, can snap, breaking apart but also emitting excess energy in the form of X-ray light. The resulting beams, which are today among the fastest microscopes on Earth, oscillate more than a quintillion, or a billion billion, times per second.
Murnane and Kapteyn's group at CU Boulder has used these lasers to better understand how heat flows in nanodevices thinner than the width of a human hair. They've also found that light can manipulate the magnetic properties of materials more than 500 times faster than scientists previously predicted.
NIST and Imec, a company that develops semiconductors and other devices, are already employing Murnane and Kapteyn's lasers to design new nano-sized electronics. "Seeing is understanding," said Murnane, a JILA fellow and distinguished professor of physics at CU Boulder. "We still can't see everything we need to see to be able to understand nature."
To build the microscopes of tomorrow, Murnane and Kapteyn helped launch a $24 million center on campus called STROBE with funding from the U.S. National Science Foundation. In the 1990s, they started a company called KM Labs to sell their X-ray lasers.
They're also continuing to push the limits of what atoms are capable of. Kapteyn said the team would like to one day create an X-ray laser so powerful that it could see inside human tissue. Such a machine would allow doctors to zoom in on specific regions of the body, and with much greater resolution than current X-rays.
"There are a lot of quantum questions that are still outstanding in this area around how far we can push this technology," said Kapteyn, a JILA fellow and professor of physics.
Artist's depiction of a qubit formed from an ytterbium atom trapped in laser light. (Credit: Steven Burrows/JILA)
There may be no quantum technology that gets as much hype as quantum computers.
Some of the largest technology companies in the world, including Google, Amazon, Microsoft and IBM, are trying their hand at developing computers based on unusual properties of quantum physics. Experts believe quantum computers could one day solve problems that even the largest supercomputers on Earth right now couldn't.
Cindy Regal, center, helped to consult on a mural by artist Amanda Phingbodhipakkiya in Denver's Washington Park celebrating women in physics. (Credit: Amanda Phingbodhipakkiya)
But researchers at CU Boulder say quantum computers that can solve problems relevant to the lives of real people may still be a long way away. For one thing, the quantum processors currently available often produce too many errors to do a lot of basic calculations.
"Quantum computers may be really good for the specific tasks they are designed for," said Shuo Sun, a JILA fellow and assistant professor of physics at CU Boulder. "But they won't be good at everything."
Researchers at CU Boulder, however, are striving to design new and better qubits. Like the bits that run your home laptop, qubits form the basis for quantum computers. But they're a lot more flexible: Qubits can take on values of zero or one, like normal bits, but they can also exist in a ghostlike state, or superposition, of zero and one at the same time.
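That zero-and-one behavior can be sketched with a plain state vector. In the minimal NumPy example below (the Hadamard gate and sampling setup are standard textbook material, used here purely as illustration), a qubit in equal superposition yields 0 or 1 with 50% probability each when measured:

```python
import numpy as np

# A qubit's state is a 2-component complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measurement yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2.

zero = np.array([1, 0], dtype=complex)           # the classical-like |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

superposed = hadamard @ zero                     # equal superposition of |0> and |1>
probs = np.abs(superposed) ** 2                  # [0.5, 0.5]

rng = np.random.default_rng(seed=0)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(f"P(0) = {probs[0]:.2f}, measured frequency of 0: {(samples == 0).mean():.3f}")
```

Simulating n qubits this way requires a vector of 2^n amplitudes, which is one intuition for why quantum hardware can outpace classical simulation.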
Cindy Regal, a fellow at JILA and associate professor of physics at CU Boulder, and her colleagues are using a technique called optical tweezing to make qubits out of neutral atoms, or atoms without a charge. Optical tweezers deploy laser beams to carefully move around and arrange those atoms. Scientists can then build complex lattices made up of atoms, carefully controlling how they interact with each other.
Sun, in contrast, is designing a different kind of qubit by implanting lone atoms inside diamonds and other crystals.
Regal noted that even if these qubits don't wind up in a computer anytime soon, they can still help scientists answer new questions, allowing them to simulate, for example, the physics of weird states of matter in a controlled lab setting.
Some experts have even imagined quantum computers leading the hunt for new medicines. These devices might, decades from now, scan through large databases of molecules, looking for ones with the exact chemical behavior doctors want.
In the end, Murnane said, if quantum researchers want to bring their quantum technologies out of the lab, they need to continue to build connections with researchers beyond physics, collaborating with engineers, materials scientists, astrophysicists, biologists and more.
"If you want your research to have a big impact, you need to look beyond your field," Murnane said.
Ye noted that the most amazing quantum technologies may be the ones that scientists haven't dreamed up yet. As van Leeuwenhoek discovered four centuries ago, the deeper you look into the world, the more surprises you'll find.
"It's just like when people first invented a microscope," Ye said. "There's an entire world down there filled with bacteria, and we never realized that."
CEA-Leti and the Silicon Valleys of Grenoble, France – ComputerWeekly.com
CEA-Leti is one of the world's top five semiconductor research technology organisations (RTOs), non-profit organisations that serve as intermediaries between research institutes and industrial players. RTOs work closely with pure research institutes to identify promising innovations, and they work closely with industrial partners to develop prototypes and demonstrators that prepare those innovations for mass production.
RTOs also cooperate with a wide array of public actors to help develop strategies for technological development within a given country. In Europe, there are two other RTOs in the world's top five: Imec in Belgium and Fraunhofer in Germany. There are several other smaller ones in Europe, including VTT in Finland and Tecnalia in Spain.
"CEA-Leti is part of a bigger organisation, CEA, the French Commission for Alternative Energies and Atomic Energy," said Jean-René Léquepeys, deputy director and CTO at CEA-Leti. "We are an applied research laboratory, making the link with fundamental research. We collaborate closely with the fundamental research division within CEA, as well as with other research institutes outside of CEA, to explore novel paths for research. We take pure research and transfer it to industry in good condition.
"We have a very strong partnership with STMicroelectronics for a variety of projects. Two other strong partnerships are with Soitec for materials, and with Lynred for cooled and uncooled infrared detectors. These three partners are in the Grenoble area. We also work with a variety of large companies, including Intel, Schneider, Siemens, Valeo, Renault and some of the GAFAM companies [Google (Alphabet), Apple, Facebook (Meta), Amazon, Microsoft], with whom we have confidential partnerships. And, of course, we support our startup companies: 74 startups have spun out of CEA-Leti so far."
Léquepeys added: "While not as well known to the general public as Silicon Valley, Grenoble has been a hotspot for microelectronics and More-than-Moore technologies for quite a long time."
CEA-Leti works with companies spread across the three valleys around Grenoble. One valley is a hotbed for microelectronics, another is home to innovation in imagery systems, and the third is dedicated to display technologies.
Here are Léquepeys' answers to questions from Computer Weekly:
Can you provide an example of how RTOs work with industry?
Léquepeys: A good example of the role RTOs play is the way in which CEA-Leti developed fully depleted silicon-on-insulator (FD-SOI) technology and transferred it to Samsung, STMicroelectronics and GlobalFoundries for production.
FD-SOI uses an ultra-thin layer of insulator that sits on top of the base silicon. The transistor channel is a silicon layer that is so thin it requires no doping, hence the term "fully depleted." FD-SOI offers a good compromise between pure computing power and reduced power consumption, 40% less than with traditional approaches. It works very well as a building block for analog and radio frequency applications.
Another advantage of FD-SOI is the capability to apply a back bias, in effect putting a gate below the transistor. The voltage threshold of the transistors can be shifted by applying voltage to this gate.
This makes it possible to boost the performance of the transistors when needed. You can still take the transistors down to 0.4V when you don't need huge computing power and want to optimise power consumption.
So this is a technology that offers a very good trade-off between power, energy consumption and cost. It is simpler than FinFET technology in terms of the number of lithography steps required. FinFET is better for pure computing and for density, and is well adapted to big CPUs or GPUs, but not well adapted to analog and RF chips.
One example of the increasing use of FD-SOI is that Qualcomm, Mediatek and GlobalFoundries began building radio frequency (RF) front-end solutions for 5G phones, based on FD-SOI technology.
Can you give a few examples of hot areas of applied research at CEA-Leti?
Léquepeys: The first is the chiplet approach. Instead of designing a very large circuit in a single node, we design a modular architecture based on smaller chips. By doing that, we obtain several big advantages.
The first is cost: the yield of a circuit decreases with its size, so several small dies yield better than one large one. The approach also offers performance gains, because you can choose the best technology node for a given function. For example, we used a very advanced node for processor functions, and a more relaxed node for the analog and radio frequency functions. We also gain in flexibility by using a highly programmable circuit, or FPGA [field programmable gate array], in the chiplets.
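The yield argument can be made concrete with the standard Poisson die-yield model, Y = exp(-A·D0), in which yield falls exponentially with die area. The defect density and die areas below are illustrative, not CEA-Leti's figures:

```python
import math

# Poisson die-yield model: Y = exp(-A * D0), where A is die area (cm^2)
# and D0 is defect density (defects/cm^2). All numbers are illustrative.

D0 = 0.5  # defects per cm^2

def die_yield(area_cm2: float) -> float:
    """Fraction of fabricated dies of the given area that are defect-free."""
    return math.exp(-area_cm2 * D0)

y_big = die_yield(8.0)       # one monolithic 8 cm^2 die: ~1.8% good
y_chiplet = die_yield(1.0)   # each 1 cm^2 chiplet: ~60.7% good

print(f"monolithic: {y_big:.1%}, per chiplet: {y_chiplet:.1%}")
```

Because good chiplets can be binned and combined after test, the effective cost per working system drops sharply compared with betting an entire large die on zero defects.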
Chiplets can use scalable, massively parallel architectures. We demonstrated the benefits of this approach by developing a chiplet prototype two or three years ago, with six chiplets of 16 cores each, so 96 cores in total.
The chiplets were developed using 28nm FD-SOI. The resulting prototype had the computing power of 10 laptops, running more than 220 giga-operations per second in a very small silicon area, less than 200mm².
Thanks to this very modular approach exploiting 3D technology, the energy efficiency gain is a factor greater than 10. For this approach to be widely used, it will be necessary to develop open communications standards between chiplets and to build up a whole library of chiplets optimised for standard functions.
A committee to standardise communication between chiplets was launched by big US companies including Intel, which puts a lot of emphasis on advanced packaging solutions, that is, solutions based on chiplets.
What is another hot area of applied research you are working on?
Léquepeys: The second is computing based on neuromorphic chips. The human brain has very high computing efficiency for tasks like recognising shapes or faces, and it performs these operations using only 20W. Another example is the bee's brain, which does very little computing but is still very efficient compared with integrated circuits.
Biological systems have become a source of inspiration for the semiconductor industry. We tried to mimic the behaviour of the brain, with synapses and neurons, using non-volatile memories and transistors to reach the same order of energy efficiency.
We first launched a neuromorphic chip called Spirit. It was a demonstrator using quite old technology, 130nm CMOS, with integrated non-volatile memories.
We had only 10 neurons and 144 synapses on this demonstrator, and the circuit consumed only 3.6 picojoules per synaptic event. That's around eight to 10 times less than the Intel demonstrator and 10 times less than the IBM demonstrator. The computational task we had this chip perform was to recognise handwritten digits.
We disclosed this chip at an IBM conference in 2019. Now we are preparing the next generation, which will be a scaled-up version of this approach. We plan to have more than 100,000 neurons on a chip and more than 75 million synapses.
This is a promising area, with a potential for huge gains in power efficiency.
Any more hot areas of applied research that CEA-Leti is involved in that you can talk about?
Léquepeys: Yes, and it's a very hot topic: quantum computing. There are several approaches already being pursued around the world: photons, superconductors, silicon technology, cold atoms and trapped ions. If we compare these against a set of objective criteria, no technology has come out as the overall winner yet.
In terms of number of entangled qubits, which is an important criterion, photons, superconductors and cold atoms are in the lead. But scaling those approaches up would be difficult in the near future.
At CEA-Leti, we have chosen to make qubits on silicon. We think this is the most promising solution in terms of being able to scale up. That is for two reasons. First, the maturity of the electronics industry means we have the equipment and processes needed to scale up. And second, the small size of the qubits built on silicon means you can put a lot of them in a small area.
A qubit on silicon is a million times smaller than qubits made with superconductors or photons, and that will offer the ability to put millions of qubits on a single chip. However, at the present time, this technology lags in terms of the number of qubits that have been put in a single computer.
At CEA-Leti, we have a very aggressive roadmap. We plan to have six entangled qubits by the end of this year, compatible with an industrial pilot line, based on FD-SOI. We expect to reach 100 qubits in 2024 and a quantum processor by 2030, with a complete software stack and all the adequate error-correcting codes. This is one of our very high priorities, and huge amounts of resources have been mobilised to make this happen.
We plan to launch a startup company in this area, probably at the end of this year or the beginning of next year.
For our work on quantum computing, we have forged strategic partnerships with CNRS and INRIA, two mature research centres in France. We are financially supported by the national programme launched by [French president] Emmanuel Macron and we also get funding from the European projects we coordinate.
Speaking of Europe, where do you think Europe's strengths lie in microelectronics?
Lèquepeys: While Europe produces only 10% of the world's semiconductor components, it is in a strong position for circuits in the automotive market, with a 36% market share, thanks to STMicroelectronics, Infineon and NXP. Europe is also strong in Industry 4.0 and in wireless connectivity.
Europe is also leading the power devices market, with Infineon and STMicroelectronics, and also has strong market share in sensors, with the two world leaders, Bosch and STMicroelectronics.
Europe is also in the leading position for microcontrollers, with STMicroelectronics, Infineon and NXP and in secure devices. All of this is around smart cards and hardware security modules for payment.
We are also in the leading position for the standardisation of telecommunication systems, from 2G up to 6G systems, which is already being planned.
Why is Grenoble a good place for all this work?
Lèquepeys: Grenoble is a good place for microelectronics. It is a young and dynamic city with engineering schools and a well-established university. What is important is the trio of education, research and industry working in close collaboration to develop future products. And by the way, Grenoble was named European Green Capital for 2022.
Grenoble welcomes the World Electronic Forum [WEF] in 2022. With the strong support of CEA, CEA-Leti is working with INRIA to organise the next edition of the WEF, which will be held in Grenoble from 5-7 October 2022. The WEF is an annual global event by invitation, limited to 150 participants in total, which brings together the largest electronic and digital industrial association, as well as industrial players and policy-makers.
The 2022 edition will be focused on a proposal by CEA-Leti on sustainable digital electronics in response to multiple environmental, geopolitical and health crises, both in developed and emerging countries. It will be a good hotspot for us, and a good way to show off Grenoble in key areas for global technological innovation, both in hardware and software.
More than 25 startups, including several from CEA-Leti, will exhibit during the WEF event, and a very strong presence is expected from the leading countries in the field of electronics: the US, Japan, China, Korea and Taiwan. We will also have countries with ambitions in this area: India, the Middle East and Southeast Asian countries.
High-level officials in the European Commission have been invited to attend, including Thierry Breton, and France's minister of industry, Bruno Le Maire, has been invited to inaugurate the event.
Quantum Error Correction: Time to Make It Work – IEEE Spectrum
Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.
Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.
The two of us, along with many other researchers involved in quantum computing, are trying to move definitively beyond these preliminary demos of QEC so that it can be employed to build useful, large-scale quantum computers. But before describing how we think such error correction can be made practical, we need to first review what makes a quantum computer tick.
Information is physical. This was the mantra of the distinguished IBM researcher Rolf Landauer. Abstract though it may seem, information always involves a physical representation, and the physics matters.
Conventional digital information consists of bits, zeros and ones, which can be represented by classical states of matter, that is, states well described by classical physics. Quantum information, by contrast, involves qubits (quantum bits), whose properties follow the peculiar rules of quantum mechanics.
A classical bit has only two possible values: 0 or 1. A qubit, however, can occupy a superposition of these two information states, taking on characteristics of both. Polarized light provides intuitive examples of superpositions. You could use horizontally polarized light to represent 0 and vertically polarized light to represent 1, but light can also be polarized at an angle and then has both horizontal and vertical components at once. Indeed, one way to represent a qubit is by the polarization of a single photon of light.
These ideas generalize to groups of n bits or qubits: n bits can represent any one of 2ⁿ possible values at any moment, while n qubits can include components corresponding to all 2ⁿ classical states simultaneously in superposition. These superpositions provide a vast range of possible states for a quantum computer to work with, albeit with limitations on how they can be manipulated and accessed. Superposition of information is a central resource used in quantum processing and, along with other quantum rules, enables powerful new ways to compute.
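To make the counting concrete, here is a minimal Python sketch (illustrative only, not tied to any quantum library; the function name is our own) showing that an n-qubit register is described by 2ⁿ amplitudes whose squared magnitudes sum to 1:

```python
import math

def equal_superposition(n):
    """Return the state vector with equal weight on all 2**n basis states."""
    dim = 2 ** n  # an n-qubit register needs one amplitude per classical state
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = equal_superposition(3)
print(len(state))                 # 8 amplitudes for 3 qubits
print(sum(a * a for a in state))  # squared magnitudes sum to 1
```

A classical 3-bit register, by contrast, would hold exactly one of those 8 values at a time.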
Researchers are experimenting with many different physical systems to hold and process quantum information, including light, trapped atoms and ions, and solid-state devices based on semiconductors or superconductors. For the purpose of realizing qubits, all these systems follow the same underlying mathematical rules of quantum physics, and all of them are highly sensitive to environmental fluctuations that introduce errors. By contrast, the transistors that handle classical information in modern digital electronics can reliably perform a billion operations per second for decades with a vanishingly small chance of a hardware fault.
Of particular concern is the fact that qubit states can roam over a continuous range of superpositions. Polarized light again provides a good analogy: The angle of linear polarization can take any value from 0 to 180 degrees.
Pictorially, a qubit's state can be thought of as an arrow pointing to a location on the surface of a sphere. Known as a Bloch sphere, its north and south poles represent the binary states 0 and 1, respectively, and all other locations on its surface represent possible quantum superpositions of those two states. Noise causes the Bloch arrow to drift around the sphere over time. A conventional computer represents 0 and 1 with physical quantities, such as capacitor voltages, that can be locked near the correct values to suppress this kind of continuous wandering and unwanted bit flips. There is no comparable way to lock the qubit's arrow to its correct location on the Bloch sphere.
Early in the 1990s, Landauer and others argued that this difficulty presented a fundamental obstacle to building useful quantum computers. The issue is known as scalability: Although a simple quantum processor performing a few operations on a handful of qubits might be possible, could you scale up the technology to systems that could run lengthy computations on large arrays of qubits? A type of classical computation called analog computing also uses continuous quantities and is suitable for some tasks, but the problem of continuous errors prevents the complexity of such systems from being scaled up. Continuous errors with qubits seemed to doom quantum computers to the same fate.
We now know better. Theoreticians have successfully adapted the theory of error correction for classical digital data to quantum settings. QEC makes scalable quantum processing possible in a way that is impossible for analog computers. To get a sense of how it works, it's worthwhile to review how error correction is performed in classical settings.
Simple schemes can deal with errors in classical information. For instance, in the 19th century, ships routinely carried clocks for determining the ship's longitude during voyages. A good clock that could keep track of the time in Greenwich, in combination with the sun's position in the sky, provided the necessary data. A mistimed clock could lead to dangerous navigational errors, though, so ships often carried at least three of them. Two clocks reading different times could detect when one was at fault, but three were needed to identify which timepiece was faulty and correct it through a majority vote.
The use of multiple clocks is an example of a repetition code: Information is redundantly encoded in multiple physical devices such that a disturbance in one can be identified and corrected.
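The three-clock scheme can be sketched in a few lines of Python (the function name and time strings are purely illustrative):

```python
from collections import Counter

def majority(readings):
    """Correct a single faulty reading by majority vote, as with three ship's clocks."""
    value, _ = Counter(readings).most_common(1)[0]
    return value

# One of the three "clocks" has drifted; the vote recovers the true time.
print(majority(["10:05", "10:05", "10:41"]))  # -> 10:05
```

With only two clocks you could detect the disagreement but not decide which reading to trust; the third copy breaks the tie.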
As you might expect, quantum mechanics adds some major complications when dealing with errors. Two problems in particular might seem to dash any hopes of using a quantum repetition code. The first problem is that measurements fundamentally disturb quantum systems. So if you encoded information on three qubits, for instance, observing them directly to check for errors would ruin them. Like Schrödinger's cat when its box is opened, their quantum states would be irrevocably changed, spoiling the very quantum features your computer was intended to exploit.
The second issue is a fundamental result in quantum mechanics called the no-cloning theorem, which tells us it is impossible to make a perfect copy of an unknown quantum state. If you know the exact superposition state of your qubit, there is no problem producing any number of other qubits in the same state. But once a computation is running and you no longer know what state a qubit has evolved to, you cannot manufacture faithful copies of that qubit except by duplicating the entire process up to that point.
Fortunately, you can sidestep both of these obstacles. We'll first describe how to evade the measurement problem using the example of a classical three-bit repetition code. You don't actually need to know the state of every individual code bit to identify which one, if any, has flipped. Instead, you ask two questions: "Are bits 1 and 2 the same?" and "Are bits 2 and 3 the same?" These are called parity-check questions because two identical bits are said to have even parity, and two unequal bits have odd parity.
The two answers to those questions identify which single bit has flipped, and you can then counterflip that bit to correct the error. You can do all this without ever determining what value each code bit holds. A similar strategy works to correct errors in a quantum system.
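Here is a classical Python sketch of that decoding step. In this toy model we compute the parities directly from the bits; in the quantum version, ancilla qubits extract the same two answers without revealing the encoded state. The syndrome-to-bit mapping is the standard one for the three-bit repetition code:

```python
def syndrome(bits):
    """Answer the two parity-check questions: (bits 1,2 equal?, bits 2,3 equal?)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Map the syndrome to the single flipped bit, if any, and counterflip it."""
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(bits)]
    if flipped is not None:
        bits[flipped] ^= 1
    return bits

print(correct([0, 1, 0]))  # middle bit flipped; decoder restores [0, 0, 0]
```

Note that the decoder never asks "what is bit 2?", only "do these two bits agree?"; that distinction is exactly what makes the quantum version possible.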
Learning the values of the parity checks still requires quantum measurement, but importantly, it does not reveal the underlying quantum information. Additional qubits can be used as disposable resources to obtain the parity values without revealing (and thus without disturbing) the encoded information itself.
What about no-cloning? It turns out it is possible to take a qubit whose state is unknown and encode that hidden state in a superposition across multiple qubits in a way that does not clone the original information. This process allows you to record what amounts to a single logical qubit of information across three physical qubits, and you can perform parity checks and corrective steps to protect the logical qubit against noise.
Quantum errors consist of more than just bit-flip errors, though, making this simple three-qubit repetition code unsuitable for protecting against all possible quantum errors. True QEC requires something more. That came in the mid-1990s when Peter Shor (then at AT&T Bell Laboratories, in Murray Hill, N.J.) described an elegant scheme to encode one logical qubit into nine physical qubits by embedding a repetition code inside another code. Shors scheme protects against an arbitrary quantum error on any one of the physical qubits.
Since then, the QEC community has developed many improved encoding schemes, which use fewer physical qubits per logical qubit (the most compact use five) or enjoy other performance enhancements. Today, the workhorse of large-scale proposals for error correction in quantum computers is called the surface code, developed in the late 1990s by borrowing exotic mathematics from topology and high-energy physics.
It is convenient to think of a quantum computer as being made up of logical qubits and logical gates that sit atop an underlying foundation of physical devices. These physical devices are subject to noise, which creates physical errors that accumulate over time. Periodically, generalized parity measurements (called syndrome measurements) identify the physical errors, and corrections remove them before they cause damage at the logical level.
A quantum computation with QEC then consists of cycles of gates acting on qubits, syndrome measurements, error inference, and corrections. In terms more familiar to engineers, QEC is a form of feedback stabilization that uses indirect measurements to gain just the information needed to correct errors.
QEC is not foolproof, of course. The three-bit repetition code, for example, fails if more than one bit has been flipped. Whats more, the resources and mechanisms that create the encoded quantum states and perform the syndrome measurements are themselves prone to errors. How, then, can a quantum computer perform QEC when all these processes are themselves faulty?
Remarkably, the error-correction cycle can be designed to tolerate errors and faults that occur at every stage, whether in the physical qubits, the physical gates, or even in the very measurements used to infer the existence of errors! Called a fault-tolerant architecture, such a design permits, in principle, error-robust quantum processing even when all the component parts are unreliable.
A long quantum computation will require many cycles of quantum error correction (QEC). Each cycle would consist of gates acting on encoded qubits (performing the computation), followed by syndrome measurements from which errors can be inferred, and corrections. The effectiveness of this QEC feedback loop can be greatly enhanced by including quantum-control techniques to stabilize and optimize each of these processes.
Even in a fault-tolerant architecture, the additional complexity introduces new avenues for failure. The effect of errors is therefore reduced at the logical level only if the underlying physical error rate is not too high. The maximum physical error rate that a specific fault-tolerant architecture can reliably handle is known as its break-even error threshold. If error rates are lower than this threshold, the QEC process tends to suppress errors over the entire cycle. But if error rates exceed the threshold, the added machinery just makes things worse overall.
The theory of fault-tolerant QEC is foundational to every effort to build useful quantum computers because it paves the way to building systems of any size. If QEC is implemented effectively on hardware exceeding certain performance requirements, the effect of errors can be reduced to arbitrarily low levels, enabling the execution of arbitrarily long computations.
At this point, you may be wondering how QEC has evaded the problem of continuous errors, which is fatal for scaling up analog computers. The answer lies in the nature of quantum measurements.
In a typical quantum measurement of a superposition, only a few discrete outcomes are possible, and the physical state changes to match the result that the measurement finds. With the parity-check measurements, this change helps.
Imagine you have a code block of three physical qubits, and one of these qubit states has wandered a little from its ideal state. If you perform a parity measurement, just two results are possible: Most often, the measurement will report the parity state that corresponds to no error, and after the measurement, all three qubits will be in the correct state, whatever it is. Occasionally the measurement will instead indicate the odd parity state, which means an errant qubit is now fully flipped. If so, you can flip that qubit back to restore the desired encoded logical state.
In other words, performing QEC transforms small, continuous errors into infrequent but discrete errors, similar to the errors that arise in digital computers.
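A rough numeric illustration of that discretization, assuming (our simplification) a single qubit whose state has drifted by a small angle eps away from the logical 0 pole: a projective check either snaps the state back to the codeword, with probability cos²(eps), or registers a full, correctable flip, with probability sin²(eps):

```python
import math

eps = 0.05  # small continuous drift, in radians

p_flip = math.sin(eps) ** 2  # probability the check reports a discrete, correctable flip
p_ok = math.cos(eps) ** 2    # probability the state snaps back to the ideal codeword

print(round(p_flip, 6))         # tiny: the continuous drift becomes a rare discrete event
print(round(p_ok + p_flip, 6))  # the two outcomes exhaust the possibilities: 1.0
```

Either way, after the measurement there is no longer a "slightly wrong" state to worry about, only a definite outcome that the decoder can handle.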
Researchers have now demonstrated many of the principles of QEC in the laboratory, from the basics of the repetition code through to complex encodings, logical operations on code words, and repeated cycles of measurement and correction. Current estimates of the break-even threshold for quantum hardware place it at about 1 error in 1,000 operations. This level of performance hasn't yet been achieved across all the constituent parts of a QEC scheme, but researchers are getting ever closer, achieving multiqubit logic with rates of fewer than about 5 errors per 1,000 operations. Even so, passing that critical milestone will be the beginning of the story, not the end.
On a system with a physical error rate just below the threshold, QEC would require enormous redundancy to push the logical rate down very far. It becomes much less challenging with a physical rate further below the threshold. So just crossing the error threshold is not sufficientwe need to beat it by a wide margin. How can that be done?
If we take a step back, we can see that the challenge of dealing with errors in quantum computers is one of stabilizing a dynamic system against external disturbances. Although the mathematical rules differ for the quantum system, this is a familiar problem in the discipline of control engineering. And just as control theory can help engineers build robots capable of righting themselves when they stumble, quantum-control engineering can suggest the best ways to implement abstract QEC codes on real physical hardware. Quantum control can minimize the effects of noise and make QEC practical.
In essence, quantum control involves optimizing how you implement all the physical processes used in QEC, from individual logic operations to the way measurements are performed. For example, in a system based on superconducting qubits, a qubit is flipped by irradiating it with a microwave pulse. One approach uses a simple type of pulse to move the qubit's state from one pole of the Bloch sphere, along the Greenwich meridian, to precisely the other pole. Errors arise if the pulse is distorted by noise. It turns out that a more complicated pulse, one that takes the qubit on a well-chosen meandering route from pole to pole, can result in less error in the qubit's final state under the same noise conditions, even when the new pulse is imperfectly implemented.
One facet of quantum-control engineering involves careful analysis and design of the best pulses for such tasks in a particular imperfect instance of a given system. It is a form of open-loop (measurement-free) control, which complements the closed-loop feedback control used in QEC.
This kind of open-loop control can also change the statistics of the physical-layer errors to better comport with the assumptions of QEC. For example, QEC performance is limited by the worst-case error within a logical block, and individual devices can vary a lot. Reducing that variability is very beneficial. In an experiment our team performed using IBM's publicly accessible machines, we showed that careful pulse optimization reduced the difference between the best-case and worst-case error in a small group of qubits by more than a factor of 10.
Some error processes arise only while carrying out complex algorithms. For instance, crosstalk errors occur on qubits only when their neighbors are being manipulated. Our team has shown that embedding quantum-control techniques into an algorithm can improve its overall success by orders of magnitude. This technique makes QEC protocols much more likely to correctly identify an error in a physical qubit.
For 25 years, QEC researchers have largely focused on mathematical strategies for encoding qubits and efficiently detecting errors in the encoded sets. Only recently have investigators begun to address the thorny question of how best to implement the full QEC feedback loop in real hardware. And while many areas of QEC technology are ripe for improvement, there is also growing awareness in the community that radical new approaches might be possible by marrying QEC and control theory. One way or another, this approach will turn quantum computing into a realityand you can carve that in stone.
This article appears in the July 2022 print issue as Quantum Error Correction at the Threshold.
Alan Turing’s Everlasting Contributions to Computing, AI and Cryptography – NIST
An enigma machine on display outside the Alan Turing Institute entrance inside the British Library, London.
Credit: Shutterstock/William Barton
Suppose someone asked you to devise the most powerful computer possible. Alan Turing, whose reputation as a central figure in computer science and artificial intelligence has only grown since his untimely death in 1954, applied his genius to problems such as this one in an age before computers as we know them existed. His theoretical work on this problem and others remains a foundation of computing, AI and modern cryptographic standards, including those NIST recommends.
The road from devising the most powerful computer possible to cryptographic standards has a few twists and turns, as does Turing's brief life.
Alan Turing
Credit: National Portrait Gallery, London
In Turing's time, mathematicians debated whether it was possible to build a single, all-purpose machine that could solve all problems that are computable. For example, we can compute a car's most energy-efficient route to a destination, and (in principle) the most likely way in which a string of amino acids will fold into a three-dimensional protein. Another example of a computable problem, important to modern encryption, is whether or not a bigger number can be expressed as the product of two smaller numbers. For example, 6 can be expressed as the product of 2 and 3, but 7 cannot be factored into smaller integers and is therefore a prime number.
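The factoring example can be checked with a few lines of trial division (an illustrative sketch; the integers protected by real cryptography are hundreds of digits long, far beyond this approach):

```python
def smallest_factor(n):
    """Return the smallest factor of n greater than 1; n is prime iff this equals n."""
    d = 2
    while d * d <= n:  # a composite number always has a factor no larger than sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_factor(6))  # -> 2, so 6 = 2 * 3 is composite
print(smallest_factor(7))  # -> 7, so 7 is prime
```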
Some prominent mathematicians proposed elaborate designs for universal computers that would operate by following very complicated mathematical rules. It seemed overwhelmingly difficult to build such machines. It took the genius of Turing to show that a very simple machine could in fact compute all that is computable.
His hypothetical device is now known as a Turing machine. The centerpiece of the machine is a strip of tape, divided into individual boxes. Each box contains a symbol (such as A, C, T, G for the letters of genetic code) or a blank space. The strip of tape is analogous to today's hard drives that store bits of data. Initially, the string of symbols on the tape corresponds to the input, containing the data for the problem to be solved. The string also serves as the memory of the computer. The Turing machine writes onto the tape data that it needs to access later in the computation.
Credit: NIST
The device reads an individual symbol on the tape and follows instructions on whether to change the symbol or leave it alone before moving to another symbol. The instructions depend on the current state of the machine. For example, if the machine needs to decide whether the tape contains the text string "TC," it can scan the tape in the forward direction while switching between the states "previous letter was T" and "previous letter was not T." If, while in state "previous letter was T," it reads a C, it goes to a state "found it" and halts. If it encounters the blank symbol at the end of the input, it goes to the state "did not find it" and halts. Nowadays we would recognize the set of instructions as the machine's program.
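That scanning behavior can be sketched as a simple state machine in Python (an illustration of the idea only; a full Turing machine would also be able to write to the tape and move in both directions):

```python
def contains_tc(tape):
    """Scan the tape left to right, tracking only whether the previous symbol was T."""
    state = "previous letter was not T"
    for symbol in tape:
        if state == "previous letter was T" and symbol == "C":
            return "found it"  # saw T immediately followed by C: halt
        state = ("previous letter was T" if symbol == "T"
                 else "previous letter was not T")
    return "did not find it"  # reached the end of the input: halt

print(contains_tc("GATCA"))  # -> found it
print(contains_tc("GACTA"))  # -> did not find it
```

Note how little the machine remembers at any moment: just one of two states, exactly as in the description above.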
It took some time, but eventually it became clear to everyone that Turing was right: The Turing machine could indeed compute all that seemed computable. No number of additions or extensions to this machine could extend its computing capability.
To understand what can be computed, it is helpful to identify what cannot be computed. In a previous life as a university professor, I had to teach programming a few times. Students often encounter the following problem: "My program has been running for a long time; is it stuck?" This is called the Halting Problem, and students often wondered why we simply couldn't detect infinite loops without actually getting stuck in them. It turns out a program to do this is an impossibility. Turing showed that there does not exist a machine that detects whether or not another machine halts. From this seminal result followed many other impossibility results. For example, logicians and philosophers had to abandon the dream of an automated way of detecting whether an assertion (such as whether there are infinitely many prime numbers) is true or false, as that is uncomputable. If you could do this, then you could solve the Halting Problem simply by asking whether the statement "this machine halts" is true or false.
Turing went on to make fundamental contributions to AI, theoretical biology and cryptography. His involvement with this last subject brought him honor and fame during World War II, when he played a very important role in adapting and extending cryptanalytic techniques invented by Polish mathematicians. This work broke the German Enigma machine encryption, making a significant contribution to the war effort.
Turing was gay. After the war, in 1952, the British government convicted him for having sex with a man. He stayed out of jail only by submitting to what is now called chemical castration. He died in 1954 at age 41 by cyanide poisoning, which was initially ruled a suicide but may have been an accident according to subsequent analysis. More than 50 years would pass before the British government apologized and pardoned him (after years of campaigning by scientists around the world). Today, the highest honor in computer science is called the Turing Award.
Turing's computability work provided the foundation for modern complexity theory. This theory tries to answer the question "Among those problems that can be solved by a computer, which ones can be solved efficiently?" Here, "efficiently" means not in billions of years but in milliseconds, seconds, hours or days, depending on the computational problem.
For example, much of the cryptography that currently safeguards our data and communications relies on the belief that certain problems, such as decomposing an integer number into its prime factors, cannot be solved before the Sun turns into a red giant and consumes the Earth (currently forecast for 4 billion to 5 billion years). NIST is responsible for cryptographic standards that are used throughout the world. We could not do this work without complexity theory.
Technology sometimes throws us a curve, such as the discovery that if a sufficiently big and reliable quantum computer is built it would be able to factor integers, thus breaking some of our cryptography. In this situation, NIST scientists must rely on the worlds experts (many of them in-house) in order to update our standards. There are deep reasons to believe that quantum computers will not be able to break the cryptography that NIST is about to roll out. Among these reasons is that Turings machine can simulate quantum computers. This implies that complexity theory gives us limits on what a powerful quantum computer can do.
But that is a topic for another day. For now, we can celebrate how Turing provided the keys to much of todays computing technology and even gave us hints on how to solve looming technological problems.
With Too Many Negative Factors to Beat, IonQ’s Valuation Makes it a Sell – InvestorPlace
Source: Amin Van / Shutterstock.com
IonQ (NYSE:IONQ), a company that defines itself as a leader in quantum computing, has seen its shares crash 70.23% in 2022, falling from nearly $17.50 in early January to $5.23 on Jun. 24. The quantum computing firm faces several risks now, and its first-quarter (Q1) 2022 financial results showed that it is generating revenue, but the figure is still not meaningful for a company with a market capitalization of $1.03 billion. Is IONQ stock a buy today after its steep decline this year?
I personally see no fundamental reason in favor of this, as the stock is overpriced. The company made its public debut on Oct. 1, 2021, through a business combination with dMY Technology Group, Inc. III, a special purpose acquisition company.
It must be tough to be the management of IonQ and read a report by Scorpion Capital with severe accusations. Scorpion Capital has called IonQ a hoax, reporting that it is "a part-time side-hustle run by two academics who barely show up" and "a scam built on phony statements about nearly all key aspects of the technology and business." On top of that, Scorpion Capital mentioned that IonQ has "a useless toy that can't even add 1+1, as revealed by experiments we hired experts to run" and that it "generates fictitious revenue via sham transactions and related-party round-tripping."
These accusations are very strong, but IonQ has responded by calling them "important inaccuracies and mischaracterizations regarding IonQ's business and progress to date."
The company is determined to build a quantum future. The report by Scorpion Capital has caused another big problem for IonQ. The company is facing a securities fraud lawsuit.
The securities fraud lawsuit summarizes its allegations, claiming that IonQ had not yet developed a 32-qubit quantum computer and that the firm's 11-qubit computer suffered from significant error rates, rendering it useless. It also states that a significant portion of IonQ's revenue was derived from improper round-tripping transactions with related parties.
So far, things do not look good for IonQ. I am not a lawyer, but I am not excited at all about these accusations.
What about the fundamentals? Can they change the negative opinion formed from the above information?
In its Q1 2022 financial results, IonQ reported revenue of $2 million and a net loss of $4.2 million. The company expects revenue between $2.3 and $2.5 million for Q2 2022.
We are talking about a company with a market capitalization of $1.03 billion. This company is unprofitable and is burning cash. It is too pricey with a current price-to-sales ratio (TTM) of 261.87.
The expectations for IonQ are for revenue growth of 407.11% in 2022, 78.37% in 2023, 178.44% in 2024, and 250.59% in 2025. I am not bullish at all, as the earnings per share projections are for negative 34 cents in 2022, negative 50 cents in 2023, negative 61 cents in 2024, and negative 26 cents in 2025. Is it a good idea to wait until 2025 for an unprofitable company to become profitable? I don't think so.
The revenue generated today is not meaningful, and the firm is losing money. Based on both the financial results and the valuation, I see this quantum computing stock as a bet that does not make sense. I will skip it entirely.
On the date of publication, Stavros Georgiadis, CFA did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Stavros Georgiadis is a CFA charter holder, an Equity Research Analyst, and an Economist. He focuses on U.S. stocks and has his own stock market blog at thestockmarketontheinternet.com. He has written various articles for other publications in the past and can be reached on Twitter and on LinkedIn.
See more here:
With Too Many Negative Factors to Beat, IonQ's Valuation Makes it a Sell - InvestorPlace
IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation – Business Wire
COLLEGE PARK, Md.--(BUSINESS WIRE)--IonQ (NYSE: IONQ), an industry leader in quantum computing, today announced promising early results with its partner, GE Research, to explore the benefits of quantum computing for modeling multi-variable distributions in risk management.
Leveraging a Quantum Circuit Born Machine-based framework on standardized, historical indexes, IonQ and GE Research, the central innovation hub for the General Electric Company (NYSE: GE), were able to effectively train quantum circuits to learn correlations among three and four indexes. The predictions derived from the quantum framework outperformed those of classical modeling approaches in some cases, confirming that quantum copulas can potentially lead to smarter data-driven analysis and decision-making across commercial applications. A blog post further explaining the research methodology and results is available here.
"Together with GE Research, IonQ is pushing the boundaries of what is currently possible to achieve with quantum computing," said Peter Chapman, CEO and President, IonQ. "While classical techniques face inefficiencies when multiple variables have to be modeled together with high precision, our joint effort has identified a new training strategy that may optimize quantum computing results even as systems scale. Tested on our industry-leading IonQ Aria system, we're excited to apply these new methodologies when tackling real-world scenarios that were once deemed too complex to solve."
While classical techniques to form copulas using mathematical approximations are a great way to build multi-variate risk models, they face limitations when scaling. IonQ and GE Research successfully trained quantum copula models with up to four variables on IonQ's trapped-ion systems by using data from four representative stock indexes with easily accessible and varying market environments.
By studying the historical dependence structure among the returns of the four indexes during this timeframe, the research group trained its model to understand the underlying dynamics. Additionally, the newly presented methodology includes optimization techniques that potentially allow models to scale by mitigating local minima and vanishing gradient problems common in quantum machine learning practices. Such improvements demonstrate a promising way to perform multi-variable analysis faster and more accurately, which GE researchers hope will lead to new and better ways to assess risk in major manufacturing processes such as product design, factory operations, and supply chain management.
"As we have seen from recent global supply chain volatility, the world needs more effective methods and tools to manage risks where conditions can be so highly variable and interconnected to one another," said David Vernooy, a Senior Executive and Digital Technologies Leader at GE Research. "The early results we achieved in the financial use case with IonQ show the high potential of quantum computing to better understand and reduce the risks associated with these types of highly variable scenarios."
Today's results follow IonQ's recent announcement of the company's new IonQ Forte quantum computing system. The system features novel, cutting-edge optics technology that enables increased accuracy and further enhances IonQ's industry-leading system performance. Partnerships with the likes of GE Research and Hyundai Motors illustrate the growing interest in IonQ's industry-leading systems and feed into the continued success seen in Q1 2022.
About IonQ
IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 20 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.
IonQ Forward-Looking Statements
This press release contains certain forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Some of the forward-looking statements can be identified by the use of forward-looking words. Statements that are not historical in nature, including the words "anticipate," "expect," "suggests," "plan," "believe," "intend," "estimates," "targets," "projects," "should," "could," "would," "may," "will," "forecast" and other similar expressions, are intended to identify forward-looking statements. These statements include those related to IonQ's ability to further develop and advance its quantum computers and achieve scale; IonQ's ability to optimize quantum computing results even as systems scale; the expected launch of IonQ Forte for access by select developers, partners, and researchers in 2022 with broader customer access expected in 2023; IonQ's market opportunity and anticipated growth; and the commercial benefits to customers of using quantum computing solutions. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: market adoption of quantum computing solutions and IonQ's products, services and solutions; the ability of IonQ to protect its intellectual property; changes in the competitive industries in which IonQ operates; changes in laws and regulations affecting IonQ's business; IonQ's ability to implement its business plans, forecasts and other expectations, and identify and realize additional partnerships and opportunities; and the risk of downturns in the market and the technology industry including, but not limited to, as a result of the COVID-19 pandemic.
The foregoing list of factors is not exhaustive. You should carefully consider the foregoing factors and the other risks and uncertainties described in the "Risk Factors" section of IonQ's Quarterly Report on Form 10-Q for the quarter ended March 31, 2022 and other documents filed by IonQ from time to time with the Securities and Exchange Commission. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and IonQ assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise. IonQ does not give any assurance that it will achieve its expectations.
Originally posted here:
IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation - Business Wire
The Spooky Quantum Phenomenon You’ve Never Heard Of – Quanta Magazine
Perhaps the most famously weird feature of quantum mechanics is nonlocality: Measure one particle in an entangled pair whose partner is miles away, and the measurement seems to rip through the intervening space to instantaneously affect its partner. This "spooky action at a distance" (as Albert Einstein called it) has been the main focus of tests of quantum theory.
"Nonlocality is spectacular. I mean, it's like magic," said Adán Cabello, a physicist at the University of Seville in Spain.
But Cabello and others are interested in investigating a lesser-known but equally magical aspect of quantum mechanics: contextuality. Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement. Instead of thinking of particles' properties as having fixed values, consider them more like words in language, whose meanings can change depending on the context: "Time flies like an arrow." "Fruit flies like bananas."
Although contextuality has lived in nonlocality's shadow for over 50 years, quantum physicists now consider it more of a hallmark feature of quantum systems than nonlocality is. A single particle, for instance, is a quantum system "in which you cannot even think about nonlocality," since the particle is only in one location, said Bárbara Amaral, a physicist at the University of São Paulo in Brazil. "So [contextuality] is more general in some sense, and I think this is important to really understand the power of quantum systems and to go deeper into why quantum theory is the way it is."
Researchers have also found tantalizing links between contextuality and problems that quantum computers can efficiently solve that ordinary computers cannot; investigating these links could help guide researchers in developing new quantum computing approaches and algorithms.
And with renewed theoretical interest comes a renewed experimental effort to prove that our world is indeed contextual. In February, Cabello, in collaboration with Kihwan Kim at Tsinghua University in Beijing, China, published a paper in which they claimed to have performed the first loophole-free experimental test of contextuality.
The Northern Irish physicist John Stewart Bell is widely credited with showing that quantum systems can be nonlocal. By comparing the outcomes of measurements of two entangled particles, he showed with his eponymous theorem of 1964 that the high degree of correlations between the particles can't possibly be explained in terms of local "hidden variables" defining each one's separate properties. The information contained in the entangled pair must be shared nonlocally between the particles.
Bell also proved a similar theorem about contextuality. He and, separately, Simon Kochen and Ernst Specker showed that it is impossible for a quantum system to have hidden variables that define the values of all its properties in all possible contexts.
In Kochen and Specker's version of the proof, they considered a single particle with a quantum property called spin, which has both a magnitude and a direction. Measuring the spin's magnitude along any direction always results in one of two outcomes: 1 or 0. The researchers then asked: Is it possible that the particle secretly knows what the result of every possible measurement will be before it is measured? In other words, could they assign a fixed value (a hidden variable) to all outcomes of all possible measurements at once?
Quantum theory says that the magnitudes of the spins along three perpendicular directions must obey the 101 rule: The outcomes of two of the measurements must be 1 and the other must be 0. Kochen and Specker used this rule to arrive at a contradiction. First, they assumed that each particle had a fixed, intrinsic value for each direction of spin. They then conducted a hypothetical spin measurement along some unique direction, assigning either 0 or 1 to the outcome. They then repeatedly rotated the direction of their hypothetical measurement and measured again, each time either freely assigning a value to the outcome or deducing what the value must be in order to satisfy the 101 rule together with directions they had previously considered.
They continued until, in the 117th direction, the contradiction cropped up. While they had previously assigned a value of 0 to the spin along this direction, the 101 rule was now dictating that the spin must be 1. The outcome of a measurement could not possibly return both 0 and 1. So the physicists concluded that there is no way a particle can have fixed hidden variables that remain the same regardless of context.
While the proof indicated that quantum theory demands contextuality, there was no way to actually demonstrate this through 117 simultaneous measurements of a single particle. Physicists have since devised more practical, experimentally implementable versions of the original Bell-Kochen-Specker theorem involving multiple entangled particles, where a particular measurement on one particle defines a context for the others.
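The flavor of these impossibility arguments can be seen in a much smaller, later construction: the Peres-Mermin square (a well-known simplified contextuality proof, used here for illustration rather than the original 117-direction argument). Quantum mechanics assigns nine observables to a 3-by-3 grid such that measurements along each row multiply to +1, the first two columns multiply to +1, and the third column multiplies to -1. A brute-force check confirms that no fixed, context-independent assignment of values can satisfy all six constraints at once:

```python
from itertools import product

# Peres-Mermin square: try every noncontextual assignment of fixed
# values +1/-1 to the 9 cells of a 3x3 grid, and count how many
# satisfy the quantum constraints: all three row products equal +1,
# the first two column products equal +1, the third equals -1.
solutions = 0
for cells in product([+1, -1], repeat=9):
    grid = [cells[0:3], cells[3:6], cells[6:9]]
    rows_ok = all(r[0] * r[1] * r[2] == +1 for r in grid)
    cols = [grid[0][j] * grid[1][j] * grid[2][j] for j in range(3)]
    cols_ok = cols[0] == +1 and cols[1] == +1 and cols[2] == -1
    if rows_ok and cols_ok:
        solutions += 1

print(solutions)  # prints 0: no context-independent assignment exists
```

The parity argument behind the result is immediate: multiplying the three row constraints forces the product of all nine values to be +1, while multiplying the three column constraints forces it to be -1, so no assignment can do both.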
In 2009, contextuality, a seemingly esoteric aspect of the underlying fabric of reality, got a direct application: One of the simplified versions of the original Bell-Kochen-Specker theorem was shown to be equivalent to a basic quantum computation.
The proof, named Mermin's star after its originator, David Mermin, considered various combinations of contextual measurements that could be made on three entangled quantum bits, or qubits. The logic of how earlier measurements shape the outcomes of later measurements has become the basis for an approach called measurement-based quantum computing. The discovery suggested that contextuality might be key to why quantum computers can solve certain problems faster than classical computers, an advantage that researchers have struggled mightily to understand.
Robert Raussendorf, a physicist at the University of British Columbia and a pioneer of measurement-based quantum computing, showed that contextuality is necessary for a quantum computer to beat a classical computer at some tasks, but he doesn't think it's the whole story. Whether contextuality powers quantum computers "is probably not exactly the right question to ask," he said. "But we need to get there question by question. So we ask a question that we understand how to ask; we get an answer. We ask the next question."
Some researchers have suggested loopholes around Bell, Kochen and Specker's conclusion that the world is contextual. They argue that context-independent hidden variables haven't been conclusively ruled out.
In February, Cabello and Kim announced that they had closed every plausible loophole by performing a loophole-free Bell-Kochen-Specker experiment.
The experiment entailed measuring the spins of two entangled trapped ions in various directions, where the choice of measurement on one ion defined the context for the other ion. The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ion's measurement.
Skeptics would ask: How can you be certain that the context created by the first measurement is what changed the second measurement outcome, rather than other conditions that might vary from experiment to experiment? Cabello and Kim closed this "sharpness" loophole by performing thousands of sets of measurements and showing that the outcomes don't change if the context doesn't. After ruling out this and other loopholes, they concluded that the only reasonable explanation for their results is contextuality.
Cabello and others think that these experiments could be used in the future to test the level of contextuality and hence, the power of quantum computing devices.
"If you want to really understand how the world is working," said Cabello, "you really need to go into the detail of quantum contextuality."
See the original post:
The Spooky Quantum Phenomenon You've Never Heard Of - Quanta Magazine
Reporting on the Future, Now – The New York Times
In labs across the world, scientists are developing the power of quantum mechanics, a field of physics that behaves unlike anything we experience in our everyday lives. In 2019, Google announced that its quantum computer had achieved "quantum supremacy," which could allow new kinds of computers to perform calculations at what were once unimaginable speeds. Other companies and research labs are exploring techniques like quantum teleportation, which sends data between locations without moving the matter that holds it, a concept that Albert Einstein had deemed impossible.
The seemingly impossible, though, is what Cade Metz's reporting hinges on. As a Times correspondent who covers emerging technologies, he has written about quantum teleportation; cars that drive themselves; immersive digital worlds; and artistry in artificial intelligence. If a technology is futuristic or disruptive, Mr. Metz most likely knows about it.
In an interview, Mr. Metz talked about advances in technology and the challenges in translating complicated subjects for everyday readers. This interview has been edited.
What is your background in technology reporting?
Before coming to The Times, I was with Wired, where I was trying to identify technologies that were coming out of research labs and changing in ways that were likely to have an impact on our daily lives.
I was an English major in college, and I always wanted to be a writer and a reporter, but my father was an engineer. I've always been interested in writing about engineers and researchers who, I feel, can get short shrift even within technology coverage, meaning that when people write stories and books about the tech industry, they write about entrepreneurs; they write about the heads of companies. They don't necessarily write about the people actually building the stuff. Like anybody else, they're fascinating people in their own right, so that's what I've always gravitated to.
How do you write about such complicated subjects for the average reader?
You have to make an effort to show people that this type of thing is still in the future. Just by writing about it, you give the impression that it's imminent, and that can be a danger. It's the same thing when I cover artificial intelligence; just using that term implies things. When we hear those words, it brings up certain images: decades of science fiction movies and science fiction novels.
When you write about quantum computing, in particular, you have to use analogy. This is behavior that we don't experience in our everyday lives, and that makes it fascinating. The quantum supremacy story got a lot of attention; it sort of captures the imagination. But it is very hard to find that sweet spot where you're properly representing the technology and doing so in a way that people can grasp it.
I think the key challenge is not to give people the wrong impression, and not to make this seem closer than it is. Particularly with this type of technology, you want to be judicious.
When did quantum teleportation become something you wanted to report on?
I've written about other types of quantum technology. The trick is figuring out the right time to write about it again. You look for certain milestones that indicate progress will continue. The quantum supremacy moment is definitely one of those. I had targeted that for years. Quantum supremacy moments show that you can do something with a quantum machine that you couldn't do with a traditional machine.
What is most useful, and I think this is true of all sorts of technologies, is talking to a lot of people about each particular aspect.
These are academics and government researchers. There are people all over the world. You talk to these researchers here, in the Netherlands. There's a lot of work in China. I spent time talking to a lot of people in all those places.
What excites you most about the beat?
When you get to write not only about the technology but also about the people building it and what it means for them. Those are the stories I'm most satisfied with. That's ultimately what I wanted to do, dating back to watching my father and his career, realizing the significance of the intersections of the technology with the people.
Read the rest here:
Reporting on the Future, Now - The New York Times