Category Archives: Quantum Computer
An IBM patent shows a hexagonal array of qubits in a quantum computer, arranged to minimize problems controlling the finicky data processing elements.
IBM secured 9,130 US patents in 2020, more than any other company as measured by an annual ranking, and this year quantum computing showed up as part of Big Blue's research effort. The company wouldn't disclose how many of the patents were related to quantum computing -- certainly fewer than the 2,300 it received for artificial intelligence work and 3,000 for cloud computing -- but it's clear the company sees them as key to the future of computing.
The IFI Claims patent monitoring service compiles the list annually, and IBM is a fixture at the top. The IBM Research division, with labs around the globe, has for decades invested in projects that are far away from commercialization. Even though the work doesn't always pay dividends, it's produced Nobel prizes and led to entire industries like hard drives, computer memory and database software.
"A lot of the work we do in R&D really is not just about the number of patents, but a way of thinking," Jerry Chow, director of quantum hardware system development, said in an exclusive interview. "New ideas come out of it."
IFI's US patent list is dominated by computer technology companies. Second place went to Samsung with 6,415 patents, followed by Canon with 3,225, Microsoft with 2,905 and Intel with 2,867. Next on the list are Taiwan Semiconductor Manufacturing Corp., LG, Apple, Huawei and Qualcomm. The first non-computing company is Toyota, in 14th place.
Internationally, IBM ranked second to Samsung in patents for 2020, and industrial companies Bosch and General Electric cracked the top 10. Many patents are duplicative internationally since it's possible to file for a single patent in 153 countries.
Quantum computing holds the potential to tackle computing problems out of reach of conventional computers. During a time when it's getting harder to improve ordinary microprocessors, quantum computers could pioneer new high-tech materials for solar panels and batteries, improve chemical processes, speed up package delivery, make factories more efficient and lower financial risks for investors.
Industrywide, quantum computing is a top research priority, with dozens of companies investing millions of dollars even though most don't expect a payoff for years. The US government is bolstering that effort with a massive multilab research effort. It's even become a headline event at this year's CES, a conference that more typically focuses on new TVs, laptops and other consumer products.
"Tactical and strategic funding is critical" to quantum computing's success, said Hyperion Research analyst Bob Sorensen. That's because, unlike more mature technologies, there's not yet any virtuous cycle where profits from today's quantum computing products and services fund the development of tomorrow's more capable successors.
IBM has taken a strong early position in quantum computing, but it's too early to pick winners in the market, Sorensen added.
The long-term goal is what's called a fault-tolerant quantum computer, one that uses error correction to keep calculations humming even when individual qubits, the data processing elements at the heart of quantum computers, are perturbed. In the nearer term, some customers, like financial services giant JPMorgan Chase, carmaker Daimler and aerospace company Airbus, are investing in quantum computing work today with the hope that it'll pay off later.
Quantum computing is complicated to say the least, but a few patents illustrate what's going on in IBM's labs.
Patent No. 10,622,536 governs different lattices in which IBM lays out its qubits. Today's 27-qubit "Falcon" quantum computers use this approach, as do the newer 65-qubit "Hummingbird" machines and the much more powerful 1,121-qubit "Condor" systems due in 2023.
A close-up view of an IBM quantum computer. The processor is in the silver-colored cylinder.
IBM's lattices are designed to minimize "crosstalk," in which a control signal for one qubit ends up influencing others, too. That's key to IBM's ability to manufacture working quantum processors and will become more important as qubit counts increase, letting quantum computers tackle harder problems and incorporate error correction, Chow said.
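The crosstalk advantage of a hexagonal layout can be sketched as a graph problem. In IBM's published "heavy-hex" layouts, each qubit couples to at most three neighbors, versus four in a plain square grid, which reduces the opportunities for one qubit's control signal to disturb another. The toy coupling maps below are illustrative only; the qubit numbering and the two-cell tile are made up for the sketch, not taken from an actual Falcon device map:

```python
from collections import defaultdict

def add_edge(adj, a, b):
    adj[a].add(b)
    adj[b].add(a)

def square_grid(n):
    # Coupling map of an n x n square lattice: interior qubits
    # are coupled to 4 neighbors.
    adj = defaultdict(set)
    for r in range(n):
        for c in range(n):
            q = r * n + c
            if c + 1 < n:
                add_edge(adj, q, q + 1)
            if r + 1 < n:
                add_edge(adj, q, q + n)
    return adj

def heavy_hex_pair():
    # Two hexagonal cells sharing one edge, with an extra qubit on
    # every edge (the "heavy" decoration). Vertex qubits 0-5 and
    # 12-15; edge qubits 6-11 and 16-20. No qubit exceeds degree 3.
    adj = defaultdict(set)
    ring1 = [0, 1, 2, 3, 4, 5]
    for i in range(6):  # cell 1: edge qubit 6+i sits on edge (i, i+1)
        add_edge(adj, ring1[i], 6 + i)
        add_edge(adj, 6 + i, ring1[(i + 1) % 6])
    ring2 = [1, 0, 12, 13, 14, 15]  # cell 2 reuses edge (0, 1) and qubit 6
    for i in range(1, 6):  # five new edges carry edge qubits 16-20
        add_edge(adj, ring2[i], 15 + i)
        add_edge(adj, 15 + i, ring2[(i + 1) % 6])
    return adj
```

Computing the maximum degree of each map (4 for the square grid, 3 for the heavy-hex tile) shows in miniature why the hexagonal arrangement leaves each qubit with fewer neighbors to crosstalk with.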
Patent No. 10,810,665 governs a higher-level quantum computing application for assessing risk -- a key part of financial services companies figuring out how to invest money. The more complex the options being judged, the slower the computation, but the IBM approach still outpaces classical computers.
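For a sense of the classical baseline being outpaced: risk figures such as value-at-risk are conventionally estimated by Monte Carlo sampling, whose error shrinks only as 1/sqrt(n) in the number of samples, whereas quantum amplitude estimation, the technique IBM researchers have published for risk analysis, targets a 1/n scaling. A hedged classical sketch, using a toy normal model of portfolio returns (the model and parameters are illustrative, not from the patent):

```python
import random

def monte_carlo_var(mu: float, sigma: float, alpha: float = 0.95,
                    n: int = 50_000, seed: int = 1) -> float:
    """Classical Monte Carlo estimate of value-at-risk: the loss level
    exceeded with probability 1 - alpha under a normal return model."""
    rng = random.Random(seed)
    losses = sorted(-rng.gauss(mu, sigma) for _ in range(n))
    return losses[int(alpha * n)]

# For standard-normal returns, the 95% VaR should land near the
# z-score 1.645; the Monte Carlo estimate wobbles around it.
estimate = monte_carlo_var(0.0, 1.0)
```

The key point is the sample count: halving the Monte Carlo error requires four times as many samples, while amplitude estimation needs only twice as many quantum queries.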
Patent No. 10,599,989 describes a way of speeding up some molecular simulations, a key potential promise of quantum computers, by finding symmetries in molecules that can reduce computational complexity.
Most customers will tap into the new technology through quantum computing as a service. Because quantum computers typically must be supercooled to within a hair's breadth of absolute zero to avoid perturbing the qubits, and require spools of complicated wiring, most customers are likely to rely on online services from companies like IBM, Google, Amazon and Microsoft that offer access to their own carefully managed machines.
Quantum computing research helps IBM win top spot in patent race - CNET
Error Protected Quantum Bits Entangled: A Milestone in the Development of Fault-Tolerant Quantum Computers – SciTechDaily
Quantum particles lined up in a lattice form the basis for an error-tolerant quantum processor. Credit: Uni Innsbruck/Harald Ritsch
Even computers can miscalculate. Small disturbances can change stored information and corrupt results, which is why computers use methods to continuously correct such errors.
In quantum computers, the vulnerability to errors can be reduced by storing quantum information in more than a single quantum particle. These logical quantum bits are less sensitive to errors.
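A classical analogy makes the redundancy idea concrete: store one bit in several noisy copies and decode by majority vote, and the logical error rate drops below the physical one. (Quantum codes must do more than this, since quantum states cannot simply be copied, but the intuition that spreading information over more carriers suppresses errors carries over.) A sketch:

```python
import random

def noisy_copy(bit: int, p_flip: float, rng: random.Random) -> int:
    """Return the bit, flipped with probability p_flip."""
    return bit ^ (rng.random() < p_flip)

def logical_error_rate(n_copies: int, p_flip: float,
                       trials: int = 20_000, seed: int = 0) -> float:
    """Store one logical 0 in n_copies noisy physical copies and decode
    by majority vote; return the fraction of decoding failures."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        votes = sum(noisy_copy(0, p_flip, rng) for _ in range(n_copies))
        failures += votes > n_copies // 2
    return failures / trials

# With a 10% physical error rate, three copies already cut the
# logical error rate to roughly 3*p^2, i.e. about 3%.
```

The suppression strengthens as more copies are added, which is why scaling up the number of physical qubits per logical qubit is central to fault tolerance.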
In recent years, theorists have developed many different error correction codes and optimized them for different tasks. "The most promising codes in quantum error correction are those defined on a two-dimensional lattice," explains Thomas Monz from the Department of Experimental Physics at the University of Innsbruck, "because the physical structure of current quantum computers can be very well mapped through such lattices."
With the help of the codes, logical quantum bits can be distributed over several quantum objects. The quantum physicists from Innsbruck have now succeeded for the first time in entangling two quantum bits encoded in this way. Entanglement between two quantum bits is an important resource for quantum computers, giving them a performance advantage over classical computers.
For their experiment, the physicists use an ion-trap quantum computer with ten ions, into which the logical quantum bits are encoded. Using a technique that scientists refer to as "lattice surgery," two logical qubits encoded on a lattice can be stitched together. "A new, larger qubit is created from the qubits stitched together in this way," explains Alexander Erhard from the Innsbruck team. In turn, a large logical qubit can be separated into two individual logical qubits by lattice surgery.
In contrast to standard operations between two logical qubits, lattice surgery only requires operations along the boundary of the encoded qubits, not on their entire surface. "This reduces the number of operations required to create entanglement between two encoded qubits," explain the theoretical physicists Nicolai Friis and Hendrik Poulsen Nautrup.
Lattice surgery is considered one of the key techniques for the operation of future fault-tolerant quantum computers. Using lattice surgery, the physicists led by Thomas Monz and Rainer Blatt, together with the theoretical physicists Hendrik Poulsen Nautrup and Hans Briegel from the Department of Theoretical Physics at the University of Innsbruck and Nicolai Friis from the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences in Vienna, have now demonstrated the generation of entanglement between two encoded qubits.
This is the first experimental realization of non-classical correlations between topologically encoded qubits. Furthermore, the researchers were able to demonstrate for the first time the teleportation of quantum states between two encoded qubits.
Reference: 13 January 2021, Nature. DOI: 10.1038/s41586-020-03079-6
The research was financially supported by the Austrian Science Fund FWF and the Research Promotion Agency FFG as well as the EU.
You can find a $180K solar-powered car, qubit controls, and breathing tips at the NL Tech Pavilion at CES 2021 – TechRepublic
90 entrepreneurs and researchers from the Netherlands want to solve all the world's problems with collaboration and innovation.
The Lightyear One charges its own batteries via five square meters of solar panels built into the car itself.
Image: Lightyear One
The 90 Dutch companies in the NL Tech Pavilion at CES 2021 represent every possible use of technology as a problem-solving tool, from air quality and cars to sustainability and violence.
This collection of companies is one of the largest private sector delegations at this year's virtual CES.
The companies represent 13 sectors including advanced materials; artificial intelligence, big data and quantum computing; energy, power and climate change; enterprise solutions; robotics and future work; digital health and wellness; cybersecurity and resilience; smart cities and mobility; sustainability and circularity; and 5G, IoT sensors, photonics and nanotech.
This year, Dutch organizers wanted to highlight how companies must work together to create economic, environmental, and social change by using partnerships between government, private and public companies, and research and knowledge institutions.
SEE: CES 2021: The big trends for business (ZDNet/TechRepublic special feature)
The quantum computing contingent at CES is one example of that collaboration. In addition to three companies, the Pavilion is hosting Quantum Delta NL, an organization that supports networking among researchers and educational efforts around quantum computing.
"As global power players race toward building the first quantum computer, we continue to encourage productive collaboration between Dutch research institutes like QuTech, our national ecosystem for excellence Quantum Delta and groundbreaking startups like Qblox who all play an equally essential role in establishing the Netherlands as a leader in the evolution of quantum innovation," said Mona Keijzer, State Secretary for Economic Affairs and Climate Policy.
Six companies in the delegation are working in artificial intelligence, data centers, energy use, digital health, blockchain, and solar-powered cars. Here's a look at how they are using technology to solve old and new problems.
Incooling and Lightyear are taking on energy use in two different sectors. Incooling's SVC is a compressor-based system that cools high-performance servers by focusing on the CPU. The closed-loop system can be inserted directly into servers, according to the company, and can respond quickly to changing temperatures. It uses two-phase cooling: the coolant is heated to its boiling point and changes phase from a liquid to a gas, which allows the cooling system to absorb more heat.
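The physics behind two-phase cooling is that vaporizing a coolant absorbs its latent heat of vaporization, which dwarfs the sensible heat absorbed by merely warming the liquid. A back-of-envelope sketch using water's textbook constants (purely illustrative; Incooling's actual coolant and operating figures aren't disclosed here):

```python
def sensible_heat_j(mass_kg: float, c_j_per_kg_k: float, delta_t_k: float) -> float:
    """Heat absorbed by warming a liquid without a phase change."""
    return mass_kg * c_j_per_kg_k * delta_t_k

def latent_heat_j(mass_kg: float, l_vap_j_per_kg: float) -> float:
    """Heat absorbed by boiling the same mass at constant temperature."""
    return mass_kg * l_vap_j_per_kg

# Water's textbook constants, for illustration only:
C_WATER = 4186        # J/(kg*K), specific heat capacity
L_VAP_WATER = 2.26e6  # J/kg, latent heat of vaporization

warming = sensible_heat_j(1.0, C_WATER, 10.0)  # warm 1 kg of water by 10 K
boiling = latent_heat_j(1.0, L_VAP_WATER)      # boil the same 1 kg
# Boiling absorbs roughly 50x more heat than the 10 K warm-up,
# which is why a phase-change loop can carry away so much more heat.
```

That ratio is why a compact closed loop that exploits boiling can keep up with a hot CPU where a liquid-only loop would need far more flow.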
The Lightyear One charges its own batteries via five square meters of solar panels built into the car itself. The solar cells on the hood and the roof are encased in safety glass. The car also has four independent in-wheel motors that provide power when and where it is needed. This long-range solar electric car is two to three times more energy efficient than the current crop of electric vehicles, according to the company. Lightyear One uses 83 watt-hours per kilometer, giving a range of 725 kilometers, or about 450 miles. The Lightyear One goes on sale in late 2021 in Europe for 150,000 euros, or about $182,395.
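Taken together, the consumption and range figures imply the usable battery capacity, roughly 60 kWh:

```python
# Figures quoted for the Lightyear One.
wh_per_km = 83   # watt-hours consumed per kilometer
range_km = 725   # claimed range in kilometers

# Implied usable battery capacity in kilowatt-hours.
battery_kwh = wh_per_km * range_km / 1000  # ~60.2 kWh
```

For comparison, that is a notably small pack for a 725 km range; mainstream EVs with similar range carry substantially larger batteries, which is the efficiency claim in concrete terms.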
Oddity is working on a commercial violence recognition algorithm with advanced deep learning techniques. The algorithm monitors video feeds in real time to watch for potential violence and alert security officers. The company claims the system has a detection speed of less than half a second. The company also states that the algorithms analyze subjects in full anonymity and deploy on premises to protect privacy.
Verisign reports that there are about 330 million registered domain names, but a significant portion of those are not active. Dan.com is using blockchain to make it easier for businesses and individuals to find, buy, and transfer these unused domains. Dan.com used IBM's blockchain technology to automate domain name processes, such as transferring a name to a new owner, and to power new services such as domain name rental and lease-to-own.
Music has the power to influence emotions, and AlphaBeats is using that power to help individuals relax. The company's app measures stress via breathing, heart rate variability, and brainwaves. The sound quality from the earbuds changes based on the level of stress the biofeedback algorithms detect: as a person relaxes, the quality of the music improves. AlphaBeats is licensing a neurofeedback algorithm from Philips to power the app and claims that 10-minute training sessions will help users train themselves to relax on command. The company is signing up beta testers for the iOS and Android apps.
Breath in Balanz also wants to train users to be healthier, with a focus on breathing. The coaching system uses an app and a belt to improve breathing patterns and prevent hyperventilation. Breathing too shallowly or too fast can affect a person's overall health, including sleep and heart conditions. Breath in Balanz offers an 80-day training program divided into seven segments; the idea is to train the variety of muscles used to breathe via the app and a sensor.
The Netherlands Pavilion includes a quantum computing cohort this year, with three companies and one industry organization attending. Orange Quantum Systems helps R&D labs with quantum research. Qblox is advancing quantum technology with scalable, low-latency qubit control equipment. Quantum Inspire is a multi-hardware quantum technology platform.
Quantum Delta NL supports the broader quantum ecosystem by encouraging collaboration among the country's five major quantum research hubs, strengthening large-scale facilities across the country for nanotech research, and accelerating education efforts to support a quantum economy. The organization's catalyst programs include building the first European quantum computing platform, establishing a national quantum network, and supporting companies that could build quantum sensing applications. Intel's quantum researchers work with the Dutch company QuTech to test quantum chips that the hardware company is developing.
The National Security Agency (NSA) issued its first Cybersecurity Year In Review report, highlighting key achievements from 2020 including encryption work for the Pentagon and looking ahead to threats for 2021.
The NSA launched its Cybersecurity Directorate in October of 2019 to prevent and eradicate cyber threats to the United States.
"As we began our first year, we took a deliberate approach to building trust by sharing unclassified threat and cybersecurity advice. We forged deeper relationships with our U.S. government and industry partners to deliver better outcomes than any of us could achieve alone," Anne Neuberger wrote in the report. Neuberger has served as the agency's Director of Cybersecurity since 2019 and was recently chosen by President-elect Joe Biden to serve in a newly created cybersecurity role on the President's National Security Council (NSC).
The report highlights key achievements from the past year, including a push to modernize encryption across the Department of Defense (DoD). The effort both eliminated cryptography at risk of attack due to adversarial computational advances and made DoD encryption resistant to exploitation by a quantum computer.
Additionally, the NSA highlighted achievements such as supporting the 2020 election security defense effort and COVID-19 vaccine development, as well as the creation of the NSA Cybersecurity Twitter handle, @NSACyber.
Looking ahead, NSA warned that China, Russia, and Iran are the three biggest foreign threats for 2021. Reportedly, China is using widespread intellectual property theft, Russia is using cyber operations as a corrosive and destabilizing force across multiple geographic regions, and Iran has demonstrated the ability and willingness to launch disruptive cyber operations on critical infrastructure.
NSA acknowledged 2020 was a tough year, and going forward the agency has committed to being more open and transparent about the work it does.
‘Magic’ angle graphene and the creation of unexpected topological quantum states – Princeton University
Electrons inhabit a strange and topsy-turvy world. These infinitesimally small particles have never ceased to amaze and mystify despite the more than a century that scientists have studied them. Now, in an even more amazing twist, physicists have discovered that, under certain conditions, interacting electrons can create what are called topological quantum states. This finding, which was recently published in the journal Nature, holds great potential for revolutionizing electrical engineering, materials science and especially computer science.
Topological states of matter are particularly intriguing classes of quantum phenomena. Their study combines quantum physics with topology, the branch of theoretical mathematics that studies geometric properties that can be deformed but not intrinsically changed. Topological quantum states first came to the public's attention in 2016, when three scientists, Princeton's Duncan Haldane (the Thomas D. Jones Professor of Mathematical Physics and Sherman Fairchild University Professor of Physics) together with David Thouless and Michael Kosterlitz, were awarded the Nobel Prize for their work uncovering the role of topology in electronic materials.
A Princeton-led team of physicists has discovered that, under certain conditions, interacting electrons can create what are called topological quantum states, which has implications for many technological fields of study, especially information technology. To get the desired quantum effect, the researchers placed two sheets of graphene on top of each other with the top layer twisted at the "magic" angle of 1.1 degrees, which creates a moiré pattern. This diagram shows a scanning tunneling microscope imaging the magic-angle twisted bilayer graphene.
Image courtesy of Kevin Nuckolls
"The last decade has seen quite a lot of excitement about new topological quantum states of electrons," said Ali Yazdani, the Class of 1909 Professor of Physics at Princeton and the senior author of the study. "Most of what we have uncovered in the last decade has been focused on how electrons get these topological properties, without thinking about them interacting with one another."
But by using a material known as magic-angle twisted bilayer graphene, Yazdani and his team were able to explore how interacting electrons can give rise to surprising phases of matter.
The remarkable properties of magic-angle twisted bilayer graphene were discovered two years ago when Pablo Jarillo-Herrero and his team at the Massachusetts Institute of Technology (MIT) used it to induce superconductivity, a state in which electrons flow freely without any resistance. The discovery was immediately recognized as a new material platform for exploring unusual quantum phenomena.
Yazdani and his fellow researchers were intrigued by this discovery and set out to further explore the intricacies of superconductivity.
But what they discovered led them down a different and untrodden path.
"This was a wonderful detour that came out of nowhere," said Kevin Nuckolls, the lead author of the paper and a graduate student in physics. "It was totally unexpected, and something we noticed was going to be important."
Following the example of Jarillo-Herrero and his team, Yazdani, Nuckolls and the other researchers focused their investigation on twisted bilayer graphene.
"It's really a miracle material," Nuckolls said. "It's a two-dimensional lattice of carbon atoms that's a great electrical conductor and one of the strongest crystals known."
Graphene is produced in a deceptively simple but painstaking manner: a bulk crystal of graphite, the same pure graphite in pencils, is exfoliated using sticky tape to remove the top layers until finally reaching a single-atom-thin layer of carbon, with atoms arranged in a flat honeycomb lattice pattern.
To get the desired quantum effect, the Princeton researchers, following the work of Jarillo-Herrero, placed two sheets of graphene on top of each other with the top layer angled slightly. This twisting creates a moiré pattern, which resembles and is named after a common French textile design. The important point, however, is the angle at which the top layer of graphene is positioned: precisely 1.1 degrees, the "magic" angle that produces the quantum effect.
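The geometry behind that angle is easy to compute. For two identical lattices twisted by a small angle theta, the moiré superlattice period is the standard a / (2 sin(theta / 2)), where a is the lattice constant. At graphene's a of about 0.246 nm and a 1.1-degree twist, the moiré cell spans roughly 13 nm, tens of times larger than the atomic lattice:

```python
import math

def moire_period_nm(lattice_const_nm: float, twist_deg: float) -> float:
    """Moire superlattice period for two identical lattices twisted
    by a small angle: lambda = a / (2 * sin(theta / 2))."""
    theta = math.radians(twist_deg)
    return lattice_const_nm / (2.0 * math.sin(theta / 2.0))

# Graphene's lattice constant is ~0.246 nm. At the 1.1-degree magic
# angle the moire cell is ~12.8 nm; at 1.2 degrees it shrinks to ~11.7 nm.
magic = moire_period_nm(0.246, 1.1)
```

The formula also shows why the effect is so angle-sensitive: the period diverges as the twist goes to zero, so small changes in angle shift the superlattice scale, and with it the electronic structure, substantially.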
"It's such a weird glitch in nature," Nuckolls said, "that it is exactly this one angle that needs to be achieved." Angling the top layer of graphene at 1.2 degrees, for example, produces no effect.
Working at extremely low temperatures and under a slight magnetic field, the researchers used a machine called a scanning tunneling microscope, which relies on a technique called quantum tunneling rather than light to view the atomic and subatomic world. They directed the microscope's conductive metal tip over the surface of the magic-angle twisted graphene and were able to detect the energy levels of the electrons.
They found that the magic-angle graphene changed how electrons moved on the graphene sheet. "It creates a condition which forces the electrons to be at the same energy," said Yazdani. "We call this a flat band."
When electrons have the same energy, as they do in a flat-band material, they interact with each other very strongly. "This interplay can make electrons do many exotic things," Yazdani said.
One of these exotic things, the researchers discovered, was the creation of unexpected and spontaneous topological states.
"This twisting of the graphene creates the right conditions to create a very strong interaction between electrons," Yazdani explained. "And this interaction unexpectedly favors electrons to organize themselves into a series of topological quantum states."
The researchers discovered that the interaction between electrons creates topological insulators: unique devices whose interiors do not conduct electricity but whose edges allow the continuous and unimpeded movement of electrons. This diagram depicts the different insulating states of the magic-angle graphene, each characterized by an integer called its Chern number, which distinguishes between different topological phases.
Image courtesy of Kevin Nuckolls
Specifically, they discovered that the interaction between electrons creates what are called topological insulators. These are unique devices that act as insulators in their interiors, meaning the electrons inside are not free to move around and therefore do not conduct electricity. However, the electrons on the edges are free to move around, meaning they are conductive. Moreover, because of the special properties of topology, the electrons flowing along the edges are not hampered by any defects or deformations. They flow continuously and effectively circumvent constraints, such as minute imperfections in a material's surface, that typically impede the movement of electrons.
During the course of the work, Yazdani's experimental group teamed up with two other Princetonians, Andrei Bernevig, professor of physics, and Biao Lian, assistant professor of physics, to understand the underlying physical mechanism of their findings.
"Our theory shows that two important ingredients, interactions and topology, which in nature mostly appear decoupled from each other, combine in this system," Bernevig said. "This coupling creates the topological insulator states that were observed experimentally."
Although the field of quantum topology is relatively new, it could transform computer science. "People talk a lot about its relevance to quantum computing, where you can use these topological quantum states to make better types of quantum bits," Yazdani said. "The motivation for what we're trying to do is to understand how quantum information can be encoded inside a topological phase. Research in this area is producing exciting new science and can have potential impact in advancing quantum information technologies."
Yazdani and his team will continue their research into understanding how the interactions of electrons give rise to different topological states.
"The interplay between the topology and superconductivity in this material system is quite fascinating and is something we will try to understand next," Yazdani said.
In addition to Yazdani, Nuckolls, Bernevig and Lian, contributors to the study included co-first authors Myungchul Oh and Dillon Wong, postdoctoral research associates, as well as Kenji Watanabe and Takashi Taniguchi of the National Institute for Material Science in Japan.
"Strongly Correlated Chern Insulators in Magic-Angle Twisted Bilayer Graphene," by Kevin P. Nuckolls, Myungchul Oh, Dillon Wong, Biao Lian, Kenji Watanabe, Takashi Taniguchi, B. Andrei Bernevig and Ali Yazdani, was published Dec. 14 in the journal Nature (DOI: 10.1038/s41586-020-3028-8). This work was primarily supported by the Gordon and Betty Moore Foundation's EPiQS initiative (GBMF4530, GBMF9469) and the Department of Energy (DE-FG02-07ER46419 and DE-SC0016239). Other support for the experimental work was provided by the National Science Foundation (Materials Research Science and Engineering Centers through the Princeton Center for Complex Materials (NSF-DMR-1420541, NSF-DMR-1904442) and EAGER DMR-1643312), ExxonMobil through the Andlinger Center for Energy and the Environment at Princeton, the Princeton Catalysis Initiative, the Elemental Strategy Initiative conducted by Japan's Ministry of Education, Culture, Sports, Science and Technology (JPMXP0112101001, JSPS KAKENHI grant JP20H0035, and CREST JPMJCR15F3), the Princeton Center for Theoretical Science at Princeton University, the Simons Foundation, the Packard Foundation, the Schmidt Fund for Innovative Research, the BSF Israel US foundation (2018226), the Office of Naval Research (N00014-20-1-2303) and the Princeton Global Network Funds.
Tried-and-true legacy systems can be like old friends. Sure, they may walk a little slower and need to rest more often, but they still have your back when you need them. So saying goodbye to them is always a tough choice.
Nostalgia aside, it may also simply not make financial sense to rip and replace your legacy system. Doing so can be costly, not only in the expense of new systems but in the risk involved when the transition causes downtime or unhappy customers. Companies therefore need to carefully weigh the pros and cons when deciding whether to invest in legacy system modernization or send the old system to the scrap heap.
Aside from the cost of modern systems and the problems that can arise during the transition, perhaps the biggest risk in ripping and replacing an older system is the loss of data. Ensuring a good data migration is hard, and many projects fail to do it well, causing data, which can be the lifeblood of your business, to go missing or become corrupted.
Ripping and replacing legacy systems affects the entire company: business operations, employees and customers. Training staff on new systems is a huge investment in time and, as we all know, people don't like change. Staff will need time to get up to speed on new systems, and an unfortunate potential result is the impact on customers when errors occur as employees get used to the new system.
Related Article: Modernizing Legacy Tech: Big Bang or Piecemeal?
Yet regardless of any comfort old servers provide, their worth may be decreasing over time. At some point, companies in every industry need to change, and often those systems simply won't be able to keep up. You can't wait until your old system is gasping for breath to think about giving it some respite; initiate the change before it reaches that point.
Another consideration is that many of the skilled experts in legacy systems are retiring and retiring their expertise with them, so the aging systems are becoming expensive to support and maintain (while requiring more support and maintenance than ever before).
Fortunately, digital technologies such as cloud, mobile and AI are opening up new opportunities for businesses today, helping them reduce operating costs, perform more efficiently and boost the customer experience. Yet legacy systems and applications built on languages such as COBOL often can't take full advantage of today's technologies, such as cloud and mobile apps.
Related Article: The Elephant in the Digital Transformation Room: The Long Tail of Legacy Tech
The problem is that while shifting to modern applications could be beneficial, many banks or telecommunication firms have 40-year-old mainframes in their data centers, serving as host systems for millions of customers, and they aren't going away any time soon. These firms can still transform their business and implement emerging technologies by taking an incremental hybrid approach.
A strategic, incremental hybrid approach can help organizations leverage what they have while transitioning to a future marked by faster transactions, better customer service and data-driven insights that can fuel better customer experiences. As an example, it's quite possible to connect the trusty old mainframe to a data lake that resides in the cloud, which feeds new applications that affect processes, culture and business value, such as a chatbot that quickly answers customer queries.
Banks and insurance companies are masters of this hybrid world, where mainframes and modernization blend seamlessly together. For example, one of banking's key focuses is improving the customer experience by enabling faster, more seamless transactions, such as transferring money from one account to another or having cash immediately available when depositing a check. The traditional method of handling those transactions has been batch processing, often run once a day. It's possible to speed that up and process batches several times a day, but today's customers are demanding real-time, immediate transactions, a capability that requires decoupling parts of the system and moving them to the cloud.
For companies looking to modernize their legacy systems, one approach is to build a communications layer, or service layer, on top of the mainframe and integrate everyone through that service layer. That makes it easier to change the mainframe without changing everything around it.
As another example, banks, insurance or telecommunication firms may be looking to offer mobile apps for their customers' convenience. Legacy systems can still be used by creating an API layer to the system's core processing unit, which can integrate modern mobile apps while still using the compute power of the legacy system.
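The API-layer pattern can be sketched in miniature. Every name below is hypothetical (a real integration would go through the mainframe vendor's connector, such as a transaction gateway or message queue); the point is only that modern clients see a simple interface while the legacy system keeps doing the core processing:

```python
class MainframeGateway:
    """Thin service layer: modern apps call simple methods, and the
    gateway translates them into legacy transaction calls."""

    def __init__(self, transport):
        # transport is anything exposing call(tx_code, payload);
        # in production it would wrap the actual mainframe link.
        self.transport = transport

    def get_balance(self, account_id: str) -> float:
        # Translate a clean API call into a legacy transaction code
        # and payload format, then parse the legacy response.
        raw = self.transport.call("ACCTINQ", {"acct": account_id})
        return float(raw["balance"])

class FakeTransport:
    """Stand-in for the real mainframe link so the sketch runs."""

    def call(self, tx_code, payload):
        assert tx_code == "ACCTINQ"
        return {"balance": "1250.00"}

gateway = MainframeGateway(FakeTransport())
balance = gateway.get_balance("12345")  # a mobile app sees only this call
```

Because the mobile app depends only on the gateway's interface, the mainframe behind it can be tuned, partially migrated, or eventually replaced without the app noticing.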
In many instances, legacy and modern can coexist peacefully for now. Yet any company debating the best next step should ask itself the four questions below:
Until quantum computing takes hold in a broad way, the mainframe will remain the go-to solution for processing massive amounts of data in key markets. Companies can spruce mainframes up by replacing legacy software and systems with modern applications and gradually transitioning to the cloud. As for the mainframe, what's old can be new again; it just takes a modern approach to derive the greatest value.
Carlos M. Meléndez is the COO and co-founder of Wovenware, a Puerto Rico-based, design-driven company that delivers customized AI and other digital transformation solutions that create measurable value for customers across the US.
Bringing Your Mainframe Into the Cloud Age - CMSWire
ASC20-21 Student Supercomputer Challenge Kickoff: Quantum Computing Simulations, AI Language Exam and Pulsar Searching with FAST – Business Wire
BEIJING--(BUSINESS WIRE)--The preliminary round of the 2020-2021 ASC Student Supercomputer Challenge (ASC20-21) officially kicked off on November 16, 2020. More than 300 university teams from five continents registered to participate. Over the next two months, they will be challenged with several cutting-edge applications of supercomputing and AI. The 20 teams that make it out of the preliminaries will advance to the finals, held May 8 to 12, 2021 at the Southern University of Science and Technology in Shenzhen, China, where they will compete for awards including Champion, Silver Prize, Highest LINPACK, and ePrize.
Among the registered participants for ASC20-21 are three prior champion teams: the SC19/SC20 champion team from Tsinghua University, the ISC20 champion team from the University of Science and Technology of China, and the ASC19 champion from National Tsing Hua University. Other powerful competitors include teams from the University of Washington (USA), the University of Warsaw (Poland), Ural Federal University (Russia), Monash University (Australia), EAFIT University (Colombia) and many more.
For this preliminary round of the merged ASC20 and ASC21, the organizing committee has retained the quantum computing simulation and language exam tasks from ASC20 and added a fascinating new cutting-edge task in astronomy: searching for pulsars.
Pulsars are fast-spinning neutron stars, the remnants of collapsed massive stars, featuring extremely high density and strong magnetic fields. By observing and studying the extreme physics of pulsars, scientists can probe the mysterious space around black holes and detect the gravitational waves triggered by mergers of supermassive black holes in distant galaxies. Because of the unique nature of pulsars, the Nobel Prize in Physics has been awarded twice for pulsar-related discoveries. Using radio telescopes over recent decades, astronomers have discovered nearly 3,000 pulsars, with 700 discovered by PRESTO, the open-source pulsar search and analysis software. In ASC20-21, participants are asked to use PRESTO from its official website, along with observational data from the Five-hundred-meter Aperture Spherical radio Telescope (FAST), the world's largest single-dish radio telescope, located in Guizhou, China and operated by the National Astronomical Observatories, Chinese Academy of Sciences. Participating teams should achieve the application's maximum parallel acceleration while searching for a pulsar in the FAST observational data loaded onto the computer cluster they build. In practice, the teams will need to understand the pulsar search process, complete the search task, analyze the code, and optimize the PRESTO application's execution by minimizing computing time and resources.
The quantum computing simulation task will require each participating team to use QuEST (Quantum Exact Simulation Toolkit) running on a computer cluster to simulate 30 qubits in two cases: quantum random circuits (random.c) and quantum fast Fourier transform circuits (GHZ_QFT.c). Quantum simulation provides a reliable platform for studying quantum algorithms, which is particularly important because practical quantum computers are not yet available in industry.
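QuEST itself is a C library, but the essence of statevector simulation, and why 30 qubits is demanding (2^30 complex amplitudes is roughly 16 GB at double precision), can be sketched in a few lines of NumPy. This toy example, not competition code, builds a small GHZ state of the kind the GHZ_QFT.c filename suggests, by storing the full statevector and applying gates to it:

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 single-qubit gate to `target` in an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    return np.moveaxis(state, 0, target).reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT by swapping the target amplitudes on the control=1 slice."""
    state = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[control] = 1
    sl = tuple(idx)
    axis = target - (1 if target > control else 0)  # target's axis in the slice
    state[sl] = np.flip(state[sl], axis=axis).copy()
    return state.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # start in |000>
state = apply_gate(state, H, 0, n)  # Hadamard on qubit 0
state = apply_cnot(state, 0, 1, n)
state = apply_cnot(state, 1, 2, n)
# Probabilities concentrate on |000> and |111>: the GHZ state.
print(np.round(np.abs(state) ** 2, 3))
```

Every gate touches the full 2^n-element array, which is why memory and communication, not arithmetic, dominate cluster-scale statevector simulation.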
The Language Exam task will require all participating teams to train AI models on an English Cloze Test dataset, striving to achieve the highest "test scores". The dataset covers multiple levels of English language tests used in China.
This year's ASC training camp will be held on November 30 to help participating teams from around the world prepare for the competition. HPC and AI experts from the Chinese Academy of Sciences, Peng Cheng Laboratory, and the State Key Laboratory of High-end Server & Storage Technology will explain the competition rules in detail, cover computer cluster building and optimization, and provide guidance.
The ASC Student Supercomputer Challenge is the world's largest student supercomputer competition, sponsored and organized by the Asia Supercomputer Community in China and supported by Asian, European, and American experts and institutions. The main objectives of ASC are to encourage the exchange and training of young supercomputing talent from different countries, improve supercomputing applications and R&D capacity, boost the development of supercomputing, and promote technical and industrial innovation. The first ASC Student Supercomputer Challenge was held in 2012 and has since attracted nearly 10,000 undergraduates from all over the world. Learn more about ASC at https://www.asc-events.org/.
Virtual ICM Seminar with Hiroaki Kitano, ‘Nobel Turing Challenge-Creating the Engine of Scientific Discovery’ to Be Held Nov 26 – HPCwire
Nov. 25, 2020 The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at the University of Warsaw invites enthusiasts of HPC and all people interested in challenging topics in Computer and Computational Science to the ICM Seminar in Computer and Computational Science that will be held on Thursday, November 26 (4 pm CET). The event is free.
A new grand challenge for AI: to develop an AI system that can make major scientific discoveries in biomedical sciences and that is worthy of a Nobel Prize. A series of human cognitive limitations prevent us from making accelerated scientific discoveries, particularly in biomedical sciences. As a result, scientific discovery remains at the level of a cottage industry. AI systems can transform scientific discovery into a highly efficient practice, thereby enabling us to expand knowledge in unprecedented ways. Such systems may outcompute all possible hypotheses and may redefine the nature of scientific intuition, and hence the scientific discovery process.
Hiroaki Kitano, PhD is the president of the Systems Biology Institute (SBI, http://www.sbi.jp/); President and CEO of Sony Computer Science Laboratories, Inc.; and a Principal Investigator at the Open Biology Unit, Okinawa Institute of Science and Technology (OIST).
Dr. Kitano is known for developing AIBO (Artificial Intelligence Robot), a series of robotic dogs designed and manufactured by Sony, and the robotic soccer world cup tournament known as RoboCup.
Register now: https://supercomputingfrontiers.eu/2020/seminars/
Virtual ICM Seminars in Computer and Computational Science are a continuation of the Supercomputing Frontiers Europe conference, which took place virtually in March this year.
Worldwide Open Science online meetings in HPC, Artificial Intelligence, Quantum Computing, BigData, IoT, and computer and data networks are a place to meet and discuss with such personalities as Stephen Wolfram (Founder & CEO, Wolfram Research), Alan Edelman (MIT), Aneta Afelt (ICM, Espace-DEV, IRD Montpellier France), Simon Mutch (University of Melbourne) and Scott Aaronson (University of Texas at Austin).
For the listing of all ICM seminars please check this link with recordings https://supercomputingfrontiers.eu/2020/past-seminars/
So far, over 2,000 people from all over the world have participated in both initiatives. The organizer of these meetings with outstanding scientists is the Interdisciplinary Centre for Mathematical and Computational Modelling.
About the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw (UW)
Established by a resolution of the Senate of the University of Warsaw dated 29 June 1993, the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw, is one of the top HPC centres in Poland. ICM is engaged in serving the needs of a large community of computational researchers in Poland through provision of HPC and grid resources, storage, networking and expertise. It has always been an active research centre with high quality research contributions in computer and computational science, numerical weather prediction, visualisation, materials engineering, digital repositories, social network analysis and other areas.
Source: ICM UW
The history of computer chips is a thrilling tale of extreme miniaturization.
The smaller, the better: it's a trend that's given birth to the digital world as we know it. So why on earth would you want to reverse course and make chips a lot bigger? Well, while there's no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.
At least, that's what Cerebras, the maker of the biggest computer chip in the world, is hoping.
The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches on a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA's A100 GPU, measures an inch on a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one of a kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.
So can Goliath flip the script on David? Cerebras is on a mission to find out.
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.
Since then, the WSE has made its way into a handful of supercomputing labs, where the company's customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.
So, in a recent trial, researchers pitted the chip, which is housed in an all-in-one system about the size of a dorm-room mini-fridge called the CS-1, against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application, useful for solving complex problems like weather forecasting and airplane wing design.
The trial was described in a preprint paper written by a team led by Cerebras's Michael James and NETL's Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than the Joule 2.0 supercomputer could do a similar task.
The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, "It can tell you what is going to happen in the future faster than the laws of physics produce the same result."
The researchers said the CS-1's performance couldn't be matched by any number of CPUs and GPUs, and CEO and cofounder Andrew Feldman told VentureBeat that would be true no matter how large the supercomputer. At a point, scaling a supercomputer like Joule no longer produces better results on this kind of problem. That's why Joule's simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
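That peak can be sketched with a toy strong-scaling model (the formula and constants here are illustrative, not NETL's measurements): compute time shrinks as cores are added, but communication cost grows with core count, so total time, and with it the achievable speedup, eventually turns around.

```python
import math

def sim_time(p, work=1e6, comm=0.5):
    """Illustrative model: compute shrinks as 1/p, but per-step
    communication grows with the number of participating cores."""
    return work / p + comm * p**0.5 * math.log2(max(p, 2))

# Speedup over one core, for power-of-two core counts up to 131,072.
speedups = {p: sim_time(1) / sim_time(p) for p in [2**k for k in range(18)]}
best = max(speedups, key=speedups.get)
print(best, round(speedups[best], 1))  # speedup peaks at an intermediate p
```

Past the peak, extra cores add more communication than they remove computation, which is the regime the Joule runs hit; the WSE sidesteps it by keeping communication on-chip.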
A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.
While the task is niche (but useful) and the problem well-suited to the CS-1, it's still a pretty stunning result. So how'd they pull it off? It's all in the design.
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.
Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it's also why, in this case, it's better.
Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they're in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between the processor cores doing the calculations and the shared memory that stores the results.
It's a little like an old-timey company that does all its business on paper.
The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the couriers' top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.
Now, imagine the company builds a gleaming new skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to a neighbor's office. The information commute has all but disappeared. Everything's in the same house.
Cerebras's megachip is a bit like that skyscraper. The way it shuttles information, aided further by its specially tailored compiler software, is far more efficient than a traditional supercomputer that needs to network a ton of traditional chips.
It's worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine's ability to do high-fidelity simulation in real time. The authors note, for example, that the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process, something not possible with traditional chips.
Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.
They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.
Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe theyve solved the problem in a way thats useful and economical.
Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.
It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.
Image credit: Cerebras
November 23, 2020 • Physics 13, 183
Classical computers can efficiently simulate the behavior of quantum computers if the quantum computer is imperfect enough.
With a few quantum bits, an ideal quantum computer can process vast amounts of information in a coordinated way, making it significantly more powerful than a classical counterpart. This predicted power increase will be great for users but is bad for physicists trying to simulate on a classical computer how an ideal quantum computer will behave. Now, a trio of researchers has shown that they can substantially reduce the resources needed for these simulations if the quantum computer is imperfect enough. The arXiv version of the trio's paper is one of the most Scited papers of 2020, and the result generated quite a stir when it first appeared back in February. I overheard it being enthusiastically discussed at the Quantum Optics Conference in Obergurgl, Austria, at the end of that month, back when we could still attend conferences in person.
In 2019, Google claimed to have achieved the quantum computing milestone known as quantum advantage, publishing results showing that its quantum computer Sycamore had performed a calculation that was essentially impossible for a classical one. More specifically, Google claimed that it had completed a three-minute quantum computation, which involved generating random numbers with Sycamore's 53 qubits, that would take thousands of years on a state-of-the-art classical supercomputer, such as IBM's Summit. IBM quickly countered the claim, arguing that more efficient memory storage would reduce the task time on a classical computer to a couple of days. The claims and counterclaims sparked an industry clash and an intense debate between supporters in the two camps.
Resolving the disparity between these estimates is one of the goals of the new work by Yiqing Zhou, of the University of Illinois at Urbana-Champaign, and her two colleagues. In their study, they focused on algorithms for classically replicating imperfect quantum computers, also known as NISQ (noisy intermediate-scale quantum) devices. Today's state-of-the-art quantum computers, including Sycamore, are NISQ devices. The algorithms the team used are based on so-called tensor network methods, specifically matrix product states (MPS), which are good for simulating noise and so are naturally suited to studying NISQ devices. MPS methods approximate low-entangled quantum states with simpler structures, so they provide a data-compression-like protocol that can make it less computationally expensive to classically simulate imperfect quantum computers (see Viewpoint: Pushing Tensor Networks to the Limit).
Zhou and colleagues first consider a random 1D quantum circuit made of neighboring, interleaved two-qubit gates and single-qubit random unitary operations. The two-qubit gates are either controlled-NOT gates or controlled-Z (CZ) gates, which create entanglement. They ran their algorithm for NISQ circuits containing different numbers of qubits, N, and different depths, D, a parameter that relates to the number of gates the circuit executes (Fig. 1). They also varied a parameter χ in the MPS algorithm: χ is the so-called bond dimension of the MPS and essentially controls how well the MPS captures entanglement between qubits.
The trio demonstrate that they can exactly simulate any imperfect quantum circuit if D and N are small enough and χ is set to a value within reach of a classical computer. They can do that because shallow quantum circuits can only create a small amount of entanglement, which is fully captured by a moderate χ. However, as D increases, the team finds that χ cannot capture all the entanglement. That means they cannot exactly simulate the system, and errors start to accumulate. The team describes this mismatch between the quantum circuit and their classical simulations using a parameter they call the two-qubit gate fidelity f_n. They find that the fidelity of their simulations slowly drops, bottoming out at an asymptotic value f_∞ as D increases. This qualitative behavior persists for different values of N and χ. Also, while their algorithm does not explicitly account for all the error and decoherence mechanisms in real quantum computers, they show that it does produce quantum states of the same quality (perfection) as the experimental ones.
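The role of the bond dimension can be illustrated with a small NumPy sketch (an illustrative toy, not the trio's algorithm): across any cut of a state, the singular-value (Schmidt) spectrum determines how many values χ must be kept to represent the state faithfully. A product state compresses to χ=1 with no error, while a highly entangled random state does not compress at all:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncation_error(state, n_left, chi):
    """Error from keeping only chi singular values across a bipartition
    splitting the first n_left qubits from the rest."""
    mat = state.reshape(2**n_left, -1)
    s = np.linalg.svd(mat, compute_uv=False)
    return np.sqrt(np.sum(s[chi:] ** 2))

n = 10
# Product state |+>^n: zero entanglement, so chi = 1 is exact.
product = np.ones(2**n) / np.sqrt(2**n)
# Normalized random state: near-maximal entanglement across the middle cut.
random_state = rng.normal(size=2**n)
random_state /= np.linalg.norm(random_state)

print(truncation_error(product, n // 2, 1))       # ~0: chi = 1 suffices
print(truncation_error(random_state, n // 2, 1))  # large: needs chi ~ 2^(n/2)
```

Noise in a NISQ device keeps entanglement low enough that a moderate χ captures the state, which is exactly the regime where the MPS simulation stays cheap.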
In light of Google's quantum advantage claims, Zhou and colleagues also apply their algorithm to 2D quantum systems (Sycamore is built on a 2D chip). MPS are specifically designed for 1D systems, but the team uses well-known techniques to extend their algorithm to small 2D ones. They use their algorithm to simulate an N=54, D=20 circuit, roughly matching the parameters of Sycamore (Sycamore has 54 qubits, but one is unusable because of a defect). They replace Google's more entangling iSWAP gates with less entangling CZ gates, which allows them to classically simulate the system up to the same fidelity as Google reported, using a single laptop. The simulation cost should increase quadratically for iSWAP-gate circuits, and although the team proposes a method for performing such simulations, they have not yet carried them out because of the large computational cost it entails.
How do these results relate to Google's quantum advantage claims? As they stand, they do not weaken or refute those claims: with just a few more qubits and an increase in D or f, the next generation of NISQ devices will certainly be much harder to simulate. The results also indicate that the team's algorithm only works if the quantum computer is sufficiently imperfect; if it is almost perfect, their algorithm provides no speedup. Finally, the results provide numerical insight into the values of N, D, f, and χ for which random quantum circuits are confined to a tiny corner of the exponentially large Hilbert space. These values give insight into how to quantify a quantum computer's capability to generate entanglement as a function of f, for example.
So, what's next? One natural question is: can the approach be transferred to efficiently simulate other aspects of quantum computing, such as quantum error correction? The circuits the trio considered are essentially random, whereas quantum error correction circuits are more ordered by design. That means updates to the new algorithm are needed to study such systems. Despite this limitation, the future looks promising for the efficient simulation of imperfect quantum devices [6, 7].
Jordi Tura is an assistant professor at the Lorentz Institute of the University of Leiden, Netherlands. He also leads the institute's Applied Quantum Algorithms group. Tura obtained his B.Sc. degrees in mathematics and telecommunications and his M.Sc. in applied mathematics from the Polytechnic University of Catalonia, Spain. His Ph.D. was awarded by the Institute of Photonic Sciences, Spain. During his postdoctoral stay at the Max Planck Institute of Quantum Optics in Germany, Tura began working in the field of quantum information processing for near-term quantum devices.
A nanopatterned magnetic structure features an unprecedentedly strong coupling between lattice vibrations and quantized spin waves, which could lead to novel ways of manipulating quantum information.
Imperfections Lower the Simulation Cost of Quantum Computers - Physics