Category Archives: Quantum Physics

History of quantum mechanics – Wikipedia

The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began essentially with a number of different scientific discoveries: the 1838 discovery of cathode rays by Michael Faraday; the 1859–60 winter statement of the black-body radiation problem by Gustav Kirchhoff; the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete; the discovery of the photoelectric effect by Heinrich Hertz in 1887; and the 1900 quantum hypothesis by Max Planck that any energy-radiating atomic system can theoretically be divided into a number of discrete "energy elements" ε (the Greek letter epsilon) such that each of these energy elements is proportional to the frequency ν with which each of them individually radiates energy, as defined by the following formula:

ε = hν

where h is a numerical value called Planck's constant.

Then, Albert Einstein in 1905, in order to explain the photoelectric effect previously reported by Heinrich Hertz in 1887, postulated consistently with Max Planck's quantum hypothesis that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface.

The phrase "quantum mechanics" was coined (in German, Quantenmechanik) by the group of physicists including Max Born, Werner Heisenberg, and Wolfgang Pauli, at the University of Gttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik".[1] In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.

Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete (as opposed to continuous). He was a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck.

In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's law, that included a Boltzmann distribution (applicable in the classical limit). Planck's law[2] can be stated as follows: $I(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \cdot \frac{1}{e^{h\nu/kT} - 1}$, where I(ν, T) is the spectral radiance of the black body, ν is the frequency of the radiation, T is the absolute temperature, h is the Planck constant, c is the speed of light, and k is the Boltzmann constant.

The earlier Wien approximation may be derived from Planck's law by assuming $h\nu \gg kT$.
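Spelling that limit out (a standard step, included here for clarity rather than taken from the article): when $h\nu \gg kT$, the $-1$ in the denominator of Planck's law becomes negligible and the law reduces to Wien's form.

```latex
% Wien limit of Planck's law, assuming h\nu >> kT so that e^{h\nu/kT} - 1 \approx e^{h\nu/kT}:
I(\nu, T) \approx \frac{2 h \nu^{3}}{c^{2}} \, e^{-h\nu / kT}
```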

Moreover, the application of Planck's quantum theory to the electron allowed Ştefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, which was later called the "magneton"; similar quantum computations, but with numerically quite different values, were subsequently made possible for both the magnetic moments of the proton and the neutron, which are three orders of magnitude smaller than that of the electron.

In 1905, Albert Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states:

"According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole."

This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.[3] These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement; it effectively solved the problem of black-body radiation attaining infinite energy, which occurred in theory if light were to be explained only in terms of waves. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules.

These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta.[4][5] They are collectively known as the old quantum theory.

The phrase "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931).

In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. This theory was for a single particle and derived from special relativity theory. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan[6][7] developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation of the generalised case of de Broglie's theory.[8] Schrödinger subsequently showed that the two approaches were equivalent.

Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. The Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand, and remain widely used.

The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech and John C. Slater, into various theories such as molecular orbital theory and valence bond theory.

Beginning in 1927, researchers attempted to apply quantum mechanics to fields instead of single particles, resulting in quantum field theories. Early workers in this area include P.A.M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan. This area of research culminated in the formulation of quantum electrodynamics by R.P. Feynman, F. Dyson, J. Schwinger, and S. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent quantum field theories.[6][7][9]

The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975.

Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force, for which they received the 1979 Nobel Prize in Physics.

Go here to read the rest:

History of quantum mechanics - Wikipedia

Can consciousness be explained by quantum physics? My research takes us a step closer to finding out – The Conversation UK

One of the most important open questions in science is how our consciousness is established. In the 1990s, long before winning the 2020 Nobel Prize in Physics for his prediction of black holes, physicist Roger Penrose teamed up with anaesthesiologist Stuart Hameroff to propose an ambitious answer.

They claimed that the brain's neuronal system forms an intricate network and that the consciousness this produces should obey the rules of quantum mechanics – the theory that determines how tiny particles like electrons move around. This, they argue, could explain the mysterious complexity of human consciousness.

Penrose and Hameroff were met with incredulity. Quantum mechanical laws are usually only found to apply at very low temperatures. Quantum computers, for example, currently operate at around -272°C. At higher temperatures, classical mechanics takes over. Since our body works at room temperature, you would expect it to be governed by the classical laws of physics. For this reason, the quantum consciousness theory has been dismissed outright by many scientists – though others are persuaded supporters.

Instead of entering into this debate, I decided to join forces with colleagues from China, led by Professor Xian-Min Jin at Shanghai Jiaotong University, to test some of the principles underpinning the quantum theory of consciousness.

In our new paper, we've investigated how quantum particles could move in a complex structure like the brain – but in a lab setting. If our findings can one day be compared with activity measured in the brain, we may come one step closer to validating or dismissing Penrose and Hameroff's controversial theory.

Our brains are composed of cells called neurons, and their combined activity is believed to generate consciousness. Each neuron contains microtubules, which transport substances to different parts of the cell. The Penrose-Hameroff theory of quantum consciousness argues that microtubules are structured in a fractal pattern which would enable quantum processes to occur.

Fractals are structures that are neither two-dimensional nor three-dimensional, but are instead some fractional value in between. In mathematics, fractals emerge as beautiful patterns that repeat themselves infinitely, generating what is seemingly impossible: a structure that has a finite area, but an infinite perimeter.

This might sound impossible to visualise, but fractals actually occur frequently in nature. If you look closely at the florets of a cauliflower or the branches of a fern, you'll see that they're both made up of the same basic shape repeating itself over and over again, but at smaller and smaller scales. That's a key characteristic of fractals.

The same happens if you look inside your own body: the structure of your lungs, for instance, is fractal, as are the blood vessels in your circulatory system. Fractals also feature in the enchanting repeating artworks of MC Escher and Jackson Pollock, and they've been used for decades in technology, such as in the design of antennas. These are all examples of classical fractals – fractals that abide by the laws of classical physics rather than quantum physics.

It's easy to see why fractals have been used to explain the complexity of human consciousness. Because they're infinitely intricate, allowing complexity to emerge from simple repeated patterns, they could be the structures that support the mysterious depths of our minds.

But if this is the case, it could only be happening on the quantum level, with tiny particles moving in fractal patterns within the brain's neurons. That's why Penrose and Hameroff's proposal is called a theory of quantum consciousness.

We're not yet able to measure the behaviour of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunnelling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal.

When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we'd made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that's somewhere between one-dimensional and two-dimensional.
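To give a feel for what a dimension between one and two means, here is a rough numerical sketch (a generic Python illustration, entirely separate from the STM experiment described above): it builds a Sierpiński triangle with the "chaos game" and estimates its dimension by box counting. The exact similarity dimension is log 3 / log 2 ≈ 1.585.

```python
# Estimate the fractal dimension of a Sierpinski triangle (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

# Chaos game: repeatedly jump halfway towards a randomly chosen vertex.
points = np.empty((200_000, 2))
p = np.array([0.3, 0.3])
for i in range(len(points)):
    p = (p + vertices[rng.integers(3)]) / 2
    points[i] = p

# Box counting: count occupied boxes at several scales, then fit the slope of
# log(count) against log(1/box size); that slope estimates the dimension.
sizes = np.array([1/4, 1/8, 1/16, 1/32, 1/64])
counts = [len(np.unique(np.floor(points / s), axis=0)) for s in sizes]

slope = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print(f"box-counting estimate: {slope:.3f}   exact similarity dimension: {np.log(3) / np.log(2):.3f}")
```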

This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.

We achieved this by injecting photons (particles of light) into an artificial chip that was painstakingly engineered into a tiny Sierpiński triangle. We injected photons at the tip of the triangle and watched how they spread throughout its fractal structure in a process called quantum transport. We then repeated this experiment on two different fractal structures, both shaped as squares rather than triangles. And in each of these structures we conducted hundreds of experiments.

Our observations from these experiments reveal that quantum fractals actually behave in a different way to classical ones. Specifically, we found that the spread of light across a fractal is governed by different laws in the quantum case compared to the classical case.
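As a qualitative illustration of what "different laws" of spreading can look like (a toy model on a plain one-dimensional chain, not the photonic fractal chip used in the study), the sketch below compares a continuous-time quantum walk with classical diffusion: the quantum spread grows roughly linearly with time, while the classical spread grows only with its square root.

```python
# Toy comparison of quantum vs classical transport on a simple chain (illustrative only).
import numpy as np
from scipy.linalg import expm

N = 41                                  # sites on a one-dimensional chain
A = np.zeros((N, N))                    # adjacency matrix of the chain
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian, generator of classical diffusion

start = np.zeros(N)
start[N // 2] = 1.0                     # both walks start at the middle site
x = np.arange(N) - N // 2               # positions relative to the start

for t in (2.0, 4.0, 8.0):
    p_quantum = np.abs(expm(-1j * A * t) @ start) ** 2     # quantum walk probabilities
    p_classical = expm(-L * t) @ start                     # classical random walk probabilities
    rms_q = np.sqrt(np.sum(p_quantum * x**2))
    rms_c = np.sqrt(np.sum(p_classical * x**2))
    print(f"t = {t:3.0f}   quantum RMS spread = {rms_q:5.2f}   classical RMS spread = {rms_c:5.2f}")
```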

This new knowledge of quantum fractals could provide the foundations for scientists to experimentally test the theory of quantum consciousness. If quantum measurements are one day taken from the human brain, they could be compared against our results to definitively decide whether consciousness is a classical or a quantum phenomenon.

Our work could also have profound implications across scientific fields. By investigating quantum transport in our artificially designed fractal structures, we may have taken the first tiny steps towards the unification of physics, mathematics and biology, which could greatly enrich our understanding of the world around us as well as the world that exists in our heads.

Read the rest here:

Can consciousness be explained by quantum physics? My research takes us a step closer to finding out - The Conversation UK

What is quantum theory? – Definition from WhatIs.com

Quantum theory is the theoretical basis of modern physics that explains the nature and behavior of matter and energy on the atomic and subatomic level. The nature and behavior of matter and energy at that level is sometimes referred to as quantum physics and quantum mechanics. Organizations in several countries have devoted significant resources to the development of quantum computing, which uses quantum theory to drastically improve computing capabilities beyond what is possible using today's classical computers.

In 1900, physicist Max Planck presented his quantum theory to the German Physical Society. Planck had sought to discover the reason that radiation from a glowing body changes in color from red, to orange, and, finally, to blue as its temperature rises. He found that by making the assumption that energy existed in individual units in the same way that matter does, rather than just as a constant electromagnetic wave - as had been formerly assumed - and was therefore quantifiable, he could find the answer to his question. The existence of these units became the first assumption of quantum theory.

Planck wrote a mathematical equation involving a figure to represent these individual units of energy, which he called quanta. The equation explained the phenomenon very well; Planck found that at certain discrete temperature levels (exact multiples of a basic minimum value), energy from a glowing body will occupy different areas of the color spectrum. Planck assumed there was a theory yet to emerge from the discovery of quanta, but, in fact, their very existence implied a completely new and fundamental understanding of the laws of nature. Planck won the Nobel Prize in Physics for his theory in 1918, but developments by various scientists over a thirty-year period all contributed to the modern understanding of quantum theory.

The two major interpretations of quantum theory's implications for the nature of reality are the Copenhagen interpretation and the many-worlds theory. Niels Bohr proposed the Copenhagen interpretation of quantum theory, which asserts that a particle is whatever it is measured to be (for example, a wave or a particle), but that it cannot be assumed to have specific properties, or even to exist, until it is measured. In short, Bohr was saying that objective reality does not exist. This translates to a principle called superposition that claims that while we do not know what the state of any object is, it is actually in all possible states simultaneously, as long as we don't look to check.

To illustrate this theory, we can use the famous and somewhat cruel analogy of Schrödinger's Cat. First, we have a living cat and place it in a thick lead box. At this stage, there is no question that the cat is alive. We then throw in a vial of cyanide and seal the box. We do not know if the cat is alive or if the cyanide capsule has broken and the cat has died. Since we do not know, the cat is both dead and alive, according to quantum law – in a superposition of states. It is only when we break open the box and see what condition the cat is in that the superposition is lost, and the cat must be either alive or dead.
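The analogy can be mimicked with a toy two-state system (an illustrative sketch only; "alive" and "dead" simply label the two basis states): the state is an equal superposition until a measurement selects one outcome with the Born-rule probabilities.

```python
# Toy two-state "cat": superposition until measured (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(1)

# Amplitudes for |alive> and |dead>; probabilities are |amplitude|^2 (Born rule).
state = np.array([1.0, 1.0]) / np.sqrt(2)      # equal superposition
probs = np.abs(state) ** 2                     # -> [0.5, 0.5]

# "Opening the box" is a measurement: one outcome is selected at random with
# those probabilities, and the superposition is gone afterwards.
outcome = rng.choice(["alive", "dead"], p=probs)
print("probabilities before opening:", dict(zip(["alive", "dead"], probs)))
print("outcome after opening:", outcome)
```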

The second interpretation of quantum theory is the many-worlds (or multiverse) theory. It holds that as soon as a potential exists for any object to be in any state, the universe of that object transmutes into a series of parallel universes equal to the number of possible states in which the object can exist, with each universe containing a unique single possible state of that object. Furthermore, there is a mechanism for interaction between these universes that somehow permits all states to be accessible in some way and for all possible states to be affected in some manner. Stephen Hawking and the late Richard Feynman are among the scientists who have expressed a preference for the many-worlds theory.

Although scientists throughout the past century have balked at the implications of quantum theory - Planck and Einstein among them - the theory's principles have repeatedly been supported by experimentation, even when the scientists were trying to disprove them. Quantum theory and Einstein's theory of relativity form the basis for modern physics. The principles of quantum physics are being applied in an increasing number of areas, including quantum optics, quantum chemistry, quantum computing, and quantum cryptography.

See the article here:

What is quantum theory? - Definition from WhatIs.com

How Bell’s Theorem Proved ‘Spooky Action at a Distance’ Is Real – Quanta Magazine

We take for granted that an event in one part of the world cannot instantly affect what happens far away. This principle, which physicists call locality, was long regarded as a bedrock assumption about the laws of physics. So when Albert Einstein and two colleagues showed in 1935 that quantum mechanics permits "spooky action at a distance," as Einstein put it, this feature of the theory seemed highly suspect. Physicists wondered whether quantum mechanics was missing something.

Then in 1964, with the stroke of a pen, the Northern Irish physicist John Stewart Bell demoted locality from a cherished principle to a testable hypothesis. Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. In the years since, experiments have vindicated quantum mechanics again and again.

Bell's theorem upended one of our most deeply held intuitions about physics, and prompted physicists to explore how quantum mechanics might enable tasks unimaginable in a classical world. "The quantum revolution that's happening now, and all these quantum technologies – that's 100% thanks to Bell's theorem," says Krister Shalm, a quantum physicist at the National Institute of Standards and Technology.

Here's how Bell's theorem showed that "spooky action at a distance" is real.

The "spooky action" that bothered Einstein involves a quantum phenomenon known as entanglement, in which two particles that we would normally think of as distinct entities lose their independence. Famously, in quantum mechanics a particle's location, polarization and other properties can be indefinite until the moment they are measured. Yet measuring the properties of entangled particles yields results that are strongly correlated, even when the particles are far apart and measured nearly simultaneously. The unpredictable outcome of one measurement appears to instantly affect the outcome of the other, regardless of the distance between them – a gross violation of locality.

To understand entanglement more precisely, consider a property of electrons and most other quantum particles called spin. Particles with spin behave somewhat like tiny magnets. When, for instance, an electron passes through a magnetic field created by a pair of north and south magnetic poles, it gets deflected by a fixed amount toward one pole or the other. This shows that the electron's spin is a quantity that can have only one of two values: "up" for an electron deflected toward the north pole, and "down" for an electron deflected toward the south pole.

Imagine an electron passing through a region with the north pole directly above it and the south pole directly below. Measuring its deflection will reveal whether the electron's spin is up or down along the vertical axis. Now rotate the axis between the magnet poles away from vertical, and measure deflection along this new axis. Again, the electron will always deflect by the same amount toward one of the poles. You'll always measure a binary spin value – either up or down – along any axis.

It turns out it's not possible to build any detector that can measure a particle's spin along multiple axes at the same time. Quantum theory asserts that this property of spin detectors is actually a property of spin itself: If an electron has a definite spin along one axis, its spin along any other axis is undefined.

Armed with this understanding of spin, we can devise a thought experiment that we can use to prove Bell's theorem. Consider a specific example of an entangled state: a pair of electrons whose total spin is zero, meaning measurements of their spins along any given axis will always yield opposite results. What's remarkable about this entangled state is that, although the total spin has this definite value along all axes, each electron's individual spin is indefinite.

Suppose these entangled electrons are separated and transported to distant laboratories, and that teams of scientists in these labs can rotate the magnets of their respective detectors any way they like when performing spin measurements.

When both teams measure along the same axis, they obtain opposite results 100% of the time. But is this evidence of nonlocality? Not necessarily.

Alternatively, Einstein proposed, each pair of electrons could come with an associated set of hidden variables specifying the particles spins along all axes simultaneously. These hidden variables are absent from the quantum description of the entangled state, but quantum mechanics may not be telling the whole story.

Hidden variable theories can explain why same-axis measurements always yield opposite results without any violation of locality: A measurement of one electron doesn't affect the other but merely reveals the preexisting value of a hidden variable.

Bell proved that you could rule out local hidden variable theories, and indeed rule out locality altogether, by measuring entangled particles' spins along different axes.

Suppose, for starters, that one team of scientists happens to rotate its detector relative to the other lab's by 180 degrees. This is equivalent to swapping its north and south poles, so an "up" result for one electron would never be accompanied by a "down" result for the other. The scientists could also choose to rotate it by an in-between amount – 60 degrees, say. Depending on the relative orientation of the magnets in the two labs, the probability of opposite results can range anywhere between 0% and 100%.

Without specifying any particular orientations, suppose that the two teams agree on a set of three possible measurement axes, which we can label A, B and C. For every electron pair, each lab measures the spin of one of the electrons along one of these three axes chosen at random.

Let's now assume the world is described by a local hidden variable theory, rather than quantum mechanics. In that case, each electron has its own spin value in each of the three directions. That leads to eight possible sets of values for the hidden variables: each of the axes A, B and C can independently carry spin up or spin down for the first electron (with the second electron always opposite), and we can label these eight combinations 1 through 8.

The set of spin values labeled 5, for instance, dictates that the result of a measurement along axis A in the first lab will be up, while measurements along axes B and C will be down; the second electron's spin values will be opposite.

For any electron pair possessing spin values labeled 1 or 8, measurements in the two labs will always yield opposite results, regardless of which axes the scientists choose to measure along. The other six sets of spin values all yield opposite results in 33% of different-axis measurements. (For instance, for the spin values labeled 5, the labs will obtain opposite results when one measures along axis B while the other measures along C; this represents one-third of the possible choices.)

Thus the labs will obtain opposite results when measuring along different axes at least 33% of the time; equivalently, they will obtain the same result at most 67% of the time. This result – an upper bound on the correlations allowed by local hidden variable theories – is the inequality at the heart of Bell's theorem.
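That counting argument is small enough to check by brute force. The sketch below (an illustrative script using the axis labels A, B and C from above) enumerates all eight hidden-variable assignments and confirms that each one yields opposite results in at least a third of the different-axis measurements.

```python
# Brute-force check of the local hidden-variable bound (illustrative sketch).
from itertools import product

axes = range(3)                                    # the three axes A, B and C
fractions = []
for hidden in product([+1, -1], repeat=3):         # the 8 possible hidden-variable sets
    opposite = total = 0
    for a1, a2 in product(axes, axes):
        if a1 == a2:
            continue                               # only count different-axis runs
        total += 1
        result_1 = hidden[a1]                      # lab 1 measures the first electron
        result_2 = -hidden[a2]                     # lab 2 measures the opposite partner
        opposite += (result_1 != result_2)
    fractions.append(opposite / total)
    print(hidden, f"-> opposite results in {opposite}/{total} different-axis measurements")

print(f"worst case over all sets: {min(fractions):.0%} opposite, "
      f"so 'same' can happen at most {1 - min(fractions):.0%} of the time")
```

Running it prints 6/6 for the two all-same assignments and 2/6 for the six mixed ones, reproducing the 33% and 67% figures quoted above.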

Now, what about quantum mechanics? We're interested in the probability of both labs obtaining the same result when measuring the electrons' spins along different axes. The equations of quantum theory provide a formula for this probability as a function of the angles between the measurement axes.

According to the formula, when the three axes are all as far apart as possible – that is, all 120 degrees apart, as in the Mercedes logo – both labs will obtain the same result 75% of the time. This exceeds Bell's upper bound of 67%.
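For reference, the formula in question is the standard result for a spin-singlet pair measured along axes separated by an angle θ (quoted here as background; the article itself does not spell it out):

```latex
% Probability that the two labs obtain the SAME result for a singlet pair,
% with measurement axes at relative angle \theta:
P_{\mathrm{same}}(\theta) = \tfrac{1}{2}(1 - \cos\theta) = \sin^{2}(\theta/2),
\qquad
P_{\mathrm{same}}(120^{\circ}) = \sin^{2}(60^{\circ}) = \tfrac{3}{4} = 75\%.
```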

That's the essence of Bell's theorem: If locality holds and a measurement of one particle cannot instantly affect the outcome of another measurement far away, then the results in a certain experimental setup can be no more than 67% correlated. If, on the other hand, the fates of entangled particles are inextricably linked even across vast distances, as in quantum mechanics, the results of certain measurements will exhibit stronger correlations.

Since the 1970s, physicists have made increasingly precise experimental tests of Bell's theorem. Each one has confirmed the strong correlations of quantum mechanics. In the past five years, various loopholes have been closed. Locality – that long-held assumption about physical law – is not a feature of our world.

Editors note: The author is currently a postdoctoral researcher at JILA in Boulder, Colorado.

See the article here:

How Bell's Theorem Proved 'Spooky Action at a Distance' Is Real - Quanta Magazine

Physicists Show That a Quantum Particle Made of Light and Matter Can Be Dragged by a Current of Electrons – Columbia University

In the recent Nature study, Basov and his colleagues recreated Fizeau's experiments on a speck of graphene made up of a single layer of carbon atoms. Hooking up the graphene to a battery, they created an electrical current reminiscent of Fizeau's water streaming through a pipe. But instead of shining light on the moving water and measuring its speed in both directions, as Fizeau did, they generated an electromagnetic wave with a compressed wavelength – a polariton – by focusing infrared light on a gold nub in the graphene. The activated stream of polaritons looks like light but is physically more compact due to the polaritons' short wavelengths.

The researchers clocked the polaritons' speed in both directions. When they traveled with the flow of the electrical current, they maintained their original speed. But when launched against the current, they slowed by a few percentage points.

"We were surprised when we saw it," said study co-author Denis Bandurin, a physics researcher at MIT. "First, the device was still alive, despite the heavy current we passed through it – it hadn't blown up. Then we noticed the one-way effect, which was different from Fizeau's original experiments."

The researchers repeated the experiments over and over, led by the study's first author, Yinan Dong, a Columbia graduate student. Finally, it dawned on them. "Graphene is a material that turns electrons into relativistic particles," Dong said. "We needed to account for their spectrum."

A group at Berkeley Lab found a similar result, published in the same issue of Nature. Beyond reproducing the Fizeau effect in graphene, both studies have practical applications. Most natural systems are symmetric, but here, researchers found an intriguing exception. Basov said he hopes to slow down, and ultimately, cut off the flow of polaritons in one direction. It's not an easy task, but it could hold big rewards.

"Engineering a system with a one-way flow of light is very difficult to achieve," said Milan Delor, a physical chemist working on light-matter interactions at Columbia who was not involved in the research. "As soon as you can control the speed and direction of polaritons, you can transmit information in nanoscale circuits on ultrafast timescales. It's one of the ingredients currently missing in photon-based circuits."

Read the original post:

Physicists Show That a Quantum Particle Made of Light and Matter Can Be Dragged by a Current of Electrons - Columbia University

Here’s How IBM Is Driving the Use of Quantum Computing on Wall Street – Business Insider

Quantum computers might look like extravagant chandeliers, but they actually hold great potential. And IBM's top quantum chief said Wall Street's use of the tech is on the cusp of taking off.

Quantum computing unlocks the ability to execute big, complex calculations faster than traditional computers. It does so by leveraging quantum mechanics: quantum computers run on quantum bits, or qubits, rather than the traditional 1s and 0s that classical computers use.

For years, theoretical research has shown that while quantum computing can be beneficial, the cost for companies to deploy the tech has been too high to justify.

JPMorgan Chase, for example, has worked with IBM to use quantum to test an algorithm that predicted options prices, according to a 2019 IBM research blog.

IBM's quantum computer required less data input, cutting down the number of samples for a given simulation from millions to a few thousand, IBM mathematician Dr. Stefan Woerner said at the time. With fewer samples, he said, computations could be done in near real-time, as opposed to overnight.

While the technology was tested successfully and is ready to use, the bank has kept the capability on the back burner. The resources required for the quantum machine made the classical computer a better, more efficient option, a bank spokesperson told Insider.

But that'll soon change, according to IBM's chief quantum exponent, Bob Sutor.

"When is quantum going to do something more for me than the systems I have already?" Sutor told Insider. "Within a few years we'll start to see that."

That's because more people are starting to test quantum techniques. The more users, Sutor said, the more IBM and others within its quantum network, including JPMorgan, Goldman Sachs, and Wells Fargo can learn, iterate, and build off the rare instances when using quantum over a classical computer makes sense, financially.

In 2016 IBM put quantum on the cloud, and now has about 20 quantum computing systems accessible via the web, Sutor said. Half are free to use.

Roughly 325,000 people have registered to use the tech since 2016 and there are about 2 billion circuits (the tiny bits of code sent to the quantum hardware to run) executed daily, he added.

An open-source tool, called Qiskit, enables users to code in Python when using the cloud-based quantum computers, Sutor said, a coding language specifically chosen for its widespread use and deep roots within the data science and AI communities.
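For a sense of what such a circuit looks like, here is a minimal sketch (assuming a recent Qiskit release; API details vary between versions, and this inspects the state locally rather than submitting it to IBM's cloud hardware):

```python
# Minimal Qiskit circuit: prepare and inspect a two-qubit entangled (Bell) state.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into an equal superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0, giving a Bell state

state = Statevector.from_instruction(qc)
print(qc.draw())                      # the circuit that would be sent to hardware
print(state.probabilities_dict())     # expected: {'00': 0.5, '11': 0.5}
```

On the hosted systems, the same circuit would be submitted to a cloud backend for execution instead of being inspected locally.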

Meanwhile, IBM is making investments to grow the quantum team across scientists and developers, according to a spokesperson who declined to specify numbers. The company is also standing up a quantum computer in Tokyo this year and another on-premise quantum computer in Cleveland, Sutor said.

IBM's quantum computers are getting bigger, too. The firm's first quantum computer was a 5-qubit machine (the number and quality of qubits reflect the machine's compute power). Now it has a machine with 65 qubits; by year end it will build a machine with 127 qubits; and by 2023, IBM will have a machine with more than 1,000 qubits, Sutor said.

Sutor said financial services companies are on the forefront of quantum exploration, adding that "their researchers are very hardcore when you're talking about artificial intelligence and now quantum."

Speed matters for financial institutions performing risk calculations or algorithmic trading, making a strong use case for quantum computing, Howard Boville, head of IBM's hybrid cloud platform, told Insider.

"They're always looking for milliseconds of advantage in terms of latency," he said, referring to the financial firms tapping the technology.

Read more from the original source:

Here's How IBM Is Driving the Use of Quantum Computing on Wall Street - Business Insider

Christian Ferko’s PhD Thesis Defense | Department of Physics | The University of Chicago – UChicago News

11:00 am–12:00 pm

Please join us:

Christian Ferko's PhD Thesis Defense

Monday July 26, 2021 at 11 am CDT

SUPERSYMMETRY AND IRRELEVANT DEFORMATIONS

The T\bar{T} operator provides a universal irrelevant deformation of two-dimensional quantum field theories with remarkable properties, including connections to both string theory and holography beyond AdS spacetimes. In particular, it appears that a T\bar{T}-deformed theory is a kind of new structure, which is neither a local quantum field theory nor a full-fledged string theory, but which is nonetheless under some analytic control. On the other hand, supersymmetry is a beautiful extension of Poincaré symmetry which relates bosonic and fermionic degrees of freedom. The extra computational power provided by supersymmetry renders many calculations more tractable. It is natural to ask what one can learn about irrelevant deformations in supersymmetric quantum field theories.

In this talk, I will describe a presentation of the T\bar{T} deformation in manifestly supersymmetric settings. I define a "supercurrent-squared" operator, which is closely related to T\bar{T}, in any two-dimensional theory with (0, 1), (1, 1), or (2, 2) supersymmetry. This deformation generates a flow equation for the superspace Lagrangian of the theory, which therefore makes the supersymmetry manifest. In certain examples, the deformed theories produced by supercurrent-squared are related to superstring and brane actions, and some of these theories possess extra non-linearly realized supersymmetries. Finally, I will show that T\bar{T} defines a new theory of both abelian and non-abelian gauge fields coupled to charged matter, which includes models compatible with maximal supersymmetry. In analogy with the Dirac-Born-Infeld (DBI) theory, which defines a non-linear extension of Maxwell electrodynamics, these models possess a critical value for the electric field.

Committee members:

Savdeep Sethi (Chair)

Jeffrey Harvey

Robert Wald

Mark Oreglia

Christian will be starting a postdoc at UC Davis in the Center for Quantum Mathematics and Physics (QMAP).

Thesis Defense

Read the original here:

Christian Ferko's PhD Thesis Defense | Department of Physics | The University of Chicago - UChicago News

4 bizarre Stephen Hawking theories that turned out to be right (and 6 we’re not sure about) – Livescience.com

Stephen Hawking was one of the greatest theoretical physicists of the modern age. Best known for his appearances in popular media and his lifelong battle against debilitating illness, his true impact on posterity comes from his brilliant five-decade career in science. Beginning with his doctoral thesis in 1966, his groundbreaking work continued nonstop right up to his final paper in 2018, completed just days before his death at the age of 76.

Hawking worked at the intellectual cutting edge of physics, and his theories often seemed bizarrely far-out at the time he formulated them. Yet they're slowly being accepted into the scientific mainstream, with new supporting evidence coming in all the time. From his mind-blowing views of black holes to his explanation for the universe's humble beginnings, here are some of his theories that were vindicated and some that are still up in the air.

Hawking got off to a flying start with his doctoral thesis, written at a critical time when there was heated debate between two rival cosmological theories: the Big Bang and the Steady State. Both theories accepted that the universe is expanding, but in the first it expands from an ultra-compact, super-dense state at a finite time in the past, while the second assumes the universe has been expanding forever, with new matter constantly being created to maintain a constant density. In his thesis, Hawking showed that the Steady State theory is mathematically self-contradictory. He argued instead that the universe began as an infinitely small, infinitely dense point called a singularity. Today, Hawking's description is almost universally accepted among scientists.

More than anything else, Hawking's name is associated with black holes – another kind of singularity, formed when a star undergoes complete collapse under its own gravity. These mathematical curiosities arose from Einstein's theory of general relativity, and they had been debated for decades when Hawking turned his attention to them in the early 1970s.

According to an article in Nature, his stroke of genius was to combine Einstein's equations with those of quantum mechanics, turning what had previously been a theoretical abstraction into something that looked like it might actually exist in the universe. The final proof that Hawking was correct came in 2019, when the Event Horizon Telescope obtained a direct image of the supermassive black hole lurking in the center of giant galaxy Messier 87.

Black holes got their name because their gravity is so strong that photons, or particles of light, shouldn't be able to escape from them. But in his early work on the subject, Hawking argued that the truth is more subtle than this monochrome picture.

By applying quantum theory – specifically, the idea that pairs of "virtual photons" can spontaneously be created out of nothing – he realized that some of these photons would appear to be radiated from the black hole. Now referred to as Hawking radiation, the theory was recently confirmed in a laboratory experiment at the Technion-Israel Institute of Technology, Israel. In place of a real black hole, the researchers used an acoustic analog – a "sonic black hole" from which sound waves cannot escape. They detected the equivalent of Hawking radiation exactly in accordance with the physicist's predictions.
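For reference, the temperature of this radiation for a black hole of mass M is given by the standard Hawking formula (included as background; it is not quoted in the article):

```latex
% Hawking temperature of a black hole of mass M (standard form):
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}
```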

In classical physics, entropy – the disorder of a system – can only ever increase with time, never decrease. Together with Jacob Bekenstein, Hawking proposed that the entropy of a black hole is measured by the surface area of its surrounding event horizon.
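That proposal is usually written as the Bekenstein-Hawking entropy, proportional to the horizon area A (the standard form, given here for context):

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A (standard form):
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar}
```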

The recent discovery of gravitational waves emitted by merging pairs of black holes shows that Hawking was right again. As Hawking told the BBC after the first such event in 2016, "the observed properties of the system are consistent with predictions about black holes that I made in 1970 ... the area of the final black hole is greater than the sum of the areas of the initial black holes." More recent observations have provided further confirmation of Hawking's "area theorem."

So the world is gradually catching up with Stephen Hawking's amazing predictions. But there are still quite a few that have yet to be proven one way or the other:

The existence of Hawking radiation creates a serious problem for theoreticians. It seems to be the only process in physics that deletes information from the universe.

The basic properties of the material that went into making the black hole appear to be lost forever; the radiation that comes out tells us nothing about them. This is the so-called information paradox that scientists have been trying to solve for decades. Hawking's own take on the mystery, which was published in 2016, is that the information isn't truly lost. It's stored in a cloud of zero-energy particles surrounding the black hole, which he dubbed "soft hair." But Hawking's hairy black hole theorem is only one of several hypotheses that have been put forward, and to date no one knows the true answer.

Black holes are created from the gravitational collapse of pre-existing matter such as stars. But it's also possible that some were created spontaneously in the very early universe, soon after the Big Bang.

Hawking was the first person to explore the theory behind such primordial black holes in depth. It turns out they could have virtually any mass whatsoever, from very light to very heavy – though the really tiny ones would have "evaporated" into nothing by now due to Hawking radiation. One intriguing possibility considered by Hawking is that primordial black holes might make up the mysterious dark matter that astronomers believe permeates the universe. However, as LiveScience previously reported, current observational evidence indicates that this is unlikely. Either way, we currently don't have observational tools to detect primordial black holes or to say whether they make up dark matter.

One of the topics Hawking tinkered with toward the end of his life was the multiverse theory – the idea that our universe, with its beginning in the Big Bang, is just one of an infinite number of coexisting bubble universes.

Hawking wasn't happy with the suggestion, made by some scientists, that any ludicrous situation you can imagine must be happening right now somewhere in that infinite ensemble. So, in his very last paper in 2018, Hawking sought, in his own words, to "try to tame the multiverse." He proposed a novel mathematical framework that, while not dispensing with the multiverse altogether, rendered it finite rather than infinite. But as with any speculation concerning parallel universes, we have no idea if his ideas are right. And it seems unlikely that scientists will be able to test his idea any time soon.

Surprising as it may sound, the laws of physics as we understand them today don't prohibit time travel. The solutions to Einstein's equations of general relativity include "closed time-like curves," which would effectively allow you to travel back into your own past. Hawking was bothered by this, because he felt that backward travel in time raised logical paradoxes that simply shouldn't be possible.

So he suggested that some currently unknown law of physics prevents closed timelike curves from occurring his so-called "chronology protection conjecture." But "conjecture" is just science-speak for "guess," and we really don't know whether time travel is possible or not.

One of the questions cosmologists get asked most often is "what happened before the Big Bang?" Hawking's own view was that the question is meaningless. To all intents and purposes, time itself – as well as the universe and everything in it – began at the Big Bang.

"For me, this means that there is no possibility of a creator," he said, and as LiveScience previously reported, "because there is no time for a creator to have existed in." That's an opinion many people will disagree with, but one that Hawking expressed on numerous occasions throughout his life. It almost certainly falls in the "will never be resolved one way or the other" category.

In his later years, Hawking made a series of bleak prophecies concerning the future of humanity that he may or may not have been totally serious about, BBC reported.

These range from the suggestion that the elusive Higgs boson, or "God particle," might trigger a vacuum bubble that would gobble up the universe, to hostile alien invasions and artificial intelligence (AI) takeovers. Although Stephen Hawking was right about so many things, we'll just have to hope he was wrong about these.

Originally published on Live Science.

See the original post:

4 bizarre Stephen Hawking theories that turned out to be right (and 6 we're not sure about) - Livescience.com

Can we build a computer with free will? – The Next Web

Do you have free will? Can you make your own decisions? Or are you more like an automaton, just moving as required by your constituent parts? Probably, like most people, you feel you have something called free will. Your decisions are not predetermined; you could do otherwise.

Yet scientists can tell you that you are made up of atoms and molecules and that they are governed by the laws of physics. Fundamentally, then – in terms of atoms and molecules – we can predict the future for any given starting point. This seems to leave no room for free will, alternative actions, or decisions.

Confused? You have every right to be. This has been one of the long outstanding unresolved problems in philosophy. There has been no convincing resolution, though speculation has included a key role for quantum theory, which describes the uncertainty of nature at the smallest scales. It is this that has fascinated me. My research interests include the foundations of quantum theory. So could free will be thought of as a macroscopic quantum phenomenon? I set out to explore the question.

There is enough philosophy literature on the subject to fill a small library. As a trained scientist I approached the problem by asking: what is the evidence? Sadly, in some ways, my research showed no link between free will and fundamental physics. Decades of philosophical debate as to whether free will could be a quantum phenomenon has been chasing an unfounded myth.

Imagine you are on stage, facing two envelopes. You are told that one has 100 inside and the other is empty. You have a free choice to pick one – yet every time the magician wins, and you pick the empty one. This implies that our sense of free will is not quite as reliable as we think it is – or at least that it's subject to manipulation, if it is there.

This is just one of a wide variety of examples that question our awareness of our own decision-making processes. Evidence from psychology, sociology, and even neuroscience gives the same message: that we are unaware of how we make decisions. And our own introspection is unreliable as evidence of how our mental processes function.

So, what is the evidence for the abstract concept of free will? None. How could we test for it? We can't. How could we recognize it? We can't. The supposed connection between our perception of free will and the uncertainty inherent to quantum theory is, therefore, unsupported by the evidence.

But we do have an experience of free will, and this experience is a fact. So having debunked the supposed link with fundamental physics, I wanted to go further and explore why we have a perception of being able to do otherwise. That perception has nothing to do with knowing the exact position of every molecule in our bodies, but everything to do with how we question and challenge our decision-making in a way that really does change our behavior.

For me as a scientist, this meant building a model of free will and testing it. But how would you do this? Could I mimic it with a computer program? If I were successful how would my computer or robot be tested?

The topic is fuelled by prejudice. You would probably assume without evidence that my brother has free will, but my computer does not. So I will offer an emotionally neutral challenge: if an alien lands on Earth, how would you decide if it was an alien being with free will like us, or a sophisticated automaton?

Strangely, the philosophical literature does not seem to consider tests for free will. But as a scientist, it was essential to have a test for my model. So here is my answer: if you are right-handed, you will write your name holding a pen in your right hand. You will do so predictably almost 100% of the time. But you have free will – you could do otherwise. You can prove it by responding to a challenge or even challenging yourself. Given a challenge you may well write with your left hand. That is a highly discerning test of free will. And you can probably think of others, not just finely balanced 50:50 choices, but really rare events that show your independence and distinguish you from an automaton.

Based on this, I would test my alien with a challenge to do something unusual and useless, perhaps slightly harmful even, like putting its hand near a flame. I would take that as evidence of free will. After all, no robot would be programmed to do that.

And so I tried to model that behavior in the simplest most direct way, starting with a generic goal-seeking computer program that responds to inputs from the environment. These programs are commonly used across disciplines from sociology, economics, and AI. The goal-seeking program is so general that it applies to simple models of human behavior, but also to hardware like the battery saving program in your mobile phone.

For free will, we add one more goal: to assert independence. The computer program is then designed to satisfy this goal or desire by responding to challenges to do otherwise. It's as simple as that. Test it out yourself: the challenges can be external or you can generate your own. After all, isn't that how you conclude that you have free will?
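A bare-bones version of that idea might look like the following Python sketch (a hypothetical illustration of the description above, not the author's actual program; the class and method names are my own invention):

```python
# Hypothetical sketch of a goal-seeking agent with an extra "independence" goal.
import random

class GoalSeekingAgent:
    """Generic goal-seeker with one extra goal: assert independence when challenged."""

    def __init__(self, default_action, alternatives):
        self.default_action = default_action      # the habitual, goal-directed choice
        self.alternatives = alternatives          # ways of "doing otherwise"

    def act(self, challenged=False):
        if not challenged:
            return self.default_action            # ordinary, predictable behaviour
        # The independence goal: a challenge to "do otherwise" is satisfied by
        # deliberately picking something other than the default.
        return random.choice(self.alternatives)


agent = GoalSeekingAgent("write with right hand", ["write with left hand"])
print([agent.act() for _ in range(3)])    # predictable almost 100% of the time
print(agent.act(challenged=True))         # the discerning test: it does otherwise
```

The point of this design is that the independence goal only changes behaviour when a challenge is present, so the agent stays predictable the rest of the time, as described above.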

In principle, the program can be implemented in today's computers. It would have to be sophisticated enough to recognize a challenge and even more so to generate its own challenges. But this is well within reach of current technology. That said, I'm not sure that I want my own personal computer exercising free will, though.

This article by Mark Hadley, Visiting Academic in Physics, University of Warwick, is republished from The Conversation under a Creative Commons license. Read the original article.

Excerpt from:

Can we build a computer with free will? - The Next Web

Quantum Key Distribution: Is it as secure as claimed and what can it offer the enterprise? – The Register

Feature Do the laws of physics trump mathematical complexity, or is Quantum Key Distribution (QKD) nothing more than 21st-century enterprise encryption snake oil? The number of QKD news headlines that have included "unhackable", "uncrackable" or "unbreakable" could certainly lead you towards the former conclusion.

However, we at The Reg are unrelenting sceptics for our sins and take all such claims with a bulk-buy bag of Saxa. What this correspondent is not, however, is a physicist nor a mathematician, let alone a quantum cryptography expert. Thankfully, I know several people who are, so I asked them the difficult questions. Here's how those conversations went.

I can tell you what QKD isn't, and that's quantum cryptography. Instead, as the name suggests, it's just the part that deals with the exchange of encryption keys.

As defined by the creators of the first quantum key distribution (QKD) protocol (Bennett and Brassard, 1984), it is a method to solve the problem of the need to distribute secret keys among distant Alices and Bobs in order for cryptography to work. The way QKD solves this problem is by using quantum communication. "It relies on the fact that any attempt of an adversary to wiretap the communication would, by the laws of quantum mechanics, inevitably introduce disturbances which can be detected."

Quantum security expert, mathematician and security researcher Dr Mark Carney explains there "are a few fundamental requirements for QKD to work between Alice (A) and Bob (B), these being a quantum key exchange protocol to guarantee the key exchange has a level of security, a quantum and classical channel between A and B, and the relevant hardware and control software for A and B to enact the protocol we started with."

If you are the diagrammatical type, there's a nifty if nerdy explanatory one here.

It's kind of a given that, in and of themselves, quantum key exchange protocols are primarily very secure, as Dr Carney says most are derived from either BB84 (said QKD protocol of Bennett and Brassard, 1984) or E91 (Ekert, 1991) and sometimes a mixture of the two.

"They've had a lot of scrutiny, but they are generally considered to be solid protocols," Dr Carney says, "and when you see people claiming that 'quantum key exchange is totally secure and unhackable' there are a few things that are meant: that the key length is good (at least 256 bits), the protocol can detect someone eavesdropping on the quantum channel and the entropy of the system gives unpredictable keys, and the use of quantum states to encode these means they are tamper-evident."

So, if the protocol is accepted as secure, where do the snake oil claims enter the equation? According to Dr Carney, it's in the implementation where things start to get very sticky.

"We all know that hardware, firmware, and software have bugs even the most well researched, well assessed, widely hacked pieces of tech such as the smartphone regularly has bug updates, security fixes, and emergency patches. Bug-free code is hard, and it shouldn't be considered that the control systems for QKD are any different," Carney insists.

In other words, it's all well and good having a perfected quantum protocol, but if someone can do memory analysis on A or B's systems, then your "super secure" key can get pwned. "It's monumentally naive in my view that the companies producing QKD tech don't take this head on," Dr Carney concludes. "Hiding behind 'magic quantum woo-woo security' is only going to go so far before people start realising."

Professor Rob Young, director of the Quantum Technology Centre at Lancaster University, agrees that there is a gap between an ideal QKD implementation and a real system, as putting the theory into practice isn't easy without making compromises.

"When you generate the states to send from the transmitter," he explains, "errors are made, and detecting them at the receiver efficiently is challenging. Security proofs typically rely on a long list of often unmet assumptions in the real world."

Then there are the hardware limitations, with most commercially implemented QKD systems using a discrete-state protocol sending single photons down low-loss fibres. "Photons can travel a surprising distance before being absorbed, but it means that the data exchange rate falls off exponentially with distance," Young says.

"Nodes in networks need to be trusted currently, as we can't practically relay or switch quantum channels without trusting the nodes. Solutions to these problems are in development, but they could be years away from commercial implementation."

This lack of quantum repeaters is a red flag, according to Duncan Jones, head of Quantum Cybersecurity at Cambridge Quantum, who warns that "trusted repeaters" are not the same thing. "In most cases this simply means a trusted box which reads the key material from one fibre cable and re-transmits it down another. This is not a quantum-safe approach and negates the security benefits of QKD."

Then there's the motorway junction conundrum. Over to Andersen Cheng, CEO at Post-Quantum, to explain. Cheng points to problems such as QKD only telling you that a person-in-the-middle attack has happened, with photons disturbed because of the interception, but not where that attack is taking place or how many attacks are happening.

"If someone is going to put a tap along your 150km high-grade clear fibre-optic cable, how are you going to locate and weed out those taps quickly?" Cheng asks.

What if an attacker locates your cable grid and cuts a cable off? Where is the contingency for redundancy to ensure no disruption? This is where the motorway junction conundrum comes in.

"QKD is like two junctions of a motorway," Cheng explains. "You know car accidents are happening because the road surface is being attacked, but you do not know how many accidents have happened or where or who the culprit is, so you cannot go and kick the offenders out and patch up the road surface."

This all comes to the fore when Cheng insists: "QKD connections can be blocked using a DDoS attack as simple as using a pneumatic drill in the vicinity of the cable."

Sally Epstein, head of Strategic Technology at Cambridge Consultants, throws a couple of pertinent questions into the "ask any QKD vendor" ring.

"1. Supply chain: There is a much greater potential for well-funded bad actors to get into the supply chain. How do they manage their supply chain security?

"2. Human fallibility: There are almost certainly exploitable weaknesses in the control software, optical sub-assemblies, electronic, firmware, etc. What penetration testing has the supplier conducted in terms of software and hardware?"

Professor Young thinks that QKD currently offers little return on investment for your average enterprise. "QKD can distribute keys with provable security metrics, but current systems are expensive, slow and difficult to implement," he says.

As has already been pointed out, security proofs are generally based on ideal cases without taking the actual physical implementation into account. This, Young says, "troubles the central premise of using QKD in the first place."

However, he doesn't think that the limitations are fundamental and sees an exciting future for the technology.

Because QKD technology is still maturing, and keys can only be sent across relatively short distances using dedicated fibre-optic cables, Jones argues that "only the biggest enterprises and telcos should be spending any money on researching this technology today."

Not least, he says, because the problems QKD solves are equally well addressed through different means. "Quantum-safe cryptography, coupled with verifiable quantum key generation, is an excellent approach to the same problem and works perfectly today," Jones concludes.

Professor Andrew Lord, head of Optical Network Research at BT, has a less pessimistic outlook.

"Our trial with NCC in Bristol illustrates a client with a need to transmit data which should remain secure for many years into the future," Lord told The Reg. "QKD is attractive here because it provides security against the 'tap now, decrypt later' risk, where data could be stored and decrypted when a quantum computer becomes available."

The UK's National Cyber Security Centre (NCSC) has gone on the record to state it does not endorse the use of QKD for any government or military application, and the National Security Agency (NSA) in the US has reached the same conclusion.

Jones of Cambridge Quantum says he completely agrees with the NCSC/NSA perspectives because the "first generation of quantum security technologies has failed to deliver tangible benefits for commercial or government applications."

Young goes further: "Both NCSC and NSA echo the views of all serious cryptographers with regards to QKD, and I am in complete agreement with them."

So what needs to change to make QKD solutions relevant to enterprises in the real world? Lord admits that the specialised hardware requirements of QKD does mean it won't be the best solution for all use cases, but foresees "photonic-chip based QKD ultimately bringing the price down to a point where it can be integrated into standard optical transmission equipment."

Dr Carney adds: "In closing, all this leaves us with the biggest misunderstanding about QKD vs classical key exchange; in classical key exchange the mathematics that makes Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) or your favourite Post-Quantum Cryptography (PQC) key exchange secure is distinct and independent of the physical channel (the classical channel) that is being used for the protocol.

"On a QKD system, the mathematics is in some way intrinsically, and necessarily, linked to the actual physicality of the system. This situation is unavoidable, and we would do well to design for and around it."

Visit link:

Quantum Key Distribution: Is it as secure as claimed and what can it offer the enterprise? - The Register