Category Archives: Quantum Physics

To build the quantum internet, UChicago engineer teaches atoms how to remember – UChicago News

When the quantum internet arrives, researchers predict it will shift the computing landscape on a scale unseen in decades. In their estimation, it will make hacking a thing of the past. It will secure global power grids and voting systems. It will enable nearly limitless computing power and allow users to securely send information across vast distances.

But for Tian Zhong, assistant professor at the Pritzker School of Molecular Engineering (PME) at the University of Chicago, the most tantalizing benefits of the quantum internet have yet to be imagined.

Zhong is a quantum engineer working to create this new global network. In his mind, the full impact of the quantum internet may only be realized after it's been built. To understand his work and why the United States is spending $625 million on the new technology, it helps to consider the science behind it: quantum mechanics.

Quantum mechanics is a theory created to explain fundamental properties of matter, particularly on the subatomic scale. Its roots trace back to the late 19th and early 20th century, when scientists tried to explain the unusual nature of light, which behaves as both a wave and a particle. In the hundred years since then, physicists have learned a great deal, particularly concerning the strange behavior of subatomic particles.

They've learned, for example, that some subatomic particles can exist in two states at the same time, a principle called superposition. Another such principle is entanglement, in which the measured states of two particles remain perfectly correlated even when the particles are separated by hundreds of miles.
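A minimal sketch of both principles in amplitude terms (illustrative only, using plain NumPy rather than any quantum SDK): a qubit in equal superposition yields either outcome with probability 1/2, and sampling a Bell state shows the two qubits' measurement results always agreeing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Superposition: a single qubit in (|0> + |1>)/sqrt(2) gives equal
# probability of measuring 0 or 1.
plus = np.array([1, 1]) / np.sqrt(2)
p0 = abs(plus[0]) ** 2            # probability of outcome 0 -> 0.5

# Entanglement: the Bell state (|00> + |11>)/sqrt(2). Each qubit's
# outcome is random on its own, but the pair is perfectly correlated.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)    # amplitudes for |00> and |11>
probs = np.abs(bell) ** 2             # joint outcome probabilities
outcomes = rng.choice(4, size=1000, p=probs)
a, b = outcomes // 2, outcomes % 2    # each qubit's measured bit
assert np.all(a == b)                 # the two results always agree
```

Note that the correlation appears only when the two measurement records are compared, which is why entanglement does not by itself transmit information.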

Over time, scientists have found ways to manipulate those principles, entangling particles at will or controlling an electron's spin. That new control allows researchers to encode, send, and process information using subatomic particles, laying the foundations of quantum computing and the quantum internet.

At the moment, both technologies are still hampered by certain physical limitations (quantum computers, for example, need to be kept in giant sub-zero freezers), but researchers like Zhong are optimistic those limitations will be resolved in the near future.

"We're at a juncture where this is no longer science fiction," Zhong said. "More and more, it's looking like this technology will emerge from laboratories any day, ready to be adopted by society."

Zhong's research focuses on the hardware needed to make the quantum internet a reality: quantum chips that encrypt and decrypt quantum information, and quantum repeaters that relay information across network lines. To create that hardware, Zhong and his team work on the subatomic scale, using individual atoms to hold information and single photons to transmit it through fiber-optic cables.

Zhong's current work centers on fighting quantum decoherence, the process by which information stored in a quantum system degrades until it is no longer retrievable. Decoherence is an especially difficult obstacle to overcome because quantum states are extremely sensitive: any outside disturbance (heat, light, radiation, or vibration) can easily destroy them.

Most researchers address decoherence by keeping quantum computers at a temperature near absolute zero. But the instant any quantum state is transmitted outside the freezer, say on a network line, it begins to break down within a few microseconds, severely limiting the potential for expansive interconnectivity.
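The microsecond lifetime can be pictured with a toy exponential-decoherence model (purely illustrative; the coherence time T2 below is an assumed value on the timescale the article quotes, not a figure for any real device):

```python
import numpy as np

# Toy model: coherence decays as exp(-t/T2). T2 here is an assumed
# few-microsecond value, not a measurement of any specific hardware.
T2 = 3e-6                              # assumed coherence time, seconds
t = np.array([0.0, 1e-6, 5e-6, 20e-6])  # elapsed time since transmission
coherence = np.exp(-t / T2)            # fraction of phase information left

# After ~20 microseconds the state retains well under 1% of its
# coherence, which is why an unprotected quantum state is effectively
# lost shortly after leaving the cryostat.
```

On this model, whatever the exact T2, usable network links need either very fast links or quantum repeaters that refresh the state before it decays.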


Einstein's theory holds up after 16-year test – Cosmos Magazine

An international team have put Einstein's theory through the wringer with the help of a double pulsar system and 16 years of rigorous testing.

Using telescopes from around the world, including CSIRO's Murriyang radio telescope at Parkes in Australia, they found that Einstein's general theory of relativity, originally published in 1915, still holds true today.

But what is general relativity anyway? In short, it's the description of gravity used in modern physics, which treats gravity as a geometric property of four-dimensional spacetime. However, while it is the simplest theory consistent with experimental data at astronomical scales, general relativity cannot be reconciled with the laws of quantum physics at the very smallest scales of our universe.

According to research team member Dr Dick Manchester, a fellow at CSIRO, Australia's national science agency, this result helps to further refine our understanding of the universe.

"The theory of general relativity describes how gravity works at large scales in the universe, but it breaks down at the atomic scale, where quantum mechanics reigns supreme," says Manchester.

"We needed to find ways of testing Einstein's theory at an intermediate scale, to see if it still holds true. Fortunately, just the right cosmic laboratory, known as the double pulsar, was found using the Parkes telescope in 2003."

"Our observations of the double pulsar over the past 16 years proved to be amazingly consistent with Einstein's general theory of relativity, within 99.99 per cent to be precise," he says.

But how did they do it?

The first binary pulsar system, PSR B1913+16, was identified in 1975, but both it and those found subsequently comprised a pulsar and a star orbiting each other. The double pulsar system PSR J0737-3039A/B was spotted in 2003 and remains the only system yet found that contains two pulsars in a binary orbit, which offers a rare opportunity to test general relativity.

A double pulsar system acts kind of like a clock with a ticking second hand. The two orbiting pulsars, dense neutron stars, create very strong gravitational fields and emit radio waves at a regular time interval (the second hand, in this analogy). They also have very stable and very fast rotation rates.


The stars in the double pulsar system complete an orbit every 2.5 hours, with one pulsar rotating 45 times each second while the other spins just 2.8 times per second.

The two pulsars are predicted to collide in 85 million years' time: according to general relativity, the extreme accelerations in the system strain the fabric of space-time and send out ripples that carry energy away, slowly drawing the pair together. But on such a long timescale this energy loss is difficult to detect.

Fortunately, the clock-like ticks of the radio waves coming from the spinning pulsars are perfect tools for tracing these tiny changes. Because the pulsars' rotation is so stable, minuscule variations in the arrival times of those ticks can be measured and used to test gravitational theories.

Research team member Associate Professor Adam Deller, from Swinburne University of Technology and the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), explains that these ticks take around 2400 years to reach Earth.

"We modelled the precise arrival times of more than 20 billion of these clock ticks over 16 years," says Deller.

"That still wasn't enough to tell us how far away the stars are, and we needed to know that to test general relativity."
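The 20-billion-tick figure is consistent with the spin rate quoted earlier; a quick back-of-envelope check, assuming the count is dominated by the faster pulsar's roughly 45 rotations per second over the 16-year campaign:

```python
# Back-of-envelope check of the "more than 20 billion ticks" figure,
# using the ~45 rotations per second quoted for the faster pulsar.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year in seconds
spin_hz = 45                            # pulses per second (faster pulsar)
years = 16                              # length of the observing campaign
ticks = spin_hz * years * SECONDS_PER_YEAR
print(f"{ticks:.2e}")                   # prints 2.27e+10, i.e. over 20 billion
```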

By adding in data from the Very Long Baseline Array, a network of telescopes spread across the globe, the research team was able to spot a tiny wobble in the stars' positions every year, which could be used to determine their distance from Earth.

Perhaps to the disappointment of the researchers, the end result showed that Einstein's theory held: the results were 99.99% in accordance with the predictions of general relativity.

The double pulsar system remains a unique tool for testing gravitational theories, and the team plans to continue using it to poke at Einstein's theory.

"We'll be back in the future using new radio telescopes and new data analysis, hoping to spot a weakness in general relativity that will lead us to an even better gravitational theory," says Deller.


We asked a physicist whether The Witcher’s multiverse could really exist – The Verge

Those who plan on watching the second season of Netflix's The Witcher can look forward to plenty of epic monster battles, character development, and Henry Cavill staring broodingly into the middle distance. But season 2 also reveals a lot about the broader world that The Witcher takes place in (or, more accurately, the many worlds).

Specifically, this darker and more serious chapter in the epic fantasy saga zooms in on a seminal event in the Witcher lore known as the conjunction of the spheres. During the conjunction, which took place approximately 1,500 years before the events of the show, a bunch of different spheres of reality collided with one another, causing elves, dwarves, humans, and monsters to all get mixed up together on the same continent, much to their mutual discontent.

While this cosmic collision is pure fantasy, there is a potentially scientific idea at its core: some physicists have proposed that our universe may really be just one in a much grander multiverse of realities. If that's true, it may even be possible for different universes to interact to some extent. These ideas are wildly controversial, with one camp of physicists arguing that the multiverse is more a matter of philosophy or religion than a fruitful terrain for scientific inquiry. Others say that since we can't rule out the existence of a multiverse, there's no harm in speculating about its nature.

With season 2 of The Witcher dropping on Netflix today, it felt like an apt time for some rampant speculation. To keep things as scientifically grounded as possible, The Verge chatted with University of California, San Diego cosmologist Brian Keating about some of the most mind-bending multiverse ideas physicists have proposed, where pop culture stretches these ideas beyond recognition, and the cosmic horizons we may never see past.

This conversation has been edited for length and clarity.

The Witcher is not alone in popularizing the idea of the multiverse. It's a big theme in the Marvel Cinematic Universe now, and it's in Star Trek. Some physicists would say the multiverse is nothing but science fiction. But could you tell us a bit about why others think it might really exist?

Yeah. So the multiverse is kind of a natural extrapolation of what we call the Copernican principle, which is that where we are and who we are and what we are shouldn't be significant or aberrant. It should be kind of representative of the properties throughout all of reality. And just as there are many planets, many stars, many galaxies, and many clusters of galaxies, so, too, the logic would have one believe there is no reason to suspect that there should be only one universe. In fact, one of the foremost proponents of the multiverse paradigm, Andrei Linde, who's a professor at Stanford, claims that we shouldn't be biased in favor of a single universe. That we should, in fact, start from this [idea] that there probably is a multiverse. And the notion has been extended by other people to really encompass all the different types of possibilities for the existence of more than one universe: a universe characterized by its own laws of physics, constants of nature, intelligent or conscious beings, and so forth.

The multiverse comes directly as a consequence of two very different branches of physics. One is cosmology, in particular what's called inflation, the theory of the ultra-high-energy origin of the expansion of space and time that would later become our observable universe. The other is string theory, which predicts a sort of landscape of possible values for different fundamental constants and forces. So these two different fields, which aren't really associated with one another, both imply that there is the possibility of a multiverse. And as yet, there is vast disagreement as to whether or not the multiverse actually is part of physics, or if it's pure philosophy. And if it's part of physics, how could one go about testing it or even falsifying its existence?

So to be clear, we have no direct evidence for the existence of a multiverse.

So the question is whether or not it's even in principle possible to provide evidence that supports a multiverse. And if such evidence can't be found, is it possible to rule out the existence of the multiverse? Because you might be living in a multiverse, but then you might not be able to detect that you're living in a multiverse, the same way that bacteria in a petri dish can't detect that they live inside of a laboratory inside of a building inside of a planet. It's too remote from the sphere of reality that they have access to.

Now, there are people who propose there are ways to measure the possibility that we are in a multiverse. One particular signature would be looking for an impact, or a collision with another universe, that would produce an observational pattern in the oldest light in the universe, called the cosmic microwave background radiation, which is what I study. And that theory, or conjecture, is pretty wildly contested. It's not at all clear if you could categorically detect and therefore motivate the existence of the multiverse.

But there's no doubt in any physicist's mind that there are regions of our universe which are unobservable to us. And in that sense, you know, we already believe in a sort of multiverse. But then, extending and adding new features onto that multiverse, from inflation or from string theory, that's where things get very controversial.

As I said a moment ago, we've seen a lot of depictions of the idea of the multiverse in pop culture. I think what makes The Witcher stand out a little bit is that it's not just positing there are all these different universes out there in their own separate bubbles, but that universes have collided with each other. There's a historical, cataclysmic collision that sort of sets the stage for the events of the series. In the context of a multiverse, do physicists have any ideas as to whether, or how, different universes might interact?

First of all, physicists aren't in agreement that the multiverse is a serious scientific paradigm worthy of discussion. A lot of people believe it's not. On the other hand, if you do take it seriously, then you can ask questions about it. But then it's not clear whether or not there's any evidence, or set of evidence, that could prove it wrong. Because you could say, well, we thought this was evidence for the multiverse. But actually, in the multiverse, since anything is possible, you can get any range of predictions that you want. And so it's kind of unsatisfying. It's like eating cosmic Wonder Bread.

In the context where these universes collide, it could be just as light travels at a finite speed: the Sun could disappear right now, and we wouldn't see it for eight minutes. So it's not possible to say something is ruled out just by not seeing it.
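The eight-minute figure is just distance divided by the speed of light; a quick check with standard constants:

```python
# Light-travel-time arithmetic behind "we wouldn't see it for eight
# minutes": the Earth-Sun distance divided by the speed of light.
C = 299_792_458            # speed of light, m/s
AU = 1.495978707e11        # mean Earth-Sun distance, m
delay_min = AU / C / 60    # one-way light delay in minutes
print(round(delay_min, 1)) # prints 8.3
```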

So there could be a multiverse. It could be one light-year away from us, in a certain sense, in which case next year we'll see it. It could be 10^50 light-years away from us, in which case we'll never see it. So it could turn out, yes, tomorrow we impact a universe that's one light-year away from us. But the thing I would gently push back on is that the notion of a collision nucleating some vast explosion is not at all clear.

For example, we know for sure we will eventually collide with the Andromeda galaxy, which is our nearest neighbor galaxy. It's almost like a twin sister of ours, with almost the same number of stars, hundreds of billions of stars. It's even more massive than ours, and it's one of the few galaxies we're moving towards rather than expanding away from, according to Hubble. That galaxy will someday crash into our galaxy, but it's not like every single point will collide and each star will hit another star. In fact, they'll mostly pass right through each other. So if a galaxy, which is billions of times denser than the universe on average, can pass right through another galaxy, all the more so a universe could pass through another universe in a certain sense.

So I think it's artistic license to suggest that that could nucleate some fireworks. But I admit it's pretty cute.

Could there be any sorts of interactions between different universes?

Yeah, in fact, one of the ways you might see the impact of a universe adjacent to ours is that it might have a gravitational force that deflects the light traveling in our universe. But all of this would be taking place at the boundary of what we can see just today. In other words, it wouldn't be happening to us. It would be happening 45 billion light-years away from us, and we would just be seeing it now. [Editor's note: 45 billion light-years is the approximate radius of the observable universe.] Unless you're talking about some interdimensional wormhole between different universes, and that's incredibly speculative.

And some of the problems with these physical phenomena when applied to science fiction, like wormholes and other things, is that they're barely at the level of speculation. They're completely removed from testability in laboratory settings. They're mathematical possibilities. But as I always say, mathematics allows the possibility of infinity. You know, just divide one by zero. But there is nothing that we know about in the universe that's infinite. Nothing that has infinite temperature, density, pressure, energy, etc. So just because something is mathematically possible doesn't mean it has any physical relevance. So I don't want to be a downer. But the reality is, yes, it is possible to witness the effects of another universe interacting with ours. But it would be occurring not here, but a very, very distant there.

So it sounds like a physically plausible story about the multiverse would not have a lot of cool stuff to look at.

Well, yeah, it's like saying a black hole or a wormhole is possible. Of course, we measure black holes, but we don't measure any near us, right? There's not one that we can kind of play with and jump into and then pop out of, you know, in the Andromeda galaxy, even, let alone in another universe.

And by the way, if the laws of physics change from universe to universe, it's not at all clear that the laws of mathematics, or the laws of logic, would be forbidden from changing. In other words, you get into a wormhole in our universe. You pop out in another universe. Well, the laws of wormholes are based on the laws of black holes, which are a consequence of general relativity, which is a consequence of partial differential equations, which is a consequence of calculus, which is a consequence of real numbers. And who knows if there's such a thing as real numbers in another universe? As the old joke goes, an old fish swims by two young fish and says to them, "How's the water?" And they say, "What's water?" They have no concept of it. It's so alien to their existence that they can't even contemplate it. And there's no reason to be chauvinistic, to think another universe would be like ours.

I feel like we see that idea represented at least a little bit allegorically in science fiction, and this is true in The Witcher as well: when beings move from universe to universe, they often can't survive in the other universe for an extended period of time because it's fundamentally so different.

In my first book, Losing the Nobel Prize, I made this kind of analogy, which I called the petriverse. So imagine there's some bacterium and it's in a petri dish and it starts making a colony. That bacterium, if it was very smart, could realize that there's a possibility for another colony really far away from it to exist, because it has the agar gel and it has gravity and sunlight and whatever. It could deduce that there is a possibility for another universe in the petriverse. And actually, some of these other colonies, when they do form, even though they are only a couple of centimeters away, produce toxins that prevent other bacteria from invading their space. So it's like a barrier that makes it inhospitable and hostile to hopping between universes, just like what you described.

What sorts of advances in physics could we make in the coming decades that might shed light on this question of whether the multiverse is real?

I think for the field that I'm studying, which is the cosmic microwave background, the key observable, and what we're trying to discover, is unequivocal evidence that inflation took place. And if inflation took place, that would come concomitantly with the multiverse, in most physicists' anticipation. They go together as a direct consequence. If you discover these waves of gravity embedded in the cosmic microwave background, then you would get a very strong piece of evidence that would seem to mandate that the multiverse exists. [Editor's note: Keating later clarified that this would be perhaps the strongest circumstantial evidence possible for a multiverse.] On the other hand, it may be that inflation took place but was too weak to produce observable gravitational waves, in which case you might need to wait for a future version of the LIGO [Laser Interferometer Gravitational-Wave Observatory] experiment in space, called LISA. And that could potentially take us back and show us evidence of the fundamental origination from, perhaps, the surrounding multiverse.

And I should point out there are other ways that multiverses [could exist]. There's a quantum mechanical version of the multiverse called the many-worlds interpretation. And that's that at every possible moment of time, every possible choice, every possible observable, is instantiated. But we only observe one particular outcome for each observation because we're sort of coherently oscillating with those quantum mechanical wave functions, and therefore we can observe them. Those are kind of parallel universes going on right now. So if I turn my head to the right or the left, there's a whole universe where Brian turned his head to the left. So that's a version of the multiverse. There's also a version of the multiverse where the universe is cyclical in a certain sense: it's coming into existence, it's collapsing out of existence, it's reemerging and growing, and then that universe collapses again. So that's kind of a temporal multiverse. And those kinds of models have been around since antiquity.

I would say it's hard to find a model of cosmology that doesn't have some version of a multiverse in it, whether that's temporal or spatial, or spatial and temporal, or quantum mechanical. So there are hopes that one could get some confidence from measuring aspects of quantum mechanics. And then there are the cosmic and gravitational wave experiments that I do. And then, perhaps, string theory could come to make much more concrete predictions. So I think there are a lot more theoretical advances that need to be made, and a lot more experimental [advances]. But fundamentally, we may never be able to prove it wrong. In other words, you ruled out 10^499 different universes, but you didn't rule out this one. And these observations therefore become what's called unfalsifiable. In which case you can't prove that inflation's wrong, but you also can't prove that the alternatives are right. And in that case, all hope would be lost. You can't prove it using an experiment or evidence; you can only prove it on Twitter or something.

Sounds like the multiverse is going to continue to fuel physics beef for many years to come.

Yes. I always say, inflation for economists means one thing. But for us, the multiverse ensures full employment for cosmologists for years to come. And for science fiction.


Ten ways Fermilab advanced science and technology in 2021 – Fermi National Accelerator Laboratory

Researchers from more than 50 countries collaborate with the U.S. Department of Energy's Fermi National Accelerator Laboratory to develop state-of-the-art technologies and solve the mysteries of matter, energy, space and time.

Here is a look at 10 ways they advanced science and technology in 2021. Fermilab and its partners:

The long-awaited first results from the Muon g-2 experiment at Fermilab show fundamental particles called muons behaving in a way that is not predicted by scientists' best theory, the Standard Model of particle physics. The landmark result, made with unprecedented precision, confirms a discrepancy that has been gnawing at researchers for decades. It indicates that muons could be interacting with yet-undiscovered particles or forces.

The Deep Underground Neutrino Experiment is an international flagship experiment to unlock the mysteries of neutrinos. In May, construction crews started lowering equipment a mile underground and began the excavation of space for the South Dakota portion of the Long-Baseline Neutrino Facility. Scientists and engineers in the UK began the production and testing of components for the first large neutrino detector module to be installed in the facility. The DUNE collaboration also published the design of the near detector. A prototype component of the near detector traveled from the University of Bern to Fermilab for testing with the labs neutrino beam.

The PIP-II project team and its national and international partners are getting ready for the construction of the new, highly anticipated particle accelerator at Fermilab. Earlier this year, testing wrapped up at the PIP-II Injector Test Facility. The successful outcome paves the way for the construction of the 700-foot-long PIP-II accelerator, which will power record-breaking neutrino beams and drive a broad physics research program at Fermilab. Construction of the PIP-II Cryogenic Plant Building began in August 2020, and the structure of the building now is largely complete. The building will house utilities as well as cryogenic equipment for the new machine. Efforts to use machine learning for the operation of PIP-II and other Fermilab accelerators are underway as well.

In 2021, Fermilab scientists were co-authors of more than 600 scientific articles, advancing our understanding of energy, matter, space and time and the technologies that drive these discoveries. Top achievements include results from MicroBooNE (a neutrino experiment that looks for evidence of sterile neutrinos) and NOvA (which aims to decipher the neutrino mass ordering); the search for stealthy supersymmetry with the CMS experiment at the Large Hadron Collider; dozens of papers on the Dark Energy Survey, including the most precise measurements of the universe's composition and growth to date; the discovery of performance-limiting nanohydrides in superconducting qubits; and the world's fastest magnetic ramping rates for particle accelerator magnets, made with energy-efficient, high-temperature superconducting material.

In December 2020, the U.S. Department of Energy formally approved the full U.S. contribution to the high-luminosity upgrade of the Large Hadron Collider, or HL-LHC, at the European laboratory CERN. Led by Fermilab, collaborators will contribute 16 magnets to focus the LHC's near-light-speed particle beams to a tiny volume before colliding. They will also deliver eight superconducting cavities, radio-frequency devices designed to manipulate the powerful beams. (They will also provide four spare magnets and two spare cavities.) The new instruments will enable a 10-fold increase in the number of particle collisions at the future HL-LHC compared to the current LHC. Together with upgrades to the CMS detector, the accelerator upgrade will enable physicists to study particles such as the Higgs boson in greater detail. And the increase in the number of collisions could also uncover rare physics phenomena or signs of new physics.

Advances in particle physics and quantum information science are tightly connected. Together with their partners in industry and at collaborating institutions, Fermilab scientists and engineers used their expertise to advance quantum research. Examples include taking the first steps toward building a quantum computer, designing quantum sensors for dark matter research, developing quantum algorithms, implementing artificial intelligence on a microchip, and building and testing the components of a quantum internet. These are all part of the many facets of quantum science at Fermilab, which also include the construction of the MAGIS-100 experiment, which will use free-falling atoms to probe dark matter, gravity and quantum science.

In addition to the construction projects mentioned above, Fermilab and its collaborators worked on the assembly of the ICARUS detector and the Short-Baseline Near Detector, both part of the Short-Baseline Neutrino Program to investigate neutrino oscillations and look for new physics. ICARUS scientists saw the first particles in their neutrino detector earlier this year, and SBND scientists are working on the assembly of their liquid-argon detector. Scientists are also working on the magnets and detectors for the Mu2e experiment, which will look for the direct conversion of a muon into an electron. If observed, it would signal the existence of new particles or new forces of nature.

In collaboration with partners at other DOE national labs, Fermilab researchers designed, built and delivered superconducting accelerator cryomodules for upgrading the world's most powerful X-ray laser, the Linac Coherent Light Source at SLAC National Accelerator Laboratory. With these LCLS-II upgrades, biologists, chemists and physicists will be able to probe the evolution of systems at the molecular level, forming flipbooks of molecular processes using one million X-ray pulses every second. Another upgrade to provide higher-energy particle beams, called LCLS-II-HE, is underway to enable even more precise atomic X-ray mapping. Tests of a verification cryomodule built at Fermilab achieved records far beyond current cryomodule specifications and should result in a 30% improvement compared to LCLS-II.

Fermilab is committed to attracting, developing and retaining diverse talent and cultivating an inclusive work environment that supports scientific, technological and operational excellence. The new Carolyn B. Parker Fellowship, named for the first African-American woman to earn a postgraduate degree in physics, presents an opportunity for Black and African-American postdoctoral scholars to work for up to five years in the Superconducting Quantum Materials and Systems Center at Fermilab. The new Sylvester James Gates, Jr. Fellowship offers a five-year appointment in the Theory Division at Fermilab. It prioritizes the inclusion of first-generation college graduates and the representation of historically and contemporarily minoritized individuals underrepresented in theoretical physics. The new Accelerator Science Program to Increase Representation in Engineering, or ASPIRE, fellowship provides students with immersive learning experiences on world-leading particle accelerator projects at Fermilab. The fellowship is for undergraduate and graduate engineering students from underrepresented groups. To learn about other programs at Fermilab to increase diversity and inclusion in science, technology, engineering and math, visit our website.

Fermilab offered many online STEM education and outreach programs in 2021. Live programs included the virtual Family Open House, Ask a Scientist, the STEM career fair, the Saturday Morning Physics program and the Arts and Lecture at Home series, including virtual art exhibits. They attracted viewers from different communities and backgrounds around the world. Tens of thousands also have watched lectures, physics slams and other programs on the Fermilab YouTube channel. Almost 600,000 people are now subscribed to this channel, which is known for its popular science explainer videos featuring Fermilab scientist Don Lincoln. This year, the channel also launched the new Even Bananas series with Kirsty Duffy and other scientists, who explain the mysteries of the neutrino.

Fermi National Accelerator Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

See the original post:

Ten ways Fermilab advanced science and technology in 2021 - Fermi National Accelerator Laboratory

Quantum computing use cases are getting real–what you need to know – McKinsey

Accelerating advances in quantum computing are serving as powerful reminders that the technology is rapidly advancing toward commercial viability. In just the past few months, for example, a research center in Japan announced a breakthrough in entangling qubits (the basic unit of information in quantum computing, akin to bits in conventional computers) that could improve error correction in quantum systems and potentially make large-scale quantum computers possible. And one company in Australia has developed software that has been shown in experiments to improve the performance of any quantum-computing hardware.

As breakthroughs accelerate, investment dollars are pouring in, and quantum-computing start-ups are proliferating. Major technology companies continue to develop their quantum capabilities as well: companies such as Alibaba, Amazon, IBM, Google, and Microsoft have already launched commercial quantum-computing cloud services.

Of course, all this activity does not necessarily translate into commercial results. While quantum computing promises to help businesses solve problems that are beyond the reach and speed of conventional high-performance computers, use cases are largely experimental and hypothetical at this early stage. Indeed, experts are still debating the most foundational topics for the field (for more on these open questions, see the sidebar, "Debates in quantum computing").

Still, the activity suggests that chief information officers and other leaders who have been keeping an eye out for quantum-computing news can no longer be mere bystanders. Leaders should start to formulate their quantum-computing strategies, especially in industries, such as pharmaceuticals, that may reap the early benefits of commercial quantum computing. Change may come as early as 2030, as several companies predict they will launch usable quantum systems by that time.

To help leaders start planning, we conducted extensive research and interviewed 47 experts around the globe about quantum hardware, software, and applications; the emerging quantum-computing ecosystem; possible business use cases; and the most important drivers of the quantum-computing market. In the report "Quantum computing: An emerging ecosystem and industry use cases," we discuss the evolution of the quantum-computing industry and dive into the technology's possible commercial uses in pharmaceuticals, chemicals, automotive, and finance, fields that may derive significant value from quantum computing in the near term. We then outline a path forward and how industry decision makers can start their efforts in quantum computing.

An ecosystem that can sustain a quantum-computing industry has begun to unfold. Our research indicates that the value at stake for quantum-computing players is nearly $80 billion (not to be confused with the value that quantum-computing use cases could generate).

Because quantum computing is still a young field, the majority of funding for basic research in the area still comes from public sources (Exhibit 1).

Exhibit 1

However, private funding is increasing rapidly. In 2021 alone, announced investments in quantum-computing start-ups have surpassed $1.7 billion, more than double the amount raised in 2020 (Exhibit 2). We expect private funding to continue increasing significantly as quantum-computing commercialization gains traction.

Exhibit 2

Hardware is a significant bottleneck in the ecosystem. The challenge is both technical and structural. First, there is the matter of scaling the number of qubits in a quantum computer while achieving a sufficient level of qubit quality. Hardware also has a high barrier to entry because it requires a rare combination of capital, experience in experimental and theoretical quantum physics, and deep knowledge, especially domain knowledge of the relevant options for implementation.

Multiple quantum-computing hardware platforms are under development. The most important milestone will be the achievement of fully error-corrected, fault-tolerant quantum computing, without which a quantum computer cannot provide exact, mathematically accurate results (Exhibit 3).

Exhibit 3

Experts disagree on whether quantum computers can create significant business value before they are fully fault tolerant. However, many say that imperfect fault tolerance does not necessarily make quantum-computing systems unusable.

When might we reach fault tolerance? Most hardware players are hesitant to reveal their development road maps, but a few have publicly shared their plans. Five manufacturers have announced plans to have fault-tolerant quantum-computing hardware by 2030. If this timeline holds, the industry will likely establish a clear quantum advantage for many use cases by then.

The number of software-focused start-ups is increasing faster than any other segment of the quantum-computing value chain. In software, industry participants currently offer customized services and aim to develop turnkey services when the industry is more mature. As quantum-computing software continues to develop, organizations will be able to upgrade their software tools and eventually use fully quantum tools. In the meantime, quantum computing requires a new programming paradigm and software stack. To build communities of developers around their offerings, the larger industry participants often provide their software-development kits free of charge.

In the end, cloud-based quantum-computing services may become the most valuable part of the ecosystem and can create outsize rewards for those who control them. Most providers of cloud-computing services now offer access to quantum computers on their platforms, which allows potential users to experiment with the technology. Since personal or mobile quantum computing is unlikely this decade, the cloud may be the main way for early users to experience the technology until the larger ecosystem matures.

Most known use cases fit into four archetypes: quantum simulation, quantum linear algebra for AI and machine learning, quantum optimization and search, and quantum factorization. We describe these fully in the report, as well as outline questions leaders should consider as they evaluate potential use cases.

We focus on potential use cases in a few industries that research suggests could reap the greatest short-term benefits from the technology: pharmaceuticals, chemicals, automotive, and finance. Collectively (and conservatively), the value at stake for these industries could be between roughly $300 billion and $700 billion (Exhibit 4).

Exhibit 4

Quantum computing has the potential to revolutionize the research and development of molecular structures in the biopharmaceuticals industry, as well as provide value in production and further down the value chain. In R&D, for example, new drugs take an average of $2 billion and more than ten years to reach the market after discovery. Quantum computing could make R&D dramatically faster and more targeted and precise by making target identification, drug design, and toxicity testing less dependent on trial and error and therefore more efficient. A faster R&D timeline could get products to the right patients more quickly and more efficiently; in short, it would improve more patients' quality of life. Production, logistics, and supply chain could also benefit from quantum computing. While it is difficult to estimate how much revenue or patient impact such advances could create, in a $1.5 trillion industry with average margins in earnings before interest and taxes (EBIT) of 16 percent (by our calculations), even a 1 to 5 percent revenue increase would result in $15 billion to $75 billion of additional revenues and $2 billion to $12 billion in EBIT.
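The back-of-envelope arithmetic behind those ranges can be checked directly. The $1.5 trillion revenue base and 16 percent EBIT margin are the report's own estimates; the sketch below just reproduces the stated multiplication:

```python
industry_revenue = 1.5e12   # global biopharma revenue, per the report
ebit_margin = 0.16          # average EBIT margin cited above

for uplift in (0.01, 0.05):  # the 1 to 5 percent revenue-increase scenarios
    extra_revenue = industry_revenue * uplift
    extra_ebit = extra_revenue * ebit_margin
    print(f"{uplift:.0%}: +${extra_revenue / 1e9:.0f}B revenue, "
          f"+${extra_ebit / 1e9:.1f}B EBIT")
```

The 1 percent case yields about $15 billion in revenue and $2.4 billion in EBIT, and the 5 percent case about $75 billion and $12 billion, matching the quoted ranges.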

Quantum computing can improve R&D, production, and supply-chain optimization in chemicals. Consider that quantum computing can be used in production to improve catalyst designs. New and improved catalysts, for example, could enable energy savings on existing production processes (a single catalyst can produce up to 15 percent in efficiency gains), and innovative catalysts may enable the replacement of petrochemicals by more sustainable feedstocks or the breakdown of carbon for CO2 usage. In the context of the chemicals industry, which spends $800 billion on production every year (half of which relies on catalysis), a realistic 5 to 10 percent efficiency gain would mean a gain of $20 billion to $40 billion in value.

The automotive industry can benefit from quantum computing in its R&D, product design, supply-chain management, production, and mobility and traffic management. The technology could, for example, be applied to decrease manufacturing process-related costs and shorten cycle times by optimizing elements such as path planning in complex multirobot processes (the path a robot follows to complete a task), including welding, gluing, and painting. Even a 2 to 5 percent productivity gain, in the context of an industry that spends $500 billion per year on manufacturing costs, would create $10 billion to $25 billion of value per year.

Finally, quantum-computing use cases in finance are a bit further in the future, and the advantages of possible short-term uses are speculative. However, we believe that the most promising use cases of quantum computing in finance are in portfolio and risk management. For example, efficiently quantum-optimized loan portfolios that focus on collateral could allow lenders to improve their offerings, possibly lowering interest rates and freeing up capital. It is early, and complicated, to estimate the value potential of quantum-computing-enhanced collateral management, but as of 2021, the global lending market stands at $6.9 trillion, which suggests significant potential impact from quantum optimization.

In the meantime, business leaders in every sector should prepare for the maturation of quantum computing.

Until about 2030, we believe that quantum-computing use cases will have a hybrid operating model that is a cross between quantum and conventional high-performance computing. For example, conventional high-performance computers may benefit from quantum-inspired algorithms.

Beyond 2030, intense ongoing research by private companies and public institutions will remain vital to improve quantum hardware and enable more, and more complex, use cases. Six key factors (funding, accessibility, standardization, industry consortia, talent, and digital infrastructure) will determine the technology's path to commercialization.

Leaders outside the quantum-computing industry can take five concrete steps to prepare for the maturation of quantum computing:

Leaders in every industry have an uncommon opportunity to stay alert to a generation-defining technology. Strategic insights and soaring business value could be the prize.

Here is the original post:

Quantum computing use cases are getting real--what you need to know - McKinsey

Thank You, Best Wishes and Happy Holidays | Office of the Chancellor – University of Nebraska–Lincoln

To our students, faculty and staff,

I want to thank you from the bottom of my heart for the incredible work you have done and the persistence you have shown to get to this point, the eve of finals week for fall of 2021. You continue to help us navigate this long-lasting pandemic and what have been especially difficult times, all the while keeping us focused on our mission.

As we approach the winter holidays, I am filled with gratitude for every Husker in our university community. Your work has led to big things in 2021.

To name just a few:

And we are poised to do even more in 2022. It is all because of our people and our commitment to each other, where every person and every interaction matters.

So, as we approach finals week, I want to wish you the very best! Enjoy a free cup of coffee in the union, study hard, work hard on those final projects, but get as much rest as you can and pace yourself. I know you will succeed.

That's what Huskers do.

And when this week is over and grades are completed, and graduation is ahead, I encourage you to find ways to wind down and take care of yourselves. I hope you enjoy a peaceful and joyful holiday season.

Happy Holidays, and Go Big Red!

See the original post:

Thank You, Best Wishes and Happy Holidays | Office of the Chancellor - University of NebraskaLincoln

Five Undeniable Scientific Proofs That Santa Is Definitely Real – IFLScience

With Christmas comes a most welcome visitor: Santa Claus, Father Christmas, Saint Nicholas, call him what you will, he only works one night a year, but boy does he deliver. And how do we thank him? We deny his existence! Call him things like a "holiday folk myth" or "literally impossible given the laws of physics"!

Well, we at IFLScience have had enough of this disrespect. Santa is real, and we have the receipts. Here are five arguments, from all branches of science and philosophy, that prove that Santa Claus really is Coming To Town.

Logic

Not only is it easy to prove that Santa exists, it's also quick; in fact, we're going to do it in two sentences. Ready?

1. Everything in this list is false;

2. Santa exists.

From those two statements, it follows that Santa is real.

Let us explain: either statement 1 is true, or it's false. If it's true, then everything in the list is false, which means statement 1 is false. But this is a contradiction: we started by assuming that statement 1 is true. Clearly, this is nonsense: a statement can't be true and false. The only option is that statement 1 isn't true at all.

But if statement 1 is false, that means that at least something in the list is true. We know that statement 1 is false, so the only remaining option is for statement 2 to be true: Santa exists, QED.
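For the truly persnickety, that case analysis can even be brute-forced. This little sketch (ours, not part of the original argument) tries every True/False assignment to the two statements and keeps only the self-consistent ones, where statement 1's truth value matches what it asserts:

```python
from itertools import product

consistent = []
for s1, s2 in product([True, False], repeat=2):
    # Statement 1 asserts that every statement in the list is false.
    what_s1_asserts = (not s1) and (not s2)
    if s1 == what_s1_asserts:
        consistent.append((s1, s2))

print(consistent)  # [(False, True)]: statement 1 false, "Santa exists" true
```

The only consistent assignment makes statement 1 false and statement 2 ("Santa exists") true, exactly as argued above.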

(While this proof is obviously water-tight, the more persnickety among you might want to look up the Liar Paradox to understand why other arguments similar to this one aren't as convincing as they initially seem: basically, the first statement is self-referential.

"Mathematically speaking, all this is a bit naughty," wrote mathematician Hannah Fry in her 2017 book The Indisputable Existence of Santa Claus. Self-referential statements like these don't actually have to be true or false, which resolves the paradox.

The liar paradox isn't just a handy way to prove the haters wrong on Santa; it has some pretty astonishing philosophical repercussions too. Using some incredibly abstract mathematics, in the 1930s the logician Alfred Tarski tried to find a definition of truth that would be, as he put it, "adequate" (typically understated mathematician-speak for "irrefutable"). Instead, thanks to the liar paradox, he accidentally proved what is now called Tarski's theorem on the undefinability of truth, which is, you imagine, the one thing he didn't want to happen.)

Quantum Physics

Of course, despite all these cast-iron proofs, some scientifically uninformed individuals still find reasons to doubt the existence of Santa. One of the most common reasons given for this apostasy is, on the face of it, fairly convincing: how, people ask, could anybody deliver all those presents in one night without being seen or heard?

Little do these doubters realize, however, that science has long known the answer to Santa's apparently super-human courier skills: it's simple quantum physics.

Let us explain: we already know a lot about how Santa would need to travel on Christmas Eve to get the presents to every child who expects them.

Assuming he has the good sense to travel East to West, we know not just his direction but his speed: eight hours of nighttime spread over 24 time zones gives 31 hours to complete the job. From census data, we can estimate the number of children he has to get to: around 850 million.
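The time budget in that estimate is easy to reproduce: eight hours of darkness, swept westward across 24 time zones, buys 8 + 24 − 1 = 31 hours, and dividing the children by that window gives the required delivery rate. (The speed figure quoted next also depends on assumptions about household spacing, which we don't reproduce here.)

```python
children = 850_000_000        # census-based estimate from the article
hours = 8 + 24 - 1            # 8 h of night, regained across 24 time zones
seconds = hours * 3600

stops_per_second = children / seconds
print(hours, round(stops_per_second))  # 31 hours, about 7,616 children/s
```

That's more than seven thousand deliveries every second, before accounting for travel between rooftops.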

That gives Santa a minimum speed of around 300,000 kilometers per second, according to science author Roger Highfield, which is pretty fast. Actually, it's incredibly fast: at more than 6,000 times the speed of sound, Santa would be slammed back in his sleigh by forces more than 17,500 times stronger than gravity, and as for Rudolph, well, according to one calculation, at those speeds the friction from the atmosphere would vaporize Santa's faithful friends in less than one two-hundredth of a second, taking out more than 214,000 reindeer before the night's work was complete.

We know, that sounds pretty grim, and hardly festive. Luckily, though, those figures contradict the facts on record, so clearly classical mechanics doesn't hold the answer.

But with quantum mechanics, everything falls into place.

"In quantum mechanics, the Heisenberg uncertainty principle tells us that if we know one variable well, we cannot know the other one exactly," explained high-energy physicist Daniel Tapia Takaki to the BBC. "We know what speed Santa will be travelling, but not his position."

Thanks to quantum physics' inherent, unavoidable weirdness, visiting every home in one night may be possible, Tapia Takaki explained; it just requires that Santa be a superposition of quantum states, in other words, a collection of Santas diffused all across the planet.

Once we know Santa obeys quantum rules rather than classical ones, it makes a lot more sense why we've never seen him in person. If a child happened to spot him on his mission, Tapia Takaki said, the uncertainty principle would no longer apply.

"You would know his exact position," he explained, "which would cause the quantum state to collapse and no more presents could be distributed."

Cosmology

Fine, so you haven't been convinced so far? Well, this proof is cast-iron. We know Santa exists for one simple reason: we can see him.

Granted, he's not exactly how the festive ads show him: for one thing, he's a couple hundred trillion square kilometers big. Also, he's something like two million degrees in temperature, which at least explains why he spends so much time hanging out at the North Pole: the man just needs to cool down.

On that note: those cottages at the North Pole and Lapland must be vacation homes, because it turns out Santa's natural habitat is actually in the southwest corner of the Orion Nebula.

See him there? With his little hat on?

What you're looking at is a massive cloud of incredibly hot gas that was formed after the wind from a star forty times the mass of our own sun smashed violently into the dense gas that surrounded it. It was discovered in 2007, less than a month before Christmas, an early present for astronomers, as the press release from the European Space Agency said at the time.

The Orion Nebula isn't the only festive part of the night sky. Santa's shadow can be seen in the Tarantula Nebula (creepy crawlies need presents too, we suppose):

This is clearly the great man's face here in the nebula IC 2118 (yes, we know it's technically known as the Witch Head Nebula), but look at that gigantic space face and tell us that's not a wispy beard on the chin.

Archaeology

You know, Santa wasn't always Santa. He used to be just a regular Joe Schmo from a town which is now called Demre but used to be called Myra, in what is now Turkey.

He may have been born around 1,700 years ago, but we have more than just centuries-old stories to support his existence. Thanks to the morbid traditions of the Orthodox and Catholic churches, there are quite a few bits of bodies around the world that people claim to have come from the real-life Saint Nick, but one of them, a piece of pelvis found in a Catholic church in Illinois, may be the real deal.

"Many relics that we study turn out to date to a period somewhat later than the historic attestation would suggest," said archaeological scientist Tom Higham back in 2017. "This bone fragment, in contrast, suggests that we could possibly be looking at remains from St Nicholas himself."

Now, you might point out that having his skeleton living in a church on the outskirts of Chicago is more an argument against Santa's existence, but consider this: nearly half a million hip replacements are performed every year in the United States alone, and most are on people aged 60 plus. Saint Nick, according to tradition, is over 1,750 years old, so it's not surprising in the least that the old fella might have had a bit of pelvis removed at some point.

Philosophy

Okay, so you still don't believe us. That's fine. Let's consider the alternative.

If Santa doesn't exist, that means there's a huge conspiracy that's being willfully upheld by billions of people across the globe. Parents lying to their children; hundreds of movies being made about the same imaginary man; heck, even NORAD is engaged in this gigantic lie surrounding a jolly fat man who gives presents on Christmas. Which, when you put it like that, isn't even that unbelievable a premise.

And to what end? All good conspiracies have an end goal: the CIA didn't pretend vampires were real just for fun, after all, they did it to stop the commies. What would be the point of postal workers across the world accepting mail to a mythical person (and sometimes even delivering replies); what gain would researchers and news organizations get for contravening their scientific and journalistic ethics every year?

This is where the philosophical principle known as Ockham's Razor comes into play. In simple terms, this is the idea that we shouldn't make things more complicated than they need to be to explain something. For instance: you flip the light switch, and the light turns on. What's more likely to be true: that you flipping the switch turned it on, or that you flipping the switch set off a small alarm inside the wall, waking up a dormouse who runs up to the ceiling and opens a tiny chemistry lab, dons a tiny white coat, and starts mixing luminol with various substances, which he then funnels down into the bulb in the light, thus illuminating the room?

So with that in mind, we ask you: what's more likely? That the whole world is engaged in a deception?

Or that Santa is, as we promised, real?

See the original post here:

Five Undeniable Scientific Proofs That Santa Is Definitely Real - IFLScience

Pushing the Limits of Quantum Sensing with Variational Quantum Circuits – Physics

December 6, 2021 • Physics 14, 172

Variational quantum algorithms could help researchers improve the performance of optical atomic clocks and of other quantum-metrology schemes.

D. Vasilyev/University of Innsbruck

Since it was first introduced in 1949, Ramsey interferometry has had an exciting history. The method was at the center of a series of beautiful experiments performed by Serge Haroche's group that were recognized by the 2012 Nobel Prize in Physics [1, 2]. The prize was given for methods that enable the measurement and manipulation of individual quantum systems. Haroche's group used individual atoms to sense the properties of photons inside an optical cavity. Building on these ideas, researchers have reported a new theoretical study that points at a promising way to push the limits of quantum sensing. Raphael Kaubruegger at the University of Innsbruck, Austria, and his colleagues employ so-called variational quantum circuits to optimize the sensitivity of an atomic sensor based on entangled atoms [4]. The result is a sensor that, with surprisingly modest quantum resources, should outperform those based on standard Ramsey interferometry.

We often think of photons as probes to study atoms, but Ramsey interferometry flips the script and uses atoms to study photons. This type of interferometry first puts an atom in a superposition of electronic energy levels and then passes the atom through an optical cavity. As a result, the quantum superposition accumulates a measurable phase shift that depends on the properties of the photons in the cavity. The experiments by Haroches group involved passing atoms through an optical cavity one at a time in order to nondestructively detect the number of photons. More photons in the cavity lead to a larger phase shift in the atomic wave function. In such experiments, each atom can be regarded as an individual entity. In other words, each atom is prepared in an uncorrelated product statea state that can be described independently of every other atoms state.

Kaubruegger and colleagues propose to go a step further by entangling 64 atoms and using them to make an even better sensor for Ramsey interferometry. They demonstrate the effectiveness of their approach by considering an optical atomic clock, in which Ramsey-interferometry measurements of the atomic ensemble's phase are used to correct the clock's laser frequency (Fig. 1). Like Haroche's group, the researchers manipulate a single quantum system, but one made of 64 atoms. Rather than using atoms in the product state, they propose to prepare these atoms in an entangled state, in which each atom's state cannot be fully described independently of the other atoms. They show that performing Ramsey interferometry using entangled states gives a big boost to the sensitivity of the phase sensor, beating the standard quantum limit that applies when sensing using uncorrelated atoms.

Their proposal harnesses a key innovation to prepare the entangled state. Entangled atomic sensors have been employed before, and a standard approach involves using so-called Greenberger-Horne-Zeilinger (GHZ) states. Kaubruegger and colleagues note that these states are only optimal for sensing under certain assumptions regarding prior knowledge of the phase-shift value. This limitation opened the door for the researchers to improve upon and outperform GHZ states by taking advantage of one of today's hottest concepts in quantum computing: variational quantum circuits. These circuits, which have a set of free parameters, replace the fixed quantum circuits used to implement quantum algorithms such as Shor's algorithm for factoring or the Harrow-Hassidim-Lloyd algorithm for solving linear systems. Variational quantum circuits have internal parameters (such as rotation angles about certain Bloch sphere axes) that one optimizes over to perform a given task. Kaubruegger and colleagues propose to use two sets of variational quantum circuits to prepare the entangled state for sensing and to measure the parameter that they want to sense (that is, the optical phase). They call these circuits the entangling and decoding circuits, respectively (Fig. 2).

Achieving good performance with variational quantum circuits is challenging, since the parameters can be hard to optimize and one does not know ahead of time how deep a circuit one needs, that is, how many quantum gates are required. Kaubruegger and colleagues find that excellent performance can be achieved with shallow circuits composed using the quantum resources inherently available in Ramsey interferometry and atomic-clock platforms. With only a few layers of their quantum circuits, they not only beat the standard quantum limit (which applies to measurements made using uncorrelated atoms) but also get very close to the Heisenberg limit, the ultimate limit for the sensitivity that one can achieve with a quantum system and, therefore, the ultimate limit of a quantum sensor. Here, a layer refers to the building block of the variational quantum circuit: more layers are needed to do a more comprehensive search over the Hilbert space, whereas fewer layers can only search over a smaller subspace. The fact that good performance requires only a few layers suggests that states that are beneficial to quantum metrology are relatively easy to find. This is an exciting possibility that should stimulate more investigation.
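The variational idea can be sketched in miniature with a single, unentangled qubit. This toy model (ours, far simpler than the paper's 64-atom entangled protocol) follows the same optimize-then-sense logic: a Ramsey sequence maps an unknown phase phi to an excitation probability, and a variational "decoding" rotation theta is tuned so that the probability is maximally sensitive to phi at the operating point:

```python
import numpy as np

def ramsey_probability(phi, theta):
    # Excitation probability after a Ramsey sequence with an extra
    # variational readout rotation theta (single-qubit toy model).
    return np.cos((phi + theta) / 2) ** 2

phi0 = 0.3                                 # assumed operating point
thetas = np.linspace(-np.pi, np.pi, 2001)  # grid over the variational parameter

# Sensitivity is |dP/dphi|, estimated numerically the way a
# variational optimization loop would probe it.
eps = 1e-6
slopes = np.abs(ramsey_probability(phi0 + eps, thetas)
                - ramsey_probability(phi0 - eps, thetas)) / (2 * eps)
best_theta = thetas[np.argmax(slopes)]
print(best_theta)
```

The optimum lands where phi0 + theta is close to plus or minus pi/2, i.e. on the steepest slope of the Ramsey fringe, recovering the textbook operating point; in the paper this same search runs over the many parameters of the entangling and decoding circuits.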

This new work is important because it brings together two different communities: the quantum sensing community and the variational quantum algorithm community. While variational quantum algorithms are getting major attention for quantum computing applications, it is rare for them to appear in an atomic experimental setting or in a sensing setting. The beautiful observation that variational algorithms could work in a realistic sensing application should inspire many experimentalists to think about optimizing their setups with variational quantum circuits, regardless of whether they involve atoms, light, spins, or superconductors. We need cross fertilization between quantum experimentalists and quantum computer scientists, and this work gives an inspiring guide for how such cross fertilization can be brought about.

Patrick Coles is a staff scientist at Los Alamos National Laboratory (LANL), New Mexico. He leads the near-term quantum computing research efforts at LANL, focusing on variational quantum algorithms and quantum machine learning. He also co-organizes LANL's quantum computing summer school. He has switched fields many times: He received his master's degree in biochemistry from the University of Cambridge, UK, as a Churchill Scholar and then did his Ph.D. in chemical engineering at the University of California, Berkeley. In contrast, his three postdocs (at Carnegie Mellon University, Pennsylvania; the National University of Singapore; and the University of Waterloo, Canada) were focused on all things quantum, including quantum foundations, quantum optics, quantum information theory, quantum cryptography, and (his current field) quantum computing.

More:

Pushing the Limits of Quantum Sensing with Variational Quantum Circuits - Physics

Here’s how the universe could end in a ‘false vacuum decay’ – Space.com

This is the way the world ends: not with a bang, but with a quantum vacuum decay of the ground state of the universe to its true minimum.

The universe underwent radical phase transitions in the past. These transitions eventually led to the division of the four fundamental forces of nature and the panoply of particles we know today. All of that occurred when the universe was less than a second old, and it has been stable ever since.

But it might not last forever.

Our universe: Big Bang to now in 10 easy steps

To understand the stability of the universe, first we need to talk about phase transitions. Phase transitions are when a substance undergoes a rapid, radical transformation. They happen all the time. You boil water, and it transforms from a liquid into a gas. You cool that same water, and it turns into a block of ice.

Perhaps the most exotic phase transitions are those that happen to quantum fields. Quantum fields are the fundamental building blocks of the universe. Every kind of particle (say, a photon or an electron) is really just a local manifestation of an underlying field. That field soaks all of space and time like bread dipped in olive oil. The way those fields interact and communicate with each other makes up the forces and physics of our existence.

That existence is based on four fundamental forces: gravity, the weak force, electromagnetism and the strong force. But it hasn't always been this way. In the earliest moments of the cosmos, those forces were united. As the universe expanded and cooled, the quantum fields underwent phase transitions, splitting apart one by one.

The last phase transition occurred when the electromagnetic force split from the weak force. That splitting gave rise to the photon and the W and Z bosons, the carriers of those two forces.

Since that event, which happened when the universe wasn't even a second old, everything's been stable no more splitting, no more phase transitions. The four forces of nature went on to shape and sculpt the evolution of the cosmos for billions of years.

As far as we can tell, it's all stable, for now, anyway.

The stability of the universe is tricky to measure. Sure, it's been over 13 billion years since anything as interesting as a phase transition has occurred. Yes, 13 billion years is a really long time, but in the world of quantum fields, anything can happen.

Our best bet at probing the stability of the universe is through the mass of the Higgs boson. The Higgs is a very interesting field; its presence in the universe is what separated the electromagnetic force from the weak force and what maintains that split today. Without the Higgs boson, those forces would merge right back together.

In quantum physics, the more massive an entity is, the more unstable it is. Massive particles quickly decay into lighter ones, for example. So, if the Higgs is very massive, it might not be as stable as it seems, and it might decay into something else someday. But if the Higgs is light enough, it's likely to hang out forever, and there's nothing more to say about the future of the quantum fields of the universe.

Measurements of the Higgs have found that its mass puts the universe smack in between the "really, honestly stable" and "oh no, it looks a little unstable" regimes. Physicists call this state "metastable": a situation that is stable for now but could quickly deteriorate if something were to go wrong.

The apparent metastability of the quantum fields of the universe is a little unsettling. Although it could mean that the universe could persist for billions, even trillions, of years without anything going wrong at all, it could also mean that the universe is already beginning to transform. All it would take is one little shake in the wrong direction, in some random patch of the universe, where the Higgs falls apart and the underlying quantum fields find a new, more stable configuration. That region of "new" universe would then propagate outward at nearly the speed of light through the "old" universe.
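The metastable picture can be made concrete with a toy potential for a scalar field. This is an illustrative sketch only, not the actual Standard Model effective potential: a field resting in a shallow local minimum (the false vacuum), separated by a barrier from a deeper global minimum (the true vacuum) that it can tunnel into.

```latex
% Toy double-well potential with a small tilt \epsilon:
V(\phi) = \frac{\lambda}{4}\left(\phi^{2} - v^{2}\right)^{2} + \epsilon\,\phi
% For small \epsilon > 0, the minimum near \phi = +v sits slightly higher
% than the one near \phi = -v. The higher minimum is the false vacuum;
% quantum tunneling through the barrier nucleates a bubble of true vacuum,
% which then expands at nearly the speed of light.
```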

This kind of phase transition is called a false vacuum decay. It references the idea that the vacuum of our universe is a false one: it's not as stable as it might appear, and it will someday decay into something new.

By the time we received any information that the phase transition was upon us, it would already be happening.

What would be on the other side of that new universe? It's impossible to say. It might be totally mundane, with the new quantum fields looking exactly like the old quantum fields and nothing amiss. It could be just a slight adjustment, like a little tuning to the nature of dark energy or a slight adjustment to the masses of neutrinos. Or, it could be radically different, with a universe filled with brand-new forces, fields and particles, which would make life (and chemistry, and atoms) as we know it impossible.

Of course, we're not even 100% sure about the metastability criterion. We know that the Standard Model of particle physics is incomplete. A complete version could rewrite our understanding of quantum fields and where the "stable-unstable" line is drawn.

Here's how the universe could end in a 'false vacuum decay' - Space.com

Why Neuroscientist Solms Is No Materialist: Information Theory – Walter Bradley Center for Natural and Artificial Intelligence

Arjuna, the host of the Theology Unleashed broadcast with South African neuropsychologist Mark Solms and Stony Brook neurosurgeon Michael Egnor on the mind vs. the brain (October 22, 2021), begins this portion by offering a Hindu (Hare Krishna) perspective on the whole question of mind vs. matter, and he finds considerable common ground with the other two non-materialists! The true implications of quantum mechanics and information theory in refuting materialism are only beginning to be understood.

Summary to date: In the first portion, Solms, author of The Hidden Spring (2021), began by asserting in his opening statement that the source of consciousness in the brain is in fact the brain stem, not the cerebral cortex, as is almost universally assumed. Dr. Egnor then responded that his clinical experience supports the view that the brain is not the mind.

Then Solms pointed to the reality that discussing the fact that the brain is not the mind can be a career-limiting move in neuroscience, even though clinical experience supports the view. Egnor and Solms agreed that the further a neuroscientist gets from actual patients, the easier it is to adopt the view that the mind is just what the brain does (naturalism). Solms, who trained as a psychoanalyst as well, then described how he understands consciousness: the capacity to feel things, for example, the redness of red (qualia). Talk then turned to the miraculous nature of life and Spinoza's God, with Solms saying that he believes in Spinoza's God, as did Albert Einstein. Egnor then explained why Christians see God as a Person: The most remarkable thing about us is personhood.

This portion begins at 01:24:30. A partial transcript and notes, plus summaries and links to date follow.

Arjuna: We talked about what consciousness is. Maybe we can talk about what matter is. There's this idea that, "Oh, it's just matter. Matter explains that."

In Krishna Consciousness, we talk about God having inconceivable potencies. The materialist scientists attribute inconceivable potency to matter. They think matter has all these magical powers, like it can produce consciousness. It's as if that answers the question and there are no further questions to be asked. [01:25:00]

Mark Solms: I must be careful not to exceed my credentials. I'm not a physicist. But even as a non-physicist, I can say that it is astonishing that this is such a widespread view. It links with what Michael was saying earlier about scientists having very poor metaphysics and not even realizing that they're starting from metaphysical assumptions of any kind, let alone unquestionable ones. [01:26:00]

Even I know that it's been a long time now since physics transcended the idea that matter is a fundamental concept. I mean, Einstein's famous equation, E = mc², makes the point that matter is derivative. It's a state of energy.

As for this naive idea that the fundamental stuff is matter: 100 years ago, we realized in physics that that's not true. I think the next really big development, beyond relativity and the basic insights of quantum physics that Michael was referring to, has been Shannon's insight about information. [01:27:00]

Information, in neuroscience, is a crucial concept, and it's very hard to think about quantum physics and the big unsolved questions that flow from it without the concept of information, which, I hasten to draw your attention to the fact, is not matter. I'm not a materialist for exactly that reason. [01:27:30]

I don't believe that the mind can be reduced to matter. Matter is an appearance. If you're wanting to make connections between mind and body and see them both as appearances, then you can't be a materialist. We must always remember, as I keep saying, that these are concepts. These are abstractions. These are inferences. These are words that we use to try to articulate these profound things. I think that, among those tools, the concept of information, in the sense that Shannon introduced it into physics in 1948, has not yet begun to... The implications, the importance, the value of this concept have not yet begun to fully reveal themselves. [01:28:30]

Note: Who was Claude Shannon (1916–2001)? The American mathematician and computer scientist who conceived and laid the foundations for information theory. His theories laid the groundwork for the electronic communications networks that now lace the earth. "Shannon was the person who saw that the binary digit was the fundamental element in all of communication," said Dr. Robert G. Gallager, a professor of electrical engineering who worked with Dr. Shannon at the Massachusetts Institute of Technology. "That was really his discovery, and from it the whole communications revolution has sprung." (IEEE Information Theory Society) The binary digit is a mathematical concept, not a material thing.
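Shannon's 1948 measure quantifies information in bits, independently of any physical carrier, which is exactly the point being made above. A minimal sketch of his entropy formula for a discrete distribution:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits, of a discrete
    probability distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin answers exactly one yes/no question per flip: 1 bit.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A heavily biased coin carries less information per flip.
print(shannon_entropy([0.9, 0.1]))
```

The number that comes out depends only on the probabilities, not on whether the message travels by wire, light, or ink: information is a mathematical quantity, not a material one.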

Michael Egnor: The information concept dovetails very nicely with what a number of philosophers of science have pointed out, notably philosopher of science Bruce Gordon. He points out that when you look at the quantum world, matter doesn't exist. Nothing in the quantum world is matter. [01:29:30]

I think where we went wrong in this was with Descartes and his notion of everything in nature as a machine extended in space, except for the spirit, the human mind, which is this kind of ghost thing.

Note: French mathematician René Descartes was famous for that view. But it came back to haunt us all, so to speak, when Gilbert Ryle (1900–1976) ridiculed "the ghost in the machine," helping to establish the materialist dogma that the mind is simply what the brain does. The phrase became the title of a book by Arthur Koestler, unpacking that view.

Mark Solms: When you read Shannon's paper... Again, as I say, I always find it very valuable to go back and actually read what my forebears wrote. The title of his [1948] paper is "A Mathematical Theory of Communication": not of information but of communication.

Well, let me just cut to the chase. What we forget is that information doesn't exist without there being a question that the information is an answer to. And this perhaps goes back to what you were saying earlier about personhood and some of the other profound matters that we were touching on. Then the issue becomes more: Where does question-asking come from?

Next: Reclaiming the non-materialist dimension in science. (Hint: Stephen Hawking was a fine writer but not a very good philosopher.)

The discussion to date

Here's the first portion of the debate/discussion, where neuropsychologist Mark Solms shares his perspective: Consciousness: Is it in the cerebral cortex or the brain stem? In a recent discussion/debate with neurosurgeon Michael Egnor, neuropsychologist Mark Solms offers an unconventional but evidence-based view, favouring the brain stem. The evidence shows, says Mark Solms, author of The Hidden Spring, that the brain stem, not the cerebral cortex, is the source of consciousness.

And Michael Egnor responds:

1.2. Neurosurgeon and neuropsychologist agree: Brain is not mind. Michael Egnor tells Mark Solms: Neuroscience didn't help him understand people; quite the reverse, he had to understand people, and minds, to make sense of neuroscience. Egnor saw patients who didn't have most of their frontal lobes but who were completely conscious, in fact, rather pleasant, bright people.

1.3. Then Solms admits what all know but few say: Neuroscientist: Mind is not just brain? That's career-limiting! Neuropsychologist Mark Solms and neurosurgeon Michael Egnor agreed that clinical experience supports a non-materialist view but that the establishment doesn't. Mark Solms: "Science is an incredibly rigid sort of... it's like a mafia. You have to go along with the rules of the Don, otherwise you've had it."

In the second portion, they offer definitions of consciousness:

2.1 Materialist neuroscientists don't usually see real patients. Neurosurgeon Michael Egnor and neuropsychologist Mark Solms find common ground: The mind can be merely what the brain does in an academic paper. But not in life. Egnor takes a stab at defining consciousness: Following Franz Brentano, he says, "A conscious state is an intentional state." Next, it will be Solms's turn.

2.2 A neuropsychologist takes a crack at defining consciousness. Frustrated by reprimands for discussing Big Questions in neuroscience, Mark Solms decided to train as a psychoanalyst as well. As a neuropsychologist, he sees consciousness, in part, as the capacity to feel things, what philosophers call qualia: the redness of red.

3.1 Einstein believed in Spinoza's God. Who is that God? Neuropsychologist Mark Solms admits that life is miraculous and sees Spinoza's God, embedded in nature, as the ultimate explanation. In a discussion with Solms, neurosurgeon Michael Egnor argues that it makes more sense to see God as a Person than as a personification of nature.

3.2 Egnor and Solms: What does it mean to say God is a Person? Mark Solms and Michael Egnor discuss and largely agree on what we can rationally know about God, using the tools of reason. Egnor argues that, if the most remarkable thing about us is our personhood (I am), it makes sense to think of God as a Person (I AM).

You may also wish to read: Your mind vs. your brain: Ten things to know

Why Neuroscientist Solms Is No Materialist: Information Theory - Walter Bradley Center for Natural and Artificial Intelligence