Category Archives: Quantum Physics

Time is not an illusion. It's an object with physical size – Aeon

A timeless universe is hard to imagine, but not because time is a technically complex or philosophically elusive concept. There is a more structural reason: imagining timelessness requires time to pass. Even when you try to imagine its absence, you sense it moving as your thoughts shift, your heart pumps blood to your brain, and images, sounds and smells move around you. The thing that is time never seems to stop. You may even feel woven into its ever-moving fabric as you experience the Universe coming together and apart. But is that how time really works?

According to Albert Einstein, our experience of the past, present and future is nothing more than a stubbornly persistent illusion. According to Isaac Newton, time is nothing more than a backdrop, outside of life. And according to the laws of thermodynamics, time is nothing more than entropy and heat. In the history of modern physics, there has never been a widely accepted theory in which a moving, directional sense of time is fundamental. Many of our most basic descriptions of nature, from the laws of motion to the properties of molecules and matter, seem to exist in a universe where time doesn't really pass. However, recent research across a variety of fields suggests that the movement of time might be more important than most physicists had once assumed.

A new form of physics called assembly theory suggests that a moving, directional sense of time is real and fundamental. It suggests that the complex objects in our Universe that have been made by life, including microbes, computers and cities, do not exist outside of time: they are impossible without the movement of time. From this perspective, the passing of time is not only intrinsic to the evolution of life or our experience of the Universe. It is also the ever-moving material fabric of the Universe itself. Time is an object. It has a physical size, like space. And it can be measured at a molecular level in laboratories.

The unification of time and space radically changed the trajectory of physics in the 20th century. It opened new possibilities for how we think about reality. What could the unification of time and matter do in our century? What happens when time is an object?

For Newton, time was fixed. In his laws of motion and gravity, which describe how objects change their position in space, time is an absolute backdrop. Newtonian time passes, but never changes. And it's a view of time that endures in modern physics: even in the wave functions of quantum mechanics, time is a backdrop, not a fundamental feature. For Einstein, however, time was not absolute. It was relative to each observer. He described our experience of time passing as 'a stubbornly persistent illusion'. Einsteinian time is what is measured by the ticking of clocks; space is measured by the ticks on rulers that record distances. By studying the relative motions of ticking clocks and ticks on rulers, Einstein was able to combine the concepts of how we measure both space and time into a unified structure we now call spacetime. In this structure, space is infinite and all points exist at once. But time, as Einstein described it, also has this property, which means that all times, past, present and future, are equally real. The result is sometimes called a block universe, which contains everything that has happened and will happen in space and time. Today, most physicists support the notion of the block universe.

But the block universe was cracked before it even arrived. In the early 1800s, nearly a century before Einstein developed the concept of spacetime, Nicolas Léonard Sadi Carnot and other physicists were already questioning the notion that time was either a backdrop or an illusion. These questions continued throughout the 19th century as physicists such as Ludwig Boltzmann also turned their minds to the problems that came with a new kind of technology: the engine.

Though engines could be mechanically reproduced, physicists didn't know exactly how they functioned. Newtonian mechanics was reversible; engines were not. Newton's solar system ran equally well moving forward or backward in time. However, if you drove a car and it ran out of fuel, you could not run the engine in reverse, take back the heat that was generated, and unburn the fuel. Physicists at the time suspected that engines must be adhering to certain laws, even if those laws were unknown. What they found was that engines do not function unless time passes and has a direction. By exploiting differences in temperature, engines drive the movement of heat from warm parts to cold parts. As time moves forward, the temperature difference diminishes and less work can be done. This is the essence of the second law of thermodynamics (also known as the law of entropy), which was proposed by Carnot and later explained statistically by Boltzmann. The law describes the way that less useful work can be done by an engine over time. You must occasionally refuel your car, and entropy must always increase.

Do we really live in a universe that has no need for time as a fundamental feature?

This makes sense in the context of engines or other complex objects, but it is not helpful when dealing with a single particle. It is meaningless to talk about the temperature of a single particle because temperature is a way of quantifying the average kinetic energy of many particles. In the laws of thermodynamics, the flow and directionality of time are considered an emergent property rather than a backdrop or an illusion: a property associated with the behaviour of large numbers of objects. While thermodynamic theory introduced the idea that the passage of time has a directionality, this property was not fundamental. In physics, fundamental properties are reserved for those properties that cannot be described in other terms. The arrow of time in thermodynamics is therefore considered emergent because it can be explained in terms of more fundamental concepts, such as entropy and heat.

Charles Darwin, working between the steam-engine era of Carnot and the emergence of Einstein's block universe, was among the first to clearly see how life must exist in time. In the final sentence of On the Origin of Species (1859), he eloquently captured this perspective: '[W]hilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been and are being evolved.' The arrival of Darwin's endless forms can be explained only in a universe where time exists and has a clear directionality.

During the past several billion years, life has evolved from single-celled organisms to complex multicellular organisms. It has evolved from simple societies to teeming cities, and now a planet potentially capable of reproducing its life on other worlds. These things take time to come into existence because they can emerge only through the processes of selection and evolution.

We think Darwin's insight does not go deep enough. Evolution accurately describes changes observed across different forms of life, but it does much more than this: it is the only physical process in our Universe that can generate the objects we associate with life. This includes bacteria, cats and trees, but also things like rockets, mobile phones and cities. None of these objects fluctuates into existence spontaneously, despite what popular accounts of modern physics may claim can happen. These objects are not random flukes. Instead, they all require a memory of the past to be made in the present. They must be produced over time, a time that continually moves forward. And yet, according to Newton, Einstein, Carnot, Boltzmann and others, time is either nonexistent or merely emergent.

The times of physics and of evolution are incompatible. But this has not always been obvious because physics and evolution deal with different kinds of objects. Physics, particularly quantum mechanics, deals with simple and elementary objects: quarks, leptons and force carrier particles of the Standard Model. Because these objects are considered simple, they do not require memory for the Universe to make them (assuming sufficient energy and resources are available). Think of memory as a way to describe the recording of actions or processes that are needed to build a given object. When we get to the disciplines that engage with evolution, such as chemistry and biology, we find objects that are too complex to be produced in abundance instantaneously (even when energy and materials are available). They require memory, accumulated over time, to be produced. As Darwin understood, some objects can come into existence only through evolution and the selection of certain recordings from memory to make them.

This incompatibility creates a set of problems that can be solved only by making a radical departure from the current ways that physics approaches time, especially if we want to explain life. While current theories of quantum mechanics can explain certain features of molecules, such as their stability, they cannot explain the existence of DNA, proteins, RNA, or other large and complex molecules. Likewise, the second law of thermodynamics is said to give rise to the arrow of time and explanations of how organisms convert energy, but it does not explain the directionality of time, in which endless forms are built over evolutionary timescales with no final equilibrium or heat-death for the biosphere in sight. Quantum mechanics and thermodynamics are necessary to explain some features of life, but they are not sufficient.

These and other problems led us to develop a new way of thinking about the physics of time, which we have called assembly theory. It describes how much memory must exist for a molecule or combination of molecules (the objects that life is made from) to come into existence. In assembly theory, this memory is measured across time as a feature of a molecule by focusing on the minimum memory required for that molecule (or molecules) to come into existence. Assembly theory quantifies selection by making time a property of objects that could have emerged only via evolution.

We began developing this new physics by considering how life emerges through chemical changes. The chemistry of life operates combinatorially as atoms bond to form molecules, and the possible combinations grow with each additional bond. These combinations are made from approximately 92 naturally occurring elements, which chemists estimate can be combined to build as many as 10^60 different molecules (1 followed by 60 zeroes). To become useful, each individual combination would need to be replicated billions of times: think of how many molecules are required to make even a single cell, let alone an insect or a person. Making copies of any complex object takes time because each step required to assemble it involves a search across the vastness of combinatorial space to select which molecules will take physical shape.

Combinatorial spaces seem to show up when life exists

Consider the macromolecular proteins that living things use as catalysts within cells. These proteins are made from smaller molecular building blocks called amino acids, which combine to form long chains typically between 50 and 2,000 amino acids long. If every possible 100-amino-acid-long protein were assembled from the 20 most common amino acids that form proteins, the result would not just fill our Universe but 10^23 universes.
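
As a rough sanity check of the scale involved, the short Python sketch below recomputes this claim under two order-of-magnitude assumptions that are ours, not the authors': a volume of roughly 1.4 × 10^-26 cubic metres per 100-amino-acid protein and an observable-Universe volume of roughly 4 × 10^80 cubic metres.

```python
# Back-of-the-envelope check of the combinatorial claim above.
# Assumptions (ours, for illustration only): each 100-amino-acid protein
# occupies roughly 1.4e-26 m^3, and the observable Universe has a volume
# of roughly 4e80 m^3.

n_sequences = 20 ** 100                 # all 100-residue chains of 20 amino acids
protein_volume_m3 = 1.4e-26             # assumed volume per protein molecule
universe_volume_m3 = 4e80               # assumed volume of the observable Universe

proteins_per_universe = universe_volume_m3 / protein_volume_m3
universes_needed = n_sequences / proteins_per_universe

print(f"sequences:        {n_sequences:.1e}")       # about 1.3e130
print(f"universes needed: {universes_needed:.1e}")  # a few times 1e23
```

With those inputs, the count of sequences (20^100, about 10^130) works out to a few times 10^23 universes' worth of volume, consistent with the figure quoted above.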

The space of all possible molecules is hard to fathom. As an analogy, consider the combinations you can build with a given set of Lego bricks. If the set contained only two bricks, the number of combinations would be small. However, if the set contained thousands of pieces, like the 5,923-piece Lego model of the Taj Mahal, the number of possible combinations would be astronomical. If you specifically needed to build the Taj Mahal according to the instructions, the space of possibilities would be limited, but if you could build any Lego object with those 5,923 pieces, there would be a combinatorial explosion of possible structures that could be built: the possibilities grow exponentially with each additional block you add. If you connected two Lego structures you had already built every second, you would not be able to exhaust all possible objects of the size of the Lego Taj Mahal set within the age of the Universe. In fact, any space built combinatorially from even a few simple building blocks will have this property. This includes all possible cell-like objects built from chemistry, all possible organisms built from different cell types, all possible languages built from words or utterances, and all possible computer programs built from all possible instruction sets. The pattern here is that combinatorial spaces seem to show up when life exists. That is, life is evident when the space of possibilities is so large that the Universe must select only some of that space to exist. Assembly theory is meant to formalise this idea. In assembly theory, objects are built combinatorially from other objects and, just as you might use a ruler to measure how big a given object is spatially, assembly theory provides a measure called the assembly index to measure how big an object is in time.

The Lego Taj Mahal set is equivalent to a complex molecule in this analogy. Reproducing a specific object, like a Lego set, in a way that isn't random requires selection within the space of all possible objects. That is, at each stage of construction, specific objects or sets of objects must be selected from the vast number of possible combinations that could be built. Alongside selection, memory is also required: information is needed in the objects that exist to assemble the specific new object, which is implemented as a sequence of steps that can be completed in finite time, like the instructions required to build the Lego Taj Mahal. More complex objects require more memory to come into existence.

In assembly theory, objects grow in their complexity over time through the process of selection. As objects become more complex, their unique parts will increase, which means local memory must also increase. This local memory is the causal chain of events in how the object is first discovered by selection and then created in multiple copies. For example, in research into the origin of life, chemists study how molecules come together to become living organisms. For a chemical system to spontaneously emerge as life, it must self-replicate by forming, or catalysing, self-sustaining networks of chemical reactions. But how does the chemical system know which combinations to make? We can see local memory in action in these networks of molecules that have learned to chemically bind together in certain ways. As the memory requirements increase, the probability that an object was produced by chance drops to zero because the number of alternative combinations that weren't selected is just too high. An object, whether it's a Lego Taj Mahal or a network of molecules, can be produced and reproduced only with memory and a construction process. But memory is not everywhere: it's local in space and time. This means an object can be produced only where there is local memory that can guide the selection of which parts go where, and when.

In assembly theory, selection refers to what has emerged in the space of possible combinations. It is formally described through an object's copy number and complexity. Copy number, or concentration, is a concept used in chemistry and molecular biology that refers to how many copies of a molecule are present in a given volume of space. In assembly theory, complexity is as significant as the copy number. A highly complex molecule that exists only as a single copy is not important. What is of interest to assembly theory are complex molecules with a high copy number, which is an indication that the molecule has been produced by evolution. This complexity measurement is also known as an object's assembly index. This value is related to the amount of physical memory required to store the information to direct the assembly of an object and set a directionality in time from the simple to the complex. And, while the memory must exist in the environment to bring the object into existence, in assembly theory the memory is also an intrinsic physical feature of the object. In fact, it is the object.
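
To make the assembly-index idea concrete, here is a minimal toy sketch in Python that uses strings in place of molecules: single characters are the free building blocks, every join of two already-available fragments costs one step, and each joined fragment is remembered for reuse. The greedy construction below only gives an upper bound on the assembly index (computing the exact minimum requires a search over all construction pathways), but it already shows how reusing a remembered fragment, such as 'ABRA' inside 'ABRACADABRA', shortens the build.

```python
def assembly_index_upper_bound(target: str) -> int:
    """Greedy upper bound on the assembly index of a string.

    Basic building blocks (single characters) cost nothing; each join of
    two already-available fragments costs one step, and every new fragment
    is added to the pool (the 'local memory') so it can be reused later.
    """
    pool = set(target)       # the basic building blocks are free
    current = target[0]      # start the construction from the first character
    steps = 0
    while current != target:
        rest = target[len(current):]
        # pick the longest remembered fragment that extends the current prefix
        best = max((f for f in pool if rest.startswith(f)), key=len)
        current += best      # one join = one assembly step
        pool.add(current)    # remember the new fragment for reuse
        steps += 1
    return steps

# 'ABRACADABRA' takes 7 joins here because the fragment 'ABRA' gets reused;
# a string with no repeated structure needs roughly one join per character.
for word in ["ABRACADABRA", "AAAAAAAA", "BANANA"]:
    print(word, assembly_index_upper_bound(word))
```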

Life is stacks of objects building other objects that build other objects: it's objects building objects, all the way down. Some objects emerged only relatively recently, such as synthetic 'forever chemicals' made from organofluorine chemical compounds. Others emerged billions of years ago, such as photosynthesising plant cells. Different objects have different depths in time. And this depth is directly related to both an object's assembly index and copy number, which we can combine into a number: a quantity called Assembly, or A. The higher the Assembly number, the deeper an object is in time.
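
In code, one illustrative way to combine the two ingredients is sketched below. The weighting (exponential in the assembly index, counting only copies beyond the first, normalised by the total number of objects observed) is loosely modelled on definitions that have appeared in the assembly-theory literature, but the exact published formula may differ, so treat it as an assumption rather than the authors' definition.

```python
import math

def assembly_quantity(observations):
    """Illustrative 'Assembly'-style score for a molecular sample.

    observations: list of (assembly_index, copy_number) pairs.
    Complex objects are weighted exponentially in their assembly index,
    and only copies beyond the first contribute, since a single copy of
    a complex molecule could still be a fluke, while many identical
    copies point to selection.
    """
    total = sum(n for _, n in observations)
    return sum(math.exp(a) * (n - 1) for a, n in observations) / total

# Hypothetical samples: the second contains 500 copies of one complex
# molecule (assembly index 16) and scores orders of magnitude higher.
abiotic_like = [(3, 10_000), (5, 2_000)]
biotic_like = [(3, 10_000), (16, 500)]
print(assembly_quantity(abiotic_like), assembly_quantity(biotic_like))
```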

To measure assembly in a laboratory, we chemically analyse an object to count how many copies of a given molecule it contains. We then infer the object's complexity, known as its molecular assembly index, by counting the number of parts it contains. These molecular parts, like the amino acids in a protein string, are often inferred by determining an object's molecular assembly index, a theoretical assembly number. But we are not inferring theoretically. We are counting the molecular components of an object using three visualising techniques: mass spectrometry, infrared and nuclear magnetic resonance (NMR) spectroscopy. Remarkably, the number of components we've counted in molecules maps to their theoretical assembly numbers. This means we can measure an object's assembly index directly with standard lab equipment.

A high Assembly number, meaning a high assembly index and a high copy number, indicates that an object can be reliably made by something in its environment. This could be a cell that constructs high-Assembly molecules like proteins, or a chemist who makes molecules with an even higher Assembly value, such as the anti-cancer drug Taxol (paclitaxel). Complex objects with high copy numbers did not come into existence randomly but are the result of a process of evolution or selection. They are not formed by a series of chance encounters, but by selection in time. More specifically, a certain depth in time.

It's like throwing the 5,923 Lego Taj Mahal pieces in the air and expecting them to come together spontaneously

This is a difficult concept. Even chemists find this idea hard to grasp since it is easy to imagine that complex molecules form by chance interactions with their environment. However, in the laboratory, chance interactions often lead to the production of tar rather than high-Assembly objects. Tar is a chemist's worst nightmare, a messy mixture of molecules that cannot be individually identified. It is found frequently in origin-of-life experiments. In the US chemist Stanley Miller's prebiotic soup experiment in 1953, the amino acids that formed at first turned into a mess of unidentifiable black gloop if the experiment was run too long (and no selection was imposed by the researchers to stop chemical changes taking place). The problem in these experiments is that the combinatorial space of possible molecules is so vast for high-Assembly objects that no specific molecules are produced in high abundance. Tar is the result.

It's like throwing the 5,923 pieces from the Lego Taj Mahal set in the air and expecting them to come together, spontaneously, exactly as the instructions specify. Now imagine taking the pieces from 100 boxes of the same Lego set, throwing them into the air, and expecting 100 copies of the exact same building. The probabilities are incredibly low and might be zero, if assembly theory is on the right track. It is as likely as a smashed egg spontaneously reforming.

But what about complex objects that occur naturally without selection or evolution? What about snowflakes, minerals and complex storm systems? Unlike objects generated by evolution and selection, these do not need to be explained through their depth in time. Though individually complex, they do not have a high Assembly value because they form randomly and require no memory to be produced. They have a low copy number because they never exist in identical copies. No two snowflakes are alike, and the same goes for minerals and storm systems.

Assembly theory not only changes how we think about time, but how we define life itself. By applying this approach to molecular systems, it should be possible to measure if a molecule was produced by an evolutionary process. That means we can determine which molecules could have been made only by a living process, even if that process involves chemistries different to those on Earth. In this way, assembly theory can function as a universal life-detection system that works by measuring the assembly indexes and copy numbers of molecules in living or non-living samples.

In our laboratory experiments, we found that only living samples produce high-Assembly molecules. Our teams and collaborators have reproduced this finding using an analytical technique called mass spectrometry, in which molecules from a sample are weighed in an electromagnetic field and then smashed into pieces using energy. Smashing a molecule to bits allows us to measure its assembly index by counting the number of unique parts it contains. Through this, we can work out how many steps were required to produce a molecular object and then quantify its depth in time with standard laboratory equipment.

To verify our theory that high-Assembly objects can be generated only by life, the next step involved testing living and non-living samples. Our teams have been able to take samples of molecules from across the solar system, including diverse living, fossilised and abiotic systems on Earth. These solid samples of stone, bone, flesh and other forms of matter were dissolved in a solvent and then analysed with a high-resolution mass spectrometer that can identify the structure and properties of molecules. We found that only living systems produce abundant molecules with an assembly index above an experimentally determined value of 15 steps. The cut-off between 13 and 15 is sharp, meaning that molecules made by random processes cannot get beyond 13 steps. We think this is indicative of a phase transition where the physics of evolution and selection must take over from other forms of physics to explain how a molecule was formed.
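
As an illustration of how such a cut-off could be used in practice, here is a minimal Python sketch of a screening rule. The threshold of 15 comes from the text; the molecule names, assembly indexes and copy numbers are invented inputs, and a real analysis would of course start from mass-spectrometry data rather than a hand-written table.

```python
ASSEMBLY_INDEX_THRESHOLD = 15   # experimentally motivated cut-off from the text

def looks_biogenic(molecules, min_copies=100):
    """molecules: list of (name, assembly_index, copy_number) tuples.

    Flag a sample if it contains at least one molecule that is both
    complex (assembly index above the threshold) and abundant (copy
    number above min_copies), since complexity in a single copy could
    still be a fluke.
    """
    return any(index > ASSEMBLY_INDEX_THRESHOLD and copies >= min_copies
               for _, index, copies in molecules)

sample = [
    ("simple organic", 6, 50_000),        # hypothetical values
    ("complex metabolite", 17, 3_000),
]
print(looks_biogenic(sample))             # True: one abundant molecule above 15
```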

These experiments verify that only objects with a sufficiently high Assembly number (highly complex and copied molecules) seem to be found in life. What is even more exciting is that we can find this information without knowing anything else about the molecule present. Assembly theory can determine whether molecules from anywhere in the Universe were derived from evolution or not, even if we don't know what chemistry is being used.

The possibility of detecting living systems elsewhere in the galaxy is exciting, but more exciting for us is the possibility of a new kind of physics, and a new explanation of life. As an empirical measure of objects uniquely producible by evolution, Assembly unlocks a more general theory of life. If the theory holds, its most radical philosophical implication is that time exists as a material property of the complex objects created by evolution. That is, just as Einstein radicalised our notion of time by unifying it with space, assembly theory points to a radically new conception of time by unifying it with matter.

Assembly theory explains evolved objects, such as complex molecules, biospheres, and computers

It is radical because, as we noted, time has never been fundamental in the history of physics. Newton and some quantum physicists view it as a backdrop. Einstein thought it was an illusion. And, in the work of those studying thermodynamics, it's understood as merely an emergent property. Assembly theory treats time as fundamental and material: time is the stuff out of which things in the Universe are made. Objects created by selection and evolution can be formed only through the passing of time. But don't think about this time like the measured ticking of a clock or a sequence of calendar years. Time is a physical attribute. Think about it in terms of Assembly, a measurable intrinsic property of a molecule's depth or size in time.

This idea is radical because it also allows physics to explain evolutionary change. Physics has traditionally studied objects that the Universe can spontaneously assemble, such as elementary particles or planets. Assembly theory, on the other hand, explains evolved objects, such as complex molecules, biospheres, and computers. These complex objects exist only along lineages where information has been acquired specific to their construction.

If we follow those lineages back, beyond the origin of life on Earth to the origin of the Universe, it would be logical to suggest that the memory of the Universe was lower in the past. This means that the Universe's ability to generate high-Assembly objects is fundamentally limited by its size in time. Just as a semi-trailer truck will not fit inside a standard home garage, some objects are too large in time to come into existence in intervals that are smaller than their assembly index. For complex objects like computers to exist in our Universe, many other objects needed to form first: stars, heavy elements, life, tools, technology, and the abstraction of computing. This takes time and is critically path-dependent due to the causal contingency of each innovation made. The early Universe may not have been capable of computation as we know it, simply because not enough history existed yet. Time had to pass and be materially instantiated through the selection of the computer's constituent objects. The same goes for Lego structures, large language models, new pharmaceutical drugs, the technosphere, or any other complex object.

The consequences of objects having an intrinsic material depth in time are far-reaching. In the block universe, everything is treated as static and existing all at once. This means that objects cannot be ordered by their depth in time, and selection and evolution cannot be used to explain why some objects exist and not others. Re-conceptualising time as a physical dimension of complex matter, and setting a directionality for time, could help us solve such questions. Making time material through assembly theory unifies several perplexing philosophical concepts related to life in one measurable framework. At the heart of this theory is the assembly index, which measures the complexity of an object. It is a quantifiable way of describing the evolutionary concept of selection by showing how many alternatives were excluded to yield a given object. Each step in the assembly process of an object requires information, memory, to specify what should and shouldn't be added or changed. In building the Lego Taj Mahal, for example, we must take a specific sequence of steps, each directing us toward the final building. Each misstep is an error, and if we make too many errors we cannot build a recognisable structure. Copying an object requires information about the steps that were previously needed to produce similar objects.

This makes assembly theory a causal theory of physics, because the underlying structure of an assembly space (the full range of required combinations) orders things in a chain of causation. Each step relies on a previously selected step, and each object relies on a previously selected object. If we removed any steps in an assembly pathway, the final object would not be produced. Buzzwords often associated with the physics of life, such as 'theory', 'information', 'memory', 'causation' and 'selection', are material because objects themselves encode the rules to help construct other complex objects. This could be the case in mutual catalysis, where objects reciprocally make each other. Thus, in assembly theory, time is essentially the same thing as information, memory, causation and selection. They are all made physical because we assume they are features of the objects described in the theory, not the laws of how these objects behave. Assembly theory reintroduces an expanding, moving sense of time to physics by showing how its passing is the stuff complex objects are made of: the size of the future increases with complexity.

This new conception of time might solve many open problems in fundamental physics. The first and foremost is the debate between determinism and contingency. Einstein famously said that God does not play dice, and many physicists are still forced to conclude that determinism holds, and our future is closed. But the idea that the initial conditions of the Universe, or any process, determine the future has always been a problem. In assembly theory, the future is determined, but not until it happens. If what exists now determines the future, and what exists now is larger and more information-rich than it was in the past, then the possible futures also grow larger as objects become more complex. This is because there is more history existing in the present from which to assemble novel future states. Treating time as a material property of the objects it creates allows novelty to be generated in the future.

Novelty is critical for our understanding of life as a physical phenomenon. Our biosphere is an object that is at least 3.5 billion years old by the measure of clock time (Assembly is a different measure of time). But how did life get started? What allowed living systems to develop intelligence and consciousness? Traditional physics suggests that life 'emerged'. The concept of emergence captures how new structures seem to appear at higher levels of spatial organisation that could not be predicted from lower levels. Examples include the wetness of water, which is not predicted from individual water molecules, or the way that living cells are made from individual non-living atoms. However, the objects traditional physics considers emergent become fundamental in assembly theory. From this perspective, an object's 'emergent-ness' (how far it departs from a physicist's expectations of elementary building blocks) depends on how deep it lies in time. This points us toward the origins of life, but we can also travel in the other direction.

If we are on the right track, assembly theory suggests time is fundamental. It suggests change is not measured by clocks but is encoded in chains of events that produce complex molecules with different depths in time. Assembled from local memory in the vastness of combinatorial space, these objects record the past, act in the present, and determine the future. This means the Universe is expanding in time, not space; or perhaps space emerges from time, as many current proposals from quantum gravity suggest. Though the Universe may be entirely deterministic, its expansion in time implies that the future cannot be fully predicted, even in principle. The future of the Universe is more open-ended than we could have predicted.

Time may be an ever-moving fabric through which we experience things coming together and apart. But the fabric does more than move: it expands. When time is an object, the future is the size of the Universe.

Published in association with the Santa Fe Institute, an Aeon Strategic Partner.

Are There Reasons to Believe in a Multiverse? – Quanta Magazine

By definition, the universe seems like it should be the totality of everything that exists. Yet a variety of arguments emerging from cosmology, particle physics and quantum mechanics hint that there could also be unobservable universes beyond our own that follow different laws of nature. While the existence of a multiverse is speculative, for many physicists it represents a plausible explanation for some of the biggest mysteries in science. In this episode, Steven Strogatz explores the idea of a multiverse with the theoretical physicist David Kaplan and learns what it might mean about our own existence.

Steve Strogatz (00:03): I'm Steve Strogatz and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we're going to ask: Do we live in a multiverse?

(00:16) We know that we all live in our own little bubbles, whether it be our family, our friends, our hometown, even our workplace. And if you think about it, animals live in their own little bubbles, too. Fish live in certain parts of the ocean or different lakes or rivers. You won't find them in ice with a microbe population or flying around in the sky with birds, even though ice and water vapor in the sky are also forms of water. Could the universe be the same way? Maybe we're not alone, and what we can see with the help of telescopes and infrared cameras isn't all there is. Maybe space is infinite. Maybe there are multiple universes beyond our own, perhaps made up of some of the same components or possibly even breaking the very laws of physics that allow us to call our universe home.

(01:08) It's a mind-blowing concept, the idea that we could live in a multiverse. And it's an idea that David Kaplan thinks could be possible. Kaplan is a theoretical physicist at Johns Hopkins University in Baltimore. He looks at the theoretical possibilities that apply to the Standard Model of particle physics and cosmology. He also produced and appeared in the 2013 documentary Particle Fever, about the first experiments at the Large Hadron Collider. David, thanks for joining us today.

David Kaplan (01:38): My pleasure.

Strogatz (01:39): Well, this is great. I'm very curious to hear your take on the multiverse. Before we get to the multiverse, let's talk about the good old-fashioned universe. You know, I mean, I grew up hearing about the universe that was sort of by definition, I think, all there is. Do I have that right? How would we define the universe before we define the multiverse?

Kaplan (01:57): Wow. Well, I think to be a little bit more useful or practical or even rigorous, we often talk about the observable universe. And the observable universe is a part of the universe that we have any sort of access to. And there's a very simple fact, which limits our ability to see the entirety of whatever the universe might be, which is the fact that the universe appears at least to be finite. Or at the very least, that in the early stages of what we would call our universe, it was so dense and hot, light couldn't penetrate it. And so the observable universe is really the distance we can look back to, a place where the light leaving that part of the universe at a much earlier time than today couldn't pass through the universe.

(02:53) In other words, in the early universe, if it was smaller and hotter and denser, light didn't travel in a straight line. It was stuck in the plasma, the soup that was the early universe. And when the universe expanded and cooled enough that light could then find room to travel from one place to another and across the universe, it left what we now call the surface of last scattering. And so there is a surface, in a sense, in all directions when we look out, and we call it the cosmic microwave background. That's a surface at which we are seeing the universe at a very early stage, at a time when it was not transparent to light. So in that sense, we have a very rigid access to what we can see in the universe and therefore we call it the observable universe. There certainly could be universe beyond that. But if the universe has a finite lifetime, and it takes light a finite time to get to us, we just mathematically cannot see beyond a certain point.

Strogatz (03:57): This is great. I love the rigor that you just applied to that. Maybe we should move along then to this notion of a multiverse.

Kaplan (04:04): When it comes to the multiverse or the initial start of the universe, all of those things, they're all very speculative. So you do hear standard things like, 'At the Big Bang, you create time and space.' And there is some notion where you could say that is probably right, in some sense. But really what's going on is, when you get to densities of order the Planck energy, as it's called, the description of gravity in terms of a geometry or a geometric theory of the universe breaks down. It doesn't mean that there isn't time and space of some other kind at earlier times. It's just that general relativity no longer applies in that time.

(04:53) And so, could it be that time goes back infinitely far? A different label of what we call time? Absolutely. We don't know. We don't know what this, at very low energy densities, turns into when you get to an energy scale where general relativity is not the correct description.

(05:14) So people pop in and say, 'Well, time and space get created sort of instantaneously out of nothing.' But that's not a mathematically well-defined description. It's a sort of hopeful compactification of space and time. And if you really have nothing, it's hard to make predictions about when something appears because there's nothing there.

(05:37) Now, what is the multiverse? The multiverse as it appeared in Particle Fever was a use of the multiverse. It was the idea that, in an almost mundane way, if the universe is infinite or much, much bigger, at least, than the part that we see (the observable part), then it's possible that the laws of nature in different patches are different. You don't even need the deep underlying laws to be different. You just need some of the parameters to be different. The numbers that describe things like how much vacuum energy is in that part of the universe. Or what are the values of some background fields, we say gravitational fields or other fields, in those parts of the universe? And if those things are different in different places, they will have a different expansion history. The universe at that region will expand based on what's in the universe, and what are the interaction strengths of the stuff in that part of the universe. And so you can really have very different universes with a little bit more pedestrian descriptions of how the laws may change from one place to another, or just even the content of the universe in each of those places.

(07:03) And so there's a simple idea of the multiverse, which is that there really is just one universe. The part of the universe that we exist in has a certain set of parameters controlling how life looks. And then if you go far enough away, farther than light could have traveled in the age of the universe, there are places in the universe where the laws are just different. The parameters are different. The expansion history is completely different. Maybe [in] those regions, no stars or galaxies could have formed, nothing lives there. Or maybe in those parts of the universe, detailed properties like the mass of the Higgs boson, which controls the mass of lots of other particles, it controls what hydrogen is, it controls how chemistry works. All of those things could be different in different places if you go far enough out.

(07:57) It appears in our observable universe, the laws are pretty static. We have laws, we have initial conditions. And we have a description of how the universe works, and how experiments work on Earth. They seem to be constant in time, and it doesn't really matter where our galaxy is as the experiments are done, as our galaxy is actually moving quite fast relative to the background. So in that sense, the laws are very stable here, but you could imagine much farther distances in which the laws are different. And stars can or cannot form, chemistry does or does not work, a different type of chemistry works. All kinds of goofy things could be happening in very different places. Very different creatures. Who knows?

Strogatz (08:47): Sure. So I liked the way you phrased it, that very different things could be happening. And of course, you know, all of us in late night conversations in our college dorm rooms could think of that thought that we have a parochial view. It's just in the nature of the finite speed of light and the finite time that the observable universe has been around, that we live in our patch. And who knows what happens in these other patches? We can't, by definition, we can't observe those parts. But there's a principled reason for believing in the multiverse, or maybe many principled reasons beyond this kind of like college-dorm speculation. So can you tell me about that? Like, for instance, there are mysteries in physics that led physicists to come up with the idea of a multiverse. Maybe tell us about what some of those mysteries are that would bring us to this, this wild ... You can tell I'm a little skeptical here.

Kaplan (09:36): Of course.

Strogatz: It feels a little like wild philosophizing. But I also know that physics is very coherent and you have reasons for this. It's not wild philosophizing.

Kaplan (09:44): First off, we have something called the Standard Model of particle physics. And the Standard Model is based on quantum field theory. Quantum field theory works extraordinarily well in describing the interactions of particles in complex systems, in simple systems, at high energies. Internal properties of particles, like magnetic moments of electrons, and the strong force and confinement of nuclear matter. It has just an amazing breadth of description of how all matter works. And that theory involves the particles that we've seen; we identify them as fields. We identify the electron, and every electron in fact, as an excitation of the electron field in the universe. And therefore, the Standard Model with a list of fields describes all possible fundamental particles that ever could be created. And so far, every one we have seen ... we've seen all those particles, and we haven't seen any others, directly at least.

(10:55) It also has a list of parameters which tell you how strong is the electrical force between the electron and the proton, and various other numbers, which predict the probability of scattering various particles. And a number of those parameters seem very reasonable in the sense of: You put those numbers in, and you calculate quantum corrections or situations where the background is a little bit different, and nothing funny happens with those parameters.

(11:26) Save two. There are two numbers in the Standard Model that are weird. One of those numbers is the mass of the Higgs boson itself. The mass of the Higgs boson is actually the mass scale from which all other fundamental particles' masses come. So if you ask what the mass of the electron is, it's proportional to the mass of the Higgs boson. What's the mass of the top quark, the heaviest quark? It's proportional to the mass of the Higgs boson. The mass of the W, the weak force boson, is proportional to the mass of the Higgs boson.

(12:02) So the Higgs boson controls one physical mass scale among all of the particles in the Standard Model. And that physical mass scale is something you simply put in by hand. You've measured it, we've measured the mass of the Higgs, we've measured the mass of the other particles and their interactions with the Higgs. And that allows us to fix experimentally what that number is.

(12:30) Now, if you take a quantum field theory that has a Higgs in it and all these other particles, and you estimate what the mass of the Higgs should be (what would be a typical mass of the Higgs in a quantum field theory with these particles in it), you get infinity, which is nonsense. And then you say, 'Well, that's OK. Because the reason I get infinity has to do with the fact that I'm assuming there are no new particles to arbitrarily high energy.' And the way that quantum field theory works and quantum theories work at all is that you have to incorporate what are, let's call them, quantum fluctuations in the calculations of any physical parameters. So you have a few parameters you put into your theory.

(13:18) But if you want to ask about a physical state, a physical measurement that you make in that quantum theory, it includes all quantum fluctuations that live in the theory. Now in the case of the Higgs, you discover that the quantum fluctuations would be infinite if the Standard Model was correct up to infinite energies.

(13:40) We don't think the Standard Model is correct up to infinite energies. In fact, we already know, to incorporate gravity into the Standard Model brings in a new energy scale; it's called the Planck mass. But there could be plenty of other particles or new symmetries or other things for which, when you get to that energy, you find there are no more contributions to the Higgs mass. It's finite, everything's OK. You're going to get infinities if you assume that your model is correct to arbitrarily high energy. So you cut these models off, you say, OK, there are going to be new particles at some mass. Above that, let's assume there are no contributions to the Higgs. And below that there will be these quantum fluctuations that contribute to the Higgs. And what you find is wherever you put that cutoff, however high-energy you say the Standard Model is correct to, there are contributions to the Higgs mass all the way up to that energy. Which means you get contributions to the Higgs mass to these arbitrarily high energies or masses.

(14:43) And so the Higgs mass should be the mass that's associated with the new unknown physics. There should be some new physics up there at some high energy and the Higgs mass should actually be roughly that scale. And that means when you discover the Higgs and you measure its mass, you should also discover a whole bunch of other garbage, which is associated with the new physics which is beyond the Standard Model.
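
Schematically, as a generic back-of-the-envelope form rather than anything written down in the transcript, the quantum corrections being described grow with the square of the cutoff energy Λ up to which the Standard Model is assumed to hold:

$$ \delta m_H^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2 $$

Here g stands in for a generic coupling. If Λ is anywhere near the Planck scale, around 10^19 GeV, the natural expectation for the Higgs mass is vastly larger than the measured value of about 125 GeV, unless the separate contributions cancel.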

Strogatz (15:10): I see, yeah.

Kaplan: So instead of saying there's some unknown physics so far above the Higgs, and the Higgs mass actually gets contributions of that energy, you turn it around. You say, 'Oh no, if I measure the Higgs, it's going to come with all this new stuff.' Because the Higgs mass is not infinite, it's finite.

Strogatz (15:27): So let me ... can I just ask ... I think I got you. This is a new idea to me, very interesting. If I hear you right, you're saying the Higgs is going to set the borderland between the known and the unknown.

Kaplan (15:38): Yes.

Strogatz: And because it's an edge case, sort of, it is on the borderland ...

Kaplan (15:43): Indeed.

Strogatz: ... that means when we discover it, I'm going to see on one side the stuff that we already kind of understand. But because it's a borderland particle, there's going to be stuff on the other edge of the border. And that's what we're going to be thrilled to discover, and measure its properties too, because that'll be new physics.

Kaplan (16:01): Exactly. Exactly. And that led to decades of research to think about what would be on the other side of the border. What is the new theory that makes the mass of the Higgs finite, not infinite? When you include all the quantum fluctuations, it should be now a reasonable theory. So it should come with a bunch of stuff. And people have posited symmetries, something called supersymmetry. Or making the Higgs a composite particle. We've seen the proton is made of quarks; maybe the Higgs is made of other stuff. And at high energies, actually, there's no Higgs, there's stuff inside, and we're seeing other fundamental particles. So it can be even a borderline of its own existence. And at high energies, there's a completely different description.

(16:52) These were the sort of quantum field theories that people introduced and explored, that we all wondered could be what we're seeing the hint of. That what seemingly looks like nonsense with the Higgs mass is only a statement that there's a new theory living just above it in energy and in mass. And we can even make suggestions of what that theory is, suggestions that cure this issue with the mass of the Higgs.

(17:20) And that was the hope people had when the LHC, the Large Hadron Collider, turned on. And they would discover the Higgs and they'd discover some of these other new, we'll call them, degrees of freedom. These other particles, different masses, with relationships to the Higgs. Or even that the Higgs itself is not fundamental and we see properties of its internal structure.

(17:47) And that's why in (I think) 2003, I started thinking: But it could be that that's not the case. That the Higgs mass, while it should come with a whole bunch of stuff, you could at least theoretically imagine that the mass of the Higgs is accidentally small. That it would be at the borderline, but all of those new particles that have been added to make it finite, when those quantum fluctuations contribute to the mass of the Higgs, just by some horrible accident, those contributions cancel to such a large degree that the mass of the Higgs is anomalously small compared to where the actual border to new physics is.

Strogatz (18:35): So it's sort of like in my picture in my head, where I've got this region of the known, and then this borderland region, and then the region of the new stuff.

Kaplan (18:44): Right.

Strogatz: In fact, when I go into the region of the new stuff, it's the Sahara Desert for as far as the eye can see ...

Kaplan (18:49): Right.

Strogatz: ... except that there's something way the heck off the edge of the map.

Kaplan (18:53): Right. And in physics, because the Higgs is so sensitive to quantum fluctuations, we would call that a fine-tuned situation, one in which there really has to be an accident. Because the other world that lives way out there, the new physics, the new particles and new laws that govern how the Higgs behaves, doesn't know anything, in some weird sense, about the low energy, low-lying masses. So there's no reason that it would pick an energy scale for the Higgs which is arbitrarily light compared to all of the dynamics associated with that new physics.

(19:37) So an example of this is that the strong interactions, the interactions of quarks, become strong at a particular energy scale, at 200 mega-electron-volts. That's when the quark interactions become so strong, we cannot actually pull the quarks apart. And then you ask, what is the mass of the proton? It's made of quarks. It's about five times that. What's the mass of the neutron? It's about five times that. What are the masses of the various mesons made of quarks and antiquarks? It's between a few times that and a few ... There's a very light one, for symmetry reasons, but it's about half that energy. They're all about that energy, all the particles that you get when you go from the land of quarks to the land of the nuclei that we see in atoms, they all live at that energy scale.
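
For reference, plugging in standard measured masses roughly reproduces the ratios being quoted here (our arithmetic, not part of the transcript):

$$ m_p \approx 938\ \mathrm{MeV} \approx 4.7 \times 200\ \mathrm{MeV}, \qquad m_n \approx 940\ \mathrm{MeV} \approx 4.7 \times 200\ \mathrm{MeV}, \qquad m_\pi \approx 140\ \mathrm{MeV} \approx 0.7 \times 200\ \mathrm{MeV} $$

The pion is the 'very light one, for symmetry reasons'.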

(20:30) So all the new physics is at that energy scale. And we would assume the same thing, that whatever controls all of the dynamics associated with creating a Higgs boson, whatever it comes from, the Higgs is going to be at that energy scale. It might be the lightest particle of all the mess. And instead, the Higgs is way down here compared to ... I mean, as you said, we see a Sahara Desert, we don't see ... Maybe there's something way off there! Sometimes we get hints of it. But you know, it's often a mirage. I don't see anything.

Strogatz (21:04): So let me get you on this. This is fantastic. You're saying that in the story with the strong force, that was sort of exemplary. That's given us a lot of intuition for how things are supposed to behave. That when we have the energy scale for the strong force, we see this zoo of particles, all kind of in the same zoo.

Kaplan (21:24): Exactly.

Strogatz: Right? I mean, they're all in more or less the same scale, OK, give or take a factor of five here or 10 or two there. But is it like the Higgs is just this loner? It's like the Higgs is the only thing in its own neighborhood? There's no zoo?

Kaplan (21:38): So far, there's no zoo.

Strogatz: Oh, my god.

Kaplan (21:40): So it's ... people were worried about it. I mean, it really was the '70s, mid-to-late '70s, when Ken Wilson pointed out that the pure Standard Model with just the Higgs is horribly fine-tuned, that there's an instability just in the calculation or the contributions to the Higgs mass. And that instability is infinite unless you just say there's some energy where there's new physics.

(22:06) That kicked off an exploration of ideas. One was that there was no Higgs at all. That the behavior of the Higgs was just some new strong confining group that did all the things that the Higgs is supposed to do, give masses to particles. It was hard to build such models. People came up with supersymmetry, as a symmetry that could protect the Higgs as long as you had the partners, the rest of the zoo, to stabilize the Higgs mass. And that all happened within a few years, sort of 1978 to 1981. Those were the sort of initial ideas for these sorts of theories.

(22:46) And then later, wilder theories appeared. In the early 2000s was an idea that maybe there is no more energy scale above the Higgs. Maybe even the scale of quantum gravity, which would be the Planck scale, for some manipulative reason, is not 17 orders of magnitude heavier than where we think the Higgs is, it's actually right on top of it. But to do that, you need to do something slightly crazy, which is you posit the existence of extra dimensions of space. And extra dimensions of space do something very funny to gravity. It allows it to be extremely strong, at a much lower energy than the Planck scale. But it dilutes in such a way in the extra dimensions that we would estimate it as being much weaker, and not seem strong until much higher energy. But that, in some sense, numerically solved the problem. But physics-wise, didn't really solve the problem because we don't know what the theory of quantum gravity is. We don't know that the Higgs could have come from such a theory and why it would, and we have no predictions. So it was ... it was less satisfying, but experimentally it was a bizarre possibility that was not ruled out. And so people could look for that possibility in totally different ways.

(24:05) But all of those types of theories, that whole class, were theories that suggested there's new stuff at the Higgs mass or just above it. And so far, we haven't seen anything like that.

Strogatz (24:16): So this is great. Because this gives a real intellectual motivation that I was expecting there would be, but I never really understood what it was. I've heard this phrase 'fine-tuning' forever, or for my whole scientific life, but I was never quite sure what it meant. And so at least in this respect, the Higgs, as good as it is, and as you know, valuable as it's been to physics, it has created a lot of headaches, it seems. Because it's turning out to have properties, this fine-tuning ... You're telling me that the fine-tuning (whatever we call it) enigma, somehow the multiverse is going to help us address that? Is that the idea?

Kaplan (24:53): Yes.

Strogatz: OK. Let's hear how.

Kaplan (24:55): So what I would say is that fine-tuning doesn't tell you the theory is wrong, it just tells you that it smells bad. You think, you know, I have a theory, it does such and such. And you fine-tune some of the parameters, you know, by one part in 10^34, which is what you would need if the new physics is at the Planck scale, one part in 10^34 for cancellation among all the different parameters just so, so that the Higgs ends up at a mass which is so completely different than the scale of the physics that generated the Higgs in the first place.
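
A rough check of where that number comes from (our arithmetic, not the transcript's): the corrections enter the squared Higgs mass, so the required cancellation is set by the square of the ratio of the Planck scale to the observed Higgs mass,

$$ \left(\frac{M_{\mathrm{Planck}}}{m_H}\right)^{2} \approx \left(\frac{1.2\times 10^{19}\ \mathrm{GeV}}{125\ \mathrm{GeV}}\right)^{2} \approx 10^{34}. $$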

(25:30) So that, we'd say, well, that stinks. It could be true, but maybe we should use that hint to explore what else is going on. And that's where all these other theories came to say, 'Oh, no, no, there could be new particles right around the mass of the Higgs. It cancels infinities, the Higgs is composite, whatever, even extra dimensions, there's no high energy scales.' But the other possibility is, it is fine-tuned and perhaps there's a totally different type of explanation for why it is fine-tuned. And now this is where you think, 'Well, maybe we're thinking parochially,' which is that we describe the laws of nature but we do it in a very rigid way. We say locally in spacetime, the part of the universe we've seen, this part has these static, unchanging, uniform across-the-space laws of nature, that's all there is. And the Higgs mass is just super weird, and there must be some fine-tuning.

(26:26) But the other possibility is, some people call it a multiverse. Mundanely, the parameters could just be different in different patches of the universe. And the parameters that control the Higgs mass could be different in different places. And in fact, because the Higgs interacts, directly or indirectly, with essentially every part of the Standard Model, any parameter change in any part of the universe would change the mass of the Higgs, the properties of the Higgs.

(26:57) Now, you could imagine that the Higgs mass is naturally extremely heavy, and it's near all of its family: the particles, the excitations, the dynamics that created the Higgs in the first place. That for typical values of the parameters in the full theory of the universe and of nature, the Higgs mass is always roughly at that energy scale, a much higher energy scale than the one we've seen experimentally. But since the laws of the universe are somewhat different in different locations, the Higgs mass itself will be different in different locations. And if the universe is vast enough, there should be locations where the mass of the Higgs is sort of anomalous. There's some accidental cancellation between the different quantum fluctuation contributions to the mass of the Higgs.

(27:53) Which means that, accidentally, there could be parts of the universe where the mass of the Higgs is exponentially smaller than it should be. That would require an exponentially large number of universes to allow that to happen randomly. But who knows how big the universe is? Who knows how the parameters vary? So this is in principle a possibility. But then you have to ask, well, why do we live in the aberrant universe with the really crummy cancellation that makes it very hard to discover anything?

(28:29) And the answer would be that the Higgs, or any of the underlying physics, has a lot of control over whether we exist or not. And instead of calling us us and making it anthropic or personal, we can just talk about it as structure. Here's a very simple analogy. We live on Earth; we don't live in empty space. Why don't we live in empty space? Of course, we are part of the Earth; we are born out of the material that made the Earth. It made us as well. Why are we near the sun? The Earth was made near the sun. Why are we near the sun? Because biological beings, perhaps, aren't living in places where there are no significant sources of energy. There's no fine-tuning argument to explain why we live on planet Earth, rather than in the 10^60 times bigger volume which is the rest of the empty universe. So we don't talk about that as a problem, of course.

(29:37) But what we can imagine is that the universe itself has different patches with different laws. And what you find with the Higgs mass is it's possible that for a huge range of Higgs masses, what we would call chemistry doesn't exist, which means molecules don't bind, which means structure cannot form in those places. Nobody has explored all the possible laws of nature that could come from all possible underlying laws of nature.

(30:08) In other words, if I gave you the Standard Model, you wouldn't come back to me and tell me about the existence of a giraffe. You can't predict a very chaotic nonlinear process to tell me what could exist there. But at least it's sort of the baseline level. You do need something nontrivial to happen. And you could imagine that the Higgs mass had better be very low compared to the Planck scale, the quantum gravity scale, in order for structure to form in reasonable ways without creating black holes, for example. So you can imagine rules in which there are special values of the Higgs mass, which could be extraordinarily fine-tuned. But in those universes, or those parts of the universe, truly nontrivial things happen, like the formation of stars and planets and anything on planets.

Strogatz (31:06): So if I can just try to… I don't want to oversimplify what you said. But I think it sounds like what you're saying is that if you happen to live in a hospitable patch, meaning that fine-tuning just randomly happened in your patch, you, you meaning you molecules, you atoms, have a shot at developing the kinds of structure that can lead to sentient, introspective life…

Kaplan (31:30): Exactly.

Strogatz: and then can get puzzled about, Gee, how come we're so lucky that we live in a place where this can happen? And it's kind of because that's what happened; the other parts are stillborn. They can't ask the question. They don't have any life. They don't have any consciousness.

Kaplan (31:44): Exactly. We're observers, but we're part of the system. And so we have to be in a place where the laws of nature are such that, in this region, we would be created. So we have what we'll call an observational bias.

Strogatz: Yes.

Kaplan (32:00): People see this in astronomy all the time. You look out at the stars, you can count stars and say, oh, this is how many stars I see, and this is how many stars of this type versus how many stars of that type. But type A stars are very bright and type B stars are very dim. And it's hard to see type B stars. And you may come to the conclusion that there are 20 times as many type As as type Bs, but actually type Bs are extraordinarily difficult to see. And so we have an observational bias. Actually, there are a million times more type Bs than type As. I have no idea how many stars there are based on simple observations. And so I had better get much more clever about how I observe.
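The star-counting analogy lends itself to a tiny simulation. The sketch below is a toy flux-limited survey with made-up luminosities, distances and a made-up detection threshold; it is only meant to show how a naive count can wildly misjudge the true ratio of dim to bright stars.

```python
import math

# Toy illustration of observational (flux-limit) bias, in the spirit of the
# star-counting example in the transcript. All numbers here are made up.

FLUX_LIMIT = 1e-6         # faintest flux the survey can detect (arbitrary units)
SURVEY_RADIUS = 100.0     # stars are spread uniformly through a sphere this big

L_A, L_B = 1.0, 1e-4      # type A stars are bright, type B stars are dim
TRUE_B_PER_A = 1_000_000  # "a million times more type Bs than type As"

def detectable_fraction(luminosity):
    """Fraction of a population (uniform in volume) bright enough to be seen."""
    d_max = math.sqrt(luminosity / (4 * math.pi * FLUX_LIMIT))
    return min(d_max / SURVEY_RADIUS, 1.0) ** 3

frac_A = detectable_fraction(L_A)
frac_B = detectable_fraction(L_B)
observed_ratio = TRUE_B_PER_A * frac_B / frac_A

print(f"fraction of type A detected: {frac_A:.3g}")
print(f"fraction of type B detected: {frac_B:.3g}")
print(f"true B:A ratio     = {TRUE_B_PER_A:,}")
print(f"observed B:A ratio = {observed_ratio:,.0f}")
# The dim type B stars are almost all missed, so a naive count drastically
# underestimates how many there really are: the observational bias described above.
```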

(32:40) But all of that is in a region where I can observe in principle. The multiverse is a place we can't observe in principle. But there still could be an observational bias based on what the parameters of the Standard Model are, and what forms in different patches of the multiverse.

Strogatz (32:59): So far, we've been focusing on this fine-tuning kind of motivation for the idea of the multiverse. But I'm wondering, does something like… I remember hearing the phrase eternal inflation, or bubble universes popping out through an inflationary cosmology sort of scenario. Is that another kind of argument for a multiverse?

Kaplan (33:19): Yeah, and in some sense, it's the easiest way to spit out a multiverse, assuming that it makes sense. So the two parameters in the Standard Model that invoke ideas of a multiverse are the mass of the Higgs, but also the cosmological constant. And the cosmological constant is something that we have seen significant evidence for in our universe. It causes the accelerated expansion of our universe. People call it dark energy. But the cosmological constant is also, in effect, something in the Standard Model. It's also something that garners quantum fluctuations.

(33:20) And so the value of the cosmological constant here in this universe could have been any value, really. And it too is fine-tuned compared to at least the highest energy scales that we can imagine in physics, which in this case again is the Planck scale. And that fine-tuning is something like one part in 10^123. So you have to cancel something in the new physics down to 123 digits in order for a universe to have a nice small cosmological constant.
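For context, the "one part in 10^123" figure comes from comparing the measured dark-energy density with a naive Planck-scale estimate; the ballpark numbers below are the standard ones and are not taken from the transcript.

```latex
% Standard order-of-magnitude comparison behind the 10^123 figure (illustrative).
\[
  \rho_\Lambda^{\mathrm{obs}} \;\sim\; 10^{-47}\,\mathrm{GeV}^4,
  \qquad
  \rho_{\mathrm{Pl}} \;\sim\; M_{\mathrm{Pl}}^4 \;\sim\; (10^{19}\,\mathrm{GeV})^4 \;=\; 10^{76}\,\mathrm{GeV}^4,
\]
\[
  \frac{\rho_\Lambda^{\mathrm{obs}}}{\rho_{\mathrm{Pl}}} \;\sim\; 10^{-123},
\]
% i.e. a naive Planck-scale estimate overshoots the measured value by about
% 123 orders of magnitude, which is the cancellation being described.
```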

Strogatz (34:34): So I think I remember you saying earlier there are two numbers that are weird. So have we now come to that point, that one is the mass of the Higgs and one is the cosmological constant?

Kaplan (34:43): Correct. Those are the two.

Strogatz (34:45): It's so interesting. This is so… I mean, you have to feel kind of blessed to be alive to think about this. Maybe literally blessed, because if you want to get theological… I happen to not be religious, but if a person is religious, it's kind of tempting to go there, right? That these are two miraculous things that had to happen. These numbers had to be fine-tuned for us. OK, I don't know.

Kaplan (35:09): I got those sorts of questions when I was doing Q&As. And another explanation for fine-tuning is not that there's a statistical sample large enough to incorporate things on the tail of the distribution, where the things on the tail of the distribution are important for life and structure. You could also say, no, there is a Supreme Being that has set the numbers the way they are because the Supreme Being really wanted life to exist. And yes, you could think of it in that way. I don't have a model for the multiverse that I'm in love with. But I certainly also don't have a model for an All-Seeing Creator setting these things up either. So I don't personally find them compelling, in the sense that my day job is to try to figure out what the heck is going on. And neither of those really has a lot of teeth.

Strogatz (36:00): OK.

Kaplan: I'm not saying that the multiverse is not true. But if you want the multiverse to solve the problems of our universe, it does this sort of mediocre job of it.

Strogatz (36:11): All right, but back to the many bubble universes.

Kaplan: Sure.

Strogatz (36:14): What does this have to do again with the cosmological constant?

Kaplan (36:17): So the cosmological constant itself is the one energy density in the universe that does not dilute. That's why it's called a constant. And what that means is when the universe expands, its expansion, which is driven by what's in the universe, changes the relative amounts of matter, of radiation, and of this constant. The constant is not diluting, but everything else is. And that means at some point in the history of the universe, the cosmological constant wins, and the expansion of the universe is driven by just the cosmological constant.

(36:57) And a cosmological constant expands the universe much faster than other types of matter. In fact, most types of matter, while they allow for a certain rate of expansion, also tend to slow the expansion. You can think of it in some ways as matter being attracted to itself, or radiation being attracted to itself, gravitationally. It's not a perfect analogy, but it's trying to slow that expansion.

(37:26) A cosmological constant, just the way it works in general relativity, gives you the opposite sign; it tends to speed up expansion. And it speeds up expansion proportional to itself, in a way, so we would describe it as exponential expansion. And so what happens is, when the cosmological constant eventually becomes the dominant energy of the universe, it starts to expand the universe exponentially fast. And if that happens too early in the history of the universe, there is no time for anything in the universe to form. Galaxies don't form, stars don't form; any sort of clumping of matter whatsoever would not have a chance to form, because the cosmological constant expansion would blow everything apart. Nothing gravitational would form in the early universe. So consider all the structure that lives in this universe, which seems to be the important source of nontrivial things, interesting things: stars, planets, life, blah, blah, blah. If the cosmological constant was too big, there is no time in which those things could be created.
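A compact way to see why the constant eventually wins is the standard Friedmann equation for a homogeneous universe; this is a textbook relation included for orientation, not a formula used in the episode.

```latex
% Friedmann equation for matter, radiation and a cosmological constant (textbook form).
\[
  H^{2} \;\equiv\; \left(\frac{\dot a}{a}\right)^{2}
  \;=\; \frac{8\pi G}{3}\left(\frac{\rho_{m,0}}{a^{3}} \;+\; \frac{\rho_{r,0}}{a^{4}} \;+\; \rho_{\Lambda}\right).
\]
% Matter and radiation dilute as the scale factor a grows, while rho_Lambda does not.
% Once rho_Lambda dominates, H approaches the constant sqrt(8 pi G rho_Lambda / 3)
% and the scale factor grows exponentially, a(t) ~ exp(H_Lambda t).
```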

(38:35) Now, you can fiddle with other parameters to make sure those things get created at earlier times. But if you fix all other parameters and say that the cosmological constant maybe is different in different parts of the universe, then those patches of the universe would expand very differently. And the ones with larger cosmological constants would essentially have nothing in them.

(38:58) And so there it's an even more trivial thing that we live where the stuff is. So we have to live in a place where exponential expansion didn't take over before stuff was formed.

(39:11) So let's just, as a slight side note, look at our universe and our cosmological constant, the one that we have experimental evidence for. You can say that if I wait another 14 billion years, or let's say 140 billion years or a trillion years, the exponential expansion would take over. The cosmological constant would be the dominant energy density in the universe. And you could ask, would our galaxy be ripped apart, or the cluster be ripped apart?

(39:40) And the answer is no; in fact, things that are gravitationally bound, like our galaxy, compete well. They do better than the cosmological constant. So locally, the gravity of the local matter in the galaxy is still more important than the cosmological constant. But things that are not gravitationally tied to each other are going to be ripped apart, or pulled apart, by exponential expansion.

(40:11) So all you need is that structure forms before the cosmological constant takes over. And weirdly, in our universe, the cosmological constant is roughly just big enough, keeping all other parameters fixed, such that matter just had a chance to form. We could have seen zero cosmological constant, or never detected it, and we'd say, oh, there's some magical reason why the cosmological constant is zero. We could have had a cosmological constant which was, say, 100 times or 1,000 times bigger, in which case galaxies wouldn't have formed. There wouldn't have been time for the gravitationally bound states that create our worlds to form. And therefore there would be no interesting structure in the universe and no life. So the cosmological constant landed as small as it had to be, but as big as it could be.
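The "as small as it had to be, but as big as it could be" statement can be phrased as a rough inequality in the spirit of Weinberg's structure-formation argument; the form below is a paraphrase for orientation, not a quote from the episode.

```latex
% Rough anthropic/structure-formation bound on the cosmological constant (paraphrase).
\[
  \rho_{\Lambda} \;\lesssim\; \rho_{m}(z_{\mathrm{form}})
  \;=\; \rho_{m,0}\,\bigl(1+z_{\mathrm{form}}\bigr)^{3},
\]
% i.e. the constant must not come to dominate before the redshift z_form at which
% galaxies condense. For z_form of a few, this allows rho_Lambda to exceed today's
% matter density only by a factor of order a hundred, roughly where the measured
% value sits.
```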

(41:05) And when that was measured in 1998, it was a sort of wake-up call that, oh, maybe there are parameters whose values are associated with the fact that we're biased by our observable universe, and not with what would be a natural outcome of deep, high-energy underlying laws of nature. And so then you would imagine that, OK, maybe it's the same accident: this patch has a tiny cosmological constant due to a bizarre amount of cancellation, but [that's] not bizarre if there are 10^500 universes, or 10^500 patches where the cosmological constant is different. Most of those patches would have nothing in them, no structure. There could be even rarer parts or patches of the universe with a much tinier cosmological constant. But we're in the most populous type of patch of the universe, where the cosmological constant is as big as it could be without destroying us, or without causing us not to form.

(42:12) So the fine-tuning/landscape of possible universes, the multiverse, all of that is a statistical argument. We don't know what the priors are; we don't know what parameters we're supposed to vary. But if you keep everything fixed, and you just vary the cosmological constant, you get something interesting, which is that the cosmological constant is not far from the value that you would see in a typical universe in which life exists or galaxies exist.

Strogatz (42:44): Wow, it really is very philosophically mind-blowing, thinking about all this, that our existence… I mean, it makes this anthropic principle, which goes back, I don't know, is it from the 1960s or something, seem more important all the time.

Kaplan (42:58): I think that the anthropic principle often was more tautological. But here, it really is attempting to remove observational bias, which is to say that, you know, we have to take into account that the measurements we're making are the ones we have access to. And the fact that we're here means that we have access to a certain type of measurement or a certain range of parameters. We would not have been able to measure a cosmological constant which is, you know, hundreds of orders of magnitude bigger. It would not have been possible, because there would be no structure in existence.

(43:37) You could postulate that there's a different form of life that lives at a different timescale, you know, made of different particles that live long enough, but they don't have to live too long. That could be true. And then you could ask questions: Why aren't we that life? Or if this whole anthropic argument is true, does that tell us that the first real structure comes down to the things made of atoms?

(44:02) It's very hard to parse what can be asked scientifically at that point, and whether what you're really asking about is the initial conditions of the universe. And famously, with physics, you get two things: You get dynamics, which are the laws of nature, the dynamical laws, which are differential equations, and then you get initial conditions. And the laws of nature are something you can test again and again and again. But the initial conditions are whatever you get. And if there's a strong bias in the initial conditions toward certain things by the way we're doing the measurements, then we're going to get things that are not fundamental. They're, in a sense, historical: They depend on the history of this part of the universe. And that could be aberrant or anomalous with respect to a larger scope of what the universe could be.

Strogatz (44:56): These ideas are so fascinating. Can I ask you to just look into your crystal ball, and we'll end with that. What would notions of the multiverse, cosmological constant, Higgs, all these ideas… but especially the multiverse: where's it gonna lead us in physics?

Read this article:

Are There Reasons to Believe in a Multiverse? - Quanta Magazine

Some black holes may actually be tangles in the fabric of space-time … – Livescience.com

Physicists have discovered a strange twist of space-time that can mimic black holes until you get too close. Known as "topological solitons," these theoretical kinks in the fabric of space-time could lurk all around the universe and finding them could push forward our understanding of quantum physics, according to a new study published April 25 in the journal Physical Review D.

Black holes are perhaps the most frustrating objects ever discovered in science. Einstein's general theory of relativity predicts their existence, and astronomers know how they form: All it takes is for a massive star to collapse under its own weight. With no other force available to resist it, gravity just keeps pulling in until all the star's material is compressed into an infinitely tiny point, known as a singularity. Surrounding that singularity is an event horizon, an invisible boundary that marks the edge of the black hole. Whatever crosses the event horizon can never get out.

But the main problem with this is that points of infinite density can't really exist. So while general relativity predicts the existence of black holes, and we have found many astronomical objects that behave exactly as Einstein's theory predicts, we know that we still don't have the full picture. We know that the singularity must be replaced by something more reasonable, but we don't know what that something is.

Related: Are black holes wormholes?

Figuring that out requires an understanding of extremely strong gravity at extremely small scales, something called quantum gravity. To date, we have no viable quantum theory of gravity, but we do have several candidates. One of those candidates is string theory, a model that suggests all the particles that make up our universe are really made of tiny, vibrating strings.

To explain the wide variety of particles inhabiting our universe, those strings can't just vibrate in the usual three spatial dimensions. String theory predicts the existence of extra dimensions, all curled up on themselves at some unfathomably small scale, so small that we can't tell those dimensions are there.

And that act of curling up extra spatial dimensions at incredibly tiny scales can lead to very interesting objects.

In the new study, researchers proposed that these compact extra dimensions can give rise to defects. Like a wrinkle that you just can't get out of your shirt no matter how much you iron it, these defects would be stable, permanent imperfections in the structure of space-time: a topological soliton. The physicists suggested that these solitons would largely look, act and probably smell like black holes.

The researchers studied how rays of light would behave when passing near one of these solitons. They found that the solitons would affect the light in almost the same way as a black hole would. Light would bend around the solitons and form stable orbital rings, and the solitons would cast shadows. In other words, the famous images from the Event Horizon Telescope, which zoomed in on the black hole M87* in 2019, would look almost exactly the same if there were a soliton at the center of the image rather than a black hole.
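For reference, the shadow that a soliton would mimic is set, for an ordinary (Schwarzschild) black hole, by the photon sphere and its critical impact parameter; the standard formulas below are included for context and are not results from the new study.

```latex
% Photon sphere and shadow size of a Schwarzschild black hole of mass M (standard results).
\[
  r_{\mathrm{ph}} \;=\; \frac{3GM}{c^{2}},
  \qquad
  b_{c} \;=\; \frac{3\sqrt{3}\,GM}{c^{2}} .
\]
% Light arriving with impact parameter smaller than b_c is captured, producing the dark
% shadow and bright ring seen by the Event Horizon Telescope; an object that bends light
% in nearly the same way would reproduce that appearance without having an event horizon.
```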

But up close the mimicry would end. Topological solitons are not singularities, so they do not have event horizons. You could get as close as you wanted to a soliton, and you could always leave if you wanted to (assuming you packed enough fuel).

Unfortunately we have no black holes close enough to dig around in, and so we can only rely on observations of distant objects. If any topological solitons are ever discovered, the revelation wouldn't just be a major insight into the nature of gravity, but it would enable us to directly study the nature of quantum gravity and string theory as well.

Read the rest here:

Some black holes may actually be tangles in the fabric of space-time ... - Livescience.com

Scrutinizing joint remote state preparation under decoherence … – Nature.com

The rest is here:

Scrutinizing joint remote state preparation under decoherence ... - Nature.com

A New Subatomic Particle The Most Beautiful Strongly Bound … – SciTechDaily

Dibaryons, fascinating entities in nuclear and particle physics, represent a state of matter where two baryons, each consisting of three quarks, are bound together. The concept was first proposed in the context of quantum chromodynamics (QCD), the theory describing strong interactions between quarks and gluons.

Scientists from the Tata Institute of Fundamental Research and The Institute of Mathematical Sciences have predicted the existence of a dibaryon particle built entirely from bottom quarks. This particle, termed D6b, is predicted to have a binding energy 40 times stronger than that of the only known stable dibaryon, the deuteron. This discovery, made possible through Quantum Chromodynamics on space-time lattices, could provide valuable insights into the nature of strong forces and quark mass interactions.

Dibaryons are subatomic particles composed of two baryons. Their formation, which occurs through interactions between baryons, is fundamental to big-bang nucleosynthesis and to nuclear reactions, including those happening within stars, and it bridges the gap between nuclear physics, cosmology, and astrophysics. Fascinatingly, the strong force, responsible for the formation and the majority of the mass of nuclei, facilitates the formation of a plethora of different dibaryons with diverse quark combinations.

Nevertheless, these dibaryons are not commonly observed; the deuteron is currently the only known stable dibaryon.

To resolve this apparent dichotomy, it is essential to investigate dibaryons and baryon-baryon interactions at the fundamental level of strong interactions. In a recent publication in the journal Physical Review Letters, physicists from the Tata Institute of Fundamental Research (TIFR) and The Institute of Mathematical Sciences (IMSc) have provided strong evidence for the existence of a deeply bound dibaryon, entirely built from bottom (beauty) quarks.

Using the computational facility of the Indian Lattice Gauge Theory Initiative (ILGTI), Prof. Nilmani Mathur and graduate student Debsubhra Chakraborty from the Department of Theoretical Physics, TIFR, and Dr. M. Padmanath from IMSc have predicted the existence of this subatomic particle. The predicted dibaryon (D6b) is made of two triply bottom Omega (bbb) baryons, having the maximal beauty flavor.

Schematic picture of the predicted dibaryon, D6b, made of two Omega baryons. Credit: Nilmani Mathur

Its binding energy is predicted to be as much as 40 times stronger than that of the deuteron, which perhaps entitles it to be called the most strongly bound beautiful dibaryon in our visible universe. This finding elucidates intriguing features of the strong force in baryon-baryon interactions and paves the way for further systematic study of the quark-mass dependence of baryon-baryon interactions, which could possibly explain the emergence of binding in nuclei. It also motivates the search for such heavier exotic subatomic particles in next-generation experiments.
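As a quick sanity check on the quoted factor, the deuteron's binding energy is the well-known experimental value of about 2.2 MeV, so "40 times stronger" puts the prediction at order tens of MeV; the multiplication below is only a reading of the claim, not a number taken from the paper.

```latex
% Order-of-magnitude reading of the quoted factor (not a figure from the paper).
\[
  B_{\mathrm{deuteron}} \approx 2.2\ \mathrm{MeV}
  \quad\Longrightarrow\quad
  40 \times 2.2\ \mathrm{MeV} \approx 90\ \mathrm{MeV},
\]
% i.e. a binding energy of order tens of MeV, far deeper than that of any known dibaryon.
```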

Since the strong force is highly non-perturbative in the low energy domain, there is no first-principles analytical solution as yet for studying the structures and interactions of composite subatomic particles like protons, neutrons, and the nuclei they form. Formulation of quantum chromodynamics (QCD) on space-time lattices, based on an intricate amalgamation between a fundamental theory and high-performance computing, provides an opportunity for such study.

Not only does it require a sophisticated understanding of the quantum field-theoretic issues, but the availability of large-scale computational resources is also crucial. In fact, some of the largest scientific computational resources in the world are being utilized by lattice gauge theorists who are trying to solve the mystery of strong interactions of our Universe through their investigations inside the femto-world (within a scale of about one million-billionth of a meter).

Lattice QCD calculations can also play a crucial role in understanding the formation of nuclei after the Big Bang and their reaction mechanisms, in aiding the search for physics beyond the Standard Model, and in investigating matter under the extreme conditions of high temperature and density similar to those in the early stages of the Universe after the Big Bang.

Reference: "Strongly Bound Dibaryon with Maximal Beauty Flavor from Lattice QCD" by Nilmani Mathur, M. Padmanath, and Debsubhra Chakraborty, 16 March 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.111901

Here is the original post:

A New Subatomic Particle The Most Beautiful Strongly Bound ... - SciTechDaily

Dark energy is the product of quantum universe interaction | Artyom Yurov and Valerian Yurov – IAI

Quantum objects make up classical objects. But the two behave very differently. The collapse of the wave function prevents classical objects from doing the weird things quantum objects do, like quantum entanglement or quantum tunneling. Is the universe as a whole a quantum object or a classical one? Artyom Yurov and Valerian Yurov argue the universe is a quantum object, interacting with other quantum universes, with surprising consequences for our theories about dark matter and dark energy.

1. The Quantum Wonderland

If scientific theories were like human beings, the anthropomorphic quantum mechanics would be a miracle worker, a brilliant wizard of engineering, capable of fabricating almost anything, be it a laser or a complex integrated circuit. By the same token, this wizard of science would probably look and act crazier than the March Hare and the Mad Hatter combined. The fact of the matter is, the principles of quantum mechanics are so bizarre and unintuitive, they seem to be utterly incompatible with our inherent common sense. For example, in the quantum realm, a particle does not journey from point A to point B along some predetermined path. Instead, it appears to traverse all possible trajectories between these points, every single one! In this strange realm, items might vanish right in front of an impenetrably high barrier only to materialize on the other side (this is called quantum tunneling). In the quantum realm, two particles separated by miles or even light years somehow keep in touch via the link we call quantum entanglement. And, of course, we cannot talk about the quantum Wonderland without mentioning that a quantum object might (and usually does) exist in a few different places at the same time. For example, when we think about an electron in the hydrogen atom, we are tempted to imagine it as a small satellite swiftly rotating around a heavy atomic nucleus. But this image is all wrong! Instead, we have to try and imagine an electron simultaneously existing in infinitely many places all around the nucleus. This fascinating picture is called an electron cloud, and we know for a fact that it is the correct picture. We know this because identical objects coexisting in a few different places produce a physical phenomenon known as interference, which is physically observable in a lab. The fact of these observations proves two things at once: first, that the physicists who study quantum mechanics have not gone completely mad (their relatives might disagree on this one), and secondly, that the physical machinery of our universe defeats even the most unbridled human imagination.

With all that in mind, it is quite natural to feel relief at the thought that no matter how strange the quantum laws are, they are safely confined to the realm of atoms and molecules, and simply cannot be encountered in normal everyday life. How precious it is to be able to lie on a sofa with no worries that it might suddenly dematerialize from underneath you at the most inopportune moment… Speaking of which, why doesn't your sofa have any inclination to suddenly tunnel over the wall of your room? And what prevents a piece of apple pie from being entangled with the rest of the pie? We know that both the pie and the sofa are formed by lots and lots of atoms interacting with each other. So why do these atoms abide by one set of laws, while the pies and the sofas obey a very different legislation?

Truth be told, this is a very tough question. Niels Bohr, a Danish physicist and one of the founding fathers of quantum mechanics, spent quite a lot of time pondering it and eventually came to the following conclusion. According to Bohr, the difference between a pie and a particle is indicative of a more general divide. In fact, all the objects in our universe can be stacked into two distinct groups: the classical and the quantum. We as observers belong to the first group, and so do the instruments we use while measuring the properties of something that belongs to the second group. This latter, quantum group is comprised of very small objects (such as atoms and particles), which can exist in many places at the same time, quite unlike the classical objects, which cannot. Furthermore, the properties of a quantum object are effectively stored in a very special mathematical object called a wave function. The wave functions are the solutions of one differential equation, derived by the famous Austrian physicist Erwin Schrödinger and henceforth named after him. The wave functions are very useful in understanding the weird properties of quantum objects. For example, when we say that an electron exists in two states at the same time, we mean that its wave function has two separate terms, one per possible state of that electron. Mathematicians call this property a superposition. Now, suppose we measure such an electron with a classical instrument. According to Bohr's interpretation (usually referred to as the Copenhagen interpretation), this very act destroys the superposition: one of the terms vanishes, leaving our electron in a unique classical state. This is called a collapse of the wave function and can be used, for instance, to explain why your oak cupboard grimly remains in its corner of the room instead of carelessly existing in every point therein; simply put, the wave function of the cupboard must have long since collapsed to a classical state!

So, everything appears to be in order and explainable, right? Not exactly. We still have a small nagging problem: the collapse itself. It is supposed to be a physical process, but it cannot be derived from the Schrödinger equation. Worse still, it has so far resisted all attempts to explain it and place it within the framework of quantum mechanics. It sat within the Bohr interpretation like a weirdly shaped metal piece in a box of plastic Lego parts, begging a natural question: does it actually belong here?

Some scientists have decided that the answer must be no. They aimed to show that it is possible to explain quantum mechanics without resorting to an ill-defined concept such as the collapse of a wave function. And they succeeded! In fact, we now know that there are two ways to do it. The first one, independently proposed by the physicists Louis de Broglie and David Bohm, reduces the quantum mechanical effects to a special quantum force which is simply added to the equations of classical mechanics. In the de Broglie-Bohm interpretation (often called the pilot wave interpretation), the quantum force is the one responsible for pulling the particles over the potential barrier (quantum tunneling) and for all the effects commonly associated with quantum interference. In reality, there are no superpositions and no electron clouds, argued de Broglie and Bohm; these are merely vague approximations invented out of desperation, simply because we did not think to look for an actual perpetrator of all the quantum tricks: namely, the quantum force. For instance, consider once again an electron in a hydrogen atom. We have left it in a rather precarious state, being smeared all around the positively charged atomic nucleus, simultaneously coexisting everywhere all at once. Not in the de Broglie-Bohm theory! Once we add a new quantum force into the mix, the electron immediately gets localized; in fact, it ends up being completely stationary, pinned in place by the competing forces of quantum repulsion and electromagnetic attraction to the atomic nucleus!
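For readers who want to see what the "quantum force" looks like on paper: writing the wave function as ψ = √ρ e^{iS/ħ}, the de Broglie-Bohm theory adds to the classical potential V a quantum potential Q. The expressions below are the standard textbook ones, not formulas quoted in the article.

```latex
% Standard de Broglie-Bohm quantum potential and the resulting equation of motion.
\[
  Q \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
  \qquad
  m\,\ddot{\mathbf{x}} \;=\; -\nabla\bigl(V + Q\bigr).
\]
% Wherever the amplitude of the wave function varies sharply, Q supplies the extra push
% or pull that this picture uses to account for tunneling and interference.
```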

The second approach, proposed by the American physicist Hugh Everett III, is radically different. According to it, the superpositions are real, but the collapse is not. In other words, a measurement of a quantum object does not destroy terms in the wave function. If prior to a measurement an electron was in a superposition of two different states, both of those states must survive the measurement. What happens is that these two different states end up in two different parallel worlds, identical to each other in every respect except for one thing: the state of our hapless electron. Thus, according to the many-worlds interpretation, when we measure the spin of an electron (which can be either up or down), the universe splits into two: in one of them we observe the up spin, while our doppelganger in the parallel universe perceives the down spin.

At first glance, Everett's picture seems to be much more extravagant and significantly less plausible than the de Broglie-Bohm interpretation. But to the eye of a physicist, it is the latter that is much more suspect, as it fails to satisfactorily explain either the source or the physical nature of the proposed quantum forces. On the other hand, over the years Everett's many-worlds interpretation slowly but surely gained popularity among theoretical physicists. Coldly received at first by the proponents of the Copenhagen interpretation, who found Everett's lack of faith in the wave function's collapse disturbing, it gained ground as new evidence began to sway the opinion of the public. One of the strongest pieces of evidence was a discovery made independently by two prominent physicists, Heinz-Dieter Zeh and Wojciech Zurek. They were trying to understand what happens when a quantum system interacts with its environment and found a curious effect called decoherence. To explain what it is, imagine an electron in a closed room. Next, suppose that it exists in a superposition of being in two places at once: say, by the door on the east and near the window on the west. Naturally, any realistic room cannot be completely empty; even in a clean room we can find photons, a few dust particles, some residual molecules of oxygen and carbon dioxide, etc. For simplicity, let us restrict ourselves to photons. Zeh and Zurek have shown that when a single photon interacts with our electron, it drastically reduces the level of quantum interference. To the imperfect eye of a classical observer, this looks as if a collapse of the wave function took place and the electron became firmly localized (for example, near the window). But in reality, there was no collapse: the superposition remains, albeit in a significantly weakened form. This is what is called decoherence. One can show that under normal conditions (room temperature, pressure and moisture) any macroscopic object undergoes extremely rapid decoherence, which renders its quantum abnormalities all but imperceptible.
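A minimal way to picture decoherence is a two-state density matrix whose off-diagonal (interference) terms decay while the populations stay put; the exponential form and the timescale τ_D below are a schematic toy model, not the Zeh-Zurek calculation itself.

```latex
% Toy model of decoherence: interference terms are suppressed, populations unchanged.
\[
  \rho(t) \;=\;
  \begin{pmatrix}
    |\alpha|^{2} & \alpha\beta^{*}\,e^{-t/\tau_{D}} \\[2pt]
    \alpha^{*}\beta\,e^{-t/\tau_{D}} & |\beta|^{2}
  \end{pmatrix}.
\]
% For a macroscopic object bombarded by photons and air molecules, tau_D is fantastically
% short, so the off-diagonal terms vanish almost instantly even though, strictly speaking,
% the superposition never fully disappears.
```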

This was the generally accepted state of affairs in the field of quantum mechanics for the last few decades. However, an interesting discovery made in 2014 by a group of theoretical physicists from Australia and the US has opened a new and very intriguing possibility: that the universe at large might actually behave like a quantum object! In order to explain why, we'll have to take a little detour into contemporary cosmology, the science of the origins and the fate of the universe.

2. The Cosmological Phantom Menace

It is difficult to name another branch of physics that arose, developed and grew in popularity as fast as cosmology did in the course of the 20th and 21st centuries. We have truly learned a lot during this time. For example, we now know that the universe is about 13-14 billion years old. In its early infancy, the universe underwent a mind-blowingly fast expansion, aptly named cosmological inflation (a term borrowed from economics). In a fraction of a second, a region of space the size of a pinhead and weighing 1 milligram exploded in size, forming the entire observable universe. Such a rapid expansion radically smoothed the distribution of matter in the region, making the universe extremely homogeneous and isotropic. Incidentally, this turned our universe into a relatively simple object of study, named an FLRW universe, an acronym lovingly assembled out of the family names of the four scientists who first studied such universes: Alexander Friedmann (USSR), Georges Lemaître (Belgium), Howard Robertson (USA) and Arthur Walker (UK).

One can surmise that a rapid expansion must have blown up any pre-existing imperfection. Think of a balloon with a little picture of a mouse, the mouse representing the imperfection. If we blow up the balloon, the picture would also grow in size, eventually rivalling not only real-world mice, but also a cat and even a medium-sized dog. In the early universe the role of such imperfections was played by the tiny, ever-present quantum fluctuations. Normally these fluctuations are too small and too faint to be noticed, but cosmological inflation is anything but normal. Under its strain the vacuum fluctuations grew to become comparable in size to the contemporary galaxy nuclei, which, in fact, they eventually produced. Every galaxy can be traced to an embryonic vacuum fluctuation, caught and blown up by the cosmological expansion in the early universe. By extension, every star and every planet in our galaxy owes its existence to these quantum fluctuations. So, when you thank your lucky stars, don't forget those tiny fellows as well!

Another interesting thing that we have learned about the universe is how little we matter, literally. It appears that all the visible matter (such as photons, protons, neutrinos, etc.) constitutes a paltry 5% of the total ledger. The remaining 95%, hidden by the darkness of our ignorance, consists of two components: 27% of it is called dark matter (DM) and 68% is so-called dark energy (DE). The former behaves like ordinary atomic matter, except that it is non-luminous. The latter, however, is something else. The DE behaves like an ideal fluid with a negative pressure, which fills the entire universe and causes it to expand with acceleration. There are different hypotheses regarding the nature of this strange fluid. Many scientists claim that it is a manifestation of vacuum energy. Others insist that it may be a product of a hypothetical quintessence field, varying in space and time. Some hypothesize that it might be a very special type of quintessence field, called phantom energy, in which case the universe will expand so fast that it will risk literally tearing itself apart in a cosmological event morbidly called the Big Rip singularity.
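In the standard cosmological bookkeeping, these alternatives are distinguished by the equation-of-state parameter w = p/ρ of the dark-energy fluid; the relations below are the usual ones and are included only to make the phantom-energy remark concrete.

```latex
% Acceleration equation and the usual classification of dark-energy candidates.
\[
  \frac{\ddot a}{a} \;=\; -\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr),
  \qquad
  w \;\equiv\; \frac{p}{\rho}.
\]
% Accelerated expansion requires w < -1/3. A pure cosmological constant (vacuum energy)
% has w = -1, quintessence typically has -1 < w < -1/3, and phantom energy has w < -1,
% the case that can end in a Big Rip.
```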

Then again, there might be one other explanation. But in order to understand it, we'll have to travel back in time, to the year 2014.

3. The Many Interacting Universes

In 2014 three physicists, Michael Hall, Dirk-André Deckert and Howard Wiseman, made a fascinating discovery: they managed to unite the de Broglie-Bohm and Everett interpretations, constructing a brand-new model called the Many Interacting Worlds (MIW) interpretation. They proposed that our universe is indeed one of many universes, just like in the many-worlds interpretation of Everett. But this time there was a little twist: while Everett treated the different universes as distinct and independent from each other, Hall et al. assumed that the universes might actually influence one another. And how exactly do they do that? Why, via the quantum forces proposed by de Broglie and Bohm, of course! Here is how it works: for any object (say, the aforementioned electron in a hydrogen atom) there exists a number of doppelgangers, doubles from the parallel universes (different versions of our electron). We cannot see those doppelgangers, because they interact only with each other. What we can see is the result of that interaction, which manifests itself as an additional repulsive force. In fact, according to MIW, all quantum effects that affect an object are produced by the forces of interaction with the object's doubles from other universes. Interestingly, the strength of this force is determined by how similar the doubles are to each other. When they are not very similar (have different energies, are located in different places, etc.) the quantum force is diminished. If the quantum force becomes so small that it gets downright negligible compared to the normal classical forces, our object ceases to be quantum and becomes purely classical. This is what happens when an object in question consists of many particles with a lot of degrees of freedom. For example, consider a soccer ball. As a macroscopic object it consists of an enormous number of atoms; if we want it to behave quantum-mechanically (for instance, tunnel right through the enemy team's goalkeeper), most of those atoms must be extremely similar to all their doubles in the parallel universes. Which is, of course, a statistical impossibility, and is therefore not recommended as a viable method of scoring goals.

So, MIW is a good sport when it comes to explaining why we see no discernible quantum effects on the macroscopic scale. But what about the universe itself? We have already discussed how cosmological inflation in the early universe produced a very smooth, homogeneous and isotropic FLRW universe. It is in fact so uniform that its history is essentially the evolution of a single, time-dependent parameter called the scale factor. In a strange way, our universe as a whole is fundamentally much simpler than a soccer ball or any other macroscopic object! But we have already learned that the simpler the object, the stronger the quantum forces, even if the object itself is as large as a universe! All we have to do is consider a multiverse consisting of many different FLRW universes with various scale factors and add a repulsive interaction force. Following this idea, we have derived the cosmological equations for a universe interacting with its nearby neighbours via the quantum force. To say that what we've got has exceeded our expectations would be an understatement. The preliminary results predicted that the quantum forces might act like a dark energy of a special sort! And not only that: the parallel universes closest to ours might also manifest themselves as dark matter. Imagine our astonishment!

Naturally, this is just the beginning of the story. Our model requires further adjustments and verification. At this juncture, we cannot claim that the mysteries of dark matter and dark energy are resolved at last. We have merely pointed out a new and promising avenue of research. And yet we cannot shake the feeling of awe when we think that our world, so familiar and clearly comprehensible, the world of sofas and soccer balls, is but a tiny classical sliver sandwiched between the two frighteningly strange quantum realms of atoms and universes. Ancient Greek philosophers believed that the same laws must govern the very large and the infinitely small. Maybe they were not too far from the truth, after all?

Follow this link:

Dark energy is the product of quantum universe interaction | Artyom Yurov and Valerian Yurov - IAI

ALCF Supercomputers Help Design of Peptide-Based Drugs with … – HPCwire

A team from the Flatiron Institute is leveraging ALCF computing resources to advance the design of peptides with the aim of speeding up the search for promising new drugs.

"Existing peptide drugs rank among our best medicines, but almost all of them have been discovered in nature. They're not something we could design rationally; that is, until very recently," said Vikram Mulligan, a biochemistry researcher at the Flatiron Institute.

Mulligan is the principal investigator on a project leveraging supercomputing resources at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility located at DOE's Argonne National Laboratory, with the aim of improving the production of peptide drugs. Peptides are chains of amino acids similar to those that form proteins, but shorter.

The original motivation for the research lies in Mulligan's postdoctoral work at the University of Washington's Baker Lab, in which he sought to apply what were determined to be accurate methods for designing proteins that could fold in specific ways.

"Proteins are the functional molecules in our cells; they're the molecules responsible for all the interesting cellular activities that take place," he explained. "It is their geometry that dictates those activities."

Naturally occurring proteins, those produced by living cells, are built from just 20 amino acids. In the laboratory, however, chemists can synthesize molecules from thousands of different building blocks, allowing for innumerable structure combinations. This effectively means that a scientist might be able to manufacture, for example, enzymes capable of catalyses that no natural enzyme could perform.

Mulligan is particularly interested in making small peptides that can act as drugs that bind to some target, either in the human body or in a pathogen, and that treat a given disease by altering the function of that target.

To this end, via a project supported through DOE's INCITE program, Mulligan is using ALCF computational resources, including the Theta supercomputer, to advance the design of new peptide compounds with techniques including physics-based simulations and machine learning. His team's work at the ALCF is driven by the Rosetta software suite, whose applications and libraries enable protein and peptide structure prediction and design, RNA fold prediction, and more.

Between small-molecule drugs and protein drugs

Mulligans research aims to treat a broad spectrum of diseases, as evidence suggests that peptide-based compounds have the potential to operate as an especially versatile class of drugs.

Small-molecule drugs, by contrast, are easily and effectively administered, but a primary therapeutic limitation is that they often display as much affinity for other sites in a patient as they do for the intended target. That lack of specificity is a common source of side effects for the patient taking the drugs.

Small-molecule drugs are like simple luggage keys in that they can unlock more than what they're made for, Mulligan said.

Larger protein drugs such as antibodies, meanwhile, have the advantage of acting on their targets with a high degree of specificity, on account of their (the drugs') comparatively large surface area. Their disadvantages, however, include an inability to cross biological barriers (such as the gut-blood barrier or the blood-brain barrier) or to pass through cells, thereby limiting their targets to extracellular proteins. Given that immune systems have evolved to recognize foreign proteins and remove them from the body, protein drugs must evade these highly efficient mechanisms, an additional challenge for researchers.

Peptide drugs split the difference in terms of size, combining certain advantages of small-molecule drugs with those of protein drugs: they're small enough to be permeable and to evade the immune system, but large enough that they're unlikely to bind to much aside from the intended targets.

To design such drugs, Mulligans work applies methods for protein design to what are called noncanonical design molecules, which comprise nonnatural building blocks and fold as proteins do, with the potential to bind to a target.

Physics-based simulations

While detailed structural information for protein design can be inferred from the 200,000 or so protein structures that have been solved experimentally, such information is much harder to obtain when building peptides in the laboratory. At present, a mere two dozen peptide structures have been solved, several of which were designed by Mulligan himself or by researchers applying his methods.

Because many more peptide structures must be solved before machine learning can guide the design of new peptide drugs, the team continues to rely on physics-based simulations for the time being and generates new strategies both for sampling conformations and the exploration of possible amino-acid sequences.

With peptides as with proteins, the particular sequence of amino acids determines how the molecule folds up into a specific 3D structure, and this specific 3D structure determines the molecule's function, Mulligan said. If you can get a molecule to rigidly fold into a precise shape, then you can create a compound that binds to that target. And if that target is, say, the active site of an enzyme, then you can inhibit that enzyme's activity.

That was the idea as conceived in 2012. It took a long time to get it to work, but we're now at the point where we can design, relatively robustly, folding peptides constructed from mixtures of natural and nonnatural amino-acid building blocks.

A particular success among the handful of molecules that Mulligan's team has designed binds to NDM-1, or New Delhi metallo-beta-lactamase 1, an enzyme responsible for antibiotic resistance in certain bacteria.

The notion here was that if we could make a drug that inhibits the NDM-1 enzyme, this drug could be administered alongside conventional antibiotics, reviving the usefulness of those sidelined by resistance, he said.

However, while this drug has progressed from computer-aided design to laboratory manufacture, its ability to cross biological barriers must be fine-tuned before it can proceed through clinical trials.

Mulligan explained, The next challenge is to try to make something that hits its target and also has all the desirable drug properties, like good pharmacokinetics, good persistence in the body, and the ability to pass biological barriers or enter cells and get wherever it needs to go.

Furthermore, different biological barriers represent different degrees of difficulty when designing drugs. The low-hanging fruit are targets in the gut, because those can be reached and acted on simply via oral medicine, he said. The most challenging cases are intracellular cancer targets, which necessitate that the drugs passively diffuse into cells, an ongoing problem in science.

Approximating with proteins

The present physics-based methods for design include quantum chemistry calculations, which compute the energy of molecules with extreme precision by solving the Schrödinger equation, the central equation of quantum mechanics. Because such solutions have high computational costs that grow exponentially as the objects of study increase in size and complexity, they have historically been obtained for only the smallest molecular systems. To try to minimize these costs, the research team employs an approximation known as a force field.

We pretend that the atoms in our system are small spheres exerting forces on each other, Mulligan explained, a classical approximation that reduces our accuracy but gives us a lot of speed and makes tractable a lot of otherwise intractable equations.
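
As an illustration of the idea (a minimal sketch, not the actual force field used in Rosetta or by Mulligan's team), a classical force field scores a conformation by summing simple pairwise terms over atoms treated as charged spheres, for example a Lennard-Jones term plus a Coulomb term, with purely illustrative parameters:

```python
import numpy as np

# Toy pairwise force-field energy: Lennard-Jones + Coulomb.
# Parameters (epsilon, sigma, charges) are illustrative, not real ones.
def pair_energy(r, epsilon=0.2, sigma=3.4, q1=0.1, q2=-0.1, coulomb_k=332.06):
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)  # van der Waals
    coulomb = coulomb_k * q1 * q2 / r                            # electrostatics
    return lj + coulomb

def total_energy(coords, charges):
    """Sum pairwise energies over all atom pairs (coordinates in angstroms)."""
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += pair_energy(r, q1=charges[i], q2=charges[j])
    return energy

# Example: three atoms in a line with alternating partial charges.
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
charges = [0.1, -0.2, 0.1]
print(total_energy(coords, charges))
```

Evaluating such sums costs only polynomial time in the number of atoms, which is why force fields are so much cheaper than solving the Schrödinger equation directly, at the price of accuracy.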

The accuracy of the method diminishes the less the peptide building blocks resemble conventional amino acids: building blocks that bear strong similarities to conventional amino acids permit the use of force-field approximations generated by training machine learning applications on protein structures, but the applicability of those approximations becomes tenuous when the building blocks have exotic features or contain certain chemical elements.

As such, one goal of the research team is to incorporate quantum chemistry calculations into the design and validation pipelines while minimizing to the greatest possible extent the tradeoffs between accuracy and precision inherent to approximations.

Benchmarking and validation

Incorporation of the calculations requires benchmarking and testing so as to determine the appropriate level of theory and which approximations are appropriate.

There are all sorts of approximations that quantum chemists have generated to try to scale quantum chemistry methods, Mulligan said. Many of them are quite good, and are established practices within the quantum chemistry community, but they have yet to take root in the molecular-modeling community. With something like the force-field approximation, we need to ask: how do we use it efficiently for molecular-modeling tasks? Under what circumstances is the force field good enough, and under what circumstances is it not, so that we would have to employ a higher level of quantum chemistry theory? To complete the necessary benchmarking and all the concomitant trial and error, we need powerful computational resources, which is where leadership-class systems are especially important. Helpfully, many of our calculations are fairly parallelizable.

Validation strategies involve taking a designed peptide with a certain amino-acid sequence and altering the sequence to create all sorts of alternative conformations to identify those that result in the desired fold.

Such approaches to conformational sampling, in which countless molecular arrangements are explored, are similarly parallelizable, enabling the researchers to carry out the conformation analyses across thousands of nodes on the ALCF's Theta supercomputer. These brute-force calculations lay the foundation for data repositories that the team will use to train machine-learning models.
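
As a rough illustration of why this workload parallelizes so well, here is a minimal sketch (purely hypothetical, not the Rosetta protocol) in which independent random conformations are generated and scored in parallel; score_conformation is a made-up stand-in for a force-field evaluation:

```python
import random
from multiprocessing import Pool

def score_conformation(torsions):
    """Hypothetical stand-in for a force-field energy evaluation of one conformation."""
    # Toy score: penalize deviation from an arbitrary target backbone geometry.
    target = [-60.0, -45.0] * (len(torsions) // 2)
    return sum((t - u) ** 2 for t, u in zip(torsions, target))

def sample_one(seed, n_torsions=20):
    """Draw one random conformation (backbone torsion angles) and score it."""
    rng = random.Random(seed)
    torsions = [rng.uniform(-180.0, 180.0) for _ in range(n_torsions)]
    return score_conformation(torsions), torsions

if __name__ == "__main__":
    # Each sample is independent, so the work farms out trivially across processes
    # (or, at much larger scale, across the nodes of a supercomputer).
    with Pool() as pool:
        results = pool.map(sample_one, range(10_000))
    best_score, best_torsions = min(results)
    print("best score:", best_score)
```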

Part of the challenge of machine learning is figuring out how to use it well, Mulligan said. Because a machine-learning model is going to make mistakes regardless of how it's programmed, I tried to train this one to generate more false positives than false negatives: to identify more things as peptides that fold than it should. It's easier to sift through extra hay, if you will, further down the line in our search for a needle.
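
One common way to get that kind of bias, sketched below under the assumption of a generic binary classifier (the article does not say which model or framework the team used), is to weight the "folds" class more heavily so that missed folders (false negatives) cost more than false alarms:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: rows are feature vectors describing candidate peptides,
# labels are 1 = "folds as designed", 0 = "does not fold".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Weighting class 1 more strongly makes false negatives costlier than
# false positives, so the model errs on the side of flagging candidates as folders.
clf = LogisticRegression(class_weight={0: 1.0, 1: 5.0}, max_iter=1000)
clf.fit(X, y)
print("fraction flagged as folders:", clf.predict(X).mean())
```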

Successful validation of peptide designs using quantum chemistry calculations itself represents a significant advance. Moreover, in the aforementioned NDM-1 example (that concerning the mitigation of antibiotic resistance among bacteria), all the design and validation work was completed using the force-field approximations.

Adapting to exascale

Ongoing and future work requires substantial revision of the Rosetta software suite to optimize it for next-generation, accelerator-based computing systems such as the ALCF's Polaris and Aurora supercomputers.

Rosetta started its life in the late 1990s as a protein modeling package written in FORTRAN, and it's been rewritten several times since, Mulligan said. While it's written in modern C++, it is beginning to show its age; even the latest rewrite is more than 10 years old. We've continually refactored the code to try to make it more general and to make it work for nonnatural amino acids, but taking advantage of modern hardware has posed challenges. While the software parallelizes on central processing units (CPUs) and scales well, graphics processing units (GPUs) are not supported to their full capability.

Because Polaris is a hybrid CPU-GPU system, as the exascale Aurora system will be, I and others are working on rewriting Rosetta's core functionality from scratch. By creating a successor to the current software, it's my hope that we can continue to use these software methods efficiently on new hardware for years to come, and that we can build atop them to tackle more challenging molecular design tasks.

Source: Nils Heinonen, ALCF

Visit link:

ALCF Supercomputers Help Design of Peptide-Based Drugs with ... - HPCwire

How the Big Bang model was born – Big Think

This is the eighth article in a series on modern cosmology.

The Big Bang model of cosmology says the Universe emerged from a single event in the far past. The model was inspired by the adventurous cosmic quantum egg idea, which suggested that in the beginning, all that exists was compressed into an unstable quantum state. When this single entity burst and decayed into fragments, it created space and time.

To take this imaginative notion and craft a theory of the Universe was quite a feat of creativity. To understand the cosmic infancy, it turns out, we need to invoke quantum physics, the physics of the very small.

It all started in the mid-1940s with the Russian-American physicist George Gamow. He knew that protons and neutrons are held together in the atomic nucleus by the strong nuclear force, and that electrons are held in orbit around the nucleus by electrical attraction. The fact that the strong force does not care about electric charge adds an interesting twist to nuclear physics. Since neutrons are electrically neutral, it is possible for a given element to have different numbers of neutrons in its nucleus. For example, a hydrogen atom is made of a proton and an electron. But it is possible to add one or two neutrons to its nucleus.

These heavier hydrogen cousins are called isotopes. Deuterium has a proton and a neutron, while tritium has a proton and two neutrons. Every element has several isotopes, each built by adding or removing neutrons in the nucleus. Gamow's idea was that matter was built up from the primeval stuff that filled space near the beginning. This happened progressively, from the smallest objects to larger ones: protons and neutrons joined to form nuclei, which then bound electrons to form complete atoms.

How do we synthesize deuterium? By fusing a proton and a neutron. What about tritium? By fusing an extra neutron to deuterium. And helium? By fusing two protons and two neutrons, which can be done in a variety of ways. The build-up continues as heavier and heavier elements are synthesized inside of stars.

A fusion process releases energy, at least up to the formation of the element iron. The energy involved is related to the binding energy, which equals the energy we must provide to a system of bound particles to break a bond. Any system of particles bound by some force has an associated binding energy. A hydrogen atom is made of a bound proton and an electron, and it has a specific binding energy. If I disturb the atom with an energy that exceeds its binding energy, I will break the bond between the proton and the electron, which will then move freely away from each other. This buildup of heavier nuclei from smaller ones is called nucleosynthesis.
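
As a concrete worked example (using standard particle masses, not figures quoted in the article), the binding energy of deuterium is simply the mass deficit of its constituents converted to energy:

\[
E_b = (m_p + m_n - m_d)\,c^2 \approx (938.27 + 939.57 - 1875.61)\ \text{MeV} \approx 2.2\ \text{MeV},
\]

so a photon carrying more than about 2.2 MeV can split a deuterium nucleus back into a free proton and a free neutron.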

In 1947, Gamow enlisted the help of two collaborators. Ralph Alpher was a graduate student at George Washington University, while Robert Herman worked at the Johns Hopkins Applied Physics Laboratory. Over the following six years, the three researchers would develop the physics of the Big Bang model pretty much as we know it today.

Gamow's picture starts with a Universe filled with protons, neutrons, and electrons. This is the matter component of the early Universe, which Alpher called ylem. Added to the mix were very energetic photons, the early Universe's heat component. The Universe was so hot at this early time that no binding was possible. Every time a proton tried to bind with a neutron to make a deuterium nucleus, a photon would come racing in to knock the two away from each other. Electrons, which are bound to protons by the much weaker electromagnetic force, didn't have a chance. There can be no binding when it is too hot. And we are talking about some seriously hot temperatures here, around 1 trillion degrees Fahrenheit.

The image of a cosmic soup tends to emerge quite naturally when we describe these very early stages in the history of the Universe. The building blocks of matter roamed freely, colliding with each other and with photons but never binding to form nuclei or atoms. They acted somewhat like floating vegetables in a hot minestrone soup. As the Big Bang model evolved to its accepted form, the basic ingredients of this cosmic soup changed somewhat, but the fundamental recipe did not.

Structure started to emerge. The hierarchical clustering of matter progressed steadily as the Universe expanded and cooled. As the temperature lowered and photons became less energetic, nuclear bonds between protons and neutrons became possible. An era known as primordial nucleosynthesis started. This time saw the formation of deuterium and tritium; helium and its isotope helium-3; and an isotope of lithium, lithium-7. The lightest nuclei were cooked in the Universes earliest moments of existence.

According to Gamow and collaborators, this all took about 45 minutes. Accounting for more modern values of the various nuclear reaction rates, it only took about three minutes. The remarkable feat of Gamow, Alpher, and Herman's theory was that it could predict the abundance of these light nuclei. Using relativistic cosmology and nuclear physics, they could tell us how much helium should have been synthesized in the early Universe: it turns out that about 24 percent of the Universe's ordinary matter, by mass, is helium. Their predictions could then be checked against what was produced in stars and compared to observations.
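
The textbook back-of-the-envelope version of that prediction (a standard estimate, not necessarily the trio's original calculation) follows from the neutron-to-proton ratio at the time of nucleosynthesis, roughly \(n/p \approx 1/7\). If essentially all the neutrons end up in helium-4, the helium mass fraction is

\[
Y_p \;\approx\; \frac{2\,(n/p)}{1 + n/p} \;=\; \frac{2/7}{8/7} \;=\; 0.25,
\]

close to the roughly 24 percent inferred from observations.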

Gamow then made a much more dramatic prediction. After the era of nucleosynthesis, the ingredients of the cosmic soup were mostly the light nuclei, in addition to electrons, photons, and neutrinos, particles that are very important in radioactive decay. The next step in the hierarchical clustering of matter is to make atoms. As the Universe expanded it cooled, and photons became progressively less energetic. At some point, when the Universe was about 400,000 years of age, the conditions were ripe for electrons to bind with protons and create hydrogen atoms.

Before this time, whenever a proton and an electron tried to bind, a photon would kick them apart, in a sort of unhappy love triangle with no resolution. As the photons cooled down to about 6,000 degrees Fahrenheit, the attraction between protons and electrons overcame the photons' interference, and binding finally occurred. Photons were suddenly free to move around, continuing their dance across the Universe. They were no longer to interfere with atoms, but to exist on their own, impervious to all this binding that seems to be so important for matter.

Gamow realized these photons would have a special distribution of frequencies known as a blackbody spectrum. The temperature was high at the time of decoupling, that is, the epoch when atoms formed and photons were freed to roam across the Universe. But since the Universe has been expanding and cooling for about 14 billion years, the present temperature of the photons would be very low.

Earlier predictions were not very accurate, as this temperature is sensitive to aspects of nuclear reactions that were not well understood in the late 1940s. Nevertheless, in 1948 Alpher and Herman predicted this cosmic bath of photons would have a temperature of 5 degrees above absolute zero, or about -451 degrees Fahrenheit. The currently accepted value is 2.73 kelvin. Thus, according to the Big Bang model, the Universe is a giant blackbody, immersed in a bath of very cold photons peaked at microwave wavelengths, the so-called fossil rays from its hot early infancy. In 1965, this radiation was accidentally discovered, and cosmology would never be the same. But that story deserves its own essay.
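
The arithmetic behind that cooling is simple (using modern values rather than Alpher and Herman's): the photon temperature scales inversely with the cosmic scale factor, so

\[
T_0 \;=\; \frac{T_{\text{dec}}}{1+z_{\text{dec}}} \;\approx\; \frac{3000\ \text{K}}{1100} \;\approx\; 2.7\ \text{K},
\]

where \(T_{\text{dec}}\) is the temperature at decoupling and \(z_{\text{dec}}\) its redshift.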

Here is the original post:

How the Big Bang model was born - Big Think

Astrophysicists reveal the nature of dark matter through the study of … – Science Daily

Most of the matter in the universe, amounting to a staggering 85% by mass, cannot be observed and consists of particles not accounted for by the Standard Model of Particle Physics (see remark 1). These particles are known as Dark Matter, and their existence can be inferred from their gravitational effects on light from distant galaxies. Finding the particle that makes up Dark Matter is an urgent problem in modern physics, as it dominates the mass and, therefore, the gravity of galaxies -- solving this mystery can lead to new physics beyond the Standard Model.

While some theoretical models propose the existence of ultramassive particles as a possible candidate for Dark Matter, others suggest ultralight particles. A team of astrophysicists led by Alfred AMRUTH, a PhD student in the team of Dr Jeremy LIM of the Department of Physics at The University of Hong Kong (HKU), collaborating with Professor George SMOOT, a Nobel Laureate in Physics from the Hong Kong University of Science and Technology (HKUST) and Dr Razieh EMAMI, a Research Associate at the Center for Astrophysics | Harvard & Smithsonian (CFA), has provided the most direct evidence yet that Dark Matter does not constitute ultramassive particles as is commonly thought but instead comprises particles so light that they travel through space like waves. Their work resolves an outstanding problem in astrophysics first raised two decades ago: why do models that adopt ultramassive Dark Matter particles fail to correctly predict the observed positions and the brightness of multiple images of the same galaxy created by gravitational lensing? The research findings were recently published in Nature Astronomy.

Dark Matter does not emit, absorb or reflect light, which makes it difficult to observe using traditional astronomical techniques. Today, the most powerful tool scientists have for studying Dark Matter is through gravitational lensing, a phenomenon predicted by Albert Einstein in his theory of General Relativity. In this theory, mass causes spacetime to curve, creating the appearance that light bends around massive objects such as stars, galaxies, or groups of galaxies. By observing this bending of light, scientists can infer the presence and distribution of Dark Matter -- and, as demonstrated in this study, the nature of Dark Matter itself.

When the foreground lensing object and the background lensed object -- both constituting individual galaxies in the illustration -- are closely aligned, multiple images of the same background object can be seen in the sky. The positions and brightness of the multiply-lensed images depend on the distribution of Dark Matter in the foreground lensing object, thus providing an especially powerful probe of Dark Matter.
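
For scale (a standard textbook formula, not one taken from the paper itself), the characteristic angular separation of such multiple images for a point-like lens of mass \(M\) is the Einstein radius,

\[
\theta_E \;=\; \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}},
\]

where \(D_L\), \(D_S\) and \(D_{LS}\) are the distances to the lens, to the source, and between lens and source; small perturbations in the lens's mass distribution shift the image positions and brightness away from this idealized case.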

Another assumption about the nature of Dark Matter

In the 1970s, after the existence of Dark Matter was firmly established, hypothetical particles referred to as Weakly Interacting Massive Particles (WIMPs) were proposed as candidates for Dark Matter. These WIMPs were thought to be ultramassive, at least ten times as massive as a proton, and to interact with other matter only through the weak nuclear force. These particles emerge from Supersymmetry theories, developed to fill deficiencies in the Standard Model, and have since been widely advocated as the most likely candidate for Dark Matter. However, for the past two decades, astrophysicists adopting ultramassive particles for Dark Matter have struggled to correctly reproduce the positions and brightness of multiply-lensed images. In these studies, the density of Dark Matter is assumed to decrease smoothly outwards from the centres of galaxies, in accordance with theoretical simulations employing ultramassive particles.

Beginning also in the 1970s, but in dramatic contrast to WIMPs, versions of theories that seek to rectify deficiencies in the Standard Model, or those (e.g., String Theory) that seek to unify the four fundamental forces of nature (the three in the Standard Model, along with gravity), advocate the existence of ultralight particles. Referred to as axions, these hypothetical particles are predicted to be far less massive than even the lightest particles in the Standard Model and constitute an alternative candidate for Dark Matter.

According to the theory of Quantum Mechanics, ultralight particles travel through space as waves, interfering with each other in such large numbers as to create random fluctuations in density. These random density fluctuations in Dark Matter give rise to crinkles in spacetime. As might be expected, the different patterns of spacetime around galaxies depending on whether Dark Matter constitutes ultramassive or ultralight particles -- smooth versus crinkly -- ought to give rise to different positions and brightness for multiply-lensed images of background galaxies.
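
The size of those crinkles is set by the particles' de Broglie wavelength. For an illustrative (assumed) axion-like mass of \(m \sim 10^{-22}\ \text{eV}/c^2\) moving at a typical galactic velocity \(v \sim 200\ \text{km/s}\),

\[
\lambda \;=\; \frac{h}{m v} \;\sim\; 0.6\ \text{kpc},
\]

comparable to the size of galactic cores, which is why wave-like Dark Matter imprints detectable structure on lensed images while ultramassive WIMPs would produce a smooth distribution.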

In work led by Alfred AMRUTH, a PhD student in Dr Jeremy LIM's team at HKU, astrophysicists have for the first time computed how gravitationally-lensed images generated by galaxies incorporating ultralight Dark Matter particles differ from those incorporating ultramassive Dark Matter particles.

Their research has shown that the general level of disagreement found between the observed and predicted positions as well as the brightness of multiply-lensed images generated by models incorporating ultramassive Dark Matter can be resolved by adopting models incorporating ultralight Dark Matter particles. Moreover, they demonstrate that models incorporating ultralight Dark Matter particles can reproduce the observed positions and brightness of multiply-lensed galaxy images, an important achievement that reveals the crinkly rather than smooth nature of spacetime around galaxies.

'The possibility that Dark Matter does not comprise ultramassive particles, as has long been advocated by the scientific community, alleviates other problems in both laboratory experiments and astronomical observations,' explains Dr Lim. 'Laboratory experiments have been singularly unsuccessful at finding WIMPs, the long-favoured candidate for Dark Matter. Such experiments are in their final stretch, culminating in the planned DARWIN experiment, leaving WIMPs with no place to hide if not found (see remark 2).'

Professor Tom BROADHURST, an Ikerbasque Professor at the University of the Basque Country, a Visiting Professor at HKU, and a co-author of the paper adds, 'If Dark Matter comprises ultramassive particles, then according to cosmological simulations, there should be hundreds of satellite galaxies surrounding the Milky Way. However, despite intensive searches, only around fifty have been discovered so far. On the other hand, if Dark Matter comprises ultralight particles instead, then the theory of Quantum Mechanics predicts that galaxies below a certain mass simply cannot form owing to the wave interference of these particles, explaining why we observe a lack of small satellite galaxies around the Milky Way.'

'Incorporating ultralight rather than ultramassive particles for Dark Matter resolves several longstanding problems simultaneously in both particle physics and astrophysics,' said Amruth. 'We have reached a point where the existing paradigm of Dark Matter needs to be reconsidered. Waving goodbye to ultramassive particles, which have long been heralded as the favoured candidate for Dark Matter, may not come easily, but the evidence is accumulating in favour of Dark Matter having the wave-like properties possessed by ultralight particles.' The pioneering work used the supercomputing facilities at HKU, without which it would not have been possible.

The co-author Professor George SMOOT added, 'Understanding the nature of particles that constitute Dark Matter is the first step towards New Physics. This work paves the way for future tests of Wave-like Dark Matter in situations involving gravitational lensing. The James Webb Space Telescope should discover many more gravitationally-lensed systems, allowing us to make even more exacting tests of the nature of Dark Matter.'

Remarks: 1. The Standard Model of Particle Physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions -- excluding gravity) in the universe and classifying all known elementary particles. Although the Standard Model has met with huge successes, it leaves some phenomena unexplained -- e.g., the existence of particles that interact with known particles in the Standard Model only through gravity -- and falls short of being a complete theory of fundamental interactions.

Read the rest here:

Astrophysicists reveal the nature of dark matter through the study of ... - Science Daily

Entanglement Could Step in Where GPS Is Denied – IEEE Spectrum

Using the strange quantum phenomenon known as entanglement, which can link particles together anywhere in the universe, sensors can become significantly more accurate and faster at detecting motion, a new study reveals. The findings may help augment navigation systems that do not rely on GPS, scientists say.

In the new study, researchers experimented with optomechanical sensors, which use beams of light to analyze how their components move in response to disturbances. The sensors serve as accelerometers, which smartphones use to detect motion. Accelerometers can find use in inertial navigation systems in situations where GPS performs badly, such as underground, underwater, inside buildings, in remote locations, and in places where radio-signal jamming is in use.

To boost the performance of optomechanical sensing, researchers experimented with using entanglement, which Einstein dubbed spooky action at a distance. Entangled particles essentially act in sync regardless of how far apart they are.


However, quantum entanglement is also incredibly vulnerable to outside interference. Quantum sensors capitalize on this sensitivity to help detect the slightest disturbances in their surroundings.

Previous research in quantum-enhanced optomechanical sensing has primarily focused on improving sensitivity at a single sensor, says study lead author Yi Xia, a quantum physicist at the University of Arizona at Tucson. However, recent theoretical and experimental studies have shown that entanglement can significantly improve sensitivity among multiple sensors, an approach known as distributed quantum sensing.

Optomechanical sensors depend on two synchronized laser beams. One beam gets reflected off a component known as an oscillator, and any movement of the oscillator changes the distance the light travels on its way to a detector. Any such difference in distance traveled shows up when the second beam overlaps with the first. If the sensor is still, the two beams are perfectly aligned. If the sensor moves, the overlapping light waves generate interference patterns that reveal the size and speed of the sensors motions.

In the new study, the sensors from Dal Wilson's group at the University of Arizona at Tucson used membranes as oscillators. These acted much like drumheads that vibrate after being struck.

Instead of having one beam illuminate one oscillator, the researchers split one infrared laser beam into two entangled beams, which they bounced off two oscillators onto two detectors. The entangled nature of this light essentially let two sensors analyze one beam, altogether leading to improvements in speed and precision.


Entanglement can be leveraged to enhance the performance of force sensing undertaken by multiple optomechanical sensors, says study senior author Zheshen Zhang, a quantum physicist at the University of Michigan at Ann Arbor.

In addition, to boost the precision of the device, the researchers employed squeezed light. Squeezed light takes advantage of a key tenet of quantum physics, Heisenberg's uncertainty principle, which states that one cannot measure one feature of a particle, such as its position, with arbitrary certainty without accepting greater uncertainty in a complementary feature, such as its momentum. Squeezed light exploits this trade-off to squeeze, or reduce, the uncertainty in measurements of one variable (in this case, the phase of the waves making up the laser beams) while increasing the uncertainty in another variable the researchers can afford to ignore.
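
In symbols (a textbook relation in one common convention, not a formula quoted in the article), the two quadratures of the light field, roughly its amplitude and phase components \(X_1\) and \(X_2\), obey

\[
\Delta X_1\,\Delta X_2 \;\ge\; \tfrac{1}{4},
\]

so a squeezed state pushes the uncertainty of the quadrature used for sensing below the vacuum level of \(\tfrac{1}{2}\), at the cost of a larger uncertainty in the other quadrature.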

We are one of the few groups who can build squeezed-light sources, and we are currently exploring their power as the basis for the next generation of precision measurement technology, Zhang says.

All in all, the scientists were able to collect measurements that were 40 percent more precise than with two unentangled beams and do it 60 percent faster. In addition, the precision and speed of this method is expected to rise in proportion to the number of sensors, they say.
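
That expected scaling mirrors a standard result in distributed quantum sensing (a general relation, not a formula from the paper): averaging \(M\) independent sensors reduces the measurement uncertainty as

\[
\delta\phi \;\propto\; \frac{1}{\sqrt{M}},
\]

whereas entangling the \(M\) sensors can, in principle, push it toward a Heisenberg-like scaling \(\delta\phi \propto 1/M\), consistent with precision rising in proportion to the number of sensors.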

The implication of these findings would be that we can further push the performance of ultraprecise force sensing to an unprecedented level, Zhang says.

He adds that improving optomechanical sensors may not only lead to better inertial navigation systems but also help detect enigmatic phenomena such as dark matter and gravitational waves. Dark matter is the invisible substance thought to make up five-sixths of all matter in the universe, and detecting the gravitational effects it might have could help scientists figure out its nature. Gravitational waves are ripples in the fabric of space and time that could help shed light on mysteries from black holes to the Big Bang.

The scientists now plan to miniaturize their system. They can already put a squeezed-light source on a chip just a half centimeter wide. They expect to have a prototype chip in the next year or two that includes a squeezed-light source, beam splitters, waveguides, and inertial sensors. This would make this technology much more practical, affordable, and accessible, Zhang says.

In addition, we are currently working with Honeywell, JPL, NIST, and a few other universities in a different program to develop chip-scale quantum-enhanced inertial measurement units, Zhang says. The vision is to deploy such integrated sensors in autonomous vehicles and spacecraft to enable precise navigation in the absence of GPS signals.

The scientists detailed their findings online 20 April in the journal Nature Photonics.


Read more from the original source:

Entanglement Could Step in Where GPS Is Denied - IEEE Spectrum