
IBM to Partner with US and Japanese Universities on Quantum … – Nextgov


See the article here:

IBM to Partner with US and Japanese Universities on Quantum ... - Nextgov

Read More..

Uncovering universal physics in the dynamics of a quantum system – Phys.org

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, peer-reviewed publication, trusted source, proofread.

New experiments using one-dimensional gases of ultra-cold atoms reveal a universality in how quantum systems composed of many particles change over time following a large influx of energy that throws the system out of equilibrium. A team of physicists at Penn State showed that these gases immediately respond, "evolving" with features that are common to all "many-body" quantum systems thrown out of equilibrium in this way. A paper describing the experiments appears May 17, 2023, in the journal Nature.

"Many major advances in physics over the last century have concerned the behavior of quantum systems with many particles," said David Weiss, Distinguished Professor of Physics at Penn State and one of the leaders of the research team. "Despite the staggering array of diverse 'many-body' phenomena, like superconductivity, superfluidity, and magnetism, it was found that their behavior near equilibrium is often similar enough that they can be sorted into a small set of universal classes. In contrast, the behavior of systems that are far from equilibrium has yielded to few such unifying descriptions."

These quantum many-body systems are ensembles of particles, like atoms, that are free to move around relative to each other, Weiss explained. When they are some combination of dense and cold enough, which can vary depending on the context, quantum mechanics, the fundamental theory that describes the properties of nature at the atomic or subatomic scale, is required to describe their dynamics.

Dramatically out-of-equilibrium systems are routinely created in particle accelerators when pairs of heavy ions are collided at speeds near the speed of light. The collisions produce a plasma, composed of the subatomic particles "quarks" and "gluons," that emerges very early in the collision and can be described by a hydrodynamic theory, similar to the classical theory used to describe air flow or other moving fluids, well before the plasma reaches local thermal equilibrium. But what happens in the astonishingly short time before hydrodynamic theory can be used?

"The physical process that occurs before hydrodynamics can be used has been called 'hydrodynamization," said Marcos Rigol, professor of physics at Penn State and another leader of the research team. "Many theories have been developed to try to understand hydrodynamization in these collisions, but the situation is quite complicated and it is not possible to actually observe it as it happens in the particle accelerator experiments. Using cold atoms, we can observe what is happening during hydrodynamization."

The Penn State researchers took advantage of two special features of one-dimensional gases, which are trapped and cooled to near absolute zero by lasers, in order to understand the evolution of the system after it is thrown out of equilibrium, but before hydrodynamics can be applied. The first feature is experimental. Interactions in the experiment can be suddenly turned off at any point following the influx of energy, so the evolution of the system can be directly observed and measured. Specifically, they observed the time-evolution of one-dimensional momentum distributions after the sudden quench in energy.

"Ultra-cold atoms in traps made from lasers allow for such exquisite control and measurement that they can really shed light on many-body physics," said Weiss. "It is amazing that the same basic physics that characterize relativistic heavy ion collisions, some of the most energetic collisions ever made in a lab, also show up in the much less energetic collisions we make in our lab."

The second feature is theoretical. A collection of particles that interact with each other in a complicated way can be described as a collection of "quasiparticles" whose mutual interactions are much simpler. Unlike in most systems, the quasiparticle description of one-dimensional gases is mathematically exact. It allows for a very clear description of why energy is rapidly redistributed across the system after it is thrown out of equilibrium.

"Known laws of physics, including conservation laws, in these one-dimensional gases imply that a hydrodynamic description will be accurate once this initial evolution plays out," said Rigol. "The experiment shows that this occurs before local equilibrium is reached. The experiment and theory together therefore provide a model example of hydrodynamization. Since hydrodynamization happens so fast, the underlying understanding in terms of quasi-particles can be applied to any many-body quantum system to which a very large amount of energy is added."

In addition to Weiss and Rigol, the research team at Penn State includes Yuan Le, Yicheng Zhang, and Sarang Gopalakrishnan.

More information: Yuan Le et al, Observation of hydrodynamization and local prethermalization in 1D Bose gases, Nature (2023). DOI: 10.1038/s41586-023-05979-9

Journal information: Nature

Read the rest here:

Uncovering universal physics in the dynamics of a quantum system - Phys.org

Read More..

Advanced Quantum Material Curves the Fabric of Space – DISCOVER Magazine

Quantum materials offer many benefits to the future of electronic devices, from batteries to sensors and even our smartphones. Thanks to quantum behaviors like entanglement, these materials exhibit unusual electronic, optical and magnetic properties, making them more energy efficient.

"Being superior to conventional materials for certain electronic processes, quantum materials open vast application opportunities," says Carmine Ortix, an associate professor of physics at the University of Salerno in Italy.

Ortix is part of an international research team, led by the University of Geneva, that studied how the electronic properties of quantum materials can be controlled. Their recent research shows that we can create tighter electronic control by curving the fabric of space within these materials.

The researchers wrapped their quantum material in insulators, trapping electrons, which control energy output, within a sandwich layer and limiting their free space. Then, using specific laser pulses, the team stacked each atom of their material on top of one another.

The results, published in Nature Materials, suggest this new material could boost the future of energy-efficient electronics.


In understanding how to control electrons' movements, Ortix and the rest of his team relied on a concept in quantum physics known as the Berry phase. Named after English physicist Sir Michael Berry, this phase happens when a wave-like particle (such as an electron) moves in a closed loop through a magnetic field or another force field.

As the particle moves through the loop, part of its wave function, a map of where the particle might be in a general area of the quantum realm, changes, affecting how it behaves around other particles.

The Berry phase is quite complicated, so it can help to imagine the process like an eye exam: The giant metal headpiece (a lensometer) that an ophthalmologist uses to test your near- and far-sightedness contains two focus wheels for either lens.

As the eye doctor spins these wheels, asking, "Is one or two better?", the lenses change in quality. When you get back to the beginning of the wheel, the difference between the first and last lens is quite different.

This looping process is similar to what happens during the Berry phase, where the electron evolves as the object moves through a loop (or wheel). But Ortix and his colleagues took the Berry phase one step further in their experiment, by studying the Berry curve of the electrons in their material.

"You can think of the Berry curvature as an effective magnetic field generated by the electrons when they have some peculiar properties," Ortix says.
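For readers who want the textbook notation behind what Ortix is describing, the standard formulas can be sketched in LaTeX as follows; they are standard quantum mechanics, not quoted from the Discover article. The Berry connection A_n plays the role of a vector potential, its line integral around the closed loop C gives the Berry phase, and its curl is the Berry curvature, the "effective magnetic field" mentioned above:

\gamma_n = \oint_C \mathbf{A}_n(\mathbf{R}) \cdot d\mathbf{R},
\qquad
\mathbf{A}_n(\mathbf{R}) = i \, \langle n(\mathbf{R}) \mid \nabla_{\mathbf{R}} \, n(\mathbf{R}) \rangle,
\qquad
\boldsymbol{\Omega}_n(\mathbf{R}) = \nabla_{\mathbf{R}} \times \mathbf{A}_n(\mathbf{R}).

Here |n(R)> is the quantum state of the electron as a set of parameters R (its momentum, for example) is carried slowly around the loop.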

Previous studies have shown that these electron curves were either spin-sourced or orbital-sourced. A spin-sourced Berry curve can be graphed to show how an electron's momentum changes as it moves through a material in the presence of a magnetic field.

The curve is called spin-sourced because it considers the electron's spin, or the quantum property that gives the electron a magnetic moment, magnetizing it. The magnetic field's presence causes the electron to rotate in the same direction as the field.

In contrast, the orbital-sourced Berry curve shows the changes in the electron's wave function without a magnetic field.

This curve considers the electron's orbital properties, which describe its spatial distribution around the nucleus of an atom. The electron's orbitals can affect the phase of its wave function, influencing its behavior within the material.


In their new study, the researchers found that by curving the space where the electrons were housed and simultaneously changing the magnetic fields of the material, the electrons could exhibit both spin- and orbital-sourced Berry curves.

"The curvature of quantum materials is an intrinsic property of elementary electrons," Ortix says. "In practice, the large number of electrons inside the material form a quantum geometric space that can possess curvature."

This means that by trapping the electrons within the designated space, scientists can more easily control when and how the electrons curve the fabric of space within the material. The two curves working in tandem allow the material to be more tightly controlled, suggesting a more energy-efficient future for our devices as it exhibits less energy loss.


"This potential needs to be explored with further experimentation," says Andrea Caviglia, a professor at the University of Geneva and a co-author of the study. This new quantum material could also prove key for the future of nanotechnology and in sensing electromagnetic signals.

Ortix explains that the significance of these results lies in the fact that the measured quantum transport properties might be exploited in future optoelectronic nanodevices. Examples of these types of devices, not on a nanoscale, include solar cells or LED lights.

"The nonlinear electrical responses discussed in our study might be relevant to creat[ing] microscale devices that convert electromagnetic energy into usable electrical energy," Ortix adds.

Converting electromagnetic energy into electrical energy could be especially useful in the telecommunications industry, where electromagnetic signals are transmitted and received constantly by phones, laptops or TV satellites.

As the telecommunications industry advances, having quantum materials like the one studied by Ortix and Caviglia could become vital in creating more powerful satellites and other devices.


Continued here:

Advanced Quantum Material Curves the Fabric of Space - DISCOVER Magazine

Read More..

Time is not an illusion. It's an object with physical size – Aeon

A timeless universe is hard to imagine, but not because time is a technically complex or philosophically elusive concept. There is a more structural reason: imagining timelessness requires time to pass. Even when you try to imagine its absence, you sense it moving as your thoughts shift, your heart pumps blood to your brain, and images, sounds and smells move around you. The thing that is time never seems to stop. You may even feel woven into its ever-moving fabric as you experience the Universe coming together and apart. But is that how time really works?

According to Albert Einstein, our experience of the past, present and future is nothing more than 'a stubbornly persistent illusion'. According to Isaac Newton, time is nothing more than backdrop, outside of life. And according to the laws of thermodynamics, time is nothing more than entropy and heat. In the history of modern physics, there has never been a widely accepted theory in which a moving, directional sense of time is fundamental. Many of our most basic descriptions of nature, from the laws of movement to the properties of molecules and matter, seem to exist in a universe where time doesn't really pass. However, recent research across a variety of fields suggests that the movement of time might be more important than most physicists had once assumed.

A new form of physics called assembly theory suggests that a moving, directional sense of time is real and fundamental. It suggests that the complex objects in our Universe that have been made by life, including microbes, computers and cities, do not exist outside of time: they are impossible without the movement of time. From this perspective, the passing of time is not only intrinsic to the evolution of life or our experience of the Universe. It is also the ever-moving material fabric of the Universe itself. Time is an object. It has a physical size, like space. And it can be measured at a molecular level in laboratories.

The unification of time and space radically changed the trajectory of physics in the 20th century. It opened new possibilities for how we think about reality. What could the unification of time and matter do in our century? What happens when time is an object?

For Newton, time was fixed. In his laws of motion and gravity, which describe how objects change their position in space, time is an absolute backdrop. Newtonian time passes, but never changes. And it's a view of time that endures in modern physics: even in the wave functions of quantum mechanics, time is a backdrop, not a fundamental feature. For Einstein, however, time was not absolute. It was relative to each observer. He described our experience of time passing as 'a stubbornly persistent illusion'. Einsteinian time is what is measured by the ticking of clocks; space is measured by the ticks on rulers that record distances. By studying the relative motions of ticking clocks and ticks on rulers, Einstein was able to combine the concepts of how we measure both space and time into a unified structure we now call spacetime. In this structure, space is infinite and all points exist at once. But time, as Einstein described it, also has this property, which means that all times, past, present and future, are equally real. The result is sometimes called a block universe, which contains everything that has and will happen in space and time. Today, most physicists support the notion of the block universe.

But the block universe was cracked before it even arrived. In the early 1800s, nearly a century before Einstein developed the concept of spacetime, Nicolas Léonard Sadi Carnot and other physicists were already questioning the notion that time was either a backdrop or an illusion. These questions would continue through the 19th century as physicists such as Ludwig Boltzmann also began to turn their minds to the problems that came with a new kind of technology: the engine.

Though engines could be mechanically reproduced, physicists didn't know exactly how they functioned. Newtonian mechanics were reversible; engines were not. Newton's solar system ran equally well moving forward or backward in time. However, if you drove a car and it ran out of fuel, you could not run the engine in reverse, take back the heat that was generated, and unburn the fuel. Physicists at the time suspected that engines must be adhering to certain laws, even if those laws were unknown. What they found was that engines do not function unless time passes and has a direction. By exploiting differences in temperature, engines drive the movement of heat from warm parts to cold parts. As time moves forward, the temperature difference diminishes and less work can be done. This is the essence of the second law of thermodynamics (also known as the law of entropy) that was proposed by Carnot and later explained statistically by Boltzmann. The law describes the way that less useful work can be done by an engine over time. You must occasionally refuel your car, and entropy must always increase.
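To make the engine argument concrete, the standard Carnot result (textbook thermodynamics, summarized here rather than quoted from the essay) can be written in LaTeX notation as:

\eta_{\max} = 1 - \frac{T_c}{T_h},
\qquad
\Delta S_{\mathrm{total}} \ge 0.

An ideal engine running between a hot reservoir at temperature T_h and a cold one at T_c converts heat to work with efficiency at most eta_max. As the engine runs, heat flow narrows the gap between T_h and T_c, the attainable efficiency falls toward zero, and the total entropy can only grow: exactly the one-way behaviour Carnot and Boltzmann were grappling with.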

Do we really live in a universe that has no need for time as a fundamental feature?

This makes sense in the context of engines or other complex objects, but it is not helpful when dealing with a single particle. It is meaningless to talk about the temperature of a single particle because temperature is a way of quantifying the average kinetic energy of many particles. In the laws of thermodynamics, the flow and directionality of time are considered an emergent property rather than a backdrop or an illusion: a property associated with the behaviour of large numbers of objects. While thermodynamic theory introduced how time should have a directionality to its passage, this property was not fundamental. In physics, fundamental properties are reserved for those properties that cannot be described in other terms. The arrow of time in thermodynamics is therefore considered emergent because it can be explained in terms of more fundamental concepts, such as entropy and heat.

Charles Darwin, working between the steam engine era of Carnot and the emergence of Einstein's block universe, was among the first to clearly see how life must exist in time. In the final sentence from On the Origin of Species (1859), he eloquently captured this perspective: '[W]hilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been and are being evolved.' The arrival of Darwin's 'endless forms' can be explained only in a universe where time exists and has a clear directionality.

During the past several billion years, life has evolved from single-celled organisms to complex multicellular organisms. It has evolved from simple societies to teeming cities, and now a planet potentially capable of reproducing its life on other worlds. These things take time to come into existence because they can emerge only through the processes of selection and evolution.

We think Darwin's insight does not go deep enough. Evolution accurately describes changes observed across different forms of life, but it does much more than this: it is the only physical process in our Universe that can generate the objects we associate with life. This includes bacteria, cats and trees, but also things like rockets, mobile phones and cities. None of these objects fluctuates into existence spontaneously, despite what popular accounts of modern physics may claim can happen. These objects are not random flukes. Instead, they all require a memory of the past to be made in the present. They must be produced over time, a time that continually moves forward. And yet, according to Newton, Einstein, Carnot, Boltzmann and others, time is either nonexistent or merely emergent.

The times of physics and of evolution are incompatible. But this has not always been obvious because physics and evolution deal with different kinds of objects. Physics, particularly quantum mechanics, deals with simple and elementary objects: quarks, leptons and force carrier particles of the Standard Model. Because these objects are considered simple, they do not require memory for the Universe to make them (assuming sufficient energy and resources are available). Think of memory as a way to describe the recording of actions or processes that are needed to build a given object. When we get to the disciplines that engage with evolution, such as chemistry and biology, we find objects that are too complex to be produced in abundance instantaneously (even when energy and materials are available). They require memory, accumulated over time, to be produced. As Darwin understood, some objects can come into existence only through evolution and the selection of certain recordings from memory to make them.

This incompatibility creates a set of problems that can be solved only by making a radical departure from the current ways that physics approaches time especially if we want to explain life. While current theories of quantum mechanics can explain certain features of molecules, such as their stability, they cannot explain the existence of DNA, proteins, RNA, or other large and complex molecules. Likewise, the second law of thermodynamics is said to give rise to the arrow of time and explanations of how organisms convert energy, but it does not explain the directionality of time, in which endless forms are built over evolutionary timescales with no final equilibrium or heat-death for the biosphere in sight. Quantum mechanics and thermodynamics are necessary to explain some features of life, but they are not sufficient.

These and other problems led us to develop a new way of thinking about the physics of time, which we have called assembly theory. It describes how much memory must exist for a molecule or combination of molecules, the objects that life is made from, to come into existence. In assembly theory, this memory is measured across time as a feature of a molecule by focusing on the minimum memory required for that molecule (or molecules) to come into existence. Assembly theory quantifies selection by making time a property of objects that could have emerged only via evolution.

We began developing this new physics by considering how life emerges through chemical changes. The chemistry of life operates combinatorially as atoms bond to form molecules, and the possible combinations grow with each additional bond. These combinations are made from approximately 92 naturally occurring elements, which chemists estimate can be combined to build as many as 10^60 different molecules (1 followed by 60 zeroes). To become useful, each individual combination would need to be replicated billions of times; think of how many molecules are required to make even a single cell, let alone an insect or a person. Making copies of any complex object takes time because each step required to assemble it involves a search across the vastness of combinatorial space to select which molecules will take physical shape.

Combinatorial spaces seem to show up when life exists

Consider the macromolecular proteins that living things use as catalysts within cells. These proteins are made from smaller molecular building blocks called amino acids, which combine to form long chains typically between 50 and 2,000 amino acids long. If every possible 100-amino-acid-long protein was assembled from the 20 most common amino acids that form proteins, the result would not just fill our Universe but 10^23 universes.
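As a rough arithmetic check on that claim, here is a short Python sketch. The 20-choices-per-site model comes from the paragraph above; the figure of roughly 10^80 atoms in the observable universe is a commonly cited estimate assumed for this illustration, not a number from the essay.

from math import log10

choices_per_site = 20   # the 20 most common protein-forming amino acids
chain_length = 100      # a 100-amino-acid-long protein

# Each position can hold any of the 20 amino acids, so the number of
# distinct sequences is 20^100. Work in log10 to keep the numbers readable.
exponent = chain_length * log10(choices_per_site)
print(f"distinct 100-residue proteins: about 10^{exponent:.0f}")  # ~10^130

# Compare with ~10^80 atoms in the observable universe: even at one
# protein per atom, a single universe falls short by a factor of ~10^50.
print(f"shortfall at one protein per atom: about 10^{exponent - 80:.0f}")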

The space of all possible molecules is hard to fathom. As an analogy, consider the combinations you can build with a given set of Lego bricks. If the set contained only two bricks, the number of combinations would be small. However, if the set contained thousands of pieces, like the 5,923-piece Lego model of the Taj Mahal, the number of possible combinations would be astronomical. If you specifically needed to build the Taj Mahal according to the instructions, the space of possibilities would be limited, but if you could build any Lego object with those 5,923 pieces, there would be a combinatorial explosion of possible structures that could be built; the possibilities grow exponentially with each additional block you add. If you connected two Lego structures you had already built every second, you would not be able to exhaust all possible objects of the size of the Lego Taj Mahal set within the age of the Universe. In fact, any space built combinatorially from even a few simple building blocks will have this property. This includes all possible cell-like objects built from chemistry, all possible organisms built from different cell-types, all possible languages built from words or utterances, and all possible computer programs built from all possible instruction sets. The pattern here is that combinatorial spaces seem to show up when life exists. That is, life is evident when the space of possibilities is so large that the Universe must select only some of that space to exist. Assembly theory is meant to formalise this idea. In assembly theory, objects are built combinatorially from other objects and, just as you might use a ruler to measure how big a given object is spatially, assembly theory provides a measure, called the assembly index, to measure how big an object is in time.

The Lego Taj Mahal set is equivalent to a complex molecule in this analogy. Reproducing a specific object, like a Lego set, in a way that isn't random requires selection within the space of all possible objects. That is, at each stage of construction, specific objects or sets of objects must be selected from the vast number of possible combinations that could be built. Alongside selection, memory is also required: information is needed in the objects that exist to assemble the specific new object, which is implemented as a sequence of steps that can be completed in finite time, like the instructions required to build the Lego Taj Mahal. More complex objects require more memory to come into existence.

In assembly theory, objects grow in their complexity over time through the process of selection. As objects become more complex, their unique parts will increase, which means local memory must also increase. This local memory is the causal chain of events in how the object is first discovered by selection and then created in multiple copies. For example, in research into the origin of life, chemists study how molecules come together to become living organisms. For a chemical system to spontaneously emerge as life, it must self-replicate by forming, or catalysing, self-sustaining networks of chemical reactions. But how does the chemical system know which combinations to make? We can see local memory in action in these networks of molecules that have learned to chemically bind together in certain ways. As the memory requirements increase, the probability that an object was produced by chance drops to zero because the number of alternative combinations that weren't selected is just too high. An object, whether it's a Lego Taj Mahal or a network of molecules, can be produced and reproduced only with memory and a construction process. But memory is not everywhere; it's local in space and time. This means an object can be produced only where there is local memory that can guide the selection of which parts go where, and when.

In assembly theory, selection refers to what has emerged in the space of possible combinations. It is formally described through an object's copy number and complexity. Copy number, or concentration, is a concept used in chemistry and molecular biology that refers to how many copies of a molecule are present in a given volume of space. In assembly theory, complexity is as significant as the copy number. A highly complex molecule that exists only as a single copy is not important. What is of interest to assembly theory are complex molecules with a high copy number, which is an indication that the molecule has been produced by evolution. This complexity measurement is also known as an object's assembly index. This value is related to the amount of physical memory required to store the information to direct the assembly of an object and set a directionality in time from the simple to the complex. And, while the memory must exist in the environment to bring the object into existence, in assembly theory the memory is also an intrinsic physical feature of the object. In fact, it is the object.
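To make the assembly index less abstract, here is a toy Python version for strings rather than molecules. It is an illustrative sketch of the reuse-of-parts idea, not the authors' published molecular algorithm: it finds the minimum number of join operations needed to build a target string when every previously built fragment can be reused.

from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of pairwise joins needed to build `target`,
    reusing any previously built fragment. Brute-force breadth-first
    search over sets of fragments, so only practical for short strings."""
    basics = frozenset(target)  # single characters come for free
    if target in basics:
        return 0
    frontier = {basics}
    depth = 0
    while frontier:
        depth += 1
        next_frontier = set()
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                if joined == target:
                    return depth
                # Prune: only fragments that appear in the target can help.
                if joined in target and joined not in state:
                    next_frontier.add(state | {joined})
        frontier = next_frontier
    raise ValueError("cannot assemble an empty target")

print(assembly_index("ABABAB"))  # 3 joins: A+B, AB+AB, ABAB+AB
print(assembly_index("ABCDEF"))  # 5 joins: no repeated structure to reuse

The repetitive string needs fewer joins than the patternless one of the same length because already-built parts can be reused; that reuse is what a low assembly index captures, and it is why a high index found in many identical copies is read as a signature of selection rather than chance.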

Life is stacks of objects building other objects that build other objects; it's objects building objects, all the way down. Some objects emerged only relatively recently, such as synthetic 'forever chemicals' made from organofluorine chemical compounds. Others emerged billions of years ago, such as photosynthesising plant cells. Different objects have different depths in time. And this depth is directly related to both an object's assembly index and copy number, which we can combine into a number: a quantity called Assembly, or A. The higher the Assembly number, the deeper an object is in time.

To measure assembly in a laboratory, we chemically analyse an object to count how many copies of a given molecule it contains. We then infer the object's complexity, known as its molecular assembly index, by counting the number of parts it contains. These molecular parts, like the amino acids in a protein string, are often inferred by determining an object's molecular assembly index, a theoretical assembly number. But we are not inferring theoretically. We are counting the molecular components of an object using three visualising techniques: mass spectrometry, infrared and nuclear magnetic resonance (NMR) spectroscopy. Remarkably, the number of components we've counted in molecules maps to their theoretical assembly numbers. This means we can measure an object's assembly index directly with standard lab equipment.

A high Assembly number (a high assembly index and a high copy number) indicates that an object can be reliably made by something in its environment. This could be a cell that constructs high-Assembly molecules like proteins, or a chemist that makes molecules with an even higher Assembly value, such as the anti-cancer drug Taxol (paclitaxel). Complex objects with high copy numbers did not come into existence randomly but are the result of a process of evolution or selection. They are not formed by a series of chance encounters, but by selection in time. More specifically, a certain depth in time.

It's like throwing the 5,923 Lego Taj Mahal pieces in the air and expecting them to come together spontaneously

This is a difficult concept. Even chemists find this idea hard to grasp, since it is easy to imagine that complex molecules form by chance interactions with their environment. However, in the laboratory, chance interactions often lead to the production of tar rather than high-Assembly objects. Tar is a chemist's worst nightmare, a messy mixture of molecules that cannot be individually identified. It is found frequently in origin-of-life experiments. In the US chemist Stanley Miller's 'prebiotic soup' experiment in 1953, the amino acids that formed at first turned into a mess of unidentifiable black gloop if the experiment was run too long (and no selection was imposed by the researchers to stop chemical changes taking place). The problem in these experiments is that the combinatorial space of possible molecules is so vast for high-Assembly objects that no specific molecules are produced in high abundance. Tar is the result.

It's like throwing the 5,923 pieces from the Lego Taj Mahal set in the air and expecting them to come together, spontaneously, exactly as the instructions specify. Now imagine taking the pieces from 100 boxes of the same Lego set, throwing them into the air, and expecting 100 copies of the exact same building. The probabilities are incredibly low and might be zero, if assembly theory is on the right track. It is as likely as a smashed egg spontaneously reforming.

But what about complex objects that occur naturally without selection or evolution? What about snowflakes, minerals and complex storm systems? Unlike objects generated by evolution and selection, these do not need to be explained through their depth in time. Though individually complex, they do not have a high Assembly value because they form randomly and require no memory to be produced. They have a low copy number because they never exist in identical copies. No two snowflakes are alike, and the same goes for minerals and storm systems.

Assembly theory not only changes how we think about time, but how we define life itself. By applying this approach to molecular systems, it should be possible to measure if a molecule was produced by an evolutionary process. That means we can determine which molecules could have been made only by a living process, even if that process involves chemistries different to those on Earth. In this way, assembly theory can function as a universal life-detection system that works by measuring the assembly indexes and copy numbers of molecules in living or non-living samples.

In our laboratory experiments, we found that only living samples produce high-Assembly molecules. Our teams and collaborators have reproduced this finding using an analytical technique called mass spectrometry, in which molecules from a sample are weighed in an electromagnetic field and then smashed into pieces using energy. Smashing a molecule to bits allows us to measure its assembly index by counting the number of unique parts it contains. Through this, we can work out how many steps were required to produce a molecular object and then quantify its depth in time with standard laboratory equipment.

To verify our theory that high-Assembly objects can be generated only by life, the next step involved testing living and non-living samples. Our teams have been able to take samples of molecules from across the solar system, including diverse living, fossilised and abiotic systems on Earth. These solid samples of stone, bone, flesh and other forms of matter were dissolved in a solvent and then analysed with a high-resolution mass spectrometer that can identify the structure and properties of molecules. We found that only living systems produce abundant molecules with an assembly index above an experimentally determined value of 15 steps. The cut-off between 13 and 15 is sharp, meaning that molecules made by random processes cannot get beyond 13 steps. We think this is indicative of a phase transition where the physics of evolution and selection must take over from other forms of physics to explain how a molecule was formed.

These experiments verify that only objects with a sufficiently high Assembly number (highly complex and copied molecules) seem to be found in life. What is even more exciting is that we can find this information without knowing anything else about the molecule present. Assembly theory can determine whether molecules from anywhere in the Universe were derived from evolution or not, even if we don't know what chemistry is being used.

The possibility of detecting living systems elsewhere in the galaxy is exciting, but more exciting for us is the possibility of a new kind of physics, and a new explanation of life. As an empirical measure of objects uniquely producible by evolution, Assembly unlocks a more general theory of life. If the theory holds, its most radical philosophical implication is that time exists as a material property of the complex objects created by evolution. That is, just as Einstein radicalised our notion of time by unifying it with space, assembly theory points to a radically new conception of time by unifying it with matter.

Assembly theory explains evolved objects, such as complex molecules, biospheres, and computers

It is radical because, as we noted, time has never been fundamental in the history of physics. Newton and some quantum physicists view it as a backdrop. Einstein thought it was an illusion. And, in the work of those studying thermodynamics, it's understood as merely an emergent property. Assembly theory treats time as fundamental and material: time is the stuff out of which things in the Universe are made. Objects created by selection and evolution can be formed only through the passing of time. But don't think about this time like the measured ticking of a clock or a sequence of calendar years. Time is a physical attribute. Think about it in terms of Assembly, a measurable intrinsic property of a molecule's depth or size in time.

This idea is radical because it also allows physics to explain evolutionary change. Physics has traditionally studied objects that the Universe can spontaneously assemble, such as elementary particles or planets. Assembly theory, on the other hand, explains evolved objects, such as complex molecules, biospheres, and computers. These complex objects exist only along lineages where information has been acquired specific to their construction.

If we follow those lineages back, beyond the origin of life on Earth to the origin of the Universe, it would be logical to suggest that the memory of the Universe was lower in the past. This means that the Universe's ability to generate high-Assembly objects is fundamentally limited by its size in time. Just as a semi-trailer truck will not fit inside a standard home garage, some objects are too large in time to come into existence in intervals that are smaller than their assembly index. For complex objects like computers to exist in our Universe, many other objects needed to form first: stars, heavy elements, life, tools, technology, and the abstraction of computing. This takes time and is critically path-dependent due to the causal contingency of each innovation made. The early Universe may not have been capable of computation as we know it, simply because not enough history existed yet. Time had to pass and be materially instantiated through the selection of the computer's constituent objects. The same goes for Lego structures, large language models, new pharmaceutical drugs, the technosphere, or any other complex object.

The consequences of objects having an intrinsic material depth in time are far-reaching. In the block universe, everything is treated as static and existing all at once. This means that objects cannot be ordered by their depth in time, and selection and evolution cannot be used to explain why some objects exist and not others. Re-conceptualising time as a physical dimension of complex matter and setting a directionality for time could help us solve such questions. Making time material through assembly theory unifies several perplexing philosophical concepts related to life in one measurable framework. At the heart of this theory is the assembly index, which measures the complexity of an object. It is a quantifiable way of describing the evolutionary concept of selection by showing how many alternatives were excluded to yield a given object. Each step in the assembly process of an object requires information, memory, to specify what should and shouldn't be added or changed. In building the Lego Taj Mahal, for example, we must take a specific sequence of steps, each directing us toward the final building. Each misstep is an error, and if we make too many errors we cannot build a recognisable structure. Copying an object requires information about the steps that were previously needed to produce similar objects.

This makes assembly theory a causal theory of physics, because the underlying structure of an assembly space, the full range of required combinations, orders things in a chain of causation. Each step relies on a previously selected step, and each object relies on a previously selected object. If we removed any steps in an assembly pathway, the final object would not be produced. Buzzwords often associated with the physics of life, such as 'theory', 'information', 'memory', 'causation' and 'selection', are material because objects themselves encode the rules to help construct other complex objects. This could be the case in mutual catalysis, where objects reciprocally make each other. Thus, in assembly theory, time is essentially the same thing as information, memory, causation and selection. They are all made physical because we assume they are features of the objects described in the theory, not the laws of how these objects behave. Assembly theory reintroduces an expanding, moving sense of time to physics by showing how its passing is the stuff complex objects are made of: the size of the future increases with complexity.

This new conception of time might solve many open problems in fundamental physics. The first and foremost is the debate between determinism and contingency. Einstein famously said that 'God does not play dice', and many physicists are still forced to conclude that determinism holds, and our future is closed. But the idea that the initial conditions of the Universe, or any process, determine the future has always been a problem. In assembly theory, the future is determined, but not until it happens. If what exists now determines the future, and what exists now is larger and more information-rich than it was in the past, then the possible futures also grow larger as objects become more complex. This is because there is more history existing in the present from which to assemble novel future states. Treating time as a material property of the objects it creates allows novelty to be generated in the future.

Novelty is critical for our understanding of life as a physical phenomenon. Our biosphere is an object that is at least 3.5 billion years old by the measure of clock time (Assembly is a different measure of time). But how did life get started? What allowed living systems to develop intelligence and consciousness? Traditional physics suggests that life 'emerged'. The concept of emergence captures how new structures seem to appear at higher levels of spatial organisation that could not be predicted from lower levels. Examples include the wetness of water, which is not predicted from individual water molecules, or the way that living cells are made from individual non-living atoms. However, the objects traditional physics considers emergent become fundamental in assembly theory. From this perspective, an object's 'emergent-ness', how far it departs from a physicist's expectations of elementary building blocks, depends on how deep it lies in time. This points us toward the origins of life, but we can also travel in the other direction.

If we are on the right track, assembly theory suggests time is fundamental. It suggests change is not measured by clocks but is encoded in chains of events that produce complex molecules with different depths in time. Assembled from local memory in the vastness of combinatorial space, these objects record the past, act in the present, and determine the future. This means the Universe is expanding in time, not space; or perhaps space emerges from time, as many current proposals from quantum gravity suggest. Though the Universe may be entirely deterministic, its expansion in time implies that the future cannot be fully predicted, even in principle. The future of the Universe is more open-ended than we could have predicted.

Time may be an ever-moving fabric through which we experience things coming together and apart. But the fabric does more than move: it expands. When time is an object, the future is the size of the Universe.

Published in association with the Santa Fe Institute, an Aeon Strategic Partner.

Here is the original post:

Time is not an illusion. It's an object with physical size - Aeon

Read More..

Some black holes may actually be tangles in the fabric of space-time … – Livescience.com

Physicists have discovered a strange twist of space-time that can mimic black holes until you get too close. Known as "topological solitons," these theoretical kinks in the fabric of space-time could lurk all around the universe and finding them could push forward our understanding of quantum physics, according to a new study published April 25 in the journal Physical Review D.

Black holes are perhaps the most frustrating objects ever discovered in science. Einstein's general theory of relativity predicts their existence, and astronomers know how they form: All it takes is for a massive star to collapse under its own weight. With no other force available to resist it, gravity just keeps pulling in until all the star's material is compressed into an infinitely tiny point, known as a singularity. Surrounding that singularity is an event horizon, an invisible boundary that marks the edge of the black hole. Whatever crosses the event horizon can never get out.

But the main problem with this is that points of infinite density can't really exist. So while general relativity predicts the existence of black holes, and we have found many astronomical objects that behave exactly as Einstein's theory predicts, we know that we still don't have the full picture. We know that the singularity must be replaced by something more reasonable, but we don't know what that something is.


Figuring that out requires an understanding of extremely strong gravity at extremely small scales, something called quantum gravity. To date, we have no viable quantum theory of gravity, but we do have several candidates. One of those candidates is string theory, a model that suggests all the particles that make up our universe are really made of tiny, vibrating strings.

To explain the wide variety of particles inhabiting our universe, those strings can't just vibrate in the usual three spatial dimensions. String theory predicts the existence of extra dimensions, all curled up on themselves at some unfathomably small scale, so small that we can't tell those dimensions are there.

And that act of curling up extra spatial dimensions at incredibly tiny scales can lead to very interesting objects.

In the new study, researchers proposed that these compact extra dimensions can give rise to defects. Like a wrinkle that you just can't get out of your shirt no matter how much you iron it, these defects would be stable, permanent imperfections in the structure of space-time: a topological soliton. The physicists suggested that these solitons would largely look, act and probably smell like black holes.

The researchers studied how rays of light would behave when passing near one of these solitons. They found that the solitons would affect the light in almost the same way as a black hole would. Light would bend around the solitons and form stable orbital rings, and the solitons would cast shadows. In other words, the famous images from the Event Horizon Telescope, which zoomed in on the black hole M87* in 2019, would look almost exactly the same if a soliton, rather than a black hole, sat at the center of the image.

But up close the mimicry would end. Topological solitons are not singularities, so they do not have event horizons. You could get as close as you wanted to a soliton, and you could always leave if you wanted to (assuming you packed enough fuel).

Unfortunately we have no black holes close enough to dig around in, and so we can only rely on observations of distant objects. If any topological solitons are ever discovered, the revelation wouldn't just be a major insight into the nature of gravity, but it would enable us to directly study the nature of quantum gravity and string theory as well.

Read the rest here:

Some black holes may actually be tangles in the fabric of space-time ... - Livescience.com

Read More..

Are There Reasons to Believe in a Multiverse? – Quanta Magazine

By definition, the universe seems like it should be the totality of everything that exists. Yet a variety of arguments emerging from cosmology, particle physics and quantum mechanics hint that there could also be unobservable universes beyond our own that follow different laws of nature. While the existence of a multiverse is speculative, for many physicists it represents a plausible explanation for some of the biggest mysteries in science. In this episode, Steven Strogatz explores the idea of a multiverse with the theoretical physicist David Kaplan and learns what it might mean about our own existence.

Listen on Apple Podcasts, Spotify, Google Podcasts, Stitcher, TuneIn or your favorite podcasting app, or you can stream it from Quanta.

Steve Strogatz (00:03): I'm Steve Strogatz and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we're going to ask: Do we live in a multiverse?

(00:16) We know that we all live in our own little bubbles, whether it be our family, our friends, our hometown, even our workplace. And if you think about it, animals live in their own little bubbles, too. Fish live in certain parts of the ocean or different lakes or rivers. You won't find them in ice with a microbe population or flying around in the sky with birds, even though ice and water vapor in the sky are also forms of water. Could the universe be the same way? Maybe we're not alone, and what we can see with the help of telescopes and infrared cameras isn't all there is. Maybe space is infinite. Maybe there are multiple universes beyond our own, perhaps made up of some of the same components or possibly even breaking the very laws of physics that allow us to call our universe home.

(01:08) It's a mind-blowing concept, the idea that we could live in a multiverse. And it's an idea that David Kaplan thinks could be possible. Kaplan is a theoretical physicist at Johns Hopkins University in Baltimore. He looks at the theoretical possibilities that apply to the Standard Model of particle physics and cosmology. He also produced and appeared in the 2013 documentary Particle Fever, about the first experiments at the Large Hadron Collider. David, thanks for joining us today.

David Kaplan (01:38): My pleasure.

Strogatz (01:39): Well, this is great. I'm very curious to hear your take on the multiverse. Before we get to the multiverse, let's talk about the good old-fashioned universe. You know, I mean, I grew up hearing about the universe that was sort of, by definition, I think, all there is. Do I have that right? How would we define the universe before we define the multiverse?

Kaplan (01:57): Wow. Well, I think to be a little bit more useful or practical or even rigorous, we often talk about the observable universe. And the observable universe is a part of the universe that we have any sort of access to. And there's a very simple fact, which limits our ability to see the entirety of whatever the universe might be, which is the fact that the universe appears at least to be finite. Or at the very least, that in the early stages of what we would call our universe, it was so dense and hot, light couldn't penetrate it. And so the observable universe is really the distance we can look back to, which is a place where the light leaving that part of the universe, at a much earlier time than today, couldn't pass through the universe.

(02:53) In other words, in the early universe, if it was smaller and hotter and denser, light didn't travel in a straight line. It was stuck in the plasma, the soup that was the early universe. And when the universe expanded and cooled enough that light could then find room to travel from one place to another and across the universe, it left what we now call the surface of last scattering. And so there is a surface, in a sense, in all directions when we look out, and we call it the cosmic microwave background. That's a surface at which we are seeing the universe at a very early stage, at a time when it was not transparent to light. So in that sense, we have a very rigid access to what we can see in the universe and therefore we call it the observable universe. There certainly could be universe beyond that. But if the universe has a finite lifetime, and it takes light a finite time to get to us, we just mathematically cannot see beyond a certain point.

Strogatz (03:57): This is great. I love the rigor that you just applied to that. Maybe we should move along then to this notion of a multiverse.

Kaplan (04:04): When it comes to the multiverse or the initial start of the universe, all of those things, they're all very speculative. So you do hear standard things like, "At the Big Bang, you create time and space." And there is some notion where you could say that is probably right, in some sense. But really what's going on is when you get to densities of order (it's called the Planck energy), the description of gravity in terms of a geometry or a geometric theory of the universe breaks down. It doesn't mean that there isn't time and space of some other kind at earlier times. It's just that general relativity no longer applies in that time.

(04:53) And so, could it be that time goes back infinitely far? A different label of what we call time? Absolutely. We don't know. We don't know what this, at very low energy densities, turns into when you get to an energy scale where general relativity is not the correct description.

(05:14) So people pop in and say, "Well, time and space get created sort of instantaneously out of nothing." But that's not a mathematically well-defined description. It's a sort of hopeful compactification of space and time. And if you really have nothing, it's hard to make predictions about when something appears, because there's nothing there.

(05:37) Now, what is the multiverse? The multiverse as it appeared in Particle Fever was one use of the multiverse. It was the idea that, in an almost mundane way, if the universe is infinite, or much, much bigger at least than the part that we see (the observable part), then it's possible that the laws of nature in different patches are different. You don't even need the deep underlying laws to be different. You just need some of the parameters to be different. The numbers that describe things like how much vacuum energy is in that part of the universe. Or what are the values of some background fields, say gravitational fields or other fields, in those parts of the universe? And if those things are different in different places, they will have a different expansion history. The universe in that region will expand based on what's in the universe, and what the interaction strengths of the stuff in that part of the universe are. And so you can really have very different universes, with a little bit more pedestrian descriptions of how the laws may change from one place to another, or just even the content of the universe in each of those places.

(07:03) And so there's a simple idea of the multiverse, which is that there really is just one universe. The part of the universe that we exist in has a certain set of parameters controlling how life looks. And then if you go far enough away, farther than light could have traveled in the age of the universe, there are places in the universe where the laws are just different. The parameters are different. The expansion history is completely different. Maybe [in] those regions, no stars or galaxies could have formed, nothing lives there. Or maybe in those parts of the universe, detailed properties like the mass of the Higgs boson, which controls the mass of lots of other particles, it controls what hydrogen is, it controls how chemistry works. All of those things could be different in different places if you go far enough out.

(07:57) It appears that in our observable universe, the laws are pretty static. We have laws, we have initial conditions. And we have a description of how the universe works, and how experiments work on Earth. They seem to be constant in time, and it doesn't really matter where our galaxy is as the experiments are done, as our galaxy is actually moving quite fast relative to the background. So in that sense, the laws are very stable here, but you could imagine much farther distances at which the laws are different. And stars can or cannot form, chemistry does or does not work, a different type of chemistry works. All kinds of goofy things could be happening in very different places. Very different creatures. Who knows?

Strogatz (08:47): Sure. So I liked the way you phrased it, that very different things could be happening. And of course, you know, all of us in late-night conversations in our college dorm rooms could think of that thought, that we have a parochial view. It's just in the nature of the finite speed of light and the finite time that the observable universe has been around, that we live in our patch. And who knows what happens in these other patches? We can't, by definition, we can't observe those parts. But there's a principled reason for believing in the multiverse, or maybe many principled reasons, beyond this kind of, like, college-dorm speculation. So can you tell me about that? Like, for instance, there are mysteries in physics that led physicists to come up with the idea of a multiverse. Maybe tell us about what some of those mysteries are that would bring us to this, this wild... You can tell I'm a little skeptical here.

Kaplan (09:36): Of course.

Strogatz: It feels a little like wild philosophizing. But I also know that physics is very coherent and you have reasons for this. It's not wild philosophizing.

Kaplan (09:44): First off, we have something called the Standard Model of particle physics. And the Standard Model is based on quantum field theory. Quantum field theory works extraordinarily well in describing the interactions of particles in complex systems, in simple systems, at high energies. Internal properties of particles, like magnetic moments of electrons, and the strong force and confinement of nuclear matter. It has just an amazing breadth of description of how all matter works. And that theory involves the particles that we've seen; we identify them as fields. We identify the electron, and every electron in fact, as an excitation of the electron field in the universe. And therefore, the Standard Model, with a list of fields, describes all possible fundamental particles that ever could be created. And so far, every one we have seen... we've seen all those particles, and we haven't seen any others, directly at least.

(10:55) It also has a list of parameters, which tell you how strong the electrical force is between the electron and the proton, and various other numbers, which predict the probability of scattering various particles. And a number of those parameters seem very reasonable in the sense of: You put those numbers in, and you calculate quantum corrections or situations where the background is a little bit different, and nothing funny happens with those parameters.

(11:26) Save two. There are two numbers in the Standard Model that are weird. One of those numbers is the mass of the Higgs boson itself. The mass of the Higgs boson is actually the mass scale from which all other fundamental particles' masses come. So if you ask what the mass of the electron is, it's proportional to the mass of the Higgs boson. What's the mass of the top quark, the heaviest quark? It's proportional to the mass of the Higgs boson. The W weak-force boson is proportional to the mass of the Higgs boson.

(12:02) So the Higgs boson controls one physical mass scale among all of the particles in the Standard Model. And that physical mass scale is something you simply put in by hand. You've measured it; we've measured the mass of the Higgs, we've measured the mass of the other particles and their interactions with the Higgs. And that allows us to fix experimentally what that number is.

(12:30) Now, if you take a quantum field theory that has a Higgs in it and all these other particles, and you estimate what the mass of the Higgs should be, what would be a typical mass of the Higgs in a quantum field theory with these particles in it, you get infinity, which is nonsense. And then you say, "Well, that's OK. Because the reason I get infinity has to do with the fact that I'm assuming there are no new particles up to arbitrarily high energy." And the way that quantum field theory works, and quantum theories work at all, is that you have to incorporate what are, let's call them, quantum fluctuations in the calculations of any physical parameters. So you have a few parameters you put into your theory.

(13:18) But if you want to ask about a physical state, a physical measurement that you make in that quantum theory, it includes all quantum fluctuations that live in the theory. Now in the case of the Higgs, you discover that the quantum fluctuations would be infinite if the Standard Model were correct up to infinite energies.

(13:40) We don't think the Standard Model is correct up to infinite energies. In fact, we already know that incorporating gravity into the Standard Model brings in a new energy scale; it's called the Planck mass. But there could be plenty of other particles or new symmetries or other things for which, when you get to that energy, you find there are no more contributions to the Higgs mass. It's finite, everything's OK. You're going to get infinities if you assume that your model is correct to arbitrarily high energy. So you cut these models off; you say, OK, there are going to be new particles at some mass. Above that, let's assume there are no contributions to the Higgs. And below that, there will be these quantum fluctuations that contribute to the Higgs. And what you find is wherever you put that cutoff, however high in energy you say the Standard Model is correct to, there are contributions to the Higgs mass all the way up to that energy. Which means you get contributions to the Higgs mass at these arbitrarily high energies or masses.
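
[A rough sketch of the textbook version of this estimate, for readers who want the formula; the numerical coefficient depends on which particles run in the loops, so this is schematic rather than a quote from the episode:

$$\delta m_H^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2,$$

where $\Lambda$ is the cutoff, the energy up to which the Standard Model is assumed to hold, and $g$ is a typical coupling. Because the correction grows as $\Lambda^2$, a measured Higgs mass far below the cutoff requires the bare mass and the quantum contributions to cancel almost exactly.]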

(14:43) And so the Higgs mass should be the mass that's associated with the new unknown physics. There should be some new physics up there at some high energy, and the Higgs mass should actually be roughly that scale. And that means when you discover the Higgs and you measure its mass, you should also discover a whole bunch of other garbage, which is associated with the new physics which is beyond the Standard Model.

Strogatz (15:10): I see, yeah.

Kaplan: So instead of saying there's some unknown physics so far above the Higgs, and the Higgs mass actually gets contributions of that energy, you turn it around. You say, "Oh no, if I measure the Higgs, it's going to come with all this new stuff." Because the Higgs mass is not infinite, it's finite.

Strogatz (15:27): So let me... can I just ask... I think I got you. This is a new idea to me, very interesting. If I hear you right, you're saying the Higgs is going to set the borderland between the known and the unknown.

Kaplan (15:38): Yes.

Strogatz: And because it's an edge case, sort of, it is on the borderland...

Kaplan (15:43): Indeed.

Strogatz: ...that means when we discover it, I'm going to see on one side the stuff that we already kind of understand. But because it's a borderland particle, there's going to be stuff on the other edge of the border. And that's what we're going to be thrilled to discover, and measure its properties too, because that'll be new physics.

Kaplan (16:01): Exactly. Exactly. And that led to decades of research to think about what would be on the other side of the border. What is the new theory that makes the mass of the Higgs finite, not infinite? When you include all the quantum fluctuations, it should be now a reasonable theory. So it should come with a bunch of stuff. And people have posited symmetries, something called supersymmetry. Or making the Higgs a composite particle. We've seen the proton is made of quarks; maybe the Higgs is made of other stuff. And at high energies, actually, there's no Higgs, there's stuff inside, and we're seeing other fundamental particles. So it can be even a borderline of its own existence. And at high energies, there's a completely different description.

(16:52) These were the sort of quantum field theories that people introduced and explored, that we all wondered could be what we're seeing the hint of. That what seemingly looks like nonsense with the Higgs mass is only a statement that there's a new theory living just above it in energy and in mass. And we can even make suggestions of what that theory is, suggestions that cure this issue with the mass of the Higgs.

(17:20) And that was the hope people had when the LHC turned on, the Large Hadron Collider turned on. And they would discover the Higgs, and they'd discover some of these other new, we'll call them, degrees of freedom. These other particles, different masses, with relationships to the Higgs. Or even that the Higgs itself is not fundamental, and we see properties of its internal structure.

(17:47) And that's why in (I think) 2003, I started thinking, but it could be that that's not the case. That the Higgs mass, while it should come with a whole bunch of stuff, you could at least theoretically imagine that the mass of the Higgs is accidentally small. That it would be at the borderline, but all of those new particles that have been added to make it finite, when those quantum fluctuations contribute to the mass of the Higgs, just by some horrible accident, those contributions cancel to such a large degree that the mass of the Higgs is anomalously small compared to where the actual border to new physics is.

Strogatz (18:35): So it's sort of like in my picture in my head, where I've got this region of the known, and then this borderland region, and then the region of the new stuff.

Kaplan (18:44): Right.

Strogatz: In fact, when I go into the region of the new stuff, it's the Sahara Desert for as far as the eye can see...

Kaplan (18:49): Right.

Strogatz: ...except that there's something way the heck off the edge of the map.

Kaplan (18:53): Right. And in physics, because the Higgs is so sensitive to quantum fluctuations, we would call that a fine-tuned situation, one in which there really has to be an accident. Because the other world that lives way out there, the new physics, the new particles and new laws that govern how the Higgs behaves, doesn't know anything, in some weird sense, about the low-energy, low-lying masses. So there's no reason that it would pick an energy scale for the Higgs which is arbitrarily light compared to all of the dynamics associated with that new physics.

(19:37) So an example of this is that the strong interactions, the interactions of quarks, become strong at a particular energy scale, at 200 mega-electron-volts. That's when the quark interactions become so strong, we cannot actually pull the quarks apart. And then you ask, what is the mass of the proton? It's made of quarks. It's about five times that. What's the mass of the neutron? It's about five times that. What are the masses of the various mesons made of quarks and antiquarks? It's between a few times that and... There's a very light one, for symmetry reasons, but it's about half that energy. They're all about that energy. All the particles that you get when you go from the land of quarks to the land of the nuclei that we see in atoms, they all live at that energy scale.

(20:30) So all the new physics is at that energy scale. And we would assume the same thing, that whatever controls all of the dynamics associated with creating a Higgs boson, whatever it comes from, the Higgs is going to be at that energy scale. It might be the lightest particle of all the mess. And instead, the Higgs is way down here compared to... I mean, as you said, we see a Sahara Desert. We don't see... Maybe there's something way off there! Sometimes we get hints of it. But you know, it's often a mirage. I don't see anything.

Strogatz (21:04): So let me get you on this. This is fantastic. You're saying that in the story with the strong force, that was sort of exemplary. That's given us a lot of intuition for how things are supposed to behave. That when we have the energy scale for the strong force, we see this zoo of particles, all kind of in the same zoo.

Kaplan (21:24): Exactly.

Strogatz: Right? I mean, they're all at more or less the same scale, OK, give or take a factor of five here or 10 or two there. But is it like the Higgs is just this loner? It's like the Higgs is the only thing in its own neighborhood? There's no zoo?

Kaplan (21:38): So far, there's no zoo.

Strogatz: Oh, my god.

Kaplan (21:40): So it's... people were worried about it. I mean, it really was the '70s, mid to late '70s, when Ken Wilson pointed out that the pure Standard Model with just the Higgs is horribly fine-tuned, that there's an instability just in the calculation of the contributions to the Higgs mass. And that instability is infinite unless you just say there's some energy where there's new physics.

(22:06) That kicked off an exploration of ideas. One was that there was no Higgs at all. That the behavior of the Higgs was just some new strong confining group that did all the things the Higgs is supposed to do, give masses to particles. It was hard to build such models. People came up with supersymmetry, as a symmetry that could protect the Higgs as long as you had the partners, the rest of the zoo, to stabilize the Higgs mass. And that all happened within a few years, sort of 1978 to 1981. Those were the sort of initial ideas for these sorts of theories.

(22:46) And then later, wilder theories appeared. In the early 2000s there was an idea that maybe there is no more energy scale above the Higgs. Maybe even the scale of quantum gravity, which would be the Planck scale, for some manipulative reason, is not 17 orders of magnitude heavier than where we think the Higgs is; it's actually right on top of it. But to do that, you need to do something slightly crazy, which is to posit the existence of extra dimensions of space. And extra dimensions of space do something very funny to gravity. They allow it to be extremely strong at a much lower energy than the Planck scale. But it dilutes in such a way in the extra dimensions that we would estimate it as being much weaker, and it would not seem strong until much higher energy. But that, in some sense, numerically solved the problem. But physics-wise, it didn't really solve the problem, because we don't know what the theory of quantum gravity is. We don't know that the Higgs could have come from such a theory and why it would, and we have no predictions. So it was... it was less satisfying, but experimentally it was a bizarre possibility that was not ruled out. And so people could look for that possibility in totally different ways.

(24:05) But all of those types of theories, that whole class, were theories that suggested there's new stuff at the Higgs mass or just above it. And so far, we haven't seen anything like that.

Strogatz (24:16): So this is great. Because this gives a real intellectual motivation that I was expecting there would be, but I never really understood what it was. I've heard this phrase "fine-tuning" forever, or for my whole scientific life, but I was never quite sure what it meant. And so at least in this respect, the Higgs, as good as it is, and, you know, as valuable as it's been to physics, it has created a lot of headaches, it seems. Because it's turning out to have properties, this fine-tuning... You're telling me that the fine-tuning, whatever we call it, enigma, somehow the multiverse is going to help us address that? Is that the idea?

Kaplan (24:53): Yes.

Strogatz: OK. Let's hear how.

Kaplan (24:55): So what I would say is that fine-tuning doesn't tell you the theory is wrong; it just tells you that it smells bad. You think, you know, I have a theory, it does such and such. And you fine-tune some of the parameters, you know, by one part in 10^34, which is what you would need if the new physics is at the Planck scale, one part in 10^34 for cancellation among all the different parameters just so, so that the Higgs ends up at a mass which is so completely different than the scale of the physics that generated the Higgs in the first place.
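
[The size of that number can be checked with back-of-the-envelope arithmetic, using the rough values $M_{\rm Pl} \approx 10^{19}$ GeV for the Planck scale and $m_H \approx 125$ GeV for the measured Higgs mass; since it is the squared mass that receives the quantum corrections, the required cancellation is

$$\left(\frac{M_{\rm Pl}}{m_H}\right)^2 \approx \left(\frac{10^{19}\ \mathrm{GeV}}{125\ \mathrm{GeV}}\right)^2 \approx 10^{34}.$$]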

(25:30) So that, we'd say, well, that stinks. It could be true, but maybe we should use that hint to explore what else is going on. And that's where all these other theories came to say, "Oh, no, no, there could be new particles right around the mass of the Higgs. It cancels infinities, the Higgs is composite, whatever, even extra dimensions, there's no high energy scales." But the other possibility is, it is fine-tuned, and perhaps there's a totally different type of explanation for why it is fine-tuned. And now this is where you think, well, maybe we're thinking parochially, which is that we describe the laws of nature but we do it in a very rigid way. We say locally in spacetime, the part of the universe we've seen, this part has these static, unchanging, uniform across-the-space laws of nature, that's all there is. And the Higgs mass is just super weird, and there must be some fine-tuning.

(26:26) But the other possibility is, some people call it a multiverse. Mundanely, the parameters could just be different in different patches of the universe. And the parameters that control the Higgs mass could be different in different places. And in fact, because the Higgs interacts, directly or indirectly, with essentially every part of the Standard Model, any parameter change in any part of the universe would change the mass of the Higgs, the properties of the Higgs.

(26:57) Now, you could imagine that the Higgs mass is naturally extremely heavy, and it's near all of its family: the particles, the excitations, the dynamics that created the Higgs in the first place. That for typical values of the parameters in the full theory of the universe and of nature, the Higgs mass is always roughly at that energy scale, a much higher energy scale than the one we've seen experimentally. But since the laws of the universe are somewhat different in different locations, the Higgs mass itself will be different in different locations. And if the universe is vast enough, there should be locations where the mass of the Higgs is sort of anomalous. There's some accidental cancellation between the different quantum fluctuation contributions to the mass of the Higgs.

(27:53) Which means accidentally there could be parts of the universe where the mass of the Higgs is exponentially smaller than it should be. That would require an exponentially large number of universes to allow that to happen randomly. But who knows how big the universe is? Who knows how the parameters vary? So this is in principle a possibility. But then you have to ask, well, why do we live in the aberrant universe with the really crummy cancellation that makes it very hard to discover anything?

(28:29) And the answer would be that the Higgs, or any of the underlying physics, has a lot of control over whether we exist or not. And instead of calling us "us" and making it anthropic or personal, we can just talk about it as structure. Here's a very simple analogy. We live on Earth; we don't live in empty space. Why don't we live in empty space? Of course, we are part of the Earth; we were born out of the material that made the Earth. It made us as well. Why are we near the sun? The Earth was made near the sun. Why are we near the sun? Because biological beings, perhaps, aren't living in places where there are no significant sources of energy. There's no fine-tuning argument to explain why we live on planet Earth, rather than in the 10^60 times bigger volume which is the rest of the empty universe. So we don't talk about that as a problem, of course.

(29:37) But what we can imagine is that the universe itself has different patches with different laws. And what you find with the Higgs mass is that it's possible that for a huge range of Higgs masses, what we would call chemistry doesn't exist, which means molecules don't bind, which means structure cannot form in those places. Nobody has explored the possible laws of nature from all possible underlying laws of nature.

(30:08) In other words, if I gave you the Standard Model, you wouldn't come back to me and tell me about the existence of a giraffe. You can't predict a very chaotic nonlinear process to tell me what could exist there. But at least it's sort of the baseline level. You do need something nontrivial to happen. And you could imagine that the Higgs mass had better be very low compared to the Planck scale, the quantum gravity scale, in order for structure to form in reasonable ways without creating black holes, for example. So you can imagine rules in which there are special values of the Higgs mass, which could be extraordinarily fine-tuned. But in those universes, or those parts of the universe, truly nontrivial things happen, like the formation of stars and planets and anything on planets.

Strogatz (31:06): So if I can just try to... I don't want to oversimplify what you said. But I think it sounds like what you're saying is that if you happen to live in a hospitable patch, meaning that fine-tuning just randomly happened in your patch, you, you meaning you molecules, you atoms, have a shot at developing the kinds of structure that can lead to sentient, introspective life...

Kaplan (31:30): Exactly.

Strogatz: ...and then can get puzzled about, "Gee, how come we're so lucky that we live in a place where this can happen?" And it's kind of because that's what happened; the other parts are stillborn. They can't ask the question. They don't have any life. They don't have any consciousness.

Kaplan (31:44): Exactly. We're observers, but we're part of the system. And so we have to be in a place where the laws of nature are such that in this region, we would be created. So we have, what we'll call, an observational bias.

Strogatz: Yes.

Kaplan (32:00): People see this in astronomy all the time. You look out at the stars, you can count stars and say, "Oh, this is how many stars I see, and this is how many stars of this type versus how many stars of that type." But type A stars are very bright and type B stars are very dim. And it's hard to see type B stars. And you may come to the conclusion that there are 20 times as many type As as type Bs, but actually type Bs are extraordinarily difficult to see. And so we have an observational bias. Actually, there are a million times more type Bs than type As. I have no idea how many stars there are based on simple observations. And so I had better get much more clever about how I observe.

(32:40) But all of that is in a region where I can observe in principle. The multiverse is a place we can't observe in principle. But there still could be an observational bias based on what the parameters of the Standard Model are, and what forms in different patches of the multiverse.

Strogatz (32:59): So far, we've been focusing on this fine-tuning kind of motivation for the idea of the multiverse. But I'm wondering, does something like... I remember hearing the phrase "eternal inflation," or bubble universes popping out through an inflationary cosmology sort of scenario. Is that another kind of argument for a multiverse?

Kaplan (33:19): Yeah, and in some sense, it's the easiest way to spit out a multiverse, assuming that it makes sense. So the two parameters in the Standard Model that invoke ideas of a multiverse are the mass of the Higgs, but also the cosmological constant. And the cosmological constant is something that we have seen significant evidence for in our universe. It causes the accelerated expansion of our universe. People call it dark energy. But the cosmological constant is also in effect something in the Standard Model. It's also something that garners quantum fluctuations.

(33:20) And so the value of the cosmological constant here in this universe could have been any value, really. And it too is fine-tuned compared to at least the highest energy scales that we can imagine in physics, which in this case again is the Planck scale. And that fine-tuning is something like one part in 10^123. So you have to cancel something in the new physics down to 123 digits in order for a universe to have a nice small cosmological constant.
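
[The commonly quoted estimate behind this number compares the observed dark-energy density, roughly $(10^{-3}\ \mathrm{eV})^4$, with the Planck density, roughly $(10^{28}\ \mathrm{eV})^4$; depending on conventions the ratio is quoted anywhere from $10^{120}$ to $10^{124}$:

$$\frac{\rho_{\rm Planck}}{\rho_{\Lambda,\,\rm obs}} \sim \left(\frac{10^{28}\ \mathrm{eV}}{10^{-3}\ \mathrm{eV}}\right)^4 \sim 10^{124}.$$]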

Strogatz (34:34): So I think I remember you saying earlier there are two numbers that are weird. So have we now come to that point, that one is the mass of the Higgs and one is the cosmological constant?

Kaplan (34:43): Correct. Those are the two.

Strogatz (34:45): It's so interesting. This is so... I mean, you have to feel kind of blessed to be alive to think about this. Maybe literally blessed, because if you want to get theological... I happen to not be religious, but if a person is religious, it's kind of tempting to go there, right? That these are two miraculous things that had to happen. These numbers had to be fine-tuned for us. OK, I don't know.

Kaplan (35:09): I got those sorts of questions when I was doing Q&As. And another explanation for fine-tuning is not that there's a statistical sample large enough to incorporate things on the tail of the distribution, but that the things on the tail of the distribution are important for life and structure. You could also say, no, there is a Supreme Being that has set the numbers the way they are because the Supreme Being really wanted life to exist. And yes, you could think of it in that way. I don't have a model for the multiverse that I'm in love with. But I certainly also don't have a model for an All-Seeing Creator setting these things up either. So I don't personally find them compelling, in the sense that my day job is to try to figure out what the heck is going on. And neither of those really has a lot of teeth.

Strogatz (36:00): OK.

Kaplan: I'm not saying that the multiverse is not true. But if you want the multiverse to solve the problems of our universe, it does this sort of mediocre job of it.

Strogatz (36:11): All right, but back to the many bubble universes.

Kaplan: Sure.

Strogatz (36:14): What does this have to do again with the cosmological constant?

Kaplan (36:17): So the cosmological constant itself is the one energy density in the universe that does not dilute. That's why it's called a constant. And what that means is when the universe expands, its expansion, which is driven by what's in the universe, changes the relative amounts of matter, of radiation, and of this constant. The constant is not diluting, but everything else is. And that means at some point in the history of the universe, the cosmological constant wins, and the expansion of the universe is driven by just the cosmological constant.

(36:57) And a cosmological constant expands the universe much faster than other types of matter. In fact, most types of matter, while they allow for a certain rate of expansion, also tend to slow the expansion. You can think of it in some ways as matter being attracted to itself, or radiation attracted to itself, gravitationally. It's not a perfect analogy, but it's trying to slow that expansion.

(37:26) A cosmological constant, just the way it works in general relativity, gives you the opposite sign; it tends to speed up expansion. And it speeds up expansion proportional to itself, in a way, so we would describe it as exponential expansion. And so what happens is, when the cosmological constant eventually becomes the dominant energy of the universe, it starts to expand the universe exponentially fast. And if that happens too early in the history of the universe, there is no time for anything in the universe to form. Galaxies don't form, stars don't form; any sort of clumping of matter whatsoever would not have a chance to form, because the cosmological-constant expansion would blow everything apart. Nothing gravitational would form in the early universe. So all the structure that lives in this universe, that seems to be the important source of nontrivial things, interesting things: stars, planets, life, blah, blah, blah. If the cosmological constant were too big, there is no time in which those things could be created.
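
[The standard equations behind this description, as a sketch rather than a quote from the episode: matter and radiation densities dilute as the universe's scale factor $a(t)$ grows, while the cosmological constant does not,

$$\rho_{\rm matter} \propto a^{-3}, \qquad \rho_{\rm radiation} \propto a^{-4}, \qquad \rho_\Lambda = \mathrm{const},$$

so $\Lambda$ eventually dominates; the Friedmann equation $H^2 = \tfrac{8\pi G}{3}\rho$ then gives $H \to \sqrt{\Lambda/3}$ (using $\rho_\Lambda = \Lambda/8\pi G$, units with $c = 1$), i.e., exponential expansion $a(t) \propto e^{Ht}$.]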

(38:35) Now, you can fiddle with other parameters to make sure those things get created at earlier times. But if you fix all other parameters and say that the cosmological constant maybe is different in different parts of the universe, then those patches of the universe would expand very differently. And the ones with larger cosmological constants would essentially have nothing in them.

(38:58) And so there, it's an even more trivial thing that we live where the stuff is. So we have to live in a place where exponential expansion didn't take over before stuff was formed.

(39:11) So let's just, as a slight side note, let's look at our universe and our cosmological constant, the one that we have experimental evidence for. You can say that if I wait another 14 billion years, or let's say 140 billion years or a trillion years, the exponential expansion would take over. The cosmological constant would be the dominant energy density in the universe. And you could ask, would our galaxy be ripped apart, or the cluster be ripped apart?

(39:40) And the answer is no; in fact, things that are gravitationally bound, like our galaxy, compete well, do better than the cosmological constant. So locally, the gravity from the local matter in the galaxy is still more important than the cosmological constant. But things that are not gravitationally tied to each other are going to be ripped apart, or pulled apart, by exponential expansion.

(40:11) So all you need is that structure forms before the cosmological constant takes over. And weirdly, in our universe, the cosmological constant is roughly just big enough, keeping all other parameters fixed, such that matter just had a chance to form. So we could have seen zero cosmological constant, or never detected it. And we'd say, oh, there's some magical reason why the cosmological constant is zero. We could have had a cosmological constant which was, say, 100 times or 1,000 times bigger, in which case galaxies wouldn't have formed. There wouldn't have been time for the gravitationally bound states that create our worlds to form. And therefore there would be no interesting structure in the universe and no life. So the cosmological constant landed as small as it had to be, but as big as it could be.

(41:05) And when that was measured in '98, it was a sort of wake-up call that, oh, maybe there are parameters that are associated with the fact that we're biased by our observable universe's parameters, and not by what would be a natural outcome of deep, high-energy underlying laws of nature. And so then you would imagine that, OK, maybe it's the same accident: this patch has a tiny cosmological constant due to a bizarre amount of cancellation, but [that's] not bizarre if there are 10^500 universes, or 10^500 patches where the cosmological constant is different. Most of those patches would have nothing in them, no structure. There could be even rarer parts or patches of the universe with a much tinier cosmological constant. But we're in the most populous type of patch of the universe, where the cosmological constant is as big as it could be without destroying us, or without causing us not to form.

(42:12) So the fine-tuning/landscape of possible universes, the multiverse, all of that is a statistical argument. We don't know what the priors are; we don't know what parameters we're supposed to vary. But if you keep everything fixed, and you just vary the cosmological constant, you get something interesting, which is that the cosmological constant is not far from the value that you would see in a typical universe in which life exists or galaxies exist.

Strogatz (42:44): Wow, it really is very philosophically mind-blowing, thinking about all this, that our existence... I mean, it makes this anthropic principle, which goes back, I don't know, is it from the 1960s, or something? But it seems more important all the time.

Kaplan (42:58): I think that the anthropic principle often was more tautological. But here, it really is attempting to remove observational bias, which is to say that, you know, we have to take into account that the measurements we're making are the ones we have access to. And the fact that we're here means that we have access to a certain type of measurement, or a certain range of parameters. We would not have been able to measure a cosmological constant which is, you know, hundreds of orders of magnitude bigger. It would not have been possible, because there would be no structure in existence.

(43:37) You could postulate that there's a different form of life that lives at a different timescale, you know, made of different particles that live long enough, but they don't have to live too long. That could be true. And then you could ask questions: Why aren't we that life? Or if this whole anthropic argument is true, does that tell us that the first real structure comes down to the things made of atoms?

(44:02) It's very hard to parse what can be asked scientifically at that point, and what you're really asking about the initial conditions of the universe. And famously with physics, you get two things: You get dynamics, which are the laws of nature, the dynamical laws, which are differential equations, and then you get initial conditions. And the laws of nature are something you can test again and again and again. But the initial conditions are whatever you get. And if there's a strong bias in the initial conditions toward certain things by the way we're doing the measurements, then we're going to get things that are not fundamental. They're, in a sense, historical: They depend on the history of this part of the universe. And that could be aberrant or anomalous with respect to a larger scope of what the universe could be.

Strogatz (44:56): These ideas are so fascinating. Can I ask you to just look into your crystal ball, and we'll end with that. What would notions of the multiverse, cosmological constant, Higgs, all these ideas, but especially the multiverse... where's it gonna lead us in physics?

Read this article:

Are There Reasons to Believe in a Multiverse? - Quanta Magazine

Read More..

Scrutinizing joint remote state preparation under decoherence … – Nature.com

The rest is here:

Scrutinizing joint remote state preparation under decoherence ... - Nature.com

Read More..

A New Subatomic Particle The Most Beautiful Strongly Bound … – SciTechDaily

Dibaryons, fascinating entities in nuclear and particle physics, represent a state of matter where two baryons, each consisting of three quarks, are bound together. The concept was first proposed in the context of quantum chromodynamics (QCD), the theory describing strong interactions between quarks and gluons.

Scientists from the Tata Institute of Fundamental Research and The Institute of Mathematical Science have predicted the existence of a dibaryon particle, built entirely from bottom quarks. This particle, termed D6b, is predicted to have a binding energy 40 times stronger than that of the only known stable dibaryon, deuteron. This discovery, made possible through Quantum Chromodynamics on space-time lattices, could provide valuable insights into the nature of strong forces and quark mass interactions.

Dibaryons are subatomic particles composed of two baryons. Their formation, which occurs through interactions between baryons, is fundamental to big-bang nucleosynthesis and to nuclear reactions, including those happening within stars, and it bridges the gap between nuclear physics, cosmology, and astrophysics. Fascinatingly, the strong force, responsible for the formation and the majority of the mass of nuclei, facilitates the formation of a plethora of different dibaryons with diverse quark combinations.

Nevertheless, these dibaryons are not commonly observed; the deuteron is currently the only known stable dibaryon.

To resolve this apparent dichotomy, it is essential to investigate dibaryons and baryon-baryon interactions at the fundamental level of strong interactions. In a recent publication in the journal Physical Review Letters, physicists from the Tata Institute of Fundamental Research (TIFR) and The Institute of Mathematical Science (IMSc) have provided strong evidence for the existence of a deeply bound dibaryon, entirely built from bottom (beauty) quarks.

Using the computational facility of the Indian Lattice Gauge Theory Initiative (ILGTI), Prof. Nilmani Mathur and graduate student Debsubhra Chakraborty from the Department of Theoretical Physics, TIFR, and Dr. M. Padmanath from IMSc have predicted the existence of this subatomic particle. The predicted dibaryon (D6b) is made of two triply bottom Omega (Ω_bbb) baryons, having the maximal beauty flavor.

Schematic picture of the predicted dibaryon, D6b, made of two Omega baryons. Credit: Nilmani Mathur

Its binding energy is predicted to be as much as 40 times stronger than that of the deuteron, perhaps entitling it to be the most strongly bound beautiful dibaryon in our visible universe. This finding elucidates the intriguing features of strong forces in baryon-baryon interactions and paves the way for further systematic study of the quark-mass dependence of baryon-baryon interactions, which may explain the emergence of binding in nuclei. It also brings motivation to search for such heavier exotic subatomic particles in next-generation experiments.

Since the strong force is highly non-perturbative in the low energy domain, there is no first-principles analytical solution as yet for studying the structures and interactions of composite subatomic particles like protons, neutrons, and the nuclei they form. Formulation of quantum chromodynamics (QCD) on space-time lattices, based on an intricate amalgamation between a fundamental theory and high-performance computing, provides an opportunity for such study.

Not only does it require a sophisticated understanding of the quantum field-theoretic issues, but the availability of large-scale computational resources is also crucial. In fact, some of the largest scientific computational resources in the world are being utilized by lattice gauge theorists who are trying to solve the mystery of strong interactions of our Universe through their investigations inside the femto-world (within a scale of about one million-billionth of a meter).

Lattice QCD calculations can also play a crucial role in understanding nuclei formation at the Big Bang and their reaction mechanisms, in aiding the search for physics beyond the Standard Model, and in investigating matter under extreme conditions of high temperature and density, similar to those in the early stages of the Universe after the Big Bang.

Reference: "Strongly Bound Dibaryon with Maximal Beauty Flavor from Lattice QCD" by Nilmani Mathur, M. Padmanath, and Debsubhra Chakraborty, 16 March 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.111901

Here is the original post:

A New Subatomic Particle The Most Beautiful Strongly Bound ... - SciTechDaily

Read More..

Homegrown review: Timothy McVeigh and the rise of the Trumpist threat – The Guardian


Jeffrey Toobin has written a brilliant, chilling book on the Oklahoma City bombing and the rise of Republican extremism

Sun 14 May 2023 02.00 EDT

Jeffrey Toobin has combined two great books in one. The first is an edge-of-your-seat thriller, describing Timothy McVeigh's every movement on his way to committing one of the most horrific crimes in American history. The second traces how a huge part of the Republican establishment has come to embrace many of McVeigh's most dangerous convictions.

Toobin is a lawyer who became a full-time writer and TV pundit 30 years ago. This is his ninth book and his most important, because it gives the clear and present danger of rightwing extremism the attention it deserves.

McVeigh was a brilliant marksman who fought in the first Iraq war but failed a tryout for the Green Berets after only two days. This, Toobin writes, was a shattering defeat; he had no plan B.

McVeigh's biggest ideological influence was a novel, The Turner Diaries, which envisaged a world in which the government had the power to confiscate private arms, Black people were allowed to attack whites with impunity and whites were punished for defending themselves. It also imagined the blowing up of the FBI building in Washington with a truck filled with thousands of pounds of fertilizer, blasting gelatin and sticks of dynamite.

That became McVeigh's template for blowing up the Alfred P Murrah Federal Building in Oklahoma City, on 19 April 1995, with a rental truck. The death toll was 168, including 19 children. More than 500 were injured.

Toobin covered McVeigh's trial for the New Yorker and ABC News. His interest was rekindled when he realized the conspirators arrested in a plot to kidnap the current Michigan governor, Gretchen Whitmer, were much like McVeigh.

"I know these people," he writes.

Then he discovered McVeigh's lead lawyer had donated 635 boxes of documents to the University of Texas.

"I knew that an archive of this extent had never before been publicly available in a major case."

In Homegrown, Toobin combines the fruits of those documents with interviews with more than 100 participants, among them Bill Clinton, president at the time, and Merrick Garland, now attorney general, then lead prosecutor of McVeigh. The result is one of the most detailed and exciting true crime stories I have ever read.

But in many ways the other book Toobin has written is even more important. It is the book that looks at the birth of the extreme language that now dominates Republican politics. McVeigh embraced white supremacy and violent action just as a Republican House speaker, Newt Gingrich, and a talk show host, Rush Limbaugh, were engaging in rhetorical violence at a pitch the country had rarely heard before on national broadcasts.

Gingrich instructed Republicans to describe Democrats as "sick", "pathetic", "traitors", "radical" and "corrupt", while describing himself as standing between us and Auschwitz. Limbaugh said the second violent American revolution was a quarter of an inch away. Toobin draws a straight line to the titles of books written by extremists today, from Deliver Us From Evil: Defeating Terrorism, Despotism and Liberalism (Sean Hannity) to Treason: Liberal Treachery from the Cold War to the War on Terrorism (Ann Coulter).

To Toobin, the mistake Garland made was the same one he himself made when he covered McVeigh's trial: both focused on the acts of a loner, instead of connecting the atrocity to the beginnings of the mainstreaming of rightwing extremism.

One of the most interesting parts of the book lies in Clinton's prescience. Because the Oklahoma bombing came barely two years after the first bombing of the World Trade Center by Islamic fundamentalists, the media and many others assumed foreigners were the culprits again. Clinton was certain that wasn't the case.

"This was domestic, homegrown, the militias," he told his staff. "I know these people. I've been fighting them all my life."

Clinton's earliest political memory was of the Arkansas governor Orval Faubus refusing to allow Black students into Little Rock Central high school. Clinton remembered those opposed to integration, the faces twisted with rage. He also believed hatred was especially virulent in the early 1990s, because of Gingrich's sneering contempt and Limbaugh's roiling bombast.

McVeigh's own linear connection to old hatreds was confirmed by his membership of the Ku Klux Klan.

McVeigh saw himself as the leader of an army of extremists but Toobin is convinced, by the evidence, there were only two significant co-conspirators. The disastrous change in our own time lies in the way the internet has enabled millions of such people to connect. One study for the Department of Homeland Security found social media was used in 90% of US extremist plots.

Toobin writes: "More than any other reason the internet accounts for the difference between McVeigh's lonely crusade and the thousands who stormed the Capitol on January 6."

The terrorism expert Juliette Kayyem said the internet gave white-supremacist terrorism what amounts to a dating app online. In Toobin's words, when Donald Trump became president, the wolf pack had a new leader.

This is one of the most important markers of the decline of the Republican party. After Oklahoma City, no politicians defended the attack. But after January 6, many Republicans did just that. Andrew Clyde, a Georgia congressman, said the riot resembled a normal tourist visit. Since leaving the White House, Trump has turned to a new level of feral zealotry, embracing QAnon, the antisemitic conspiracy theory he reposts on his social media platform, and regularly expressing eagerness to pardon rioters as soon as he enters the White House again.

This week provided the most dramatic evidence yet of how completely the political-media establishment has been corrupted by organized hatred. CNN, a formerly respectable organization, decided the best thing it could do was to give a national forum to the hero of millions of white supremacists. That decision alone drove home the urgency of the message of Toobin's brilliant book.


Read the original:
Homegrown review: Timothy McVeigh and the rise of the Trumpist threat - The Guardian

Read More..

Study combines quantum computing and generative AI for drug discovery – Phys.org

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked

peer-reviewed publication

trusted source

proofread

Insilico Medicine, a clinical-stage generative artificial intelligence (AI)-driven drug discovery company, today announced that it combined two rapidly developing technologies, quantum computing and generative AI, to explore lead-candidate discovery in drug development and successfully demonstrated the potential advantages of quantum generative adversarial networks in generative chemistry.

The study, published in the Journal of Chemical Information and Modeling, was led by Insilico's Taiwan and UAE centers, which focus on pioneering and constructing breakthrough methods and engines with rapidly developing technologies, including generative AI and quantum computing, to accelerate drug discovery and development.

The research was supported by University of Toronto Acceleration Consortium director Alán Aspuru-Guzik, Ph.D., and scientists from the Hon Hai (Foxconn) Research Institute.

"This international collaboration was a very fun project," said Aln Aspuru-Guzik, director of the Acceleration Consortium and professor of computer science and chemistry at the University of Toronto. "It sets the stage for further developments in AI as it meets drug discovery. This is a global collaboration where Foxconn, Insilico, Zapata Computing, and University of Toronto are working together."

Generative Adversarial Networks (GANs) are among the most successful generative models in drug discovery and design, and have shown remarkable results in generating data that mimics a target data distribution across different tasks. The classic GAN model consists of a generator and a discriminator: the generator takes random noise as input and tries to imitate the data distribution, while the discriminator tries to distinguish fake samples from real ones. A GAN is trained until the discriminator can no longer tell the generated data from the real data.
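
For readers unfamiliar with this setup, the following is a minimal, generic PyTorch sketch of the generator/discriminator game described above. It is a toy GAN on plain vectors, not MolGAN or Insilico's code, and the network sizes and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # illustrative assumptions

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
# Discriminator: outputs a logit scoring "real" vs "fake".
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # Discriminator step: push real samples toward label 1, fakes toward 0.
    fake = generator(torch.randn(n, LATENT_DIM)).detach()
    loss_d = (bce(discriminator(real_batch), torch.ones(n, 1)) +
              bce(discriminator(fake), torch.zeros(n, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to make the discriminator output 1 on fakes.
    loss_g = bce(discriminator(generator(torch.randn(n, LATENT_DIM))),
                 torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Training alternates these two steps until, ideally, the discriminator's accuracy drops to chance, the stopping condition described above.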

In this paper, researchers explored the quantum advantage in small-molecule drug discovery by substituting, step by step, components of MolGAN, an implicit GAN for small molecular graphs, with a variational quantum circuit (VQC): as the noise generator, as the generator built with the patch method, and as the discriminator, comparing each hybrid's performance with its classical counterpart.
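
As a rough illustration of the first of those substitutions, the sketch below shows a VQC acting as a GAN's noise source. It assumes PennyLane with the PyTorch interface; the qubit count, ansatz, and circuit depth are assumptions for illustration, not the paper's exact circuit.

```python
import pennylane as qml
import torch
import torch.nn as nn

N_QUBITS = 8   # assumption: size of the latent noise vector
N_LAYERS = 3   # assumption: depth of the variational ansatz

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def vqc_noise(angles, weights):
    # Random input angles supply the stochasticity; the trainable
    # entangling layers shape the distribution of the latent vector.
    qml.AngleEmbedding(angles, wires=range(N_QUBITS))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    # Pauli-Z expectation values in [-1, 1] replace Gaussian noise.
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

class QuantumNoise(nn.Module):
    """Drop-in replacement for a GAN's classical noise source (a sketch)."""
    def __init__(self):
        super().__init__()
        shape = qml.StronglyEntanglingLayers.shape(N_LAYERS, N_QUBITS)
        self.weights = nn.Parameter(0.1 * torch.randn(shape))

    def forward(self):
        angles = torch.rand(N_QUBITS) * torch.pi  # fresh randomness per call
        return torch.stack(list(vqc_noise(angles, self.weights)))

# The latent vector would then feed the classical generator, e.g.:
# z = QuantumNoise()(); fake_graph = classical_generator(z)
```

The patch-method generator and quantum discriminator substitutions follow the same pattern, with VQCs replacing progressively larger parts of the classical network.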

The study not only demonstrated that the trained quantum GAN can generate training-set-like molecules when the VQC is used as the noise generator, but also that the quantum generator outperforms the classical GAN on the drug properties of generated compounds and on the goal-directed benchmark.

In addition, the study showed that a GAN with a quantum discriminator containing only tens of learnable parameters can generate valid molecules and outperforms its classical counterpart, which has tens of thousands of parameters, in terms of generated-molecule properties and KL-divergence score.
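
The KL-divergence score compares the distribution of molecular properties in the generated set against the training set. Below is a small NumPy/SciPy sketch of that kind of histogram-based comparison, run on synthetic, assumed property values (e.g. logP-like numbers); it is not the benchmark's actual implementation.

```python
import numpy as np
from scipy.stats import entropy

def kl_score(train_vals, gen_vals, bins=20, eps=1e-10):
    # Histogram both samples over a common range, then compute
    # KL(train || generated); lower means the generated property
    # distribution better matches the training set.
    lo = min(train_vals.min(), gen_vals.min())
    hi = max(train_vals.max(), gen_vals.max())
    p, edges = np.histogram(train_vals, bins=bins, range=(lo, hi))
    q, _ = np.histogram(gen_vals, bins=edges)
    p = (p + eps) / (p + eps).sum()  # smooth and normalize
    q = (q + eps) / (q + eps).sum()
    return entropy(p, q)

# Synthetic logP-like values, purely for illustration:
rng = np.random.default_rng(0)
train = rng.normal(2.5, 1.0, 5000)
gen = rng.normal(2.4, 1.1, 5000)
print(f"KL divergence: {kl_score(train, gen):.4f}")
```

In a benchmark setting, such per-property divergences are typically aggregated across several physicochemical descriptors into a single score.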

"Quantum computing is recognized as the next technology breakthrough which will make a great impact, and the pharmaceutical industry is believed to be among the first wave of industries benefiting from the advancement," said Jimmy Yen-Chu Lin, Ph.D., GM of Insilico Medicine Taiwan and corresponding author of the paper. "This paper demonstrates Insilico's first footprint in quantum computing with AI in molecular generation, underscoring our vision in the field."

Building on these findings, Insilico scientists plan to integrate the hybrid quantum GAN model into Chemistry42, the Company's proprietary small molecule generation engine, to further accelerate and improve its AI-driven drug discovery and development process.

Insilico was one of the first companies to use GANs in de novo molecular design, publishing the first paper in this field in 2016. The Company has delivered 11 preclinical candidates using GAN-based generative AI models, and its lead program has been validated in Phase I clinical trials.

"I am proud of the positive results our quantum computing team has achieved through their efforts and innovation," said Alex Zhavoronkov, Ph.D., founder and CEO of Insilico Medicine. "I believe this is the first small step in our journey. We are currently working on a breakthrough experiment with a real quantum computer for chemistry and look forward to sharing Insilico's best practices with industry and academia."

More information: Po-Yu Kao et al, Exploring the Advantages of Quantum Generative Adversarial Networks in Generative Chemistry, Journal of Chemical Information and Modeling (2023). DOI: 10.1021/acs.jcim.3c00562

The data-acquisition and source code associated with this study are publicly available at github.com/pykao/QuantumMolGAN-PyTorch.

Journal information: Journal of Chemical Information and Modeling

See the original post:
Study combines quantum computing and generative AI for drug discovery - Phys.org

Read More..