Category Archives: Quantum Physics

How to invent a time machine – Daily Kos

This story is about physics, not science fiction. Traveling into the future is easy, in fact inevitable. It's a one-way trip, and I will say no more about that. There is nothing in this story about sending objects or people into the past; that's way beyond what our current mastery of physics can accomplish. But sending information into the recent past may not be impossible, and in fact we may be starting to see a way to get there from here (or should I say, a way to get then from soon). That is the subject of this story.

Spoiler alert: the bottom line is that sending information back in time appears to be theoretically possible, though the practical difficulties may prove to be insurmountable, and there is now an opportunity for creative amateurs comfortable with oscilloscopes and radio-frequency signal generators to do some really interesting research on time-shifting information a short way into the past. We're talking fractions of a microsecond into the past, but one has to start somewhere. And if the process can be daisy-chained, it only takes a million microseconds to make a second. That's where I'm heading, but I will take the long scenic route to get there.

I published a story here earlier this year with the audacious and provocative title Understanding quantum weirdness. This had two unexpected outcomes. First, on the strength of my diary I was invited to review a sci-fi information-time-shift novel manuscript for physics plausibility. This gave me much to think about. Second, I sent a link to my article to Dr. John Cramer, whose Transactional Interpretation of quantum mechanics I described and promoted, and received a very kind response to which he attached a pre-print of an article he was about to submit to Analog Magazine for his regular column in their July-August issue. Now that Cramer's column has been published, I am free to expand upon it.

19th Century Electromagnetism

Some historical background is in order. Let's start with Michael Faraday (1791-1867), a brilliant experimentalist who studied electricity and magnetism, among other things. In 1821, Faraday was the first to demonstrate the use of current flow in a magnetic field to generate rotary motion, a concept that grew up into the electric motor.

Faraday also invented the first electric generator, the Faraday disk, in 1831.

Faraday's experiments laid the foundation for the concept of fields in physics, specifically electromagnetic fields. And he pretty much kicked off the current Age of Electromagnetism.

James Clerk Maxwell (1831-1879) recognized the value of Faraday's insights, and translated them into a page of equations, the mathematical representation of Faraday's lines of force. These equations form the core of what is now known as Classical Electrodynamics (CED for short). Maxwell published A Dynamical Theory of the Electromagnetic Field in 1865, two years before Faraday died.

This was the first theory to describe magnetism, electricity, and light as different manifestations of the same phenomenon. Maxwell's equations showed that electric and magnetic undulations can form free-standing waves that travel through space at the speed of light; Maxwell proposed that these waves are what light actually is. Moreover, electromagnetic waves can have frequencies higher and lower than the frequencies (colors) we see. Recognizing this, Maxwell predicted the existence of radio waves.

The German physicist Heinrich Hertz generated radio waves in his laboratory in 1887. The Italian inventor Guglielmo Marconi developed the first practical radio transmitters and receivers around 1894-1895. Radio communication began to be used commercially around 1900. Thus began the Age of Radio, bringing music and entertainment into homes. And where there's radio, can WiFi be far behind?

Maxwell's equations have two independent solutions: one with waves that carry positive energy forward in time, and one with waves that carry negative energy backward in time. This has nothing to do with quantum mechanics; the time-reversed negative-energy waves are solidly built into the classical electromagnetic theory developed in the 1800s. The form of the equations is such that the existence of two solutions is inevitable.

Now a bit of terminology. The time-reversed waves, which arrive somewhere before they were emitted, are called advanced waves. The more familiar waves that carry positive energy forward in time, arriving somewhere after they were emitted, are called retarded waves. Physicists started using this terminology many decades ago, and now we're stuck with it.

We live in the realm of retarded waves, apparently, and the classical approach is to simply disregard the time-reversed solution, and work only with the retarded waves of everyday experience.

20th and Early 21st Century Electromagnetism

And then along came quantum theory. In quantum mechanics, electrons are treated as point-like objects with no internal structure, since that is how they appear to experimentalists. A persistent problem results from treating the electron as a point charge. The electric field strength increases with the inverse square of the distance from an electron. As the distance goes to zero, the field strength goes to infinity. The energy stored in that field diverges too, and thanks to E=mc^2 so does its contribution to the electron's mass. Something is clearly wrong with this picture. This is known as the self-energy problem, as it arises from the interaction of an electron with its own electric field.

Paul A.M. Dirac was one of the early giants of quantum mechanics. Among other accomplishments, he was the first to incorporate relativity into quantum wave equations, and he predicted the existence of antimatter. In 1938 he published a technical paper re-examining Classical Electrodynamics and Maxwell's equations with the electron as a point charge being influenced by electromagnetic fields. He built the radiation field from a combination of the retarded and advanced solutions, without providing justification other than symmetry, and showed that the resulting field remained finite and continuous through the center of the electron.

Wheeler and Feynman (WF for short), following up on the earlier work by Dirac, published in 1945 and 1949 their own version of classical (not quantized) electromagnetic theory, formulating it as an action-at-a-distance theory rather than a field theory. They started with the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation, as are the field equations themselves. WF Absorber theory, as it is known, focused on retarded waves coming from the emitter of a photon, and on advanced waves reaching back in time from the absorber to the emitter of a photon. This provides justification for the assumption that the retarded and advanced waves carry equal weight.

WF made the assumption that electrons do not interact with their own fields, so the self-energy problem is eliminated. This also eliminates the well-known energy loss and recoil processes known as radiative damping. WF theory replaces radiative damping by allowing the emitting electron to interact with the advanced wave from the absorbing electron. It was an innovative way to deal with the self-energy problem, which was mathematically successful but failed to reproduce some subtle electron behavior. (It turns out that electrons really do interact with themselves.) Also, it was a classical theory that did not appear to lend itself to quantization. Feynman set it aside and went on to bigger and better things, particularly his role (along with Schwinger and Tomonaga) in developing the modern theory of quantum electrodynamics (QED). Wheeler set it aside and went on to work on quantum gravity and geometrodynamics, which seek to quantize spacetime itself.

In 1986, John Cramer published his transactional interpretation of quantum mechanics. His approach has a great deal in common with WF Absorber theory, though there are also important differences. Cramer uses the Schroedinger equation and its complex conjugate to represent retarded and advanced waves, respectively. Think of the retarded wave as an offer to transmit a photon, and think of the advanced wave as an offer to receive a photon. A would-be emitter sends out time-symmetric retarded and advanced waves. A would-be absorber is stimulated by an incoming retarded wave to send out its own retarded and advanced waves. Thanks to time reversal, the advanced wave returning from the absorber arrives at the time the retarded wave leaves the emitter. This is true for all the advanced waves from all potential absorbers, so the emitter knows what all of its options are, and can choose one. On the path(s) between the emitter and the chosen absorber, retarded and advanced waves reinforce each other and strengthen enough to transmit a quantum of energy and momentum (and other quantum properties). Outside the spacetime line(s) between emitter and absorber, the phase relationships are such that the retarded and advanced waves largely cancel each other out. Keep the last two sentences of this paragraph in mind, because they will be important later.

Cramer argues, convincingly in my opinion, that the Transactional Interpretation avoids the philosophical difficulties of other interpretations, provides the means by which some of the more mysterious results of quantum mechanics can be achieved, and gives us a narrative that makes sense of bizarre quantum results like retrocausality and entanglement. I reviewed the Transactional Interpretation at length here. Cramer has written a book on the subject titled The Quantum Handshake.

I will indulge in a digression into one side-issue. Allowing emitters to see the entire future universe of potential absorbers, as in WF absorption theory, makes it relatively easy to visualize how probabilities can be honored. Think of the potential absorbers as forming a pie chart, with the width of each slice proportional to the probability of the interaction. Think of the emitter spinning a spinner at the center of the pie, and selecting whichever absorber the pointer points to. Then the probabilities take care of themselves, and even improbable interactions are selected at the right frequency. On the other hand, this picture seems to be in conflict with the indeterminacy of quantum events. It assumes the entire future universe of absorbers can be foreseen, which is hard to reconcile with the random unpredictability we observe. Cramer has responded to criticism along these lines by (reluctantly) proposing a principle of hierarchy, whereby potential absorbers are accepted or rejected sequentially in order of increasing spacetime distance from the emitter. Then, once an absorber has been selected, the universe is free to continue evolving randomly, including any consequences of relocating a quantum. The hierarchy principle may make little difference in terms of testable consequences, but the philosophical implications are significant. It gives the determinacy/indeterminacy question a nuanced answer: under this hypothesis, each quantum event follows a randomly selected path that is predetermined from beginning to end. End of digression.

There is a relatively new class of high-power pulsed lasers called Free Electron Lasers (FEL for short), first proposed by John Madey in 1971 and first demonstrated at Stanford University a few years later. An FEL involves a beam of electrons accelerated to nearly light speed, high voltage power supplies, vacuum pumps, powerful magnets, radiation shielding: a roomful of equipment. On the plus side, it generates a brief but very intense pulse of mostly coherent electromagnetic radiation (photons) that can be tuned to a wide range of frequencies, from microwaves up through the visible spectrum and on up into x-rays.

In 2015, John Madey published a technical paper with two other authors (Niknejadi, Madey, and Kowalczyk, or NMK for short), making the following points:

Classical Electrodynamics (CED) does not accurately predict the fields and forces found in a free electron laser emitting an intense burst of coherent electromagnetic radiation into free-space. [O]ne of the potentially relevant limitations of conventional field-based CED theory is its reliance on the imposition at all spatial and temporal scales [of a restriction] to the retarded solutions allowed, but not mandated, by Maxwell's equations.

…

[A] competing model of CED, the action-at-a-distance model of Wheeler and Feynman [20], (1) allows for the time symmetric inclusion of both the advanced and retarded interactions allowed by Maxwell's equations, (2) includes a plausible if not unique statement of the physical boundary conditions for the case of radiation into free-space, and (3) provides a description of the forces generated through the process of coherent emission that is fully compatible with the energy integral of Maxwell's equations in contrast to the fundamentally flawed solutions of conventional field-based CED.

…

It is the second key purpose of this paper to demonstrate that the solutions developed by Wheeler and Feynman in their model of radiation into free-space successfully predict the forces needed to insure compliance with Maxwell's equations. The success of this model does not necessarily imply the need to abandon the field-based electrodynamics, but demonstrates the need to reformulate it.

…

Based on past critical reviews, the Wheeler-Feynman model of radiation into free-space has been found to be fully compatible with Maxwell's equations, quantum electrodynamics and causality. Therefore, there can be no objection on theoretical grounds to its implication for the reformulation of the more widely accepted field-based CED theory to include the model's half-advanced, half-retarded time symmetric interactions as required to assure consistency with Maxwell's equations. Objections to this reformulation can only be based on experiment.

NMK then propose an experiment to look for evidence of advanced waves. Their proposal has the following key elements. First, use an electrically small antenna, no more than 1/10 wavelength. Second, direct the electromagnetic signal into an absorbing environment such as a high-quality anechoic chamber, to approximate the WF boundary conditions. (This turns out to be bad advice, and a reminder that not all expert advice is helpful.) Third, develop and use a phase-sensitive probe to measure only that component of the field that oscillates in phase with the velocity of the oscillating charged particles in the nearby radiation source. [T]he design of such a phase sensitive field probe is within the current state of the art, though requiring a significant commitment with respect to engineering and commissioning.

21st Century Electromagnetism: Advanced Waves Have (probably) Been Detected!

Dr. John Cramer gets credit for breaking the news in the popular press: a pre-print was published online by Darko Bajlo, whom Cramer describes as a retired Croatian military electronic surveillance specialist, presenting Bajlo's detection of advanced radio waves.

The basic idea is to aim a radio-frequency transmitter into the sky, where the suns don't shine. The transmitter sends out both retarded waves into the future and advanced waves into the past. Since radio-frequency absorbers in outer space are few and far between, some of the retarded waves never find an absorber. Some absorbers send back advanced waves, but there aren't enough of them to cancel out all the advanced waves that the transmitter originally sent. Thus the excess advanced waves from the transmitter should become detectable as a sort of echo preceding a pulse of retarded waves.

Bajlo did not use a free-electron laser, or a laser of any sort; he used an off-the-shelf signal generator, available for around $500. His most expensive piece of equipment was an oscilloscope, around $800. He tried three different transmitting antennas: a 1/10 wavelength monopole, a 3-element Yagi antenna (moderately directional, i.e. moderately high gain) of unspecified size, and a pyramidal horn antenna (higher gain, more directional) with an aperture comparable to the wavelengths he used. He was successful with all three, so apparently the characteristics of the transmitting antenna are not critical. Following the recommendation of Fearn, Bajlo used wavelengths greater than 21 cm so that shorter waves can't get red-shifted in the distant universe to 21 cm, a wavelength strongly absorbed by interstellar hydrogen. Following the recommendation of NMK, Bajlo tested three detection antennas: 1/6.7 wavelength, 1/10 wavelength, and 1/20 wavelength. The signal was strongest with the shortest antenna, moderate to low at 1/10 wavelength, and nearly gone at 1/6.7 wavelength. Subsequent tests used the 1/20 wavelength detection antenna.

Signal strength depended on the orientation of the transmitting antenna. The advanced wave signal disappeared within 3.5 degrees of the horizon, or within 5 degrees on humid or overcast days. This is presumably due to more absorption of retarded waves on a long tangential path through the atmosphere. The best signal occurred at the highest angle tested, which was about 10 degrees above the horizon. The signal also grew weaker when the transmitting antenna pointed toward the center of the galaxy, where more absorbers can be found.

The time shift depended on the distance R between the transmitting antenna and the small receiving antenna. In general, the time between the retarded wave and the corresponding advanced wave passing the little receiving antenna is 2R/c. The speed of light (or radio waves) c is 0.3 m/ns (meters per nanosecond), or 11.8 inches/ns. (A nanosecond is a billionth of a second.) For example, when R = 18 meters, the retarded waves take 18m/(0.3 m/ns) = 60 ns to cover the distance, and the advanced waves take 18m/(-0.3 m/ns) = -60 ns to cover the same distance, so the advanced waves should arrive at the receiving antenna 120 ns before the retarded waves. Over multiple runs, Bajlo measured 120.0 ns with a standard deviation of 0.4 ns.
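For readers who want to check the timing arithmetic, here is a minimal sketch (my own illustration in Python, not something from Bajlo's paper) of the 2R/c relation and the 18-meter example described above:

# Minimal sketch of the 2R/c timing arithmetic (illustrative only).
C_M_PER_NS = 0.2998  # speed of light in meters per nanosecond

def advance_ns(distance_m):
    # Time by which the advanced wave should precede the retarded wave
    # at a receiving antenna a distance R from the transmitter: 2R/c.
    return 2.0 * distance_m / C_M_PER_NS

print(round(advance_ns(18.0), 1))  # ~120.1 ns, consistent with the measured 120.0 +/- 0.4 ns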

And what is the utility of transmitting a pulse of radio waves a few dozen nanoseconds into the past? John Cramer has some suggestions:

First, it indicates that "standard" classical and quantum treatments of electrodynamics that reject advanced waves and advanced potentials are leaving out an important aspect of Nature and, at some level, are therefore wrong. Second, theoretical work like WF electrodynamics and my own transactional interpretation of quantum mechanics, which do include advanced waves and potentials, must be taken much more seriously. Third, advanced-wave strength depends on any absorption deficit in the direction the antenna axis is pointing. That means that one could map the universe, as Bajlo has made a start at doing, by accurately measuring the microwave absorption deficit in each sky-pixel, thereby creating a new branch of radio astronomy.

And finally, the observation of advanced waves indicates cracks in the seemingly impenetrable armor of that least-well-understood law of physics, the Principle of Causality.

Cramer notes that lengthening the time shift by increasing the distance can only get one so far, as the signal strength drops rather steeply over modest distances and the speed of light is extremely fast. (If you could put a reflector in orbit around the moon and somehow detect a signal bounced off it, that would only get you 2.56 seconds into the past.) He suggests another approach:

However, perhaps larger time-advances could be achieved by "daisy chaining". Suppose we made a triggerable radio-pulse generator, re-triggerable in 500 nanoseconds or less. It beams a pulse 7.5 meters to a downstream mirror, where it is reflected back past a 1/20 antenna (close to but shielded from the transmitter), then out into space. The 1/20 antenna should detect an advanced signal 50 nanoseconds before transmission. Now construct 20 identical units, configured in a circle for minimum interconnect delays, with each unit triggering the next with its advanced signal. The advanced signal at the 20th unit will precede the initial trigger by 1.0 microseconds.

Now suppose that a counter in the 1st unit permits triggering except when it reads 1,000,000. Now route the advanced signal from the 20th unit back to trigger the 1st unit. The result should be that the earliest advance signal from the 20th unit (counter=0) will occur 1.0 seconds before the initial trigger (counter=1,000,000)!
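Cramer's daisy-chain bookkeeping is easy to tabulate. The sketch below (my own hypothetical Python, not Cramer's) simply multiplies out the numbers he quotes: a 50 ns advance per unit, 20 units per pass around the ring, and a counter limit of 1,000,000 passes:

# Minimal sketch of the daisy-chain arithmetic in Cramer's thought experiment.
ADVANCE_PER_UNIT_NS = 2 * 7.5 / 0.3   # 50 ns: advance from a 7.5 m path, via 2R/c
UNITS_PER_PASS = 20                   # units arranged in a circle
PASSES = 1_000_000                    # counter limit before triggering stops

advance_per_pass_ns = ADVANCE_PER_UNIT_NS * UNITS_PER_PASS  # 1,000 ns = 1 microsecond
total_advance_s = advance_per_pass_ns * PASSES * 1e-9       # 1.0 second

print(f"{advance_per_pass_ns:.0f} ns per pass, {total_advance_s:.1f} s total advance")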

I should say that although Darko Bajlo's writeup describing his observation of advanced waves is pretty convincing and would be a lovely reinforcement of Wheeler-Feynman electrodynamics and the TI, a small voice in my head says that it must be wrong because Nature would never allow us to send messages back in time. Or perhaps Bajlo's work is OK, but my daisy-chain scheme is somehow flawed.

How to Invent a Time Machine

In theory, theory and practice are the same. In practice, they are not. - attributed to Albert Einstein (probably apocryphally)

So here we are. What might the next steps be?

First, reproduce some of Bajlo's results. I would use the same signal generator, and the highest-gain (most directional) transmitting antenna I could lay hands on.

Second, optimize the system for the greatest possible distance at an acceptable signal-to-noise ratio. You're going to need every nanosecond you can get. I would start by varying the receiving antenna size above and below 1/20 wavelength. I would experiment with using a radio-frequency mirror to direct the pulse into the sky, and find out how much is gained by aiming straight up, or in the direction perpendicular to the plane of the Milky Way, or toward the emptiest region of the sky. I would test various frequencies below 1.3 GHz, i.e. wavelengths longer than 23 cm, to see where the signal is strongest (keeping in mind that higher frequencies can carry information more compactly).

Cramer proposes daisy-chaining enough transmitters to overcome the reset time of each transmitter. Obviously a short reset time will be advantageous to minimize the cost of all those transmitters. I wouldn't want to go below three transmitters in the daisy chain: one to start the process and two to pass the signal back and forth, earlier and earlier, until the result pops out of one of them. It may be superstitious of me, but I don't want the results to pop out of the device with the start button before I press the start button.

If you want to test out Cramer's daisy-chain idea, you're on your own finding hardware for it; the signal generator Bajlo used has no way to trigger it with an electrical signal.

Cramer's proposal is a way to send a single bit of information - an electromagnetic pulse - into the past. To make practical use of it, for example to cheat Wall Street by learning stock price changes before they happen, we need the pulse to carry information, and the more information the better. We'll want to transmit pulses shorter than R/c nanoseconds, so the advanced and retarded signals maintain a little separation at the receiving antenna.

This is where we get into difficult design tradeoffs. A simple trigger will no longer suffice, as we are sending information to an earlier time when that information was not yet known.

We need to daisy-chain RF repeaters or linear transponders, to receive a weak advanced waveform containing useful data and re-transmit the same waveform after amplification. But the amplifier/repeater/transponder introduces some delay, mostly from the bandpass filter. A quick online search turned up no repeaters or transponders with a delay less than about 5 microseconds. And in a time machine, time is distance. Specifically, by the 2R/c relation, to make up those 5 microseconds we need the receiving antenna to be roughly 750 meters (nearly half a mile) from the transmitter, and still pick up a usable signal with our necessarily undersized receiving antenna. This may not even be possible. Can we skimp on filters, and gain more than we lose? Can we gain enough distance by cranking up the power of the transmitter? By using an even more directional transmitting antenna? Or by replacing the signal generator with a radio-frequency laser (i.e. a maser) so the signal spreads less?
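To get a feel for how the numbers work against us, here is a rough sketch (my own, using the same 2R/c assumption as before; the 5 microsecond figure is the rough repeater delay mentioned above) of the break-even distance for a given delay:

# Minimal sketch: distance at which the 2R/c advance merely cancels a repeater delay.
C_M_PER_NS = 0.2998  # meters per nanosecond

def break_even_distance_m(delay_ns):
    # Distance R at which the advanced-wave lead (2R/c) equals the processing delay.
    return delay_ns * C_M_PER_NS / 2.0

print(round(break_even_distance_m(5_000)))  # ~750 m just to break even on a 5 microsecond delay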

Hey, I never said this would be easy. It might even be impossible. Maybe the universe defends causality with practical engineering limits rather than with theoretical limits. Still, if I were a billionaire with an interest in high frequency trading, I would already have a team of physicists and engineers and technicians secretly working on it. Just sayin.

More:

How to invent a time machine - Daily Kos

Lee Smolin: the laws of the universe are changing – IAI

We tend to think of the laws of nature as fixed. They came into existence along with the universe, and have been the same ever since. But once you start asking why the laws of the universe are what they are, their invariance also comes into question. Lee Smolin is the type of theoretical physicist who likes asking such why questions. His inquiries have led him to believe that the laws of the universe have evolved from earlier forms, along the lines of natural selection. In this in-depth interview he offers an account of how he came to this view of the evolving universe and explains why physics needs to change its view of time.

Lee Smolin is a rare breed of theoretical physicist. Whereas most physicists see themselves in the business of discovering what the laws of the universe are, Lee Smolin goes a step further: he wants to know why the laws of the universe are what they are.

I believe in an aspirational form of Leibniz's Principle of Sufficient Reason. When seeking knowledge, we should act on the assumption that the principle of sufficient reason is true, otherwise we are likely to give up too soon.

The Principle of Sufficient Reason being the idea that there is a reason for why things are the way they are and Leibniz being a 17th century rationalist philosopher. Lee Smolin is not like other physicists in another way: he draws inspiration from many different fields, including philosophy.


It's perhaps hard to appreciate how unconventional this way of thinking about physics is. Leibniz was a key figure of early modern rationalist philosophy that held necessity to be the key concept that would unlock the mysteries of the universe: things are the way they are because they had to be this way, and reason could explain why that was. Modern science, on the other hand, has for the most part given up on this idea that the world is governed by rational necessity. Instead, contingency rules: the way things are is the way things are, and we can't really know why. For many scientists the question doesn't even make sense. Smolin admits that it might be the case that at some point our explanations simply run out and there are no further why questions we can ask.

Of course this might be the case, and it might not be. The only way to find out is to try to see how far we can go.

And Smolin is prepared to go a lot further in his questioning than most. Pushing the boundaries of explanation has led him to put forward some extraordinary theories, including the idea that the laws of the universe are not invariable across space and time, but are evolving. When asked to give an account of how he arrived at this theory, he offers a kind of intellectual autobiography, explaining why he sees the issue of time as crucial to how we think about laws of nature.

Smolin came of age during the era when the main puzzle of theoretical physics was how to make Einstein's General Relativity consistent with Quantum Mechanics. Time, according to General Relativity, was seen as a relational property, not as something absolute or external to the universe, as Newton had thought. This means that time becomes secondary, as Smolin says, a merely relational property between events in the universe, not something fundamental. Quantum mechanics, on the other hand, still seemed to depend on an absolute framework of time that wasn't relational. This was one of the key contradictions at the heart of physics at the time Smolin was still a physics student, and laws of nature were seen as invariant, as time-independent.


The second big issue playing on Smolin's mind when he was a graduate student at Harvard came from particle physics. The Standard Model had just come together, and it seemed to be an immensely powerful tool for explaining the interactions of fundamental particles. But Smolin wasn't satisfied with having just a description - even if it was a very good description - of how particles interact; he also wanted to know why. Why is the neutron slightly heavier than the proton? And why is the mass of the electron about 1,800 times smaller than that of the neutron?

These near coincidences are very important for how the world turned out to be, Smolin adds.

There was a group of cosmologists at the time who were also asking why the universe seemed to be so perfectly tuned to allow for matter to be formed: all these constants, including the Cosmological Constant, had just the right values to allow life to eventually develop. Was this mere accident? Or was there a reason for it? Cosmologists like Martin Rees of Cambridge developed the idea of the Anthropic Principle, which postulated the existence of many different universes in which these constants all had different values, leading to completely different outcomes. Life was possible in our universe because we got lucky; in other universes not only is life not possible, there are no atoms to begin with.

Smolin admits this is a pretty cool idea but he doesn't think it's really a scientific theory, since it doesn't make any predictions. But the puzzle it tried to tackle was a real one, and Smolin had a better idea for how to solve it. He thought to himself: where else do we find systems that are fine-tuned for the emergence of complexity? Biology was to him the obvious answer. "I'm pretty good at stealing ideas from other fields. Everybody has a trick, and that's mine," he says jokingly.


That's how Smolin came up with the idea of applying the principles of evolutionary biology to the universe as a whole. In the same way that Darwinian evolution was able to explain the existence of perfectly developed organisms, with organs that work just the right way to keep them alive and functioning, the idea that the universe as a whole has been undergoing a process of evolution can explain the existence of this fine-tuning of cosmological constants. This seemingly paradoxical balance of the cosmos is not a mere accident - there was a process behind it, akin to natural selection, that gave rise to it. It's an idea that he was surprised to find the American pragmatist philosopher Charles Peirce had also hinted at around the turn of the 20th century.

Putting forward this theory of the dynamically evolving universe led to the other central idea in Smolin's work: a reassessment of the centrality of time.

Smolin is always talking about his collaborators - many of them unconventional thinkers and eccentric in their own way - and how they've contributed to his work. Roberto Mangabeira Unger is one of them: a professor at Harvard's Law School, a Brazilian politician, and a philosopher. Smolin credits Mangabeira Unger with forcing him to come to terms with the contradiction he was seemingly committed to. On the one hand, the quantum gravity Smolin was working on saw laws of nature as fundamental, and time as secondary, as emergent. But applying natural selection to cosmology we get the opposite: time becomes fundamental, and laws of nature evolve, are emergent. This led to a collaboration between the two thinkers, and the publication of their book The Singular Universe and the Reality of Time. Smolin ended up espousing the view that time is fundamental, not secondary as General Relativity would have it, and space an emergent property of it. This was a view that Fotini Markopoulou, another collaborator of Smolin's, also arrived at independently - a view that most theoretical physicists, including Carlo Rovelli, oppose (although Smolin thinks Rovelli is coming around to it in his recent publications).

Both these theories - that the universe and its laws are changing, and that time is a fundamental property of the universe, whereas space is derivative - pose several questions, questions that Smolin sees as invitations for further elaboration and investigation, rather than as objections.

One of the questions I was curious to find out more about was how Smolin thought of the evolution of the universe. What is the mechanism here, exactly?

Smolin has three possible answers to this question, all of them hypotheses, as he stresses to me, given that they aren't capable of making predictions: "I'm not Darwin!" he says.

The most prominent hypothesis is that the universe gives birth to other universes through black holes. This, in itself, was not a new idea. Theoretical physicists John Wheeler and Bryce DeWitt had put forward the hypothesis before, but Smolin tweaked it to fit his view of a universe that evolves, almost along the lines of natural selection. Whereas Wheeler and DeWitt thought the new universe produced each time has random values of the cosmological constant and other key parameters, Smolin took a more Darwinian approach, proposing that each universe embodies very small changes to those cosmological values, allowing for cumulative change and fine-tuning, until we arrive at the universe we have today.


The question I immediately raise is whether this picture of an evolving set of universes, in which the laws of nature are not fixed but ever changing, requires us to postulate a kind of meta-law, a law that would dictate the way this evolution can take place. So are we not back to where we started, the cosmos being dictated by some fixed meta-laws? Smolin is not happy with this solution: "you can't solve this by just accepting that there are fixed laws, they're just meta-laws," he says. But he doesn't really have a definitive answer either. It's a question he takes seriously, however, and he has spent much of his book with Roberto Mangabeira Unger tackling this issue.

Smolin has two other hypotheses for how the universe might be changing. One he calls The Autodidactic Universe, the self-learning universe; the other, The Principle of Precedence - borrowing a concept from jurisprudence when thinking about laws seems quite clever, and in line with Smolin's trick of stealing ideas from other disciplines. They each come with their own conceptual challenges: how can the universe learn anything, and how does the universe remember what has happened in the past, and use it as a precedent to decide what will happen in the future? Thinking of the universe in these terms seems to bend our concepts to breaking point, although admittedly things like machine learning, a technology that is very much real, do the same. If machines can learn from a trial-and-error process, why not the universe as a whole? In fact, Smolin has collaborated with Microsoft computer scientist Jaron Lanier to model how the universe might be understood as a giant machine-learning process.

The other major challenge to Smolin's theory is directed at his view that time is more fundamental than space. How is that even possible, I asked him. If time is some measure of change, how can there be change without space? Where is the change taking place?

Here Smolin brings up another collaborator, Julian Barbour, whom he acknowledges as his mentor when it comes to the philosophy of fundamental physics. In work they did together they showed that it is indeed possible to do dynamics, the study of evolving quantities, without space. In order to do that, Smolin tells me, you need to think of time as playing a causal role itself, as creating new events from past ones. If we think of time this way, all we have to do is look back in time, at the causes coming at you from your past that have made you, to see change.


These are fascinating ideas that really capture the imagination, which goes some way to explaining why Smolin, a theoretical physicist who is mostly in the business of publishing highly technical papers, impenetrable to the uninitiated, has acquired something of a cult status beyond the world of academia. But even though his theories are beautiful mosaics of ideas from physics, philosophy, biology, and computer science, the question is: do they ultimately offer us answers to the puzzles they set out to tackle? Smolin offers a humble self-diagnosis that captures both the joy of research and the hope of an enduring legacy:

I don't claim to have complete ideas, but I believe I have done enough to show that these are things worth thinking about. I haven't built a new paradigm yet, but I'm having a lot of fun in the process.

Link:

Lee Smolin: the laws of the universe are changing - IAI

"TAPPING THE SOURCE" SERIES DEBUTS ON JULY 16 WITH WORLD’S LEADING WELLNESS ICONS, HUMANITARIANS, PHILOSOPHERS, PHYSICISTS – PR Newswire

Summit Led by Dr. and Master Zhi Gang Sha Features Conversations with Dr. Deepak Chopra, Dr. Ervin Laszlo, Dr. Rulin Xiu

Quarterly Held Event Targeting Spirituality and Science Will Help People Navigate Unprecedented Challenges of 2022

NEW YORK, July 8, 2022 /PRNewswire/ --As millions of Americans continue to grapple with strife in their daily lives caused by a continuing global pandemic, a looming economic recession, lingering social injustices, upsetting political upheavals, and heartbreaking events including deadly mass shootings and fighting in Europe unseen since World War II, a diverse cross section of the world's leading spiritual and wellness icons, humanitarians, philanthropists, philosophers and physicists are launching a series of events to raise awareness about the science of spirituality and help people navigate the unprecedented challenges of 2022.

"Tapping the Source" is an online science and spirituality summit premiering on July 16 that will be held quarterly for the remainder of 2022 and beyond. Leading the effort is Dr. and Master Zhi Gang Sha, a Tao Grandmaster who has authored more than 10 New York Times bestselling books, and the first panel of guest speakers includes Dr. Deepak Chopra, a world-renowned pioneer in integrative medicine, Dr. Ervin Laszlo, an accomplished philosopher and two-time Nobel Peace Prize nominee, and Dr. Rulin Xiu, a University of California, Berkeley trained quantum physicist who heads the Hawaii Theoretical Physics Research Center.

Responding to an overwhelming need for mental health and wellbeing, and as millions of people are meditating and seeking inner peace, "Tapping The Source" will offer conversations with experts sharing their original discoveries and insights about the science of spirituality. With their own unique perspectives, each panelist will explain how every person has the power to transform their own reality and also have a dramatic impact on the world. This is a rare chance to expand the public's understanding of complex sciences and connect with deeper, underlying sources of life.

Once recognized by Maya Angelou in her own powerful words, "We, the human race, need more Zhi Gang Sha," Dr. and Master Sha combines 5,000-year-old Soulfulness practices together with 21st-century innovations to successfully help celebrities, entrepreneurs, athletes, scientists and everyday people tap into a power, passion, clarity, and purpose they didn't even know they had.

"I am honored to join together with these outstanding thinkers who are revolutionizing how we understand the nature of consciousness and the power of quantum healing," said Dr. and Master Sha. "The mind is just one piece of a bigger puzzle at play, and it is essential for people to align their heart and soul to overcome challenges affecting health, relationships, careers, and beyond."

The online summit will take place on July 16 from 12pm to 5pm. For more information and tickets, visit http://www.tappingthesource.org. 100% of proceeds will support The Chopra Foundation, The Love Peace Harmony Foundation, and the Laszlo Institute of New Paradigm Research, a range of community-serving non-profits established by the program speakers. Tapping the Source is an initiative by Universal Soul Service Corp.

About Tapping The Source July 16 Speakers

Dr. and Master Zhi Gang Sha - a Tao Grandmaster, international spiritual teacher, and 11-time New York Times bestselling author, as well as an M.D. from China and Doctor of Traditional Chinese Medicine in China and Canada. Founder of Tao Academy, the Love Peace Harmony Foundation, the Sha Research Foundation, and the Tao Calligraphy meditation practice - combining the essence of modern Western medicine with ancient Taoist teachings to help people lead happier and healthier lives. Awarded the Martin Luther King, Jr. Commemorative Commission Award for promoting world peace. Featured on PBS with 'The Power of Soul' and 'Soul Healing Miracles'. Appointed to the position of Shu Fa Jia (National Chinese Calligrapher Master) as well as Yan Jiu Yuan (Honorable Researcher Professor) at the State Ethnic Academy of Painting in China.


Dr. Deepak Chopra - World-renowned pioneer in integrative medicine and personal transformation and author of over 90 books; MD, FACP; founder of The Chopra Foundation, a non-profit entity for research on well-being and humanitarianism, and Chopra Global, a modern-day health company at the intersection of science and spirituality.

Dr. Ervin Laszlo - Renowned philosopher and systems scientist. Twice nominated for the Nobel Peace Prize, he has published more than 101 books and over 400 research papers and was the subject of the PBS documentary Life of a Modern-Day Genius. Laszlo is the founder and president of the international think tank The Club of Budapest.

Dr. Rulin Xiu - Ph.D., University of California, Berkeley. Quantum physicist, co-founder of Tao Science, Research Director for the Hawaii Theoretical Physics Research Center, and co-author of the international bestselling book, Tao Science: The Science, Wisdom, and Practice of Creation and Grand Unification.

https://www.tappingthesource.org/

Contact: Michael Johnston, Co-Communications, (617) 549-0639, [emailprotected]

SOURCE Universal Soul Service Corp.

Go here to see the original:

"TAPPING THE SOURCE" SERIES DEBUTS ON JULY 16 WITH WORLD'S LEADING WELLNESS ICONS, HUMANITARIANS, PHILOSOPHERS, PHYSICISTS - PR Newswire

Data-centric Security and the Road to Quantum Safety – Security Boulevard

Quantum computing is the next frontier in technological innovation, one which could change the world forever. But it also represents a potential ticking time bomb from a cybersecurity perspective. That's because quantum computers, once perfected beyond laboratory conditions, will be able to crack the asymmetric (public-key) cryptography on which many enterprises, societies, and economies rely to keep prying eyes away.

The good news is that symmetric cryptography is resistant to this kind of quantum cracking. It represents a useful bulwark against quantum-related security risks, while industry experts work out ways to transition public key infrastructure (PKI) to quantum safety.

Quantum computing sounds like magic. Based on the theory of quantum mechanics, pioneered by physicists such as Planck, Einstein, and Bohr, it derives much of its power from qubits, quantum objects which behave in ways that defy the normal rules of classical physics. When applied to computing, qubits can be used to represent a zero and a one at the same time. Encoding one and zero simultaneously rather than sequentially can dramatically accelerate certain computations, making lightning-fast calculations and problem solving a reality for particular classes of problems.
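As a rough mathematical illustration (my own sketch, not part of the article), a qubit's state is just a normalized pair of complex amplitudes, and the squared magnitudes give the probabilities of reading 0 or 1:

# Minimal illustration of a single-qubit superposition (illustrative only).
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # the |0> state
ket1 = np.array([0, 1], dtype=complex)   # the |1> state

plus = (ket0 + ket1) / np.sqrt(2)        # equal superposition of |0> and |1>

probabilities = np.abs(plus) ** 2
print(probabilities)                     # [0.5 0.5]: a 50/50 chance of measuring 0 or 1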

This could be used to super-charge AI algorithms and open the door for potentially revolutionary discoveries in the pharmaceutical and chemical sectors, as well as huge leaps forward in finance, autonomous driving and many other use cases. Unfortunately, it could also undermine trust in the PKI on which much of the modern world is built: from financial transactions and secure communications, to e-commerce, identity, electronic voting, and much more.

Although we are at least 10-15 years away from a working quantum computer which can achieve this, threat actors could theoretically steal data now to decrypt in a decade's time when they have the means to do so. That lends urgency to the challenge of finding quantum-safe algorithms to replace current asymmetric cryptography.

While PKI is facing an existential threat in the form of quantum computing, symmetric encryption systems are already quantum safe. In fact, we offer mobile post-quantum-safe solutions which protect enterprise data anywhere: at rest, in motion and in use. Tokenization technology like this provides peace of mind that the corporate crown jewels can be kept safe from prying eyes, even in a post-quantum world.
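As a concrete, hedged illustration of the kind of symmetric primitive being described (a generic example, not a description of any vendor's tokenization product), here is AES-256 in GCM mode via the widely used Python cryptography package; Grover's algorithm only halves the effective key strength, so 256-bit keys retain a comfortable margin:

# Minimal sketch of quantum-resistant symmetric encryption with AES-256-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key: ~128-bit strength even against Grover
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"sensitive record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive record"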

But the bigger picture is that enterprises still rely heavily on PKI. In fact, all internet communication is based on asymmetric encryption. That means, until PKI has been transitioned to quantum safety, there will always be opportunities for threat actors to find a way to access sensitive data stores.

The good news is that standards for quantum safe asymmetric encryption are being worked out today. The US National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization Program has already shortlisted several algorithms, and aims to have new quantum-safe standards in place by 2024.

In the meantime, governments are already planning for the new post-quantum era, and enterprises would do well to start thinking about it too. While some of the worlds smartest minds work on creating more certainty in the PKI sphere, symmetric encryption solutions represent a technology organizations can invest in today, that will stand them in good stead.

The road to quantum safety will be a long one. But tokenization represents a solid first step.

Read the original post:

Data-centric Security and the Road to Quantum Safety - Security Boulevard

Ten years on from the Higgs boson, what is next for physics? – The Economist

"I was actually shaking," said Mitesh Patel, a particle physicist at Imperial College, London, as he describes the moment he saw the results. "I realised this was probably the most exciting thing I've done in my 20 years in particle physics."


Dr Patel is one of the leaders of LHCb, an experiment at CERN, in Geneva. CERN is the world's largest particle-physics laboratory, and the LHC bit of the experiment's name stands for Large Hadron Collider, which is likewise the world's biggest particle accelerator. This machine, which collides packets of high-speed protons (examples of a type of subatomic particle called a hadron), was switched on again on July 5th, after a three-and-a-half-year upgrade, for what is known as Run 3. In the interim Dr Patel and his colleagues have been crunching data collected from previous runs. It is the results of these crunchings that are giving him palpitations.

The LHCb team has spent the best part of a decade measuring how subatomic particles known as B mesons decay into lighter particles. B mesons come in many varieties, but all have a constituent called a bottom antiquark. One way in which these mesons decay is by the transformation of the bottom antiquark into a so-called strange antiquark and a pair of leptons, a different class of fundamental particle that includes electrons and their more massive cousins, muons. According to the accepted rules of particle physics, such decays should yield as many muons as they do electrons. For the forces that govern them, there is no difference between the two, an idea called lepton universality.

But that is not what the tallies counted by the LHCb showed. Instead, Dr Patel's team found that only 85 muons were emitted for every 100 electrons.

To the person in the street this may not sound a big deal. To a physicist it is practically an invitation to book a flight to Stockholm. A violation of lepton universality would be a crack in what is called the Standard Model, and therefore Nobel prizewinning stuff. This model has, with assistance from the general theory of relativity developed earlier by Albert Einstein, held physics together for around half a century.

Nor is the B meson anomaly, as it is known, the only recent result that might attract the attention of the prize-awarders at Sweden's Royal Academy of Sciences. Two other Standard Model-violating results, from CERN's American frenemy Fermilab, have also been published recently. After a long period in the doldrums, the sails of the ship of physics are rustling in the breeze. The LHC's latest run may provide the wind needed to fill them properly.

Fermilab's contributions to the anomaly list, announced respectively in the Aprils of 2021 and 2022, are that the magnetic properties of muons wobble around at frequencies which do not match predictions; and that the mass of another Standard Model particle, the W boson, which carries the weak nuclear force that is responsible for a form of radioactivity called beta decay, seems larger than predicted.

Once is happenstance. Twice is coincidence. The third time, as Ian Fleming opined through the mouth of Auric Goldfinger, does look like enemy action. None of these results, it must be said, yet quite reaches the gold standard of confirmation, known as 5-sigma (ie, five standard deviations from the mean), which particle physicists normally demand before they will call something a discovery. Five-sigma equates to a probability of around one in a million that something of interest in fact happened by chance. But all of them are close enough to this threshold to be eye-catching (Dr Patel's, for example, is 3.1-sigma), and thus worthy of further work to attempt to reach the magic value of five.
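For reference, those sigma figures translate into one-sided tail probabilities of a normal distribution; a quick sketch of my own (standard library only):

# Convert a significance in sigmas to a one-sided Gaussian tail probability.
import math

def p_value(sigma):
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (3.1, 5.0):
    p = p_value(sigma)
    print(f"{sigma} sigma -> p ~ {p:.1e} (about 1 in {1/p:,.0f})")

# 3.1 sigma is roughly 1 in 1,000; 5 sigma is roughly 1 in 3.5 million (one-sided),
# the threshold that popular accounts round to "around one in a million".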

If they do survive scrutiny, these three findings may go into future textbooks as the keys which unlocked the door marked Physics beyond the Standard Model. Practitioners have been battering on this portal since the Model was put together in the 1960s and 1970s, to no avail. Their ultimate goal is to unify the Standard Model and general relativity into an overarching theory of everything. That is some way off. But there was until recently a widespread belief that lurking behind this door would be a predicted step on the journey, called Supersymmetry. That is not what these results suggest.

The Standard Model describes two broad classes of particles: fermions and bosons. Fermions are the stuff of matter. Bosons carry the forces which hold that stuff together, or sometimes push it apart.

Fermions divide into leptons, quarks and their antimatter equivalents, which are identical to normal matter but with opposite electrical charges. Bosons include photons, which carry the electromagnetic force (and are the particles of light), the aforementioned W boson, the gluons that hold atomic nuclei together via a second, strong nuclear force, and the Higgs. The discovery of this, in 2012, using the then recently opened LHC, was a triumph of scientific prediction, the particle having been described theoretically by the eponymous Peter Higgs in 1964.

But, though it is one of the most tested, most successful scientific ideas of all time, the Standard Model is not a complete description of the universe. Not only does it fail to account for gravity (this is the purview of general relativity), it cannot explain why matter is more abundant than antimatter. Neither does it say anything about two other important but obscure phenomena: dark matter and dark energy.

Dark matter is stuff that interacts with gravity but not electromagnetism, so can be felt, but not seen. Its abundance can be calculated from its effects on visible matter, which, the sums suggest, it outweighs six times over. And, though it is invisible, its influence is profound. Galaxies, for example, are held together largely by the gravitational fields of their dark matter.

Dark energy is even weirder. Belief in it depends on calculations about the speed at which the universe is expanding, for dark energy is the stuff that propels this expansion. And, to show how little physicists really understand the cosmos, it is worth noting that, together, dark matter and dark energy make up more than 95% of it, and the familiar stuff of stars, planets and human beings themselves less than 5%.

Nor is the Model itself quite as elegant as it is sometimes made out to be. It is, rather, a thing of sealing-wax and string, held together by arbitrary mathematical assumptions. Until recently, this was not a cause of great worry. Supersymmetry, people thought, would ride to the rescue. Susy, as this theory is known for short, got rid of the arbitrary assumptions by predicting a set of heavier (and as-yet-unseen) particles, a superpartner for each known fermion and boson. These sparticles would be too massive for older, less powerful machines to find (mass being, as per Einstein's E=mc^2, an embodiment of energy) but not, it was hoped, for the LHC. It was Susy's smiling face that people expected to greet them when the physics-beyond-the-Standard-Model door eventually opened.

Run 2 of the LHC, however, found no evidence of sparticles. If Run 3 also fails to reveal Susy, some of her supporters will no doubt tweak the numbers to try to explain why. But there is now a whiff of desperation in the air about the theory, and it would be sensible to assume that even if Susy is not dead, she is missing in action. And that will leave physicists scrabbling around for a replacement.

The kit they have to conduct their search with is a yet-more-powerful version of the collider that found the Higgs boson a decade ago. Since the machine paused operations in December 2018, dozens of its superconducting magnets have been replaced with stronger ones and the injection system, which packs 120bn protons into bunches the size of a human hair and then accelerates them before they enter the LHC itself, has been upgraded. The new version of the machine will thus collide more protons, more often and at higher energies than previous incarnations.

The four experiments that sit around its 27km ring and analyse the results of those collisions have also been given a once-over. The LHCb detector in particular has been almost entirely rebuilt. According to Chris Parkes, a physicist from the University of Manchester who acts as the detector's spokesman, something like 90% of the sensitive elements which do the actual detecting have been changed.

Collisions happen so fast and abundantly within the experiments that software known as a trigger system is normally used to decide, quickly, which data to keep and which to delete. A new trigger system at LHCb will permit retention of data from almost all the 40m collisions occurring per second in the upgraded detector, so that more intelligent decisions can be made later about which to retain and analyse.

The first job will be to gather more data on the B meson anomaly, in search of that precious 5-sigma status. Theoreticians, meanwhile, have been busy devising ways to extend the Standard Model to try to explain those mesons' anomalous decays.

One approach starts with the idea of a fundamental particle's flavour. This term was invented in 1971 by Murray Gell-Mann, an architect of the Standard Model, and his student Harald Fritzsch as they sat eating ice cream at a Baskin-Robbins store in Pasadena, California. They wanted a way to label the different types of quarks that had so far been found inside atomic nuclei. Up and down quarks are the constituents of protons and neutrons, but there are two further pairs (or generations) of quarks of different flavours: charm and strange, and top and bottom (also known as truth and beauty). Each successive generation is heavier than the previous.

Leptons are similar. The lightest generation contains the electron; a second, heavier, generation, the muon; the third and heaviest, the tau. Each generation also sports an associated neutrino.

It is ingrained within the Standard Model that its fundamental forces (electromagnetic, weak nuclear and strong nuclear) do not distinguish between flavours. Photons, carriers of the electromagnetic force, interact with electrons, muons and taus in identical ways. Similarly, the gluons of the strong force bind with the same strength to all flavours of quark.

The b meson anomaly challenges this idea. "To me that looks like there's a picture developing where a lot of things are pointing in the same direction. To a beyond-the-Standard-Model theorist, that's exciting," says Ben Allanach, a professor of theoretical physics at Cambridge University. "What it means is there could be additional interactions within the b meson, that's breaking it up with the wrong frequencies."

By frequencies, Dr Allanach means the rates at which electrons and muons are emitted when b mesons decay. The hypothetical new interaction could be what he and his colleagues call the flavour force, a fifth fundamental force of nature besides gravity and the three of the Standard Model. This would act more strongly on muons than it does on electrons. Like Standard Model forces, this force would have a particle associated with it, which they call the Z′ (pronounced "zed prime") boson.

The idea of a force that discriminates between flavours is not in itself new; such theories have been invoked in the past to fill other gaps in the Standard Model. But in all previous versions the force-carrying particle was so heavy that no particle collider was or is powerful enough to create it. Theory suggests that Dr Allanach's particle, if it exists, should have a mass less than 8,000 times that of a proton. This may sound quite big, but it puts it squarely in the sights of Run 3.
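As a rough, back-of-the-envelope conversion (not a figure from the article itself, and ignoring the fact that only a fraction of each collision's energy is available to produce any single new particle), the bound translates as

\[ m_{Z'} \lesssim 8000 \times m_p \approx 8000 \times 0.94\ \mathrm{GeV} \approx 7.5\ \mathrm{TeV}, \]

which sits below Run 3's collision energy of roughly 13.6 TeV.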

Others, though, have a different explanation for the b meson decay anomaly: a proposed new particle called a leptoquark. This theory says that, at a deeper level of nature, quarks and leptons are actually the same thing. What are seen as electrons, muons, top quarks, bottom quarks and so on are actually different faces of the same underlying entity. The leptoquark force that this theory posits would be able to transform quarks into leptons, and vice versa. Crucially, it would also interact at different strengths with the different generations of fermions. In interacting with this force, b mesons would therefore emit electrons and muons at different rates.

Unifying quarks and leptons in this way could explain other things, too. One is why protons and electrons have exactly the same electric charges (though of opposite polarity), even though protons weigh more than 1,000 times as much as electrons do. It is this exact match which allows atoms to exist. The charges of the orbiting electrons are perfectly balanced by those of the protons in the nucleus, which get them from their constituent quarks. "But if these two objects are the same thing, you could understand it," says Gino Isidori of the University of Zurich, who is a leading proponent of the leptoquark hypothesis.

Looking for exchanges of leptoquarks between known particles could be possible during Run 3. "The leptoquark itself would be too heavy for the collider to produce," says Dr Isidori. "But if we are lucky with the Run 3, we will start to see a more consistent series of deviations in the high-energy collisions." That would be an unambiguous sign of the exchange of leptoquarks. Collisions of protons, for example, can (rarely but predictably) give rise to pairs of tau particles. If the number of taus appearing in Run 3 begins to grow, compared with the predictions of the Standard Model, as the energy is cranked up, that, Dr Isidori says, "would be a striking signal."

Both the Z′ particle and the leptoquark could also go some way towards explaining the discrepancy discovered by Fermilab between its measurement of the mass of the W boson and the mass that the Standard Model predicts. This result will need to be checked further by independent experiments but, assuming it stands, Dr Allanach says, "One of the things that can affect the prediction of [the W boson] is a Z′ of exactly the kind that we've introduced."

The Standard Model does not predict the W boson's mass directly. Instead, it predicts the ratio of its mass to that of the Z boson, the other weak-nuclear-force carrier. The Z′ boson of the flavour force would interact with the Z boson of the weak nuclear force and thus alter the predicted ratio. Put the Z boson's mass, which has been measured experimentally, into the altered ratio, says Dr Allanach, and out comes a W boson mass prediction that is much closer to the Fermilab measurement.
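As a sketch of that logic (tree-level only, leaving out the loop corrections on which the real calculation hinges), the Standard Model ties the two masses together through the weak mixing angle \(\theta_W\):

\[ \frac{m_W}{m_Z} = \cos\theta_W \quad\Longrightarrow\quad m_W \approx 91.2\ \mathrm{GeV} \times \sqrt{1 - 0.23} \approx 80\ \mathrm{GeV}. \]

Quantum corrections then shift this tree-level number by a fraction of a per cent; in Dr Allanach's scenario, mixing between the Z and the Z′ would nudge it further still, by roughly the 0.1% gap between the Standard Model value (about 80.36 GeV) and the Fermilab measurement (about 80.43 GeV).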

The third Model-breaking anomaly came from an experiment called Muon g-2. Like other leptons, muons contain a tiny internal magnet. When placed in a strong magnetic field, the direction in which this magnet points wobbles around like the axis of a spinning top.

The strength of the magnet, a number known as the g-factor, determines the size of this wobble. The g-factor, and therefore the amount of wobble, is also influenced by a muon's interactions with any particles that briefly pop into and out of existence around it from the vacuum of space. (This happens because of the uncertainties inherent in quantum mechanics.) The Standard Model can take all of these factors and all known particles and forces into account to make a precise prediction of how much a muon's internal magnet should be wobbling. The measurements from Fermilab, which tallied the motions of 8bn muons, showed a deviation from the Standard Model's prediction. The result had a statistical significance of 4.2 sigma, corresponding to about a one-in-40,000 chance that it is a fluke.
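For readers curious where "one in 40,000" comes from, converting a significance in sigmas into a fluke probability is a standard statistical step. The snippet below is a generic illustration in Python (using the two-sided convention, which matches the quoted figure), not Fermilab's own analysis code:

```python
from scipy.stats import norm

sigma = 4.2
# Two-sided tail probability of a standard normal: the chance of a
# fluctuation at least this large, in either direction, if the Standard
# Model were exactly right.
p_value = 2 * norm.sf(sigma)
print(f"p = {p_value:.2e}, about 1 in {1 / p_value:,.0f}")   # ~2.7e-05, roughly 1 in 37,000

# The 5-sigma discovery threshold mentioned earlier is far stricter:
print(f"5 sigma: about 1 in {1 / (2 * norm.sf(5.0)):,.0f}")  # roughly 1 in 1.7 million
```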

A tweaked version of the flavour force could ride to the rescue here, as well. This time the Z′ boson would have to be lighter than the one used to explain the b meson anomaly (only a thousand times more massive than a proton), but it would also interact preferentially with muons. Muons might randomly emit and reabsorb these lighter Z′ bosons, and that would change the frequency of their magnetic wobbles enough to match the data seen at Fermilab.

As well as investigating these anomalies, Run 3 will poke and prod all the known fundamental particles in ever more detail. "The Standard Model makes very clear predictions about how the Higgs boson should interact with different particles," says Dr Parkes. "If you were to see some deviation of how the Higgs boson is interacting with particles in nature, and compare that with the Standard Model and see some differences, that will be another way of taking you on a journey beyond. But even at the moment, it's telling you about the confidence you have in our current theory. It's telling you about the level at which that theory is a reliable description of the fundamental particles and forces in nature."

Physicists have other things, too, in their sights for Run 3. One is the top quark, the heaviest of the lot, of which only a few hundred had been made before the LHC was built. Two of CERN's detectors, ATLAS and CMS, have recently announced hints of excesses in the production of this and other heavy fermions, notably bottom quarks and tau leptons. These things will all need to be investigated.

What physics no longer has, though, is an all-embracing model of the future to try to fit everything into. Perhaps, just perhaps, Susy will still show up at the party as the collisions get more energetic; possibly she will be wearing one of the disguises that those who have not yet abandoned her are trying to dress her up in. But don't bet on it. For the moment, fundamental physics is back in a pragmatic phase, gathering more pieces of the jigsaw in the hope of fitting them together later. Physicists have by no means abandoned the lofty goal of unifying forces and creating a grand theory that encompasses everything. But they need a new map to get them there.



Read the rest here:

Ten years on from the Higgs boson, what is next for physics? - The Economist

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures – AZoM

Carbon-based nanomaterials such as carbon nanotubes (CNTs), fullerenes, and graphene receive a great deal of attention today due to their unique physical properties. A new study explores the potential of hybrid nanostructures and introduces a new porous graphene CNT hybrid structure with remarkable thermal and mechanical properties.


The study shows how the remarkable characteristics of novel graphene CNT hybrid structures could be modified by slightly changing the inherent geometric arrangement of CNTs and graphene, plus various filler agents.

The ability to accurately control thermal conductivity and mechanical strength in the graphene CNT hybrid structures makes them potentially suitable candidates for various applications, especially in advanced aerospace manufacturing, where weight and strength are critical.

Carbon nanostructures and hybrids of multiple carbon nanostructures have been examined recently as potential candidates for numerous sensing, photovoltaic, antibacterial, energy storage, fuel cell, and environmental improvement applications.

The most prominent carbon-based nanostructures in the research appear to be CNTs, graphene, and fullerene. These structures exhibit unique thermal, mechanical, electronic, and biological properties due to their extremely small size.

Structures that measure in the sub-nanometer range behave according to the peculiar laws of quantum physics, and so they can be used to exploit nonintuitive phenomena such as quantum tunneling, quantum superposition, and quantum entanglement.

CNTs are tubes of carbon that measure only a few nanometers in diameter. CNTs display notable electrical conductivity, and some are semiconductor materials.

CNTs also have great tensile strength and thermal conductivity due to their nanostructure, and the strength of covalent bonds formed between carbon atoms.

CNTs are potentially valuable materials for electronics, optics, and composite materials, where they may replace carbon fibers in the next few years. Nanotechnology and materials science also use CNTs in research.

Graphene is a carbon allotrope consisting of a single layer of carbon atoms arranged in a two-dimensional hexagonal lattice. Graphene was first isolated in a series of groundbreaking experiments by University of Manchester, UK, scientists Andre Geim and Konstantin Novoselov in 2004, earning them the Nobel Prize for Physics in 2010.

In the nearly two decades since then, graphene has become a useful nanomaterial with exceptionally high tensile strength, transparency, and electrical conductivity, leading to numerous and varied applications in electronics, sensing, and other advanced technologies.

A fullerene is another carbon allotrope that has been known for some time. Its molecule consists of carbon atoms that are connected by single and double bonds to form a mesh, which can be closed or partially closed. The mesh is fused with rings of five, six, or seven atoms.

Fullerene molecules can be hollow spheres, ellipsoids, tubes, or a number of other shapes and sizes. Graphene could be considered an extreme member of the fullerene family, although it is generally treated as a material class of its own.

As well as investing a great deal of research into understanding and characterizing these carbon nanostructures in isolation, scientists are also exploring the properties of hybrid nanostructures that combine two or more nanostructure elements into one material.

For example, foam materials have adjustable properties that make them suitable for practical applications like sandwich structure design, biocompatibility design, and high-strength, low-weight structure design.

Carbon-based nanofoams have also been utilized in medicine, both for examining bone injuries and as a base for replacement bone tissue.

Carbon-based cellular structures are produced both with chemical vapor deposition (CVD) and with solution processing. Spark plasma sintering (SPS) methods are also used to prepare graphene for biological and medical applications.

As a result, scientists have been looking at ways to make three-dimensional carbon foams structurally stable. Research suggests that stable junctions between different types of structures (CNTs, fullerene, and graphene) need to be formed for this material to be stable enough for extensive application.

New research from mechanical engineers at Turkey's Istanbul Technical University introduces a new hybrid nanostructure formed through chemical bonding.

The porous graphene CNT structures were made by organizing graphene nanoribbons around CNTs. The different geometrical arrangement of graphene nanoribbon layers around CNTs (square, hexagon, and diamond patterns) led to different physical properties being observed in the material, suggesting that this geometric rearrangement could be used to fine-tune the new structure.

The study was published in the journal Physica E: Low-dimensional Systems and Nanostructures in 2022.

Researchers found that the structures with fullerenes inserted, for example, exhibited significant compressive stability and strength without sacrificing tensile strength. The geometric arrangement of carbon nanostructures also had a significant effect on their thermal properties.

Researchers said that these new hybrid nanostructures present important advantages, especially for the aerospace industry. Nanoarchitectures with these hybrid structures may also be utilized in hydrogen storage and nanoelectronics.

Belkin, A., A. Hubler, and A. Bezryadin (2015). Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production. Scientific Reports. doi.org/10.1038/srep08323

Degirmenci, U., and M. Kirca (2022). Carbon-based nano lattice hybrid structures: Mechanical and thermal properties. Physica E: Low-dimensional Systems and Nanostructures. doi.org/10.1016/j.physe.2022.115392

Geim, A.K. (2009). Graphene: Status and Prospects. Science. doi.org/10.1126/science.1158877

Geim, A.K., and K.S. Novoselov (2007). The rise of graphene. Nature Materials. doi.org/10.1038/nmat1849

Monthioux, M., and V.L. Kuznetsov (2006). Who should be given the credit for the discovery of carbon nanotubes? Carbon. doi.org/10.1016/j.carbon.2006.03.019


Read more:

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures - AZoM

Two towers of strength: looking up to the best in physics – ANU College of Science

When you see the Heavy Ion Accelerator Facility (which, if you live in Canberra, you almost certainly have), you probably don't feel any particularly powerful emotions.

It's the 40-metre-tall rectangular tower conspicuously located among the low-rise campus buildings on the edge of Lake Burley Griffin. In terms of architecture, "functional" is possibly the politest way to describe its appearance.

When Professor Mahananda Dasgupta looks up at the same tower, known as the HIAF, she feels something quite different.

"I feel very, very proud of it," she says. "It is a tower of strength."

Then she repeats, with emphasis: "A. Tower. Of. Strength."

Professor Dasgupta is an experimental physicist at the ANU Research School of Physics, and the Director of the HIAF, which, she notes admiringly, hasn't even needed repainting in the 50 years since it was built.

An ion accelerator enables scientists to look inside an atom, beyond even the capabilities of an electron microscope. The HIAF is the largest and most powerful ion accelerator in Australia, and one of the three highest-voltage ion accelerators in the world.

Among its many achievements, the HIAF has helped researchers to unveil ancient climate records, discover evidence of nearby supernovae, trace and track soil erosion in Australia, and find the most efficient nuclear reaction to lead to the discovery of new elements. It's also used for hands-on teaching and training in nuclear physics and its applications.

"It is here for the nation," Professor Dasgupta says of the facility, which is supported by the Federal Government's National Collaborative Research Infrastructure Strategy. "Quite simply, the accelerator's capabilities are necessary for our national advancement."

Professor Dasgupta would like the world-class reputation and capabilities of the HIAF to be better known within Australia. When she speaks with politicians about her research, she says, they assume she travels to CERN.

"Researchers from many countries including Germany, the US, France and Japan come to Australia to collaborate with us because we are known for our brand, which is: we advance frontiers.

"And our students are very, very sought after by international labs, because of our wide-ranging hands-on training. You can stand them in front of almost anything in a laboratory, and they will solve the problem."

It was the international reputation of experimental physics at ANU that brought Professor Dasgupta herself to Australia.

"It's a funny story actually," she says. "I was finishing my thesis at Bombay, where I was the first PhD out of their accelerator lab, and I was giving my thesis seminar.

"And I said, 'Look, there is an idea, where if you could make exquisitely precise measurements then that will answer a highly sought-after question in quantum mechanics, except the data quality is so poor that this will not be achieved very soon.' That is still what's written at the end of my thesis.

"And then someone at the end of my talk says, 'Oh, by the way, have you seen the publication in Physical Review Letters yesterday? A group from the Australian National University has made the measurement that you are saying is not possible.'

"And I said, 'Well, there you go! I stand corrected. This is how science progresses.'"

In the next issue of Nature magazine, there was an advertisement for a position with that ground-breaking team at ANU and Professor Dasgupta applied. She has remained here ever since.

The tangles of wires, pipes and tubes of the HIAF have provided the backdrop to her research career, but to Professor Dasgupta, it's more than just a workplace.

"If you asked me to leave my house and buy another house, sure, I'd do it. But if you asked me to change my lab, I would say no. I would not swap my career for anything else."

And now, the capabilities of the HIAF and its scientists are needed more than ever.

"If we are going into nuclear science and technology in a big way, our international credibility in experimental nuclear science depends on the HIAF. We have important national projects coming up in defence, in the space industry, and in cancer treatment using accelerated protons. All these projects are underpinned by nuclear science and supported by accelerators.

"It is important that people know the breadth and depth of what we can achieve."

There is more than one tower of strength in this story, and it's important people know that too.

When Professor Dasgupta arrived in Canberra in 1992, she was the only woman at HIAF.

She faced structural disadvantages, but went on to become the first woman to secure tenure in physics at the University, which, shockingly, was not until 2003. Now, she is a Fellow of both the Australian Academy of Science and the American Physical Society, and a recipient of the prestigious Pawsey Medal.

"It is very slow, but things have changed," she says of the sexism and racism she has faced along the way. "It is changeable."

Professor Dasgupta sometimes gets frustrated that being a woman in physics is still noteworthy, a topic on which she speaks regularly.

"I am a physicist first," she says. "But I have a platform as a female physicist to articulate the problems I see, so I need to use it."

This is an important project for the nation, too.

"The kind of STEM workforce numbers that we are talking about, which will be required for our future capabilities in health, in space, in national security, it is such large numbers," she says.

"As a nation, we need to make sure women can be part of that opportunity."

This is, after all, how science progresses.

Study a Master of Science in Nuclear Science at the ANU: home to Australia's largest university-based research and teaching activity in physics.

Read the original post:

Two towers of strength: looking up to the best in physics - ANU College of Science

The Big Bang Theory Fans Actually Walked Away With This Cool Piece Of Knowledge – Looper

Named for Erwin Schrödinger, the quantum physicist who devised it, Schrödinger's Cat is a thought experiment in which (theoretically) a cat is placed inside a box along with poison and some radioactive material. If any of the radioactive material decays, this triggers the poison, killing the cat. But if the material doesn't do anything, the cat, as unseen by the observer, is in a state of both life and death (via Discover Magazine).

When u/Aggressive-Nobody473 asked other "Big Bang Theory" fans about what they learned from the show, several commenters mentioned Schrödinger's Cat as a favorite. The thought experiment is first explained in Season 1, Episode 17 ("The Tangerine Factor") as Sheldon uses it to explain the possibilities for Leonard and Penny's blossoming relationship.

In one comment, u/FruitySwiftA113 wrote, "I use it in every day [sic] conversation and comparisons now." Other Reddit users appreciated the real-life science reference, but ultimately remained reluctant to use it in everyday life. For example, u/Rosemoorstreet shared that they loved it, but wrote, "the science is so far over my head that there is no hope for me to learn any of that!" Still, the concept is so compelling that it is hard not to think it over, even if one part sounds particularly creepy.

Read this article:

The Big Bang Theory Fans Actually Walked Away With This Cool Piece Of Knowledge - Looper

Research Assistant (Ng Hui Khoon's Group), Centre for Quantum Technologies job with NATIONAL UNIVERSITY OF SINGAPORE | 299892 – Times Higher…

About the Centre for Quantum Technologies

The Centre for Quantum Technologies (CQT) is a research centre of excellence in Singapore. It brings together physicists, computer scientists and engineers to do basic research on quantum physics and to build devices based on quantum phenomena. Experts in this new discipline of quantum technologies are applying their discoveries in computing, communications, and sensing.

CQT is hosted by the National University of Singapore and also has staff at Nanyang Technological University. With some 180 researchers and students, it offers a friendly and international work environment.

Learn more about CQT at www.quantumlah.org

Job Description

Associate Professor Ng Hui Khoon is seeking to hire a motivated and independent candidate as a Research Assistant (RA) for a duration of one year, starting in July 2022. The Research Assistant will work in the group of Dr. Ng, doing theoretical research on surface codes, the current go-to scheme for quantum error correction in quantum computing implementations.

Specifically, the project will involve the investigation of decoding strategies that accommodate noise information for enhanced surface-code performance. The RA will assist in building the numerical code used to test decoding algorithms and in generating ideas for the strategic use of noise information.
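To give a flavour of what "decoding strategies that accommodate noise information" can mean in the simplest possible setting, the sketch below decodes a three-qubit bit-flip repetition code by maximum likelihood, weighting each qubit by its known error rate. This is a toy illustration only; it is not the surface-code machinery the group works on, and all names in it are invented for the example:

```python
import math

def decode(received, flip_probs):
    """Maximum-likelihood decoder for a bit-flip repetition code.
    `received` holds the measured bits; `flip_probs[i]` is the known error
    probability of qubit i (the 'noise information').  Returns the more
    likely logical bit, 0 or 1."""
    def log_likelihood(logical):
        total = 0.0
        for bit, p in zip(received, flip_probs):
            flipped = bit != logical
            total += math.log(p if flipped else 1.0 - p)
        return total
    return max((0, 1), key=log_likelihood)

# A bare majority vote would read this as logical 1, but the two '1' qubits
# are very noisy while the lone '0' qubit is very reliable, so the
# noise-aware decoder returns 0 instead.
print(decode([1, 1, 0], flip_probs=[0.4, 0.4, 0.01]))  # -> 0
```

The same weighting idea, applied to the error graph of a surface code rather than a simple repetition code, is roughly what is meant by letting a decoder exploit knowledge of the noise.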

Job Requirements

Covid-19 Message

At NUS, the health and safety of our staff and students are one of our utmost priorities, and COVID-vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interactions with students/staff/public members. Even for job roles that may be performed remotely, there will be instances where on-campus presence is required.

Taking into consideration the health and well-being of our staff and students and to better protect everyone in the campus, applicants are strongly encouraged to have themselves fully COVID-19 vaccinated to secure successful employment with NUS.

Read more:

Research Assistant ( Ng Hui Khoon's Group), Centre for Quantum Technologies job with NATIONAL UNIVERSITY OF SINGAPORE | 299892 - Times Higher...

Quantum Information Science MIT Physics

There is a worldwide research effort exploring the potential of quantum mechanics for applications. The field began with Feynman's proposal in 1981 at MIT Endicott House to build a computer that takes advantage of quantum mechanics, and it has grown enormously since Peter Shor's 1994 quantum factoring algorithm. The idea of utilizing quantum mechanics to process information has since grown from computation and communication to encompass diverse topics such as sensing and simulations in biology and chemistry. Leaving aside the extensive experimental efforts to build controllable large-scale quantum devices, theory research in quantum information science (QIS) investigates several themes:

QIS theory research at MIT spans all of these areas. The CTP faculty involved are: Soonwon Choi and Aram Harrow, and the larger group at MIT includes Isaac Chuang (EECS/physics), Seth Lloyd (Mech. Eng.), Anand Natarajan (EECS) and Peter Shor (Math). Other faculty in the area include Eddie Farhi (emeritus), Jeffrey Goldstone (emeritus) and Jeff Shapiro (EECS, emeritus). Together this forms a large and vibrant group working in all areas of QIS.

Some of the notable contributions involving the CTP include the quantum adiabatic algorithm and quantum walk algorithms (Farhi, Goldstone), the first example of a problem for which quantum computers exhibit no speedup (Farhi, Goldstone), proposals for unforgeable quantum money (Farhi, Shor), a quantum algorithm for linear systems of equations (Harrow, Lloyd), efficient protocols for simulating quantum channels (Harrow, Shor), both algorithms and hardness results for testing entanglement (Harrow), proposals for quantum approximate optimization algorithms (Farhi, Goldstone), proposals and experimental observations of exotic quantum dynamics such as slow thermalization or a discrete time crystalline phase in quantum simulators (Choi), quantum sensing protocols using strongly interacting spin ensembles (Choi), and quantum convolutional neural networks (Choi). Ongoing research at MIT in QIS includes work on new quantum algorithms, efficient simulations of quantum systems, methods to characterize and control existing or near-term quantum hardware, connections to many-body physics, applications in high-energy physics, and many other topics.

The larger QIS group at MIT shares a seminar series, a weekly group meeting, and regular events for grad students.

Interdepartmental course offerings include an introductory and an advanced class in core QI/QC, as well as occasional advanced special topics classes. Quantum information has also entered the undergraduate physics curriculum with a junior lab experiment on NMR quantum computing and some lectures in the 8.04/8.05/8.06 sequence on quantum computing.

Read more from the original source:

Quantum Information Science MIT Physics