Category Archives: Quantum Physics
The quiet life and the sad death of the ‘Miracle on Ice’ team’s Mark Pavelich – The Athletic
Editor's note: This story addresses suicide and other mental health issues and may be difficult to read and emotionally upsetting.
On the last night of his life, Mark Pavelich played his guitar in his room at the veterans facility where he'd lived for the past six months. The notes were comforting to the housemate who paused just beyond the door. It was the kind of melody that drifted from Pavelich's guitar each evening, nestling into a place that had become home to the wounded souls who found their way there.
Pavelich's music was everywhere at the Eagles Healing Nest: in the chapel after service, around the bonfire, or alone in his room at night. In his time at the center, Pav had become part of a family of veterans despite being the home's only civilian guest. Taz, his black border collie, was always by his side, whether he was on his way to a pre-dawn workout or heading out onto Sauk Lake to fish.
For the first time since he was arrested and accused of assaulting his neighbor with a metal pipe in August 2019, Pavelich seemed to have found peace and comfort in Sauk Centre, Minn. He was declared mentally incompetent to stand trial for the assault, which left the victim with a bruised kidney, two cracked ribs and a fractured vertebra. Pavelich was deemed to have a serious and persistent mental illness and labeled dangerous.
Once celebrated as part of the U.S. Olympic men's hockey team's "Miracle on Ice" in 1980, Pavelich was uneasy in the spotlight. Now he made headlines as a villain, gripped by the despair and confusion of several mental health disorders. He was another former hockey player with unhealed wounds, unseen until they spilled out in destructive fits.
That wasn't this Pav, though.
Symmetries Reveal Clues About the Holographic Universe – WIRED
We've known about gravity since Newton's apocryphal encounter with the apple, but we're still struggling to make sense of it. While the other three forces of nature are all due to the activity of quantum fields, our best theory of gravity describes it as bent spacetime. For decades, physicists have tried to use quantum field theories to describe gravity, but those efforts are incomplete at best.
One of the most promising of those efforts treats gravity as something like a hologram: a three-dimensional effect that pops out of a flat, two-dimensional surface. Currently, the only concrete example of such a theory is the AdS/CFT correspondence, in which a particular type of quantum field theory, called a conformal field theory (CFT), gives rise to gravity in so-called anti-de Sitter (AdS) space. In the bizarre curves of AdS space, a finite boundary can encapsulate an infinite world. Juan Maldacena, the theory's discoverer, has called it a "universe in a bottle."
But our universe isn't a bottle. Our universe is (largely) flat. Any bottle that would contain our flat universe would have to be infinitely far away in space and time. Physicists call this cosmic capsule the celestial sphere.
Physicists want to determine the rules for a CFT that can give rise to gravity in a world without the curves of AdS space. They're looking for a CFT for flat space: a celestial CFT.
The celestial CFT would be even more ambitious than the corresponding theory in AdS/CFT. Since it lives on a sphere of infinite radius, concepts of space and time break down. As a consequence, the CFT wouldn't depend on space and time; instead, it could explain how space and time come to be.
Recent research results have given physicists hope that they're on the right track. These results use fundamental symmetries to constrain what this CFT might look like. Researchers have discovered a surprising set of mathematical relationships between these symmetries, relationships that have appeared before in certain string theories, leading some to wonder if the connection is more than coincidence.
"There's a very large, amazing animal out here," said Nima Arkani-Hamed, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey. "The thing we're going to find is going to be pretty mind-blowing, hopefully."
Symmetries on the Sphere
Perhaps the primary way that physicists probe the fundamental forces of nature is by blasting particles together to see what happens. The technical term for this is scattering. At facilities such as the Large Hadron Collider, particles fly in from distant points, interact, then fly out to the detectors in whatever transformed state has been dictated by quantum forces.
If the interaction is governed by any of the three forces other than gravity, physicists can in principle calculate the results of these scattering problems using quantum field theory. But what many physicists really want to learn about is gravity.
Luckily, Steven Weinberg showed in the 1960s that certain quantum gravitational scattering problems, ones that involve low-energy gravitons, can be calculated. "In this low-energy limit, we've nailed the behavior," said Monica Pate of Harvard University. Quantum gravity reproduces the predictions of general relativity. Celestial holographers like Pate and Sabrina Pasterski of Princeton University are using these low-energy scattering problems as the starting point to determine some of the rules the hypothetical celestial CFT must obey.
They do this by looking for symmetries. In a scattering problem, physicists calculate the products of scattering, the scattering amplitudes, and what they should look like when they hit the detectors. After calculating these amplitudes, researchers look for patterns the particles make on the detector, which correspond to rules or symmetries the scattering process must obey. The symmetries demand that if you apply certain transformations to the detector, the outcome of a scattering event should remain unchanged.
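One piece of standard background, not spelled out in the article, helps explain why a two-dimensional CFT is the natural candidate: the Lorentz transformations of four-dimensional spacetime act on the celestial sphere exactly as the global conformal (Möbius) transformations of a two-dimensional surface. A brief sketch in stereographic coordinates:

```latex
% Each direction on the celestial sphere is labeled by a stereographic coordinate z.
% A Lorentz transformation of 4D spacetime acts on z as a Mobius map,
\[
  z \;\longrightarrow\; \frac{a z + b}{c z + d},
  \qquad a, b, c, d \in \mathbb{C}, \quad ad - bc = 1,
\]
% reflecting the isomorphism between the (proper, orthochronous) Lorentz group and
% SL(2,C)/Z_2. These Mobius maps are exactly the global conformal transformations of
% a two-dimensional surface, which is why a conformal field theory is the natural
% object to look for on the celestial sphere.
```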
String theory fuzzballs resolve famous black hole paradox – Advanced Science News
Scientists have turned to string theory to better understand black holes, proposing they can be modeled as "fuzzballs" made up of interacting strings.
Black holes are among the most mysterious objects in the universe. For more than a century, physicists have used Einstein's theory of general relativity to describe them, treating gravity as a deformation of spacetime created by the energy and momentum of particles and fields.
In this theory, a black hole is considered an infinitely dense point called a singularity, which is surrounded by a spherical surface known as an event horizon (or just a horizon for short), with empty space existing between them. The gravity in the region beneath the horizon is so strong that no particles or waves can escape it; they are doomed to fall into the singularity.
In this theory, black holes are characterized by only three parameters: mass, electric charge, and angular momentum, which encodes their rotational properties. However, this contradicts a quantum mechanical principle called unitarity of time evolution, which states that information must not be lost during the time development of a physical system.
Black holes are formed from huge amounts of matter consisting of an enormous number of particles that each have their own set of physical parameters. If the classical description of black holes is correct, then the information about the matter used to create them has definitely been lost, given the simplicity of that description implied by the no-hair theorem. This is known as the black hole information loss paradox.
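For reference, the unitarity principle being invoked can be stated in one line (a textbook statement, not taken from the paper discussed below): quantum states evolve by a unitary operator, and the von Neumann entropy is unchanged by that evolution, so a pure state can never become a mixed one.

```latex
% Unitary time evolution and the invariance of the von Neumann entropy:
\[
  \rho(t) = U(t)\,\rho(0)\,U^{\dagger}(t), \qquad U^{\dagger}U = \mathbb{1},
  \qquad
  S = -\,\mathrm{Tr}\!\left[\rho \ln \rho\right] = \text{constant in time}.
\]
% A black hole described by mass, charge, and angular momentum alone seems to turn
% detailed initial data into featureless output, in tension with this principle.
```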
A group of American physicists led by Samir Mathur from Ohio State University has sought to resolve the paradox in a new paper published in the Turkish Journal of Physics. They propose replacing the conventional general relativistic picture of a black hole, empty space with all of its mass located at the center, with a ball-shaped mess of interacting strings called a fuzzball.
These hypothetical objects have neither a horizon nor a singularity, and their sizes are similar to those of black holes of the same mass. The concept of a black hole fuzzball is based on string theory, a modern theory whose central postulate is that elementary particles, often considered to be point-like, are actually tiny vibrating strings whose different oscillation modes correspond to different types of particles. These string theory fuzzballs are characterized not by three numbers but by a huge number of parameters describing all the strings they are made of, resolving the information loss paradox.
Black hole fuzzballs also help rectify another paradox in black hole physics. In the 1970s, Stephen Hawking analyzed the electromagnetic field in the vicinity of a horizon and predicted that black holes radiate photons in a similar way as heated bodies, such as stars or pieces of burning coal.
The mechanism of this hypothetical radiation emitted by a black hole results from the creation of photons in the vacuum outside its horizon due to quantum effects. Some of these particles cross the horizon and fall into the singularity, whereas others manage to escape the black hole's gravitational field and travel away. In principle, they can be observed in the same way we see the light emitted by the Sun and other hot bodies. This radiation is known as Hawking radiation and has yet to be detected, as its energy is so low that it falls below the sensitivity of current instruments.
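To see just how faint this radiation is, one can quote the standard Hawking temperature (a textbook formula, not given in the article):

```latex
% Hawking temperature of a black hole of mass M:
\[
  T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}}
        \;\approx\; 6\times 10^{-8}\,\mathrm{K}\;\frac{M_{\odot}}{M},
\]
% so a solar-mass black hole glows at roughly 60 nanokelvin, far colder than the
% 2.7 K cosmic microwave background that swamps it.
```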
The difference between Hawking radiation from black holes and electromagnetic wave emissions from heated bodies like stars, for example, is that in the latter, the photons are generated by interacting elementary particles, and not in the vacuum.
Because of this peculiarity in how black hole radiation is generated, the photons emitted during a black hole's lifespan would have an entropy that is too large for the process to be consistent with the general principles of quantum mechanics, which demand that this entropy be smaller than the entropy of the black hole.
In order to solve this paradox, physicists have considered something called the wormhole paradigm, which requires that both the photons that escape the black hole's gravitational field and the particles that fall into it be considered when accounting for entropy. If one defines the Hawking radiation as the union of these two sets of particles, then the quantum mechanical correlations between them reduce the entropy of the black hole's radiation, resolving the paradox.
But the Ohio State researchers' analysis suggests that all realizations of this paradigm lead either to non-physical, larger-than-one probabilities of certain phenomena (the aforementioned violation of unitarity) or to a violation of Hawking's original proposal that black holes radiate like heated bodies. Instead, Mathur and his colleagues found that these issues don't arise if black holes are considered not as objects with a singularity and a horizon, but as string theory fuzzballs with radiation produced by the interacting strings.
While the theory might work on paper, detecting this low-energy radiation is another challenge. It has been predicted that the interaction between a black hole's gravitational waves and the fuzzball's surface would leave an imprint in their spectrum. Many scientists hope to be able to register such a subtle change with next-generation Earth-based and space-based gravitational-wave observatories, allowing them to determine whether fuzzballs are real or not.
Reference: Bin Guo et al., "Contrasting the fuzzball and wormhole paradigms for black holes," Turkish Journal of Physics (2021). arXiv:2111.05295
Quantum eMotion Appoints High-Tech Business Expert to Its Board of Directors – StreetInsider.com
Montreal, Quebec--(Newsfile Corp. - February 14, 2022) - Quantum eMotion Corp. (TSXV: QNC) (OTCQB: QNCCF) (FSE: 34Q) ("QeM" or the "Company") announces the appointment of Tullio Panarello to its Board of Directors. The appointment of Mr. Panarello will continue to strengthen the Board, which will comprise 5 directors, 4 of whom are now independent. Tullio replaces Marc Rousseau, who will remain CFO and secretary of the corporation, while Larry Moore will continue to serve as Chairman of the Board. Tullio Panarello will receive a grant of 500,000 options for his service as a Board director.
Tullio brings a wealth of expertise to this role, having served in several senior leadership capacities over the past 25 years in many high-tech companies across the telecom, military, semiconductor, space, and sensor industries. His technical and market knowledge extends to the fields of lasers, optics, semiconductors, and quantum-based technologies.
"We are pleased to welcome Tullio Panarello to the Quantum eMotion Board," commented Francis Bellido, CEO of QeM. "Tullio's deep experience in high-technology global businesses and his strong technical expertise (20 Granted Patents) will be invaluable to QeM as we grow our business and pursue our mission to become a significant player in Cybersecurity."
Tullio is currently Vice President and General Manager at Smiths Interconnect in Montreal, Quebec, which acquired Reflex Photonics, the company where he occupied the position of Executive President. Earlier in his career he worked as Business Development Manager for PerkinElmer Canada before co-founding PyroPhotonics Lasers Inc., a company specializing in pulsed laser technology for material processing applications, which he led as CEO until it was sold to Electro Scientific Industries (ESI). At ESI, he occupied the position of Divisional General Manager for the Laser Business Division.
Tullio has been a member of Genia Photonics' Board of Directors and is currently Chairman of the Board of Aeponyx Inc.
He holds a B.Sc. in Physics from Concordia University, Montreal, an M.Eng. in Engineering Physics from McMaster University, Hamilton, and an MBA from Queen's University, Kingston.
About QeM
The Company's mission is to address the growing demand for affordable hardware security for connected devices. The patented solution for a Quantum Random Number Generator exploits the built-in unpredictability of quantum mechanics and promises to provide enhanced security for protecting high value assets and critical systems.
The Company intends to target the highly valued Financial Services, Blockchain Applications, Cloud-Based IT Security Infrastructure, Classified Government Networks and Communication Systems, Secure Device Keying (IOT, Automotive, Consumer Electronics) and Quantum Cryptography.
For further information, please contact:
Francis Bellido, Chief Executive Officer
Tel: 514.956.2525
Email: info@quantumemotion.com
Website: http://www.quantumemotion.com
Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
This press release may contain forward-looking statements that are subject to known and unknown risks and uncertainties that could cause actual results to vary materially from targeted results. Such risks and uncertainties include those described in the Corporation's periodic reports including the annual report or in the filings made by Quantum from time to time with securities regulatory authorities.
To view the source version of this press release, please visit https://www.newsfilecorp.com/release/113651
Leibniz and the Miracle Creed Behind Modern Physics | Jeffrey K. McDonough – IAI
Leibniz believed that our world - "the best of all possible worlds" - must be governed by what is known as the Principle of Optimality, a claim that is part philosophical, part scientific. This seemingly outlandish idea proved surprisingly powerful and led to one of the most profound ideas in theoretical physics. Jeffrey K. McDonough tells the story.
The great German polymath Gottfried Wilhelm Leibniz famously insisted that ours is the best of all possible worlds. The claim that our world couldn't possibly be better has never been very plausible. It was hard to believe when Leibniz made it in the seventeenth century on the heels of the horrific Thirty Years War. It didn't seem any more likely when Voltaire heaped ridicule upon it following the Lisbon Earthquake of 1755. And, of course, it will probably not find many adherents today as we trudge along under the weight of a global pandemic, political uncertainty, and an environment on the verge of collapse. Leibniz's thought that ours is the best of all possible worlds is, in short, incredible. Incredible or not, however, Leibniz's implausible idea lies at the heart of one of the most profound, most successful, most tantalizing developments in theoretical physics. Call it the story of Leibniz's Principle of Optimality.
The roots of Leibniz's principle reach back to at least Heron of Alexandria's discovery of the optical law of reflection and to ancient thinking about optimizing territories and storage containers. The story of Leibniz's principle begins in earnest, however, with a controversy that erupted between two of the finest mathematicians of the early modern era, René Descartes and Pierre de Fermat. Descartes was the first to publish the optical law of refraction in essentially the form we accept today. Many at the time, however, doubted his mechanistic demonstration of the law, which involved drawing clever analogies to the behavior of tennis balls and rackets. Seeking a more rigorous derivation, Fermat showed how both the optical law of reflection and the optical law of refraction could be derived from a quickest path principle: in a standard set of cases, a ray of light will take the quickest path from, say, a lamp to an eye regardless of how it is reflected or refracted.
As was typical of the era, Fermat and the followers of Descartes managed to snatch bitter controversy from clear progress. Fermat claimed that Descartes had never proved the law of refraction and insinuated that he had stolen his results from the Dutch astronomer Willebrord Snell. Cartesians insisted that Fermat's derivation was technically flawed and was at any rate a regression from mechanistic ideals. Leibniz stepped into this controversy with a remarkable paper published in 1682. The paper aimed to show that Descartes's mechanistic approach to the laws of optics and Fermat's optimization approach could be reconciled. Leibniz sided with Descartes on some technical points and agreed that a mechanistic explanation of the laws of optics could be given. Nonetheless, he also embraced the spirit of Fermat's proposal, showing how the laws of optics could be derived from an optimal, "easiest path" principle and applied to an even greater variety of cases than Fermat had considered. The paper was a multifaceted breakthrough that showed how optimization methods could be reconciled with mechanistic explanations, how such methods could be married to Leibniz's powerful new infinitesimal calculus, and, perhaps most profoundly, that optimization principles needn't be restricted to kinematic notions such as distance and time but could be extended to dynamic notions such as ease, work, and energy.
___
In the notion of an optimal form, Leibniz found a rigorous model for his thesis that this is the best of all possible worlds.
___
In a series of papers written over the next decade and a half, Leibniz and his cohort extended his optimization approach to other cases of natural phenomena by showing how they too could be viewed as instances of optimal form. One such case concerns the shape of a freely hanging chain suspended at two ends:
Such a chain can be thought of as an optimal form that, in contemporary terms, minimizes potential energy, that is, the energy a system has in virtue of its position. As an optimal form, it has two remarkable properties. First, while the chain as a whole minimizes overall potential energy, it does not minimize the potential energy of every part. We can, for example, lower the potential energy of the middle link by pulling down on it. Doing so, however, must come at the expense of raising the other links in such a way that the overall potential energy of the chain is increased.
Second, since the hanging chain is an optimal form, it must be the case that every subsection of the chain is also an optimal form. In fact, we can see this by reasoning alone. Suppose that figure ACDB represents a chain that minimizes potential energy, and that CD is a segment of ACDB.
If the segment CD did not minimize its potential energy (if it were not itself an optimal form), it could be replaced by a different segment with less potential energy, so that the chain as a whole would have less than its minimal potential energy: an absurdity! On pain of contradiction, any subsection of an optimal form must itself be an optimal form.
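In modern notation, the hanging chain is the classic catenary problem of the calculus of variations. The following sketch is standard textbook material rather than Leibniz's own derivation:

```latex
% A chain of linear density rho hangs in gravity g with shape y(x). Minimize
\[
  U[y] = \rho g \int y\,\sqrt{1 + y'^{2}}\;dx
  \quad\text{subject to fixed length}\quad
  L = \int \sqrt{1 + y'^{2}}\;dx .
\]
% The Euler-Lagrange equation of the constrained functional yields the catenary
\[
  y(x) = a \cosh\!\left(\frac{x - x_{0}}{a}\right) + C ,
\]
% and the same equation governs every sub-segment, echoing the argument above that
% each part of an optimal form must itself be optimal.
```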
In the notion of an optimal form, Leibniz found a rigorous model for his thesis that this is the best of all possible worlds. The world as a whole is analogous to the chain as a whole. Just as the chain as a whole minimizes overall potential energy, the world as a whole maximizes overall goodness (or minimizes overall badness). That doesn't mean that individual aspects of the world couldn't be better. Judas would have been better if he hadn't betrayed Christ. But any such local improvement, according to Leibniz, would have to be more than counterbalanced by negative consequences. If Judas hadn't sinned, Judas would have been better, but the world as a whole would have been worse, just as pulling down on the middle link of the chain would decrease the potential energy of that middle link but only at the cost of increasing the potential energy of the other links.
___
The development of physics since Leibniz's time has largely vindicated his audacious conjecture.
___
It's surprising that Leibniz was able to draw deep connections between his seemingly fantastic view that this is the best of all possible worlds and his concrete scientific discoveries. The real twist in the story of Leibniz's Principle of Optimality, however, played out only after his death. On the basis of some philosophical assumptions and a handful of technical cases, Leibniz had audaciously conjectured that it should be possible to explain all natural phenomena in terms of optimization principles. Even more surprising than the connections between Leibniz's philosophical theology and his scientific studies is the fact that the development of physics since Leibniz's time has largely vindicated his audacious conjecture concerning the scope of optimality principles.
Eighteen years old when Leibniz died, Pierre Louis Maupertuis cut his teeth as a scientist by applying Leibniz's calculus to Newtonian mechanics. He rose to international prominence, however, with a swashbuckling account of his scientific expedition to Lapland, an account that mixed exact science with tales of bitter cold, reindeer, and local women. He was poised to assume the presidency of the Royal Prussian Academy of Science when he published a paper that echoed the results and methods of Leibniz's early 1682 optics paper. Two years later, he published a second paper in which he formally announced his Principle of Least Action, according to which, in Nature, the action necessary for change is the smallest possible. As the leader of one of the great scientific societies of the era, Maupertuis had just thrown the prestige of his presidency and the weight of the Royal Prussian Academy of Science behind a version of Leibniz's bold hypothesis that all natural phenomena could be explained in terms of optimization principles.
Once again, however, the march of science got bogged down in the mire of petty controversy. Samuel König, a mathematician and student of Leibniz's philosophy, publicly accused Maupertuis of plagiarizing Leibniz. Leonhard Euler rose to his president's defense. Voltaire, poison pen in hand, countered on behalf of König. The debate that ensued did little to resolve the issue and no doubt contributed to the deterioration of Maupertuis's health. It did have one good result, however. The controversy catalyzed Euler, the greatest mathematician and physicist of his time, to formulate a rigorous version of the Principle of Least Action and to apply it to cases that were beyond Maupertuis's impressive but merely mortal abilities. Remarkably, Euler came to hold essentially the same opinion as Leibniz. He concluded that all natural effects follow some law of maximum or minimum, that is, some principle of optimization, so that nothing whatsoever takes place in the universe in which some relation of maximum and minimum does not appear.
In the years that followed Euler's pioneering efforts, other luminaries of the age of rational mechanics continued to develop optimization principles and confirm their essentially universal applicability. The French-Italian mathematician and astronomer Joseph-Louis Lagrange generalized Euler's results, showing how optimization principles could be derived from principles of virtual work as well as from Newton's laws. With Lagrange we finally get the general principle that for each particle in a conservative system the particle's action taken from its initial position to its final position is optimal. A few decades later, the great Irish mathematician William Rowan Hamilton, who had already made important contributions to the study of optics, further generalized Lagrange's pioneering work. Hamilton's generalized version of the Principle of Least Action is applicable not only to the cases considered by Lagrange but to non-conservative systems as well.
___
It was applied to specific problems in the 17th century, generalized in the 18th and 19th centuries, confirmed in the 20th century, and remains at the foundations of our best physical theories today.
___
Today, least action principles are expressed in what is known as the Lagrangian formulation, and it is accepted that for any physical system one can uniquely specify a function, called the Lagrangian, that is determined by the nature of the system as a whole. Given a Lagrangian, one can (in principle) determine the actual sequence of a system's states by considering all its possible states and identifying the sequence of states that optimizes its action. The Lagrangian formulation applies, in one form or another, to all current physical theories, including general and special relativity, quantum mechanics, and even string theory. Reflecting in the mid-twentieth century on the more or less general laws which mark the achievements of physical science during the course of the last centuries, Max Planck, a founder of quantum mechanics, concluded that the principle of least action "is perhaps that which, as regards form and content, may claim to come nearest to that ideal final aim of theoretical research."
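In symbols, the Lagrangian formulation described above amounts to the following standard statement (included for reference; it is not a quotation from the essay):

```latex
% Action of a trajectory q(t) between fixed endpoints:
\[
  S[q] = \int_{t_{1}}^{t_{2}} L\big(q, \dot{q}, t\big)\, dt .
\]
% Requiring the physical trajectory to make S stationary, \delta S = 0, gives the
% Euler-Lagrange equations
\[
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 ,
\]
% and for L = (1/2) m \dot{q}^{2} - V(q) this reproduces Newton's law
% m \ddot{q} = -V'(q): the optimal-path picture and the mechanistic picture agree,
% just as Leibniz hoped.
```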
In an article written five years before his death, Albert Einstein proposed that every theoretical physicist is a kind of "tamed metaphysicist." The philosopher and physicist alike must believe that the totality of all sensory experience can be comprehended on the basis of a conceptual system built on premises of great simplicity. They must have faith that the world is governed by a hidden, simple order. The skeptic, Einstein suggested, will say that this is a "miracle creed." And, Einstein acknowledged, she'll be right. Nonetheless, while the miracle creeds of philosophers and physicists must, by definition, outstrip all empirical evidence, while they must be audacious, many have, as Einstein put it, been "borne out to an amazing extent by the development of science."
Leibniz's principle of optimality is perhaps the most miraculous of all miracle creeds. Rooted in an implausible conviction that this is the best of all possible worlds, it was applied to specific problems in the seventeenth century, generalized in the eighteenth and nineteenth centuries, confirmed in the twentieth century, and remains at the foundations of our best physical theories today. Leibniz's principle of optimality is no doubt a miracle creed worthy of the sceptic's incredulous stare. It implies that for all the world's faults, there is a sense in which it and its parts are indeed optimal. And it shows that even an implausible idea, born of faith and hope, might bear long-term, concrete results. In a world currently beaten down by disease, uncertainty, and conflict, Leibniz's Principle of Optimality has somehow triumphed, a small victory for optimism in a pessimistic time.
The Creation of the Arcade Game Centipede – IEEE Spectrum
Following somewhat in Messmer's footsteps, French president Emmanuel Macron announced a plan earlier this month to build at least six new reactors to help the country decarbonize by 2050.
At first glance, there's little life to be found in the nuclear sectors of France's neighbors. Germany's coalition government is today forging ahead with a publicly popular plan to shutter the country's remaining nuclear reactors by the end of 2022. The current Belgian government plans to shut down its remaining reactors by 2025. Switzerland is doing the same, albeit with a hazy timetable. Spain plans to start phasing out in 2027. Italy hasn't hosted nuclear power at all since 1990.
France can claim a qualified victory: Under current EU guidelines, at least some nuclear power will be categorized as green.
Some of these antinuclear forces have recently found a sparring ground with France in drafting the EU's sustainable finance taxonomy, which delineates particular energy sources as green. The taxonomy sets incentives for investment in green technologies, instead of setting hard policy, but it's an important benchmark.
"A lot of investors, they're not experts in this topic, and they're trying to understand: What's really sustainable, and what is greenwashing?" says Darragh Conway, a climate policy expert at Climate Focus in Amsterdam. "And I think a lot of them will look to official standards that have been adopted, such as the EU's taxonomy."
France, naturally, backed nuclear power's greenness. Scientists from the EU Joint Research Centre agreed, reporting that nuclear power doesn't cause undue environmental harm, despite the need to store nuclear waste.
The report was quickly blasted by ministers from five countries, including Germany and Spain, who argued that including nuclear power in the taxonomy would "permanently damage its integrity, credibility and therefore its usefulness."
But the pronuclear side can claim a qualified victory: As of now, at least, some nuclear power is slated to receive the label.
(So, incidentally, will natural gas, which the current German government actually favored.)
This row over green finance obscures an unfortunate reality: It's uncertain how the power once generated by fission will be made up if plants go offline. The obvious answer might be solar and wind. After all, the cost of renewables continues to plummet. But to decarbonize Europe's grid in short order, the renewable requirements are already steep, and removing nuclear energy from the picture makes it even harder to match that curve.
"Even in the most ambitious scenarios by the most ambitious countries, it is an incredible undertaking to try to deploy that much in terms of renewables to meet the climate goals," says Adam Stein, a nuclear policy expert at the Breakthrough Institute. It's possible for some countries to succeed, he says, but that would likely involve them buying an outsized share of the world's supply of renewable energy infrastructure, threatening to prevent other countries from reaching their goals.
This reality has come to the forefront as gas prices spiked over Europe's past winter. France continued to export its nuclear power as supplies of politically sensitive Russian natural gas ran thinner. Unlike the concrete in reactor shielding, public opinion isn't set, and indications are that rising energy costs are softening attitudes to atoms, at least in Germany.
And other countries are charting new nuclear courses. Poland has begun forging ahead with French-backed plans to build a half dozen nuclear reactors by 2043. In October, Romania adopted a plan to double its nuclear capacity by 2031. Closer to the Atlantic, in December, a new Dutch coalition government stated its ambition to build two new nuclear power plants, declaring them a necessity to meet climate targets that aren't falling any further away.
It's entirely possible that the picture might change as solar and wind costs continue to fall and as renewables expand. After all, in sharp contrast to those two, the average price of nuclear electricity had actually nudged upward by 26 percent between 2010 and 2019.
"Whether nuclear is more cost-effective than renewables, it does differ per country," says Conway. "In a lot of countries, nuclear is already more expensive than renewables."
But Stein says that the idea of looking at nuclear as a bottleneck for renewables is flawed, when the real target should be to reduce reliance on fossil fuels. "We need every clean energy source, building as much as they can, as fast as they can. It's not one versus the other," he says.
Einstein Finally Warms Up to Quantum Mechanics? The Solution Is Shockingly Intuitive – SciTechDaily
Einstein was no stranger to mathematical challenges. He struggled to define energy in a way that acknowledged both the law of energy conservation and covariance, general relativity's fundamental requirement that physical laws are the same for all observers.
A research team at Kyoto University's Yukawa Institute for Theoretical Physics has now proposed a novel approach to this longstanding problem by defining energy to incorporate the concept of entropy. Although a great deal of effort has gone into reconciling the elegance of general relativity with quantum mechanics, team member Shuichi Yokoyama says, "The solution is shockingly intuitive."
Einstein's field equations describe how matter and energy shape spacetime and how, in turn, the structure of spacetime moves matter and energy. Solving this set of equations, however, is notoriously difficult, as is pinning down the behavior of a charge associated with the energy-momentum tensor, the troublesome factor that describes mass and energy.
The research team has observed that the conservation of charge resembles entropy, which can be described as a measure of the number of different ways of arranging parts of a system.
And there's the rub: conserved entropy defies this standard definition.
The existence of this conserved quantity contradicts a principle in basic physics known as Noether's theorem, in which conservation of any quantity generally arises because of some kind of symmetry in a system.
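To make the tension concrete, here is the textbook construction that normally links conservation to symmetry in curved spacetime (standard material, not taken from the Kyoto team's paper): a conserved charge is usually built from the energy-momentum tensor and a Killing vector, and a generic curved spacetime has no such vector.

```latex
% Covariant conservation of the energy-momentum tensor:
\[
  \nabla_{\mu} T^{\mu\nu} = 0 .
\]
% If the spacetime has a symmetry generated by a Killing vector xi,
% \nabla_{\mu}\xi_{\nu} + \nabla_{\nu}\xi_{\mu} = 0, then the current
\[
  J^{\mu} = T^{\mu\nu}\,\xi_{\nu}, \qquad \nabla_{\mu} J^{\mu} = 0,
\]
% integrates to a genuinely conserved charge. A generic expanding universe has no
% Killing vector, which is why a conserved quantity defined without symmetry stands out.
```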
Surprised that other researchers have not already applied this new definition of the energy-momentum tensor, another team member, Shinya Aoki, adds that he is also intrigued that in general curved spacetime, a conserved quantity can be defined even without symmetry.
In fact, the team has also applied this novel approach to observe a variety of cosmic phenomena, such as the expansion of the universe and black holes. While the calculations correspond well with the currently accepted behavior of entropy for a Schwarzschild black hole, the equations show that entropy density is concentrated at the singularity in the center of the black hole, a region where spacetime becomes poorly defined.
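The "currently accepted behavior" being matched is the Bekenstein-Hawking entropy, which for a Schwarzschild black hole of mass M reads (a standard result, quoted here for context):

```latex
% Bekenstein-Hawking entropy, proportional to the horizon area A:
\[
  S_{BH} = \frac{k_{B} c^{3} A}{4 G \hbar},
  \qquad
  A = 4\pi r_{s}^{2} = \frac{16\pi G^{2} M^{2}}{c^{4}} ,
\]
% so the entropy grows with the horizon area (and with the square of the mass)
% rather than with the enclosed volume.
```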
The authors hope that their research will spur deeper discussion among many scientists not only in gravity theory but also in basic physics.
Reference: "Charge conservation, entropy current and gravitation" by Sinya Aoki, Tetsuya Onogi and Shuichi Yokoyama, 2 November 2021, International Journal of Modern Physics A. DOI: 10.1142/S0217751X21502018
What is quantum entanglement? All about this ‘spooky’ quirk of physics – Interesting Engineering
If you know anything about quantum mechanics, there's a good chance you've heard of quantum entanglement. This feature of quantum mechanics is one of the most extraordinary discoveries of the 20th century and is one of the most promising avenues of research for advanced technologies in communications, computing, and more.
But what is quantum entanglement and why is it so important? Why did it freak Albert Einstein out? And why does it appear to violate one of the most important laws of physics?
Any time you discuss quantum mechanics, things are going to get complicated, and quantum entanglement is no different.
The first thing to understand is that particles exist in a state of "superposition" until they are observed. In a very common demonstration, the quantum particles used as qubits in a quantum computer are both 0 and 1 at the same time until they are observed, whereupon they appear to randomly become a 0 or a 1.
Now, in simple terms, quantum entanglement is when two particles are produced or interact in such a way that the key properties of those particles cannot be described independently of each other.
For example, if two photons are generated and are entangled, one particle may have a clockwise spin on one axis so that the other will necessarily have a counterclockwise spin on that same axis.
In and of itself, this is not that radical. But because particles in quantum mechanics can also be described as wave functions, the act of measuring the spin of a particle is said to "collapse" its wave function to produce that measurable property (like going from both 0 and 1 to only 0 or only 1).
When you do this to entangled particles, however, we get to the really incredible part of quantum entanglement. When you measure an entangled particle to determine its spin along some axis and collapse its wave function, the other particle also collapses to produce the measurable property of spin, even though you did not observe the other particle.
If a pair of entangled particles are both 0 and 1, and you measure one particle as 0, the other entangled particle automatically collapses to produce a 1, entirely on its own and without any interaction from the observer.
This appears to happen instantaneously, and regardless of their distance from each other, which originally led to the paradoxical conclusion that the information about the measured particle's spin is somehow being transmitted to its entangled partner faster than even the speed of light.
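A toy numerical sketch can make the statistics concrete. The following Python snippet (an illustration of the standard two-qubit formalism, not of any particular experiment) builds the anticorrelated entangled state described above, computes the joint outcome probabilities from the amplitudes, and samples measurements: each single outcome is random, yet the pair is always perfectly anticorrelated, with no signal passing between the two particles.

```python
import numpy as np

# Anticorrelated Bell state |Psi+> = (|01> + |10>)/sqrt(2).
# Basis order for the amplitude vector: |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)

# Born rule: joint outcome probabilities are the squared amplitudes.
probs = np.abs(psi) ** 2          # -> [0, 0.5, 0.5, 0]
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]

rng = np.random.default_rng(seed=1)
samples = rng.choice(4, size=10_000, p=probs)

first = np.array([outcomes[i][0] for i in samples])
second = np.array([outcomes[i][1] for i in samples])

# Each qubit alone looks like a fair coin flip...
print("P(first qubit = 0)  ~", np.mean(first == 0))
print("P(second qubit = 0) ~", np.mean(second == 0))

# ...but the pair is perfectly anticorrelated: measuring 0 on one
# always goes with 1 on the other, exactly as described above.
print("fraction anticorrelated:", np.mean(first != second))  # -> 1.0
```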
Not only is quantum entanglement real, but it's also an important component of emerging technologies like quantum computing and quantum communications.
In quantum computing, how can you operate on qubits in a quantum processor without observing them and therefore collapsing them into plain old digital bits? How do you detect errors without looking at the qubits and destroying the whole mechanism that makes quantum computing so powerful?
The quantum entanglement of several particles in a row is vital to putting enough distance between qubits and the outside world to keep the vital qubits in superposition long enough for them to perform computations.
Quantum communications is another area of research that hopes to take advantage of quantum entanglement to facilitate communication, though it doesn't mean that faster than light communication is on the horizon (in fact, such a technology is likely impossible).
To some degree, yes.
When most people discuss quantum entanglement, they use an example of two entangled particles behaving in a certain way to demonstrate the phenomenon, but this is very much a simplification of an incredibly complex quantum system.
The reality is that a given particle can be entangled with many different particles to varying degrees, not just the "maximally entangled" state where two particles are one to one correlated to one another and only to each other.
This is why measuring one part of an entangled pair doesn't automatically guarantee that you will know the state of the other particle in real-world applications, since that other particle has other entanglements it is maintaining as well. It does give you a better than random chance of knowing the other particle's state though.
Quantum entanglement, or at least the principles that describe the phenomenon, was first proposed by Einstein and his colleagues Boris Podolsky and Nathan Rosen in a 1935 paper in the journal Physical Review titled "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" In it, Einstein, Podolsky, and Rosen discussed how an especially strong correlation of quantum states between particles can lead to them having a single unified quantum state.
They also determined that this unified state can result in the measurement of one strongly correlated particle having a direct effect on the other strongly correlated particle without regard to the distance between the two particles.
The purpose of the Einstein-Podolsky-Rosen paper wasn't to announce the "discovery" of quantum entanglement, per se, but rather to describe this phenomenon that had been observed and discussed and argue that there must be a missing component of quantum mechanics that hasn't been discovered yet.
Since the strong correlation phenomenon they described violated laws laid down in Einstein's relativity and appeared to be paradoxical, the paper argued there must be something else that physicists were missing that would properly place the quantum realm under the umbrella of relativity. That "something else" still hasn't been found almost a century later.
The first use of the word "entanglement" to describe this phenomenon belongs to Erwin Schrödinger, who recognized it as one of quantum mechanics' most fundamental features and argued that it wasn't a mystery that would soon be resolved under relativity, but rather a strong break from classical physics entirely.
Famously, Einstein described quantum entanglement as "spooky action at a distance," but he actually described it as more than just a weird quirk of ghostly particles with instantaneous knowledge of each other.
Einstein actually saw quantum entanglement as a mathematical paradox, an inherent contradiction in mathematical logic that shows that something about the arguments being made must be wrong.
In the case of the Einstein-Podolsky-Rosen paradox, as it came to be called, the arguments are that the fundamental rules of quantum mechanics are completely known and that general relativity is valid. If general relativity is valid, then nothing in the universe can travel faster than the speed of light, which is about 186,000 miles per second.
If quantum mechanics were fully understood, then the rules governing the strong correlation between particles are complete and our observations tell us everything we need to know.
Since quantum particles are "of the universe," they ought to be governed by the speed of light just like everything else, but quantum entanglement not only appears to instantaneously share information between particles that could theoretically be on opposite ends of the universe; even weirder, this information might even travel back and forth through time.
Quantum entanglement through time would have all kinds of implications for the nature of causality, which is about as fundamental a law of physics as it gets. Causality doesn't work the other way around: effects can't precede their cause. But some scientists think that those rules might not apply to the quantum realm any more than the speed of light would.
This last point is still mostly speculative, but it has some experimental basis, and it just further complicates the paradox that Einstein, Podolsky, and Rosen proposed in their 1935 paper.
Quantum entanglement is important for two major reasons.
First, quantum entanglement is such a fundamental mechanism of the quantum world while also being one that we can directly interact with and influence. It may provide a key way to harness some of the most fundamental properties of the universe to advance our technology to new heights.
We know how to entangle particles and do so regularly both in laboratories and in real-world applications like quantum computers. Quantum computers in particular demonstrate the potential of quantum mechanics in modern technology, and quantum entanglement is the best tool we have for actually leveraging quantum mechanics in this way.
The other major reason why quantum entanglement is important is that it is a signpost that points towards something truly fundamental about our universe. It is as clear a demonstration as you can get that the quantum world is almost a purer form of the universe than the one we can see and that obeys laws that we can explain.
If all the universe is a stage and matter is the actors, then quantum entanglement, and quantum mechanics more broadly, may be the line riggings that lift the curtains, the switches that turn the lights on and off, or even the costumes that the actors wear.
If we watch a play, there are two ways to appreciate it. You can see past the theater and the set pieces to appreciate the story that the play conveys, or you can appreciate the quality of the performance, the staging, and the execution.
You might see two very different things by watching the exact same performance. Quantum mechanics appears to give us a different way of seeing the same universe we've always seen, and quantum entanglement may be the key that gets us backstage.
What is the ‘Gold Foil Experiment’? The Geiger-Marsden experiments explained – Livescience.com
The Geiger-Marsden experiment, also called the gold foil experiment or the α-particle scattering experiments, refers to a series of early-20th-century experiments that gave physicists their first view of the structure of the atomic nucleus and the physics underlying the everyday world. It was first proposed by Nobel Prize-winning physicist Ernest Rutherford.
As familiar as terms like electron, proton and neutron are to us now, in the early 1900s, scientists had very little concept of the fundamental particles that made up atoms.
In fact, until 1897, scientists believed that atoms had no internal structure, that they were an indivisible unit of matter. Even the label "atom" gives this impression, given that it's derived from the Greek word "atomos," meaning "indivisible."
But that year, University of Cambridge physicist Joseph John Thomson discovered the electron and disproved the concept of the atom being unsplittable, according to Britannica. Thomson found that metals emitted negatively charged particles when illuminated with high-frequency light.
His discovery of electrons also suggested that there were more elements to atomic structure. That's because matter is usually electrically neutral; so if atoms contain negatively charged particles, they must also contain a source of equivalent positive charge to balance out the negative charge.
By 1904, Thomson had suggested a "plum pudding model" of the atom in which an atom comprises a number of negatively charged electrons in a sphere of uniform positive charge, distributed like blueberries in a muffin.
The model had serious shortcomings, however, primarily the mysterious nature of this positively charged sphere. One scientist who was skeptical of this model of atoms was Rutherford, who won the Nobel Prize in chemistry for his 1899 discovery of a form of radioactive decay via α-particles: two protons and two neutrons bound together, identical to a helium-4 nucleus, even if the researchers of the time didn't know this.
Rutherford's Nobel-winning discovery of α-particles formed the basis of the gold foil experiment, which cast doubt on the plum pudding model. His experiment would probe atomic structure with high-velocity α-particles emitted by a radioactive source. He initially handed off his investigation to two of his protégés, Ernest Marsden and Hans Geiger, according to Britannica.
Rutherford reasoned that if Thomson's plum pudding model was correct, then when an α-particle hit a thin foil of gold, the particle should pass through with only the tiniest of deflections. This is because α-particles are 7,000 times more massive than the electrons that presumably made up the interior of the atom.
Marsden and Geiger conducted the experiments primarily at the Physical Laboratories of the University of Manchester in the U.K. between 1908 and 1913.
The duo used a radioactive source of α-particles facing a thin sheet of gold or platinum surrounded by fluorescent screens that glowed when struck by the deflected particles, thus allowing the scientists to measure the angle of deflection.
The research team calculated that if Thomson's model was correct, the maximum deflection should occur when the α-particle grazed an atom it encountered and thus experienced the maximum transverse electrostatic force. Even in this case, the plum pudding model predicted a maximum deflection angle of just 0.06 degrees.
Of course, an α-particle passing through an extremely thin gold foil would still encounter about 1,000 atoms, and thus its deflections would be essentially random. Even with this random scattering, the maximum angle of deflection, if Thomson's model was correct, would be just over half a degree. The chance of an α-particle being reflected back was just 1 in 10^1,000 (1 followed by a thousand zeroes).
Yet, when Geiger and Marsden conducted their eponymous experiment, they found that in about 2% of cases, the α-particle underwent large deflections. Even more shocking, around 1 in 10,000 α-particles were reflected directly back from the gold foil.
Rutherford explained just how extraordinary this result was, likening it to firing a 15-inch (38 centimeters) shell (projectile) at a sheet of tissue paper and having it bounce back at you, according to Britannica.
Extraordinary though they were, the results of the Geiger-Marsden experiments did not immediately cause a sensation in the physics community. Initially, the data were unnoticed or even ignored, according to the book "Quantum Physics: An Introduction" by J. Manners.
The results did have a profound effect on Rutherford, however, who in 1910 set about determining a model of atomic structure that would supersede Thomson's plum pudding model, Manners wrote in his book.
The Rutherford model of the atom, put forward in 1911, proposed a nucleus, where the majority of the atom's mass was concentrated, according to Britannica. Surrounding this tiny central core were electrons, and the distance at which they orbited determined the size of the atom. The model suggested that most of the atom was empty space.
When the α-particle approaches within 10^-13 meters of the compact nucleus of Rutherford's atomic model, it experiences a repulsive force around a million times more powerful than it would experience in the plum pudding model. This explains the large-angle scatterings seen in the Geiger-Marsden experiments.
Later Geiger-Marsden experiments were also instrumental; the 1913 tests helped determine the upper limits of the size of an atomic nucleus. These experiments revealed that the amount of scattering of an α-particle at a given angle was proportional to the square of the charge of the atomic nucleus, or Z, according to the book "Quantum Physics of Matter," published in 2000 and edited by Alan Durrant.
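The quantitative statement behind that Z dependence is Rutherford's scattering formula (the standard cross-section, quoted here for context rather than taken from the cited book):

```latex
% Rutherford differential cross-section (Gaussian units):
\[
  \frac{d\sigma}{d\Omega}
  = \left(\frac{z\,Z\,e^{2}}{4E}\right)^{2}\frac{1}{\sin^{4}(\theta/2)},
\]
% where z = 2 is the charge number of the alpha-particle, Z that of the target
% nucleus, E the projectile's kinetic energy, and theta the scattering angle.
% The Z^2 factor is what let Chadwick read off nuclear charges, and the
% 1/sin^4(theta/2) dependence makes large-angle deflections rare but not impossible.
```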
In 1920, James Chadwick used a similar experimental setup to determine the Z value for a number of metals. The British physicist went on to discover the neutron in 1932, delineating it as a separate particle from the proton, the American Physical Society said.
Yet the Rutherford model shared a critical problem with the earlier plum pudding model of the atom: The orbiting electrons in both models should be continuously emitting electromagnetic energy, which would cause them to lose energy and eventually spiral into the nucleus. In fact, the electrons in Rutherford's model should have lasted less than 10^-5 seconds.
Another problem presented by Rutherford's model is that it doesn't account for the sizes of atoms.
Despite these failings, the Rutherford model derived from the Geiger-Marsden experiments would become the inspiration for Niels Bohr's atomic model of hydrogen, for which he won a Nobel Prize in Physics.
Bohr united Rutherford's atomic model with the quantum theories of Max Planck to determine that electrons in an atom can only take discrete energy values, thereby explaining why they remain stable around a nucleus unless emitting or absorbing a photon, or light particle.
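For hydrogen, those discrete values take the familiar Bohr form (a standard result, included here for reference):

```latex
% Bohr energy levels of hydrogen and the photon emitted in a transition:
\[
  E_{n} = -\frac{13.6\ \mathrm{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots,
  \qquad
  h\nu = E_{n_{i}} - E_{n_{f}} ,
\]
% so an electron can only jump between levels by emitting or absorbing a photon
% whose energy matches the gap, rather than spiraling continuously into the nucleus.
```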
Thus, the work of Rutherford, Geiger (who later became famous for his invention of a radiation detector) and Marsden helped to form the foundations of both quantum mechanics and particle physics.
Rutherford's idea of firing a beam at a target was adapted to particle accelerators during the 20th century. Perhaps the ultimate example of this type of experiment is the Large Hadron Collider near Geneva, which accelerates beams of particles to near light speed and slams them together.
Thomson's Atomic Model, Lumen Learning, Chemistry for Non-Majors.
Rutherford Model, Britannica, https://www.britannica.com/science/Rutherford-model
Alpha particle, U.S NRC, https://www.nrc.gov/reading-rm/basic-ref/glossary/alpha-particle.html
Manners, J., et al., 'Quantum Physics: An Introduction,' Open University, 2008.
Durrant, A., et al., 'Quantum Physics of Matter,' Open University, 2008.
Ernest Rutherford, Britannica, https://www.britannica.com/biography/Ernest-Rutherford
Niels Bohr, The Nobel Prize, https://www.nobelprize.org/prizes/physics/1922/bohr/facts/
House, J. E., 'Origins of Quantum Theory,' Fundamentals of Quantum Mechanics (Third Edition), 2018.
Quantum Holograms Don't Even Need to See Their Subject – IEEE Spectrum
Applications for the CAD software extend far beyond medicine and throughout the burgeoning field of synthetic biology, which involves redesigning organisms to give them new abilities. For example, we envision users designing solutions for biomanufacturing; it's possible that society could reduce its reliance on petroleum thanks to microorganisms that produce valuable chemicals and materials. And to aid the fight against climate change, users could design microorganisms that ingest and lock up carbon, thus reducing atmospheric carbon dioxide (the main driver of global warming).
Our consortium, GP-write, can be understood as a sequel to the Human Genome Project, in which scientists first learned how to "read" the entire genetic sequence of human beings. GP-write aims to take the next step in genetic literacy by enabling the routine "writing" of entire genomes, each with tens of thousands of different variations. As genome writing and editing becomes more accessible, biosafety is a top priority. We're building safeguards into our system from the start to ensure that the platform isn't used to craft dangerous or pathogenic sequences.
Need a quick refresher on genetic engineering? It starts with DNA, the double-stranded molecule that encodes the instructions for all life on our planet. DNA is composed of four types of nitrogen bases, adenine (A), thymine (T), guanine (G), and cytosine (C), and the sequence of those bases determines the biological instructions in the DNA. Those bases pair up to create what look like the rungs of a long and twisted ladder. The human genome (meaning the entire DNA sequence in each human cell) is composed of approximately 3 billion base-pairs. Within the genome are sections of DNA called genes, many of which code for the production of proteins; there are more than 20,000 genes in the human genome.
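In software, this refresher maps onto a very simple data model: a genome is essentially a long string over the alphabet A, C, G, T, with A pairing to T and G pairing to C. A minimal Python sketch (illustrative only, not GP-write code) shows the base-pairing rule and the reverse-complement operation that genomic tools use constantly:

```python
# Watson-Crick base pairing: A <-> T, G <-> C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the sequence of the opposite DNA strand, read 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(seq.upper()))

seq = "ATGGCGTAA"  # a made-up 9-base snippet
print(len(seq), "bases")
print("GC content:", (seq.count("G") + seq.count("C")) / len(seq))
print("reverse complement:", reverse_complement(seq))  # TTACGCCAT
```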
The Human Genome Project, which produced the first draft of a human genome in 2000, took more than a decade and cost about $2.7 billion in total. Today, an individual's genome can be sequenced in a day for $600, with some predicting that the $100 genome is not far behind. The ease of genome sequencing has transformed both basic biological research and nearly all areas of medicine. For example, doctors have been able to precisely identify genomic variants that are correlated with certain types of cancer, helping them to establish screening regimens for early detection. However, the process of identifying and understanding variants that cause disease and developing targeted therapeutics is still in its infancy and remains a defining challenge.
Until now, genetic editing has been a matter of changing one or two genes within a massive genome; sophisticated techniques like CRISPR can create targeted edits, but only at a small scale. And although many software packages exist to help with gene editing and synthesis, the scope of those tools is limited to edits of one or a few genes. Our CAD program will be the first to enable editing and design at genome-scale, allowing users to change thousands of genes, and it will operate with a degree of abstraction and automation that allows designers to think about the big picture. As users create new genome variants and study the results in cells, each variant's traits and characteristics (called its phenotype) can be noted and added to the platform's libraries. Such a shared database could vastly speed up research on complex diseases.
What's more, current genomic design software requires human experts to predict the effect of edits. In a future version, GP-write's software will include predictions of phenotype to help scientists understand if their edits will have the desired effect. All the experimental data generated by users can feed into a machine-learning program, improving its predictions in a virtuous cycle. As more researchers leverage the CAD platform and share data (the open-source platform will be freely available to academia), its predictive power will be enhanced and refined.
Our first version of the CAD software will feature a user-friendly graphical interface enabling researchers to upload a species' genome, make thousands of edits throughout the genome, and output a file that can go directly to a DNA synthesis company for manufacture. The platform will also enable design sharing, an important feature in the collaborative efforts required for large-scale genome-writing initiatives.
There are clear parallels between CAD programs for electronic and genome design. To make a gadget with four transistors, you wouldn't need the help of a computer. But today's systems may have billions of transistors and other components, and designing them would be impossible without design-automation software. Likewise, designing just a snippet of DNA can be a manual process. But sophisticated genomic design, with thousands to tens of thousands of edits across a genome, is simply not feasible without something like the CAD program we're developing. Users must be able to input high-level directives that are executed across the genome in a matter of seconds.
A good CAD program for electronics includes certain design rules to prevent a user from spending a lot of time on a design, only to discover that it can't be built. For example, a good program won't let the user lay down transistors in patterns that can't be manufactured or wire up logic that doesn't make sense. We want the same sort of design-for-manufacture rules for our genomic CAD program. Ultimately, our system will alert users if they're creating sequences that can't be manufactured by synthesis companies, which currently have limitations such as trouble with certain repetitive DNA sequences. It will also inform users if their biological logic is faulty; for example, if the gene sequence they added to code for the production of a protein won't work because they've mistakenly included a "stop production" signal halfway through.
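As a rough illustration of such design rules, the sketch below (a simplified example of our own, not the GP-write rule set; the thresholds are arbitrary) flags a premature in-frame stop codon in a coding sequence and warns about long single-base runs of the kind synthesis companies often struggle with.

```python
# Simplified design-rule checks; the rules and thresholds are illustrative only.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def check_coding_sequence(cds: str, max_homopolymer: int = 10) -> list[str]:
    """Return human-readable warnings for a protein-coding DNA sequence."""
    warnings = []
    cds = cds.upper()

    # Rule 1: an in-frame stop codon before the final codon halts translation early.
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    for index, codon in enumerate(codons[:-1]):
        if codon in STOP_CODONS:
            warnings.append(f"premature stop codon {codon} at codon {index + 1}")

    # Rule 2: long runs of a single base are hard to synthesize reliably.
    run_base, run_length = "", 0
    for base in cds:
        run_length = run_length + 1 if base == run_base else 1
        run_base = base
        if run_length == max_homopolymer:
            warnings.append(f"homopolymer run of {base} reaches {max_homopolymer} bases")
    return warnings

if __name__ == "__main__":
    # 'TAA' appears mid-sequence, so the checker reports a premature stop.
    print(check_coding_sequence("ATGGCTTAAGGCTGA"))
```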
But other aspects of our enterprise seem unique. For one thing, our users may import huge files containing billions of base-pairs. The genome of Polychaos dubium, a freshwater amoeboid, clocks in at 670 billion base-pairs; that's over 200 times larger than the human genome! As our CAD program will be hosted on the cloud and run in any Internet browser, we need to think about efficiency in the user experience. We don't want a user to click the "save" button and then wait ten minutes for results. We may employ the technique of lazy loading, in which the program only uploads the portion of the genome that the user is working on, or implement other tricks with caching.
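The lazy-loading idea can be sketched in a few lines. This toy example (our own illustration, assuming the genome is stored as a plain ASCII file of bases, which is not necessarily how the platform will store data) reads only the slice the user is currently viewing instead of loading the whole genome into memory.

```python
# Lazy-loading sketch: fetch only the requested slice of a large genome file.
# Assumes one base per byte in a plain ASCII text file.

class LazyGenome:
    def __init__(self, path: str):
        self.path = path

    def region(self, start: int, end: int) -> str:
        """Read bases [start, end) directly from disk, not the whole file."""
        with open(self.path, "rb") as handle:
            handle.seek(start)                        # jump to the byte offset
            return handle.read(end - start).decode("ascii")

if __name__ == "__main__":
    with open("toy_genome.txt", "w") as out:          # build a toy genome file
        out.write("ACGT" * 1000)
    genome = LazyGenome("toy_genome.txt")
    print(genome.region(100, 112))                    # only 12 bases are read
```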
Getting a DNA sequence into the CAD program is just the first step, because the sequence, on its own, doesn't tell you much. What's needed is another layer of annotation to indicate the structure and function of that sequence. For example, a gene that codes for the production of a protein is composed of three regions: the promoter that turns the gene on, the coding region that contains instructions for synthesizing RNA (the next step in protein production), and the termination sequence that indicates the end of the gene. Within the coding region there are "exons," which are directly translated into the amino acids that make up proteins, and "introns," intervening sequences of nucleotides that are removed during the process of gene expression. There are existing standards for this annotation that we want to improve on, so that our standardized interface language will be readily interpretable by people all over the world.
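One way to picture that annotation layer is as a list of typed features laid over the raw sequence. The data model below is a simplified sketch of our own (the coordinates and feature names are invented for illustration, not the GP-write schema), showing a gene annotated with a promoter, a coding region, and a terminator.

```python
# Simplified annotation layer: features record what each stretch of sequence does.
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str    # e.g. "promoter", "CDS", "terminator", "exon", "intron"
    start: int   # 0-based, inclusive
    end: int     # exclusive
    note: str = ""

# A toy gene model; coordinates are illustrative, not from a real genome.
gene_annotation = [
    Feature("promoter",     0,   60, "turns the gene on"),
    Feature("CDS",         60,  960, "codes for the protein"),
    Feature("terminator", 960, 1020, "marks the end of the gene"),
]

def features_at(position: int, features: list[Feature]) -> list[str]:
    """List the kinds of features covering a given base position."""
    return [f.kind for f in features if f.start <= position < f.end]

if __name__ == "__main__":
    print(features_at(75, gene_annotation))   # ['CDS']
```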
The CAD program from GP-write will enable users to apply high-level directives to edit a genome, including inserting, deleting, modifying, and replacing certain parts of the sequence. GP-write
Once a user imports the genome, the editing engine will enable the user to make changes throughout the genome. Right now, we're exploring different ways to efficiently make these changes and keep track of them. One idea is an approach we call genome algebra, which is analogous to the algebra we all learned in school. In mathematics, if you want to get from the number 1 to the number 10, there are infinite ways to do it. You could add 1 million and then subtract almost all of it, or you could get there by repeatedly adding tiny amounts. In algebra, you have a set of operations, costs for each of those operations, and tools that help organize everything.
In genome algebra, we have four operations: we can insert, delete, invert, or edit sequences of nucleotides. The CAD program can execute these operations based on certain rules of genomics, without the user having to get into the details. Similar to the "PEMDAS rule" that defines the order of operations in arithmetic, the genome editing engine must order the user's operations correctly to get the desired outcome. The software could also compare sequences against each other, essentially checking their math to determine similarities and differences in the resulting genomes.
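To show what an ordered set of edit operations might look like, here is a toy "genome algebra" engine of our own devising (not the GP-write editing engine): it applies insert, delete, invert, and edit operations to a sequence in a defined order, working from right to left by position so that edits near the start of the genome don't shift the coordinates of the others.

```python
# Toy "genome algebra": four operations applied in a well-defined order.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def apply_operations(genome: str, operations: list[tuple]) -> str:
    """Apply (op, start, payload) tuples, where op is one of
    'insert', 'delete', 'invert', or 'edit'.

    Operations are applied right-to-left so that changes near the start of
    the genome do not invalidate the coordinates of the remaining edits.
    """
    seq = list(genome)
    for op, start, payload in sorted(operations, key=lambda o: o[1], reverse=True):
        if op == "insert":            # payload: sequence to insert at `start`
            seq[start:start] = list(payload)
        elif op == "delete":          # payload: number of bases to remove
            del seq[start:start + payload]
        elif op == "invert":          # payload: length; reverse-complement the segment
            segment = seq[start:start + payload]
            seq[start:start + payload] = [COMPLEMENT[b] for b in reversed(segment)]
        elif op == "edit":            # payload: replacement bases of equal length
            seq[start:start + len(payload)] = list(payload)
    return "".join(seq)

if __name__ == "__main__":
    print(apply_operations("ATGCCCGGGTTT",
                           [("edit", 0, "TTG"),
                            ("delete", 3, 3),
                            ("insert", 9, "AAA")]))   # -> TTGGGGAAATTT
```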
In a later version of the software, we'll also have algorithms that advise users on how best to create the genomes they have in mind. Some altered genomes can most efficiently be produced by creating the DNA sequence from scratch, while others are more suited to large-scale edits of an existing genome. Users will be able to input their design objectives and get recommendations on whether to use a synthesis or editing strategyor a combination of the two.
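A toy heuristic of our own (not GP-write's recommendation algorithm, and with an arbitrary threshold) gives a flavor of how such advice could work: if a design rewrites a large enough fraction of the genome, building it from scratch may beat editing an existing one.

```python
# Toy heuristic (not GP-write's recommender): pick a build strategy based on
# how much of the genome the design changes. The 25% threshold is arbitrary.
def recommend_strategy(genome_length: int, bases_changed: int,
                       synthesis_threshold: float = 0.25) -> str:
    """Suggest de novo synthesis for heavily rewritten designs, editing otherwise."""
    fraction = bases_changed / genome_length
    if fraction >= synthesis_threshold:
        return f"de novo synthesis ({fraction:.0%} of the genome is rewritten)"
    return f"large-scale editing ({fraction:.0%} of the genome is rewritten)"

if __name__ == "__main__":
    print(recommend_strategy(genome_length=4_600_000, bases_changed=15_000))
    print(recommend_strategy(genome_length=4_600_000, bases_changed=2_000_000))
```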
Users can import any genome (here, the genome of the E. coli bacterium) and create many edited versions; the CAD program will automatically annotate each version to show the changes made. GP-write
Our goal is to make the CAD program a "one-stop shop" for users, with the help of the members of our Industry Advisory Board: Agilent Technologies, a global leader in life sciences, diagnostics, and applied chemical markets; the DNA synthesis companies Ansa Biotechnologies, DNA Script, and Twist Bioscience; and the gene editing automation companies Inscripta and Lattice Automation. (Lattice was founded by coauthor Douglas Densmore.) We are also partnering with biofoundries such as the Edinburgh Genome Foundry that can take synthetic DNA fragments, assemble them, and validate them before the genome is sent to a lab for testing in cells.
Users can most readily benefit from our connections to DNA synthesis companies; when possible, we'll use these companies' APIs to allow CAD users to place orders and send their sequences off to be synthesized. (In the case of DNA Script, when a user places an order it would be quickly printed on the company's DNA printers; some dedicated users might even buy their own printers for more rapid turnaround.) In the future, we'd like to make the ordering step even more user-friendly by suggesting the company best suited to the manufacture of a particular sequence, or perhaps by creating a marketplace where the user can see prices from multiple manufacturers, the way people do on airfare sites.
We've recently added two new members to our Industrial Advisory Board, each of which brings interesting new capabilities to our users. Catalog Technologies is the first commercially viable platform to use synthetic DNA for massive digital storage and computation, and could eventually help users store vast amounts of genomic data generated on GP-write software. The other new board member is SOSV's IndieBio, the leader in biotech startup development. It will work with GP-write to select, fund, and launch companies advancing genome-writing science from IndieBio's New York office. Naturally, all those startups will have access to our CAD software.
We're motivated by a desire to make genome editing and synthesis more accessible than ever before. Imagine if high-school kids who don't have access to a wet lab could find their way to genetic research via a computer in their school library; this scenario could enable outreach to future genome design engineers and could lead to a more diverse workforce. Our CAD program could also entice people with engineering or computational backgrounds, but with no knowledge of biology, to contribute their skills to genetic research.
Because of this new level of accessibility, biosafety is a top priority. We're planning to build several different levels of safety checks into our system. There will be user authentication, so we'll know who's using our technology. We'll have biosecurity checks upon the import and export of any sequence, basing our "prohibited" list on the standards devised by the International Gene Synthesis Consortium (IGSC) and updating it in accordance with the IGSC's evolving database of pathogens and potentially dangerous sequences. In addition to hard checkpoints that prevent a user from moving forward with something dangerous, we may also develop a softer system of warnings.
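A hard checkpoint of this kind might look something like the sketch below. It is purely illustrative: the prohibited fragments are placeholders, not entries from the IGSC database, and real screening uses far more sophisticated sequence-similarity methods than exact substring matching.

```python
# Illustrative biosecurity checkpoint. The prohibited fragments below are
# placeholders, not real IGSC entries; real screening is far more sophisticated.
PROHIBITED_FRAGMENTS = {
    "TTTTTTTTTTTTTTTTTTTT": "placeholder entry 1",
    "GGGGGGGGGGGGGGGGGGGG": "placeholder entry 2",
}

def screen_sequence(sequence: str) -> list[str]:
    """Return descriptions of any prohibited fragments found in the sequence."""
    sequence = sequence.upper()
    return [label for fragment, label in PROHIBITED_FRAGMENTS.items()
            if fragment in sequence]

def import_sequence(sequence: str, user: str) -> str:
    """Hard checkpoint: refuse the import if screening finds a match."""
    hits = screen_sequence(sequence)
    if hits:
        raise ValueError(f"import blocked for {user}: matched {hits}")
    return sequence

if __name__ == "__main__":
    print(import_sequence("ATGCATGCATGC", user="alice"))   # passes screening
```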
We'll also keep a permanent record of redesigned genomes for tracing and tracking purposes. This record will serve as a unique identifier for each new genome and will enable proper attribution to further encourage sharing and collaboration. The goal is to create a broadly accessible resource for researchers, philanthropies, pharmaceutical companies, and funders to share their designs and lessons learned, helping all of them identify fruitful pathways for advancing R&D on genetic diseases and environmental health. We believe that the authentication of users and annotated tracking of their designs will serve two complementary goals: It will enhance biosecurity while also engendering a safer environment for collaborative exchange by creating a record for attribution.
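One simple way to give every redesigned genome a stable identifier, sketched below as our own illustration rather than the platform's actual scheme, is to hash the sequence together with its design metadata; any change to either produces a new identifier, which supports both tracing and attribution.

```python
# Illustrative genome identifier: a content hash over sequence plus metadata.
import hashlib
import json

def genome_id(sequence: str, metadata: dict) -> str:
    """Derive a stable identifier from a sequence and its design record."""
    record = json.dumps({"sequence": sequence, "metadata": metadata}, sort_keys=True)
    return hashlib.sha256(record.encode("utf-8")).hexdigest()[:16]

if __name__ == "__main__":
    print(genome_id("ATGGCC", {"author": "alice", "parent": None, "edits": 3}))
```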
One project that will put the CAD program to the test is a grand challenge adopted by GP-write, the Ultra-Safe Cell Project. This effort, led by coauthor Farren Isaacs and Harvard professor George Church, aims to create a human cell line that is resistant to viral infection. Such virus-resistant cells could be a huge boon to the biomanufacturing and pharmaceutical industry by enabling the production of more robust and stable products, potentially driving down the cost of biomanufacturing and passing along the savings to patients.
The Ultra-Safe Cell Project relies on a technique called recoding. To build proteins, cells use combinations of three DNA bases, called codons, to code for each amino acid building block. For example, the triplet GGC represents the amino acid glycine, TTA represents leucine, GTC represents valine, and so on. Because there are 64 possible codons but only 20 amino acids, many of the codons are redundant. For example, four different codons can code for glycine: GGT, GGC, GGA, and GGG. If you replaced a redundant codon in all genes (that is, 'recoded' the genes), the human cell could still make all of its proteins. But viruses, whose genes would still include the redundant codons and which rely on the host cell to replicate, would not be able to translate their genes into proteins. Think of a key that no longer fits into the lock; viruses trying to replicate would be unable to do so in the cells' machinery, rendering the recoded cells virus-resistant.
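The recoding step itself can be illustrated in a few lines of Python (a toy example using the codons named above, not the Ultra-Safe Cell pipeline): every in-frame occurrence of one glycine codon is swapped for a synonymous one, so the encoded protein is unchanged but the original codon disappears from the gene.

```python
# Toy recoding example: replace the glycine codon GGC with the synonymous GGT.
def recode(cds: str, target: str = "GGC", replacement: str = "GGT") -> str:
    """Swap every in-frame occurrence of `target` for the synonymous `replacement`."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return "".join(replacement if codon == target else codon for codon in codons)

if __name__ == "__main__":
    original = "ATGGGCTTAGGCGTC"   # Met-Gly-Leu-Gly-Val
    recoded = recode(original)
    print(recoded)                  # ATGGGTTTAGGTGTC: same protein, no GGC codons
```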
This concept of recoding for viral resistance has already been demonstrated. Isaacs, Church, and their colleagues reported in a 2013 paper in Science that, by removing all 321 instances of a single codon from the genome of the E. coli bacterium, they could impart resistance to viruses which use that codon. But the ultra-safe cell line requires edits on a much grander scale. We estimate that it would entail thousands to tens of thousands of edits across the human genome (for example, removing specific redundant codons from all 20,000 human genes). Such an ambitious undertaking can only be achieved with the help of the CAD program, which can automate much of the drudge work and let researchers focus on high-level design.
The famed physicist Richard Feynman once said, "What I cannot create, I do not understand." With our CAD program, we hope geneticists become creators who understand life on an entirely new level.
Continued here:
Quantum Holograms Don't Even Need to See Their Subject - IEEE Spectrum