Category Archives: Quantum Physics
What is the double-slit experiment, and why is it so important? – Interesting Engineering
Few science experiments are as strange and compelling as the double-slit experiment.
Few experiments, if any, in modern physics convey such a simple idea (that light and matter can act as both waves and discrete particles, depending on whether they are being observed) that nonetheless remains one of the great mysteries of quantum mechanics.
It's the kind of experiment that, despite its simplicity, is difficult to wrap your mind around, because what it shows is incredibly counterintuitive.
The double-slit experiment has not only been repeated countless times in physics labs around the world, it has also spawned many derivative experiments that reinforce its central result: that particles can behave as waves or as discrete objects, and that it is as if they "know" when you are watching them.
To understand what the double-slit experiment demonstrates, we need to lay out some key ideas from quantum mechanics.
In 1925, Werner Heisenberg presented his mentor, the eminent German physicist Max Born, with a paper to review that showed how the properties of subatomic particles, like position, momentum, and energy, could be measured.
Born saw that these properties could be represented through mathematical matrices, with definite figures and descriptions of individual particles, and this laid the foundation for the matrix description of quantum mechanics.
Meanwhile, in 1926, Erwin Schrödinger published his wave theory of quantum mechanics, which showed that particles could be described by an equation defining their waveform; that is, it treated particles as if they were waves.
This gave rise to the concept of wave-particle duality, which is one of the defining features of quantum mechanics. According to this concept, subatomic entities can be described as both waves and particles, and it is up to the observer to decide how to measure them.
That last part is important since it determines how quantum entities will manifest. If you try to measure a particle's position, you will find a definite position, and it will not behave like a wave at all.
If you try to measure its momentum, you will find that it behaves like a wave, and you can't know anything definitive about its position beyond the probability that it exists at any given point within that wave.
Essentially, you will measure it as a particle or a wave, and doing so decides what form it will take.
The double-slit experiment is one of the simplest demonstrations of this wave-particle duality as well as a central defining weirdness of quantum mechanics, one that makes the observer an active participant in the fundamental behavior of particles.
The easiest way to describe the double-slit experiment is by using light. First, take a source of coherent light, such as a laser beam, that shines in a single wavelength, like purely blue visible light at 460 nm, and aim it at a wall with two slits in it. The slits should be narrow and very close together (from roughly the light's wavelength up to a fraction of a millimeter apart) so that the beam illuminates both of them.
Behind that wall, place a screen that can detect and record the light that impacts it. If you fire the laser beam at the two slits, on the recording screen behind the wall you will see a stripey pattern like this:
This is probably not what you were expecting, but it makes perfect sense if you treat light as a wave. If light is a wave, then when the single wavefront from the laser hits both slits, each slit becomes a new "source" of light on the other side of the wall, so a new wave originates from each slit, producing two waves.
Where those two waves overlap, they produce something known as interference, which can be either constructive or destructive. When the waves line up peak-to-peak or trough-to-trough, their amplitudes add together and the light is reinforced. This is constructive interference, and it produces the brighter bars in the pattern.
When the waves cancel each other out, as when a peak meets a trough, their amplitudes subtract and the light is diminished or even eliminated, producing the dark spaces in between the bright bars.
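To see how the bright and dark bars come out of simple wave addition, here is a minimal sketch in Python. The slit separation and screen distance are illustrative values, not from the article; only the 460 nm wavelength comes from the text above.

```python
import numpy as np

wavelength = 460e-9   # blue light from the article, in metres
d = 20e-6             # assumed slit separation, metres
L = 1.0               # assumed slit-to-screen distance, metres

y = np.linspace(-0.05, 0.05, 11)           # positions across the screen, metres

path_diff = d * y / L                       # extra distance from one slit (small-angle approximation)
phase = 2 * np.pi * path_diff / wavelength  # phase difference between the two waves

# add two equal-amplitude waves and square: 1 = fully constructive, 0 = fully destructive
intensity = (1 + np.cos(phase)) / 2

for yi, I in zip(y, intensity):
    print(f"y = {yi:+.3f} m   relative brightness = {I:.2f}")
```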
But in the case of quantum entities like photons of light or electrons, they are also individual particles. So what happens when you shoot a single photon through the double slits?
A single photon hitting the screen leaves behind only a tiny dot, which might not mean much in isolation. But if you fire many single photons at the double slits, one after another, those tiny dots build up on the screen into the same striped interference pattern produced by the full laser beam hitting the double slits.
In other words, the individual photon behaves as if it passed through both slits like it was a wave.
Now, here's where things get really weird.
We can set up a detector in front of one of the slits that can watch for photons and light up whenever it detects one passing through. When we do this, the detector will light up 50% of the time, and the pattern left behind on the screen changes, giving us something that looks like this:
And to make things even wilder, we can set up a detector behind the wall that only detects a photon after it has passed through a slit, and we get the same result. That means that even if the photon passes through both slits as a wave, the moment it is detected it is no longer a wave but a particle. Not only that: the wave emerging from the other slit also collapses, leaving only the particle detected at the slit it "chose".
In practice, this means that somehow the universe "knows" that someone is watching and flips the metaphorical quantum coin to see which slit the particle passed through. The more individual photons you shoot through the double slit, the closer that photon detector comes to detecting photons 50% of the time, just as flipping a coin 10 times might give you heads 70% of the time while flipping it 100 times might give you tails 55% of the time, and flipping it 1 billion times gives you heads 50.0003% of the time.
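That gradual settling toward 50% is just the law of large numbers. A quick simulation of a fair 50/50 outcome (a generic sketch, nothing photon-specific) shows the same behavior:

```python
import random

random.seed(1)  # so the run is reproducible

for n in (10, 100, 10_000, 1_000_000):
    # each trial is a fair 50/50 outcome, like the which-slit detection
    hits = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} trials: detector fired {100 * hits / n:.2f}% of the time")
```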
This seems to show that the act of observation changes what quantum entities do, and that the quantum states of entities passing through the double slits are governed by the laws of probability, making it impossible to ever predict with certainty what the quantum state of an entity will be.
The double-slit experiment actually predates quantum mechanics by a little more than a century.
During the Scientific Revolution, the nature of light was a particularly contentious topic, with many, like Isaac Newton himself, arguing in favor of a corpuscular theory of light that held that light was transmitted through particles.
Others believed that light was a wave that was transmitted through "aether" or some other medium, the way sound travels through air and water, but Newton's reputation and the lack of an effective means to demonstrate the wave theory of light solidified the corpuscular view for just shy of a century after Newton published his Opticks in 1704.
The definitive demonstration came from the British polymath Thomas Young, who presented a paper to the Royal Society of London in 1803 that described a pair of simple experiments that anyone could perform to see for themselves that light was in fact a wave.
First, Young established that a pair of waves were subject to interference when they overlapped, producing a distinctive interference pattern.
He initially demonstrated this interference pattern using a ripple tank of water, showing that such a pattern is characteristic of wave propagation.
Young then introduced the precursor to the modern double-slit experiment, though instead of using a laser beam to produce the required light source, Young used reflected sunlight striking two slits in a card as its target.
The resulting light diffraction showed the expected interference pattern, and the wave theory of light gained considerable support. It would take another decade and a half before further experimentation conclusively refuted corpuscles in favor of waves, but the double-slit experiment that Young developed proved to be a fatal blow to Newton's theory.
Young wasn't lying when he said, "The experiments I am about to relate...may be repeated with great ease, whenever the sun shines, and without any other apparatus than is at hand to everyone."
While it might be a stretch to say that you can use the double-slit experiment to demonstrate some of the more counterintuitive features of quantum mechanics (unless you have a photon detector handy and a laser that shoots individual photons), you can still use it to demonstrate the wave nature of light.
If you want to replicate Young's experiment, you only need as large a box as is practical with a hole cut in it a little smaller than an index card. Then, take an X-Acto knife or similar blade for fine cutting work and cut two slits into a piece of cardboard larger than the hole in your box. The slits should be between 0.1 mm and 0.4 mm apart; the closer together they are, the more distinct the interference pattern will be. It's better to create cards for this rather than cut directly into the box, since you might need to make adjustments to the spacing of the slits.
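The reason the slit spacing matters is the standard fringe-spacing relation: bright bars appear roughly every λD/d along the back wall, where λ is the wavelength, D the slit-to-wall distance, and d the slit separation. A rough calculation with assumed, illustrative numbers:

```python
wavelength = 550e-9   # a representative green wavelength of sunlight, metres (assumed)
D = 0.5               # slit-to-back-wall distance inside the box, metres (assumed)
d = 0.3e-3            # slit separation, metres (middle of the 0.1-0.4 mm range)

fringe_spacing = wavelength * D / d
print(f"bright bars roughly every {fringe_spacing * 1000:.1f} mm")   # about 0.9 mm
```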
Once you're satisfied with the spacing, affix the card with the double-slit in it over the hole and secure it in place with tape. Just make sure sunlight isn't leaking around the card.
You'll also need to create some eye-holes in the box so you can look inside without getting in the way of the light hitting the double-slit card, but once you figure that out, you're all set.
To accurately diffract sunlight using this box, you will need to have the sunlight more or less hitting the double-slit card dead on, so it might take some maneuvering to get it properly positioned.
Once it is, look through the eye holes and you will see the interference pattern forming on the inside wall, along with colored fringes, since sunlight contains many wavelengths and each produces a slightly different fringe spacing.
If you wanted to try it out with something fancier, get yourself a laser pointer from an office supply store. Just like you'd do with a viewing box, create cards with slits in them, and when properly spaced, set up a shielded area for the card to rest on.
You'll want to make sure that only the light from the laser pointer is hitting the double-slit, so shield the card however you need to. Then, set the laser pointer on a surface level with the slits and shine the laser at them. On the wall behind the card, the interference pattern from the slits should be clearly visible.
If you don't want to go through all that trouble, you can also use Photoshop or similar software to recreate the effect.
First, create a template of evenly spaced concentric circles. Using a different layer for each source, as well as a background layer, position the centers of the two sets of concentric rings near one another. On a 1200-pixel-wide canvas, a distance of 100 pixels between the two centers should do nicely.
Then, fill in the color of each concentric ring, alternating light and dark, with an opacity set to about 33%. You may need to hide one of the concentric circle layers while you work on the other. When you're done, reveal the two overlapping layers of circles and the interference pattern should jump out at you immediately, looking something like this:
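If you would rather script the picture than click through Photoshop, a few lines of Python (assuming numpy and matplotlib are installed) generate the same two-source interference pattern. The 1200-pixel canvas and 100-pixel separation follow the text above, while the ring spacing is an arbitrary choice:

```python
import numpy as np
import matplotlib.pyplot as plt

# a 1200-pixel-wide canvas with two sources 100 pixels apart, as in the text
x, y = np.meshgrid(np.arange(1200), np.arange(800))
s1, s2 = (550, 400), (650, 400)   # source positions in pixels
wavelength = 40                   # spacing of the rings in pixels (illustrative)

r1 = np.hypot(x - s1[0], y - s1[1])
r2 = np.hypot(x - s2[0], y - s2[1])

# each source contributes a circular wave; summing them shows the interference bands
pattern = np.cos(2 * np.pi * r1 / wavelength) + np.cos(2 * np.pi * r2 / wavelength)

plt.imshow(pattern, cmap="gray")
plt.axis("off")
plt.show()
```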
Of course, if you want to dig into the quantum mechanics side of things, you'll need to work in a pretty advanced physics lab at a university or science institute, since photon detectors aren't the kind of thing you can pick up at the hobby store.
Still, if you're compelled to try the heavier stuff out for yourself, you wouldn't be the first person to get drawn into a career in physics because of the weirdness of quantum mechanics, and there are definitely worse ways to make a living.
Condensed Matter Physics and Quantum Light and Matter Project Coordinator job with DURHAM UNIVERSITY | 281141 – Times Higher Education (THE)
Department of Physics
Grade 5: £22,847 - £26,341
Fixed Term - Full Time
Contract Duration: 24 Months
Contracted Hours per Week: 35
Closing Date: 04-Mar-2022, 7:59:00 AM
The Department and role purpose:
The Department of Physics at Durham University is one of the very best UK Physics departments with an outstanding reputation for excellence in teaching, research and employability of our students.
The Department of Physics is committed to building and maintaining a diverse and inclusive environment. It is pledged to the Athena SWAN charter, where we hold a silver award, and has the status of IoP Juno Champion. We embrace equality and particularly welcome applications from women, black and minority ethnic candidates, and members of other groups that are under-represented in physics. Durham University provides a range of benefits including pension, flexible and/or part time working hours, shared parental leave policy and childcare provision.
The Condensed Matter Physics (CMP) and Quantum Light and Matter (QLM) research sections are seeking to appoint a self-motivated and experienced Project Coordinator to support the daily operations and the effective and efficient running of their research. This post offers the successful applicant an opportunity to be part of one of Durham University's leading research groups.
The post holder will be a committed, enthusiastic professional who relates well to people at all levels. She/he will be expected to demonstrate a high level of initiative and be confident in dealing with diverse groups, including visiting researchers, Heads of Faculties, Departments and Colleges, and research groups across the University.
The post holder will be expected to work flexibly to deliver effective administrative support and guidance to the CMP/QLM staff and its stakeholders. Working closely with senior staff and colleagues, she/he will take responsibility for the fundamental and general CMP/QLM administrative services, as well as assisting with data gathering for funding and project applications, organising events and research activities, creating and maintaining financial and publishing records. The role will also provide opportunities for the post holder to contribute to the development of new promotional materials and communication tools for the CMP and QLM research sections e.g. website and social media content.
The CMP & QLM Project Coordinator will act as the first point of contact for enquiries, managing a wide range of internal and external enquiries from staff, partners and other stakeholders via email, telephone and face-to-face contact, taking an active decision-making role and using judgement on a day-to-day basis to provide advice, support and information.
The candidate would be expected to assist the Grant PIs, providing administrative support to ensure the smooth running of activities and to maximise effective use of academic staff time.
This role is an excellent opportunity for an administrator seeking to develop their experience and knowledge at both strategic and operational levels, and applications are invited from enthusiastic individuals looking to embrace a new challenge.
Core responsibilities:
Role responsibilities:
Specific role requirements
Working Arrangements
At Durham we recognise that our staff and students are our greatest asset and we want to support the health and wellbeing of all. Hybrid working supports this ethos and provides many benefits to our colleagues, including empowering people, where their role allows, to work in a manner which is more suitable for them, whilst encouraging our commitment to environmental sustainability.
Depending on the needs of the business and the job role, Durham University is piloting hybrid working for all Professional Services colleagues in the academic year 2021/2022, which may include the opportunity to work both on and off campus and to flex working hours. If appointed to the post, your line-manager will discuss the specific arrangements with you. Any hybrid arrangements are non-contractual and may change within the pilot and when the pilot ends.
Interviews are anticipated to take place on or around 28 February 2022.
Reward and Benefits
To support the delivery of the University's People Strategy to attract, retain and reward the very best, we offer a fantastic range ofrewards and benefitsto our staff,including:
Recruiting to this post
In order to be considered for interview, candidates must evidence each of the essential criteria required for the role in the person specification. In some cases, the recruiting panel may also consider the desirable criteria, so we recommend you evidence all criteria in your application.
Please note that some criteria will only be considered at interview stage.
How to apply
We prefer to receive applications online.
Please note that in submitting your application Durham University will be processing your data. We would ask you to consider the relevant University Privacy Statement at https://www.dur.ac.uk/ig/dp/privacy/pnjobapplicants/ which provides information on the collation, storing and use of data.
Information if you have a disability
The University welcomes applications from disabled people. We are committed to ensuring fair treatment throughout the recruitment process. We will make adjustments to support the interview process wherever it is reasonable to do so and, where successful, adjustments will be made to support people within their role.
If you are unable to complete your application via our recruitment system, please get in touch with us on e.recruitment@durham.ac.uk.
What you are required to submit:
Please ensure that you submit all documentation listed above or your application cannot proceed to the next stage.
Contact details
For further information regarding this post, please contact;
Mrs Linda Wilkinson, Research Manager, Department of Physics, Lower Mountjoy, Durham, DH1 3LE (l.a.wilkinson@durham.ac.uk)
Contact information for technical difficulties when submitting your application
If you encounter technical difficulties when using the online application form, we prefer you send enquiries by email. Please send your name along with a brief description of the problem you're experiencing to e.recruitment@durham.ac.uk
We will notify you on the status of your application at various points throughout the selection process, via automated emails from our e-recruitment system. Please check your spam/junk folder periodically to ensure you receive all emails.
At Durham University, our aim is to create an open and inclusive environment where everyone can reach their full potential and believe our staff should reflect the diversity of the global community in which we work. We welcome and encourage applications from members of groups who are under-represented in our work force including people with disabilities, women and black, Asian and minority ethnic communities.
As a University we foster a collegiate community of extraordinary people aligned to the University's values. Equality, Diversity, and Inclusion (EDI) are a key part of the University's Strategy and a central part of everything we do. At Durham we actively work towards providing an environment where our staff and students can study, work and live in a community which is supportive and inclusive, and in doing so, recruit the world's best candidates from all backgrounds and identities. It's important to us that all of our colleagues are aligned to both our values and commitment to EDI.
Person specification - skills, knowledge, qualifications and experience required
Essential Criteria
Desirable Criteria
Durham University
OUR CHARACTERISTICS:We are a globally outstanding centre of teaching and research excellence, a collegiate community of extraordinary people, in a unique and historic setting.
OUR VALUES:We are inspiring, challenging, innovative, responsible and enabling.
Durham University is one of the world's top universities with strengths across the Arts and Humanities, Business, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.
The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.
Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.
It is expected that all staff within the University:
Family key attributes
Roles in this family provide a comprehensive service and deliver the efficient administration and governance of the University.
Overall family purpose
Link to key strategic plan
DBS Requirement: Not Applicable.
Is Afterlife Possible? Scientist Reveals the Physics Behind Death – News18
The human brain is a mysterious organ that is much bigger than it looks. This deceptive quality is also reflected in the sense of self that humans possess. While we are, physically, just a collection of atoms and molecules, the sheer improbability of having consciousness, and consciousness this advanced, encourages the belief that humans are much more than flesh and bones.
And this is how the concept of the soul is fostered. Religious texts and teachings frequently bring the soul into the discussion. What some perceive as the soul boils down to the consciousness that makes us who we are. The soul is believed to exist beyond the laws of life and death: it is postulated that it existed before we did and will continue to exist after we are gone. However, this concept becomes feeble when viewed through a scientific lens.
Sean M. Carroll, a physicist specialising in cosmology, gravity, and quantum mechanics, shared his thoughts on this never-ending question of the soul in a blog post. Carroll carefully analysed the claim that life does not end when the body decomposes but continues beyond it.
The questions that challenge this belief revolve around the fundamental laws of physics governing how atoms interact with their surroundings. Carroll points out that for life after death to be true, the basic physics of atoms and electrons would have to be demolished and replaced with a new model. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that new physics to interact with the atoms that we do have.
Most people perceive the soul as a blob of energy. What Carroll questions is how this energy would interact with the world we witness and the building blocks of it that we do not see. Multiple pillars of physics, such as the Dirac equation, Lorentz invariance, the Hamiltonian formulation of quantum mechanics, and gauge invariance, would have to be proven void, or the concept of the soul loses its footing in any attempt to justify the existence of life after death.
While discussions such as these do tickle the thought process, they also sway us away from more reality-centric questions about human beings and the consciousness that gives them an identity. So, what do you think about the existence of an immaterial, immortal soul and life after we die?
The ten greatest ideas in the history of science – Big Think
In his book The Structure of Scientific Revolutions, Thomas Kuhn argued that science, instead of progressing gradually in small steps as is commonly believed, actually moves forward in awkward leaps and bounds. The reason for this is that established theories are difficult to overturn, and contradictory data is often dismissed as merely anomalous. However, at some point, the evidence against the theory becomes so overwhelming that it is forcefully displaced by a better one in a process that Kuhn refers to as a paradigm shift. And in science, even the most widely accepted ideas could, someday, be considered yesterday's dogma.
Yet, there are some concepts which are considered so rock solid that it is difficult to imagine them ever being replaced with something better. What's more, these concepts have fundamentally altered their fields, unifying and illuminating them in a way that no previous theory had done before.
So, what are these ideas? Compiling such a list would be a monumental task, mostly because there are so many good ones to choose from. Thankfully, Oxford chemistry professor Peter Atkins has done just that in his 2003 book Galileo's Finger: The Ten Great Ideas of Science. Dr. Atkins' breadth of scientific knowledge is truly impressive, and his ten choices are excellent. Though this book was written with a popular audience in mind, it can be quite incomprehensible in places, even for people with a background in science. Still, I highly recommend it.
Let's take a look at the ten great ideas (listed in no particular order).
In 1973, evolutionary biologist Theodosius Dobzhansky penned an essay titled Nothing in Biology Makes Sense Except in the Light of Evolution. By now, thousands of students across the globe have heard this title quoted to them by their biology teachers.
And for good reason, too. The power of evolution comes from its ability to explain both the unity and diversity of life; in other words, the theory describes how similarities and differences between species arise by descent from a universal common ancestor. Remarkably, all species have about one-third of their genes in common, and 65% of human genes are similar to those found in bacteria and unicellular eukaryotes (like algae and yeast).
One of the most fascinating examples of common descent is the evolution of the gene responsible for the final step in vitamin C synthesis. Humans have this gene, but it is broken. That is why we have to drink orange juice or find some other external source of vitamin C. By sequencing this gene and tracking mutations, it is possible to trace back exactly when the ability to synthesize vitamin C was lost. According to this phylogenetic tree (see above), the loss occurred in an ancestor which gave rise to the entire anthropoid primate lineage. Humans, chimpanzees, orangutans, and gorillas all possess this broken gene, and hence, all of them need an external source of vitamin C. (At other points in evolutionary history, bats and guinea pigs also lost this vitamin C gene.) Yet, many mammals don't need vitamin C in their diet because they possess a functioning copy and are able to produce it on their own; that's why your dog or cat gets by just fine without orange juice.
The most satisfying explanation for these observations is descent with modification from a common ancestor.
A living counterexample to the notion that science and religion are necessarily in conflict, the Father of Genetics was none other than Gregor Mendel, an Augustinian friar. He famously conducted experiments using pea plants and, in the process, deduced the basic patterns of inheritance. He referred to these heritable units as elements; today, we call them genes. Amazingly, Mendel didn't even know DNA existed, and Charles Darwin knew about neither DNA nor the discoveries of Mendel.
It wasn't until 1952 that scientists determined that DNA was the molecule responsible for transmitting heritable information. An experiment conducted by Alfred Hershey and Martha Chase, using viruses with radioactively labeled sulfur or phosphorus to infect bacteria, rather convincingly demonstrated that this was the case. Then, in 1953, James Watson and Francis Crick, with substantial input from Rosalind Franklin, shattered the biological world with their double helix model of DNA structure.
From there, it was determined that the letters (A, C, G, T) of the DNA sequence encoded information. In groups of three (e.g., ACG, GAA, CCT, etc.), these nucleotides coded for amino acids, the building blocks of protein. Collectively, every possible combination of three letters is known as the genetic code. (See diagram above. Note that every T is replaced with U in RNA.) Eventually, the central dogma of molecular biology emerged: (1) DNA is the master blueprint and is responsible for inheritance; (2) DNA is transcribed into RNA, which acts as a messenger, conveying this vital information; and (3) RNA is translated into proteins, which provide structural and enzymatic functions for the cell.
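As a toy illustration of how reading DNA three letters at a time works, here is a short Python sketch using a handful of real codon assignments (only a tiny subset of the full 64-codon table):

```python
# a few real codon -> amino acid assignments (DNA alphabet; RNA would use U instead of T)
CODON_TABLE = {
    "ATG": "Met",   # also the usual "start" signal
    "TTT": "Phe",
    "GAA": "Glu",
    "CCT": "Pro",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("ATGTTTGAACCTTAA"))   # ['Met', 'Phe', 'Glu', 'Pro', 'STOP']
```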
Today, it is known that DNA sequences alone are insufficient to explain all the behaviors observed at the cellular level. Alterations to the DNA which do not affect the sequence of letters, known as epigenetic changes, are under intense investigation. It is currently unclear to what extent epigenetics is responsible for heritable traits.
All the energy that currently exists in the Universe is all that ever has been and all that ever will be. Energy is neither created nor destroyed (which is why you should never buy a perpetual motion machine), though it can be transformed into mass (and vice versa). This is known as mass-energy equivalence, and every schoolchild knows the equation that describes it: E = mc².
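To put a rough number on that equivalence (a back-of-the-envelope calculation, not a figure from the article): converting just one gram of matter entirely into energy gives E = mc² = (0.001 kg) × (3 × 10⁸ m/s)² ≈ 9 × 10¹³ joules, on the order of the energy released by an early atomic bomb.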
The story of energy largely begins with Isaac Newton. His three laws of motion got the ball rolling, so to speak, but they did not deal with energy directly; instead, they dealt with force. Eventually, with the help of scientists like Lord Kelvin, physics began to focus on energy. The two most important forms of it are potential energy (stored energy) and kinetic energy (energy of motion). Most other forms of energy, including chemical and electric energy, are simply varying manifestations of potential and kinetic energy. Also, work and heat are not forms of energy themselves, but are simply methods of transferring it.
Murphy's Law states, "Anything that can go wrong, will go wrong." Entropy is sort of like Murphy's Law applied to the entire Universe.
Put simply, entropy is a measure of disorder, and the Second Law of Thermodynamics states that isolated systems tend toward maximum entropy. Reversing this ever-increasing tendency toward disorder requires the input of energy. That's why housekeeping is so tiresome. Left on its own, your house would get dusty, spiders would move in, and eventually, it would fall apart. However, the energy put into preventing disorder in one place simultaneously increases it somewhere else. Overall, the entropy of the Universe always increases.
Entropy also manifests in another way: there is no perfect transfer of energy. Your body (or a cell) cannot perfectly utilize food as an energy source because some of that energy is lost forever to the Universe. So, just like in finance, every transaction comes with a tax. (University of Washington microbiologist Franklin Harold liked to call it God's energy tax.)
The common adage that nothing in life is certain except death and taxes hence takes on a new meaning.
Air, water, bacteria, humans, computers, the stars: All of them are made from atoms. In fact, the atoms that make up Earth (and everything on it, including us), originally came from the stars, which is why Carl Sagan famously quipped, We are made of starstuff.
But what are atoms? Mostly empty space, actually. That means you are mostly empty space, as well. The center of each atom, called a nucleus, consists of positively-charged protons and uncharged neutrons. Surrounding this dense cluster of positivity are the negatively-charged electrons, which buzz about rather unpredictably. Originally, it was thought that the electrons orbited the nucleus in a way that resembles the planets around the sun, the so-called solar system model of the atom, for which Niels Bohr is given credit. The model is overly simplistic and incorrect, but it does well enough for certain calculations, which is why it is still taught in basic chemistry classes. The model was ultimately replaced with the more complex atomic orbital model.
All the known atoms are found on the periodic table, the centerpiece of every chemistry class. The table organizes the atoms in various ways, two of which are particularly important: First, the atoms are arranged by increasing atomic number, which represents the number of protons and defines each element. Second, each column on the table represents the number of outer shell electrons in each atom. This is important because the outer shell electrons largely determine the sorts of chemical reactions in which the atoms will participate.
Perhaps the most fascinating aspect of the periodic table is how it came about. The Russian chemist Dmitri Mendeleev created the first modern periodic table, but it was missing elements. Using his table, he correctly predicted the existence of elements that had not yet been discovered.
Symmetry, that somewhat vague concept that involves folding or twisting triangles, cubes, and other objects in various ways, has applications far beyond high school geometry class. As it turns out, the Universe is riddled with symmetry, or the lack thereof.
The most beautiful human faces are also the most symmetrical. Atoms in a crystal are arranged in a symmetrical, repeating pattern. Many other phenomena throughout nature exhibit breathtaking symmetry, from honeycombs to spiral galaxies.
Particle physics and astrophysics are also captivated by the concept of symmetry. One of the biggest asymmetries is the fact that our Universe is made of more matter than antimatter. If the Universe were perfectly symmetrical, there would be equal amounts of both. (But then the Universe probably wouldn't exist, since matter and antimatter annihilate each other.) However, as Atkins writes, the Universe is symmetrical if simultaneously we change particles for antiparticles, reflect the Universe in a mirror, and reverse the direction of time.
Does that explain why Miss Universe is always so pretty?
The classical physics of Isaac Newton and James Clerk Maxwell works reasonably well for most everyday applications. But classical physics is limited in the sense that it does not quite accurately depict reality.
The first inkling that something was seriously wrong came from analysis of blackbody radiation. Imagine a hot stove: It first starts out red, then turns white as it gets hotter. Classical physics was incapable of explaining this. Max Planck, however, had an idea: Perhaps the released energy came in little packets called quanta. Instead of energy taking on continuous values, it instead takes on only discrete values. (Think of the difference between a ramp and a staircase; a person standing on a ramp can take on any height, while a person standing on a staircase only has certain discrete heights from which to choose.) As it turns out, these quanta of light energy are today known as photons. Thus, it was demonstrated that light, which until that time generally had been thought of as a wave, could also act like discrete particles.
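For a sense of scale (a standard textbook calculation, not from the article): one quantum of blue light at about 460 nm carries E = hc/λ ≈ (6.63 × 10⁻³⁴ J·s)(3 × 10⁸ m/s) / (4.6 × 10⁻⁷ m) ≈ 4.3 × 10⁻¹⁹ joules, which is why the graininess of light goes unnoticed in everyday life.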
Then along came Louis de Broglie, who extended the concept: all particles can act like waves, and all waves can act like particles. Slam-dunk evidence for this idea came by way of the famous double-slit experiment, which conclusively showed that photons, electrons, and even molecules like buckyballs exhibit wave-particle duality. (A lab confirmed the results of this experiment yet again in May 2013.)
These two concepts, quantization and wave-particle duality, form the core of the discipline known as quantum mechanics. Two other core concepts include the uncertainty principle (that is, the inability to know various pairs of characteristics of a system with precision) and the wavefunction (which, when squared, gives the probability of finding a particle in a particular location). And what does all that give us? Schrödinger's cat, which is simultaneously dead and alive.
No wonder Stephen Hawking would always reach for his gun.
About 13.8 billion years ago, the Universe underwent a period of rapid expansion, known as cosmic inflation. Immediately after that was the Big Bang. (Yes, cosmic inflation occurred before the Big Bang.) Ever since then, the Universe has kept right on expanding.
We know the Big Bang occurred because of the telltale evidence it left behind: the cosmic microwave background (CMB) radiation. As the Universe expanded, the initial burst of light from the Big Bang got stretched. (Remember, light can be both a wave and a particle.) When light is stretched, the wavelength increases. Today, that light is no longer visible with the naked eye because it now inhabits the microwave range of the electromagnetic spectrum. However, you can still see it on old-school television sets with antennas; the static on in-between channels is partially due to the CMB.
But not only is the Universe expanding, its rate of expansion is accelerating due to dark energy. And the further away an object is from Earth, the faster it is accelerating away from us. If you thought the Universe was a lonely place now, just wait 100 billion years. Thanks to dark energy, we won't be able to see any stars beyond our own galaxy (which, at that time, will be a giant merger between the Milky Way and Andromeda galaxies and their smaller satellite galaxies).
The fabric of our universe is spacetime, which consists of the three spatial dimensions (length, width, and height) combined with the dimension of time. Imagine this fabric as a stretchy, rubber sheet. And then imagine placing a giant bowling ball on that sheet. The sheet would warp around the bowling ball, and any object placed near the bowling ball would roll toward it. This metaphor for Albert Einstein's theory of general relativity explains how gravity works. (Despite being Einstein's greatest achievement, general relativity is not what he won the Nobel Prize for; instead, the prize was awarded for his work on the photoelectric effect.)
But this wasn't Einstein's only contribution. He also came up with special relativity, which describes how time slows down for moving objects, especially as they travel closer to the speed of light.
Interestingly, the effects of both general and special relativity must be taken into account for GPS satellites to work properly. If these effects were not considered, then the clocks on Earth and on the satellites would be out of sync, and consequently, the distances reported by the GPS unit would be wildly inaccurate. So, every time you use your smartphone successfully to find the local Starbucks, give thanks to Albert Einstein.
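Here is a rough sketch of the sizes involved, using standard textbook values for the GPS orbit (the figures below are approximate and are not from the article):

```python
import math

c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6    # radius of Earth, m
r_gps = 2.656e7      # GPS orbital radius, m (about 20,200 km altitude)

v = math.sqrt(GM / r_gps)                        # orbital speed, roughly 3.9 km/s
special = -v**2 / (2 * c**2)                     # special relativity: moving clocks run slow
general = GM / c**2 * (1 / r_earth - 1 / r_gps)  # general relativity: higher clocks run fast

seconds_per_day = 86_400
drift_us = (special + general) * seconds_per_day * 1e6
print(f"net satellite clock drift: {drift_us:+.1f} microseconds per day")  # textbook value is about +38
```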
Fundamentally, mathematics makes no sense. That probably doesn't come as a surprise to those of us who struggled in algebra or calculus. Though it is the language of science, the truth is that mathematics is built upon a cracked foundation.
For instance, consider a number. You think you know one when you see one, but it's rather difficult to define. (In that sense, numbers are like obscenity or pornography.) Not that mathematicians haven't tried to define numbers. The field of set theory is largely dedicated to such an endeavor, but it isn't without controversy.
Or consider infinity. Georg Cantor did, and (it is speculated by some that) he went crazy in the process. Counterintuitively, there is such a thing as one infinity being larger than another infinity. The rational numbers (those that can be expressed as a fraction) constitute one infinity, but irrational numbers (those that cannot be expressed as a fraction) constitute a larger infinity. A special type of irrational number, called the transcendental number, is particularly to blame for this. The most famous transcendental is pi, which can neither be expressed as a fraction nor as the solution to an algebraic equation. The digits which make up pi (3.14159265...) go on and on infinitely in no particular pattern. Most numbers are transcendental, like pi. And that yields a very bizarre conclusion: the natural numbers (1, 2, 3...) are incredibly rare. It's amazing that we can do any math whatsoever.
At its core, mathematics is intimately tied to philosophy. The most hotly debated questions, such as the existence and qualities of infinity, seem far more philosophical in nature than scientific. And thanks to Kurt Gödel, we know that an infinite number of mathematical expressions are probably true, but unprovable.
Such difficulties explain why, from an epistemological viewpoint, mathematics is so disturbing: It places a finite boundary on human reason.
This article is adapted from a version originally published on RealClearScience.
The worst thought experiments imaginable – The Next Web
While the rest of us are doing good, honest work like podcasting and influencer-ing, there's a group of thinkers out there conducting horrific experiments. They're conjuring pedantic monsters, murdering innumerable cats, and putting humans inside of computers.
Sure, these thought experiments are all in their heads. But that's how it starts. First you don't know whether the cat's dead or alive, and then a demon opens the box and we're all in the Matrix.
Unfortunately, there are only two ways to fight science and philosophy:
Thus, we'll arm ourselves with the collective knowledge of those who've gone before us (ahem, Google Scholar) and critique so snarky it could tank a Netflix Original. And we'll decide once and for all whose big, bright ideas are the worst.
What if I told you there was a box that gave away a free lunch every time it was opened? Some of you are reading this and thinking is Neural suggesting we eat dead cats?
No. I'm talking about a different box from a different thought experiment. Erwin Schrödinger's cat actually came along some 68 years after James Clerk Maxwell's Demon.
In Maxwell's Demon, we have a box with a gate in the middle separating its contents (a bunch of particles) into two sides. Outside the box, there's what Maxwell calls a "finite being" (who other scientists later inexplicably decided was a demon) who acts as the gatekeeper.
So this demon being controls which particles go from one side of the box to the other. And, because particle behavior varies at different temperatures, this means the demon's able to exploit physics to harness energy from the universe's tendency towards entropy.
This particular thought experiment is awful. As in: it's awfully good at being awesome!
Maxwell's Demon has managed to stand the test of time and, a century-and-a-half later, it's at the heart of the quantum computing industry. It might be the best scientific thought experiment ever.
The worst is actually Szilard's Engine. But you have to go through Maxwell's Demon to get there. Because in Szilard's box, rather than Maxwell's Demon exploiting the tendencies of the universe, the universe exploits Maxwell's Demon.
Szilard's work imagines a single-molecule engine inside of the box that results in a system where entropy works differently than it does in Maxwell's experiment.
This difference in opinion over the efficacy of entropy caused a kerfuffle.
It all started when scientists came up with the second law of thermodynamics, which basically just says that if you drop an ice cube in a pot of boiling water, it won't make the water hotter.
Well, Maxwell's Demon essentially says "sure, but what if we're talking about really tiny things experiencing somewhat quantum interactions?" This made a lot of sense and has led to numerous breakthroughs in the field of quantum physics.
But then Szilard comes along and says, "Oh yeah, what if the system only had one molecule and, like, the demon was really bored?"
Those probably aren't their exact words. I'm, admittedly, guessing. The point is that Szilard's Engine was tough to swallow back when he wrote it in 1929, and it's only garnered more scrutiny since.
Don't just take my word for it. It's so awful that John D. Norton, a scientist from the department of history and philosophy of science at the University of Pittsburgh, once wrote an entire research paper describing it as the worst thought experiment.
In their criticism, Norton wrote:
In its capacity to engender mischief and confusion, Szilard's thought experiment is unmatched. It is the worst thought experiment I know in science. Let me count the ways it has misled us.
That's borderline hate-poetry and I love it. The only criticism I have to add is that it's preposterous Szilard didn't reimagine the whole thing as Szilard's Lizard.
The missed opportunity alone gets it our stamp for worst scientific thought experiment.
Honestly, I'd say René Descartes's cogito, ergo sum is the worst thought experiment of all time. But there's not much to discuss.
You ever meet someone who, if they started a sentence with "I think," you'd want to interrupt them to disagree? Imagine that, but at the multiverse level.
Accepting Descartes's premise requires two leaps of faith in just three words, and I'm not prepared to give anyone that much credit.
But, admittedly, that's low-hanging fruit. So let's throw another twist in this article and discuss my favorite paper of all time, because it's also the worst philosophical thought experiment ever.
Nick Bostrom's Simulation Argument lies at the intersection of lazy physics and brilliant philosophy. It's like the Han Solo of thought experiments: you love it because it's so simple, not in spite of it.
It goes like this: Uh, what if, like, we live inside a computer?
For the sake of fairness, this is how Bostrom puts it:
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.
Think about it for a second.
Done? Good. It doesn't go any deeper. It really is just, "what if all of this is just a dream?" But instead of a dream, we're digital entities in a computer simulation.
It's, uh, kinda dumb, right?
But that doesn't mean Bostrom's paper isn't important. I think it's the most influential thought experiment since Descartes's off-putting insistence upon his own existence (self-involved much, D?)
Bostrom's a master philosopher because he understands that the core of explanation lies not in burdening a reader with unessential thought, but in stripping it away. He understands perfection as Antoine de Saint-Exupéry did when he declared it was attained not when there is nothing more to add, but when there is nothing more to remove.
Bostrom whittled the Simulation Argument down with Occam's Razor until it became a paper capable of pre-empting your biggest "yeah, but what about..." queries before you could think them.
Still, though, you don't have to be the head of Oxford's philosophy department to wonder if life is but a dream.
There's no official name for this one, so we'll just call it "That time the people building the A-bomb had to spend a few hours wondering if they were about to set the atmosphere on fire before deciding the math looked good and everything was going to be fine."
A close runner-up for this prize is "That time the Nazis' most famous quantum physicist was asked if it was possible that Germany's weapons could blow up the Earth by setting all the oceans aflame and he was all like: lol, maybe."
If I can channel our pal John D. Norton from above: these thought experiments are the worst. Allow me to list the ways I hate them.
The Axis and Allies weren't far apart in their respective endeavors to create a weapon of mass destruction during World War II.
Of course we know how things played out: the Germans never got there and the US managed to avoid lighting the planet on fire when it dropped atomic bombs on the civilian populations of Hiroshima and Nagasaki.
In reality, Albert Einstein and company on the Allies' side and Werner Heisenberg and his crew on the Axis side were never seriously concerned with setting off a globally catastrophic chain reaction by detonating an atomic bomb. Both sides had done the math and determined it wasn't really a problem.
Unfortunately, the reason we're aware of this is because both sides were also keen to talk to outsiders. Heisenberg famously joked about it to a German politician. And Arthur Compton, who'd worked with Einstein and others on the Manhattan Project, gave a now-infamous interview wherein he made it seem like the possibility of such a tragic event was far greater than it actually was.
This is our selection for the absolute worst thought experiment(s) of all time because it's clear that both the Axis and the Allies were pretty far along in the process of actually building atomic bombs before anyone stopped and thought, "hey guys, are we going to blow up the planet if we do this?"
That's Day One stuff right there. That's a question you should have to answer during orientation. You don't start building a literal atom bomb and then hold an all-hands meeting to dig into the whole "killing all life" thing.
Those are all great examples of terrible thought experiments. For scientists and philosophers, anyway. But everyone knows the worst ideas come from journalists.
I think I can come up with a terrible thought experiment that'll trump each of the above. All I have to do is reverse-engineer someone else's work and restate it with added nonsense (hey, it worked for Szilard, right?).
So let's do this. The most important part of any thought experiment is its title. We need to combine the name of an important scientist with a science-y creature if we want to be taken seriously, like Maxwell and his Demon or Schrödinger and his Cat.
And, while substance isn't really what we're going for here, we still need a real problem that remains unsolved, can be addressed with a vapid premise, and is accessible to intellects of any level.
Thus, without further ado, I present: Ogre's Ogre, a thought experiment that uses all the best ideas from the dumb ones mentioned above but contains none of their weaknesses (such as math and the scientific method).
Unlike those theories, Ogre's Ogre doesn't require you to understand or know anything. It's just quietly cajoling you into a natural state of curiosity.
In short, Ogre's Ogre isn't some overeager overachiever like those others. Where Maxwell's Demon demonizes particles by maximizing the tendency toward entropy, and Szilard's Engine engages in entropy in only isolated incidents, Ogre's Ogre egregiously accepts all eventualities.
It goes like this: What if C-A-T really spelled dog?
What’s the maximum number of planets that could orbit the sun? – Verve Times
An artist's impression of the planets in the solar system, not to scale. (Image credit: Shutterstock)
The solar system contains eight planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune, all of which circle the sun due to its intense gravitational pull. But is this the maximum number of planets that can orbit the sun? Or is there room for more?
Compared with other known planetary systems, the solar system contains an unusually high number of planets. In total, there are 812 known planetary systems with three or more confirmed planets, and only one other known system, Kepler-90, that contains as many planets as the solar system, according to The Extrasolar Planets Encyclopaedia.
There is a good chance that a lot of these systems have small inner planets that we cannot detect, so it is unlikely that the solar system is actually the most populated planetary system in our cosmic neighborhood. But it highlights that eight planets may be near the upper limit of how large a planetary system can naturally grow.
Therefore, to work out the absolute maximum capacity of planets orbiting the sun, we need to move into the realm of the theoretical, ignoring some of the natural factors that may limit how many planets can form. One of the best ways to do that is to design, or engineer, a brand-new solar system from scratch.
"When you're talking about how many planets could be in a planetary system, there are lots of different aspects you need to consider," Sean Raymond, an astronomer at the Bordeaux Astrophysics Laboratory in France who specializes in planetary systems, told Live Science.
The structure of a planetary system is the result of a number of complex factors, Raymond said, including the size of the star, the size of the planets, the type of planets (for instance, rocky planets or gas giants), the number of moons orbiting each planet, the location of large asteroids and comets (such as those in the asteroid belt between Jupiter and Mars and in the Kuiper Belt beyond Neptune), the direction of the planets' orbits and the amount of material left over from the sun's formation to create the planets. It also takes hundreds of millions of years of intense collisions and gravitational tugs-of-war between planets for a system to settle into a stable configuration.
However, if we were a super-advanced civilization with technology and resources that far exceeded our current capabilities, it might be possible to get around a lot of these limitations and design a solar system packed with the maximum number of planets, Raymond said.
In this theoretical engineered solar system, we could assume that there is no limit to the materials available to create planets and that they could be produced artificially and positioned at will. It would also be possible to remove moons, asteroids, comets and other obstructions that might complicate things. The only limitations would be that the gravity that the planets and the sun exert would be the same as it normally would be and that the planets would have to orbit the sun in a stable configuration without interfering with each other.
A planet is defined as a celestial body that (a) is in orbit around the sun, (b) has sufficient mass to achieve hydrostatic equilibrium (making it round in shape) and (c) has cleared the neighborhood around its orbit of debris, the latter being the reason why Pluto is not considered a true planet, according to the International Astronomical Union.
In an engineered solar system, the maximum number of planets is limited by the number of planetary orbits you can fit around the sun before they start to become unstable.
When a planetary system becomes unstable, the orbits of planets start to cross each other, which means they might collide with each other or just gravitationally scatter, where planets slingshot around other planets and get catapulted out of the system, Raymond said.
The minimum safe distance between the orbits of different planets in a stable system is dependent on each planet's size or, more accurately, its Hill radius. A planet's Hill radius is the distance between the planet and the edge of its sphere of influence, within which objects with a smaller mass will be affected by its gravity, such as the moon orbiting Earth.
More massive planets exert a stronger gravitational force, which means they have a greater Hill radius. That is why the distance between the orbits of Earth and Mars, which is around 48.65 million miles (78.3 million kilometers), is around seven times smaller than the distance between the orbits of Mars and Jupiter, which is around 342.19 million miles (550.7 million km), according to NASA.
For this reason, the number of orbits that could fit inside the solar system depends predominantly on the size of the planets, Raymond said. For example, Jupiter is around 300 times more massive than Earth, which means that its Hill radius is around 10 times larger, Raymond said. This means that 10 separate Earth orbits could fit into the same space taken up by Jupiter's current orbit.
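For the curious, here is a minimal sketch of that scaling using the standard Hill-radius formula r_H = a(m/3M)^(1/3); the masses and distances below are rounded, illustrative values rather than numbers from the article:

```python
def hill_radius(a_au: float, planet_mass: float, star_mass: float = 1.0) -> float:
    """Hill radius in AU for a planet at a_au, with masses in solar units."""
    return a_au * (planet_mass / (3 * star_mass)) ** (1 / 3)

M_EARTH = 3.0e-6      # Earth's mass in solar masses (approximate)
M_JUPITER = 9.5e-4    # Jupiter's mass in solar masses (approximate)

print(f"Earth   (1.0 AU): r_H = {hill_radius(1.0, M_EARTH):.3f} AU")    # about 0.01 AU
print(f"Jupiter (5.2 AU): r_H = {hill_radius(5.2, M_JUPITER):.3f} AU")  # about 0.35 AU

# at the same orbital distance, the Hill radius scales with the cube root of mass,
# so a Jupiter-mass planet needs a several-times-wider lane than an Earth-mass one
print(f"mass-only factor: {(M_JUPITER / M_EARTH) ** (1 / 3):.1f}x")
```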
Therefore, to maximize the number of planets in a system, you have to make the planets as small as possible.
The size of the planets is the key to maximizing the number of orbits that could fit into an engineered system. However, there is another clever trick we could exploit to add in a few extra orbits regardless of the planets size: change the direction in which they move around the sun.
In the current solar system, each planet orbits in the same direction around the sun. This is because the planets formed from a large cloud of dust rotating in the same direction around the sun. However, in our engineered solar system, it would be possible to have planets that orbit the sun in the opposite direction, known as retrograde orbits, Raymond said. However, this idea is somewhat fanciful; retrograde orbits likely do not exist in nature due to the nature of how planets form.
That said, if two planets were to orbit the sun in opposite directions, the gravitational interactions between them would be slightly weakened and the minimum safe distance between their orbits could be reduced.
If two planets in different orbits are going in the same direction, then they have a longer time to encounter each other as they pass, which creates a larger gravitational kick, Raymond said. However, if they are going in the opposite direction, they zoom past each other and interact for a shorter amount of time, which means they can be closer together without colliding or scattering.
Therefore, if we made every other orbit in our engineered system a retrograde orbit, like a carousel where adjacent people are moving in opposite directions, we could minimize the space needed between each orbit and, in doing so, squeeze in extra planets.
Until this point, we have assumed that each orbit in our engineered solar system contains just one planet. However, it is actually possible to have multiple planets that share an orbit, Raymond said. And we can see an example of this in our current solar system.
Jupiter has two clusters of asteroids, known as the Greeks and the Trojans, that share its orbit. These clusters are located around 60 degrees in front of and behind the gas giant as it orbits the sun, Raymond said. Astronomers think it is possible for planets to share orbits in a similar way; they've dubbed these theoretical worlds Trojan planets.
People are actively searching for examples of these Trojan planets among exoplanet systems because they're expected to form naturally, Raymond said. However, none have been observed yet, he added.
If we want to maximize the number of planets in our engineered solar system, we will want to have as many of these Trojan planets as possible. However, just like the orbits around the sun, the planets sharing a single orbit must be spaced far enough apart to remain stable.
In a study published in 2010 in the journal Celestial Mechanics and Dynamical Astronomy, a pair of astronomers used Hill radii to work out how many planets could share an orbit. They found that it would be possible to have as many as 42 Earth-size planets share a single orbit. Moreover, just like with the number of orbits in a system, the smaller the planets, the more you could fit into the same orbit, Raymond said.
Of course, the chances of this many planets naturally sharing a single orbit are practically zero, because each planet would need to be exactly the same size and have formed at the same time to be stable, Raymond said. But in an engineered solar system, this level of co-orbital structure would be possible and would greatly increase the number of planets we could squeeze in.
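As a rough, back-of-the-envelope illustration of that co-orbital argument (not the actual method of the 2010 study), one can ask how many planets fit around a single circular orbit if neighbours must stay a fixed number of Hill radii apart; the spacing factor k_spacing below is an assumed, illustrative value.

```python
import math

# Back-of-the-envelope count of planets sharing one circular orbit, assuming
# neighbours must stay k_spacing Hill radii apart. k_spacing is an illustrative
# assumption, not the stability criterion used in the 2010 study.

def max_coorbital_planets(a_au, planet_mass_kg, star_mass_kg=1.989e30, k_spacing=15):
    hill_au = a_au * (planet_mass_kg / (3 * star_mass_kg)) ** (1 / 3)
    circumference_au = 2 * math.pi * a_au
    return int(circumference_au // (k_spacing * hill_au))

print(max_coorbital_planets(1.0, 5.97e24))  # ~40 Earth-mass planets at 1 AU
print(max_coorbital_planets(1.0, 6.4e23))   # considerably more Mars-mass planets fit
```

The trend matches the article's point: lighter planets have smaller Hill radii, so more of them can share the same ring.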
Now that we understand the key variables we need to engineer a planet-packed solar system, it's finally time to crunch the numbers and see how many planets we can fit inside it.
Luckily, Raymond has already done this for us using computer simulations he created; they can be viewed in more detail on his blog, PlanetPlanet. However, it is important to note that although these calculations are based on theories astronomers use to create legitimate simulations, these models are not peer-reviewed and should be regarded with a pinch of playful skepticism.
To maximize the number of planets, Raymond's engineered system extends to 1,000 astronomical units (AU) from the sun. (One AU is the average distance from the sun to Earth's orbit, which is about 93 million miles, or 150 million km.) Currently, the defined edge of the solar system, also known as the heliosphere, is around 100 AU from the sun, according to the European Space Agency, but the sun's gravitational influence can extend much farther. What's more, Raymond's model uses equally sized planets with alternating retrograde orbits.
Taking all of this into account, if you used Earth-size planets, you could fit in 57 orbits, each containing 42 planets, which gives a total of 2,394 planets. However, if you used smaller planets that are one-tenth the size of Earth (roughly the same mass as Mars), you could fit in 121 orbits, each containing 89 planets, which gives a total of 10,769 planets. And if the planets were around the size of the moon (one-hundredth the mass of Earth), you could have 341 orbits, each containing 193 planets, which gives a total of 65,813 planets.
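Those totals are simply the number of orbits multiplied by the number of planets per orbit; a trivial check of the arithmetic (the labels are mine) looks like this:

```python
# Arithmetic check of the planet counts quoted above: orbits x planets per orbit.
configurations = {
    "Earth-size": (57, 42),
    "one-tenth Earth (roughly Mars mass)": (121, 89),
    "Moon-size (one-hundredth Earth)": (341, 193),
}
for label, (orbits, per_orbit) in configurations.items():
    print(f"{label}: {orbits} x {per_orbit} = {orbits * per_orbit}")
# Prints 2394, 10769 and 65813, matching the figures in the text.
```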
Obviously, these numbers are extreme, and the ability to engineer such complicated systems is far beyond humanity's reach. But this fun thought experiment does highlight that there is much more space for planets in the solar system than the meager eight we see today. However, it is very unlikely that any more could have formed naturally.
Breaking the noise barrier: The startups developing quantum computers – ComputerWeekly.com
Today is the era of noisy intermediate scale quantum (Nisq) computers. These can solve difficult problems, but they are said to be noisy, which means many physical qubits are required for every logical qubit that can be applied to problem-solving. This makes it hard for the industry to demonstrate a truly practical advantage that quantum computers have over classical high-performance computing (HPC) architectures.
Algorithmiq recently received $4m in seed funding to enable it to deliver what it claims are truly noise-resilient quantum algorithms. The company is targeting one specific application area, drug discovery, and hopes to work with major pharmaceutical firms to develop molecular simulations that are accurate at the quantum level.
Algorithmiq says it has a unique strategy of using standard computers to "un-noise" quantum computers. The algorithms it is developing offer researchers the ability to boost the speed of chemical simulations on quantum computers by a factor of 100 compared with current industry benchmarks.
Sabrina Maniscalco, co-founder and CEO at Algorithmiq and a professor of quantum information, computing and logic at the University of Helsinki, has been studying noise in quantum computers for 20 years. "My main field of research is about extracting noise," she said. "Quantum information is very fragile."
In Maniscalco's experience, full fault tolerance requires technological advances in manufacturing and may even require fundamental principles to be discovered, because the science does not exist yet. But she said: "We can work with noisy devices. There is a lot we can do, but you have to get your hands dirty."
Algorithmiq's approach is about making a mindset shift. Rather than waiting for the emergence of universal fault-tolerant quantum computing, Maniscalco said: "We look for what types of algorithms we can develop with noisy [quantum] devices."
To work with noisy devices, algorithms need to take account of quantum physics in order to model and understand what is going on in the quantum computer system.
The target application area for Algorithmiq is drug discovery. Quantum computing offers researchers the possibility to simulate molecules accurately at the quantum level, something that is not possible in classical computing, as each qubit can map onto an electron.
According to a quantum computing background paper by Microsoft, if an electron had 40 possible states, modeling every state would require 2^40 configurations, as each position can either have or not have an electron. To store the quantum state of the electrons in a conventional computer memory would require more than 130GB of memory. As the number of states increases, the memory required grows exponentially.
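The quoted figure is consistent with storing roughly one classical bit per configuration, which is an assumption on my part rather than something spelled out in the Microsoft paper; the arithmetic looks like this:

```python
# 2**40 configurations at (assumed) one bit each works out to roughly 137 GB,
# consistent with the "more than 130GB" figure quoted above. Storing a full
# complex amplitude per configuration would take vastly more memory.
configurations = 2 ** 40               # about 1.1 trillion configurations
bytes_needed = configurations / 8      # one bit per configuration (assumption)
print(f"{configurations:,} configurations -> about {bytes_needed / 1e9:.0f} GB")
```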
This is one of the limitations of using a classical computing architecture for quantum chemistry simulations. According to Scientific American, quantum computers are now at the point where they can begin to model the energetics and properties of small molecules, such as lithium hydride.
In November 2021, a consortium led by Universal Quantum, a University of Sussex spin-out company, was awarded a £7.5m grant from Innovate UK's Industrial Strategy Challenge Fund to build a scalable quantum computer. Its goal is to achieve a million-qubit system.
Many of today's quantum computing systems rely on supercooling to just a few degrees above absolute zero to achieve superconducting qubits. Cooling components to just above absolute zero is required because the superconducting qubits are encoded in a circuit, and the circuit only exhibits quantum effects when supercooled; otherwise, it behaves like a normal electrical circuit.
Significantly, Universal Quantum's technology, based on the principle of a trapped-ion quantum computer, can operate at much more normal temperatures. Explaining why its technology does not require supercooling, co-founder and chief scientist Winfried Hensinger said: "It's the nature of the hardware platform. The qubit is the atom that exhibits quantum effects. The ions levitate above the surface of the chip, so there is no requirement on cooling the chip in order to make a better qubit."
Just as a microprocessor may run at 150W and operate at room temperature, the quantum computer that Universal Quantum is building should not require anything more than is needed in an existing server room for cooling.
The design is also more resilient to noise, which introduces errors in quantum computing. Hensinger added: "In a superconducting qubit, the circuit is on the chip, so it is much harder to isolate from the environment and so is prone to much more noise. The ion is naturally much better isolated from the environment as it just levitates above a chip."
The key reason why Hensinger and the Universal Quantum team believe they are better placed to further the scalability of quantum computers is down to the cooling power of a fridge. According to Hensinger, the cooling needed for superconducting qubits is very difficult to scale to large numbers of qubits.
Another startup, Quantum Motion, a spin-out from University College London (UCL), is looking at a way to achieve quantum computing that can be industrialised. The company is leading a three-year project, Altnaharra, funded by UK Research and Innovation's National Quantum Technologies Programme (NQTP), which combines expertise in qubits based on superconducting circuits, trapped ions and silicon spins.
The company says it is developing fault-tolerant quantum computing architectures. John Morton, co-founder of Quantum Motion and professor of nanoelectronics at UCL, said: "To build a universal quantum computer, you need to scale to millions of qubits."
But because companies like IBM are currently running only 127-qubit systems, the idea of universal quantum computing comprising millions of physical qubits, built using existing processes, is seen by some as a pipe dream. Instead, said Morton: "We are looking at how to take a silicon chip and make it exhibit quantum properties."
Last April, Quantum Motion and researchers at UCL were able to isolate and measure the quantum state of a single electron (the qubit) in a silicon transistor manufactured using a CMOS (complementary metal-oxide-semiconductor) technology similar to that used to make chips in computer processors.
Rather than being at a high-tech campus or university, the company has just opened its new laboratory off London's Caledonian Road, surrounded by a housing estate, a community park and a gym. But in this lab, it is able to lower the temperature of components to a shade above absolute zero.
James Palles-Dimmock, chief operating officer at Quantum Motion, said: "We're working with technology that is colder than deep space and pushing the boundaries of our knowledge to turn quantum theory into reality. Our approach is to take the building blocks of computing, the silicon chip, and demonstrate that it is the most stable, reliable and scalable way of mass manufacturing quantum silicon chips."
The discussion Computer Weekly had with these startups shows just how much effort is going into giving quantum computing a clear advantage over HPC. What is clear from these conversations is that these companies are all very different. Unlike classical computing, which settled on the stored-program architecture described by mathematician John von Neumann in the 1940s, quantum computing is unlikely to converge on one de facto standard architecture.
AJM Book Talk: What Is Real? by Adam Becker – Lone Star Ball
I recently finished What is Real? by Adam Becker, and it is a book that I've not stopped thinking about after reading it. It deals with quantum physics and the nature of reality, a topic I'm fascinated with, but which I realize many people don't find interesting, so I get it if you close the browser or otherwise move on to something else after reading this sentence. But I have a tendency to want to write about things I've been thinking about a lot, and I have this here blog, and there's not much baseball going on, so...
What is Real? came out in 2018, when Becker was 34, and was written as a result of him getting a Sloan Foundation grant a couple of years earlier to research and write a book on the history of the foundations of quantum physics, with a particular emphasis on the continued dominance of the troubled Copenhagen Interpretation. The fact that the grant summary calls it the "troubled" Copenhagen Interpretation provides a pretty clear hint at the direction Becker is coming from, something I wasn't aware of when I was reading the book, but which became clear pretty quickly.
Becker has an interesting academic background: he got his B.A. from Cornell in Physics and in Philosophy in 2006, got a master's a year later from the University of Michigan in Physics, and then got a Ph.D. from UofM in Computational Cosmology in 2012. He's done a lot of writing and teaching, and is currently at the Lawrence Berkeley National Laboratory in a position he describes on his website as "Science Writer and Communications Specialist"; he describes himself as "Author and Astrophysicist."
Before I really dive into this, I want to talk a little bit about my dad (and if you're just interested in Becker's book and quantum physics, you can skip the next several paragraphs and resume reading after the * * *). As many of you know, he's a veterinarian who has a clinic and, as of several years ago, a no-kill animal shelter in southwest Fort Worth. He's had his own veterinary clinic for almost 50 years.
He was also the first person in the family to go to college. He grew up in a working class family, as that term was understood in the early and middle parts of the 20th century, the oldest of three children. His father, my grandfather, worked for the railroad, was in the union, and had the type of nice, steady job that was much more commonplace back then than it is now, the type of job that is romanticized in some quarters and decried as wage slavery in others. My grandfather worked hard at a job that, as best as I can tell, he didn't particularly care for, but which he had to do in order to provide for his wife and children.
One of the things that my dad has always talked about, for as far back as I can remember, was that when he was a kid, when he was growing up, he looked at the life his father had, the life others in the neighborhood had, and said to himself, "I'm not going to live like this. I'm not going to work every day for someone else doing a job that I don't really want to do in order to just scrape by." He was adamant he was not going to live his life that way.
Unlike his eldest son, my father has always had tremendous drive and a great work ethic. He started working at an early age and always had a job from his adolescence, oftentimes more than one. He got up early in the morning throughout his teenage years to throw papers. One of his jobs was working behind the counter at Swenson's Ice Cream; for whatever reason, that's one of the ones that really sticks in my mind.
And while he was doing this, he always got very good grades in school. Again, unlike his eldest son, he had the willpower and drive to pay attention, get his schoolwork done, and put in the effort and the work to get good grades. He did well enough in high school to get into Texas A&M University, which he (and my mom, when they got married after his freshman year) worked to put himself through, both undergrad and vet school.
All through school, elementary, middle and high school, then undergrad and graduate school, he says he only made one C. And that was in Physics. My niece, who is currently at A&M seeking to follow in his footsteps and go to vet school (she's currently an undergrad), was talking to us about that part of the family lore over Christmas, and how that's also the thing that she has the most problems with. Biology, both of them really enjoyed and understood and were good at. Chemistry was, for both of them, something they could handle, but it wasn't a strong suit. Physics, though, was her biggest hurdle, just like with her grandfather.
I thought that was interesting because for me it is the exact opposite. I have never understood biology. I have never been good at it. I struggled with it in school, and still have a hard time grokking it. On the other hand, physics: THAT I get. I can process it. I was always good at it in school, both high school and college. It's something I'm fascinated by, and enjoy reading about.
As I was explaining that in our discussion, my dad turned to my niece and said, "Well of course he likes physics. He loves all that math. I don't want to deal with all that math. I want to deal with what's alive, with what's REAL."
* * *
You can draw a line, if you want, between biology and physics, with biology dealing with the real, physics with the abstract, and chemistry straddling the line, having a connection to both. During the Enlightenment, there wasn't so much of this schism; there were simply Natural Philosophers, exemplified by the Royal Society, who sought to understand nature, in whatever form. But as our understanding of nature, of the world, of the universe surrounding us increased, the breadth and depth of knowledge made it harder to keep up (Albert Einstein famously said that Johann Goethe, who died almost 200 years ago, was the last man in the world to know everything), and resulted in ever greater degrees of separation and specialization of the sciences.
But part of what I find so fascinating about physics is that, contrary to what my dad said, it is about what's real. Yes, I do like and grok math more than...whatever it is you need to like and grok to get biology, but I hit a wall at Calculus II, and am not going to pretend I have any sort of detailed knowledge or understanding of the level of math necessary to do high-level physics. But on a fundamental level, what makes it so gripping is that it is about understanding, or trying to understand, how Everything with a capital E works.
And with quantum physics, part of what makes it so fascinating is how bizarre and non-intuitive reality is, or at least how we perceive reality. Becker, examining the past century of work in quantum physics, ends up delving into the meta-issue of what it means to understand how Everything works.
As is inherent in any examination of the history of quantum physics, Niels Bohr is a major figure in the book, and while Becker says (in response to a negative review of What Is Real?) that he doesn't think he "paint[s] Bohr as a villain," he does come across to me as, at least, the antagonist. The breakthroughs Bohr and Werner Heisenberg made in the 1920s, taking the findings and theories that Max Planck, Albert Einstein and others produced in the first quarter of the 20th century and using them to build mathematical models that reflected the observational data, led to what is known as the Copenhagen Interpretation.* It is the Copenhagen Interpretation that Becker spends much of the book assailing, and as Bohr is the godfather of the Copenhagen Interpretation, it ends up coming across as challenging or, less charitably, attacking Bohr.
* The term Copenhagen Interpretation was not used at the time; it appears to have been coined at some point in the 1950s, but its use became widespread. I'm using it to refer to the collection of principles that eventually came to be known as the Copenhagen Interpretation, even when referring to events that occurred before the actual term was invented.
As Becker illustrates in his book, it would be hard to overstate the influence Bohr had over the development of quantum theory as it emerged from its early embryonic stages. In 1913, Bohr used Planck's quantum theory and Ernest Rutherford's model of the atom as a springboard to create what is now known as the Bohr model of the atom. This is the model we commonly use even today to visualize an atom: a nucleus surrounded by concentric orbital shells which contain electrons, and which electrons can jump between (the quantum jump). The model was groundbreaking, and was one of the major contributions that led to Bohr receiving the Nobel Prize in 1922.
By 1925, as a Nobel Laureate and one of the leading lights in the still-nascent field of quantum theory, Bohr headed up the Institute for Theoretical Physics at the University of Copenhagen. That year, one of his students, Werner Heisenberg, published the paper that ultimately led to Heisenberg promulgating the revolutionary matrix mechanics formulation of quantum mechanics. Two years later, while working under Bohr, Heisenberg developed his Uncertainty Principle*. Paul Dirac and Erwin Schrödinger spent time at the Institute during this period as well, and the Institute quickly became Ground Zero for the development of quantum theory and quantum mechanics.
* The Uncertainty Principle provides that there is an inherent mathematical limit to how much we can know about both the position and the momentum of a particle: the more certainty we have about one, the less we have about the other. This is often interpreted as meaning that the act of measuring position or momentum changes the state of the particle (the Observer Effect), and that does happen, since the interaction of the photon used to measure results in a change in the particle's position and/or momentum, and this was the original basis of Heisenberg's theory. However, the Uncertainty Principle exists independent of any external measurement; for example, atoms cooled to extremely low temperatures have very little uncertainty in momentum, resulting in their "smearing," their wavefunctions overlapping with other nearby atoms, and their occupying the same states, resulting in a Bose-Einstein Condensate.
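For reference, the usual modern formulation of that limit (added here for illustration; it is not quoted from Becker's book) is written in LaTeX notation as

\Delta x \, \Delta p \ge \hbar / 2

where \Delta x and \Delta p are the uncertainties (standard deviations) in position and momentum and \hbar is the reduced Planck constant: squeeze one down and the other is forced up.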
As work developed in these fields, Becker appears to acknowledge, Bohr was less a theoretical physicist and more a guiding supervisor and philosopher. This is complicated by the fact that Bohr was one of those scientists who, as Becker notes repeatedly, struggled mightily to put his thoughts into words, resulting in his writing being famously opaque, his views vague and sometimes contradictory. The combination of Bohr being revered by the generation of quantum physicists working in the pre-WWII era as almost a gatekeeper, someone whose blessing was needed for one's work to have merit, and his being unable to express his views clearly resulted in Bohr seemingly transforming into an oracle over time.
Whatever weaknesses Bohr had in elucidating his own fundamental opinions, he was a successful advocate of the theories he and his students advanced. Albert Einstein, long painted in the second half of his career as a curmudgeon who wrongheadedly refused to accept quantum mechanics, challenged the inherently probabilistic nature of the universe described by the Copenhagen Interpretation ("God doesn't play dice")*, and challenged the Copenhagen Interpretation over the next decade, particularly its violation of causality and its reliance on the observer effect in wave function collapse.
* The actual quote, translated into English, is "I, at any rate, am convinced that [God] does not throw dice," from a letter written by Einstein to Max Born in 1926; the popular paraphrase is an unfair oversimplification of the subtle and complicated views Einstein held on the subject of quantum mechanics.
Einstein argued that, at a minimum, the Copenhagen Interpretation was incomplete, and argued for the existence of hidden variables or other phenomena that were driving the results the Copenhagen Interpretation was finding. Einstein's problem, however, was that, whatever the reason for it, the math behind the Copenhagen Interpretation worked, and Einstein couldn't give a definitive contrary explanation as to why. When John von Neumann produced a proof that (supposedly)* conclusively established that there could be no hidden variables, that appeared to put an end to the debate. Bohr won, Einstein lost, and the Copenhagen Interpretation was the Law of the Land.
* As it turns out, that proof was flawed, and a central theme of the second half of the book is an individual challenging the von Neumann proof, having found the flaw, and being ignored because, well, he's John von Neumann and you're just some rando. It took decades, but the flawed nature of the proof appears to have now been accepted.
Becker's deep dive into the historical underpinnings of both the Copenhagen Interpretation (which is, in essence, an umbrella term for various underlying views and principles generally embraced by those who studied under Bohr or his students, and who built upon the work derived therefrom) and its almost universal adoption in the twentieth century lays the groundwork for the second half of the book, in which he follows the efforts of a few isolated individuals who sought to challenge what had become quantum mechanical orthodoxy.
Becker emphasizes what can best be described as a philosophical split between mainstream Copenhagen Interpretation adherents and those who question it. That philosophical split is over the importance, the relevance even, of understanding why quantum mechanics works. While Bohr himself was not necessarily completely dismissive of the why, he never really elucidated a comprehensive and coherent explanation as to the whys, and over time, adherents of the Copenhagen Interpretation mostly, according to Becker, quit worrying or caring about the whys of it. Instead, the mindset that Becker lays out is one of "Shut up and calculate!" (a phrase coined by N. David Mermin, though often attributed to Richard Feynman): the math works, the theories accurately predict the results, and that's all that matters.
The most significant example of this problem is the collapse of the wave function, which is exemplified in the Two-Slit Problem*, a thought experiment that has since been experimentally verified, which showed (among other things) that, under the rules of quantum mechanics, whether a single photon of light acts as a wave or a particle depends on whether it is being observed.
* I remember the first time I read about this, in a book called Schrödinger's Kittens and the Search for Reality, by John Gribbin. It blew my mind. I had to re-read it a couple of times to make sure I understood it. Niels Bohr famously said, "A person who wasn't outraged on first hearing about quantum theory didn't understand what had been said," and that describes my reaction in reading Gribbin's book for the first time.
While much of quantum physics is inherently unintuitive, the idea that a particle behaves differently depending on whether or not it is observed goes well beyond unintuitive, and even beyond anthropocentrism into the realm of solipsism. Bohr espoused the theory of complementarity, the idea that objects have complementary properties that cannot be measured simultaneously, as a way of getting around the subjective element of the observer effect.
This line of thought seems to hold that what an object does when it is not being measured is unknowable, and thus irrelevant. This leads to the philosophical split mentioned above, whereby the Copenhagen Interpretation gets characterized as not describing reality, but simply providing a mathematical framework that describes what is happening at the quantum level. Under this mindset, whether or not a photon is in reality a wave, a particle, sometimes one and sometimes the other, both, or something else entirely is irrelevant; since the math works, there's no reason to wonder whether or not it reflects a literal depiction of what is happening at that level.
Becker opines that disregarding what is real over the second half of the twentieth century, and the embrace of the Copenhagen Interpretation without regard to what the results and data say about the underlying nature of reality, is due in no small part to the Cold War and the extent to which university funding was tied to government grants and the ability to produce practical results. Becker spends a fair amount of time looking at the politics of the Western World, particularly the United States, in the post-war era, and how it impacted not just fields of study, but individuals. Becker writes at length about the travails of David Bohm, an iconoclastic physicist who challenged the Copenhagen Interpretation, but who also was hamstrung in his ability to teach or work in the United States due to his being a Communist. Bohm's quixotic life and career, and alternative interpretations, get quite a bit of coverage in What Is Real?
Becker tracks the efforts of the occasional renegade, leading to John Stewart Bell, who both identified the flaw in von Neumann's proof and established that the local hidden variable theory set forth in the Einstein-Podolsky-Rosen Paradox (part of the challenge to the Copenhagen Interpretation, and put out in 1935) was incompatible with the predictions of quantum theory. The EPR Paradox noted that under quantum mechanics, there would exist the phenomenon of quantum entanglement: particles that are entangled in such a way that, when separated, measuring the spin state of one of them would immediately result in the other particle having the opposite spin state. This phenomenon, derided by Einstein as "spooky action at a distance," violated locality (and thus a fundamental part of causality). Einstein hypothesized that local hidden variables would have to be involved, thus preserving the notion of locality and establishing that the Copenhagen Interpretation was incomplete.
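As a concrete illustration of the entangled pair described above (a standard textbook example, not anything specific to Becker's account), the state usually used in EPR discussions is the spin singlet, written in LaTeX notation as

|\psi\rangle = \frac{1}{\sqrt{2}} \left( |\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B \right)

so that measuring particle A's spin as up immediately implies down for particle B, and vice versa, however far apart the two particles have traveled.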
Bell ends up being the hero of What is Real?, as Becker follows how Bell's Theorem has led to the questioning of the Copenhagen Interpretation orthodoxy and the promulgation of alternative theories, notably the many-worlds theory, which Becker describes as having gained significant support. Under the many-worlds theory, rather than a collapse of the wave function triggered by an observation resulting in a particle or object assuming one of the possible forms it could take, with a likelihood based on the probabilistic nature of the wave function, every possible outcome available under the probabilistic model occurs, with each outcome resulting in the branching off of a new universe.
While Becker says he himself doesn't advocate for the many-worlds interpretation in What is Real?, it's hard to come away from Becker's book not feeling like that is his view. And there's appeal to the many-worlds interpretation, not least of which that, it seems to me at least, it provides some actual explanation for what is occurring on the quantum level. Conversely, the Copenhagen Interpretation, as set forth by Becker, requires one to either assume everything is probabilistic and nothing is real until the act of observing (rife with philosophical questions) or choose to just "shut up and calculate!", and if one does choose to think about the whys, to accept the equivalent of Xena's "a wizard did it" explanation.
One of the problems that one has in grappling with this issue (particularly in the framework of the Copenhagen Interpretation/Not Copenhagen Interpretation duality Becker uses in his book) is that, as Becker acknowledges, there is not one specific Copenhagen Interpretation. It is not a specific set of laws and rules so much as it is a collection of guiding principles which have been used by mainstream physicists for most of the last century or so. Becker notes that such a loosey-goosey definition provides a degree of flexibility to its adherents, as two followers of the Copenhagen Interpretation may have diametrically opposed views as to a certain element of quantum theory, and yet still rightfully claim to be part of the Copenhagen Interpretation orthodoxy. That can make critiquing the Copenhagen Interpretation like nailing Jell-O to the wall, as a particular criticism can be deflected by saying, well, that doesn't mean the Copenhagen Interpretation is wrong, because here's someone who is an adherent who agrees with that criticism, and really, it's a theory that is bigger than just that one particular issue. Conversely, though, that big umbrella means that there are many more things that Becker can identify as being part of the Copenhagen Interpretation and attack, thereby eroding the credibility of the theory as a whole.
That being said, it seems like there's something apropos about the Copenhagen Interpretation, which has as one of its core tenets the fundamental indeterminability of things at the quantum level, having such a level of uncertainty as to what it really is.
Personally, I understand the desire to look at the underpinnings of what quantum mechanics describes. On a practical level, I get that it is enough to know that the math works, and not worry about the extent to which the math is simply a formalism, a symbolic approximation of things we don't (maybe can't) understand at the quantum level, versus an accurate description of reality. I don't have to know how an internal combustion engine works in order to drive a car.
But that doesn't mean that I don't want to know, that I don't want to understand, that fundamental question that is the title of Becker's book. And it's part of what made it such a compelling read for me, and a book that I'm likely going to re-read at some point in the future.
Why Is Silicon Valley Still Waiting for the Next Big Thing? – The New York Times
In the fall of 2019, Google told the world it had reached quantum supremacy.
It was a significant scientific milestone that some compared to the first flight at Kitty Hawk. Harnessing the mysterious powers of quantum mechanics, Google had built a computer that needed only three minutes and 20 seconds to perform a calculation that normal computers couldn't complete in 10,000 years.
But more than two years after Google's announcement, the world is still waiting for a quantum computer that actually does something useful. And it will most likely wait much longer. The world is also waiting for self-driving cars, flying cars, advanced artificial intelligence and brain implants that will let you control your computing devices using nothing but your thoughts.
Silicon Valley's hype machine has long been accused of churning ahead of reality. But in recent years, the tech industry's critics have noticed that its biggest promises (the ideas that really could change the world) seem further and further on the horizon. The great wealth generated by the industry in recent years has generally been thanks to ideas, like the iPhone and mobile apps, that arrived years ago.
Have the big thinkers of tech lost their mojo?
The answer, those big thinkers are quick to respond, is absolutely not. But the projects they are tackling are far more difficult than building a new app or disrupting another aging industry. And if you look around, the tools that have helped you cope with almost two years of a pandemic (the home computers, the videoconferencing services and Wi-Fi, even the technology that aided researchers in the development of vaccines) have shown the industry hasn't exactly lost a step.
"Imagine the economic impact of the pandemic had there not been the infrastructure, the hardware and the software, that allowed so many white-collar workers to work from home and so many other parts of the economy to be conducted in a digitally mediated way," said Margaret O'Mara, a professor at the University of Washington who specializes in the history of Silicon Valley.
As for the next big thing, the big thinkers say, give it time. Take quantum computing. Jake Taylor, who oversaw quantum computing efforts for the White House and is now chief science officer at the quantum start-up Riverlane, said building a quantum computer might be the most difficult task ever undertaken. This is a machine that defies the physics of everyday life.
A quantum computer relies on the strange ways that some objects behave at the subatomic level or when exposed to extreme cold, like metal chilled to nearly 460 degrees below zero. If scientists merely try to read information from these quantum systems, they tend to break.
While building a quantum computer, Dr. Taylor said, "you are constantly working against the fundamental tendency of nature."
The most important tech advances of the past few decades (the microchip, the internet, the mouse-driven computer, the smartphone) were not defying physics. And they were allowed to gestate for years, even decades, inside government agencies and corporate research labs before ultimately reaching mass adoption.
"The age of mobile and cloud computing has created so many new business opportunities," Dr. O'Mara said. "But now there are trickier problems."
Still, the loudest voices in Silicon Valley often discuss those trickier problems as if they were just another smartphone app. That can inflate expectations.
"People who aren't experts who understand the challenges may have been misled by the hype," said Raquel Urtasun, a University of Toronto professor who helped oversee the development of self-driving cars at Uber and is now chief executive of the self-driving start-up Waabi.
Technologies like self-driving cars and artificial intelligence do not face the same physical obstacles as quantum computing. But just as researchers do not yet know how to build a viable quantum computer, they do not yet know how to design a car that can safely drive itself in any situation or a machine that can do anything the human brain can do.
Even a technology like augmented reality eyeglasses, which can layer digital images onto what you see in the real world, will require years of additional research and engineering before it is perfected.
Andrew Bosworth, vice president at Meta, formerly Facebook, said that building these lightweight eyeglasses was akin to creating the first mouse-driven personal computers in the 1970s (the mouse itself was invented in 1964). Companies like Meta must design an entirely new way of using computers, before stuffing all its pieces into a tiny package.
Over the past two decades, companies like Facebook have built and deployed new technologies at a speed that never seemed possible before. But as Mr. Bosworth said, these were predominantly software technologies, built solely with bits, pieces of digital information.
Building new kinds of hardware, working with physical atoms, is a far more difficult task. "As an industry, we have almost forgotten what this is like," Mr. Bosworth said, calling the creation of augmented reality glasses a "once-in-a-lifetime project."
Technologists like Mr. Bosworth believe they will eventually overcome those obstacles, and they are more open about how difficult it will be. But that's not always the case. And when an industry has seeped into every part of daily life, it can be hard to separate hand-waving from realism, especially when it is huge companies like Google and well-known personalities like Elon Musk drawing that attention.
Many in Silicon Valley believe that hand-waving is an important part of pushing technologies into the mainstream. The hype helps attract the money and the talent and the belief needed to build the technology.
"If the outcome is desirable and it is technically possible, then it's OK if we're off by three years or five years or whatever," said Aaron Levie, chief executive of the Silicon Valley company Box. You want entrepreneurs to be optimistic, to have a little bit of that Steve Jobs reality-distortion field, which helped to persuade people to buy into his big ideas.
The hype is also a way for entrepreneurs to generate interest among the public. Even if new technologies can be built, there is no guarantee that people and businesses will want them and adopt them and pay for them. They need coaxing. And maybe more patience than most people inside and outside the tech industry will admit.
"When we hear about a new technology, it takes less than 10 minutes for our brains to imagine what it can do. We instantly compress all of the compounding infrastructure and innovation needed to get to that point," Mr. Levie said. "That is the cognitive dissonance we are dealing with."
Schrödinger's Pedophilia: The Cat Is Out Of The Bag (Box) – Forbes
Erwin Schrödinger
According to a December 2021 report from the Irish Times, the Nobel Prize-winning Austrian physicist Erwin Schrödinger was a pedophile.
To refresh memory: Schrödinger made many astounding contributions to the burgeoning fields of quantum physics, electrodynamics, molecular biology, and color theory. For example, nearly 100 years ago, he gave the world a way to calculate the probable energy and position of electrons in space and time. For non-scientists, the thought experiment now called Schrödinger's Cat is perhaps his most fun gift to science. Imagine a cat living in a sealed box that is equipped with a contraption that has a 50% chance of killing it via a random, subatomic event. Quantum physics deals in probabilities rather than fixed realities. Given this, the question Schrödinger posed was whether two probable states could exist at once. Could the cat in the box be simultaneously alive and dead? Having asked, Schrödinger also pointed out that there is a measurement problem that precludes answering his question with certainty. No one can learn if the cat exists simultaneously in two states of being because, the moment that anyone opens the box and looks in, the cat becomes either alive or dead.
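In the standard notation of quantum mechanics (a textbook schematic, not something from the Forbes piece), the alive-and-dead-at-once state Schrödinger was pointing at is an equal superposition, written in LaTeX as

|\mathrm{cat}\rangle = \frac{1}{\sqrt{2}} \left( |\mathrm{alive}\rangle + |\mathrm{dead}\rangle \right)

with the 50/50 odds coming from squaring each amplitude: (1/\sqrt{2})^2 = 1/2.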
Perhaps we all should have known. The evidence given by the Irish Times has been staring us in the face for almost a decade. Citing specific sources, the Irish Times article details two stories.
Ithi Junger. British astrophysicist John Gribbin reported in his 2013 biography Erwin Schrödinger and the Quantum Revolution that, at the age of 39, Schrödinger became enamored of 14-year-old Ithi, whom he was tutoring in math. As well as the maths, the lessons included a fair amount of petting and cuddling [as Schrödinger stated in his diary], and Schrödinger soon convinced himself that he was in love with Ithi. There is no evidence that things went beyond petting and cuddling when Ithi was 14, but before he died in 1961 Schrödinger admitted that he'd impregnated her when she was 17. Her abortion left her sterile.
Barbara MacEntee. According to Schrödinger's biographer Walter Moore (writing in 2015 in Schrödinger: Life and Thought), Schrödinger kept a list in his diary of the women and girls he'd romanced. A 12-year-old named Barbara MacEntee was on it. Schrödinger approached her when he was 53 years old. Her astonished family asked a Catholic priest to intervene. As Moore's biography explained, the priest had a serious word with [Schrödinger], and muttering dark imprecations, [Schrödinger] desisted from further attentions to Barbara, although he listed her among the unrequited loves of his life.
Quoting directly from Schrödinger's diaries, Moore revealed that the physicist justified his attraction to girls by considering that, being a genius (which he believed no woman ever could be), he was naturally entitled. "It seems to be the usual thing that men of strong, genuine intellectuality are immensely attracted only by women who, forming the very beginning of the intellectual series, are as nearly connected to the preferred springs of nature as they themselves. Nothing intermediate will do, since no woman will ever approach nearer to genius by intellectual education than some unintellectuals do by birth so to speak."
After reading the Irish Times article, I found myself emotionally agape. Wanting, perhaps, to preserve for myself the glory of Schrödinger's scientific reputation, I tried to normalize his behavior. Searching out examples of pedophilia among other western cultural heroes, I learned that:
Edgar Allan Poe married his 13-year-old cousin.
As reported in the Paris Review, after his wife died, Mark Twain dabbled with angel-fish. (That's what he called schoolgirls.) There's no indisputable evidence of sexuality but there was an awful lot of promiscuous cuddling.
Horatio Alger, who wrote rags-to-riches stories about young, disadvantaged boys, was briefly a Unitarian minister. He was expelled from the ministry for boy-directed pederasty.
Rock and roll stars with sexual interests in very young girls have included Jerry Lee Lewis, who was 22 when he married his 13-year-old cousin. (Rumor has it that she still believed in Santa Claus.) A 24-year-old Elvis Presley dated Priscilla when she was 14.
Pedophilia is surprisingly common. The psychiatric profession's Diagnostic and Statistical Manual V places its prevalence in the male population at 3%-5% and acknowledges that it is both highly resistant to treatment and less prevalent among women.
The DSM V has no listing for Pedophilia. Rather, it defines Pedophilic Disorder as a diagnosis assigned to adults (defined as age 16 and up) who have sexual desire for prepubescent children.
According to information published by Johns Hopkins All Children's Hospital, puberty is not a now-you-see-it-now-you-don't kind of thing. For girls, the first signs can come at around age 8. For some girls, the transition into biological adulthood happens quickly. Even if it doesn't, it generally resolves by around age 16. For boys, puberty generally starts about two years later.
Thinking about Schrödinger in the context of the DSM V definition of Pedophilic Disorder, one point of confusion arose for me. At age 14 and age 12, Ithi and Barbara may have reached puberty already. Does that absolve his behavior with them in any way?
By modern standards, legally it does not. In Ireland, the age of sexual consent is now 17. In the United States, each of the 50 states has its own law that defines the age of consent as 16, 17, or 18.
What's more, there are enormous differences between grown men and very young women in terms of social power and the sort of confidence necessary to make wise, safe, sexual choices.
This is all to say that pedophilia like that of Erwin Schrödinger is probably abhorrent any way you look at it.
In 2015, the Norwegian philosopher Ole Martin Moen of the University of Oslo wondered in the Nordic Journal of Applied Ethics about The Ethics of Pedophilia. According to Moen, pedophilia itself is a morally neutral sexual preference. It is the actions harming children that are immoral.
"Being a pedophile is unfortunate for the pedophile himself, who will most likely not have a good sexual and romantic life...," he wrote.
Pedophile as victim seems a stretch, even for someone like me who wants to preserve her image of Schrödinger as an intellectual hero. Moen, however, may have been serious about his pity-the-poor-pedophile idea. He suggested a way in which society at large might save pedophiles from themselves while saving children from pedophiles. It could legalize ways for pedophiles to satisfy their compulsions with victimless entertainments like kiddie porn fiction and computer-generated images.
At least the use of computer-generated images would obviate the need for the truly repulsive act of posing real-life children in sexual ways. As Moen pointed out, though, a question of practicality remains. Would making use of such fiction and images satisfy pedophiles' urges? Or might it instead encourage them to act out dangerously?
Moen cited research done in the 1990s that suggests that kiddie porn does not make pedophiles more prone to engage in real-world sex with children and that, indeed, it may give pedophiles a harmless outlet for their sexual urges. This idea seems to be buttressed by 1999 and 2011 research also cited by Moen; when Japan and the Czech Republic lifted their bans on kiddie porn, the rates of child rape dropped. From those two countries' criminal justice experiences, Moen concluded that "Granted our current knowledge, it therefore seems that texts and computer-generated graphics with pedophilic content may result in less adult-child sex."
Schrödinger's cat is both alive and dead until you open the box to look, whereupon it becomes either alive or dead.
As helpful as such texts and graphics may turn out to be, I have trouble imagining a more divisive issue to raise state by state in today's America than whether governments should allow, or perhaps even sponsor, kiddie porn.
Meanwhile, I'm still dealing with my shock about Erwin Schrödinger. For me right now, he exists in two states of being at once. In other words, Erwin Schrödinger has become Schrödinger's Cat. He is both a beacon of scientific light and a monster. Both/and, not yet either/or. That being said, some behavior is just too putrid to tolerate. As the revelations about his behavior continue to curdle inside of me, one of those views will take precedence. Very soon, I suspect, I will say, "He's dead to me."