
Quantum Holograms Don't Even Need to See Their Subject – IEEE Spectrum

Applications for the CAD software extend far beyond medicine and throughout the burgeoning field of synthetic biology, which involves redesigning organisms to give them new abilities. For example, we envision users designing solutions for biomanufacturing; it's possible that society could reduce its reliance on petroleum thanks to microorganisms that produce valuable chemicals and materials. And to aid the fight against climate change, users could design microorganisms that ingest and lock up carbon, thus reducing atmospheric carbon dioxide (the main driver of global warming).

Our consortium, GP-write, can be understood as a sequel to the Human Genome Project, in which scientists first learned how to "read" the entire genetic sequence of human beings. GP-write aims to take the next step in genetic literacy by enabling the routine "writing" of entire genomes, each with tens of thousands of different variations. As genome writing and editing becomes more accessible, biosafety is a top priority. We're building safeguards into our system from the start to ensure that the platform isn't used to craft dangerous or pathogenic sequences.

Need a quick refresher on genetic engineering? It starts with DNA, the double-stranded molecule that encodes the instructions for all life on our planet. DNA is composed of four types of nitrogen bases, adenine (A), thymine (T), guanine (G), and cytosine (C), and the sequence of those bases determines the biological instructions in the DNA. Those bases pair up to create what look like the rungs of a long and twisted ladder. The human genome (meaning the entire DNA sequence in each human cell) is composed of approximately 3 billion base-pairs. Within the genome are sections of DNA called genes, many of which code for the production of proteins; there are more than 20,000 genes in the human genome.

The Human Genome Project, which produced the first draft of a human genome in 2000, took more than a decade and cost about $2.7 billion in total. Today, an individual's genome can be sequenced in a day for $600, with some predicting that the $100 genome is not far behind. The ease of genome sequencing has transformed both basic biological research and nearly all areas of medicine. For example, doctors have been able to precisely identify genomic variants that are correlated with certain types of cancer, helping them to establish screening regimens for early detection. However, the process of identifying and understanding variants that cause disease and developing targeted therapeutics is still in its infancy and remains a defining challenge.

Until now, genetic editing has been a matter of changing one or two genes within a massive genome; sophisticated techniques like CRISPR can create targeted edits, but at a small scale. And although many software packages exist to help with gene editing and synthesis, the scope of those software algorithms is limited to single or few gene edits. Our CAD program will be the first to enable editing and design at genome-scale, allowing users to change thousands of genes, and it will operate with a degree of abstraction and automation that allows designers to think about the big picture. As users create new genome variants and study the results in cells, each variant's traits and characteristics (called its phenotype) can be noted and added to the platform's libraries. Such a shared database could vastly speed up research on complex diseases.

What's more, current genomic design software requires human experts to predict the effect of edits. In a future version, GP-write's software will include predictions of phenotype to help scientists understand if their edits will have the desired effect. All the experimental data generated by users can feed into a machine-learning program, improving its predictions in a virtuous cycle. As more researchers leverage the CAD platform and share data (the open-source platform will be freely available to academia), its predictive power will be enhanced and refined.

Our first version of the CAD software will feature a user-friendly graphical interface enabling researchers to upload a species' genome, make thousands of edits throughout the genome, and output a file that can go directly to a DNA synthesis company for manufacture. The platform will also enable design sharing, an important feature in the collaborative efforts required for large-scale genome-writing initiatives.

There are clear parallels between CAD programs for electronic and genome design. To make a gadget with four transistors, you wouldn't need the help of a computer. But today's systems may have billions of transistors and other components, and designing them would be impossible without design-automation software. Likewise, designing just a snippet of DNA can be a manual process. But sophisticated genomic design, with thousands to tens of thousands of edits across a genome, is simply not feasible without something like the CAD program we're developing. Users must be able to input high-level directives that are executed across the genome in a matter of seconds.

Our CAD program will be the first to enable editing at genome-scale, with a degree of abstraction and automation that allows designers to think about the big picture.

A good CAD program for electronics includes certain design rules to prevent a user from spending a lot of time on a design, only to discover that it can't be built. For example, a good program won't let the user put down transistors in patterns that can't be manufactured or put in a logic that doesn't make sense. We want the same sort of design-for-manufacture rules for our genomic CAD program. Ultimately, our system will alert users if they're creating sequences that can't be manufactured by synthesis companies, which currently have limitations such as trouble with certain repetitive DNA sequences. It will also inform users if their biological logic is faulty; for example, if the gene sequence they added to code for the production of a protein won't work, because they've mistakenly included a "stop production" signal halfway through.
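
To make the stop-codon check concrete, here is a minimal sketch in Python; the function name and the simplifying assumptions (a single in-frame, uppercase DNA coding sequence and the three standard stop codons) are ours for illustration, not GP-write's actual rule engine.

```python
# Minimal sketch of a "biological logic" design rule: flag premature stop
# codons inside a protein-coding sequence. Assumes the sequence is supplied
# in frame and uppercase, and that only the standard stop codons apply.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def premature_stops(coding_sequence: str) -> list[int]:
    """Return nucleotide positions of in-frame stop codons that appear
    before the final codon, i.e. signals that would truncate the protein."""
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence) - 2, 3)]
    return [i * 3 for i, codon in enumerate(codons[:-1]) if codon in STOP_CODONS]

# Example: the TAA in the middle would stop production halfway through.
print(premature_stops("ATGGGCTAAGTCGAA"))  # -> [6]
```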

But other aspects of our enterprise seem unique. For one thing, our users may import huge files containing billions of base-pairs. The genome of Polychaos dubium, a freshwater amoeboid, clocks in at 670 billion base-pairs, over 200 times larger than the human genome! As our CAD program will be hosted on the cloud and run on any Internet browser, we need to think about efficiency in the user experience. We don't want a user to click the "save" button and then wait ten minutes for results. We may employ the technique of lazy loading, in which the program only loads the portion of the genome that the user is working on, or implement other tricks with caching.
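
Lazy loading for a file that large could look roughly like the sketch below; the flat file layout (one raw sequence per chromosome, no headers) and the class and method names are illustrative assumptions on our part, not the actual GP-write implementation.

```python
# Illustrative lazy loader: read only the requested slice of a chromosome
# file, caching recently used blocks instead of holding gigabases in memory.
BLOCK = 1_000_000  # work in 1-megabase blocks

class LazyGenome:
    def __init__(self, path: str):
        self.path = path
        self._cache: dict[int, str] = {}  # block index -> cached sequence block

    def _block(self, index: int) -> str:
        if index not in self._cache:
            with open(self.path) as handle:
                handle.seek(index * BLOCK)
                self._cache[index] = handle.read(BLOCK)
        return self._cache[index]

    def fetch(self, start: int, end: int) -> str:
        """Return the sequence between start and end (0-based, half-open),
        touching only the blocks that overlap the requested region."""
        first, last = start // BLOCK, (end - 1) // BLOCK
        joined = "".join(self._block(i) for i in range(first, last + 1))
        return joined[start - first * BLOCK:end - first * BLOCK]

# Hypothetical usage: genome = LazyGenome("chr1.seq"); genome.fetch(10_000, 10_500)
```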

Getting a DNA sequence into the CAD program is just the first step, because the sequence, on its own, doesn't tell you much. What's needed is another layer of annotation to indicate the structure and function of that sequence. For example, a gene that codes for the production of a protein is composed of three regions: the promoter that turns the gene on, the coding region that contains instructions for synthesizing RNA (the next step in protein production), and the termination sequence that indicates the end of the gene. Within the coding region, there are "exons," which are directly translated into the amino acids that make up proteins, and "introns," intervening sequences of nucleotides that are removed during the process of gene expression. There are existing standards for this annotation that we want to improve on, so our standardized interface language will be readily interpretable by people all over the world.

The CAD program from GP-write will enable users to apply high-level directives to edit a genome, including inserting, deleting, modifying, and replacing certain parts of the sequence. GP-write

Once a user imports the genome, the editing engine will enable the user to make changes throughout the genome. Right now, we're exploring different ways to efficiently make these changes and keep track of them. One idea is an approach we call genome algebra, which is analogous to the algebra we all learned in school. In mathematics, if you want to get from the number 1 to the number 10, there are infinite ways to do it. You could add 1 million and then subtract almost all of it, or you could get there by repeatedly adding tiny amounts. In algebra, you have a set of operations, costs for each of those operations, and tools that help organize everything.

In genome algebra, we have four operations: we can insert, delete, invert, or edit sequences of nucleotides. The CAD program can execute these operations based on certain rules of genomics, without the user having to get into the details. Similar to the "PEMDAS rule" that defines the order of operations in arithmetic, the genome editing engine must order the user's operations correctly to get the desired outcome. The software could also compare sequences against each other, essentially checking their math to determine similarities and differences in the resulting genomes.
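
As a rough illustration of those four operations and why ordering matters, here is a toy Python sketch of our own (not GP-write's engine): operations are expressed against coordinates on the original sequence and applied from the highest position downward, so earlier edits do not shift the coordinates of edits yet to be applied.

```python
# Toy "genome algebra": insert, delete, invert, and edit a DNA string.
# Positions refer to the ORIGINAL sequence; applying non-overlapping
# operations right to left keeps those coordinates valid as the string changes.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def apply_operations(seq: str, ops: list[tuple]) -> str:
    for op in sorted(ops, key=lambda o: o[1], reverse=True):
        kind, start = op[0], op[1]
        if kind == "insert":            # ("insert", position, fragment)
            seq = seq[:start] + op[2] + seq[start:]
        elif kind == "delete":          # ("delete", start, end)
            seq = seq[:start] + seq[op[2]:]
        elif kind == "invert":          # ("invert", start, end) -> reverse complement
            segment = seq[start:op[2]]
            seq = seq[:start] + segment.translate(COMPLEMENT)[::-1] + seq[op[2]:]
        elif kind == "edit":            # ("edit", position, new_base)
            seq = seq[:start] + op[2] + seq[start + 1:]
    return seq

print(apply_operations("ATGCCGTA", [("edit", 1, "A"), ("invert", 4, 8)]))  # AAGCTACG
```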

In a later version of the software, we'll also have algorithms that advise users on how best to create the genomes they have in mind. Some altered genomes can most efficiently be produced by creating the DNA sequence from scratch, while others are more suited to large-scale edits of an existing genome. Users will be able to input their design objectives and get recommendations on whether to use a synthesis or editing strategyor a combination of the two.

Users can import any genome (here, the E. coli bacteria genome), and create many edited versions; the CAD program will automatically annotate each version to show the changes made. GP-write

Our goal is to make the CAD program a "one-stop shop" for users, with the help of the members of our Industry Advisory Board: Agilent Technologies, a global leader in life sciences, diagnostics and applied chemical markets; the DNA synthesis companies Ansa Biotechnologies, DNA Script, and Twist Bioscience; and the gene editing automation companies Inscripta and Lattice Automation. (Lattice was founded by coauthor Douglas Densmore). We are also partnering with biofoundries such as the Edinburgh Genome Foundry that can take synthetic DNA fragments, assemble them, and validate them before the genome is sent to a lab for testing in cells.

Users can most readily benefit from our connections to DNA synthesis companies; when possible, we'll use these companies' APIs to allow CAD users to place orders and send their sequences off to be synthesized. (In the case of DNA Script, when a user places an order it would be quickly printed on the company's DNA printers; some dedicated users might even buy their own printers for more rapid turnaround.) In the future, we'd like to make the ordering step even more user-friendly by suggesting the company best suited to the manufacture of a particular sequence, or perhaps by creating a marketplace where the user can see prices from multiple manufacturers, the way people do on airfare sites.

We've recently added two new members to our Industrial Advisory Board, each of which brings interesting new capabilities to our users. Catalog Technologies is the first commercially viable platform to use synthetic DNA for massive digital storage and computation, and could eventually help users store vast amounts of genomic data generated on GP-write software. The other new board member is SOSV's IndieBio, the leader in biotech startup development. It will work with GP-write to select, fund, and launch companies advancing genome-writing science from IndieBio's New York office. Naturally, all those startups will have access to our CAD software.

We're motivated by a desire to make genome editing and synthesis more accessible than ever before. Imagine if high-school kids who don't have access to a wet lab could find their way to genetic research via a computer in their school library; this scenario could enable outreach to future genome design engineers and could lead to a more diverse workforce. Our CAD program could also entice people with engineering or computational backgrounds, but with no knowledge of biology, to contribute their skills to genetic research.

Because of this new level of accessibility, biosafety is a top priority. We're planning to build several different levels of safety checks into our system. There will be user authentication, so we'll know who's using our technology. We'll have biosecurity checks upon the import and export of any sequence, basing our "prohibited" list on the standards devised by the International Gene Synthesis Consortium (IGSC), and updated in accordance with their evolving database of pathogens and potentially dangerous sequences. In addition to hard checkpoints that prevent a user from moving forward with something dangerous, we may also develop a softer system of warnings.

Imagine if high-school kids who don't have access to a lab could find their way to genetic research via a computer in their school library.

We'll also keep a permanent record of redesigned genomes for tracing and tracking purposes. This record will serve as a unique identifier for each new genome and will enable proper attribution to further encourage sharing and collaboration. The goal is to create a broadly accessible resource for researchers, philanthropies, pharmaceutical companies, and funders to share their designs and lessons learned, helping all of them identify fruitful pathways for advancing R&D on genetic diseases and environmental health. We believe that the authentication of users and annotated tracking of their designs will serve two complementary goals: It will enhance biosecurity while also engendering a safer environment for collaborative exchange by creating a record for attribution.

One project that will put the CAD program to the test is a grand challenge adopted by GP-write, the Ultra-Safe Cell Project. This effort, led by coauthor Farren Isaacs and Harvard professor George Church, aims to create a human cell line that is resistant to viral infection. Such virus-resistant cells could be a huge boon to the biomanufacturing and pharmaceutical industry by enabling the production of more robust and stable products, potentially driving down the cost of biomanufacturing and passing along the savings to patients.

The Ultra-Safe Cell Project relies on a technique called recoding. To build proteins, cells use combinations of three DNA bases, called codons, to code for each amino acid building block. For example, the triplet GGC represents the amino acid glycine, TTA represents leucine, GTC represents valine, and so on. Because there are 64 possible codons but only 20 amino acids, many of the codons are redundant. For example, four different codons can code for glycine: GGT, GGC, GGA, and GGG. If you replace a redundant codon in all genes (or "recode" the genes), the human cell can still make all of its proteins. But viruses, whose genes would still include the redundant codons and which rely on the host cell to replicate, would not be able to translate their genes into proteins. Think of a key that no longer fits into the lock; viruses trying to replicate would be unable to do so in the cells' machinery, rendering the recoded cells virus-resistant.
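
A bare-bones sketch of that recoding step, using the leucine codons from the example above (TTA and its synonym CTG); a real recoding pipeline works genome-wide and must respect many additional biological and synthesis constraints, so treat this only as an illustration of synonymous codon replacement.

```python
# Toy recoding: swap every in-frame TTA codon for CTG, a synonymous leucine
# codon, so the encoded protein is unchanged while the DNA "key" is altered.
def recode(coding_sequence: str, old: str = "TTA", new: str = "CTG") -> str:
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    return "".join(new if codon == old else codon for codon in codons)

before = "ATGTTAGGCGTCTTATAA"   # Met-Leu-Gly-Val-Leu-Stop
print(recode(before))           # ATGCTGGGCGTCCTGTAA -> same protein, different codons
```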

This concept of recoding for viral resistance has already been demonstrated. Isaacs, Church, and their colleagues reported in a 2013 paper in Science that, by removing all 321 instances of a single codon from the genome of the E. coli bacterium, they could impart resistance to viruses which use that codon. But the ultra-safe cell line requires edits on a much grander scale. We estimate that it would entail thousands to tens of thousands of edits across the human genome (for example, removing specific redundant codons from all 20,000 human genes). Such an ambitious undertaking can only be achieved with the help of the CAD program, which can automate much of the drudge work and let researchers focus on high-level design.

The famed physicist Richard Feynman once said, "What I cannot create, I do not understand." With our CAD program, we hope geneticists become creators who understand life on an entirely new level.


What is the ‘Gold Foil Experiment’? The Geiger-Marsden experiments explained – Livescience.com

The Geiger-Marsden experiment, also called the gold foil experiment or the α-particle scattering experiments, refers to a series of early-20th-century experiments that gave physicists their first view of the structure of the atomic nucleus and the physics underlying the everyday world. It was first proposed by Nobel Prize-winning physicist Ernest Rutherford.

As familiar as terms like electron, proton and neutron are to us now, in the early 1900s, scientists had very little concept of the fundamental particles that made up atoms.

In fact, until 1897, scientists believed that atoms had no internal structure and were an indivisible unit of matter. Even the label "atom" gives this impression, given that it's derived from the Greek word "atomos," meaning "indivisible."

But that year, University of Cambridge physicist Joseph John Thomson discovered the electron and disproved the concept of the atom being unsplittable, according to Britannica. Thomson found that cathode rays were made up of negatively charged particles far lighter than any atom.

His discovery of electrons also suggested that there were more elements to atomic structure. That's because matter is usually electrically neutral; so if atoms contain negatively charged particles, they must also contain a source of equivalent positive charge to balance out the negative charge.

By 1904, Thomson had suggested a "plum pudding model" of the atom in which an atom comprises a number of negatively charged electrons in a sphere of uniform positive charge, distributed like blueberries in a muffin.

The model had serious shortcomings, however, primarily the mysterious nature of this positively charged sphere. One scientist who was skeptical of this model of atoms was Rutherford, who won the Nobel Prize in chemistry for his 1899 discovery of a form of radioactive decay via α-particles, two protons and two neutrons bound together and identical to a helium-4 nucleus, even if the researchers of the time didn't know this.

Rutherford's Nobel-winning discovery of α-particles formed the basis of the gold foil experiment, which cast doubt on the plum pudding model. His experiment would probe atomic structure with high-velocity α-particles emitted by a radioactive source. He initially handed off his investigation to two of his protégés, Ernest Marsden and Hans Geiger, according to Britannica.

Rutherford reasoned that if Thomson's plum pudding model was correct, then when an α-particle hit a thin foil of gold, the particle should pass through with only the tiniest of deflections. This is because α-particles are 7,000 times more massive than the electrons that presumably made up the interior of the atom.

Marsden and Geiger conducted the experiments primarily at the Physical Laboratories of the University of Manchester in the U.K. between 1908 and 1913.

The duo used a radioactive source of α-particles facing a thin sheet of gold or platinum surrounded by fluorescent screens that glowed when struck by the deflected particles, thus allowing the scientists to measure the angle of deflection.

The research team calculated that if Thomson's model was correct, the maximum deflection should occur when the α-particle grazed an atom it encountered and thus experienced the maximum transverse electrostatic force. Even in this case, the plum pudding model predicted a maximum deflection angle of just 0.06 degrees.

Of course, an α-particle passing through an extremely thin gold foil would still encounter about 1,000 atoms, and thus its deflections would be essentially random. Even with this random scattering, the maximum angle of deflection if Thomson's model was correct would be just over half a degree. The chance of an α-particle being reflected back was just 1 in 10^1,000 (1 followed by a thousand zeroes).

Yet, when Geiger and Marsden conducted their eponymous experiment, they found that in about 2% of cases, the α-particle underwent large deflections. Even more shocking, around 1 in 10,000 α-particles were reflected directly back from the gold foil.

Rutherford explained just how extraordinary this result was, likening it to firing a 15-inch (38 centimeters) shell (projectile) at a sheet of tissue paper and having it bounce back at you, according to Britannica.

Extraordinary though they were, the results of the Geiger-Marsden experiments did not immediately cause a sensation in the physics community. Initially, the data went unnoticed or were even ignored, according to the book "Quantum Physics: An Introduction" by J. Manners.

The results did have a profound effect on Rutherford, however, who in 1910 set about determining a model of atomic structure that would supersede Thomson's plum pudding model, Manners wrote in his book.

The Rutherford model of the atom, put forward in 1911, proposed a nucleus, where the majority of the atom's mass was concentrated, according to Britannica. Surrounding this tiny central core were electrons, and the distance at which they orbited determined the size of the atom. The model suggested that most of the atom was empty space.

When the α-particle approaches within 10^-13 meters of the compact nucleus of Rutherford's atomic model, it experiences a repulsive force around a million times more powerful than it would experience in the plum pudding model. This explains the large-angle scatterings seen in the Geiger-Marsden experiments.

Later Geiger-Marsden experiments were also instrumental; the 1913 tests helped determine the upper limits of the size of an atomic nucleus. These experiments revealed that the angle of scattering of the α-particle was proportional to the square of the charge of the atomic nucleus, or Z, according to the book "Quantum Physics of Matter," published in 2000 and edited by Alan Durrant.
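
For reference, that Z-squared dependence falls straight out of Rutherford's scattering formula, a standard textbook result (quoted here in SI units rather than taken from the article) for α-particles of kinetic energy E scattered through an angle θ by a nucleus of charge Ze:

\[
\frac{d\sigma}{d\Omega} \;=\; \left( \frac{2Ze^{2}}{16\pi\varepsilon_{0}E} \right)^{2} \frac{1}{\sin^{4}(\theta/2)} \;\propto\; Z^{2},
\]

where the factor of 2 is the charge of the α-particle itself, and the steep fall-off with angle shows why large deflections are so rare.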

In 1920, James Chadwick used a similar experimental setup to determine the Z value for a number of metals. The British physicist went on to discover the neutron in 1932, delineating it as a separate particle from the proton, the American Physical Society said.

Yet the Rutherford model shared a critical problem with the earlier plum pudding model of the atom: The orbiting electrons in both models should be continuously emitting electromagnetic energy, which would cause them to lose energy and eventually spiral into the nucleus. In fact, the electrons in Rutherford's model should have lasted less than 10^-5 seconds.

Another problem presented by Rutherford's model is that it doesn't account for the sizes of atoms.

Despite these failings, the Rutherford model derived from the Geiger-Marsden experiments would become the inspiration for Niels Bohr's atomic model of hydrogen, for which he won a Nobel Prize in Physics.

Bohr united Rutherford's atomic model with the quantum theories of Max Planck to determine that electrons in an atom can only take discrete energy values, thereby explaining why they remain stable around a nucleus unless emitting or absorbing a photon, or light particle.

Thus, the work of Rutherford, Geiger (who later became famous for his invention of a radiation detector) and Marsden helped to form the foundations of both quantum mechanics and particle physics.

Rutherford's idea of firing a beam at a target was adapted to particle accelerators during the 20th century. Perhaps the ultimate example of this type of experiment is the Large Hadron Collider near Geneva, which accelerates beams of particles to near light speed and slams them together.

Thomson's Atomic Model, Lumen's Chemistry for Non-Majors.

Rutherford Model, Britannica, https://www.britannica.com/science/Rutherford-model

Alpha particle, U.S. NRC, https://www.nrc.gov/reading-rm/basic-ref/glossary/alpha-particle.html

Manners, J., et al., 'Quantum Physics: An Introduction,' Open University, 2008.

Durrant, A., et al., 'Quantum Physics of Matter,' Open University, 2008.

Ernest Rutherford, Britannica, https://www.britannica.com/biography/Ernest-Rutherford

Niels Bohr, The Nobel Prize, https://www.nobelprize.org/prizes/physics/1922/bohr/facts/

House, J. E., 'Origins of Quantum Theory,' Fundamentals of Quantum Mechanics (Third Edition), 2018.


What is the double-slit experiment, and why is it so important? – Interesting Engineering

Few science experiments are as strange and compelling as the double-slit experiment.

Few experiments, if any, in modern physics convey so simple an idea, that light and matter can act as both waves and discrete particles depending on whether they are being observed, while remaining one of the great mysteries of quantum mechanics.

It's the kind of experiment that, despite its simplicity, is difficult to wrap your mind around, because what it shows is incredibly counter-intuitive.

Not only has the double-slit experiment been repeated countless times in physics labs around the world, it has also spawned many derivative experiments that further reinforce its ultimate result: particles can be waves or discrete objects, and it is as if they "know" when you are watching them.

To understand what the double-slit experiment demonstrates, we need to lay out some key ideas from quantum mechanics.

In 1925, Werner Heisenberg presented his mentor, the eminent German physicist Max Born, with a paper to review that showed how the properties of subatomic particles, like position, momentum, and energy, could be measured.

Born saw that these properties could be represented through mathematical matrices, with definite figures and descriptions of individual particles, and this laid the foundation for the matrix description of quantum mechanics.

Meanwhile, in 1926, Erwin Schrödinger published his wave theory of quantum mechanics, which showed that particles could be described by an equation that defined their waveform; that is, it determined that particles were actually waves.

This gave rise to the concept of wave-particle duality, which is one of the defining features of quantum mechanics. According to this concept, subatomic entities can be described as both waves and particles, and it is up to the observer to decide how to measure them.

That last part is important since it will determine how quantum entities will manifest. If you try to measure a particle's position, you will measure a particle's position, and it will cease to be a wave at all.

If you try to define its momentum, you will find that it behaves like a wave and you can't know anything definitive about its position beyond the probability that it exists at any given point within that wave.

Essentially, you will measure it as a particle or a wave, and doing so decides what form it will take.

The double-slit experiment is one of the simplest demonstrations of this wave-particle duality as well as a central defining weirdness of quantum mechanics, one that makes the observer an active participant in the fundamental behavior of particles.

The easiest way to describe the double-slit experiment is by using light. First, take a source of coherent light, such as a laser beam, that shines in a single wavelength, like purely blue visible light at 460 nm, and aim it at a wall with two slits in it. The distance between the slits should be roughly the same as the light's wavelength so that they will both sit inside that beam of light.

Behind that wall, place a screen that can detect and record the light that impacts it. If you fire the laser beam at the two slits, the recording screen behind the wall will show a striped pattern of alternating bright and dark bars.

This is probably not what you might have been expecting, but it makes perfect sense if you treat light as a wave. If light is a wave, then when the single wave of light from the laser hits both slits, each slit becomes a new "source" of light on the other side of the wall, so a new wave originates from each slit, producing two waves.

Where those two waves intersect, something known as interference occurs, and it can be either constructive or destructive. When the waves overlap at either a peak or a trough, their amplitudes add together, boosting the light's intensity. This is constructive interference, and it produces the brighter bars in the pattern.

When the waves cancel each other out, as when a peak hits a trough, the effect neutralizes the amplitude and diminishes or even eliminates the light, producing the blacked-out spaces in between the blue bars.
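
In standard textbook form (a well-known result, not specific to this article), the bright and dark bars sit where the path difference from the two slits is a whole or half-odd number of wavelengths; for slit spacing d, wavelength λ, and a screen at distance L, the bright fringes are separated by roughly λL/d:

\[
d\sin\theta = m\lambda \;\;\text{(bright bars)}, \qquad
d\sin\theta = \left(m + \tfrac{1}{2}\right)\lambda \;\;\text{(dark bars)}, \qquad
\Delta y \approx \frac{\lambda L}{d}, \quad m = 0, 1, 2, \dots
\]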

But in the case of quantum entities like photons of light or electrons, they are also individual particles. So what happens when you shoot a single photon through the double slits?

A single photon hitting the screen leaves behind a tiny dot, which might not mean much in isolation. But if you shoot many single photons at the double slits, one after another, the tiny dots they leave behind on the screen build up into the same striped interference pattern produced by the laser beam hitting the double slits.

In other words, the individual photon behaves as if it passed through both slits like it was a wave.

Now, here's where things get really weird.

We can set up a detector in front of one of the slits that can watch for photons and light up whenever it detects one passing through. When we do this, the detector will light up 50% of the time, and the pattern left behind on the screen changes: the interference stripes disappear, and the photons instead pile up in two bands, one behind each slit.

And to make things even wilder, we can set up a detector behind the wall that only detects a photon after it has passed through the slit, and we get the same result. That means that even if the photon passes through both slits as a wave, the moment it is detected, it is no longer a wave but a particle. And not just that: the wave emerging from the other slit also collapses back into the single particle that was detected.

In practice, this means that somehow the universe "knows" that someone is watching and flips the metaphorical quantum coin to see which slit the particle passed through. The more individual photons you shoot through the double slit, the closer that photon detector comes to detecting photons 50% of the time, just as flipping a coin 10 times might give you heads 70% of the time while flipping it 100 times might give you tails 55% of the time, and flipping it 1 billion times gives you heads 50.0003% of the time.
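
That drift toward an even split is just the law of large numbers at work; a quick simulation sketch (ours, purely illustrative, with the photon detections stood in for by fair coin flips) shows the observed fraction settling toward 50% as the number of trials grows.

```python
# Illustrative only: model "which slit?" outcomes as fair coin flips and
# watch the measured fraction of heads approach 50% as the sample grows.
import random

random.seed(0)
for trials in (10, 100, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(trials))
    print(f"{trials:>9} flips: {100 * heads / trials:.3f}% heads")
```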

This seems to show that not only is the universe watching the observer as well, but that the quantum states of entities passing through the double slits are governed by the laws of probability, making it impossible to ever predict with certainty what the quantum state of an entity will be.

The double-slit experiment actually predates quantum mechanics by a little more than a century.

During the Scientific Revolution, the nature of light was a particularly contentious topic, with many, like Isaac Newton himself, arguing in favor of a corpuscular theory of light that held that light was transmitted through particles.

Others believed that light was a wave that was transmitted through "aether" or some other medium, the way sound travels through air and water, but Newton's reputation and a lack of an effective means to demonstrate the wave theory of light solidified the corpuscular view for just shy of a century after Newton published his Opticks in 1704.

The definitive demonstration came from the British polymath Thomas Young, who presented a paper to the Royal Society of London in 1803 that described a pair of simple experiments that anyone could perform to see for themselves that light was in fact a wave.

First, Young established that a pair of waves were subject to interference when they overlapped, producing a distinctive interference pattern.

He initially demonstrated this interference pattern using a ripple tank of water, showing that such a pattern is characteristic of wave propagation.

Young then introduced the precursor to the modern double-slit experiment, though instead of using a laser beam to produce the required light source, Young used reflected sunlight striking two slits in a card as its target.

The resulting light diffraction showed the expected interference pattern, and the wave theory of light gained considerable support. It would take another decade and a half before further experimentation conclusively refuted corpuscles in favor of waves, but the double-slit experiment that Young developed proved to be a fatal blow to Newton's theory.

Young wasn't lying when he said, "The experiments I am about to relate...may be repeated with great ease, whenever the sun shines, and without any other apparatus than is at hand to everyone."

While it might be a stretch to say that you can use the double-slit experiment to demonstrate some of the more counterintuitive features of quantum mechanics (unless you have a photon detector handy and a laser that shoots individual photons), you can still use it to demonstrate the wave nature of light.

If you want to replicate Young's experiment, you only need as large a box as is practical with a hole cut in it a little smaller than an index card. Then, take an Exacto knife or similar blade for fine cutting work and cut two slits into a piece of cardboard larger than the hole in your box. The slits should be between 0.1 mm and 0.4 mm apart, as the closer together they are, the more distinct the interference pattern will be. It's better to create cards for this rather than cut directly into the box since you might need to make adjustments to the spacing of the slits.

Once you're satisfied with the spacing, affix the card with the double-slit in it over the hole and secure it in place with tape. Just make sure sunlight isn't leaking around the card.

You'll also need to create some eye-holes in the box so you can look inside without getting in the way of the light hitting the double-slit card, but once you figure that out, you're all set.

To accurately diffract sunlight using this box, you will need to have the sunlight more or less hitting the double-slit card dead on, so it might take some maneuvering to get it properly positioned.

Once it is, look through the eye holes and you can see the interference pattern forming on the inside wall, as well as different colors emerging as the different wavelengths interfering with each other change the color of the light being created.

If you wanted to try it out with something fancier, get yourself a laser pointer from an office supply store. Just like you'd do with a viewing box, create cards with slits in them, and when properly spaced, set up a shielded area for the card to rest on.

You'll want to make sure that only the light from the laser pointer is hitting the double-slit, so shield the card however you need to. Then, set the laser pointer on a surface level with the slits and shine the laser at them. On the wall behind the card, the interference pattern from the slits should be clearly visible.

If you don't want to go through all that trouble, you can also use Photoshop or similar software to recreate the effect.

First, create a template of evenly spaced concentric circles. Using different layers for each source, as well as a background layer, position the centers of the concentric rings near one another. On a 1200 pixel wide canvas, a distance of 100 pixels between the two centers should do nicely.

Then, fill in the color of each concentric ring, alternating light and dark, with an opacity set to about 33%. You may need to hide one of the concentric circle layers while you work on the other. When you're done, reveal the two overlapping layers of circles and the interference pattern should jump out at you immediately.
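
If you would rather script the picture than build it in Photoshop, the same pattern drops out of the wave description directly; the sketch below (our own, using numpy and matplotlib rather than any tool named in the article) adds two circular waves from sources 100 pixels apart and plots the resulting intensity, which reproduces the striped interference pattern.

```python
# Two-source interference pattern: add two circular waves whose centers sit
# 100 pixels apart, then square the summed amplitude to get the intensity.
import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[0:800, 0:1200]              # canvas in pixels
wavelength = 40.0                            # ring spacing, in pixels
k = 2 * np.pi / wavelength
r1 = np.hypot(x - 550, y - 400)              # distance to source 1
r2 = np.hypot(x - 650, y - 400)              # distance to source 2
intensity = (np.sin(k * r1) + np.sin(k * r2)) ** 2

plt.imshow(intensity, cmap="Blues")
plt.axis("off")
plt.show()
```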

Of course, if you want to dig into the quantum mechanics side of things, you'll need to work in a pretty advanced physics lab at a university or science institute, since photon detectors aren't the kind of thing you can pick up at the hobby store.

Still, if you're compelled to try the heavier stuff out for yourself, you wouldn't be the first person to get drawn into a career in physics because of the weirdness of quantum mechanics, and there are definitely worse ways to make a living.


Condensed Matter Physics and Quantum Light and Matter Project Coordinator job with DURHAM UNIVERSITY | 281141 – Times Higher Education (THE)

Department of Physics

Grade 5: £22,847 - £26,341
Fixed Term - Full Time
Contract Duration: 24 Months
Contracted Hours per Week: 35
Closing Date: 04-Mar-2022, 7:59:00 AM

The Department and role purpose:

The Department of Physics at Durham University is one of the very best UK Physics departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Department of Physics is committed to building and maintaining a diverse and inclusive environment. It is pledged to the Athena SWAN charter, where we hold a silver award, and has the status of IoP Juno Champion. We embrace equality and particularly welcome applications from women, black and minority ethnic candidates, and members of other groups that are under-represented in physics. Durham University provides a range of benefits including pension, flexible and/or part time working hours, shared parental leave policy and childcare provision.

The Condensed Matter Physics (CMP) and Quantum Light and Matter (QLM) research sections are seeking to appoint a self-motivated and experienced Project Coordinator to support the daily operations and the effective and efficient running of their research. This post offers the successful applicant an opportunity to be part of one of Durham University's leading research groups.

The post holder will be a committed, enthusiastic professional who relates well to people at all levels. She/he will be expected to demonstrate a high level of initiative and be confident in dealing with diverse groups, including visiting researchers, Heads of Faculties, Departments and Colleges, and research groups across the University.

The post holder will be expected to work flexibly to deliver effective administrative support and guidance to the CMP/QLM staff and its stakeholders. Working closely with senior staff and colleagues, she/he will take responsibility for the fundamental and general CMP/QLM administrative services, as well as assisting with data gathering for funding and project applications, organising events and research activities, creating and maintaining financial and publishing records. The role will also provide opportunities for the post holder to contribute to the development of new promotional materials and communication tools for the CMP and QLM research sections e.g. website and social media content.

The CMP & QLM Project Coordinator will act as the first point of contact for enquiries, managing a wide range of internal and external enquiries from staff, partners and other stakeholders via email, telephone and face-to-face contact, taking an active decision-making role and using judgement on a day-to-day basis, and providing advice, support and information.

The candidate would be expected to assist the Grant PIs, providing administrative support to ensure the smooth running of activities and to maximise effective use of academic staff time.

This role is an excellent opportunity for an administrator seeking to develop their experience and knowledge at both strategic and operational levels, and applications are invited from enthusiastic individuals looking to embrace a new challenge.

Core responsibilities:

Role responsibilities:

Specific role requirements

Working Arrangements

At Durham we recognise that our staff and students are our greatest asset and we want to support the health and wellbeing of all. Hybrid working supports this ethos and provides many benefits to our colleagues, including empowering people, where their role allows, to work in a manner which is more suitable for them, whilst encouraging our commitment to environmental sustainability.

Depending on the needs of the business and the job role, Durham University is piloting hybrid working for all Professional Services colleagues in the academic year 2021/2022, which may include the opportunity to work both on and off campus and to flex working hours. If appointed to the post, your line-manager will discuss the specific arrangements with you. Any hybrid arrangements are non-contractual and may change within the pilot and when the pilot ends.

Interviews are anticipated to take place on or around 28 February 2022.

Reward and Benefits

To support the delivery of the University's People Strategy to attract, retain and reward the very best, we offer a fantastic range of rewards and benefits to our staff, including:

Recruiting to this post

In order to be considered for interview, candidates must evidence each of the essential criteria required for the role in the person specification. In some cases, the recruiting panel may also consider the desirable criteria, so we recommend you evidence all criteria in your application.

Please note that some criteria will only be considered at interview stage.

How to apply

We prefer to receive applications online.

Please note that in submitting your application Durham University will be processing your data. We would ask you to consider the relevant University Privacy Statement (https://www.dur.ac.uk/ig/dp/privacy/pnjobapplicants/), which provides information on the collation, storing and use of data.

Information if you have a disability

The University welcomes applications from disabled people. We are committed to ensuring fair treatment throughout the recruitment process. We will make adjustments to support the interview process wherever it is reasonable to do so and, where successful, adjustments will be made to support people within their role.

If you are unable to complete your application via our recruitment system, please get in touch with us on e.recruitment@durham.ac.uk.

What you are required to submit:

Please ensure that you submit all documentation listed above or your application cannot proceed to the next stage.

Contact details

For further information regarding this post, please contact;

Mrs Linda Wilkinson, Research Manager, Department of Physics, Lower Mountjoy, Durham, DH1 3LE (l.a.wilkinson@durham.ac.uk)

Contact information for technical difficulties when submitting your application

If you encounter technical difficulties when using the online application form, we prefer you send enquiries by email. Please send your name along with a brief description of the problem you're experiencing to e.recruitment@durham.ac.uk.

We will notify you on the status of your application at various points throughout the selection process, via automated emails from our e-recruitment system. Please check your spam/junk folder periodically to ensure you receive all emails.

At Durham University, our aim is to create an open and inclusive environment where everyone can reach their full potential and believe our staff should reflect the diversity of the global community in which we work. We welcome and encourage applications from members of groups who are under-represented in our work force including people with disabilities, women and black, Asian and minority ethnic communities.

As a University we foster a collegiate community of extraordinary people aligned to the University's values. Equality, Diversity, and Inclusion (EDI) are a key part of the University's Strategy and a central part of everything we do. At Durham we actively work towards providing an environment where our staff and students can study, work and live in a community which is supportive and inclusive, and in doing so, recruit the world's best candidates from all backgrounds and identities. It's important to us that all of our colleagues are aligned to both our values and commitment to EDI.

Person specification - skills, knowledge, qualifications and experience required

Essential Criteria

Desirable Criteria

Durham University

OUR CHARACTERISTICS: We are a globally outstanding centre of teaching and research excellence, a collegiate community of extraordinary people, in a unique and historic setting.

OUR VALUES: We are inspiring, challenging, innovative, responsible and enabling.

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Business, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

It is expected that all staff within the University:

Family key attributes

Roles in this family provide a comprehensive service and deliver the efficient administration and governance of the University.

Overall family purpose

Link to key strategic plan

DBS Requirement: Not Applicable.


Is Afterlife Possible? Scientist Reveals the Physics Behind Death – News18

The human brain is a mysterious organ that is much bigger than it looks. This deceptive characteristic of the brain is also reflected in the sense of self that humans possess. While we are, physically, just a collection of atoms and molecules, the sheer improbability of having consciousness, and consciousness this advanced at that, triggers a belief that humans are much more than just flesh and bones.

And this is how the concept of the soul is fostered. Religious texts and teachings frequently bring the soul into the discussion. What some perceive as the soul boils down to the consciousness that makes us who we are. The soul is believed to exist beyond the laws of life and death. It is postulated that our soul existed before we did and will exist after we do not. However, this concept becomes feeble when examined through a scientific lens.

Sean M. Carroll, a physicist specialising in cosmology, gravity, and quantum mechanics, shared his thoughts on this supposed never-ending journey of the soul in a blog post. Carroll analysed in detail the claim that life does not end with the decomposition of the body but continues beyond it.

The questions that challenge this belief revolve around the fundamental laws of physics governing how atoms interact with their surroundings. Carroll points out that for life after death to be true, the basic structure of the physics of atoms and electrons would have to be demolished, and someone would have to build a new model. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that new physics to interact with the atoms that we do have.

Most people perceive the soul as a blob of energy. What Carroll questions is how this energy would interact with the world that we witness and with the building blocks of it that we do not see. Principles such as the Dirac equation, Lorentz invariance, the Hamiltonian formulation of quantum mechanics, and gauge invariance would have to be proven void, or the concept of the soul loses trustworthy ground in attempts to justify the existence of life after death.

While discussions such as these do tickle the thought process, they also sway us away from the more reality-centric questions about human beings and the consciousness that gives them an identity. So, what do you think about the existence of an immaterial, immortal soul and the life after we die?


The ten greatest ideas in the history of science – Big Think

In his book The Structure of Scientific Revolutions, Thomas Kuhn argued that science, instead of progressing gradually in small steps as is commonly believed, actually moves forward in awkward leaps and bounds. The reason for this is that established theories are difficult to overturn, and contradictory data is often dismissed as merely anomalous. However, at some point, the evidence against the theory becomes so overwhelming that it is forcefully displaced by a better one in a process that Kuhn refers to as a paradigm shift. And in science, even the most widely accepted ideas could, someday, be considered yesterday's dogma.

Yet, there are some concepts which are considered so rock solid, that it is difficult to imagine them ever being replaced with something better. What's more, these concepts have fundamentally altered their fields, unifying and illuminating them in a way that no previous theory had done before.

So, what are these ideas? Compiling such a list would be a monumental task, mostly because there are so many good ones to choose from. Thankfully, Oxford chemistry professor Peter Atkins has done just that in his 2003 book Galileo's Finger: The Ten Great Ideas of Science. Dr. Atkins' breadth of scientific knowledge is truly impressive, and his ten choices are excellent. Though this book was written with a popular audience in mind, it can be quite incomprehensible in places, even for people with a background in science. Still, I highly recommend it.

Let's take a look at the ten great ideas (listed in no particular order).

In 1973, evolutionary biologist Theodosius Dobzhansky penned an essay titled Nothing in Biology Makes Sense Except in the Light of Evolution. By now, thousands of students across the globe have heard this title quoted to them by their biology teachers.

And for good reason, too. The power of evolution comes from its ability to explain both the unity and diversity of life; in other words, the theory describes how similarities and differences between species arise by descent from a universal common ancestor. Remarkably, all species have about one-third of their genes in common, and 65% of human genes are similar to those found in bacteria and unicellular eukaryotes (like algae and yeast).

One of the most fascinating examples of common descent is the evolution of the gene responsible for the final step in vitamin C synthesis. Humans have this gene, but it is broken. That is why we have to drink orange juice or find some other external source of vitamin C. By sequencing this gene and tracking mutations, it is possible to trace back exactly when the ability to synthesize vitamin C was lost. According to the resulting phylogenetic tree, the loss occurred in an ancestor which gave rise to the entire anthropoid primate lineage. Humans, chimpanzees, orangutans, and gorillas all possess this broken gene, and hence, all of them need an external source of vitamin C. (At other points in evolutionary history, bats and guinea pigs also lost this vitamin C gene.) Yet, many mammals don't need vitamin C in their diet because they possess a functioning copy and are able to produce it on their own; that's why your dog or cat gets by just fine without orange juice.

The most satisfying explanation for these observations is descent with modification from a common ancestor.

A standing counterexample to the notion that science and religion are in conflict, the Father of Genetics was none other than Gregor Mendel, an Augustinian friar. He famously conducted experiments using pea plants and, in the process, deduced the basic patterns of inheritance. He referred to these heritable units as elements; today, we call them genes. Amazingly, Mendel didn't even know DNA existed, and Charles Darwin knew about neither DNA nor the discoveries of Mendel.

It wasn't until 1952 that scientists determined that DNA was the molecule responsible for transmitting heritable information. An experiment conducted by Alfred Hershey and Martha Chase, using viruses with radioactively labeled sulfur or phosphorus to infect bacteria, rather convincingly demonstrated that this was the case. Then, in 1953, James Watson and Francis Crick, with substantial input from Rosalind Franklin, shattered the biological world with their double helix model of DNA structure.

From there, it was determined that the letters (A, C, G, T) of the DNA sequence encoded information. In groups of three (e.g., ACG, GAA, CCT, etc.), these nucleotides coded for amino acids, the building blocks of protein. Collectively, every possible combination of three letters is known as the genetic code. (See diagram above. Note that every T is replaced with U in RNA.) Eventually, the central dogma of molecular biology emerged: (1) DNA is the master blueprint and is responsible for inheritance; (2) DNA is transcribed into RNA, which acts as a messenger, conveying this vital information; and (3) RNA is translated into proteins, which provide structural and enzymatic functions for the cell.
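
As a concrete illustration of that decoding step, the toy sketch below translates a short DNA coding sequence into its amino-acid chain using a small subset of the genetic code (the real table has all 64 codons; the example sequence is made up for illustration).

```python
# Toy translation of a DNA coding sequence using a partial codon table.
# In the cell, DNA is first transcribed into messenger RNA (T -> U) and the
# RNA is then translated three letters at a time into amino acids.
CODON_TABLE = {
    "ATG": "Met", "GGC": "Gly", "GAA": "Glu", "CCT": "Pro",
    "ACG": "Thr", "GTC": "Val", "TTA": "Leu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":        # termination signal: stop the chain
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGGCGAACCTTAA"))  # ['Met', 'Gly', 'Glu', 'Pro']
```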

Today, it is known that DNA sequences alone are insufficient to explain all the behaviors observed at the cellular level. Alterations to the DNA which do not affect the sequence of letters, known as epigenetic changes, are under intense investigation. It is currently unclear to what extent epigenetics is responsible for heritable traits.

All the energy that currently exists in the Universe is all that ever has been and all that ever will be. Energy is neither created nor destroyed (which is why you should never buy a perpetual motion machine), though it can be transformed into mass (and vice versa). This is known as mass-energy equivalence, and every schoolchild knows the equation that describes it: E = mc².
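
To get a feel for the scale involved, converting a single gram of mass entirely into energy gives

\[
E = mc^{2} = (1 \times 10^{-3}\,\text{kg}) \times (3 \times 10^{8}\,\text{m/s})^{2} \approx 9 \times 10^{13}\,\text{J},
\]

which is on the order of the energy released by a roughly 20-kiloton nuclear explosion.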

The story of energy largely begins with Isaac Newton. His three laws of motion got the ball rolling, so to speak, but they did not deal with energy directly; instead, they dealt with force. Eventually, with the help of scientists like Lord Kelvin, physics began to focus on energy. The two most important forms of it are potential energy (stored energy) and kinetic energy (energy of motion). Most other forms of energy, including chemical and electric energy, are simply varying manifestations of potential and kinetic energy. Also, work and heat are not forms of energy themselves, but are simply methods of transferring it.

Murphy's Law states, "Anything that can go wrong, will go wrong." Entropy is sort of like Murphy's Law applied to the entire Universe.

Put simply, entropy is a measure of disorder, and the Second Law of Thermodynamics states that all closed systems tend to maximize entropy. Reversing this ever increasing tendency toward disorder requires the input of energy. That's why housekeeping is so tiresome. Left on its own, your house would get dusty, spiders would move in, and eventually, it would fall apart. However, the energy put into preventing disorder in one place simultaneously increases it somewhere else. Overall, the entropy of the Universe always increases.
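
In symbols, using Boltzmann's statistical definition of entropy together with the Second Law for an isolated system:

\[
S = k_{B} \ln W, \qquad \Delta S_{\text{universe}} \geq 0,
\]

where W counts the microscopic arrangements consistent with what we observe, so more disorder means more available arrangements and therefore higher entropy.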

Entropy also manifests in another way: There is no perfect transfer of energy. Your body (or a cell) cannot perfectly utilize food as an energy source because some of that energy is lost forever to the Universe. So, just like in finance, every transaction comes with a tax. (University of Washington microbiologist Franklin Harold liked to call it God's energy tax.)

The common adage that nothing in life is certain except death and taxes hence takes on a new meaning.

Air, water, bacteria, humans, computers, the stars: All of them are made from atoms. In fact, the atoms that make up Earth (and everything on it, including us) originally came from the stars, which is why Carl Sagan famously quipped, "We are made of star stuff."

But what are atoms? Mostly empty space, actually. That means you are mostly empty space, as well. The center of each atom, called a nucleus, consists of positively charged protons and uncharged neutrons. Surrounding this dense cluster of positivity are the negatively charged electrons, which buzz about, rather unpredictably. Originally, it was thought that the electrons orbited the nucleus in a way that resembles the planets around the sun, the so-called solar system model of the atom, for which Niels Bohr is given credit. The model is overly simplistic and incorrect, but it does well enough for certain calculations, which is why it is still taught in basic chemistry classes. The model was ultimately replaced with the more complex atomic orbital model.

All the known atoms are found on the periodic table, the centerpiece of every chemistry class. The table organizes the atoms in various ways, two of which are particularly important: First, the atoms are arranged by increasing atomic number, which represents the number of protons and defines each element. Second, each column on the table represents the number of outer shell electrons in each atom. This is important because the outer shell electrons largely determine the sorts of chemical reactions in which the atoms will participate.

Perhaps the most fascinating aspect of the periodic table is how it came about. The Russian chemist Dmitri Mendeleev created the first version of the modern periodic table, but it had gaps where elements were missing, and using those gaps, he correctly predicted the existence of elements that had not yet been discovered.

Symmetry, that somewhat vague concept involving folding or twisting triangles, cubes, and other objects in various ways, has applications far beyond high school geometry class. As it turns out, the Universe is riddled with symmetry, or the lack thereof.

The most beautiful human faces are also the most symmetrical. Atoms in a crystal are arranged in a symmetrical, repeating pattern. Many other phenomena throughout nature exhibit breathtaking symmetry, from honeycombs to spiral galaxies.

Particle physics and astrophysics are also captivated by the concept of symmetry. One of the biggest asymmetries is the fact that our Universe is made of more matter than antimatter. If the Universe were perfectly symmetrical, there would be equal amounts of both. (But then the Universe probably wouldn't exist, since matter and antimatter annihilate each other.) However, as Atkins writes, the Universe is symmetrical if simultaneously we change particles for antiparticles, reflect the Universe in a mirror, and reverse the direction of time.

Does that explain why Miss Universe is always so pretty?

The classical physics of Isaac Newton and James Clerk Maxwell works reasonably well for most everyday applications. But classical physics is limited in the sense that it does not quite accurately depict reality.

The first inkling that something was seriously wrong came from analysis of blackbody radiation. Imagine a hot stove: It first starts out red, then turns white as it gets hotter. Classical physics was incapable of explaining this. Max Planck, however, had an idea: Perhaps the released energy came in little packets called quanta. Instead of energy taking on continuous values, it instead takes on only discrete values. (Think of the difference between a ramp and a staircase; a person standing on a ramp can take on any height, while a person standing on a staircase only has certain discrete heights from which to choose.) As it turns out, these quanta of light energy are today known as photons. Thus, it was demonstrated that light, which until that time generally had been thought of as a wave, could also act like discrete particles.
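For a sense of the sizes involved, here is a small, illustrative Python sketch of Planck's relation E = hf, computing the energy of a single photon of green light; the 530 nm wavelength is just an example choice.

```python
# Energy of one quantum (photon) of light, via E = h * f = h * c / wavelength.
h = 6.626e-34            # Planck's constant, J*s
c = 299_792_458.0        # speed of light, m/s
wavelength = 530e-9      # green light, ~530 nm (illustrative choice)

energy = h * c / wavelength
print(f"One photon carries about {energy:.2e} J ({energy / 1.602e-19:.2f} eV)")
# Light of this color can only be emitted or absorbed in whole multiples of this amount.
```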

Then along came Louis de Broglie, who extended the concept: All particles can act like waves, and all waves can act like particles. Slam-dunk evidence for this idea came by way of the famous double-slit experiment, which conclusively showed that photons, electrons, and even molecules like buckyballs exhibit wave-particle duality. (A lab confirmed the results of this experiment yet again in May 2013.)

These two concepts, quantization and wave-particle duality, form the core of the discipline known as quantum mechanics. Two other core concepts include the uncertainty principle (that is, the inability to know various pairs of characteristics of a system with precision) and the wavefunction (which, when squared, gives the probability of finding a particle in a particular location). And what does all that give us? Schrödinger's cat, which is simultaneously dead and alive.

No wonder Stephen Hawking would always reach for his gun.

About 13.8 billion years ago, the Universe underwent a period of rapid expansion, known as cosmic inflation. Immediately after that was the Big Bang. (Yes, cosmic inflation occurred before the Big Bang.) Ever since then, the Universe has kept right on expanding.

We know the Big Bang occurred because of the telltale evidence it left behind: the cosmic microwave background (CMB) radiation. As the Universe expanded, the initial burst of light from the Big Bang got stretched. (Remember, light can be both a wave and a particle.) When light is stretched, the wavelength increases. Today, that light is no longer visible with the naked eye because it now inhabits the microwave range of the electromagnetic spectrum. However, you can still see it on old-school television sets with antennas; the static on in-between channels is partially due to the CMB.
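As a rough, hedged illustration of how much that light has been stretched, the following Python sketch uses Wien's displacement law to compare the peak wavelength of the radiation when it was released (at roughly 3,000 kelvin) with today's 2.7-kelvin background; the temperatures are the commonly quoted approximate values.

```python
# Wien's displacement law: peak wavelength = b / T
b = 2.898e-3                 # Wien's constant, m*K
T_release = 3000.0           # approx. temperature when the CMB light was released, K
T_today = 2.725              # measured CMB temperature today, K

peak_then = b / T_release    # ~1e-6 m: near-infrared, just past visible red
peak_now = b / T_today       # ~1e-3 m: microwaves
print(f"Peak wavelength then: {peak_then * 1e9:.0f} nm")
print(f"Peak wavelength now:  {peak_now * 1e3:.2f} mm")
print(f"Stretched by a factor of about {peak_now / peak_then:.0f}")
```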

But not only is the Universe expanding, its rate of expansion is accelerating due to dark energy. And the further away an object is from Earth, the faster it is accelerating away from us. If you thought the Universe was a lonely place now, just wait 100 billion years. Thanks to dark energy, we won't be able to see any stars beyond our own galaxy (which, at that time, will be a giant merger between the Milky Way and Andromeda galaxies and their smaller satellite galaxies).

The fabric of our universe is spacetime, which consists of the three spatial dimensions (length, width, and height) combined with the dimension of time. Imagine this fabric as a stretchy, rubber sheet. And then imagine placing a giant bowling ball on that sheet. The sheet would warp around the bowling ball, and any object placed near the bowling ball would roll toward it. This metaphor for Albert Einstein's theory of general relativity explains how gravity works. (Despite being Einstein's greatest achievement, general relativity is not what won him the Nobel Prize; instead, the prize was awarded for his work on the photoelectric effect.)

But this wasn't Einstein's only contribution. He also came up with special relativity, which describes how time slows down for moving objects, especially as they travel closer to the speed of light.

Interestingly, the effects of both general and special relativity must be taken into account for GPS satellites to work properly. If these effects were not considered, then the clocks on Earth and on the satellites would be out of sync, and consequently, the distances reported by the GPS unit would be wildly inaccurate. So, every time you use your smartphone successfully to find the local Starbucks, give thanks to Albert Einstein.
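To put a rough number on that, here is a hedged back-of-the-envelope Python sketch of the two competing effects for a GPS satellite, using approximate orbital parameters (about 20,200 km altitude, roughly 3.9 km/s orbital speed); it is a first-order estimate, not a precise ephemeris calculation.

```python
import math

# Approximate constants and GPS orbital parameters (illustrative values).
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24                 # mass of Earth, kg
c = 299_792_458.0            # speed of light, m/s
R_earth = 6.371e6            # Earth's radius, m
r_orbit = R_earth + 20.2e6   # GPS orbital radius, ~20,200 km altitude

v = math.sqrt(G * M / r_orbit)   # circular orbital speed, ~3.9 km/s
seconds_per_day = 86_400

# Special relativity: the moving clock runs slow.
sr_us_per_day = -(v**2 / (2 * c**2)) * seconds_per_day * 1e6

# General relativity: the clock higher in Earth's gravity well runs fast.
gr_us_per_day = (G * M / c**2) * (1 / R_earth - 1 / r_orbit) * seconds_per_day * 1e6

print(f"Special relativity: {sr_us_per_day:+.1f} microseconds/day")   # ~ -7
print(f"General relativity: {gr_us_per_day:+.1f} microseconds/day")   # ~ +45
print(f"Net drift if ignored: {sr_us_per_day + gr_us_per_day:+.1f} microseconds/day")  # ~ +38
```

Left uncorrected, a clock drift of tens of microseconds per day corresponds to ranging errors that grow by several kilometers per day, which is why both corrections are built into the system.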

Fundamentally, mathematics makes no sense. That probably doesn't come as a surprise to those of us who struggled in algebra or calculus. Though it is the language of science, the truth is that mathematics is built upon a cracked foundation.

For instance, consider a number. You think you know one when you see one, but it's rather difficult to define. (In that sense, numbers are like obscenity or pornography.) Not that mathematicians haven't tried to define numbers. The field of set theory is largely dedicated to such an endeavor, but it isn't without controversy.

Or consider infinity. Georg Cantor did, and (it is speculated by some that) he went crazy in the process. Counterintuitively, there is such a thing as one infinity being larger than another infinity. The rational numbers (those that can be expressed as a fraction) constitute one infinity, but the irrational numbers (those that cannot be expressed as a fraction) constitute a larger infinity. A special type of irrational number, called the transcendental number, is particularly to blame for this. The most famous transcendental is pi, which can neither be expressed as a fraction nor as the solution to an algebraic equation. The digits which make up pi (3.14159265…) go on and on infinitely in no particular pattern. Most numbers are transcendental, like pi. And that yields a very bizarre conclusion: The natural numbers (1, 2, 3…) are incredibly rare. It's amazing that we can do any math whatsoever.
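The claim that the rationals form a "smaller" infinity than the irrationals rests on the fact that the rationals can be listed one by one. As a hedged illustration (not a proof), this Python sketch enumerates the positive rationals by walking diagonals of the numerator/denominator grid, the standard way of showing they are countable.

```python
from math import gcd

def positive_rationals():
    """Walk the numerator/denominator grid diagonal by diagonal.

    Every positive rational p/q eventually appears exactly once, which is
    what it means for the set to be countable. No such complete listing can
    exist for the irrationals -- that is Cantor's larger infinity.
    """
    diagonal = 2
    while True:
        for p in range(1, diagonal):
            q = diagonal - p
            if gcd(p, q) == 1:          # skip duplicates like 2/4 == 1/2
                yield (p, q)
        diagonal += 1

gen = positive_rationals()
print([next(gen) for _ in range(10)])
# (1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), ...
```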

At its core, mathematics is intimately tied to philosophy. The most hotly debated questions, such as the existence and qualities of infinity, seem far more philosophical in nature than scientific. And thanks to Kurt Gödel, we know that there are infinitely many mathematical statements that are true but unprovable.

Such difficulties explain why, from an epistemological viewpoint, mathematics is so disturbing: It places a finite boundary on human reason.

This article is adapted from a version originally published on RealClearScience.

Go here to read the rest:

The ten greatest ideas in the history of science - Big Think


The worst thought experiments imaginable – The Next Web

While the rest of us are doing good, honest work like podcasting and influencer-ing, there's a group of thinkers out there conducting horrific experiments. They're conjuring pedantic monsters, murdering innumerable cats, and putting humans inside of computers.

Sure, these thought experiments are all in their heads. But that's how it starts. First you don't know whether the cat's dead or alive, and then a demon opens the box and we're all in the Matrix.

Unfortunately, there are only two ways to fight science and philosophy:

Thus, we'll arm ourselves with the collective knowledge of those who've gone before us (ahem, Google Scholar) and critique so snarky it could tank a Netflix Original. And we'll decide once and for all whose big, bright ideas are the worst.

What if I told you there was a box that gave away a free lunch every time it was opened? Some of you are reading this and thinking "is Neural suggesting we eat dead cats?"

No. I'm talking about a different box from a different thought experiment. Erwin Schrödinger's cat actually came along some 68 years after James Clerk Maxwell's Demon.

In Maxwell's Demon, we have a box with a gate in the middle separating its contents (a bunch of particles) into two sides. Outside the box, there's what Maxwell calls a "finite being" (who other scientists later inexplicably decided was a demon) who acts as the gatekeeper.

So this demon being controls which particles go from one side of the box to the other. And, because particle behavior varies at different temperatures, this means the demon's able to exploit physics and seemingly harness useful energy in defiance of the universe's tendency towards entropy.

This particular thought experiment is awful. As in: it's awfully good at being awesome!

Maxwell's Demon has managed to stand the test of time and, a century-and-a-half later, it's at the heart of the quantum computing industry. It might be the best scientific thought experiment ever.

The worst is actually Szilard's Engine. But you have to go through Maxwell's Demon to get there. Because in Szilard's box, rather than Maxwell's Demon exploiting the tendencies of the universe, the universe exploits Maxwell's Demon.

Szilard's work imagines a single-molecule engine inside of the box that results in a system where entropy works differently than it does in Maxwell's experiment.
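For scale: the textbook analysis of Szilard's engine says one bit of information about the molecule lets the demon extract at most kT ln 2 of work per cycle, which is exactly the minimum cost Landauer's principle later assigns to erasing that bit. A hedged back-of-the-envelope in Python, at room temperature:

```python
import math

# Maximum work extractable per cycle of Szilard's one-molecule engine,
# which equals the Landauer cost of erasing the one bit the demon learned.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K (illustrative choice)

work_per_bit = k_B * T * math.log(2)
print(f"kT ln 2 at {T:.0f} K is about {work_per_bit:.2e} J per bit")  # ~2.9e-21 J
# The bookkeeping of that one bit is what rescues the second law from the demon.
```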

This difference in opinion over the efficacy of entropy caused a kerfuffle.

It all started when scientists came up with the second law of thermodynamics, which basically just says that if you drop an ice cube in a pot of boiling water, it won't make the water hotter.

Well, Maxwell's Demon essentially says "sure, but what if we're talking about really tiny things experiencing somewhat quantum interactions?" This made a lot of sense and has led to numerous breakthroughs in the field of quantum physics.

But then Szilard comes along and says, "Oh yeah, what if the system only had one molecule and, like, the demon was really bored?"

Those probably aren't their exact words. I'm, admittedly, guessing. The point is that Szilard's Engine was tough to swallow back when he wrote it in 1929 and it's only garnered more scrutiny since.

Don't just take my word for it. It's so awful that John D. Norton, a scientist from the department of history and philosophy of science at the University of Pittsburgh, once wrote an entire research paper describing it as the worst thought experiment.

In their criticism, Norton wrote:

In its capacity to engender mischief and confusion, Szilard's thought experiment is unmatched. It is the worst thought experiment I know in science. Let me count the ways it has misled us.

That's borderline hate-poetry and I love it. The only criticism I have to add is that it's preposterous Szilard didn't reimagine the whole thing as Szilard's Lizard.

The missed opportunity alone gets it our stamp for worst scientific thought experiment.

Honestly, I'd say René Descartes's cogito, ergo sum is the worst thought experiment of all time. But there's not much to discuss.

You ever meet someone who, if they started a sentence with "I think," you'd want to interrupt them to disagree? Imagine that, but at the multiverse level.

Accepting Descartes's premise requires two leaps of faith in just three words, and I'm not prepared to give anyone that much credit.

But, admittedly, that's low-hanging fruit. So let's throw another twist in this article and discuss my favorite paper of all time, because it's also the worst philosophical thought experiment ever.

Nick Bostrom's Simulation Argument lies at the intersection of lazy physics and brilliant philosophy. It's like the Han Solo of thought experiments: you love it because it's so simple, not in spite of it.

It goes like this: Uh, what if, like, we live inside a computer?

For the sake of fairness, this is how Bostrom puts it:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

Think about it for a second.

Done? Good. It doesn't go any deeper. It really is just, "what if all of this is just a dream?" But instead of a dream, we're digital entities in a computer simulation.

It's, uh, kinda dumb, right?

But that doesn't mean Bostrom's paper isn't important. I think it's the most influential thought experiment since Descartes's off-putting insistence upon his own existence (self-involved much, D?).

Bostrom's a master philosopher because he understands that the core of explanation lies not in burdening a reader with unessential thought, but in stripping it away. He understands perfection as Antoine de Saint-Exupéry did when he declared it was attained "not when there is nothing more to add, but when there is nothing more to remove."

Bostrom whittled the Simulation Argument down with Occam's Razor until it became a paper capable of pre-empting your biggest "yeah but, what about…" queries before you could think them.

Still though, you don't have to be the head of Oxford's philosophy department to wonder if life is but a dream.

There's no official name for this one, so we'll just call it "That time the people building the A-bomb had to spend a few hours wondering if they were about to set the atmosphere on fire before deciding the math looked good and everything was going to be fine."

A close runner-up for this prize is "That time the Nazis' most famous quantum physicist was asked if it was possible that Germany's weapons could blow up the Earth by setting all the oceans aflame and he was all like: lol, maybe."

If I can channel our pal John D. Norton from above: these thought experiments are the worst. Allow me to list the ways I hate them.

The Axis and Allies weren't far apart in their respective endeavors to create a weapon of mass destruction during World War II.

Of course we know how things played out: the Germans never got there and the US managed to avoid lighting the planet on fire when it dropped atomic bombs on the civilian populations of Hiroshima and Nagasaki.

In reality, Albert Einstein and company on the Allies' side and Werner Heisenberg and his crew on the Axis side were never concerned with setting off a globally catastrophic chain reaction by detonating an atomic bomb. Both sides had done the math and determined it wasn't really a problem.

Unfortunately, the reason we're aware of this is because both sides were also keen to talk to outsiders. Heisenberg famously joked about it to a German politician. And Arthur Compton, who'd worked with Einstein and others on the Manhattan Project, gave a now-infamous interview wherein he made it seem like the possibility of such a tragic event was far greater than it actually was.

This is our selection for the absolute worst thought experiment(s) of all time because it's clear that both the Axis and the Allies were pretty far along in the process of actually building atomic bombs before anyone stopped and thought "hey guys, are we going to blow up the planet if we do this?"

That's Day One stuff right there. That's a question you should have to answer during orientation. You don't start building a literal atom bomb and then hold an all-hands meeting to dig into the whole "killing all life" thing.

Those are all great examples of terrible thought experiments. For scientists and philosophers, anyway. But everyone knows the worst ideas come from journalists.

I think I can come up with a terrible thought experiment that'll trump each of the above. All I have to do is reverse-engineer someone else's work and restate it with added nonsense (hey, it worked for Szilard, right?).

So let's do this. The most important part of any thought experiment is its title. We need to combine the name of an important scientist with a science-y creature if we want to be taken seriously, like Maxwell and his Demon or Schrödinger and his Cat.

And, while substance isn't really what we're going for here, we still need a real problem that remains unsolved, can be addressed with a vapid premise, and is accessible to intellects of any level.

Thus, without further ado, I present: Ogre's Ogre, a thought experiment that uses all the best ideas from the dumb ones mentioned above but contains none of their weaknesses (such as math and the scientific method).

Unlike those theories, Ogre's Ogre doesn't require you to understand or know anything. It's just quietly cajoling you into a natural state of curiosity.

In short, Ogre's Ogre isn't some overeager overachiever like those others. Where Maxwell's Demon demonizes particles by maximizing the tendency toward entropy, and Szilard's Engine engages in entropy in only isolated incidents, Ogre's Ogre egregiously accepts all eventualities.

It goes like this: What if C-A-T really spelled dog?

Read this article:

The worst thought experiments imaginable - The Next Web


What’s the maximum number of planets that could orbit the sun? – Verve Times

An artist's impression of the planets in the solar system, not to scale. (Image credit: Shutterstock)

The solar system contains eight planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune, all of which circle the sun due to its intense gravitational pull. But is this the maximum number of planets that can orbit the sun? Or is there room for more?

Compared with other known planetary systems, the solar system contains an unusually high number of planets. In total, there are 812 known planetary systems with three or more confirmed planets, and only one other known system, Kepler-90, that contains as many planets as the solar system, according to The Extrasolar Planets Encyclopaedia.

There is a good chance that a lot of these systems have small inner planets that we cannot detect, so it is unlikely that the solar system is actually the most populated planetary system in our cosmic neighborhood. But it highlights that eight planets may be near the upper limit of how large a planetary system can naturally grow.


Therefore, to work out the absolute maximum capacity of planets orbiting the sun, we need to move into the realm of the theoretical, ignoring some of the natural factors that may limit how many planets can form. One of the best ways to do that is to design, or engineer, a brand-new solar system from scratch.

"When you're talking about how many planets could be in a planetary system, there are lots of different aspects you need to consider," Sean Raymond, an astronomer at the Bordeaux Astrophysics Laboratory in France who specializes in planetary systems, told Live Science.

The structure of a planetary system is the result of a number of complex factors, Raymond said, including the size of the star, the size of the planets, the type of planets (for instance, rocky planets or gas giants), the number of moons orbiting each planet, the location of large asteroids and comets (such as those in the asteroid belt between Mars and Jupiter and in the Kuiper Belt beyond Neptune), the direction of the planets' orbits and the amount of material left over from the sun's formation to create the planets. It also takes hundreds of millions of years of intense collisions and gravitational tugs-of-war between planets for a system to settle into a stable configuration.

However, if we were a super-advanced civilization with technology and resources that far exceeded our current capabilities, it might be possible to get around a lot of these limitations and design a solar system packed with the maximum number of planets, Raymond said.

In this theoretical engineered solar system, we could assume that there were no limit to the materials available to create planets and that they could be produced artificially and positioned at will. It would also be possible to remove moons, asteroids, comets and other obstructions that might complicate things. The only limitations would be that the gravity that the planets and the sun exert would be the same as they normally would be and that the planets would have to orbit the sun in a stable configuration without interfering with each other.

A planet is defined as a celestial body that (a) is in orbit around the sun, (b) has sufficient mass to achieve hydrostatic equilibrium (making it round in shape) and (c) has cleared the neighborhood around its orbit of debris, the latter being the reason Pluto is not considered a true planet, according to the International Astronomical Union.

In an engineered solar system, the maximum number of planets is limited by the number of planetary orbits you can fit around the sun before they start to become unstable.

When a planetary system becomes unstable, the orbits of planets start to cross each other, which means "they might collide with each other or just gravitationally scatter, where planets slingshot around other planets and get catapulted out of the system," Raymond said.


The minimum safe distance between the orbits of different planets in a stable system is dependent on each planet's size or, more accurately, its Hill radius. A planet's Hill radius is the distance between the planet and the edge of its sphere of influence, within which objects with a smaller mass will be affected by its gravity, such as the moon orbiting Earth.

More massive planets exert a stronger gravitational force, which means they have a greater Hill radius. That is why the distance between the orbits of Earth and Mars, which is around 48.65 million miles (78.3 million kilometers), is around seven times smaller than the distance between the orbits of Mars and Jupiter, which is around 342.19 million miles (550.7 million km), according to NASA.

For this reason, the number of orbits that could fit inside the solar system depends predominantly on the size of the planets, Raymond said. For example, Jupiter is around 300 times more massive than Earth, which means that its Hill radius is around 10 times larger, Raymond said. This means that 10 separate Earth orbits could fit into the same space taken up by Jupiter's current orbit.

Therefore, to maximize the number of planets in a system, you have to make the planets as small as possible.
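As a rough sketch of the quantity driving all of this, the Hill radius of a planet of mass m orbiting a star of mass M at distance a is approximately a × (m / 3M)^(1/3). The Python below evaluates it for an Earth-mass and a Mars-mass planet at 1 AU; the rule of keeping neighbouring orbits several mutual Hill radii apart is a commonly used stability heuristic, quoted here only as an illustrative assumption.

```python
# Hill radius: r_H ~ a * (m / (3 * M_star))**(1/3)
M_sun = 1.989e30          # kg
M_earth = 5.972e24        # kg
AU = 1.496e11             # m

def hill_radius(a_m, planet_mass_kg, star_mass_kg=M_sun):
    """Approximate radius of a planet's gravitational sphere of influence."""
    return a_m * (planet_mass_kg / (3 * star_mass_kg)) ** (1 / 3)

for label, mass in [("Earth-mass", M_earth), ("Mars-mass (~0.1 Earth)", 0.1 * M_earth)]:
    r_h = hill_radius(1 * AU, mass)
    print(f"{label} planet at 1 AU: Hill radius ~ {r_h / 1e9:.2f} million km")

# Smaller planets have smaller Hill radii, so their orbits can be packed
# more tightly (stability heuristics typically demand a separation of
# several mutual Hill radii between neighbouring orbits).
```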

The size of the planets is the key to maximizing the number of orbits that could fit into an engineered system. However, there is another clever trick we could exploit to add in a few extra orbits regardless of the planets' size: change the direction in which they move around the sun.

In the current solar system, each planet orbits in the same direction around the sun. This is because the planets formed from a large cloud of dust rotating in the same direction around the sun. However, in our engineered solar system, it would be possible to have planets that orbit the sun in the opposite direction, known as retrograde orbits, Raymond said. However, this idea is somewhat fanciful; retrograde orbits likely do not exist in nature due to the nature of how planets form.

That said, if two planets were to orbit the sun in the opposite direction, the gravitational forces between them would be slightly weakened and the minimum safe distance between their orbits could be reduced.

"If two planets in different orbits are going in the same direction, then they have a longer time to encounter each other as they pass, which creates a larger gravitational kick," Raymond said. However, if they are going in the opposite direction, they zoom past each other and interact for a shorter amount of time, which means they can be closer together without colliding or scattering.


Therefore, if we made every other orbit in our engineered system a retrograde orbit, like a carousel where adjacent people are moving in opposite directions, we could minimize the space needed between each orbit and, in doing so, squeeze in extra planets.

Until this point, we have assumed that each orbit in our engineered solar system contains just one planet. However, it is actually possible to have multiple planets that share an orbit, Raymond said. And we can see an example of this in our current solar system.

Jupiter has two clusters of asteroids, known as the Greeks and the Trojans, that share its orbit. These clusters are located around 60 degrees in front of and behind the gas giant as it orbits the sun, Raymond said. However, astronomers think it is possible to have planets share orbits in a similar way. They've dubbed these theoretical worlds Trojan planets.

People are actively searching for examples of these Trojan planets among exoplanet systems because they're expected to form naturally, Raymond said. However, none have been observed yet, he added.

If we want to maximize the number of planets in our engineered solar system, we will want to have as many of these Trojan planets as possible. However, just like orbits around the sun, the planets sharing a single orbit must be spaced out enough to remain stable.

In a study published in 2010 in the journal Celestial Mechanics and Dynamical Astronomy, a pair of astronomers used Hill radii to work out how many planets could share an orbit. They found that it would be possible to have as many as 42 Earth-size planets share a single orbit. Moreover, just like with the number of orbits in a system, the smaller the planets, the more you could fit into the same orbit, Raymond said.

Of course, the chances of this many planets naturally sharing a single orbit are practically zero, because each planet would need to be exactly the same size and have formed at the same time to be stable, Raymond said. But in an engineered solar system, this level of co-orbital structure would be possible and would greatly increase the number of planets we could squeeze in.


Now that we understand the key variables we need to engineer a planet-packed solar system, it's finally time to crunch the numbers and see how many planets we can fit inside it.

Luckily, Raymond has already done this for us using computer simulations he created; they can be viewed in more detail on his blog, PlanetPlanet. However, it is important to note that although these calculations are based on theories astronomers use to create legitimate simulations, these models are not peer-reviewed and should be regarded with a pinch of playful skepticism.

To maximize the number of planets, Raymond's engineered system extends to 1,000 astronomical units (AU) from the sun. (One AU is the average distance from the sun to Earth's orbit, which is about 93 million miles, or 150 million km.) Currently, the defined edge of the solar system, also known as the heliosphere, is around 100 AU from the sun, according to the European Space Agency, but the sun's gravitational influence can extend much farther. What's more, Raymond's model uses equally sized planets with alternating retrograde orbits.

Taking all of this into account, if you used Earth-size planets, you could fit in 57 orbits, each containing 42 planets, which gives a total of 2,394 planets. However, if you used smaller planets that are one-tenth the size of Earth (roughly the same mass as Mars), you could fit in 121 orbits, each containing 89 planets, which gives a total of 10,769 planets. And if the planets were around the size of the moon (one-hundredth the mass of Earth), you could have 341 orbits, each containing 193 planets, which gives a total of 65,813 planets.
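The totals follow directly from multiplying orbits by planets per orbit; here is a quick sketch of that bookkeeping in Python, with the orbit and per-orbit counts taken straight from the figures above.

```python
# Orbits x planets-per-orbit, using the counts quoted above.
configurations = {
    "Earth-size planets": (57, 42),
    "Mars-size (0.1 Earth) planets": (121, 89),
    "Moon-size (0.01 Earth) planets": (341, 193),
}

for label, (orbits, per_orbit) in configurations.items():
    print(f"{label}: {orbits} orbits x {per_orbit} planets = {orbits * per_orbit:,} planets")
# Earth-size: 2,394; Mars-size: 10,769; Moon-size: 65,813
```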

Obviously, these numbers are extreme, and the ability to engineer such complicated systems is far beyond humanity's reach. But this fun thought experiment does highlight that there is much more space for planets in the solar system than the meager eight we see today. However, it is very unlikely that any more could have formed naturally.

Originally published on Live Science.

See the original post:

What's the maximum number of planets that could orbit the sun? - Verve Times


Breaking the noise barrier: The startups developing quantum computers – ComputerWeekly.com

Today is the era of noisy intermediate-scale quantum (Nisq) computers. These can tackle difficult problems, but they are said to be noisy, which means errors creep in, so many physical qubits are required for every logical qubit that can be applied to problem-solving. This makes it hard for the industry to demonstrate a truly practical advantage that quantum computers have over classical high-performance computing (HPC) architectures.

Algorithmiq recently received $4m in seed funding to enable it to deliver what it claims are truly noise-resilient quantum algorithms. The company is targeting one specific application area, drug discovery, and hopes to work with major pharmaceutical firms to develop molecular simulations that are accurate at the quantum level.

Algorithmiq says it has a unique strategy of using standard computers to un-noise quantum computers. The algorithms it is developing offer researchers the ability to boost the speed of chemical simulations on quantum computers by a factor of 100x compared with current industry benchmarks.

Sabrina Maniscalco, co-founder and CEO at Algorithmiq and a professor of quantum information, computing and logic at the University of Helsinki, has been studying noise in quantum computers for 20 years. "My main field of research is about extracting noise," she said. "Quantum information is very fragile."

In Maniscalco's experience, full fault tolerance requires technological advances in manufacturing and may even require fundamental principles to be discovered, because the science does not exist yet. But she said: "We can work with noisy devices. There is a lot we can do but you have to get your hands dirty."

Algorithmiq's approach is about making a mindset shift. Rather than waiting for the emergence of universal fault-tolerant quantum computing, Maniscalco said: "We look for what types of algorithms we can develop with noisy [quantum] devices."

To work with noisy devices, algorithms need to take account of quantum physics in order to model and understand what is going on in the quantum computer system.

The target application area for Algorithmiq is drug discovery. Quantum computing offers researchers the possibility to simulate molecules accurately at the quantum level, something that is not possible in classical computing, as each qubit can map onto an electron.

According to a quantum computing background paper by Microsoft, if an electron had 40 possible states, modelling every state would require 2^40 configurations, as each position can either have or not have an electron. To store the quantum state of the electrons in a conventional computer memory would require more than 130GB of memory. As the number of states increases, the memory required grows exponentially.
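A hedged sketch of that exponential growth: if each configuration's amplitude is stored as a double-precision complex number (16 bytes, one common but by no means only choice), the memory for n two-level systems is 16 × 2^n bytes. The exact figure any given paper quotes depends on the encoding it assumes; the point is the doubling with every added state.

```python
# Memory needed to hold a full state vector of n two-level systems,
# assuming one 16-byte complex amplitude per configuration (an assumption,
# not the encoding used in any particular paper).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n: int) -> int:
    return BYTES_PER_AMPLITUDE * 2**n

for n in (20, 30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"n = {n:2d}: {gib:,.3f} GiB")
# Every additional two-level system doubles the memory requirement.
```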

This is one of the limitations of using a classical computing architecture for quantum chemistry simulations. According to Scientific American, quantum computers are now at the point where they can begin to model the energetics and properties of small molecules, such as lithium hydride.

In November 2021, a consortium led by Universal Quantum, a University of Sussex spin-out company, was awarded a £7.5m grant from Innovate UK's Industrial Strategy Challenge Fund to build a scalable quantum computer. Its goal is to achieve a million-qubit system.

Many of today's quantum computing systems rely on supercooling to just a few degrees above absolute zero to achieve superconducting qubits. Cooling components to just above absolute zero is required to build the superconducting qubits that are encoded in a circuit. The circuit only exhibits quantum effects when supercooled; otherwise it behaves like a normal electrical circuit.

Significantly, Universal's quantum technology, based on the principle of a trapped-ion quantum computer, can operate at much more normal temperatures. Explaining why its technology does not require supercooling, co-founder and chief scientist Winfried Hensinger said: "It's the nature of the hardware platform. The qubit is the atom that exhibits quantum effects. The ions levitate above the surface of the chip, so there is no requirement on cooling the chip in order to make a better qubit."

Just as a microprocessor may run at 150W and operate at room temperature, the quantum computer that Universal Quantum is building should not require anything more than is needed in an existing server room for cooling.

The design is also more resilient to noise, which introduces errors in quantum computing. Hensinger added: "In a superconducting qubit, the circuit is on the chip, so it is much harder to isolate from the environment and so is prone to much more noise. The ion is naturally much better isolated from the environment as it just levitates above a chip."

The key reason why Hensinger and the Universal Quantum team believe they are better placed to further the scalability of quantum computers is down to the cooling power of a fridge. According to Hensinger, the cooling needed for superconducting qubits is very difficult to scale to large numbers of qubits.

Another startup, Quantum Motion, a spin-out from University College London (UCL), is looking at a way to achieve quantum computing that can be industrialised. The company is leading a three-year project, Altnaharra, funded by UK Research and Innovation's National Quantum Technologies Programme (NQTP), which combines expertise in qubits based on superconducting circuits, trapped ions and silicon spins.

The company says it is developing fault-tolerant quantum computing architectures. John Morton, co-founder of Quantum Motion and professor of nanoelectronics at UCL, said: "To build a universal quantum computer, you need to scale to millions of qubits."

But because companies like IBM are currently running only 127-qubit systems, the idea of universal quantum computing comprising millions of physical qubits, built using existing processes, is seen by some as a pipe dream. Instead, said Morton: "We are looking at how to take a silicon chip and make it exhibit quantum properties."

Last April, Quantum Motion and researchers at UCL were able to isolate and measure the quantum state of a single electron (the qubit) in a silicon transistor manufactured using a CMOS (complementary metal-oxide-semiconductor) technology similar to that used to make chips in computer processors.

Rather than being at a high-tech campus or university, the company has just opened its new laboratory off London's Caledonian Road, surrounded by a housing estate, a community park and a gym. But in this lab, it is able to lower the temperature of components to a shade above absolute zero.

James Palles-Dimmock, chief operating officer at Quantum Motion, said: "We're working with technology that is colder than deep space and pushing the boundaries of our knowledge to turn quantum theory into reality. Our approach is to take the building block of computing, the silicon chip, and demonstrate that it is the most stable, reliable and scalable way of mass manufacturing quantum silicon chips."

The discussion Computer Weekly had with these startups shows just how much effort is going into giving quantum computing a clear advantage over HPC. What is clear from these conversations is that these companies are all very different. Unlike classical computing, which has chosen the stored program architecture described by mathematician John von Neumann in the 1940s, there is unlikely to be one de-facto standard architecture for quantum computing.

Original post:

Breaking the noise barrier: The startups developing quantum computers - ComputerWeekly.com


A new federal effort to bolster the nation's expertise in quantum computing – Federal News Network


Two federal science agencies have together launched a plan to bolster U.S. strength in a field known as quantum information science and technology. The Office of Science and Technology Policy, part of the White House crew, and the National Science Foundation partnered with a group called the National Q-12 Education Partnership to, as they put it, explore training and education opportunities in quantum. The Federal Drive with Tom Temin spoke with the National Science Foundation director, Dr. Sethuraman Panchanathan, about what's going on and why it's important.

Tom Temin: This must be important if the director is taking a personal interest in this particular program. So tell us what is quantum, quantum computing and science and why does it matter so much?

Sethuraman Panchanathan: Thank you so much, Tom. We can look at quantum from different perspectives. For example, in physics, it means the smallest, non-divisible amount of a physical property, such as energy. And at that scale, the rules of nature behave very differently from how they behave at the scale of you and me. From a policy perspective, in education, popular science and technology, and elsewhere, quantum is more often used as shorthand for quantum information science and engineering, referred to as QISE, sometimes also called QIST, where the T is for technology. This use of quantum essentially covers the set of disciplines that are involved: physics, material science, chemistry, computer science, engineering, mathematics and so on. So in collaboration with industry, we are using unique properties that exist at a quantum scale to develop practical applications, such as quantum computers, quantum sensors and quantum communication networks. In this context, you often hear about quantum education or quantum workforce as other variations on this theme.

Tom Temin: And this is a technology that China is pursuing. And when we get down to the level of quantum mechanics used in quantum calculation, what can it do that we can't do now?

Sethuraman Panchanathan: The speed of computing that you can do, the scale at which you can do this, the energy consumption that goes with it, that is, a much lower energy consumption, all of these things make the future of computing exceedingly exciting. We can solve mega problems, huge problems, whether in relation to climate, or predictive properties, like the prediction of a pandemic, for example. Working with the human genome data, and a whole host of things where you can actually process things at speed and at scale. And that's what makes this very exciting. Clearly, there are many countries who are also pursuing approaches to enhancing their capacities and capabilities and technologies in quantum, because it's a leading-edge technology, the future industry, if you want to look at it that way. We have to be in the vanguard of how we make sure that we are not only producing the research, the advanced research concepts, but also translating them into technologies, working with industry, but most importantly, training this diverse workforce that is capable of engaging in this new area, which is not just a disciplinary area, as I said earlier; it is an interdisciplinary area, bringing together multiple disciplines.

Tom Temin: Now you have several companies that have claimed they are at the quantum computing level and are using the units of quantum computing that have come into the parlance. Google I think is one, maybe IBM is one, maybe Amazon is one. But it sounds like you're talking about something larger than that, which has been hard to verify. So my question is, isn't this what they're teaching now anyway, in the computer science schools?

Sethuraman Panchanathan: So when you teach at a computer science school, and I'm a computer scientist myself, you might see one facet of quantum computing, as it pertains to the computer science aspects of it. But when you want to sort of train people in the broadest sense of what quantum means, for example, a quantum engineer must know elements of coding, quantum mechanics, low-temperature physics, material science and electronics in order to build and operate a computer. So as you can see, Tom, it requires training which brings inspirations from multiple disciplines in training the quantum workforce of the future, and quantum researchers of the future. They may pursue research in a particular facet of it, but they need to have the broadest understanding of what it means to work in this area of quantum. So when you talk about the industry, therefore, they're looking for such talent being generated at scale, so that we might be in the vanguard of competitiveness.

Tom Temin: We're speaking with Dr. Sethuraman Panchanathan, director of the National Science Foundation. The difference here, I guess, is that in traditional computer science and electrical engineering, one can proceed relatively free of the other, because you can run something in a new programming language on old hardware. And new hardware can run software designed for an older piece of hardware. But in this case, it sounds like the nation needs a systems approach to getting to quantum.

Sethuraman Panchanathan: That's an excellent way of saying it, Tom, a systems approach. That's exactly what it is. Right from determining the basic materials to the building of the devices, the building of the system, and the programming of the system to do the things that you want it to do. All of this requires training and understanding at the scale that we need. For example, for the quantum workforce, we might need a diverse set of specialists who have this broad training along with specializations in certain aspects. For example, you could have everyone from qualified machinists producing intricate parts to academic researchers exploring the theoretical limits of a quantum-scale environment. So because the field is expanding rapidly, alongside swift technological progress in quantum computing and networking, the demand for qualified workers is increasing, as you talked about earlier, from industry.

But our schools may not always be ready to switch from a disciplinary training to the diverse, multidisciplinary one needed here. So industry, academia and governments alike are facing shortages of qualified people. Which means, to every problem there is an opportunity, isn't it, Tom? Therefore, the shortage in the QISE workforce opens up opportunities for broadening participation, including, because we talk about diversity of discipline, diversity of so many facets that can be brought to this challenge that we are facing right now. So for example, minority-serving institutions as partners in solving the workforce shortage issue would be a fantastic outcome. So this way, thanks to the disciplinary diversity, QIST and QISE offer unique opportunities to broaden participation, and include meaningful activities to bring into the ecosystem the missing millions, the talent that is available in our nation, across the broad socioeconomic demographic, and the geographic diversity of the nation being brought fully into the workforce and into the research realm, and creating new entrepreneurs of the future and robust industries of the future. So that's what I believe this quantum revolution will bring to bear.

Tom Temin: All right, so now we have an actual program of the NSF and also of the White House, and of this group called the National Q-12 Education Partnership. What is going to happen under this trilateral type of agreement?

Sethuraman Panchanathan: So the National Q-12 Education Partnership, as you outline, is a partnership of OSTP, NSF, and key community stakeholders, including industry, professional societies and academia. So it takes all of the above in terms of coming together to build this future. So it builds upon efforts spearheaded by OSTP and NSF to develop nine key QIS concepts that can be introduced to and adapted for computer science, mathematics, physics and chemistry courses throughout middle and high schools, which speaks to what you asked earlier about augmenting these disciplines. So the work focuses on helping America's educators ensure a strong quantum learning environment, from providing classroom tools for hands-on experiences, to developing educational materials, to supporting pathways to quantum careers.

So together, as a partnership that you talked about, we hope to foster a range of training opportunities to increase the capabilities, diversity and number of students who are ready to engage in the quantum workforce. So as I said earlier, this partnership provides teaching materials, curriculum development frameworks, learning and teaching resources, informative events and coordination for industry involvement, ultimately creating opportunities for both teachers and students.

Tom Temin: You have to have the teachers capable of imparting this knowledge in order to have students interested in it. So again, sounds like you need a vertical approach from student all the way up through say, faculty and administration of some of these institutions.

Sethuraman Panchanathan: Exactly, Tom, you brought up the point; that is precisely what it is. It is at all levels that we have to address this. So it is not just at the research level, it is not just at the teacher-training level, it goes all the way up to student levels. How do we excite students to be able to engage in this quantum revolution? Right? For example, when this plan was released, we also announced a $2.2 million grant supplement to the Montana-Arkansas (MonArk) NSF Quantum Foundry, led by Montana State University and the University of Arkansas, to create the Arkansas-Montana-South Dakota quantum photonics alliance, the QP alliance. This alliance extends the MonArk Quantum Foundry that we had already funded to the tune of about $20 million, which focused on novel materials and devices for future quantum computing and networking, as well as chip-scale integrated quantum photonics devices. So what we're trying to do here with these augmentations, and as you know, the University of Arkansas at Pine Bluff is a historically Black university, and so it's thrilling to see how we might bring opportunities to all institutions to be able to engage, develop the appropriate curriculum, train the teachers, and also make the foundry accessible to any fifth or eighth grader who's excited about wanting to play with quantum and learn more and get excited. I call it the quantum spark: how do we get them to get that? So these kinds of infrastructure investments then make possible those kinds of things happening, also exciting students, even at the high school level or before, and then university students, and then building the research capacity at the same time, all of this happening at the same time. So in fact, the NSF released a dear colleague letter on advancing quantum education and workforce development, which essentially opens up existing programs that NSF has with tribal colleges and universities, called the TCUP program, and NSF's Innovative Technology Experiences for Students and Teachers (ITEST) program, and the NSF INCLUDES program, among many other programs, to activities that broaden participation in quantum workforce and education.

Tom Temin: Now, early in the Space Race, back in the late 1950s, people saw Sputnik go overhead. And there was the majesty of the great expanse that inspired a generation of people to go into science and engineering in the Space Race. You can't see quantum, you can't touch it. And so how do you get young kids interested in it, do you think, so that they say, wow, that's what I want to do?

Sethuraman Panchanathan: The way you do that is, and you raise an excellent point, the way you do that is by communicating the excitement of quantum, by actually having them look at the outcomes of what a quantum computer can do, or a quantum sensor can do. You know, these days people are working with these phones that they carry all around, right, which have, you know, millions and millions of transistors and devices. So what you do is you say, this is what a quantum computer will do. Contrasting it to what is in your hand today, what are the kinds of things it will do? How will it change the whole way in which we look at the future, in terms of concrete examples? So the more we talk about it in outcome terms, the more we can get people excited. In addition to being able to see things, it's about experiencing things.

Continue reading here:

A new federal effort to bolster the nations expertise in quantum computing - Federal News Network
