
Connections are the superpower of the College of ACES – Agri News

CHICAGO – The COVID-19 pandemic not only triggered many difficulties for people; it also provided opportunities.

"It's been a very difficult time for people not only here, but around the world, and people have rallied to try to do their best to navigate the situation," said Kim Kidwell, dean of the College of Agriculture, Consumer and Environmental Sciences at the University of Illinois.

"What is fabulous about this situation is the research we do became more obvious to people," said Kidwell during a presentation at The Chicago Farmers meeting. "People didn't understand what U of I Extension does or what's done in the College of ACES that's helpful for their everyday lives."

For example, Extension already had a listing of Illinois food banks.

"It didn't take long to put that information on a website," Kidwell said. "There were some people who had never sought a food bank before who wanted to know where to get food in their local community."

Access to 4-H programs also was expanded last year.

"Some of it was difficult to do online, but it also allowed people who couldn't get to events to participate online," Kidwell said. "We've realized that some hybrid opportunities of in-person and online will serve well to expand access."

A lot of engagement is now happening with digital agriculture.

"Our farmdoc is one of the best examples of a successful digital program in the country and it has been around for 20 years," Kidwell said. "They were early innovators to provide real-time information to growers on a daily basis so they can make decisions on what to do on their farms."

One Of A Kind

Students at the U of I have the opportunity to study a new major.

"The Metropolitan Food and Environmental Systems major is designed to bring in people who may not come from an ag background," Kidwell said. "It's the first undergraduate major of its kind focused on all the components of the system, and the goal is to break down the silos between the disciplines and have students be versed in all aspects of the food system."

Food system resilience was a big issue during the pandemic.

"This major is designed to take a look at some of these things not only locally, but at the national level, as well," Kidwell said. "Graduates from this program will drive innovation and work with infrastructure and policy changes in the food and agricultural space."

Connections, Kidwell said, are the superpower of the College of ACES.

"We love to do things that matter to people," she said. "We have a system on campus that provides lots of opportunity for us to discover things, to ground test them and to de-risk them before they go into commercial channels."

The U of I Research Park includes about 120 companies.

"This is where academia meets industry," Kidwell said. "We hope some of the discoveries we make on campus come to life with our industry partners so we can ground test them in our field space adjacent to campus."

The Future Is Now

A couple of years ago, the College of ACES and the Grainger College of Engineering worked together to develop the Center for Digital Agriculture, which resulted in a couple of new majors: computer science plus crop science, and computer science plus animal science.

"It's fabulous to see engineers talk to crop and animal scientists and vice versa," Kidwell said. "They've opened each other's eyes to possibilities."

Data analytics is a big component of the center, as well as automation and precision agriculture.

"We're building an online program for digital agriculture that should launch in about a year," Kidwell said.

A 4-acre autonomous farm has been established on campus.

"It has all kinds of high-tech equipment like robots that can go between rows of corn," Kidwell said. "The robots do exciting things like identify weeds through facial recognition software and shoot lasers at the weeds to kill them."

The goal of these projects is to improve farmer productivity, as well as to improve environmental safety and sustainability.

"It's high-tech agriculture, and urban gardening is a big part of this, including fruit harvesting with mechanical robots," Kidwell said.

"We have an aspirational goal to create an Extension and public engagement enterprise that is the envy of our peers," she said. "We want to create a southern portal to campus that is a destination."

One of Kidwell's favorite places on the U of I campus is the Arboretum.

"We did a master plan for the Arboretum to elevate it to a more visible asset for the U of I with more pathways, demonstration areas for teaching, hospitality space for weddings and the creation of a home for Extension," she said.

Currently, Extension is located at six places on campus.

"We would like to have a new facility as the heartbeat of Extension," Kidwell said. "This building will be the go-to place for people to connect to teach, learn and engage with the community."

The goal is to kick the project off in the next couple of months through a fundraising campaign.

"We're excited for this space to cultivate engagement and host events for 4-H, master gardeners and for kids to come here to learn," Kidwell said.

Go here to read the rest:

Connections are the superpower of the College of ACES - Agri News

Read More..

What is Google Project Starline and how does it render in hyper-real 3D? – Pocket-lint

(Pocket-lint) - People are still learning from the coronavirus pandemic, including that there is no substitute for being together in a room with someone.

While video conferencing apps such as Zoom, Google Meet, and Microsoft Teams try to help you virtually connect with those you love and work with, the experience still feels rather limited. So, in an attempt to push the boundaries of remote collaboration, Google has come up with Project Starline.

Google CEO Sundar Pichai took the stage at Google I/O 2021 to announce that his company kicked off a project "several years ago" that builds on different areas of computer science and relies on custom-built hardware and highly specialised equipment. Called Project Starline, it's an early system that essentially allows you to video call someone and experience them in hyper-realistic 3D.

There are three main components to Project Starline: capturing people in three dimensions, compressing and streaming that data in real time, and rendering it on a light field display.

Project Starline uses high-resolution cameras and custom depth sensors to capture a user's shape and appearance from multiple perspectives, and then all that is fused together by software to create an extremely detailed, real-time 3D model. Google said it's applying research in computer vision, machine learning, spatial audio, and real-time compression. The effect is the feeling of a person sitting just across from you.

The resulting data is also huge - many gigabits per second.

So, to send this 3D imagery over existing networks, Google developed novel compression and streaming algorithms that reduce the data by a factor of more than 100. Google also developed a light field display that shows you the realistic representation of someone sitting right in front of you in three dimensions.
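As a rough back-of-the-envelope check on what that compression factor implies, here is a tiny sketch; the raw data rate below is an assumed figure, since the article only says "many gigabits per second" and a reduction "by a factor of more than 100".

```python
# Hypothetical numbers for illustration only; Google has not published exact rates here.
raw_rate_gbps = 10.0       # assumed raw capture rate ("many gigabits per second")
compression_factor = 100   # "a factor of more than 100"

compressed_rate_mbps = raw_rate_gbps * 1000 / compression_factor
print(f"Compressed stream: roughly {compressed_rate_mbps:.0f} Mbit/s")  # ~100 Mbit/s
```

Under those assumptions, the per-call stream lands around 100 Mbit/s, within reach of a fast office network connection, which is presumably why the custom compression and streaming algorithms matter.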

As you move your head and body, Google's system can adjust the images you see in the light field display to match your perspective. It creates a sense of volume and depth without the need for additional glasses or headsets. "You can talk naturally, gesture and make eye contact," CEO Sundar Pichai described at I/O 2021. "It's as close as we can get to the feeling of sitting across from someone."

Google said it's already spent thousands of hours testing Project Starline in its own offices. Although there are no plans to commercially release a product for consumers, it said there's been excitement from enterprise partners, and it is planning to expand access to partners in healthcare and media.

Check out Google's blog post for more details.

Writing by Maggie Tillman.

See the article here:

What is Google Project Starline and how does it render in hyper-real 3D? - Pocket-lint

Read More..

Brain-Computer Interface Translates Brain Signals Associated with Handwriting into Text | Computer Science, Neuroscience – Sci-News.com

Researchers with the BrainGate Collaboration have deciphered the brain activity associated with handwriting: working with a 65-year-old (at the time of the study) participant with paralysis who has sensors implanted in his brain, they used an algorithm to identify letters as he attempted to write them; the system then displayed the text on a screen. By attempting handwriting, the participant typed 90 characters per minute, more than double the previous record for typing with a brain-computer interface.

As part of the BrainGate clinical trial, researchers are using tiny electrode arrays to record signals from the motor cortex of the brain; those signals can then be used to control robotic prostheses, computers or other devices; the hope is that such a system may one day help restore communication and movement in people with paralysis due to injury or illness. Image credit: BrainGate.org.

Brain-computer interfaces can restore communication to people who have lost the ability to move or speak.

So far, a major focus of brain-computer interface research has been on restoring gross motor skills, such as reaching and grasping or point-and-click typing with a computer cursor.

However, rapid sequences of highly dexterous behaviors, such as handwriting or touch typing, might enable faster rates of communication.

Scientists from the BrainGate Collaboration have been working for several years on such systems.

Previous studies have involved trial participants thinking about the motions involved in pointing to and clicking letters on a virtual keyboard. That system enabled one participant to type 40 characters per minute, which was the previous record speed.

For the latest study, the BrainGate researchers wanted to find out if asking a participant to think about motions involved in writing letters and words by hand would be faster.

"An important mission of our BrainGate Consortium research is to restore rapid, intuitive communication for people with severe speech or motor impairments," said Professor Leigh Hochberg, a critical care neurologist in the School of Engineering and Carney Institute for Brain Science at Brown University, the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital, and the Department of Veterans Affairs Providence Healthcare System.

"The new demonstration of fast, accurate neural decoding of handwriting marks an exciting new chapter in the development of clinically useful neurotechnologies."

"We want to find new ways of letting people communicate faster," said Dr. Frank Willett, a neuroscientist at Stanford University and the Howard Hughes Medical Institute.

"This new system uses both the rich neural activity recorded by intracortical electrodes and the power of language models that, when applied to the neurally decoded letters, can create rapid and accurate text."

As part of the clinical trial, the scientists placed two tiny electrode arrays, each about the size of a baby aspirin, in a part of the trial participant's brain associated with the movement of his right arm and hand.

Using signals the sensors picked up from individual neurons when the man imagined writing, a machine learning algorithm recognized the patterns his brain produced with each letter.

With this system, the man could copy sentences and answer questions at a rate similar to that of someone the same age typing on a smartphone.

"The system is so fast because each letter elicits a highly distinctive activity pattern, making it relatively easy for the algorithm to distinguish one from another," Dr. Willett said.
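To make that intuition concrete, here is a deliberately toy sketch using synthetic data; it is not the study's actual decoder (which paired a neural network with language models, as described above), but it shows why highly distinctive, repeatable activity patterns are easy to tell apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 192                      # hypothetical number of recorded channels
letters = list("abcdefghij")          # toy alphabet

# One distinctive "activity template" per letter (purely synthetic stand-ins).
templates = {c: rng.normal(size=n_channels) for c in letters}

def simulate_trial(letter, noise=0.8):
    """Return a noisy observation of that letter's activity pattern."""
    return templates[letter] + noise * rng.normal(size=n_channels)

def classify(observation):
    """Nearest-template classification by Euclidean distance."""
    return min(letters, key=lambda c: np.linalg.norm(observation - templates[c]))

trials = [(c, simulate_trial(c)) for c in letters for _ in range(100)]
accuracy = np.mean([classify(x) == c for c, x in trials])
print(f"Toy decoding accuracy: {accuracy:.1%}")
```

Because the templates are distinctive and high-dimensional, even this crude nearest-template rule is essentially perfect on the toy data; the hard parts in the real study are noise, drift over time, and stitching letters into running text, which is where the neural decoder and language models come in.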

The innovation could, with further development, let people with paralysis rapidly type without using their hands, said Dr. Krishna Shenoy, a researcher at Stanford University.

"This technology and others like it have the potential to help people with all sorts of disabilities," added Dr. Jose Carmena, a neural engineer at the University of California, Berkeley.

Though the findings are preliminary, it's a big advancement in the field.

Brain-computer interfaces convert thought into action. This paper is a perfect example: the interface decodes the thought of writing and produces the action.

"The people who enroll in the BrainGate trial are amazing," Professor Hochberg said.

"It's their pioneering spirit that not only allows us to gain new insights into human brain function, but that leads to the creation of systems that will help other people with paralysis."

The team's work was published in the journal Nature.

_____

F.R. Willett et al. 2021. High-performance brain-to-text communication via handwriting. Nature 593, 249-254; doi: 10.1038/s41586-021-03506-2

Read the original here:

Brain-Computer Interface Translates Brain Signals Associated with Handwriting into Text | Computer Science, Neuroscience - Sci-News.com

Read More..

Grand Valley State University: Where tomorrow’s STEM experts are made – Study International News

How does an identified need lead to a functional technical solution? How can creativity transform complex problems into captivating, real products? The answers can be found in the educational preparation of the STEM professionals who use their expertise to launch widely used apps, engineer self-driving cars, and innovate better practices for scientific research, health systems, transportation, banking, and communications.

STEM graduates are at the forefront of innovation and change, bettering society with their discoveries. After all, the future of the world and its economies relies heavily on science, technology, engineering, and mathematics, making professionals in the field well-deserving of the hefty salaries and wide variety of employment opportunities they command.

Becoming a STEM professional requires the right combination of education and experience. Now, envision an ideal educational institution that offers 24/7 access to state-of-the-art facilities, opportunities to gain industry experience, early exposure to project-based learning, individualised guidance from experts in small classrooms, and ample resources for post-graduation support.

At Grand Valley State University (GVSU), there's no need to imagine such a combination of resources for STEM students. The GVSU Seymour & Esther Padnos College of Engineering and Computing (PCEC) boasts all these offerings and more.

Outstanding outcomes are common at the School of Computer and Information Systems. Source: Grand Valley State University

PCEC is home to a variety of undergraduate and graduate STEM programmes that develop world-changing, technical experts. PCEC graduates have established meaningful careers creating their own businesses, developing high-impact technologies, and leading multinational corporations.

Thomas Rudd is a perfect example of the successes experienced by PCEC graduates. After completing his BS in Information Systems, he spent 18 months on a project that improved data storage. His customers ended up being over 200,000 of Ford's employees around the world. He even gave the multinational's CEO a demo. Rudd's next destination was Ford's Cybersecurity Engineering team where he spent 10 months working on their Security Information & Event Management system. Today, he is thriving in Seattle, Washington as a Cloud Support Engineer in Amazon's Intelligence Initiative Programme. Thomas credits his successes to his Grand Valley education, which provided him with IT fundamentals and specialised knowledge in a range of areas.

Outcomes like these are common for those who apply their passions in the GVSU School of Computer and Information Systems. Here, undergraduate majors include Computer Science, Cybersecurity, Information Systems, Information Technology, and several minors that complete the experience. Those interested in graduate studies could choose to pursue Applied Computer Science, Cybersecurity, Health Informatics and Bioinformatics, or Data Science and Analytics.

For those looking to build the future, the GVSU School of Engineering provides an excellent launch pad. Undergraduates can opt to form connections between science, medical knowledge, mathematics and creativity by majoring in Biomedical Engineering; they could integrate electrical engineering with computer science to analyse and solve problems with Computer Engineering; or they could develop a wide range of electrical and electronic technologies through Electrical Engineering.

The college also hosts programmes in Interdisciplinary Engineering; Mechanical Engineering; as well as Product Design and Manufacturing Engineering (one of only a few such programs in the world). Those with a solid foundation could advance their career with a Master of Science in Engineering.

Those who wish to form connections between science, medical knowledge, mathematics and creativity can do so by majoring in Biomedical Engineering. Source: Grand Valley State University

If you're on a quest to begin an impactful career managing workplace health and safety, the college offers a BS in Occupational Safety and Health Management. This bachelor's degree provides real-world experiences and countless opportunities to build a strong professional network through student organisations and/or conferences.

PCEC is committed to preparing highly qualified industry professionals. Individuals who wish to gain a competitive edge in their fields can do so with one of five Professional Science Master's Degree programmes: Applied Statistics, Biostatistics, Cell and Molecular Biology, Data Science and Analytics, or Health Informatics and Bioinformatics.

Each path offers flexible class schedules and evening meetings which make it all the more accessible for students with existing responsibilities. The best part? Most end up sharing their newfound knowledge in reputable companies such as Priority Health, VanAndel Research Institute, Meijer, Spectrum Health, Integrated DNA Technologies, Cisco Systems, and NxGen MDx.

Regardless of your choice, every PCEC programme offers access to industry internships or co-op experiences. In fact, they're integrated into all programmes at the undergraduate level. Since these are paid positions, those enrolled rack up a stellar ROI early on. Most opportunities even result in job offers, contributing to the college's nearly 100% employment-upon-graduation rate. Don't just take our word for it; listen to these PCEC students share their perspectives about Grand Valley State University.

STEM experts are in high demand in the United States and around the world. They are needed in every industry. In fact, the top 20 employers of PCEC graduates are in 19 different industries! According to the US Bureau of Labor Statistics, employment in STEM occupations is projected to grow 8.8% by 2028, and healthcare-related occupations, such as Biomedical Engineering, Occupational Safety and Health Management, and Health Informatics, are projected to grow even more. Meanwhile, non-STEM occupations will only grow by 5%.

What's more, the STEM field is becoming increasingly diverse. According to data from the US Census Bureau, more women are being represented in the field. In 1970, women made up a mere 8% of STEM workers in the US; by 2019, the number had risen to 27%.

To fast-track your way to your dream STEM career in West Michigan, a region known for its beautiful Great Lakes, automotive innovations, and multinational corporation headquarters, and beyond, apply to the GVSU Padnos College of Engineering and Computing here.

Follow GVSU Padnos College of Engineering & Computing on Facebook, Twitter, Instagram, LinkedIn, and YouTube

Follow this link:

Grand Valley State University: Where tomorrow's STEM experts are made - Study International News

Read More..

How Quantum Physics Allows Us To See Back Through Space And Time – Forbes

There are limits to how far back we can see: the earliest galaxies, the first stars, and even the emission of the leftover glow from the Big Bang when neutral atoms first stably form. However, if it weren't for the quantum mechanical property of enabling a two-photon transition between higher and lower energy spherical states, our Universe would not only look very different, but we wouldn't be able to see as far back in time or through space.

In many ways, our views of the distant Universe are the closest things we'll ever get to having a time machine. While we cannot travel back through time, we can do the next best thing: view the Universe not as it is today, but rather as it was a significant amount of time ago. Whenever light is emitted from a distant source, like a star, galaxy, or quasar, it first needs to traverse the vast cosmic distances separating that source from ourselves, the observer, and that takes time.

Even at the speed of light, it can take billions or even over ten billion years for those signals to arrive, meaning that the farther away we see a distant object, the closer back in time towards the Big Bang we're looking. The earliest light we can see, however, comes from a time before any stars or galaxies: when the Universe's atomic nuclei and electrons combined to form neutral atoms. Yet it's only a very specific quirk of quantum physics that allows us to see the Universe as it was so long ago. Without it, the earliest signals wouldn't exist, and we'd be unable to look as far back through space and time as we can today. Here's how quantum physics allows us to see so far back in space and time.

The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB. New predictions like these are essential for demonstrating the validity of a proposed fine-tuning mechanism.

To understand where the earliest observable signal in the Universe comes from, we have to go way back in time: to the earliest instants of the Big Bang. Back when the Universe was hot, dense, almost perfectly uniform, and filled with a mix of matter, antimatter, and radiation, it was expanding incredibly rapidly. In these earliest moments, there were regions of the Universe that were slightly denser than average and regions that were slightly less dense than average, but only by ~1 part in 30,000.

If it were up to gravity alone, the overdense regions would grow, attracting more of the surrounding matter than the average or underdense regions, while the underdense regions would give up their matter to the denser surrounding regions. But the Universe isn't governed only by gravity; the other forces of nature play an important role. Radiation, for example, particularly in the form of photons, is extremely energetic in the early Universe, and its effects on how matter evolves are important in a number of ways.

At early times (left), photons scatter off of electrons and are high enough in energy to knock any atoms back into an ionized state. Once the Universe cools enough, and is devoid of such high-energy photons (right), they cannot interact with the neutral atoms, and instead simply free-stream, since they have the wrong wavelength to excite these atoms to a higher energy level.

First off, matter (and antimatter), if it's electrically charged, will readily scatter off of photons. This means that any quantum of radiation, anytime it encounters a charged particle, will interact and exchange energy with it, with encounters being more likely with low-mass charged particles (like electrons) than high-mass ones (like protons or atomic nuclei).

Second off, as matter attempts to gravitationally collapse, the energy density of that region rises above the average. But radiation responds to those higher energy densities by flowing out of those high-density regions into the lower density ones, and this leads to a sort of bounce: gravity pulls matter into the overdense region, radiation pressure builds and pushes it back out, the region rarefies below the average density, gravity pulls the matter back in, and the cycle continues.

When we talk about the fluctuations we see in the cosmic microwave background, they follow a particular pattern of wiggles that corresponds to these bounces, or acoustic oscillations, occurring in the plasma of the early Universe.

As our satellites have improved in their capabilities, they've probed smaller scales, more frequency bands, and smaller temperature differences in the cosmic microwave background. The temperature imperfections help teach us what the Universe is made of and how it evolved, painting a picture that requires dark matter to make sense.

But there's a third thing happening concurrently with all of these: the Universe is expanding. When the Universe expands, its density drops, since the total number of particles within it stays the same while the volume increases. A second thing, however, happens as well: the wavelength of every photon, every quantum of electromagnetic radiation, stretches as the Universe expands. Because a photon's wavelength determines its energy, with longer wavelengths corresponding to lower energies, the Universe also cools off as it expands.

A Universe that gets less dense and cools from an initially hot and dense state will do a lot more than just gravitate. At high energies, every collision between two quanta will have a chance to spontaneously create particle/antiparticle pairs; as long as there's enough energy available in each collision to create massive particles (and antiparticles) via Einstein's E = mc², there's a chance it will happen.

At early times, this happens copiously, but as the Universe expands and cools, it stops happening, and instead when particle/antiparticle pairs meet, they annihilate away. When the energy drops to low enough values, only a tiny excess of matter will remain.
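As a rough worked number (a standard order-of-magnitude estimate, not a figure from the article), the threshold for making electron/positron pairs corresponds to collision energies around the electron's rest energy, which translates to a temperature of several billion kelvin:

$$
k_B T \sim m_e c^2 = 0.511\ \mathrm{MeV}
\quad\Rightarrow\quad
T \sim \frac{0.511\times 10^{6}\ \mathrm{eV}}{8.617\times 10^{-5}\ \mathrm{eV/K}}
\approx 6\times 10^{9}\ \mathrm{K}
$$

Below roughly that temperature, collisions can no longer replenish electron/positron pairs, so the existing pairs annihilate away and only the small matter excess survives.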

In the early Universe, the full suite of particles and their antimatter particles were extraordinarily abundant, but as the Universe cooled, the majority annihilated away. All the conventional matter we have left over today is from the quarks and leptons, with positive baryon and lepton numbers, that outnumbered their antiquark and antilepton counterparts.

As the Universe continues to expand and cool, and as the density and temperature both drop, a number of other important transitions happen, culminating in the formation of stable, neutral atoms.

It's only once this final step is complete, a step taking over 100,000 years, that the Universe becomes transparent to the light present within it. The ionized plasma that existed previously absorbs and re-emits photons continuously, but once neutral atoms form, those photons simply free-stream and redshift with the expanding Universe, creating the cosmic microwave background we observe today.

A Universe where electrons and protons are free and collide with photons transitions to a neutral one that's transparent to photons as the Universe expands and cools. Shown here is the ionized plasma (L) before the CMB is emitted, followed by the transition to a neutral Universe (R) that's transparent to photons. The light, once it stops scattering, simply free-streams and redshifts as the Universe expands, eventually winding up in the microwave portion of the spectrum.

That light, on average, comes to us from a time corresponding to ~380,000 years after the Big Bang. This is incredibly short compared to our Universe's history of 13.8 billion years, but is very long compared to the earlier steps, which occur over the first fraction-of-a-second to the first few minutes after the Big Bang. Because photons outnumber atoms by more than a billion-to-one, even a tiny number of super-energetic photons can keep the entire Universe ionized. Only when the photons cool to a specific threshold, corresponding to a temperature of about ~3000 K, can these neutral atoms finally form.
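Some standard numbers (textbook estimates, not from the article) show why that billion-to-one photon excess matters. At 3000 K the typical photon energy is far below hydrogen's 13.6 eV binding energy, yet neutral atoms still cannot survive much earlier, because with around a billion photons per atom the rare high-energy tail of the radiation contains enough ionizing photons until the temperature falls this low:

$$
k_B T\big|_{3000\ \mathrm{K}} \approx 0.26\ \mathrm{eV} \ll 13.6\ \mathrm{eV},
\qquad
\frac{n_\gamma}{n_{\mathrm{atoms}}} \gtrsim 10^{9}
$$

A naive estimate of 13.6 eV divided by Boltzmann's constant, about 160,000 K, would overestimate the recombination temperature by a factor of roughly 50 for exactly this reason.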

But there's an immediate problem with that final step, if you think about it.

When electrons bind to atomic nuclei, they'll cascade down the various energy levels in a chain reaction. Eventually, those electrons will make their most energetic transition: to the ground state. The most common transition that occurs is from the second-lowest energy state (called n=2) to the lowest state (n=1), in which case it emits an energetic, Lyman-series photon.

Electron transitions in the hydrogen atom, along with the wavelengths of the resultant photons, showcase the effect of binding energy and the relationship between the electron and the proton in quantum physics. Hydrogen's strongest transition is Lyman-alpha (n=2 to n=1), but its second strongest is visible: Balmer-alpha (n=3 to n=2).

Why is this a problem? We needed the Universe to cool below about ~3000 K so that there wouldn't be enough energetic photons to re-excite those ground-state electrons back to an excited state, where they'd be easy to ionize. So we waited and waited and waited, and finally, a few hundred thousand years after the Big Bang, we got there. At that time, electrons bind to nuclei, they cascade down their various energy levels, and finally make a transition down to a ground state.

That energetic, final transition causes the emission of a high-energy, Lyman-series photon. Now, if you've begun to form neutral atoms all over the Universe, you can calculate how far that Lyman-series photon travels before smashing into a neutral atom, and compare that to the amount of redshifting that will occur for that photon. If it redshifts by a great enough amount, its wavelength will lengthen and atoms won't be able to absorb it. (Remember, atoms can only absorb photons of particular frequencies.)

When you do the math, however, you find that the overwhelming majority of photons produced by these transitions to the ground state, about 99,999,999 out of every 100,000,000, simply get reabsorbed by another, identical atom, which then can very easily become ionized.

When an electron transitions from a higher-energy state to a lower-energy state, it typically emits a single photon of a particular energy. That photon, however, has the right properties to be absorbed by an identical atom in that lower-energy state. If this were to occur exclusively for a hydrogen atom reaching the ground state in the early Universe, it would not be sufficient to explain our cosmic microwave background.

This implies something rather disturbing: we waited all this time for the Universe to become electrically neutral, and then when it does, we calculate that practically every atom that does so will itself be responsible for re-ionizing a different atom of the same type.

You might think that this means we just need to wait for a sufficient amount of time, and then enough of these transitions will occur with a sufficiently long time passing between when those photons are emitted and when they encounter another atom. That's true, but the time it would take for the Universe to become electrically neutral wouldn't be ~380,000 years if this were the way it happened. Instead, it would take more like ~790,000 years for this transition to occur, by which point the Universe would have dropped all the way down to more like ~1900 K in temperature.

In other words, the simplest way you'd attempt to form neutral atoms, the way it happens naturally when the ions in our Universe recombine today, cannot be the main mechanism for how it occurred in the early Universe.

The lowest energy level (1S) of hydrogen, top left, has a dense electron probability cloud. Higher energy levels have similar clouds, but with much more complicated configurations. For the first excited state, there are two independent configurations: the 2S state and the 2P state, which have different energy levels due to a very subtle effect.

So how does it happen, then? You have to remember that the lowest-energy state for an electron in an atom, the n=1 state, is always spherical. You can fit up to two electrons in that state, and so hydrogen, the most common element in the Universe, always has one electron in the n=1 state when it gets there.

However, the n=2 state can fit up to eight electrons: there are two slots in a spherical state (the s-orbital) and two slots in each of the x, y, and z directions (the p-orbitals).

The problem is that transitions from one s-orbital to another are forbidden, quantum mechanically. There's no way to emit one photon from an s-orbital and have your electron wind up in a lower energy s-orbital, so the transition we talked about earlier, where you emit a Lyman-series photon, can only occur from the 2p state to the 1s state.

But there is a special, rare process that can occur: a two-photon transition from the 2s state (or the 3s, or 4s, or even the 3d orbital) down to the ground (1s) state. It occurs only about 0.000001% as frequently as the Lyman-series transitions, but each occurrence nets us one new neutral hydrogen atom. This quantum mechanical quirk is the primary method of creating neutral hydrogen atoms in the Universe.
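That quoted rarity lines up with textbook decay rates (standard values, not taken from the article): the 2s-to-1s two-photon decay proceeds at about 8.2 per second, while the 2p-to-1s Lyman-alpha transition proceeds at about 6.3 × 10⁸ per second:

$$
\frac{A_{2s\to 1s}}{A_{2p\to 1s}} \approx \frac{8.2\ \mathrm{s^{-1}}}{6.3\times 10^{8}\ \mathrm{s^{-1}}}
\approx 1.3\times 10^{-8} \approx 0.000001\%
$$

Crucially, the two emitted photons share the transition energy between them, so neither one carries enough energy to undo the process by exciting another ground-state hydrogen atom the way a single Lyman-series photon can.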

When you transition from an "s" orbital to a lower-energy "s" orbital, you can on rare occasion do it through the emission of two photons of equal energy. This two-photon transition occurs even between the 2s (first excited) state and the 1s (ground) state, about one time out of every 100 million transitions, and is the primary mechanism by which the Universe becomes neutral.

If it weren't for this rare transition, from higher energy spherical orbitals to lower energy spherical orbitals, our Universe would look incredibly different in detail. We would have different numbers and magnitudes of acoustic peaks in the cosmic microwave background, and hence a different set of seed fluctuations for our Universe to build its large-scale structure out of. The ionization history of our Universe would be different; it would take longer for the first stars to form; and the light from the leftover glow of the Big Bang would only take us back to 790,000 years after the Big Bang, rather than the 380,000 years we get today.

In a very real sense, our view into the distant Universe, out to the farthest reaches of deep space where we detect the earliest signals arising after the Big Bang, would be fundamentally less powerful in a myriad of ways if not for this one quantum mechanical transition. If we want to understand how the Universe came to be the way it is today, even on cosmic scales, it's remarkable how subtly dependent the outcomes are on the subatomic rules of quantum physics. Without it, the sights we see looking back across space and time would be far less rich and spectacular.

Read the rest here:

How Quantum Physics Allows Us To See Back Through Space And Time - Forbes

Read More..

A New Quantum Paradox Throws the Foundations of Observed Reality into Question – Interesting Engineering

If a tree falls in a forest and no one is there to hear it, does it make a sound? Perhaps not, some say.

And if someone is there to hear it? If you think that means it obviously did make a sound, you might need to revise that opinion.

We have found a new paradox in quantum mechanics, one of our two most fundamental scientific theories, together with Einstein's theory of relativity, that throws doubt on some common-sense ideas about physical reality.

Take a look at these three statements:

When someone observes an event happening, it really happened.

It is possible to make free choices, or at least, statistically random choices.

A choice made in one place can't instantly affect a distant event. (Physicists call this "locality.")

These are all intuitive ideas, and widely believed even by physicists. But our research, published in Nature Physics, shows they cannot all be true – or quantum mechanics itself must break down at some level.

This is the strongest result yet in a long series of discoveries in quantum mechanics that have upended our ideas about reality. To understand why it's so important, let's look at this history.

Quantum mechanics works extremely well to describe the behavior of tiny objects, such as atoms or particles of light (photons). But that behavior is very odd.

In many cases, quantum theory doesn't give definite answers to questions such as "where is this particle right now?" Instead, it only provides probabilities for where the particle might be found when it is observed.

For Niels Bohr, one of the founders of the theory a century ago, that's not because we lack information, but because physical properties like position don't actually exist until they are measured.

And what's more, because some properties of a particle can't be perfectly observed simultaneously, such as position and velocity, they can't be real simultaneously.

No less a figure than Albert Einstein found this idea untenable. In a 1935 article with fellow theorists Boris Podolsky and Nathan Rosen, he argued there must be more to reality than what quantum mechanics could describe.

The article considered a pair of distant particles in a special state now known as an entangled state. When the same property (say, position or velocity) is measured on both entangled particles, the result will be random but there will be a correlation between the results from each particle.

For example, an observer measuring the position of the first particle could perfectly predict the result of measuring the position of the distant one, without even touching it. Or the observer could choose to predict the velocity instead. This had a natural explanation, they argued, if both properties existed before being measured, contrary to Bohr's interpretation.

However, in 1964 Northern Irish physicist John Bell found Einstein's argument broke down if you carried out a more complicated combination of different measurements on the two particles.

Bell showed that if the two observers randomly and independently choose between measuring one or another property of their particles, like position or velocity, the average results cannot be explained in any theory where both position and velocity were pre-existing local properties.

That sounds incredible, but experiments have now conclusively demonstrated Bell's correlations do occur. For many physicists, this is evidence that Bohr was right: physical properties don't exist until they are measured.
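A minimal numerical sketch of the textbook CHSH form of Bell's argument (a generic illustration, not the specific measurement scheme used in this study): for two spin-1/2 particles in the entangled singlet state, quantum mechanics predicts a correlation of E(a, b) = -cos(a - b) between measurements along angles a and b, and a particular combination of four such correlations exceeds the value of 2 that any theory of pre-existing local properties can reach.

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for the singlet state, measured along angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, -math.pi / 4

S = E(a, b) + E(a, b_prime) + E(a_prime, b) - E(a_prime, b_prime)

print(f"Quantum CHSH value |S| = {abs(S):.3f}")   # ~2.828, i.e. 2*sqrt(2)
print("Any local, pre-existing-property theory requires |S| <= 2")
```

Experiments measure those four correlations directly and repeatedly find values near 2√2, which is what is meant above by Bell's correlations being conclusively demonstrated.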

But that raises the crucial question: what is so special about a measurement?

In 1961, the Hungarian-American theoretical physicist Eugene Wigner devised a thought experiment to show what's so tricky about the idea of measurement.

He considered a situation in which his friend goes into a tightly sealed lab and performs a measurement on a quantum particle, its position, say.

However, Wigner noticed that if he applied the equations of quantum mechanics to describe this situation from the outside, the result was quite different. Instead of the friend's measurement making the particle's position real, from Wigner's perspective the friend becomes entangled with the particle and infected with the uncertainty that surrounds it.

This is similar to Schrödinger's famous cat, a thought experiment in which the fate of a cat in a box becomes entangled with a random quantum event.

For Wigner, this was an absurd conclusion. Instead, he believed that once the consciousness of an observer becomes involved, the entanglement would collapse to make the friend's observation definite.

But what if Wigner was wrong?

In our research, we built on an extended version of the Wigner's friend paradox, first proposed by Časlav Brukner of the University of Vienna. In this scenario, there are two physicists, call them Alice and Bob, each with their own friends (Charlie and Debbie) in two distant labs.

There's another twist: Charlie and Debbie are now measuring a pair of entangled particles, like in the Bell experiments.

As in Wigner's argument, the equations of quantum mechanics tell us Charlie and Debbie should become entangled with their observed particles. But because those particles were already entangled with each other, Charlie and Debbie themselves should become entangled, in theory.

But what does that imply experimentally?

Our experiment goes like this: the friends enter their labs and measure their particles. Some time later, Alice and Bob each flip a coin. If it's heads, they open the door and ask their friend what they saw. If it's tails, they perform a different measurement.

This different measurement always gives a positive outcome for Alice if Charlie is entangled with his observed particle in the way calculated by Wigner. Likewise for Bob and Debbie.

In any realization of this measurement, however, any record of their friend's observation inside the lab is blocked from reaching the external world. Charlie or Debbie will not remember having seen anything inside the lab, as if waking up from total anaesthesia.

But did it really happen, even if they don't remember it?

If the three intuitive ideas at the beginning of this article are correct, each friend saw a real and unique outcome for their measurement inside the lab, independent of whether or not Alice or Bob later decided to open their door. Also, what Alice and Charlie see should not depend on how Bob's distant coin lands, and vice versa.

We showed that if this were the case, there would be limits to the correlations Alice and Bob could expect to see between their results. We also showed that quantum mechanics predicts Alice and Bob will see correlations that go beyond those limits.

Next, we did an experiment to confirm the quantum mechanical predictions using pairs of entangled photons. The role of each friend's measurement was played by one of two paths each photon may take in the setup, depending on a property of the photon called polarisation. That is, the path measures the polarisation.

Our experiment is only really a proof of principle, since the friends are very small and simple. But it opens the question whether the same results would hold with more complex observers.

We may never be able to do this experiment with real humans. But we argue that it may one day be possible to create a conclusive demonstration if the friend is a human-level artificial intelligence running in a massive quantum computer.

Although a conclusive test may be decades away, if the quantum mechanical predictions continue to hold, this has strong implications for our understanding of reality, even more so than the Bell correlations. For one, the correlations we discovered cannot be explained just by saying that physical properties don't exist until they are measured.

Now the absolute reality of measurement outcomes themselves is called into question.

Our results force physicists to deal with the measurement problem head on: either our experiment doesn't scale up, and quantum mechanics gives way to a so-called objective collapse theory, or one of our three common-sense assumptions must be rejected.

There are theories, like de Broglie-Bohm, that postulate action at a distance, in which actions can have instantaneous effects elsewhere in the universe. However, this is in direct conflict with Einstein's theory of relativity.

Some search for a theory that rejects freedom of choice, but they either require backwards causality, or a seemingly conspiratorial form of fatalism called superdeterminism.

Another way to resolve the conflict could be to make Einstein's theory even more relative. For Einstein, different observers could disagree about when or where something happens, but what happens was an absolute fact.

However, in some interpretations, such as relational quantum mechanics, QBism, or the many-worlds interpretation, events themselves may occur only relative to one or more observers. A fallen tree observed by one may not be a fact for everyone else.

All of this does not imply that you can choose your own reality. Firstly, you can choose what questions you ask, but the answers are given by the world. And even in a relational world, when two observers communicate, their realities are entangled. In this way a shared reality can emerge.

This means that if we both witness the same tree falling and you say you can't hear it, you might just need a hearing aid.

By Eric Cavalcanti, Griffith University. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read more here:

A New Quantum Paradox Throws the Foundations of Observed Reality into Question - Interesting Engineering

Read More..

27 Milestones In The History Of Quantum Computing – Forbes

circa 1931: German-born physicist Albert Einstein (1879 - 1955) standing beside a blackboard with chalk-marked mathematical calculations written across it. (Photo by Hulton Archive/Getty Images)

40 years ago, Nobel Prize-winner Richard Feynman argued that "nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical." This was later perceived as a rallying cry for developing a quantum computer, leading to today's rapid progress in the search for quantum supremacy. Here's a very short history of the evolution of quantum computing.

1905: Albert Einstein explains the photoelectric effect (shining light on certain materials can function to release electrons from the material) and suggests that light itself consists of individual quantum particles, or photons.

1924: The term "quantum mechanics" is first used in a paper by Max Born

1925: Werner Heisenberg, Max Born, and Pascual Jordan formulate matrix mechanics, the first conceptually autonomous and logically consistent formulation of quantum mechanics

1925 to 1927: Niels Bohr and Werner Heisenberg develop the Copenhagen interpretation, one of the earliest interpretations of quantum mechanics, which remains one of the most commonly taught

1930: Paul Dirac publishes The Principles of Quantum Mechanics, a textbook that has become a standard reference book that is still used today

1935: Albert Einstein, Boris Podolsky, and Nathan Rosen publish a paper highlighting the counterintuitive nature of quantum superpositions and arguing that the description of physical reality provided by quantum mechanics is incomplete

1935: Erwin Schrödinger, discussing quantum superposition with Albert Einstein and critiquing the Copenhagen interpretation of quantum mechanics, develops a thought experiment in which a cat (forever known as Schrödinger's cat) is simultaneously dead and alive; Schrödinger also coins the term "quantum entanglement"

1947: Albert Einstein refers for the first time to quantum entanglement as "spooky action at a distance" in a letter to Max Born

1976: Roman Stanisław Ingarden of the Nicolaus Copernicus University in Toruń, Poland, publishes one of the first attempts at creating a quantum information theory

1980: Paul Benioff of the Argonne National Laboratory publishes a paper describing a quantum mechanical model of a Turing machine, or a classical computer, the first to demonstrate the possibility of quantum computing

1981: In a keynote speech titled "Simulating Physics with Computers," Richard Feynman of the California Institute of Technology argues that a quantum computer had the potential to simulate physical phenomena that a classical computer could not simulate

1985: David Deutsch of the University of Oxford formulates a description for a quantum Turing machine

1992: The Deutsch-Jozsa algorithm is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm

1993: The first paper describing the idea of quantum teleportation is published

1994: Peter Shor of Bell Laboratories develops a quantum algorithm for factoring integers that has the potential to decrypt RSA-encrypted communications, a widely used method for securing data transmissions

1994: The National Institute of Standards and Technology organizes the first US government-sponsored conference on quantum computing

1996: Lov Grover of Bell Laboratories invents the quantum database search algorithm

1998: First demonstration of quantum error correction; first proof that a certain subclass of quantum computations can be efficiently emulated with classical computers

1999: Yasunobu Nakamura of the University of Tokyo and Jaw-Shen Tsai of Tokyo University of Science demonstrate that a superconducting circuit can be used as a qubit

2002: The first version of the Quantum Computation Roadmap, a living document involving key quantum computing researchers, is published

2004: First five-photon entanglement demonstrated by Jian-Wei Pan's group at the University of Science and Technology in China

2011: The first commercially available quantum computer is offered by D-Wave Systems

2012: 1QB Information Technologies (1QBit), the first dedicated quantum computing software company, is founded

2014: Physicists at the Kavli Institute of Nanoscience at the Delft University of Technology, The Netherlands, teleport information between two quantum bits separated by about 10 feet with zero percent error rate

2017: Chinese researchers report the first quantum teleportation of independent single-photon qubits from a ground observatory to a low Earth orbit satellite with a distance of up to 1,400 km

2018: The National Quantum Initiative Act is signed into law by President Donald Trump, establishing the goals and priorities for a 10-year plan to accelerate the development of quantum information science and technology applications in the United States

2019: Google claims to have reached quantum supremacy by performing a series of operations in 200 seconds that would take a supercomputer about 10,000 years to complete; IBM responds by suggesting it could take 2.5 days instead of 10,000 years, highlighting techniques a supercomputer may use to maximize computing speed

The race for quantum supremacy is on – the race to demonstrate a practical quantum device that can solve a problem that no classical computer can solve in any feasible amount of time. Speed – and sustainability – has always been the measure of the jump to the next stage of computing.

In 1944, Richard Feynman, then a junior staff member at Los Alamos, organized a contest between human computers and the Los Alamos IBM facility, with both performing a calculation for the plutonium bomb. For two days, the human computers kept up with the machines. But on the third day, recalled an observer, the punched-card machine operation began to move decisively ahead, as the people performing the hand computing could not sustain their initial fast pace, while the machines did not tire and continued at their steady pace (see When Computers Were Human, by David Alan Grier).

Nobel Prize-winning physicist Richard Feynman stands in front of a blackboard strewn with notation in his lab in Los Angeles, California. (Photo by Kevin Fleming/Corbis via Getty Images)

See the article here:

27 Milestones In The History Of Quantum Computing - Forbes

Read More..

Quantum Machine Learning Hits a Limit: A Black Hole Permanently Scrambles Information That Can’t Be Recovered – SciTechDaily

A new theorem shows that information run through an information scrambler such as a black hole will reach a point where any algorithm will be unable to learn the information that has been scrambled. Credit: Los Alamos National Laboratory

A black hole permanently scrambles information that can't be recovered with any quantum machine learning algorithm, shedding new light on the classic Hayden-Preskill thought experiment.

A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.

"Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes," said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published on May 12, 2021, in Physical Review Letters.

"Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don't condemn quantum machine learning, but rather highlight the importance of understanding its limits," Holmes said.

In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob's ability to learn the particulars of a given black hole's physics mean that reconstructing the information in the book is going to be very difficult or even impossible.

"Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can't learn scrambling processes," said Andrew Sornborger, a computer scientist at Los Alamos and coauthor of the paper. Sornborger is Director of the Quantum Science Center at Los Alamos and leader of the Center's algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.

Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
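A toy numerical illustration of the concentration effect that underlies barren plateaus (a generic demonstration with random states, not the analysis in the paper): for Haar-random quantum states, the expectation value of a simple observable concentrates exponentially tightly around zero as the number of qubits grows, so any training signal built from such expectation values becomes exponentially faint.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(n_qubits):
    """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def z_expectation(psi):
    """<Z> on the first qubit: +1 weight on the first half of amplitudes, -1 on the second."""
    half = len(psi) // 2
    probs = np.abs(psi) ** 2
    return probs[:half].sum() - probs[half:].sum()

for n in (2, 4, 6, 8, 10):
    samples = [z_expectation(random_state(n)) for _ in range(2000)]
    print(f"{n} qubits: Var[<Z>] ~ {np.var(samples):.2e}  (theory 1/(2^n + 1) = {1 / (2**n + 1):.2e})")
```

The variance shrinking like 1/2^n is the same exponential flattening that starves gradient-based training on random, unstructured circuits, which is the regime the article says a scrambler such as a black hole pushes a learning algorithm into.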

"Recent work has identified the potential for quantum machine learning to be a formidable tool in our attempts to understand complex systems," said Andreas Albrecht, a co-author of the research. Albrecht is Director of the Center for Quantum Mathematics and Physics (QMAP) and Distinguished Professor, Department of Physics and Astronomy, at UC Davis. "Our work points out fundamental considerations that limit the capabilities of this tool."

In the Hayden-Preskill thought experiment, Alice attempts to destroy a secret, encoded in a quantum state, by throwing it into nature's fastest scrambler, a black hole. Bob and Alice are the fictitious quantum dynamic duo typically used by physicists to represent agents in a thought experiment.

"You might think that this would make Alice's secret pretty safe," Holmes said, "but Hayden and Preskill argued that if Bob knows the unitary dynamics implemented by the black hole, and shares a maximally entangled state with the black hole, it is possible to decode Alice's secret by collecting a few additional photons emitted from the black hole. But this prompts the question, how could Bob learn the dynamics implemented by the black hole? Well, not by using quantum machine learning, according to our findings."

A key piece of the new theorem developed by Holmes and her coauthors assumes no prior knowledge of the quantum scrambler, a situation unlikely to occur in real-world science.

"Our work draws attention to the tremendous leverage even small amounts of prior information may play in our ability to extract information from complex systems and potentially reduce the power of our theorem," Albrecht said. "Our ability to do this can vary greatly among different situations (as we scan from theoretical consideration of black holes to concrete situations controlled by humans here on earth). Future research is likely to turn up interesting examples, both of situations where our theorem remains fully in force, and others where it can be evaded."

Reference: "Barren Plateaus Preclude Learning Scramblers" by Zoë Holmes, Andrew Arrasmith, Bin Yan, Patrick J. Coles, Andreas Albrecht and Andrew T. Sornborger, 12 May 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.126.190501

Funding: U.S. Department of Energy, Office of Science

Read the original:

Quantum Machine Learning Hits a Limit: A Black Hole Permanently Scrambles Information That Can't Be Recovered - SciTechDaily

Read More..

Is everything predetermined? Why physicists are reviving a taboo idea – New Scientist

Superdeterminism makes sense of the quantum world by suggesting it is not as random as it seems, but critics say it undermines the whole premise of science. Does the idea deserve its terrible reputation?

By Michael Brooks

Pete Reynolds

"I'VE never worked on anything so unpopular!" Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies in Germany, laughs as she says it, but she is clearly frustrated.

The idea she is exploring has to do with the biggest mystery in quantum theory, namely what happens when the fuzzy, undecided quantum realm is distilled into something definite, something we would experience as real. Are the results of this genesis entirely random, as the theory suggests?

Albert Einstein was in no doubt: God, he argued, doesn't play dice with the universe. Hossenfelder is inclined to agree. Now, she and a handful of other physicists are stoking controversy by attempting to revive a non-random, deterministic idea where effects always have a cause. The strangeness of quantum mechanics, they say, only arises because we have been working with a limited view of the quantum world.

The stakes are high. Superdeterminism, as this idea is known, wouldn't only make sense of quantum theory a century after it was conceived. It could also provide the key to uniting quantum theory with relativity to create the final theory of the universe. Hossenfelder and her colleagues aren't exactly being cheered on from the sidelines, however. Many theorists are adamant that superdeterminism is the most dangerous idea in physics. Take its implications seriously, they argue, and you undermine the whole edifice of science.

So what is the answer? Does superdeterminism deserve its bad reputation or, in the absence of a better solution, do we have little choice but to give it a chance?

Quantum theory describes the behaviour of matter at

Originally posted here:

Is everything predetermined? Why physicists are reviving a taboo idea - New Scientist

Read More..

New evidence for electron’s dual nature found in a quantum spin liquid . New experiments conducted at – Princeton University

A new discovery led by Princeton University researchers could upend our understanding of how electrons behave under extreme conditions due to the laws of quantum physics.

The finding provides experimental evidence that this familiar building block of matter often behaves as if it is made of two particles: one particle that gives the electron its negative charge and another that gives it a magnet-like property known as spin.

"We think this is the first hard evidence of spin-charge separation," said Nai Phuan Ong, Eugene Higgins Professor of Physics, the senior author on a study published this week in the journal Nature Physics.

The experimental results fulfill a prediction made decades ago to explain one of the most mind-bending states of matter, the quantum spin liquid. In all materials, the spin of an electron can point either up or down. In the familiar magnet, the spins uniformly point in one direction throughout the sample when the temperature drops below a critical temperature.

However, in spin liquid materials, the spins are unable to establish a uniform pattern even when cooled very close to absolute zero. Instead, the spins are constantly changing in a tightly coordinated, entangled choreography. The result is one of the most entangled quantum states ever conceived, a state of great interest to researchers in the nascent field of quantum computing.

To describe this behavior mathematically, Nobel prize-winning Princeton physicist Philip Anderson (1923-2020), who first predicted the existence of spin liquids in 1973, proposed an explanation: in the quantum regime an electron may be regarded as composed of two particles, one bearing the electron's negative charge and the other containing its spin. Anderson called the spin-containing particle a "spinon."

In this new study, the team searched for signs of the spinon in a spin liquid composed of ruthenium and chlorine atoms. At temperatures a fraction of a Kelvin above absolute zero (or roughly minus 452 degrees Fahrenheit), ruthenium chloride crystals enter a spin liquid state in the presence of a high magnetic field.

Physics graduate student Peter Czajka and Tong Gao, a 2020 Ph.D. graduate, connected three highly sensitive thermometers to the crystal as it sat in a bath maintained at temperatures close to absolute zero Kelvin. They then applied the magnetic field and a small amount of heat to one crystal edge to measure its thermal conductivity, a quantity that expresses how well it conducts a heat current. If spinons were present, they should appear as an oscillating pattern in a graph of the thermal conductivity versus magnetic field.

The oscillating signal they were searching for was tiny, just a few hundredths of a degree change, so the measurements demanded an extraordinarily precise control of the sample temperature as well as careful calibrations of the thermometers in a strong magnetic field.

Researchers at Princeton University conducted experiments on materials known as quantum spin liquids, finding evidence that the electrons in the quantum regime behave as if they are made up of two particles. The 3D color-plot, a composite of many experiments, shows how the thermal conductivity κxx (vertical axis) varies as a function of the magnetic field B (horizontal axis) and the temperature T (axis into the page). The oscillations provide evidence for spinons.

Graph by Peter Czajka, Princeton University

The team used the purest crystals available, ones grown at the U.S. Department of Energy's Oak Ridge National Laboratory under the leadership of David Mandrus, materials science professor at the University of Tennessee-Knoxville, and Stephen Nagler, corporate research fellow in ORNL's Neutron Scattering Division. The ORNL team has extensively studied the quantum spin liquid properties of ruthenium chloride.

In a series of experiments extending over nearly three years, Czajka and Gao detected the temperature oscillations consistent with spinons with increasingly higher resolution, providing evidence that the electron is composed of two particles, consistent with Andersons prediction.

"People have been searching for this signature for four decades," Ong said. "If this finding and the spinon interpretation are validated, it would significantly advance the field of quantum spin liquids."

"From the purely experimental side," Czajka said, "it was exciting to see results that in effect break the rules that you learn in elementary physics classes."

Czajka and Gao spent last summer confirming the experiments while under COVID-19 restrictions that required them to wear masks and maintain social distancing.

The experiment was performed in collaboration with Max Hirschberger, a 2017 Ph.D. alumnus now at the University of Tokyo; Arnab Banerjee at Purdue University and ORNL; David Mandrus and Paula Lampen-Kelley at the University of Tennessee-Knoxville and ORNL; and Jiaqiang Yan and Stephen E. Nagler at ORNL. Funding at Princeton was provided by the Gordon and Betty Moore Foundation, the U.S. Department of Energy and the National Science Foundation. The Gordon and Betty Moore Foundation also supported the crystal growth program at the University of Tennessee.

The study, "Oscillations of the thermal conductivity in the spin-liquid state of α-RuCl3," by Peter Czajka, Tong Gao, Max Hirschberger, Paula Lampen-Kelley, Arnab Banerjee, Jiaqiang Yan, David G. Mandrus, Stephen E. Nagler and N. P. Ong, was published in the journal Nature Physics online on May 13, 2021. DOI: 10.1038/s41567-021-01243-x.

Read this article:

New evidence for electron's dual nature found in a quantum spin liquid . New experiments conducted at - Princeton University

Read More..