
Can consciousness be explained by quantum physics? My research takes us a step closer to finding out – The Conversation UK

One of the most important open questions in science is how our consciousness is established. In the 1990s, long before winning the 2020 Nobel Prize in Physics for showing that black holes are a robust prediction of general relativity, physicist Roger Penrose teamed up with anaesthesiologist Stuart Hameroff to propose an ambitious answer.

They claimed that the brain's neuronal system forms an intricate network and that the consciousness this produces should obey the rules of quantum mechanics, the theory that determines how tiny particles like electrons move around. This, they argue, could explain the mysterious complexity of human consciousness.

Penrose and Hameroff were met with incredulity. Quantum mechanical laws are usually found to apply only at very low temperatures. Quantum computers, for example, currently operate at around -272°C. At higher temperatures, classical mechanics takes over. Since our body works at room temperature, you would expect it to be governed by the classical laws of physics. For this reason, the quantum consciousness theory has been dismissed outright by many scientists, though others remain persuaded supporters.

Instead of entering into this debate, I decided to join forces with colleagues from China, led by Professor Xian-Min Jin at Shanghai Jiaotong University, to test some of the principles underpinning the quantum theory of consciousness.

In our new paper, we've investigated how quantum particles could move in a complex structure like the brain, but in a lab setting. If our findings can one day be compared with activity measured in the brain, we may come one step closer to validating or dismissing Penrose and Hameroff's controversial theory.

Our brains are composed of cells called neurons, and their combined activity is believed to generate consciousness. Each neuron contains microtubules, which transport substances to different parts of the cell. The Penrose-Hameroff theory of quantum consciousness argues that microtubules are structured in a fractal pattern which would enable quantum processes to occur.

Fractals are structures that are neither two-dimensional nor three-dimensional, but are instead some fractional value in between. In mathematics, fractals emerge as beautiful patterns that repeat themselves infinitely, generating what is seemingly impossible: a structure that has a finite area, but an infinite perimeter.


This might sound impossible to visualise, but fractals actually occur frequently in nature. If you look closely at the florets of a cauliflower or the branches of a fern, you'll see that they're both made up of the same basic shape repeating itself over and over again, but at smaller and smaller scales. That's a key characteristic of fractals.

The same happens if you look inside your own body: the structure of your lungs, for instance, is fractal, as are the blood vessels in your circulatory system. Fractals also feature in the enchanting repeating artworks of M.C. Escher and Jackson Pollock, and they've been used for decades in technology, such as in the design of antennas. These are all examples of classical fractals: fractals that abide by the laws of classical physics rather than quantum physics.

It's easy to see why fractals have been used to explain the complexity of human consciousness. Because they're infinitely intricate, allowing complexity to emerge from simple repeated patterns, they could be the structures that support the mysterious depths of our minds.

But if this is the case, it could only be happening on the quantum level, with tiny particles moving in fractal patterns within the brain's neurons. That's why Penrose and Hameroff's proposal is called a theory of quantum consciousness.

We're not yet able to measure the behaviour of quantum fractals in the brain, if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunnelling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal.

When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we'd made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that's somewhere between one-dimensional and two-dimensional.
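(For readers who want to put a number on that: the similarity dimension of a self-similar shape follows from how many copies of itself it contains at each scale. This is standard mathematics rather than a result of the study.)

```latex
% A shape made of N copies of itself, each scaled down by a factor s,
% has similarity dimension
d = \frac{\log N}{\log s}
% The Sierpinski triangle is N = 3 half-size copies of itself (s = 2), so
d = \frac{\log 3}{\log 2} \approx 1.585
```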

This was an exciting finding, but STM techniques cannot probe how quantum particles move, which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.

We achieved this by injecting photons (particles of light) into an artificial chip that was painstakingly engineered into a tiny Sierpiński triangle. We injected photons at the tip of the triangle and watched how they spread throughout its fractal structure in a process called quantum transport. We then repeated this experiment on two different fractal structures, both shaped as squares rather than triangles. And in each of these structures we conducted hundreds of experiments.

Our observations from these experiments reveal that quantum fractals actually behave in a different way to classical ones. Specifically, we found that the spread of light across a fractal is governed by different laws in the quantum case compared to the classical case.

This new knowledge of quantum fractals could provide the foundations for scientists to experimentally test the theory of quantum consciousness. If quantum measurements are one day taken from the human brain, they could be compared against our results to definitively decide whether consciousness is a classical or a quantum phenomenon.

Our work could also have profound implications across scientific fields. By investigating quantum transport in our artificially designed fractal structures, we may have taken the first tiny steps towards the unification of physics, mathematics and biology, which could greatly enrich our understanding of the world around us as well as the world that exists in our heads.


How Bell’s Theorem Proved ‘Spooky Action at a Distance’ Is Real – Quanta Magazine

We take for granted that an event in one part of the world cannot instantly affect what happens far away. This principle, which physicists call locality, was long regarded as a bedrock assumption about the laws of physics. So when Albert Einstein and two colleagues showed in 1935 that quantum mechanics permits "spooky action at a distance," as Einstein put it, this feature of the theory seemed highly suspect. Physicists wondered whether quantum mechanics was missing something.

Then in 1964, with the stroke of a pen, the Northern Irish physicist John Stewart Bell demoted locality from a cherished principle to a testable hypothesis. Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. In the years since, experiments have vindicated quantum mechanics again and again.

Bell's theorem upended one of our most deeply held intuitions about physics, and prompted physicists to explore how quantum mechanics might enable tasks unimaginable in a classical world. "The quantum revolution that's happening now, and all these quantum technologies, that's 100% thanks to Bell's theorem," says Krister Shalm, a quantum physicist at the National Institute of Standards and Technology.

Here's how Bell's theorem showed that "spooky action at a distance" is real.

The spooky action that bothered Einstein involves a quantum phenomenon known as entanglement, in which two particles that we would normally think of as distinct entities lose their independence. Famously, in quantum mechanics a particle's location, polarization and other properties can be indefinite until the moment they are measured. Yet measuring the properties of entangled particles yields results that are strongly correlated, even when the particles are far apart and measured nearly simultaneously. The unpredictable outcome of one measurement appears to instantly affect the outcome of the other, regardless of the distance between them, a gross violation of locality.

To understand entanglement more precisely, consider a property of electrons and most other quantum particles called spin. Particles with spin behave somewhat like tiny magnets. When, for instance, an electron passes through a magnetic field created by a pair of north and south magnetic poles, it gets deflected by a fixed amount toward one pole or the other. This shows that the electron's spin is a quantity that can have only one of two values: "up" for an electron deflected toward the north pole, and "down" for an electron deflected toward the south pole.

Imagine an electron passing through a region with the north pole directly above it and the south pole directly below. Measuring its deflection will reveal whether the electron's spin is up or down along the vertical axis. Now rotate the axis between the magnet poles away from vertical, and measure deflection along this new axis. Again, the electron will always deflect by the same amount toward one of the poles. You'll always measure a binary spin value, either up or down, along any axis.

It turns out it's not possible to build any detector that can measure a particle's spin along multiple axes at the same time. Quantum theory asserts that this property of spin detectors is actually a property of spin itself: if an electron has a definite spin along one axis, its spin along any other axis is undefined.

Armed with this understanding of spin, we can devise a thought experiment that we can use to prove Bell's theorem. Consider a specific example of an entangled state: a pair of electrons whose total spin is zero, meaning measurements of their spins along any given axis will always yield opposite results. What's remarkable about this entangled state is that, although the total spin has this definite value along all axes, each electron's individual spin is indefinite.

Suppose these entangled electrons are separated and transported to distant laboratories, and that teams of scientists in these labs can rotate the magnets of their respective detectors any way they like when performing spin measurements.

When both teams measure along the same axis, they obtain opposite results 100% of the time. But is this evidence of nonlocality? Not necessarily.

Alternatively, Einstein proposed, each pair of electrons could come with an associated set of hidden variables specifying the particles spins along all axes simultaneously. These hidden variables are absent from the quantum description of the entangled state, but quantum mechanics may not be telling the whole story.

Hidden variable theories can explain why same-axis measurements always yield opposite results without any violation of locality: A measurement of one electron doesn't affect the other but merely reveals the preexisting value of a hidden variable.

Bell proved that you could rule out local hidden variable theories, and indeed rule out locality altogether, by measuring entangled particles' spins along different axes.

Suppose, for starters, that one team of scientists happens to rotate its detector relative to the other lab's by 180 degrees. This is equivalent to swapping its north and south poles, so an "up" result for one electron would never be accompanied by a "down" result for the other. The scientists could also choose to rotate it by some in-between amount, say 60 degrees. Depending on the relative orientation of the magnets in the two labs, the probability of opposite results can range anywhere between 0% and 100%.

Without specifying any particular orientations, suppose that the two teams agree on a set of three possible measurement axes, which we can label A, B and C. For every electron pair, each lab measures the spin of one of the electrons along one of these three axes chosen at random.

Let's now assume the world is described by a local hidden variable theory, rather than quantum mechanics. In that case, each electron has its own spin value in each of the three directions. That leads to eight possible sets of values for the hidden variables, which we can label in the following way (the original article presents these as a graphic; the labeling below is one choice consistent with the examples that follow, listing the first electron's spin along axes A, B and C, with the second electron's values opposite in each case):

1: up, up, up
2: up, up, down
3: up, down, up
4: down, up, up
5: up, down, down
6: down, up, down
7: down, down, up
8: down, down, down

The set of spin values labeled 5, for instance, dictates that the result of a measurement along axis A in the first lab will be up, while measurements along axes B and C will be down; the second electron's spin values will be opposite.

For any electron pair possessing spin values labeled 1 or 8, measurements in the two labs will always yield opposite results, regardless of which axes the scientists choose to measure along. The other six sets of spin values all yield opposite results in 33% of different-axis measurements. (For instance, for the spin values labeled 5, the labs will obtain opposite results when one measures along axis B while the other measures along C; this represents one-third of the possible choices.)

Thus the labs will obtain opposite results when measuring along different axes at least 33% of the time; equivalently, they will obtain the same result at most 67% of the time. This result, an upper bound on the correlations allowed by local hidden variable theories, is the inequality at the heart of Bell's theorem.
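That counting argument can be checked mechanically. Here is a short Python verification of our own (not from the article) that enumerates the eight hidden-variable sets listed above and the six different-axis measurement choices:

```python
# Check Bell's local hidden variable bound by brute force.
# +1 = spin up, -1 = spin down for the first electron along axes A, B, C;
# the second electron's value is always opposite. Labels match the list above.
AXES = ("A", "B", "C")
ASSIGNMENTS = {
    1: (+1, +1, +1), 2: (+1, +1, -1), 3: (+1, -1, +1), 4: (-1, +1, +1),
    5: (+1, -1, -1), 6: (-1, +1, -1), 7: (-1, -1, +1), 8: (-1, -1, -1),
}

for label, spins in ASSIGNMENTS.items():
    first = dict(zip(AXES, spins))
    second = {axis: -value for axis, value in first.items()}
    cases = [(a, b) for a in AXES for b in AXES if a != b]  # different axes only
    opposite = sum(first[a] == -second[b] for a, b in cases)
    print(f"set {label}: opposite results in {opposite} of {len(cases)} choices")

# Sets 1 and 8 print 6 of 6; every other set prints 2 of 6. So opposite
# results occur at least 1/3 of the time, and same results at most 2/3.
```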

Now, what about quantum mechanics? We're interested in the probability of both labs obtaining the same result when measuring the electrons' spins along different axes. The equations of quantum theory provide a formula for this probability as a function of the angles between the measurement axes.

According to the formula, when the three axes are all as far apart as possible (that is, all 120 degrees apart, as in the Mercedes logo), both labs will obtain the same result 75% of the time. This exceeds Bell's upper bound of 67%.
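For the record, the formula is the textbook prediction for a pair of spin-1/2 particles in the singlet state, measured along axes an angle θ apart:

```latex
P_{\mathrm{opposite}}(\theta) = \cos^{2}\!\left(\frac{\theta}{2}\right),
\qquad
P_{\mathrm{same}}(\theta) = \sin^{2}\!\left(\frac{\theta}{2}\right)
% With the axes 120 degrees apart, P_same = sin^2(60 deg) = 3/4 = 75%,
% which exceeds the local hidden variable bound of 2/3.
```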

That's the essence of Bell's theorem: If locality holds, and a measurement of one particle cannot instantly affect the outcome of another measurement far away, then the results in a certain experimental setup can be no more than 67% correlated. If, on the other hand, the fates of entangled particles are inextricably linked even across vast distances, as in quantum mechanics, the results of certain measurements will exhibit stronger correlations.

Since the 1970s, physicists have made increasingly precise experimental tests of Bell's theorem. Each one has confirmed the strong correlations of quantum mechanics. In the past five years, various loopholes have been closed. Locality, that long-held assumption about physical law, is not a feature of our world.

Editor's note: The author is currently a postdoctoral researcher at JILA in Boulder, Colorado.


Physicists Show That a Quantum Particle Made of Light and Matter Can Be Dragged by a Current of Electrons – Columbia University

In the recent Nature study, Basov and his colleagues recreated Fizeau's experiments on a speck of graphene made up of a single layer of carbon atoms. Hooking up the graphene to a battery, they created an electrical current reminiscent of Fizeau's water streaming through a pipe. But instead of shining light on the moving water and measuring its speed in both directions, as Fizeau did, they generated an electromagnetic wave with a compressed wavelength, a polariton, by focusing infrared light on a gold nub in the graphene. The activated stream of polaritons looks like light but is physically more compact due to its short wavelength.

The researchers clocked the polaritons' speed in both directions. When the polaritons traveled with the flow of the electrical current, they maintained their original speed. But when launched against the current, they slowed by a few percentage points.
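For comparison, the classical Fresnel-Fizeau result, textbook physics included here only for context, says that light in a medium of refractive index n flowing at speed v is dragged symmetrically in the two directions:

```latex
u_{\pm} = \frac{c}{n} \pm v \left( 1 - \frac{1}{n^{2}} \right)
% +: light travelling with the flow is sped up
% -: light travelling against the flow is slowed down
```

The graphene experiment is striking precisely because the effect was not symmetric, as the researchers describe next.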

"We were surprised when we saw it," said study co-author Denis Bandurin, a physics researcher at MIT. "First, the device was still alive, despite the heavy current we passed through it; it hadn't blown up. Then we noticed the one-way effect, which was different from Fizeau's original experiments."

The researchers repeated the experiments over and over, led by the study's first author, Yinan Dong, a Columbia graduate student. Finally, it dawned on them. "Graphene is a material that turns electrons into relativistic particles," Dong said. "We needed to account for their spectrum."

A group at Berkeley Lab found a similar result, published in the same issue of Nature. Beyond reproducing the Fizeau effect in graphene, both studies have practical applications. Most natural systems are symmetric, but here, researchers found an intriguing exception. Basov said he hopes to slow down and, ultimately, cut off the flow of polaritons in one direction. It's not an easy task, but it could hold big rewards.

"Engineering a system with a one-way flow of light is very difficult to achieve," said Milan Delor, a physical chemist working on light-matter interactions at Columbia who was not involved in the research. "As soon as you can control the speed and direction of polaritons, you can transmit information in nanoscale circuits on ultrafast timescales. It's one of the ingredients currently missing in photon-based circuits."


Christian Ferko’s PhD Thesis Defense | Department of Physics | The University of Chicago – UChicago News

11:00 am to 12:00 pm

Please join us:

Christian Ferko's PhD Thesis Defense

Monday, July 26, 2021, at 11 am CDT

SUPERSYMMETRY AND IRRELEVANT DEFORMATIONS

The $T\bar{T}$ operator provides a universal irrelevant deformation of two-dimensional quantum field theories with remarkable properties, including connections to both string theory and holography beyond AdS spacetimes. In particular, it appears that a $T\bar{T}$-deformed theory is a new kind of structure, which is neither a local quantum field theory nor a full-fledged string theory, but which is nonetheless under some analytic control. On the other hand, supersymmetry is a beautiful extension of Poincaré symmetry which relates bosonic and fermionic degrees of freedom. The extra computational power provided by supersymmetry renders many calculations more tractable. It is natural to ask what one can learn about irrelevant deformations in supersymmetric quantum field theories.
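For reference, and up to normalization conventions that vary in the literature, the operator is built from the two-dimensional stress tensor as a determinant, and the deformation is defined by a flow equation for the Lagrangian:

```latex
T\bar{T} \;\equiv\; \frac{1}{8}\left( T^{\mu\nu} T_{\mu\nu} - \left( T^{\mu}{}_{\mu} \right)^{2} \right)
       \;=\; -\frac{1}{4} \det\!\left( T^{\mu}{}_{\nu} \right),
\qquad
\frac{\partial \mathcal{L}^{(\lambda)}}{\partial \lambda} \;=\; \left( T\bar{T} \right)^{(\lambda)}
```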

In this talk, I will describe a presentation of the $T\bar{T}$ deformation in manifestly supersymmetric settings. I define a "supercurrent-squared" operator, which is closely related to $T\bar{T}$, in any two-dimensional theory with (0, 1), (1, 1), or (2, 2) supersymmetry. This deformation generates a flow equation for the superspace Lagrangian of the theory, which therefore makes the supersymmetry manifest. In certain examples, the deformed theories produced by supercurrent-squared are related to superstring and brane actions, and some of these theories possess extra non-linearly realized supersymmetries. Finally, I will show that $T\bar{T}$ defines a new theory of both abelian and non-abelian gauge fields coupled to charged matter, which includes models compatible with maximal supersymmetry. In analogy with the Dirac-Born-Infeld (DBI) theory, which defines a non-linear extension of Maxwell electrodynamics, these models possess a critical value for the electric field.

Committee members:

Savdeep Sethi (Chair)

Jeffrey Harvey

Robert Wald

Mark Oreglia

Christian will be starting a postdoc at UC Davis in the Center for Quantum Mathematics and Physics (QMAP).

Thesis Defense


Here’s How IBM Is Driving the Use of Quantum Computing on Wall Street – Business Insider

Quantum computers might look like extravagant chandeliers, but they actually hold great potential. And IBM's top quantum chief said Wall Street's use of the tech is on the cusp of taking off.

Quantum computing unlocks the ability to execute big, complex calculations faster than traditional computers. It does so by leveraging quantum mechanics: instead of the plain 1s and 0s that traditional computers use, quantum computers run on quantum bits, or qubits.

For years, theoretical research has shown that while quantum computing can be beneficial, the cost for companies to deploy the tech has been too high to justify.

JPMorgan Chase, for example, has worked with IBM to use quantum to test an algorithm that predicted options prices, according to a 2019 IBM research blog.

IBM's quantum computer required less data input, cutting down the number of samples for a given simulation from millions to a few thousand, IBM mathematician Dr. Stefan Woerner said at the time. With fewer samples, he said, computations could be done in near real-time, as opposed to overnight.
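That reduction is consistent with the quadratic speed-up of quantum amplitude estimation, the technique underlying IBM's published option-pricing research: a classical Monte Carlo estimate with target error ε needs on the order of 1/ε² samples, while amplitude estimation needs on the order of 1/ε.

```latex
N_{\mathrm{classical}} = O\!\left( \frac{1}{\epsilon^{2}} \right),
\qquad
N_{\mathrm{quantum}} = O\!\left( \frac{1}{\epsilon} \right)
% e.g. for a 0.1% target error: ~10^6 classical samples vs ~10^3 quantum,
% the same "millions to a few thousand" scale quoted above.
```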

While the technology was tested successfully and is ready to use, the bank has kept the capability on the back burner. The resources required for the quantum machine made the classical computer a better, more efficient option, a bank spokesperson told Insider.

But that'll soon change, according to IBM's chief quantum exponent, Bob Sutor.

"When is quantum going to do something more for me than the systems I have already?" Sutor told Insider. "Within a few years we'll start to see that."

That's because more people are starting to test quantum techniques. The more users, Sutor said, the more IBM and others within its quantum network, including JPMorgan, Goldman Sachs, and Wells Fargo, can learn, iterate, and build off the rare instances when using quantum over a classical computer makes sense, financially.

In 2016 IBM put quantum on the cloud, and now has about 20 quantum computing systems accessible via the web, Sutor said. Half are free to use.

Roughly 325,000 people have registered to use the tech since 2016 and there are about 2 billion circuits (the tiny bits of code sent to the quantum hardware to run) executed daily, he added.

An open-source tool, called Qiskit, enables users to code in Python when using the cloud-based quantum computers, Sutor said, a coding language specifically chosen for its widespread use and deep roots within the data science and AI communities.
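To give a sense of what one of those circuits looks like, here is a minimal Qiskit program, our own illustrative example rather than IBM's: it builds and prints a two-qubit entangling circuit of the kind users submit to the cloud-hosted machines. (Running it on real hardware additionally requires an IBM Quantum account and a backend, which we omit here.)

```python
# Minimal Qiskit example: a two-qubit Bell-state circuit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # 2 qubits, 2 classical bits
qc.h(0)                     # Hadamard: put qubit 0 into superposition
qc.cx(0, 1)                 # CNOT: entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits out
print(qc.draw())            # text diagram of the circuit
```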

Meanwhile, IBM is making investments to grow the quantum team across scientists and developers, according to a spokesperson who declined to specify numbers. The company is also standing up a quantum computer in Tokyo this year and another on-premise quantum computer in Cleveland, Sutor said.

IBM's quantum computers are getting bigger, too. The firm's first quantum computer was a 5-qubit machine (the number and quality of qubits reflect the machine's compute power). Now it has a machine with 65 qubits; by year end it will build a machine with 127 qubits; and by 2023, IBM will have a machine with more than 1,000 qubits, Sutor said.

Sutor said financial services companies are on the forefront of quantum exploration, adding that "their researchers are very hardcore when you're talking about artificial intelligence and now quantum."

Speed matters for financial institutions performing risk calculations or algorithmic trading, making a strong use case for quantum computing, Howard Boville, head of IBM's hybrid cloud platform, told Insider.

"They're always looking for milliseconds of advantage in terms of latency," he said, referring to the financial firms tapping the technology.


4 bizarre Stephen Hawking theories that turned out to be right (and 6 we’re not sure about) – Livescience.com

Stephen Hawking was one of the greatest theoretical physicists of the modern age. Though best known for his appearances in popular media and his lifelong battle against debilitating illness, he made his true impact on posterity through a brilliant five-decade career in science. Beginning with his doctoral thesis in 1966, his groundbreaking work continued nonstop right up to his final paper in 2018, completed just days before his death at the age of 76.

Hawking worked at the intellectual cutting edge of physics, and his theories often seemed bizarrely far-out at the time he formulated them. Yet they're slowly being accepted into the scientific mainstream, with new supporting evidence coming in all the time. From his mind-blowing views of black holes to his explanation for the universe's humble beginnings, here are some of his theories that were vindicated, and some that are still up in the air.

Hawking got off to a flying start with his doctoral thesis, written at a critical time when there was heated debate between two rival cosmological theories: the Big Bang and the Steady State. Both theories accepted that the universe is expanding, but in the first it expands from an ultra-compact, super-dense state at a finite time in the past, while the second assumes the universe has been expanding forever, with new matter constantly being created to maintain a constant density. In his thesis, Hawking showed that the Steady State theory is mathematically self-contradictory. He argued instead that the universe began as an infinitely small, infinitely dense point called a singularity. Today, Hawking's description is almost universally accepted among scientists.

More than anything else, Hawking's name is associated with black holes, another kind of singularity, formed when a star undergoes complete collapse under its own gravity. These mathematical curiosities arose from Einstein's theory of general relativity, and they had been debated for decades when Hawking turned his attention to them in the early 1970s.

According to an article in Nature, his stroke of genius was to combine Einstein's equations with those of quantum mechanics, turning what had previously been a theoretical abstraction into something that looked like it might actually exist in the universe. The final proof that Hawking was correct came in 2019, when the Event Horizon Telescope obtained a direct image of the supermassive black hole lurking in the center of giant galaxy Messier 87.

Black holes got their name because their gravity is so strong that photons, or particles of light, shouldn't be able to escape from them. But in his early work on the subject, Hawking argued that the truth is more subtle than this monochrome picture.

By applying quantum theory (specifically, the idea that pairs of "virtual photons" can spontaneously be created out of nothing), he realized that some of these photons would appear to be radiated from the black hole. Now referred to as Hawking radiation, the theory was recently confirmed in a laboratory experiment at the Technion-Israel Institute of Technology, Israel. In place of a real black hole, the researchers used an acoustic analog, a "sonic black hole" from which sound waves cannot escape. They detected the equivalent of Hawking radiation exactly in accordance with the physicist's predictions.

In classical physics, entropy, or the disorder of a system, can only ever increase with time, never decrease. Together with Jacob Bekenstein, Hawking proposed that the entropy of a black hole is measured by the surface area of its surrounding event horizon.

The recent discovery of gravitational waves emitted by merging pairs of black holes shows that Hawking was right again. As Hawking told the BBC after the first such event in 2016, "the observed properties of the system are consistent with predictions about black holes that I made in 1970 ... the area of the final black hole is greater than the sum of the areas of the initial black holes." More recent observations have provided further confirmation of Hawking's "area theorem."
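Concretely, and as standard results rather than anything new in the reporting: the Bekenstein-Hawking entropy is proportional to horizon area, and for a non-spinning black hole the area grows as the square of the mass, which is how a merger can radiate away energy yet still increase the total area:

```latex
S_{\mathrm{BH}} = \frac{k_{B} c^{3}}{4 G \hbar} A,
\qquad
A_{\mathrm{Schwarzschild}} = 16 \pi \left( \frac{G M}{c^{2}} \right)^{2} \propto M^{2}
% Example: two equal masses M merge and radiate ~5% of the total mass
% as gravitational waves, leaving 1.9M. The final area scales as
% (1.9M)^2 = 3.61 M^2, exceeding the initial combined 2 M^2.
```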

So the world is gradually catching up with Stephen Hawking's amazing predictions. But there are still quite a few that have yet to be proven one way or the other:

The existence of Hawking radiation creates a serious problem for theoreticians. It seems to be the only process in physics that deletes information from the universe.

The basic properties of the material that went into making the black hole appear to be lost forever; the radiation that comes out tells us nothing about them. This is the so-called information paradox that scientists have been trying to solve for decades. Hawking's own take on the mystery, which was published in 2016, is that the information isn't truly lost. It's stored in a cloud of zero-energy particles surrounding the black hole, which he dubbed "soft hair." But Hawking's hairy black hole theorem is only one of several hypotheses that have been put forward, and to date no one knows the true answer.

Black holes are created from the gravitational collapse of pre-existing matter such as stars. But it's also possible that some were created spontaneously in the very early universe, soon after the Big Bang.

Hawking was the first person to explore the theory behind such primordial black holes in depth. It turns out they could have virtually any mass whatsoever, from very light to very heavy though the really tiny ones would have "evaporated" into nothing by now due to Hawking radiation. One intriguing possibility considered by Hawking is that primordial black holes might make up the mysterious dark matter that astronomers believe permeates the universe. However, as LiveScience previously reported, current observational evidence indicates that this is unlikely. Either way, we currently don't have observational tools to detect primordial black holes or to say whether they make up dark matter.
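The "evaporation" claim can be made quantitative with the standard Hawking lifetime formula, textbook physics rather than something from the article: the evaporation time grows as the cube of the mass, so only the lightest primordial black holes would be gone by now.

```latex
\tau \;\approx\; \frac{5120 \, \pi \, G^{2} M^{3}}{\hbar c^{4}} \;\propto\; M^{3}
% Setting tau equal to the age of the universe (~4.3 x 10^17 s) gives
% M ~ 10^11 - 10^12 kg: primordial black holes lighter than this would
% have evaporated by now; heavier ones could still be around.
```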

One of the topics Hawking tinkered with toward the end of his life was the multiverse theory: the idea that our universe, with its beginning in the Big Bang, is just one of an infinite number of coexisting bubble universes.

Hawking wasn't happy with the suggestion, made by some scientists, that any ludicrous situation you can imagine must be happening right now somewhere in that infinite ensemble. So, in his very last paper in 2018, Hawking sought, in his own words, to "try to tame the multiverse." He proposed a novel mathematical framework that, while not dispensing with the multiverse altogether, rendered it finite rather than infinite. But as with any speculation concerning parallel universes, we have no idea if his ideas are right. And it seems unlikely that scientists will be able to test his idea any time soon.

Surprising as it may sound, the laws of physics as we understand them today don't prohibit time travel. The solutions to Einstein's equations of general relativity include "closed time-like curves," which would effectively allow you to travel back into your own past. Hawking was bothered by this, because he felt that backward travel in time raised logical paradoxes that simply shouldn't be possible.

So he suggested that some currently unknown law of physics prevents closed timelike curves from occurring: his so-called "chronology protection conjecture." But "conjecture" is just science-speak for "guess," and we really don't know whether time travel is possible or not.

One of the questions cosmologists get asked most often is "what happened before the Big Bang?" Hawking's own view was that the question is meaningless. To all intents and purposes, time itself, as well as the universe and everything in it, began at the Big Bang.

"For me, this means that there is no possibility of a creator," he said, and as LiveScience previously reported, "because there is no time for a creator to have existed in." That's an opinion many people will disagree with, but one that Hawking expressed on numerous occasions throughout his life. It almost certainly falls in the "will never be resolved one way or the other" category.

In his later years, Hawking made a series of bleak prophecies concerning the future of humanity that he may or may not have been totally serious about, BBC reported.

These range from the suggestion that the elusive Higgs boson, or "God particle," might trigger a vacuum bubble that would gobble up the universe, to hostile alien invasions and artificial intelligence (AI) takeovers. Although Stephen Hawking was right about so many things, we'll just have to hope he was wrong about these.

Originally published on Live Science.


Can we build a computer with free will? – The Next Web

Do you have free will? Can you make your own decisions? Or are you more like an automaton, just moving as required by your constituent parts? Probably, like most people, you feel you have something called free will. Your decisions are not predetermined; you could do otherwise.

Yet scientists can tell you that you are made up of atoms and molecules and that they are governed by the laws of physics. Fundamentally, then, in terms of atoms and molecules, we can predict the future from any given starting point. This seems to leave no room for free will, alternative actions, or decisions.

Confused? You have every right to be. This has been one of the longest-outstanding unresolved problems in philosophy. There has been no convincing resolution, though speculation has included a key role for quantum theory, which describes the uncertainty of nature at the smallest scales. It is this that has fascinated me. My research interests include the foundations of quantum theory. So could free will be thought of as a macroscopic quantum phenomenon? I set out to explore the question.

There is enough philosophy literature on the subject to fill a small library. As a trained scientist, I approached the problem by asking: what is the evidence? Sadly, in some ways, my research showed no link between free will and fundamental physics. Decades of philosophical debate as to whether free will could be a quantum phenomenon have been chasing an unfounded myth.

Imagine you are on stage, facing two envelopes. You are told that one has £100 inside and the other is empty. You have a free choice to pick one, yet every time the magician wins, and you pick the empty one. This implies that our sense of free will is not quite as reliable as we think it is, or at least that it's subject to manipulation, if it is there.

This is just one of a wide variety of examples that question our awareness of our own decision-making processes. Evidence from psychology, sociology, and even neuroscience all gives the same message: we are unaware of how we make decisions. And our own introspection is unreliable as evidence of how our mental processes function.

So, what is the evidence for the abstract concept of free will? None. How could we test for it? We can't. How could we recognize it? We can't. The supposed connection between our perception of free will and the uncertainty inherent to quantum theory is, therefore, unsupported by the evidence.

But we do have an experience of free will, and this experience is a fact. So having debunked the supposed link with fundamental physics, I wanted to go further and explore why we have a perception of being able to do otherwise. That perception has nothing to do with knowing the exact position of every molecule in our bodies, but everything to do with how we question and challenge our decision-making in a way that really does change our behavior.

For me as a scientist, this meant building a model of free will and testing it. But how would you do this? Could I mimic it with a computer program? If I were successful, how would my computer or robot be tested?

The topic is fuelled by prejudice. You would probably assume without evidence that my brother has free will, but my computer does not. So I will offer an emotionally neutral challenge: if an alien lands on Earth, how would you decide if it was an alien being with free will like us, or a sophisticated automaton?

Strangely, the philosophical literature does not seem to consider tests for free will. But as a scientist, it was essential to have a test for my model. So here is my answer: if you are right-handed, you will write your name holding a pen in your right hand. You will do so predictably, almost 100% of the time. But you have free will; you could do otherwise. You can prove it by responding to a challenge, or even by challenging yourself. Given a challenge, you may well write with your left hand. That is a highly discerning test of free will. And you can probably think of others, not just finely balanced 50:50 choices, but really rare events that show your independence and distinguish you from an automaton.

Based on this, I would test my alien with a challenge to do something unusual and useless, perhaps slightly harmful even, like putting its hand near a flame. I would take that as evidence of free will. After all, no robot would be programmed to do that.

And so I tried to model that behavior in the simplest, most direct way, starting with a generic goal-seeking computer program that responds to inputs from the environment. These programs are commonly used across disciplines, from sociology and economics to AI. The goal-seeking program is so general that it applies to simple models of human behavior, but also to hardware like the battery-saving program in your mobile phone.

For free will, we add one more goal: to assert independence. The computer program is then designed to satisfy this goal or desire by responding to challenges to do otherwise. It's as simple as that. Test it out yourself: the challenges can be external, or you can generate your own. After all, isn't that how you conclude that you have free will?
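As a rough sketch of that model, our own reconstruction with invented names rather than the author's actual program, a goal-seeking agent with an added independence goal might look like this:

```python
import random

class GoalSeekingAgent:
    """Toy goal-seeking agent with one extra goal: assert independence
    by 'doing otherwise' whenever its habitual choice is challenged."""

    def __init__(self, habitual_action, alternatives):
        self.habitual_action = habitual_action
        self.alternatives = alternatives

    def act(self, challenged=False):
        if challenged:
            # Independence goal: respond to the challenge by doing otherwise.
            return random.choice(self.alternatives)
        # Ordinary goal-seeking: the predictable, habitual choice.
        return self.habitual_action

writer = GoalSeekingAgent("sign with right hand", ["sign with left hand"])
print(writer.act())                 # predictable, near 100% of the time
print(writer.act(challenged=True))  # "does otherwise" when challenged
```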

In principle, the program can be implemented in today's computers. It would have to be sophisticated enough to recognize a challenge, and even more so to generate its own challenges. But this is well within reach of current technology. That said, I'm not sure that I want my own personal computer exercising free will.

This article by Mark Hadley, Visiting Academic in Physics, University of Warwick, is republished from The Conversation under a Creative Commons license. Read the original article.


Cloud Computing – W3schools

Cloud computing has become the buzzing topic of today's technology, driven mainly by the marketing and services offered by prominent corporate organizations like Google, IBM and Amazon. Cloud computing is the next stage in the evolution of the Internet. Though for some people "cloud computing" is a big deal, it is not: in reality, it is something we have been using for a long time. It is the Internet, along with the associated standards, that provides a set of web services to users. When users draw the Internet as a "cloud", they are representing this essential characteristic of cloud computing.

Cloud computing is the latest-generation technology, backed by an extensive IT infrastructure, that provides a means by which we can use applications as utilities via the Internet. Cloud computing makes IT infrastructure, along with its services, available on an "on-need" basis. Cloud technology includes a development platform, hard disk, computing power, software applications, and databases. It doesn't require large-scale capital expenditure to access cloud vendors; instead, the cloud facilitates "pay-per-use", i.e., an organization's users pay only for the cloud infrastructure they actually use. In other words, cloud computing refers to applications and services that run on a distributed network using virtualized resources and are accessed using standard Internet protocols.

Before learning about cloud technology, readers should know about networking, computers, and databases. Terms such as operating system, application, and program, and their meanings, should be understood before starting.

Both small and large IT companies have traditionally managed their own IT infrastructure: a server room consisting of database servers, mail servers, firewalls, routers, switches, QPS (queries per second) and load handlers, and other networking devices, along with the server engineers who maintain them. Providing such IT infrastructure costs a huge amount of money. Cloud computing technology came into play to reduce this IT infrastructure cost.

Cloud computing brings a real paradigm shift in the way systems are deployed. This massive technology shift was enabled by the popularity of the Internet and the growth of some famous multinational companies. Cloud computing makes users' dreams a reality through 'pay-as-you-go' pricing, infinitely scalable architecture, and universally available systems with high speed and accuracy.

With the cloud's help, an organization or individual can grow from a small start to a big name within a short time. So cloud computing is said to be a revolutionary change, even though the technology is still evolving. Cloud computing takes services, applications, and technology similar to those of the internet world and converts them into a self-service utility.

If we analyze cloud technology carefully, we will see that most people separate the cloud computing model into two distinct sets: deployment models and service models.

These topics will be discussed in detail in later chapters, as each of them has its own subcategories.


What Is Windows 365 Cloud PC in 2021? [Microsoft 365 Cloud PC] – Cloudwards

Well, we're finally here. Ever since the cloud first made it big on the computing scene with Dropbox's innovative consumer cloud storage solution, people have been speculating about how personal computers might one day work entirely in the cloud. Windows 365 Cloud PC promises to do just that, but will it live up to the hype?

In this article we'll look into Windows 365 Cloud PC and all of Microsoft's big promises. We'll explain what this Cloud PC does and how remote access might change the way you work, and try to answer the question: what is Windows 365 Cloud PC? Stick around for all the details on Microsoft's new cloud computing service.

It's a cloud computing platform that offers virtual PCs with Windows 10 or 11 installed that are quick and easy to set up.

Yes, because it is a cloud platform, you must use the cloud (and thus, an internet connection) to access it.

Windows 365 Cloud PC is a hardware virtualization platform. That might seem like a mouthful, but it's very easy to understand.

In simple terms, Microsoft 365 Cloud PC is a cloud service that hosts virtual PCs with Windows installed on them. Companies can use these virtual machines (or VMs) as if they were real Windows computers physically located in their offices. The only requirement is an internet connection.

Microsoft's Windows 365 aims to transform the workplace into a more fluid space between the office and the home.

If everything works as it should, these Cloud PCs should be able to stream a full Windows experience to any device you own, allowing you to easily switch devices for remote work.

Business customers will be able to take advantage of this hybrid Windows model (as Microsoft calls it) to create a seamless transition between in-office and remote work. Users will be able to use a Windows 365 virtual desktop when working in the office, then log in to that same Windows 365 Cloud PC on their personal computer when working remotely, with access to the same apps and files.

Windows 365 will come as part of the Microsoft 365 SaaS platform, which means it will follow a subscription model, billed per user.

According to Microsoft, these cloud PCs will be able to stream a full Windows 10 or 11 operating system, apps, files and all, to any device you choose.

Microsoft first dipped its toes into PC virtualization with its Windows Virtual Desktop platform (renamed Azure Virtual Desktop), which made it big after the COVID-19 pandemic hit. The one downside to Azure Virtual Desktop is that it can be complicated to set up, as you have to configure your own VM in order to use it.

Although it's built on Azure Virtual Desktop, the new Microsoft cloud platform is far easier to use. Cloud PCs will come preconfigured with Windows 10 or 11, and you can use them right out of the box as if they were regular PCs. System admins can manage cloud PCs via Microsoft Endpoint Manager, Microsoft's device management platform.

The most exciting thing about Microsoft's new cloud computing platform is that it should work with any device and operating system. Once your workplace assigns you a Windows 365 Cloud PC account, you can log in to it from any device with an internet connection and use it as if it were your real-life office PC.

Microsoft's tagline, "Hybrid Windows for a hybrid world," highlights cloud PCs' ability to adapt to both office and remote work.

This means that you could log in to your virtual Windows 365 Cloud PC from any personal or corporate device. This includes your laptop, whether it runs Windows, macOS or Linux, and even your Android or iOS mobile devices. You won't even have to roll out of bed to pick up your laptop; you can just fire up your work PC from your phone.

Why should businesses care? Well, they can save on up-front hardware costs by running Microsoft's virtual office PCs on weaker machines. This creates opportunities for small businesses that lack the resources to access the same computing power as their larger competitors.

The Microsoft cloud PC platform also lets you scale processing power by allocating extra resources to your cloud PCs (at an additional cost). And because you pay on a per-user basis, you won't have to pay for redundant VMs.

Microsoft lets you choose the processing capacity of your cloud PCs and change them at will.

Software developers will be glad to hear that the new virtual Windows machines work just like regular physical devices do, so developing for a virtual Windows machine should work the same way. This bodes well for app compatibility, too.

According to Microsoft, the Windows 365 platform is built with a zero-trust approach in mind. If you're not familiar with it, zero trust is the philosophy that employees should be granted only the minimum level of access to company files and systems, in order to minimize the risk of leaks and breaches.

If a company uses zero-trust architecture, a low-level employee falling prey to a phishing scam wont compromise the entire company system.

Using Windows 365's integration with Azure Active Directory (Azure AD) and Microsoft Endpoint Manager, system admins can create access policies for easy access management and overview. Multi-factor authentication adds another assurance that only those who need access can receive it.

Windows 365 is a cloud-based service, and as with all things cloud, it requires a constant, stable internet connection. Streaming Netflix in HD can be taxing enough on some connections, let alone streaming an entire operating system to your device.

Although many countries do have stable internet across their entire territory, even a country as advanced as the United States has poor internet coverage in rural areas. This means that a lot of Microsoft's big talk about Windows 365 revolutionizing remote work just won't be an option in a large part of the world.

Our advice is to take everything with a grain of salt until these Microsoft Cloud PCs are actually released and field-tested. After all, not even Google could successfully pull off its ambitious cloud gaming project, Google Stadia, and we have reason to believe that Windows 365 might hit some of the same speed bumps that Stadia did.

At the time of writing, we don't have any specific Microsoft pricing information. However, we gleaned one possible pricing plan during the latest Microsoft Inspire conference. The Windows 365 Business plan that was revealed offers a cloud PC with a two-core CPU, 4GB of RAM and 128GB of storage for $31 per month.

Concrete prices will be available on August 2, 2021, when Windows 365 launches.

We remain skeptical about how applicable the Microsoft Windows 365 Cloud PCs might be across the globe, but the service does show promise, especially if it can work well on a slower internet connection. If all goes according to plan, Windows 365 could indeed revolutionize the workspace and make virtual cloud PCs available to the masses.

Are you excited about these Cloud PCs? Do you believe Microsoft can pull off a quality streamed Windows experience, or do you expect it to fail? Please share your thoughts in the comments below, and as always, thank you for reading.



USCIS Gets a Grip on Cloud Costs With Robotics Cloud Automation – MeriTalk

When U.S. Citizenship and Immigration Services (USCIS) began moving to the cloud in 2014, much of the agency's cloud movement was "lift and shift", simply moving workloads as-is to the cloud. Over time, IT staff trained themselves on cloud operations and began to take greater advantage of the flexibility and scalability that cloud computing offers.

After several years, agency leadership tasked the Office of Information Technology (OIT) with digitizing all of the agency's immigration benefits processes. Around that time, IT leaders realized that they needed stronger governance and standards around cloud. They moved to domain-driven design, which allowed IT to work across the agency's directorates.

"Domain-driven design enabled us to work across the IT architecture, teams, and communications channels," said Rob Brown, USCIS chief technology officer. "We began to understand what development teams were doing across the agency. As a result, we began to consolidate teams, platforms, and toolsets, taking advantage of opportunities to reuse capabilities instead of buying more."

Then, last year, immigration services were curtailed because of the COVID-19 pandemic. Unlike most other Federal agencies, USCIS is self-funded through fees collected for the provision of immigration and citizenship benefits, not through congressional appropriations. With USCIS offices closed, fee collection dropped precipitously, and agency units were tasked to employ cost-saving measures.

At OIT, leaders developed a cost-management strategy, as well as a 90-day Operation Cloud Control (OCC) project to realize immediate savings and establish processes to lock in those savings moving forward. Moving to reserved instances alone saved more than $2.5 million during 2020-2021. In total, the OCC project saved nearly $4 million during the same time frame.

Through the OCC project, OIT educated staff about the need to rightsize cloud instances, and it created dashboards to measure progress, which were visible to IT teams as well as executive management.

OIT also re-architected applications to operate more effectively in the cloud, and it rolled out design-cost principles and policies. Those policies are enforced by Robotic Cloud Automation (RCA), a library of serverless cloud automation solutions developed by Simple Technology Solutions (STS) that leverages Amazon Web Services (AWS) native tagging capabilities and Lambda scripts. The library is a one-time cost to USCIS, rather than a recurring expense.

RCA automatically identifies cloud sprawl using those tags and governance-as-code, and, in contrast to other solutions, then remediates cloud instances, environments, and resources that are over-provisioned, over-scheduled, or not compliant with the agency's usage standards. Remediation actions include moving to reserved instances, autoscaling to accommodate spikes in demand, and more.

"USCIS's design-cost principles and policies are manifested in the Lambda scripts," noted Aaron Kilinski, principal and chief technologist at STS. "Not only do the scripts ensure good cloud hygiene across the enterprise, but they also enable USCIS to take advantage of the operational agility and economic advantages of AWS's consumption-based model."
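As a rough illustration of that governance-as-code pattern, not USCIS's or STS's actual scripts, a Lambda-style Python function using AWS's boto3 SDK could sweep for running EC2 instances that lack a required governance tag and stop them. The tag key and the remediation choice here are hypothetical:

```python
import boto3

REQUIRED_TAG = "CostPolicy"  # hypothetical governance tag key

def handler(event, context):
    """Lambda-style sweep: stop running EC2 instances missing the
    required governance tag. Illustrative only; real remediation
    (rightsizing, scheduling, reserved instances) is richer."""
    ec2 = boto3.client("ec2")
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    if untagged:
        ec2.stop_instances(InstanceIds=untagged)  # remediate non-compliant VMs
    return {"stopped": untagged}
```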
