
The Trump-loving, climate-sceptic island sinking into the sea – Sydney Morning Herald


Tangier Island, Virginia: As she surveys her waterlogged front lawn, Bonnie Landon doesn't dare think about the future. The present is upsetting enough. It's been less than a year since her husband of almost six decades, Harold, died. Now she fears she will be forced to abandon the home they shared for their entire marriage.

Landon, 77, lives on Tangier Island, a tiny and remote community in Virginia's Chesapeake Bay located 150 kilometres from Washington DC. The only way to reach it is by a ferry ride that takes between 45 minutes and an hour from the US mainland. Once you arrive, mobile phone service is virtually non-existent. The marshy island, which spans just three square kilometres, is so small that most people get around on golf carts rather than cars. No alcohol is allowed to be sold, reflecting the deeply conservative and devoutly Christian nature of the community.

Bonnie Landon stands in front of her home amid her flooded lawn on Tangier Island in Virginia. Credit: Amanda Andrade-Rhoades

Landon's lawn is submerged in ankle-deep water following a storm the previous night. It wasn't an especially dramatic downpour, but because of Tangier's low-lying topography, even minor storms can trigger heavy flooding. "This happens when the tide comes up," Landon says. She adds that the problem has been getting steadily worse over recent years.

While the western side of Tangier is partially protected by a break wall of rocks, the eastern side, where Landon lives, is entirely exposed to the elements. "We need a seawall bad on this side of the island," she says, the desperation rising in her voice. "Without it, we'll just be underwater."

She's not alone in her pessimism. Tangier Island is groaning under the weight of severe economic, demographic and environmental strain, so much so that its very existence is in doubt. While tourists commonly say that visiting Tangier feels like stepping into the past, scientists say it instead offers a glimpse of the future in a world of catastrophic climate change.

According to US census data, around 1000 people lived on the island in the 1940s, a figure that has plunged to just 400 or so today. The decline is so severe that experts have labelled it a demographic collapse.

"Most everybody who graduates high school leaves the island now," says ferry captain Mark Haynie, who was born and raised on the island. "There's a lot less people around than when I was a boy."

The island's crab and oyster harvesters, known as watermen, are struggling to make a living because of environmental regulations and falling prices. Worst of all, an estimated two-thirds of the island's land mass has disappeared since 1850. Much of what remains is swampy wetland unfit for human habitation.

"In a couple more years you might not see none of this," waterman Clayton Parks says as he gazes at Tangier harbour's distinctive wooden crab shanties. "We're getting washed away."

Until recently, Tangier was famous for two reasons: being the soft shell crab capital of America and the unique dialect of English that is spoken on the island. In 2015, it shot to international attention when a paper in the journal Scientific Reports predicted its residents could become some of America's first climate refugees. According to the paper's authors, the island may have to be abandoned within 25 years because of sea level rise associated with climate change.

Tangier Island is a deeply conservative and devoutly Christian society. Credit: Amanda Andrade-Rhoades

As world leaders prepare to meet in Glasgow for a crucial climate summit next week, Tangier Island is precisely the type of place environmentalists point to when arguing for dramatic cuts to carbon emissions.

The catch is that most of the island's residents don't believe that they are living on the climate change frontline. Instead, they largely blame naturally occurring factors that have ravaged the island for centuries.

"I don't believe it's got anything to do with the changing climate," Landon says of the tides she fears will one day engulf her street.

Even typical storms can cause major flooding on the streets of Tangier Island. Credit: Amanda Andrade-Rhoades

James Eskridge, Tangier Island's mayor for the past 14 years, never tires of telling the story. After all, getting a phone call from Donald Trump was one of the highlights of his life. In June 2017, a CNN crew visited the island and asked Eskridge if he had a message for Trump.

"I said, 'Yes, tell him I love him like family,'" Eskridge recalls over lunch at Lorraine's, a seafood restaurant near the town's marina. Like nine in 10 of the island's residents, Eskridge voted for Trump in the 2016 election. "There are very few Democrats on the island," Eskridge says. "We allow them to live here."

Mayor James "Ooker" Eskridge has lunch at Lorraine's Seafood Restaurant on Tangier Island. Credit: Amanda Andrade-Rhoades

A few days after the CNN interview aired, Eskridge, a lifelong waterman, was out crabbing when his son drove out in a boat to find him. "He said, 'Dad, you've got to get home, the President wants to talk to you.' I said, 'The President of what?' I didn't know if someone was joking with me."

Trump and Eskridge spoke for around 10 minutes, bonding over their opposition to environmental red tape and scepticism about climate change. Eskridge says Trump assured him: "Tangier is not going anywhere." The abundance of Trump 2024 flags already flying on the island suggests the former president remains as popular as ever here. "We were very disappointed," Eskridge says of Trump's 2020 election defeat. "I know it's controversial, but I'm not so sure he lost," he adds, backing Trump's unfounded claims of widespread election fraud.

Eskridge, known universally on the island by his childhood nickname of Ooker, has a Jesus fish tattoo on one arm and a Star of David on the other. Over the years he has named his pet cats after an array of famous conservative figures, including right-wing pundit Ann Coulter and Supreme Court justice Samuel Alito.

Mark Haynie drives a boat between Crisfield, Maryland and Tangier Island in Virginia. Credit: Amanda Andrade-Rhoades

As a waitress brings out servings of soft shell crab sandwiches and fries smothered in crab dip, Eskridge reflects on what makes Tangier such an unusual place. The isolation from the rest of society, he says, fosters a sense of community that has largely disappeared from modern America.

"My kids live on the mainland and don't even know who their neighbours are. That's so odd to me, to live by somebody for years and never talk to them. It's a different world."

Then there is another byproduct of Tangiers remoteness: the language islanders use among themselves.

"This time of year, people say Hawkins is coming," Eskridge says. To prove his point he yells out to a diner at a nearby table: "You know who Hawkins is, don't you?"

"Oh yeah," replies Mark Crockett, a local waterman and ferry operator. "We don't want to see Hawkins just yet, we're not ready for him."

Jamie Parks brings a plate of crabby fries to a table at Lorraine's Seafood Restaurant. Credit: Amanda Andrade-Rhoades

Eskridge explains that Hawkins, in the Tangier lexicon, means cold weather and little or no money being made. "I don't know where it came from: my father said it and my grandfather used to say it."

Other local phrases include to have the mibs (to smell), to be dry as Peckard's cow (to be thirsty) and to be selling cakes (to have your fly down). Islanders also use what is known as backwards talk, in which they say the opposite of what they actually mean. To describe a stranger as ugly, for example, is to say you think they are attractive. "We tone it down when we're talking to folks from the mainland," Eskridge says of the dialect.

After lunch he takes The Sydney Morning Herald and The Age on a golf-cart tour of the island and a boat ride to his yellow-and-lime green crab shanty. He says journalists from 40 countries have visited the island in recent years, but proudly notes this is his first time hosting a reporter from Australia.

Along the way Eskridge inspects his crab pots to see what has arrived overnight. He's a man in his element, doing what he believes God put him on earth to do. Explaining why he never wants to live anywhere else, he says: "It's the freedom we have here. Crabbing, working the water, you are your own boss, you make your own hours."

Like the scientific experts, Eskridge believes his beloved island is in a fight for survival. "It's disappearing," he says of the place where he grew up, as did his father, grandfather and great-grandfather before him. "We've lost five or six other smaller communities around Tangier that have gone underwater."

James "Ooker" Eskridge takes in a crab trap. Credit: Amanda Andrade-Rhoades

But he disagrees with them on the cause of the problem. Rather than rising tides caused by climate change, he says coastal erosion is to blame. "I'm not concerned about sea level rise," he says. "If I see the sea level rising I will say so, but to me the sea looks the same as when I was a kid."

He regards the debate about shifting from fossil fuels to renewables as a distraction from his mission to get as much of the island as possible protected by stone breakwalls. "Solar panels would be good for the island if we could pile them up on the shoreline and make a seawall out of them," he quips.

David Schulte, a marine biologist with the Army Corps of Engineers who co-wrote the attention-grabbing 2015 paper on climate change's impact on Tangier, insists a sea wall will not be enough to save the island.

"You can build a ring of stone around the edge of the island, but the problem is that sea levels are going to continue to rise and convert the high ground the town is sitting on into swamp and marsh," he says. "And you really can't live in marsh."

For a forthcoming paper in the peer-reviewed journal Frontiers in Climate, Schulte found alarming declines in the height of Tangier's three upland ridges. He says it's an important contribution to the debate over sea level rise and erosion. "These ridges are not on the coast, so are not subject to coastal erosion. Any decline in their extent can be directly attributed to sea level rise."

Even more worryingly, he found that sea level rise in the Chesapeake Bay is trending towards the higher end of estimates, meaning the island could be uninhabitable within 20 to 25 years.

"In the next couple of decades a combination of sea level rise and erosion is going to drive them off the island unless significant action is taken. I don't think there's any way to save Tangier without a massive engineering undertaking."

This would involve raising the height of the island ridges, temporarily relocating all residents and retrofitting the island's plumbing and electricity systems, an expensive and laborious exercise. Given the island's small and declining population, it's a price American taxpayers may not be willing to pay.

Waves break on the shoreline on Tangier Island. Credit: Amanda Andrade-Rhoades

Heartened by the attention the island has received in recent years, Eskridge is more optimistic. "I think we'll get the help that we need in time, but it's taking a while," he says. The fight for Tangier's survival is not one he can conceive of losing. "When we talk about saving the island, I'm not just talking about a piece of land. I'm talking about a culture and a way of life. We've been here for hundreds of years and we plan to be here for hundreds more."


Is ‘Impeachment’ Changing The Way America Sees The Clinton Affair? – The Federalist

Federalist Culture Editor Emily Jashinsky and D.C. Columnist Eddie Scarry discuss American Crime Story: Impeachment, Ryan Murphy's stab at the scandalous affair between Bill Clinton and Monica Lewinsky.

Emily Jashinsky: Ryan Murphy usually loses me around the first episode of his series and seasons. His insane output means a lot of his work is formulaic, and his critical acclaim means a lot of it is exhaustingly self-indulgent.

But American Crime Story: Impeachment is Murphy at his best, giving strong women their due with balance and passion. He also does something rare, capturing D.C. as the fluorescent-lit hellscape that it is while also conveying the city's drama and gravity without being overly romantic.

The casting is both perfect and terrible. Edie Falco is a letdown. Cobie Smulders is a vision. I think Sarah Paulson and Murphy are doing Linda Tripp justice, something she's never really been afforded. What do you think, Eddie? The casting is a little controversial, but who are your standouts and letdowns?

Eddie Scarry: The only real disappointment I've had with the casting is with Monica! The real one is and was a lot more attractive and had a certain confidence. Or that was my impression at the time as a young not-yet-gay boy catching glimpses of her on TV.

Beanie Feldstein just doesn't fit my memory, and I wonder if Lewinsky (credited as a producer on the show) was in favor of that casting. Otherwise, Sarah Paulson as Tripp is my absolute favorite thing on TV of 2021.

I didn't know much about the real Linda Tripp because much of what I learned about the Clinton impeachment came years later, as an adult and through reading. So if she was anything like this character in the show then, well, she was certainly a character.

Why do you hate Falco? She might yet have her moment as Hillary.

EJ: The confidence point is an excellent one. We see glimpses of it from Feldstein, but not with the swagger of someone who would walk around in a beret. We know Lewinsky said she was involved in pretty much every minute of the show. I think that raises a lot of serious questions. The show is obviously dramatized, so are we to assume Lewinsky rubber-stamped exaggerations and fictionalizations of the events? If so, what's accurate (and new) and what's dramatized?

Falco should have used the prosthetics to look more like Hillary. That's kind of how ACS works. It's a distraction that she didn't. I'd like to see more of her too, although I'm glad they let Tripp repeat the gossip about the Clintons coming into the White House with bad attitudes and a sense of entitlement. The Paula Jones casting is incredible too, although I didn't love Taran Killam as her husband; it was cartoonish.

All that aside, do you think the show is succeeding because of '90s nostalgia and the benefit of built-in familiarity, or because it's also good on its own merits? I think the latter is true, but I can understand the argument for the former.

ES: I would guess it's probably true that the audience likes seeing this culture-defining saga, about which we all have such sharp memories, play out in a storified and dramatized way. But I also think that a lot of people who've tuned in had no idea that all of this started with Vince Foster and Whitewater and a special counsel, and that there were these colorful people like Ann Coulter and Matt Drudge pulling so many strings.

All of that to me is SO MUCH more fascinating than the low-rent Monica-Clinton affair. And I would think a lot of people finding out about that stuff for the first time are also really fascinated by it.

EJ: Okay, I agree completely with that. Great point. Fearful of being in bed with the "vast right-wing conspiracy," legacy media has smoothed out the rough edges of the Clinton administration for decades. But it's a fascinating story! And Murphy is actually diving in, from Drudge to Coulter to Paula Jones.

That story has been waiting to be told in this format. And Murphy is subtly very brave by letting Smulders really nail Ann Coulter and her lesser-known contributions to the saga. She comes across exactly as she should, unusually witty and surprisingly brilliant for someone so young and beautiful.

I'll also add that I think the Clinton-Lewinsky scandal is often depicted as a fling, and Murphy is plumbing the depths of the relationship to great effect. It was both sexual and emotional, and Lewinsky, having received hatpins and copies of Leaves of Grass from the leader of the free world, was obsessively in love. It's easier to understand why when you have the full context.

Do you think the show is having any meaningful effect on the publics perception of the entire ordeal?

ES: Right, the perception created by the media at the time was that the affair was this spicy sexual secret that two naughty adults were caught in, but it was way more serious. I'm not some feminist champion or a storied Monica sympathizer, but something I do hope people take from the show is that being the subject of a national pile-on, the butt of endless jokes, whether on late-night TV or now the internet, can be a very debilitating and lonely thing, especially for someone who doesn't work in the business like you and me.

That's what happened to Lewinsky, and she was arguably the first one to suffer it. At 24!


Grover’s algorithm – Wikipedia

Quantum search algorithm

In quantum computing, Grover's algorithm, also known as the quantum search algorithm, refers to a quantum algorithm for unstructured search that finds with high probability the unique input to a black-box function that produces a particular output value, using just $O(\sqrt{N})$ evaluations of the function, where $N$ is the size of the function's domain. It was devised by Lov Grover in 1996.[1]

The analogous problem in classical computation cannot be solved in fewer than $O(N)$ evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). At roughly the same time that Grover published his algorithm, Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function $\Omega(\sqrt{N})$ times, so Grover's algorithm is asymptotically optimal.[2] Since researchers generally believe that NP-complete problems are difficult because their search spaces have essentially no structure, the optimality of Grover's algorithm for unstructured search suggests (but does not prove) that quantum computers cannot solve NP-complete problems in polynomial time.[3]

Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when $N$ is large, and Grover's algorithm can be applied to speed up broad classes of algorithms.[3] Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly $2^{64}$ iterations, or a 256-bit key in roughly $2^{128}$ iterations. As a result, it is sometimes suggested[4] that symmetric key lengths be doubled to protect against future quantum attacks.

Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms.[5][6][7] In particular, algorithms for NP-complete problems generally contain exhaustive search as a subroutine, which can be sped up by Grover's algorithm.[6] The current best algorithm for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover.[8] These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance.

Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness[9] and the collision problem[10] (solved with the Brassard-Høyer-Tapp algorithm). In these types of problems, one treats the oracle function $f$ as a database, and the goal is to use the quantum query to this function as few times as possible.

Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function $y = f(x)$ that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate $x$ when given $y$. Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks.[11] However, this may not necessarily be the most efficient algorithm since, for example, the parallel rho algorithm is able to find a collision in SHA-2 more efficiently than Grover's algorithm.[12]

Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search.[13]

The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers.[14] However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data.

As input for Grover's algorithm, suppose we have a function $f : \{0, 1, \ldots, N-1\} \to \{0, 1\}$. In the "unstructured database" analogy, the domain represents indices to a database, and $f(x) = 1$ if and only if the data that $x$ points to satisfies the search criterion. We additionally assume that only one index satisfies $f(x) = 1$, and we call this index $\omega$. Our goal is to identify $\omega$.

We can access $f$ with a subroutine (sometimes called an oracle) in the form of a unitary operator $U_\omega$ that acts as follows:

$$U_\omega |x\rangle = \begin{cases} -|x\rangle & \text{if } x = \omega, \\ \phantom{-}|x\rangle & \text{if } x \neq \omega. \end{cases}$$

This uses the $N$-dimensional state space $\mathcal{H}$, which is supplied by a register with $n = \lceil \log_2 N \rceil$ qubits. This is often written as

$$U_\omega |x\rangle = (-1)^{f(x)} |x\rangle.$$

Grover's algorithm outputs $\omega$ with probability at least $1/2$ using $O(\sqrt{N})$ applications of $U_\omega$. The failure probability can be made arbitrarily small by running Grover's algorithm multiple times. If one runs Grover's algorithm until $\omega$ is found, the expected number of applications is still $O(\sqrt{N})$: since each run succeeds with probability at least $1/2$, the number of runs follows a geometric distribution and is only two on average.

This section compares the above oracle $U_\omega$ with an oracle $U_f$.

$U_\omega$ is different from the standard quantum oracle for a function $f$. This standard oracle, denoted here as $U_f$, uses an ancillary qubit system. The operation then represents an inversion (NOT gate) conditioned by the value of $f(x)$ on the main system:

$$U_f |x\rangle |y\rangle = \begin{cases} |x\rangle\, |\neg y\rangle & \text{if } f(x) = 1, \\ |x\rangle\, |y\rangle & \text{if } f(x) = 0, \end{cases}$$

or briefly, $U_f |x\rangle |y\rangle = |x\rangle |y \oplus f(x)\rangle$.

These oracles are typically realized using uncomputation.

If we are given $U_f$ as our oracle, then we can also implement $U_\omega$, since $U_\omega$ is $U_f$ when the ancillary qubit is in the state $|-\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - |1\rangle\big) = H|1\rangle$:

$$U_f \big(|x\rangle \otimes |-\rangle\big) = (-1)^{f(x)}\, |x\rangle \otimes |-\rangle = \big(U_\omega |x\rangle\big) \otimes |-\rangle.$$

So, Grover's algorithm can be run regardless of which oracle is given.[3] If $U_f$ is given, then we must maintain an additional qubit in the state $|-\rangle$ and apply $U_f$ in place of $U_\omega$.
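
To make this phase-kickback equivalence concrete, here is a minimal numpy sketch (illustrative, not from the article; the register size and marked index are arbitrary choices). It builds the standard oracle $U_f$ on a register plus one ancilla and checks that, with the ancilla prepared in $|-\rangle$, it acts on the register exactly like the phase oracle $U_\omega$:

```python
import numpy as np

n = 3                         # main-register qubits (illustrative)
N = 2 ** n
omega = 5                     # marked index (illustrative)

# Standard oracle U_f |x>|y> = |x>|y XOR f(x)>, with f(x) = 1 iff x = omega.
# Basis ordering: joint index = 2*x + y (the ancilla is the last qubit).
U_f = np.eye(2 * N)
U_f[[2 * omega, 2 * omega + 1]] = U_f[[2 * omega + 1, 2 * omega]]  # swap rows

minus = np.array([1.0, -1.0]) / np.sqrt(2)   # ancilla state |->

for x in range(N):
    joint = np.kron(np.eye(N)[x], minus)     # |x> tensor |->
    out = U_f @ joint
    sign = -1.0 if x == omega else 1.0       # expected phase (-1)^{f(x)}
    assert np.allclose(out, sign * joint)    # U_f acts as U_w tensor I

print("U_f with the ancilla in |-> reproduces the phase oracle U_omega")
```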

The steps of Grover's algorithm are given as follows:

1. Initialize the system to the uniform superposition over all states, $|s\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle$.

2. Perform the following "Grover iteration" $r(N)$ times: apply the oracle $U_\omega$, then apply the diffusion operator $U_s = 2|s\rangle\langle s| - I$.

3. Measure the register in the computational basis.

For the correctly chosen value of $r$, the output will be $|\omega\rangle$ with probability approaching 1 for $N \gg 1$. Analysis shows that this eventual value for $r(N)$ satisfies $r(N) \leq \big\lceil \frac{\pi}{4}\sqrt{N} \big\rceil$.

Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits.[3] Thus, the gate complexity of this algorithm is $O(\log(N)\, r(N))$, or $O(\log(N))$ per iteration.
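
As a concrete illustration of these steps, here is a small statevector simulation in Python/numpy (a sketch under assumed parameters, not from the article). The oracle is applied as a sign flip on the marked amplitude, and the diffusion operator as $2|s\rangle\langle s|\psi\rangle - |\psi\rangle$:

```python
import numpy as np

n = 10                          # qubits, so N = 1024 database entries
N = 2 ** n
omega = 123                     # marked index (illustrative)

s = np.full(N, 1 / np.sqrt(N))  # step 1: uniform superposition |s>
psi = s.copy()

r = int(np.floor(np.pi / 4 * np.sqrt(N)))   # optimal r(N) = 25 here
for _ in range(r):                  # step 2: Grover iteration, r times
    psi[omega] *= -1                # oracle U_w flips the marked amplitude
    psi = 2 * s * (s @ psi) - psi   # diffusion U_s = 2|s><s| - I

print(f"P(measure omega) after {r} iterations: {psi[omega] ** 2:.4f}")
# ~0.999, versus 1/N ~ 0.001 for a single blind guess
```

Measuring the register (step 3) then returns $\omega$ with probability close to 1.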

There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by $|s\rangle$ and $|\omega\rangle$; equivalently, the plane spanned by $|\omega\rangle$ and the perpendicular ket $|s'\rangle = \frac{1}{\sqrt{N-1}} \sum_{x \neq \omega} |x\rangle$.

Grover's algorithm begins with the initial ket $|s\rangle$, which lies in the subspace. The operator $U_\omega$ is a reflection at the hyperplane orthogonal to $|\omega\rangle$ for vectors in the plane spanned by $|s'\rangle$ and $|\omega\rangle$, i.e. it acts as a reflection across $|s'\rangle$. This can be seen by writing $U_\omega$ in the form of a Householder reflection:

$$U_\omega = I - 2|\omega\rangle\langle\omega|.$$

The operator $U_s = 2|s\rangle\langle s| - I$ is a reflection through $|s\rangle$. Both operators $U_s$ and $U_\omega$ take states in the plane spanned by $|s'\rangle$ and $|\omega\rangle$ to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm.

It is straightforward to check that the operator $U_s U_\omega$ of each Grover iteration step rotates the state vector by an angle of $\theta = 2\arcsin\frac{1}{\sqrt{N}}$. So, with enough iterations, one can rotate from the initial state $|s\rangle$ to the desired output state $|\omega\rangle$. The initial ket is close to the state orthogonal to $|\omega\rangle$: its overlap with $|s'\rangle$ is $\langle s'|s\rangle = \sqrt{\frac{N-1}{N}}$.

In geometric terms, the angle $\theta/2$ between $|s\rangle$ and $|s'\rangle$ is given by $\sin\frac{\theta}{2} = \frac{1}{\sqrt{N}}$.

We need to stop when the state vector passes close to $|\omega\rangle$; after this, subsequent iterations rotate the state vector away from $|\omega\rangle$, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is

$$\sin^2\!\Big(\big(r + \tfrac{1}{2}\big)\,\theta\Big),$$

where $r$ is the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is therefore $r \approx \pi\sqrt{N}/4$.

To complete the algebraic analysis, we need to find out what happens when we repeatedly apply $U_s U_\omega$. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of $|s\rangle$ and $|\omega\rangle$. In the basis $\{|\omega\rangle, |s\rangle\}$ (which is neither orthogonal nor a basis of the whole space) the action $U_s U_\omega$ of applying $U_\omega$ followed by $U_s$ is given by the matrix

$$U_s U_\omega = \begin{bmatrix} 1 & 2/\sqrt{N} \\ -2/\sqrt{N} & 1 - 4/N \end{bmatrix}.$$

This matrix happens to have a very convenient Jordan form. If we define $t = \arcsin(1/\sqrt{N})$, the matrix has unit determinant and trace $2 - 4/N = 2\cos(2t)$, so its eigenvalues are $e^{2it}$ and $e^{-2it}$.

It follows that the $r$-th power of the matrix (corresponding to $r$ iterations) has eigenvalues $e^{2irt}$ and $e^{-2irt}$.

Using this form, we can use trigonometric identities to compute the probability of observing $|\omega\rangle$ after $r$ iterations mentioned in the previous section, $\sin^2\big((2r+1)\,t\big)$.
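
This closed form is easy to check numerically (illustrative values, consistent with the simulation sketched earlier):

```python
import numpy as np

N = 1024
t = np.arcsin(1 / np.sqrt(N))
for r in (5, 15, 25, 35):
    print(r, np.sin((2 * r + 1) * t) ** 2)
# the probability peaks near r = pi*sqrt(N)/4 ~ 25 and falls off beyond it,
# matching the "overshooting" behaviour described above
```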

Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles $2rt$ and $-2rt$ are as far apart as possible, which corresponds to $2rt \approx \pi/2$, or $r = \pi/4t = \pi/\big(4\arcsin(1/\sqrt{N})\big) \approx \pi\sqrt{N}/4$. Then the system is in a state very close to $|\omega\rangle$.

A short calculation now shows that the observation yields the correct answer with error $O\big(\tfrac{1}{N}\big)$.

If, instead of 1 matching entry, there are $k$ matching entries, the same algorithm works, but the number of iterations must be $\frac{\pi}{4}\big(\frac{N}{k}\big)^{1/2}$ instead of $\frac{\pi}{4} N^{1/2}$.

There are several ways to handle the case if $k$ is unknown.[15] A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly with geometrically decreasing guesses for the number of matches, taking $k = N, N/2, N/4, \ldots$, i.e. $k = N/2^t$ at iteration $t$, until a matching entry is found.

With sufficiently high probability, a marked entry will be found by iteration $t = \log_2(N/k) + c$ for some constant $c$. Thus, the total number of iterations taken is at most a geometric series dominated by its final term, which is $O\big(\sqrt{N/k}\big)$.
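
A sketch of this exponential-guessing loop in Python (illustrative; `grover_measure` reuses the statevector simulation from the earlier sketch, generalized to a set of marked indices):

```python
import numpy as np

def grover_measure(marked, N, r, rng):
    """Run r Grover iterations with the given marked set, then sample."""
    s = np.full(N, 1 / np.sqrt(N))
    psi = s.copy()
    for _ in range(r):
        psi[list(marked)] *= -1          # oracle: flip all marked amplitudes
        psi = 2 * s * (s @ psi) - psi    # diffusion
    return rng.choice(N, p=psi ** 2)     # simulated measurement

def search_unknown_k(marked, N, rng):
    k_guess = N
    while k_guess >= 1:                  # guesses k = N, N/2, N/4, ...
        r = int(np.ceil(np.pi / 4 * np.sqrt(N / k_guess)))
        x = grover_measure(marked, N, r, rng)
        if x in marked:                  # verify with one classical query
            return x
        k_guess //= 2
    return None                          # unlucky run: simply repeat

rng = np.random.default_rng(0)
print(search_unknown_k({7, 99, 512}, 1024, rng))  # finds one of the 3 entries
```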

A version of this algorithm is used in order to solve the collision problem.[16][17]

A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004.[18] In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, the 25-50% band, the 50-75% band or the 75-100% band.

To describe partial search, we consider a database separated into $K$ blocks, each of size $b = N/K$. The partial search problem is easier. Consider the approach we would take classically: we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we don't find the target, then we know it's in the block we didn't search. The average number of iterations drops from $N/2$ to $(N - b)/2$.

Grover's algorithm requires $\frac{\pi}{4}\sqrt{N}$ iterations. Partial search will be faster by a numerical factor that depends on the number of blocks $K$. Partial search uses $n_1$ global iterations and $n_2$ local iterations. The global Grover operator is designated $G_1$ and the local Grover operator is designated $G_2$.

The global Grover operator acts on the blocks, in effect treating each block as a single item to be searched over.

The optimal values of $n_1$ and $n_2$ are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search.

Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator $U_\omega$ must apply $U_\omega$ at least a $1 - o(1)$ fraction as many times as Grover's algorithm.[19] The extension of Grover's algorithm to $k$ matching entries, with $\frac{\pi}{4}(N/k)^{1/2}$ iterations, is also optimal.[16] This result is important in understanding the limits of quantum computation.

If Grover's search problem were solvable with $\log^c N$ applications of $U_\omega$, that would imply that NP is contained in BQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm suggests that quantum computers cannot solve NP-complete problems in polynomial time, and thus that NP is not contained in BQP.

It has been shown that a class of non-local hidden variable quantum computers could implement a search of an $N$-item database in at most $O(\sqrt[3]{N})$ steps. This is faster than the $O(\sqrt{N})$ steps taken by Grover's algorithm.[20]


Quantum Engineering | Electrical and Computer Engineering

Quantum mechanics famously allows objects to be in two places at the same time. The same principle can be applied to information, represented by bits: quantum bits can be both zero and one at the same time. The field of quantum information science seeks to engineer real-world devices that can store and process quantum states of information. It is believed that computers operating according to such principles will be capable of solving certain problems exponentially faster than existing computers, while quantum networks have provable security guarantees. The same concepts can be applied to making more precise sensors and measurement devices. Constructing such systems is a significant challenge, because quantum effects are typically confined to the atomic scale. However, through careful engineering, several physical platforms have been identified for quantum computing, including superconducting circuits, laser-cooled atoms and ions, and electron spins in semiconductors.

Research at Princeton focuses on several aspects of this problem, ranging from fundamental studies of materials and devices to quantum computer architecture and algorithms. Our research groups have close-knit collaborations across several departments, including chemistry, computer science and physics, and with industry.


First Photonic Quantum Computer on the Cloud – IEEE Spectrum

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available, along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers (spreadsheets if you will, minus the descriptive column headers you might find in a typical Excel file).

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
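
In code, a matrix multiplication is nothing more than nested multiply-and-accumulate loops, which is the work such hardware is optimized to do (a schematic Python illustration, not tied to any particular hardware):

```python
def matmul(A, B):
    """Multiply matrices A (m x n) and B (n x p) with explicit MAC loops."""
    m, n, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            acc = 0.0
            for k in range(n):
                acc += A[i][k] * B[k][j]   # one multiply-and-accumulate
            C[i][j] = acc
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```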

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network, designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources, along with time, money, and energy, at the problem.

As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together: the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.


The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine them. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components, photodetectors, to measure the two output beams. They don't measure the electric-field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.

Why is that relation important? To understand that requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
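
Here is that argument as a few lines of Python (a numerical sketch of the idealized device described above; real hardware adds noise and converter error):

```python
import numpy as np

def optical_multiply(x, y):
    """Idealized beam-splitter multiplier: encode x and y as field
    amplitudes, detect the two output powers, subtract."""
    e_plus = (x + y) / np.sqrt(2)    # field at one output port
    e_minus = (x - y) / np.sqrt(2)   # field at the other output port
    p_plus = e_plus ** 2             # photodetectors measure power,
    p_minus = e_minus ** 2           # i.e. the square of the field
    return p_plus - p_minus          # negate one signal and sum: 2xy

print(optical_multiply(0.3, 0.5))    # 0.3  (= 2 * 0.3 * 0.5)
```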

Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Credit: Lightmatter

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse; you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
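
Extending the sketch above, the capacitor turns a train of pulses into a dot product, with a single analog-to-digital read at the end (again illustrative; `optical_multiply` is the function from the previous sketch):

```python
import numpy as np

def optical_dot(xs, ws):
    """Pulse the inputs once per pair; each pulse deposits a charge
    proportional to 2*x*w on the capacitor. One ADC read at the end."""
    charge = 0.0
    for x, w in zip(xs, ws):
        charge += optical_multiply(x, w)   # one multiply-and-accumulate
    return charge / 2                      # scale out the factor of 2

xs = np.array([0.1, 0.2, 0.3, 0.4])
ws = np.array([0.5, 0.6, 0.7, 0.8])
print(optical_dot(xs, ws), xs @ ws)        # both 0.7
```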

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times (consuming energy each time), it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysys, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysys hopes to bring this approach up to date and apply it more widely.


There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approaches, spiking and optics, is quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this time, and the future of such computations may indeed be photonic.


Quantum Computer Maker Rigetti to Go Public via $1.5 …

Quantum computer maker Rigetti said on Wednesday it will go public through a merger with a blank-cheque firm in a deal that potentially values the combined company at $1.5 billion (roughly Rs. 11,240 crores).

This is the second quantum computer hardware maker to announce going public this year using a blank-cheque, or special purpose acquisition company (SPAC). Maryland-based IonQ listed on the New York Stock Exchange on Friday. SPACs are shell companies that raise funds through an initial public offering to acquire a private company, which then becomes public as a result.

Rigetti said the merger with Zillow co-founder Spencer Rascoff-backed Supernova Partners Acquisition Company will provide it with about $458 million (roughly Rs. 3,431 crores) in proceeds, including over $100 million (roughly Rs. 749 crores) in investments from funds and accounts advised by T. Rowe Price Associates, Bessemer Venture Partners, Franklin Templeton, venture capital firm In-Q-Tel backed by the Central Intelligence Agency and some strategic partners including Palantir Technologies.

Rigetti's last funding round was in February last year, when it raised $79 million (roughly Rs. 592 crores) in a round led by Silicon Valley venture capital firm Bessemer. Andreessen Horowitz, Lux Capital, Sutter Hill Ventures and DCVC are also early investors in the Berkeley, California-based quantum computing firm.

Researchers believe quantum computers could operate millions of times faster than today's advanced supercomputers, potentially making possible tasks such as mapping complex molecular structures and chemical reactions to boosting the power of artificial intelligence.

While there is some debate about when quantum computers will be able to crack real-world problems, many companies are dedicating resources to ensure they are ready and investors have been flocking to quantum computing hardware and software startups. Big tech companies like Alphabet, International Business Machines, Honeywell, Microsoft and Amazon have also been investing in the future computing technology.

© Thomson Reuters 2021


Building a large-scale quantum computer is a huge challenge. Will we ever get there? – ZDNet

Almost exactly two years ago, Google achieved so-called quantum supremacy -- a "hello world" moment, in the words of the company's CEO Sundar Pichai, that made waves in the field and brought quantum computing, until then a relatively obscure branch of engineering, a little bit further into the mainstream.


For the first time, it was shown that a quantum computer could solve a computational task that was impossible to run on a classical device in any realistic amount of time: Google's Sycamore quantum processor took just 200 seconds to calculate the answer to a problem that would have taken the world's biggest supercomputers 10,000 years to complete.

Since then, the quantum computing ecosystem has flourished. Tech giants and small start-ups alike jumped on the bandwagon, driven by the promise that the technology will one day unlock unprecedented compute power and resolve problems ranging from drug discovery to financial modelling, in turn generating huge improvements in efficiency and in business outcomes.

But although it was a historical milestone, demonstrating quantum supremacy was no guarantee that quantum computers could eventually usher in this "new era of computing" -- nor even that building a large-scale, useful quantum system might be feasible at all.

Take it from the scientist who led Google's team to quantum supremacy himself. "It's not like with quantum supremacy we fell over the finish line," John Martinis, who published the 2019 Nature paper presenting quantum supremacy together with Google's Quantum AI team, tells ZDNet. "We can still keep going and going."

Martinis has now left Google and its Silicon Valley campus, and moved to Australia to consult as a system engineer to a start-up called Silicon Quantum Computing. Advancing the state of the art of quantum computing, therefore, is never far from his mind. "Since I've left, I've been thinking about all the things we still have to fix," he says.

It's not about downplaying the importance of his experiment with Google. When Martinis joined Google's Quantum AI division in 2014, the idea of achieving quantum supremacy seemed a significant challenge -- which, the scientist remembers, a lot of people didn't think was possible to achieve at all.

Martinis had been doing quantum computing research since the 1980s, and from his perspective, the project was high-risk-high-reward -- a big stretch for the team, but still doable. A few years later, he was proven right: Google's 53-qubit quantum processor carried out a computation that was impossible to run on the most powerful supercomputers, and quantum supremacy was proclaimed for the first time.

But there are big caveats. Just because the Sycamore processor achieved quantum supremacy for one computation, doesn't mean that Google's quantum computer could compete against conventional devices for all problems. In fact, it's quite the opposite: Google's team designed a task specifically for the quantum system to solve, with no utility other than to demonstrate quantum supremacy.

As Pichai describes in the blog post announcing the milestone, the experiment was similar to building the first rocket that could leave the Earth's gravity to touch the edge of space: demonstrating the possibility of travelling in space, but without going anywhere useful just yet.

And reaching the edge of space is far from a guarantee that it will soon be possible to travel to the moon -- just as quantum supremacy doesn't mean that a large-scale quantum computer will ever be available to solve the scientific and business problems that conventional computers are incapable of taking on.

"It's not clear in my mind that we will be able to build a quantum computer," says Martinis. "We're racing against Nature, and the real question is: Will Nature allow us to build the quantum computer?"

"It's really hard and there are lots of technical problems to solve. I'm still optimistic that we can solve these problems -- but then of course you have to go out and do it, which is hard."

For a quantum computer to start working on problems of real-world relevance, scientists anticipate that one million or more qubits will be necessary. The problem is that qubits are fragile: they are extremely prone to decoherence, meaning they easily fall out of their quantum state when they interact with the surrounding environment. This introduces random errors into the system -- and the more qubits there are, the more opportunities there are for error.
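
A back-of-the-envelope calculation shows why scale is so punishing. If each qubit operation succeeds with probability 1 - p, a circuit with n qubits and d layers finishes error-free with probability roughly (1 - p)^(n*d). The figures below are illustrative assumptions, not measured hardware numbers:

```python
# Back-of-the-envelope only: assumed figures, not hardware data.
p = 1e-3  # assumed per-operation error rate, around today's best devices

for n, d in [(53, 20), (1_000, 100), (1_000_000, 100)]:
    ok = (1 - p) ** (n * d)  # probability the whole circuit runs cleanly
    print(f"{n:>9,} qubits, depth {d:>3}: P(no error) ~ {ok:.3g}")
```

At 53 qubits the odds of a clean run are already only about one in three; at a million qubits they are effectively zero, which is why large machines will need error correction rather than just better qubits alone.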

With existing technologies, that's an impossible engineering challenge. Keeping qubits under control currently requires rooms' worth of equipment: the Sycamore processor's 53 qubits, for example, must be cooled to temperatures colder than outer space by huge refrigerators and cryogenic control tools. Now imagine scaling that to one million qubits.

Of course, Martinis and his team never set out to demonstrate the feasibility of a one-million-qubit system with quantum supremacy. The experiment was rather about proving the potential of quantum computing -- about showing Google executives that the technology could be taken seriously, and that it was worth pouring money into quantum research.

It worked: not only were leaders at Google convinced, but so were decision-makers around the world. Like the search giant, IBM had already been investing in quantum computing for years; the pair were promptly joined by Microsoft and Amazon, and a host of smaller companies also started to emerge in the field. There were only a handful of quantum computing start-ups in 2013; by 2020, the number had jumped to 200.

Governments in the EU, the UK, the USA and China are launching large-scale quantum programs, often tied to billion-dollar budgets. While working on the quantum supremacy experiment felt like a group effort, says Martinis, the spirit is now increasingly shifting to resemble that of a race.

That's a good thing, given the scale of the challenge. "People are being funded by big companies and start-ups, there is a whole vibrant ecosystem, everyone feels they have to make progress and fix their system," says Martinis. "With all these different pools, it's a healthy situation and it's more likely that someone can build something that works."

Some experts are more sceptical of the quantum ecosystem, blaming a tendency to over-hype a technology that is yet to prove itself. Mikhail Dyakonov, a professor of physics at the University of Montpellier in France and one of the more vocal critics in the field, has even argued, in an essay titled "The case against quantum computing," that large-scale error-corrected systems will not appear in the foreseeable future due to the "gargantuan" technical challenges that would have to be overcome.

But Martinis remains optimistic. With an ever-increasing number of companies investing in quantum computing, he says, it is becoming possible to test different approaches to the technology and understand how to make the systems bigger.

"I've said this for years; we have to make the qubits better," says Martinis. "I have some really exciting ideas on how to do that. I see a lot of potential still."

The scientist says that he is keeping a list of the issues that still need fixing, and that he is working to tick them off one at a time. Quantum supremacy was only the beginning for Martinis; even more exciting milestones are yet to come.

Excerpt from:
Building a large-scale quantum computer is a huge challenge. Will we ever get there? - ZDNet

Read More..

Two UCSB Scientists Receive Award to Partner With Cisco's New Quantum Research Team – Noozhawk

A new collaboration between UC Santa Barbara researchers and Cisco Systems aims to push the boundaries of quantum technologies.

Assistant professors Yufei Ding and Galan Moody have received research awards from the technology giant to work with its new Quantum Research Team, which was formed to pursue the research and development required to turn quantum hardware, software and applications into broadly used technologies.

"We are pleased to support the research by Professor Moody and Professor Ding in quantum information processing," said Alireza Shabani, head of Cisco's Quantum Research and the Emerging Technologies & Incubation Team. "Collaborations with universities are part of Cisco's plan for quantum technology development, and we are excited for the opportunity to work with UCSB labs."

Quantum computers have already been shown to solve some problems more efficiently than classical computers. The key to the incredible speed of a quantum computer lies in its ability to manipulate entangled quantum bits, or qubits.
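
Entanglement is the resource those speed-ups hinge on. As a minimal illustration -- a toy sketch using the open-source Cirq simulator, not code from either research group -- two qubits can be placed in a Bell state whose measurement outcomes are perfectly correlated:

```python
import cirq

# Two entangled qubits measure as 00 or 11 -- never 01 or 10.
a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(a),                    # put qubit a into superposition
    cirq.CNOT(a, b),              # entangle b with a
    cirq.measure(a, b, key='m'),
)
print(cirq.Simulator().run(circuit, repetitions=100).histogram(key='m'))
# e.g. Counter({0: 52, 3: 48}) -- only keys 0 (binary 00) and 3 (binary 11)
```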

To date, most efforts to build quantum computers have relied on qubits created in superconducting wires cooled to near absolute zero, or on trapped ions held in place by microelectronic circuits. But those approaches face certain challenges, most notably that the qubits are extremely sensitive to environmental factors. As the number of qubits increases, so too does the error rate when executing an algorithm.

Cisco has agreed to provide $150,000 in support of an alternative approach pursued by Moody, which uses photons as optical qubits to encode quantum information and integrates the components necessary for that process into a photonic integrated circuit (PIC) with built-in error correction.

"We're thrilled to be able to work with the Cisco Quantum Research Team," said Moody, an assistant professor of electrical and computer engineering. "The grant helps support the design, fabrication and testing of prototype devices, but more importantly, we will be collaborating closely with their team to tackle the key challenges for scalable quantum computing with integrated photonics."

Traditionally, silicon photonics is used to guide light around a photonic chip, but a collaboration with Distinguished Professor John Bowers, a photonics pioneer and director of UCSB's Institute for Energy Efficiency, demonstrated that aluminum gallium arsenide (AlGaAs) is orders of magnitude more efficient at generating the quantum states of light needed for photonic quantum computing.

Moody's research group has already designed the first version of the computing architecture it would like to test.

"With Cisco, we'll develop a prototype quantum computing chip to showcase the advantages of AlGaAs," said Moody. "Then, we'll evaluate the performance of our prototypes, refine the designs and explore new architectures to improve the performance and scalability going forward."

The project complements ongoing research efforts by Moody supported by funding from a prestigious Early CAREER Award from the National Science Foundation and a Young Investigator Award from the Air Force Office of Scientific Research (AFOSR).

Moody also received a Defense University Research Instrumentation Program (DURIP) Award from the U.S. Department of Defense and AFOSR to build a quantum photonic computing testbed in his lab in Henley Hall, the new state-of-the-art home of the IEE.

He said this new collaboration with Cisco provides his group with an opportunity to transition from more fundamental research to engineering and developing quantum technologies that may eventually lead to commercialization.

"While we're still quite far from having practical and generally useful quantum computers," he said, "we aim to address some of the fundamental and technical challenges needed to advance photonic quantum computing technologies to the point where we can make real and impactful benefits to society."

Ding, an assistant professor of computer science, has received $100,000 from Cisco to support several novel quantum computing research activities from a programming-systems perspective. She has proposed an in-depth and systematic study of optimization problems in quantum circuit distribution, a project that could help researchers build a network of connected quantum devices.

"I am excited for this opportunity to deepen and widen my programming and compiler research on quantum computing through the Cisco research grant," said Ding. "I look forward to working with Cisco's quantum team, Professor Galan Moody and other awardees to build advanced quantum systems."

Ding has proposed tackling the optimization problems by focusing on compilation, the process by which a computer converts a high-level programming language into a lower-level language it can understand and use to create an executable file or result. The software that performs this conversion, called a compiler, is a tool that can be used to bridge the gap between algorithms and hardware.

In the case of a quantum computer, a compiler would understand any hardware constraints and automatically map a quantum program to the physical devices. Ding is seeking to develop novel programming and compilation support that would make efficient quantum circuit mapping possible.
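
To make the mapping problem concrete, here is an illustrative sketch using the open-source Qiskit transpiler -- not Ding's system. When a program asks for a two-qubit gate between qubits the hardware does not physically connect, the compiler must route the gate, for example by inserting SWAP operations:

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# Logical circuit: a two-qubit gate between qubits 0 and 2, which are
# not adjacent on the target device.
logical = QuantumCircuit(3)
logical.h(0)
logical.cx(0, 2)

# Hardware constraint: qubits connected in a line, 0-1-2.
device = CouplingMap([(0, 1), (1, 2)])

# The compiler rewrites the circuit to respect the hardware layout.
mapped = transpile(logical, coupling_map=device, optimization_level=1)
print(mapped)  # now uses only interactions the device can physically apply
```

Distributed quantum compilation, as Ding proposes, extends this idea from routing within one chip to routing across a network of separate quantum devices.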

Her research group plans to investigate a large set of quantum algorithms, understand their common communication and execution patterns, and crystallize their findings into a set of optimization principles that can be applied more broadly.

"A key step in building a large-scale quantum computer is to develop quantum systems that can network multiple individual quantum devices and allow quantum information exchanges," said Ding.

"Through a thorough examination of the compilation optimization space, our project aims to automatically turn a standard quantum algorithm into a distributable version that captures the resources required to operate a networked quantum computer," she said.

As it does for Moody's research, the Cisco project will advance Ding's ongoing research efforts, including her own work funded through an NSF Early CAREER Award.

Ding's CAREER Award project is intended to achieve two main objectives: to create a high-level programming language that optimizes algorithms, and to improve device-level performance by controlling the analog pulses that stimulate the qubits and manipulate their state.

"Our CAREER project aims to take the optimizations from the gate level (a quantum gate is a basic logical operation that manipulates the state of the qubits) and extend them to the higher algorithmic level and the lower pulse level," said Ding, who has also received an Early Career Award from the IEEE Computer Society's Technical Consortium on High Performance Computing.

"In the work supported by Cisco, we will seek to extend the compilation from a single-node quantum processor to a multi-node distributed quantum system," she said.

According to Ding, an advanced programming system that supports large quantum programs could enable major quantum applications such as quantum chemistry, combinatorial optimization and machine learning. The system could also be expanded and applied to the fields of materials science and finance.

Read more from the original source:
Two UCSB Scientists Receive Award to Partner With Cisco's New Quantum Research Team - Noozhawk

Read More..

Researchers show new strategy for detecting non-conformist particles called anyons – Brown University

PROVIDENCE, R.I. [Brown University] A team of Brown University researchers has shown a new method of probing the properties of anyons, strange quasiparticles that could be useful in future quantum computers.

In research published in the journal Physical Review Letters, the team describes a means of probing anyons by measuring subtle properties of the way in which they conduct heat. Whereas other methods probe these particles using electrical charge, this new method enables researchers to probe anyons even in non-conducting materials. That's critical, the researchers say, because non-conducting systems have far less stringent temperature requirements, making them a more practical option for quantum computing.

"We have beautiful ways of probing anyons using charge, but the question has been how do you detect them in the insulating systems that would be useful in what's known as topological quantum computing," said Dima Feldman, a physics professor at Brown and study co-author. "We show that it can be done using heat conductance. Essentially, this is a universal test for anyons that works in any state of matter."

Anyons are of interest because they don't follow the same rules as particles in the everyday, three-dimensional world. In three dimensions, there are only two broad kinds of particles: bosons and fermions. Bosons follow what's known as Bose-Einstein statistics, while fermions follow Fermi-Dirac statistics. Generally speaking, those different sets of statistical rules mean that if one boson orbits around another in a quantum system, the particle's wave function (the equation that fully describes its quantum state) does not change. If a fermion orbits around another fermion, on the other hand, its wave function flips sign, picking up a factor of minus one. If it orbits again, the wave function returns to its original state.

Anyons, which emerge only in systems confined to two dimensions, don't follow either rule. When one anyon orbits another, its wave function picks up a fractional phase, somewhere between the boson's "no change" and the fermion's sign flip. And another orbit does not necessarily restore the original value of the wave function. Instead, it takes on a new value, almost as if the particle maintains a memory of its interactions with the other particle even though it ended up back where it started.
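
In standard textbook notation (general background, not taken from the Brown paper), exchanging two identical particles multiplies the joint wave function by a phase factor:

```latex
\psi(x_2, x_1) \;=\; e^{i\theta}\,\psi(x_1, x_2),
\qquad
\theta =
\begin{cases}
0 & \text{bosons (no change)}\\
\pi & \text{fermions } (e^{i\pi} = -1\text{, a sign flip})\\
\text{any value} & \text{anyons (two dimensions only)}
\end{cases}
```

Two successive exchanges multiply the wave function by $e^{2i\theta}$, which equals 1 for bosons and fermions but generically not for anyons -- the "memory" described above.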

That memory of past interactions can be used to encode information in a robust way, which is why the particles are interesting tools for quantum computing. Quantum computers promise to perform certain types of calculations that are virtually impossible for today's computers. A quantum computer using anyons, known as a topological quantum computer, has the potential to operate without elaborate error correction, which is a major stumbling block in the quest for usable quantum computers.

But using anyons for computing requires first being able to identify these particles by probing their quantum statistics. Last year, researchers did that for the first time using a technique known as charge interferometry. Essentially, anyons are spun around each other, causing their wave functions to interfere with each other. The pattern of interference reveals the particles' quantum statistics. That technique of probing anyons using charge works beautifully in systems that conduct electricity, the researchers say, but it can't be used to probe anyons in non-conducting systems. And non-conducting systems have the potential to be useful at higher temperatures than conducting systems, which need to be kept near absolute zero. That makes them a more practical option for topological quantum computing.

For this new research, Feldman, who in 2017 was part of a team that measured the heat conductance of anyons for the first time, collaborated with Brown graduate student Zezhu Wei and Vesna Mitrovic, a Brown physics professor and experimentalist. Wei, Feldman and Mitrovic showed that comparing properties of heat conductance in two-dimensional solids etched in very specific geometries could reveal the statistics of the anyons in those systems.
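
For background (a standard result in the field, not a claim specific to this paper), heat flow along the edges of such two-dimensional systems is quantized: each ballistic edge channel contributes a universal unit of thermal conductance,

```latex
K \;=\; c\,\kappa_0\,T,
\qquad
\kappa_0 = \frac{\pi^2 k_B^2}{3h},
```

where $T$ is temperature and the prefactor $c$ counts the edge modes. A fractional value of $c$ signals anyonic edge structure, and it is this kind of subtle heat-conduction property that the Brown proposal compares across the two etched geometries.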

"Any difference in the heat conductance in the two geometries would be smoking-gun evidence of fractional statistics," Mitrovic said. "What this study does is show exactly how people should set up experiments in their labs to test for these strange statistics."

Ultimately, the researchers hope the study is a step toward understanding whether the strange behavior of anyons can indeed be harnessed for topological quantum computing.

The research was supported by the National Science Foundation (DMR-1902356, QLCI-1936854, DMR-1905532).

Follow this link:
Researchers show new strategy for detecting non-conformist particles called anyons - Brown University

Read More..

IonQ to Report Third Quarter 2021 Financial Results on November 15, 2021 – Yahoo Finance

COLLEGE PARK, Md., October 28, 2021--(BUSINESS WIRE)--IonQ, Inc. ("IonQ" or the "Company") (NYSE: IONQ), a leader in quantum computing, today announced that the Company will release its third quarter 2021 financial results on Monday, November 15th, 2021 after the financial markets close.

The Company will host a conference call that same day to discuss its results and business outlook at 4:30 p.m. Eastern time. The call will be accessible by telephone at 877-300-8521 (domestic) or 412-317-6026 (international) using passcode 10161621. The call will also be available live via webcast on the Company's website here, or directly here.

A telephone replay of the conference call will be available at 844-512-2921 or 412-317-6671 with access code 10161621 and will be available until 11:59 PM Eastern time, November 29th, 2021. An archive of the webcast will also be available here shortly after the call and will remain available for 90 days.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's next-generation quantum computer is the world's most powerful trapped-ion quantum computer, and IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211028005118/en/

Contacts

Media: ionq@missionnorth.com

Investor: Michael Bowen and Ryan Gardella, IonQIR@icrinc.com

Read more:
IonQ to Report Third Quarter 2021 Financial Results on November 15, 2021 - Yahoo Finance

Read More..