
Artificial intelligence kept expanding through a turbulent year, with some exceptions – ZDNet

The year 2020 may have been one of turmoil and uncertainty across the globe, but artificial intelligence remained on a steady course of growth and further exploration -- perhaps in part because of the Covid-19 crisis. Healthcare was a big area for AI investment, and concerns about diversity and ethics grew, though little action was taken. Most surprising of all, while AI job growth accelerated across the world, it flattened in the US.

These are among the key metrics of AI tracked in the latest release of the AI Index, an annual data update from Stanford University's Human-Centered Artificial Intelligence Institute. The index tracks AI growth across a range of metrics, from degree programs to industry adoption.

Here are some key measures extracted from the 222-page index:

AI investments rising: The report cites a McKinsey survey showing that, for most companies, the Covid-19 crisis had no effect on their investment in AI, while 27% actually reported increasing their investment. Less than a fourth of businesses decreased their investment in AI.

AI jobs grow worldwide, flatten in the US: Another key metric is the number of AI-related jobs opening up. Surprisingly, the US recorded a decrease in its share of AI job postings from 2019 to 2020, the first drop in six years. The total number of AI jobs posted in the US also decreased by 8.2% from 2019 to 2020, from 325,724 jobs in 2019 to 300,999 jobs in 2020. This may be attributable to the mature market in the US, the report's authors surmise. Globally, however, demand for AI skills is on the rise and has grown significantly in the last seven years. On average, the share of AI job postings among all job postings in 2020 was more than five times larger than in 2013. In 2020, industries focused on information (2.8%); professional, scientific, and technical services (2.5%); and agriculture, forestry, fishing, and hunting (2.1%) had the highest share of AI job postings among all job postings in the US.

AI investment in healthcare increased significantly: The product category of "drugs, cancer, molecular, drug discovery" received the greatest amount of private AI investment in 2020, with more than $13.8 billion, 4.5 times higher than 2019, the report states. "The landscape of the healthcare and biology industries has evolved substantially with the adoption of machine learning," the report's authors state. "DeepMind's AlphaFold applied deep learning technique to make a significant breakthrough in the decades-long biology challenge of protein folding. Scientists use ML models to learn representations of chemical molecules for more effective chemical synthesis planning. PostEra, an AI startup used ML-based techniques to accelerate COVID-related drug discovery during the pandemic."

Generative everything: "AI systems can now compose text, audio, and images to a sufficiently high standard that humans have a hard time telling the difference between synthetic and non-synthetic outputs for some constrained applications of the technology. That promises to generate a tremendous range of downstream applications of AI for both socially useful and less-useful purposes."

AI has a diversity and ethics challenge: In 2019, 45% of new U.S. resident AI PhD graduates were white -- by comparison, 2.4% were African American and 3.2% were Hispanic, the report states. Plus, "despite growing calls to address ethical concerns associated with using AI, efforts to address these concerns in the industry are limited. For example, issues such as equity and fairness in AI continue to receive comparatively little attention from companies. Moreover, fewer companies in 2020 view personal or individual privacy risks as relevant, compared with in 2019, and there was no change in the percentage of respondents whose companies are taking steps to mitigate these particular risks."

Computer vision has become industrialized: "Companies are investing increasingly large amounts of computational resources to train computer vision systems at a faster rate than ever before. Meanwhile, technologies for use in deployed systems -- like object-detection frameworks for analysis of still frames from videos -- are maturing rapidly, indicating further AI deployment."

AI conference attendance up, virtually: An important metric of AI adoption is conference attendance. "That's way up. If anything, Covid-19 may have led to a higher number of people participating in AI research conferences, as the pandemic forced conferences to shift to virtual formats, which in turn led to significant spikes in attendance," the survey's authors contend.

More and more information and research are available: The number of AI journal publications grew by 34.5% from 2019 to 2020 -- a much higher percentage growth than from 2018 to 2019 (19.6%), the report's authors state. "In just the last six years, the number of AI-related publications on arXiv grew by more than six-fold, from 5,478 in 2015 to 34,736 in 2020. AI publications represented 3.8% of all peer-reviewed scientific publications worldwide in 2019, up from 1.3% in 2011."

Read more:
Artificial intelligence kept expanding through a turbulent year, with some exceptions - ZDNet


The Book Corner: The Deep by Rivers Solomon, Daveed Diggs, William Hutson, and Jonathan Snipes – University Press

Rivers Solomon's The Deep is about the struggle between sacrifice and duty.

Illustration by Michelle Rodriguez.

The Deep by Rivers Solomon, Daveed Diggs, William Hutson, and Jonathan Snipes is captivating and thought-provoking. In the story, readers are introduced to an underwater world plagued by horrific historical events intertwined with mysticism.

The novel focuses on the story of the water-dwelling descendants of pregnant African women who were thrown overboard from slave ships: the wajinru.

The mermaid-like clan has no long-term memory, instead living in the moment without the burden of the past. The wajinru swim through the sea in ignorant bliss, without carrying the weight of memories of their ancestors' suffering.

The novel follows the story of Yetu, a member of the wajinru who has chosen to carry the burden of being the Historian. It is the job of the Historian to carry the community's memories, along with the echoes of emotional trauma from those events.

Solomon delves into Yetu's characterization through her exploration of identity and self, while also discussing the devastations of slavery through fiction.

The multilayered work is dynamic and heart-wrenching as culture, grief, and desire are woven through the novel. Yetu resists her responsibility as the Historian because of the physical and emotional toll the duty takes on her mind and body. She doesn't wish for her identity to be consumed by the responsibility of carrying the grief of her ancestors' cruel deaths.

Solomon's poetic prose and immersive storytelling are apparent throughout the novel as Yetu embarks on a journey of understanding community ties and remembering the past. The novel delves into concepts of identity and personhood; its world is casually LGBTQIA+ and includes representations of anxiety, a nonbinary side character, and more.

The Deep is a simple yet complicated novel as it questions the struggle between sacrifice and duty, between tradition and progress, and between vengeance and forgiveness.

Darlene Antoine is the Features Editor for the University Press. For information regarding this or other stories, email her at [emailprotected]

Read the original:
The Book Corner: The Deep by Rivers Solomon, Daveed Diggs, William Hutson, and Jonathan Snipes - University Press


12-foot-deep sinkhole 'accidentally' discovered near Williams Arts Center – The Lafayette

Last week, a grounds crew was doing standard maintenance outside the Williams Arts Center when they noticed the ground under their feet sounded hollow. Upon investigation, they discovered what lay underneath: a sinkhole, 12 feet deep, about 10 feet long and three feet wide, narrowing to a point on each end.

"[The grounds crew] immediately called me and we investigated, and it did indeed sound hollow," Scott Kennedy, Director of Facilities Operations, said. "[They] could have opened it up accidentally."

The sinkhole was assessed last Monday by geotechnical engineers from Pennoni Associates, Inc. "The sinkhole was opened, and they determined a course of action to remediate the problem before securing the hole with steel plates for the evening," Kennedy explained.

On Tuesday, all loose material that fell into the sinkhole was removed and the ground was compacted, and the following day 16 cubic yards of high-flow concrete were poured about halfway up the hole to avoid covering the pipes in the area.

"[Wednesday] they're installing stone so the hole will be filled," Kennedy said. "[Today] we hope to finish with concrete on the top and replace the sidewalk they had to cut out, and then they'll reinstall the bricks."

Geology Professor and Acting Department Head Dru Germanoski explained that a common misconception about sinkholes is that they form suddenly due to an influx of water dissolving rock. The Allentown Formation, which underlies campus, is composed mostly of dolomite with some limestone. Although these rocks do dissolve at a faster rate than other types, water moves through fractures in these rocks over millennia.

"As the water is moving through the bedding planes and the fractures, it dissolves them, and it becomes a positive feedback mechanism where you get more water flow that can dissolve the rock a bit more, and more water flow because now you've increased the size of the opening," Germanoski explained.

Sometimes pipelines underground can have cracks or breaks that exacerbate the problem, which is common in cities and towns in the Lehigh Valley and other carbonate terrains, Germanoski explained. However, the sinkhole on campus is much smaller than ones often caused or made worse by pipeline breaks.

The geotechnical engineers were not able to pinpoint the cause of this sinkhole because there is no active water break in that area.

"The assumption is erosion over time…there's a little bit of staining underneath one pipe, so they're going to go in and re-cement and secure that; that could have been a slow leak over years," Kennedy said. "We're going to camera all the storm lines around Williams, make sure there's no cracks and breaks in the storm piping, and they're also going to do some ground-penetrating radar to make sure there are no other voids in that area."

Kennedy added that they do not expect to find any other voids around Williams because when they poured concrete in to begin filling the sinkhole, it stayed put; if there had been another connected cavity, the concrete would have drained into it.

"The concrete basically creates a giant plug for the bottom of that sinkhole. So, if there is an opening somewhere, that should block it," Kennedy said.

Germanoski noted that there have been a handful of sinkholes on campus since he started working at Lafayette in 1987.

"We had one between Van Wickle and Colton Chapel; there are a couple that come to mind. Actually, a couple of different ones on the quad, and they have been able to handle them," Germanoski said. "They repaired them, and they've remained stable."

The rest is here:
12-foot-deep sinkhole 'accidentally' discovered near Williams Arts Center The Lafayette - The Lafayette


Are quantum computers good at picking stocks? This project tried to find out – ZDNet

The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor.

Consultancy firm KPMG, together with a team of researchers from the Technical University of Denmark (DTU) and a yet-to-be-named European bank, has been piloting the use of quantum computing to determine which stocks to buy and sell for maximum return, an age-old banking operation known as portfolio optimization.

The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor, comparing the results to those obtained with classical means. They found that the quantum annealer performed better and faster than other methods, while being capable of resolving larger problems, although the study also indicated that D-Wave's technology still comes with some issues to do with ease of programming and scalability.

The smart distribution of portfolio assets is a problem that stands at the very heart of banking. Theorized by economist Harry Markowitz as early as 1952, it consists of allocating a fixed budget to a collection of financial assets in a way that will produce as much return as possible over time. In other words, it is an optimization problem: an investor should look to maximize gain and minimize risk for a given financial portfolio.


As the number of assets in the portfolio multiplies, the difficulty of the calculation increases exponentially, and the problem can quickly become intractable, even for the world's largest supercomputers. Quantum computing, on the other hand, offers the possibility of running multiple calculations at once thanks to superposition, a special quantum state adopted by quantum bits, or qubits.

Quantum systems, for now, cannot support enough qubits to have a real-world impact. But in principle, large-scale quantum computers could one day solve complex portfolio optimization problems in a matter of minutes, which is why the world's largest banks are already putting their research teams to work on developing quantum algorithms.

To translate Markowitz's classical model for the portfolio selection problem into a quantum algorithm, the DTU's researchers formulated the equation into a quantum model called a quadratic unconstrained binary optimization (QUBO) problem, which they based on the usual criteria used for the operation such as budget and expected return.
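
To make the mapping concrete, here is a minimal NumPy sketch of how a Markowitz-style selection problem can be folded into a single QUBO matrix. It is an illustration only: the equal-weight binary formulation, the function names, and the penalty and risk-aversion weights are my own assumptions, not the exact model used by the DTU team.

```python
import numpy as np

def portfolio_qubo(mu, sigma, budget, risk_aversion=1.0, penalty=10.0):
    """Build a QUBO matrix Q so that cost(x) = x^T Q x (plus a dropped constant)
    for binary selections x in {0,1}^n, where x_i = 1 means "hold asset i".

    Cost = -mu.x + risk_aversion * x.Sigma.x + penalty * (sum(x) - budget)^2
    Because x_i^2 == x_i for binary variables, all linear terms can be folded
    onto the diagonal of Q.
    """
    mu = np.asarray(mu, dtype=float)          # expected returns
    sigma = np.asarray(sigma, dtype=float)    # covariance matrix (risk)
    n = len(mu)
    Q = risk_aversion * sigma.copy()
    Q += penalty * np.ones((n, n))            # (sum x)^2 part of the budget penalty
    np.fill_diagonal(Q, np.diag(Q) - mu - 2 * penalty * budget)  # linear terms
    return Q

def qubo_cost(Q, x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)
```

Minimizing x^T Q x over binary vectors is exactly the form that a quantum annealer, or classical simulated annealing, accepts.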

When deciding which quantum hardware to pick to test their model, the team was faced with a number of options: IBM and Google are both working on a superconducting quantum computer, while Honeywell and IonQ are building trapped-ion devices; Xanadu is looking at photonic quantum technologies, and Microsoft is creating a topological quantum system.

D-Wave's quantum annealing processor is yet another approach to quantum computing. Unlike other systems, which are gate-based quantum computers, it is not possible to control the qubits in a quantum annealer; instead, D-Wave's technology consists of manipulating the environment surrounding the system, and letting the device find a "ground state". In this case, the ground state corresponds to the most optimal portfolio selection.

This approach, while limiting the scope of the problems that can be resolved by a quantum annealer, also enables D-Wave to work with many more qubits than other devices. The company's latest device counts 5,000 qubits, while IBM's quantum computer, for example, supports fewer than 100 qubits.

The researchers explained that the maturity of D-Wave's technology prompted them to pick quantum annealing to trial the algorithm; and equipped with the processor, they were able to embed and run the problem for up to 65 assets.
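
For orientation, submitting a QUBO like the one sketched above through D-Wave's Ocean SDK looks roughly like the following. This is a generic sketch, not the researchers' code; it assumes the dwave-ocean-sdk package, a configured API token, and an arbitrary choice of num_reads.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

def solve_on_dwave(Q, num_reads=1000):
    """Sample low-energy selections of a QUBO matrix Q on a D-Wave annealer."""
    n = Q.shape[0]
    # Ocean expects the QUBO as a {(i, j): weight} dictionary.
    Q_dict = {(i, j): Q[i, j] for i in range(n) for j in range(n) if Q[i, j] != 0}
    sampler = EmbeddingComposite(DWaveSampler())  # handles minor-embedding onto the chip
    result = sampler.sample_qubo(Q_dict, num_reads=num_reads)
    return result.first.sample                    # lowest-energy asset selection found
```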

To benchmark the performance of the processor, they also ran the Markowitz equation with classical means, using brute force. With the computational resources at their disposal, brute force could only be used for up to 25 assets, after which the problem became intractable for the method.
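
To see why brute force tops out in the mid-20s of assets, consider a toy exhaustive search over the same kind of QUBO (again an illustration I am adding, not the study's code): it must score all 2^n candidate selections, which is roughly 33 million portfolios at n = 25 and doubles with every additional asset.

```python
import itertools
import numpy as np

def brute_force_best(Q):
    """Exhaustively score every binary selection x against cost(x) = x^T Q x."""
    n = Q.shape[0]
    best_x, best_cost = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):   # 2^n candidates
        x = np.array(bits, dtype=float)
        cost = float(x @ Q @ x)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost
```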

Comparing the two methods, the scientists found that the quality of the results provided by D-Wave's processor was equal to that delivered by brute force, proving that quantum annealing can reliably be used to solve the problem. In addition, as the number of assets grew, the quantum processor overtook brute force as the fastest method.

From 15 assets onwards, D-Wave's processor effectively started showing significant speed-up over brute force, as the problem got closer to becoming intractable for the classical computer.

To benchmark the performance of the quantum annealer for more than 25 assets, which is beyond the capability of brute force, the researchers compared the results obtained with D-Wave's processor to those obtained with a method called simulated annealing. There again, the study shows, the quantum processor provided high-quality results.

Although the experiment suggests that quantum annealing might show a computational advantage over classical devices, Ulrich Busk Hoff, a researcher at DTU who participated in the research, warns against hasty conclusions.

"For small-sized problems, the D-Wave quantum annealer is indeed competitive, as it offers a speed-up and solutions of high quality," he tells ZDNet. "That said, I believe that the study is premature for making any claims about an actual quantum advantage, and I would refrain from doing that. That would require a more rigorous comparison between D-Wave and classical methods and using the best possible classical computational resources, which was far beyond the scope of the project."

DTU's team also flagged some scalability issues, highlighting that as the portfolio size increased, there was a need to fine-tune the quantum model's parameters in order to prevent a drop in results quality. "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed," says Hoff. "But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing."


In addition, with the quantum industry still largely in its infancy, the researchers pointed to the technical difficulties that still come with using quantum technologies. Implementing quantum models, they explained, requires a new way of thinking; translating classical problems into quantum algorithms is not straightforward, and even D-Wave's fairly accessible software development kit cannot be described yet as "plug-and-play".

The Canadian company's quantum processor nevertheless shows a lot of promise for solving problems such as portfolio optimization. Although the researchers shared doubts that quantum annealing would have as much of an impact as large-scale gate-based quantum computers, they pledged to continue to explore the capabilities of the technology in other fields.

"I think it's fair to say that D-Wave is a competitive candidate for solving this type of problem and it is certainly worthwhile further investigation," says Hoff.

KPMG, DTU's researchers and large banks are far from alone in experimenting with D-Wave's technology for near-term applications of quantum computing. For example, researchers from pharmaceutical company GlaxoSmithKline (GSK) recently trialed the use of different quantum methods to sequence gene expression, and found that quantum annealing could already compete against classical computers to start addressing life-sized problems.

Read the original post:
Are quantum computers good at picking stocks? This project tried to find out - ZDNet


Quantum computing is finally having something of a moment – World Finance

Author: David Orrell, Author and Economist

March 16, 2021

In 2019, Google announced that they had achieved quantum supremacy by showing they could run a particular task much faster on their quantum device than on any classical computer. Research teams around the world are competing to find the first real-world applications and finance is at the very top of this list.

However, quantum computing may do more than change the way that quantitative analysts run their algorithms. It may also profoundly alter our perception of the financial system, and the economy in general. The reason for this is that classical and quantum computers handle probability in a different way.

The quantum coin

In classical probability, a statement can be either true or false, but not both at the same time. In mathematics-speak, the rule for determining the size of some quantity is called the norm. In classical probability, the norm, denoted the 1-norm, is just the magnitude. If the probability is 0.5, then that is the size.

The next-simplest norm, known as the 2-norm, works for a pair of numbers, and is the square root of the sum of squares. The 2-norm therefore corresponds to the distance between two points on a 2-dimensional plane, instead of a 1-dimensional line, hence the name. Since mathematicians love to extend a theory, a natural question to ask is what rules for probability would look like if they were based on this 2-norm.


For one thing, we could denote the state of something like a coin toss by a 2-D diagonal ray of length 1. The probability of heads is given by the square of the horizontal extent, while the probability of tails is given by the square of the vertical extent. By the Pythagorean theorem, the sum of these two numbers equals 1, as expected for a probability. If the coin is perfectly balanced, then the line should be at 45 degrees, so the chances of getting a heads or tails are identical. When we toss the coin and observe the outcome, the ambiguous state collapses to either heads or tails.

Because the norm of a quantum probability depends on the square, one could also imagine cases where the probabilities were negative. In classical probability, negative probabilities don't make sense: if a forecaster announced a negative 30 percent chance of rain tomorrow, we would think they were crazy. However, in a 2-norm, there is nothing to prevent negative probabilities occurring. It is only in the final step, when we take the magnitude into account, that negative probabilities are forced to become positive. If we're going to allow negative numbers, then for mathematical consistency we should also permit complex numbers, which involve the square root of negative one. Now it's possible we'll end up with a complex number for a probability; however, the 2-norm of a complex number is a positive number (or zero). To summarise, classical probability is the simplest kind of probability, which is based on the 1-norm and involves positive numbers. The next-simplest kind of probability uses the 2-norm, and includes complex numbers. This kind of probability is called quantum probability.
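
A short numerical sketch (my own toy example, not from the article) makes the 2-norm idea concrete: a qubit's state is a pair of complex amplitudes whose squared magnitudes always sum to 1, and amplitudes of opposite sign can cancel each other out, something ordinary 1-norm probabilities can never do.

```python
import numpy as np

# A quantum "coin" is a pair of complex amplitudes; probabilities are the
# squared magnitudes (the 2-norm), so they always sum to 1.
state = np.array([1, 1j]) / np.sqrt(2)
print(np.abs(state) ** 2, (np.abs(state) ** 2).sum())   # [0.5 0.5] 1.0

# Interference: apply the Hadamard "fair coin flip" twice to a definite |0> state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
once = H @ np.array([1, 0])       # a perfectly balanced coin
twice = H @ once                  # the negative amplitude cancels the route to |1>
print(np.abs(once) ** 2)          # [0.5 0.5]
print(np.abs(twice) ** 2)         # [1. 0.] -- "heads" with certainty
```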

Quantum logic

In a classical computer, a bit can take the value of 0 or 1. In a quantum computer, the state is represented by a qubit, which in mathematical terms describes a ray of length 1. Only when the qubit is measured does it give a 0 or 1. But prior to measurement, a quantum computer can work in the superposed state, which is what makes them so powerful.

So what does this have to do with finance? Well, it turns out that quantum algorithms behave in a very different way from their classical counterparts. For example, many of the algorithms used by quantitative analysts are based on the concept of a random walk. This assumes that the price of an asset such as a stock varies in a random way, taking a random step up or down at each time step. It turns out that the magnitude of the expected change increases with the square-root of time.

Quantum computing has its own version of the random walk, which is known as the quantum walk. One difference is the expected magnitude of change, which grows much faster (linearly with time). This feature matches the way that most people think about financial markets. After all, if we think a stock will go up by eight percent in a year then we will probably extend that into the future as well, so the next year it will grow by another eight percent. We don't think in square roots.
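
The contrast is easy to see in a small simulation. The following sketch (an illustration under my own conventions for the walk, not code from the article) compares the spread of a classical ±1 random walk, which grows like the square root of the number of steps, with a discrete-time Hadamard quantum walk, whose spread grows roughly linearly.

```python
import numpy as np

def classical_spread(steps, trials=20000, seed=0):
    """Standard deviation of a +/-1 random walk after `steps` steps (~ sqrt(steps))."""
    rng = np.random.default_rng(seed)
    final = rng.choice([-1, 1], size=(trials, steps)).sum(axis=1)
    return final.std()

def hadamard_walk_spread(steps):
    """Spread of a discrete-time quantum walk on a line with a Hadamard coin."""
    positions = np.arange(-steps, steps + 1)
    psi = np.zeros((len(positions), 2), dtype=complex)   # amplitude[position, coin]
    psi[steps, :] = [1 / np.sqrt(2), 1j / np.sqrt(2)]    # symmetric initial coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                                  # coin toss at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                     # coin |0> steps right
        shifted[:-1, 1] = psi[1:, 1]                     # coin |1> steps left
        psi = shifted
    prob = (np.abs(psi) ** 2).sum(axis=1)
    mean = (prob * positions).sum()
    return np.sqrt((prob * positions ** 2).sum() - mean ** 2)

for t in (25, 100, 400):
    # Classical spread grows like sqrt(t); the quantum walk's grows roughly linearly in t.
    print(t, round(classical_spread(t), 1), round(hadamard_walk_spread(t), 1))
```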

This is just one way in which quantum models seem a better fit to human thought processes than classical ones. The field of quantum cognition shows that many of what behavioural economists call paradoxes of human decision-making actually make perfect sense when we switch to quantum probability. Once quantum computers become established in finance, expect quantum algorithms to get more attention, not for their ability to improve processing times, but because they are a better match for human behaviour.

View post:
Quantum computing is finally having something of a moment - World Finance


How and when quantum computers will improve machine learning? – Medium

The different strategies toward quantum machine learning

They say you should start an article with a cool fancy image. Photo: Google's Sycamore quantum computing chip. Credit: Google.

There is a strong hope (and hype) that quantum computers will help machine learning in many ways. Research in Quantum Machine Learning (QML) is a very active domain, and many small and noisy quantum computers are now available. Different approaches exist, for both the long term and the short term, and we may wonder what their respective hopes and limitations are, both in theory and in practice.

It all started in 2009 with the publication of the HHL algorithm [1], proving an exponential acceleration for matrix multiplication and inversion, which triggered exciting applications in all linear algebra-based science, hence machine learning. Since then, many algorithms have been proposed to speed up tasks such as classification [2], dimensionality reduction [3], clustering [4], recommendation systems [5], neural networks [6], kernel methods [7], SVM [8], reinforcement learning [9], and more generally optimization [10].

These algorithms are what I call Long Term or Algorithmic QML. They are usually carefully detailed, with guarantees that are proven as mathematical theorems. We can (theoretically) know the amount of speedup compared to the classical algorithms they reproduce, which is often polynomial or even exponential with respect to the number of input data points in most cases. They come with precise bounds on the result's probability, randomness, and accuracy, as usual in computer science research.

While they constitute theoretical proof that a universal and fault-tolerant quantum computer would provide impressive benefits in ML, early warnings [11] showed that some underlying assumptions were very constraining.

These algorithms often require loading the data with a Quantum Random Access Memory, or QRAM [12], a bottleneck part without which exponential speedups are much more complex to obtain. Besides, they sometimes need long quantum circuits and many logical qubits (which, due to error correction, are themselves composed of many more physical qubits), that might not be arriving soon enough.

When exactly? When we reach the universal fault-tolerant quantum computer, predicted by Google for 2029, or by IonQ in only five years. More conservative opinions claim this will not happen for 20+ years, and some even say we will never reach that point. The future will tell!

More recently, a mini earthquake amplified by scientific media has cast doubt on the efficiency of Algorithmic QML: the so-called dequantization papers [13] that introduced classical algorithms inspired by the quantum ones to obtain similar exponential speedups, in the field of QML at least. This impressive result was then tempered by the fact that the equivalent speedup only concerns the number of data points, and comes at the cost of a terrible polynomial slowdown with respect to other parameters, for now. This makes these quantum-inspired classical algorithms currently unusable in practice [14].

In the meantime, something very exciting happened: actual quantum computers were built and became accessible. You can play with noisy devices made of 5 to 20 qubits, and soon more. Quite recently, Google performed a quantum circuit with 53 qubits [15], the first that could not be efficiently simulated by a classical computer.

Researchers have then been looking at new models that these noisy intermediate scale quantum computers (NISQ) could actually perform [16]. They are all based on the same idea of variational quantum circuits (VQC), inspired by classical machine learning.

The main difference with algorithmic QML is that the circuit is not implementing a known classical ML algorithm. One would simply hope that the chosen circuit will converge to successfully classify data or predict values. For now, there are several types of circuits in the literature [17] and we start to see interesting patterns in the success. The problem itself is often encoded in the loss function we try to decrease: we sum the error made compared to the true values or labels, or compared to the quantum states we aim for, or to the energy levels, and so on, depending on the task. Active research tries to understand why some circuits work better than others on certain tasks, and why quantumness would help.
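
To make the idea concrete, here is a tiny statevector sketch of a two-qubit variational circuit and its loss function; the circuit layout, gate choices, and parameter names are my own illustrative assumptions rather than a circuit from the papers cited above. The input is encoded as rotation angles, a trainable rotation layer plus an entangling gate follows, and the squared error between a measured expectation value and the label is the quantity to minimize.

```python
import numpy as np

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
Z = np.diag([1, -1])

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def vqc_output(params, x):
    """Forward pass of a 2-qubit variational circuit: encode x, apply a
    trainable layer, and return <Z> on the first qubit (a value in [-1, 1])."""
    state = np.zeros(4)
    state[0] = 1.0                                           # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state              # data-encoding layer
    state = np.kron(ry(params[0]), ry(params[1])) @ state    # trainable rotations
    state = CNOT @ state                                     # entangling gate
    return float(np.real(state.conj() @ np.kron(Z, I2) @ state))  # measurement

def loss(params, X, y):
    """Sum of squared errors between circuit outputs and target labels in {-1, +1}."""
    return sum((vqc_output(params, x) - yi) ** 2 for x, yi in zip(X, y))
```

In practice the parameters would then be tuned iteratively to lower this loss, which is where the gradient question discussed further below comes in.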

Another core difference is that many providers [18, 19, 20] allow you to program these VQC so you can play and test them on actual quantum computers!

In recent years, researchers have tried to find use cases where Variational QML would succeed at classical problems, or even outperforms the classical solutions [21, 22]. Some hope that the variational nature of the training confers some resilience to hardware noise. If this happens to be the case, it would be beneficial not to wait for Error Correction models that require many qubits. One would only need Error Mitigation techniques to post-process the measurements.

On the theoretical side, researchers hope that quantum superposition and entangling quantum gates would project data in a much bigger space (the Hilbert Space of n qubits has dimension 2^n) where some classically inaccessible correlations or separations can be done. Said differently, some believe that the quantum model will be more expressive.

It is important to notice that research on Variational QML is less focused on proving computational speedups. The main interest is to reach a more expressive or complex state of information processing. The two approaches are related but they represent two different strategies. Unfortunately, less is proven compared to Algorithmic QML, and we are far from understanding the theoretical reasons that would prove the advantage of these quantum computations.

Of course, due to the limitations of the current quantum devices, experiments are often made on a small number of qubits or on simulators, which are often ideal or limited to 30 or so qubits. It is hard to predict what will happen when the number of qubits grows.

Despite the excitement, VQC also suffers from theoretical setbacks. It is proven that when the number of qubits or the number of gates becomes too big, the optimization landscape becomes flat, which hinders the ability to optimize the circuit. Many efforts are made to circumvent this issue, called Barren Plateaus [23], by using specific circuits [24] or smart initialization of the parameters [25].

But Barren Plateaus are not the only caveat. In many optimization methods, one must compute the gradient of a cost function with respect to each parameter. Said differently, we want to know how much the model improves when each parameter is modified. In classical neural networks, computing the gradients is usually done using backpropagation because we analytically understand the operations. With VQC, operations become too complex, and we cannot access intermediate quantum states (without measuring and therefore destroying them).

The current state-of-the-art solution is called the parameter shift rule [27, 28] and requires applying the circuit and measuring its result two times for each parameter. By comparison, in classical deep learning, the network is applied just once forward and once backward to obtain all of its thousands or millions of gradients. We can hope to parallelize the parameter shift rule on many simulators or quantum devices, but this could be limiting for a large number of parameters.
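
For gates generated by Pauli operators, the rule itself is simple to state: the exact gradient is half the difference between the circuit evaluated with the parameter shifted by +π/2 and by −π/2. A minimal check (illustrative only; it uses the analytic expectation ⟨Z⟩ = cos θ of an RY(θ) rotation applied to |0⟩ instead of real hardware runs):

```python
import numpy as np

def expectation(theta):
    # <Z> after applying RY(theta) to |0> is exactly cos(theta); on hardware this
    # value would be estimated from many repeated circuit runs.
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Two extra circuit evaluations per parameter -- the cost discussed above.
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.7
print(parameter_shift_grad(expectation, theta))  # ~ -0.6442
print(-np.sin(theta))                            # analytic derivative, same value
```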

Finally, researchers tend to focus more and more on the importance of loading data into a quantum state [29], also called a feature map [30]. Without the ideal amplitude encoding obtained with the QRAM, there are doubts that we will be able to load and process high-dimensional classical data without an exponential or high polynomial overhead. Some hope remains for data-independent tasks such as generative models [21, 31] or solving partial differential equations.
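
For context on what amplitude encoding buys (a toy sketch, not tied to any particular QRAM proposal): a d-dimensional classical vector can in principle be stored in the amplitudes of only ⌈log2 d⌉ qubits, but preparing that state efficiently on hardware is precisely the data-loading bottleneck being discussed.

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector to the amplitudes of a quantum state.

    A d-dimensional vector fits into ceil(log2(d)) qubits, which is where the
    exponential compression comes from -- but efficiently *preparing* this
    state on real hardware is the open QRAM / data-loading problem.
    """
    x = np.asarray(x, dtype=float)
    dim = 1 << int(np.ceil(np.log2(len(x))))   # pad to the next power of two
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)      # amplitudes must have unit 2-norm

state = amplitude_encode([3.0, 1.0, 2.0])       # 3 features -> 2 qubits (4 amplitudes)
print(state, (state ** 2).sum())                # squared amplitudes sum to 1
```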

Note that the expression Quantum Neural Networks has been used to show the similarities with classical Neural Network (NN) training. However, they are not equivalent, since VQCs don't have the same hidden-layer architecture, nor natural non-linearities, unless a measurement is performed. And there's no simple rule to convert any NN to a VQC or vice versa. Some now prefer to compare VQCs to kernel methods [30].

We now have a better understanding of the advantages and weaknesses of the two main strategies towards quantum machine learning. Current research is now focused on two aspects:

Finally, and most importantly, improve the quantum devices! We all hope for constant incremental improvements or a paradigm shift in the quality of the qubits, their number, the error correction process, to reach powerful enough machines. Please physicists, can you hurry?

PS: let's not forget to use all this amazing science to do good things that will benefit everyone.

Jonas Landman is a Ph.D. student at the University of Paris under the supervision of Prof. Iordanis Kerenidis. He is Technical Advisor at QC Ware and member of QuantX. He has previously studied at Ecole Polytechnique and UC Berkeley.

See more here:
How and when quantum computers will improve machine learning? - Medium


After the Govt’s Big Allocation on Quantum Technologies in 2020, What Next? – The Wire Science

Photograph of a quantum computing chip that a Google team used in their claimed quantum computer. Photo: Nature 574, 505-510 (2019).

The Union finance ministry presented the national budget for 2021 one and a half months ago. One of the prime motivations of a nationalist government should be cyber-security, and it is high time we revisited this technological space in the context of this budget and the last one.

One of the highlights of the 2020 budget was the government's new investment in quantum computing. Finance minister Nirmala Sitharaman's words then turned the heads of researchers and developers working in this area: "It is proposed to provide an outlay of 8,000 crore rupees over a period of five years for the National Mission on Quantum Technologies and Applications."

Thanks to the pandemic, it is not clear how much funding the government transferred in the first year. The 2021 budget speech made no reference to quantum technologies.

It's important we discuss this topic from a technological perspective. Around four decades ago, physicist Richard Feynman pointed out the possibility of devices like quantum computers in a famous speech. In the early 1990s, Peter Shor and others proved that such computers could easily factor the product of two large prime numbers, a task deemed very difficult for the classical computers we are familiar with. This problem, of prime factorisation, underlies the utility of public-key crypto-systems, used to secure digital transactions, sensitive information, etc. online.
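
As a toy illustration of why factoring matters (standard textbook numbers, far too small to be secure, and not drawn from the article): an RSA-style key pair is built from two primes, and anyone who can factor the public modulus can reconstruct the private key.

```python
# Toy RSA with textbook-sized primes -- purely illustrative, never secure.
p, q = 61, 53
n = p * q                     # public modulus (3233); keeping p and q secret is the whole game
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (2753); needs p and q (Python 3.8+ modular inverse)

message = 65
cipher = pow(message, e, n)   # anyone can encrypt with the public pair (n, e)
print(pow(cipher, d, n))      # 65 -- only the holder of d can decrypt

# A machine that factors n back into 61 * 53 can recompute phi and d, which is
# why a practicable quantum computer running Shor's algorithm would break
# RSA-style systems.
```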

If we have a practicable quantum computer, the digital security systems currently in use around the world will break down quickly, including that of financial institutions. But commercial quantum computers are still many years away.

On this count, the economically developed nations are on average far ahead of others. Countries like the US, Canada, Australia and China have already made many advancements towards building usable quantum computers with meaningful capabilities. Against this background, the present governments decision in February 2020 to invest such a large sum in quantum technologies was an outstanding development.

The problem now lies with distributing the money and achieving the actual technological advances. So far, there is no clear evidence of this in the public domain.

A logical step in this direction would be to re-invest a large share of the allocation in indigenous development. This is also where the problems lie. One must understand that India has never been successful in fabricating advanced electronic equipment. While we have very good software engineers and theoretical computer scientists, there is no proven expertise in producing chips and circuits. We might have some limited exposure in assembling and testing but nothing beyond that.

So while Atmanirbhar Bharat is an interesting idea, it will surely take a very long time before we find ourselves able to compete with developed nations vis-à-vis seizing on this extremely sophisticated technology involving quantum physics. In the meantime, just as we import classical computers and networking equipment, so should we proceed by importing quantum equipment, until our indigenous capability in this field matures to a certain extent.

For example, demonstrating a four-qubit quantum system or designing a proof-of-concept quantum key distribution (QKD) circuit might be a nice textbook assignment. However, the outcome will be nowhere near competitive with products already available in the international arena. IBM and Google have demonstrated the use of machines with more than 50 qubits. (These groups have participation from Indian scientists working abroad.) IBM has promised a thousand-qubit machine by 2023. ID Quantique has been producing commercial QKD equipment for more than five years.

India must procure such finished products and start testing them for security trapdoors before deploying them at home. Doing so requires us to train our engineers with state-of-the-art equipment as soon as possible.

In sum, indigenous development shouldn't be discontinued, but allocating a large sum of money for indigenous development alone may not bring the desired results at this point.

By drafting a plan in the 2020 Union budget to spend Rs 8,000 crore, the government showed that it was farsighted. While the COVID-19 pandemic has made it hard to assess how much of this money has already been allocated, we can hope that there will be renewed interest in the matter as the pandemic fades.

This said, such a huge allocation going to academic institutes and research laboratories for trivial demonstrations might be imprudent. In addition, we must begin by analysing commercially available products, made by international developers, so we can secure Indias security infrastructure against quantum adversaries.

Serious science requires deep political thought, people with strong academic commitment in the government and productive short- as well as long-term planning. I hope the people in power will enable the Indian community of researchers to make this quantum leap.

Subhamoy Maitra is a senior professor at the Indian Statistical Institute, Kolkata. His research interests are cryptology and quantum computing.

Continued here:
After the Govt's Big Allocation on Quantum Technologies in 2020, What Next? - The Wire Science


Cloud Servers – Data Center Map

Due to the many different definitions of cloud servers, or IaaS (Infrastructure as a Service), we have limited the requirements to services that are based on virtualization and automatically provisioned. To set more specific requirements for which clouds you would like to see on the map (such as high availability, scalability, utility-based billing, short-term commitments and support of specific technologies), please use the filtering function at the bottom of the page.

The intention with our database of cloud / IaaS server providers is to build up a database of providers offering infrastructure as a service, with as many relevant details as possible about the various offerings. This enables our users to filter the providers based on their exact needs, thereby quickly narrowing down the list of providers to those that match their needs.

The entries in our database are primarily added and maintained directly by the service providers themselves, which means that it is always updated and growing with new entries. All submissions are pending review before they are included, though, to ensure that the quality of the service is not compromised.

Apart from the cloud database for infrastructure as a service solutions (IaaS), our site also features multiple other services such as colocation, managed hosting, dedicated servers etc., many of which can actually be combined with cloud computing. For example a mix of virtualized cloud servers together with dedicated servers, or alternatively a managed hosting solution based on cloud servers.

Continue reading here:
Cloud Servers - Data Center Map


Cloud computing could prevent the emission of 1 billion metric tons of CO2 – Help Net Security

Continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide (CO2) from 2021 through 2024, a forecast from IDC shows.

The forecast uses data on server distribution and cloud and on-premises software use along with third-party information on datacenter power usage, CO2 emissions per kilowatt-hour, and emission comparisons of cloud and non-cloud datacenters.

A key factor in reducing the CO2 emissions associated with cloud computing comes from the greater efficiency of aggregated compute resources. The emissions reductions are driven by the aggregation of computation from discrete enterprise datacenters to larger-scale centers that can more efficiently manage power capacity, optimize cooling, leverage the most power-efficient servers, and increase server utilization rates.

At the same time, the magnitude of savings changes based on the degree to which a kilowatt of power generates CO2, and this varies widely from region to region and country to country. Given this, it is not surprising that the greatest opportunity to eliminate CO2 by migrating to cloud datacenters comes in the regions with higher values of CO2 emitted per kilowatt-hour.

The Asia/Pacific region, which utilizes coal for much of its power generation, is expected to account for more than half the CO2 emissions savings over the next four years. Meanwhile EMEA will deliver about 10% of the savings, largely due to its use of power sources with lower CO2 emissions per kilowatt-hour.

While shifting to cleaner sources of energy is very important to lowering emissions, reducing wasted energy use will also play a critical role. Cloud datacenters are doing this through optimizing the physical environment and reducing the amount of energy spent to cool the datacenter environment. The goal of an efficient datacenter is to have more energy spent on running the IT equipment than cooling the environment where the equipment resides.

Another capability of cloud computing that can be used to lower CO2 emissions is the ability to shift workloads to any location around the globe. Developed to deliver IT service wherever it is needed, this capability also enables workloads to be shifted to enable greater use of renewable resources, such as wind and solar power.

The forecast includes upper and lower bounds for the estimated reduction in emissions. If the percentage of green cloud datacenters today stays where it is, just the migration to cloud itself could save 629 million metric tons over the four-year time period. If all datacenters in use in 2024 were designed for sustainability, then 1.6 billion metric tons could be saved.

The projection of more than 1 billion metric tons is based on the assumption that 60% of datacenters will adopt the technology and processes underlying more sustainable smarter datacenters by 2024.

"The idea of green IT has been around now for years, but the direct impact hyperscale computing can have on CO2 emissions is getting increased notice from customers, regulators, and investors and it's starting to factor into buying decisions," said Cushing Anderson, program VP at IDC.

"For some, going carbon neutral will be achieved using carbon offsets, but designing datacenters from the ground up to be carbon neutral will be the real measure of contribution. And for advanced cloud providers, matching workloads with renewable energy availability will further accelerate their sustainability goals."

Original post:
Cloud computing could prevent the emission of 1 billion metric tons of CO2 - Help Net Security


Azure Arc Becomes The Foundation For Microsoft's Hybrid And Multi-Cloud Strategy – Forbes

Microsoft continues to expand Azure Arcs capabilities to transform it into a hybrid cloud and multi-cloud platform. At the recent Spring Ignite conference, Microsoft announced the general availability of Azure Arc enabled Kubernetes, and the preview of Arc enabled machine learning.


Initially announced in 2019, Azure Arc is a strategic technology for Microsoft to expand its footprint to the enterprise data center and other public cloud platforms. Azure Arc is the only offering available in the market to manage both the legacy infrastructure based on physical servers and modern infrastructure powered by containers and Kubernetes.

Azure Arc for Hybrid and Multi-Cloud Deployments

With Azure Arc enabled servers, customers can onboard existing Linux and Windows servers running on bare metal servers or virtual machines to Azure Arc to manage them centrally. These servers could be running in on-premises environments or public cloud environments. Once registered with Azure Arc, they can seamlessly extend the Azure-based automation, management, and policy-driven configuration to any server irrespective of their deployment environment. This simplifies the fleet management and governance of infrastructure.

For example, with Azure Arc enabled servers, DevOps teams can roll out a consistent password policy to all the machines running in Azure VMs, on-prem data center, and even to Amazon EC2 or Google Compute Engine instances. They can also audit the compliance and remediate the issues from a centralized control plane.
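
As a rough sketch of what that centralized control plane looks like in practice, the snippet below lists Arc-connected machines with Azure's Python management SDK. It is my own illustration, not from the article: the azure-identity and azure-mgmt-hybridcompute packages, the client name, and the placeholder subscription and resource-group values are assumptions to check against the current SDK documentation.

```python
# Minimal sketch: enumerate Arc-enabled servers as ordinary Azure resources.
# Assumes the azure-identity and azure-mgmt-hybridcompute packages; names
# reflect my reading of the SDK and should be verified for your API version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

subscription_id = "<subscription-id>"          # placeholder
client = HybridComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Every connected machine -- on-premises, AWS, or GCP -- shows up here just
# like a native Azure resource, which is what enables uniform policy at scale.
for machine in client.machines.list_by_resource_group("<resource-group>"):
    print(machine.name, machine.location)
```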

Azure Arc enabled Kubernetes lets customers register Kubernetes clusters with Azure to take control of the cluster sprawl. Similar to Azure Arc enabled servers, they can apply consistent policies across all the registered clusters. An additional advantage of Azure Arc enabled Kubernetes is the integration of the GitOps-based deployment mechanism. Cluster managers can ensure that every Kubernetes cluster runs the same configuration and workloads across all registered clusters. GitOps provides at-scale deployment of workloads spanning the clusters running in the public cloud, data centers, and the edge.

Azure Stack, the hardware-based hybrid cloud offering from Microsoft, runs both VMs and managed Kubernetes clusters that can be registered with Azure Arc.

Optionally, Azure Arc customers can ingest the logs and metrics from servers and Kubernetes clusters into Azure Monitor - an integrated observability platform.

As of March 2021, Arc enabled servers and Arc enabled Kubernetes offerings are generally available.

Kubernetes has become the level playing field for running modern workloads. It's transforming into the new operating system for running distributed workloads, including databases and machine learning platforms.

Kubernetes plays a crucial role in Azure Arc by becoming the infrastructure foundation for running managed services such as databases and machine learning. Microsoft is leveraging Kubernetes to abstract the low-level infrastructure to run platform services reliably. Azure Arc enabled data services and Azure Arc enabled machine learning are early indicators of how Microsoft plans to unleash its managed services to run on any Kubernetes cluster.

Kubernetes as the foundation for Azure Arc enabled managed services

Azure Arc enabled data services extends Microsoft Azure's managed databases, including PostgreSQL Hyperscale and SQL Managed Instance, to Kubernetes clusters running in hybrid and multi-cloud environments. Customers can use the Azure Portal or the CLI to manage the lifecycle of database servers deployed through Arc enabled data services. The key advantage of this service is the ability to run databases in disconnected environments such as edge locations. Customers can run the databases in a highly secure environment without opening any outbound connections to the cloud.

Having experimented with databases, Microsoft is all set to bring machine learning to Azure Arc. Customers get the familiar Azure ML experience running in on-prem environments and other public cloud environments. Arc enabled machine learning combines the best of Kubernetes with data science and machine learning workflows. DevOps teams can provision workspaces with pre-configured Conda and Jupyter Notebook IDE. Through Role-Based Access Control (RBAC), data scientists and ML engineers can be given access to select operations needed for their job. With Arc enabled machine learning, customers can mix and match CPU hosts and GPU hosts of a Kubernetes cluster to run distributed training jobs. The models can then be deployed in managed Kubernetes clusters in the cloud or at the edge for inference.

Arc enabled machine learning is a masterstroke from Microsoft. It essentially brings ML Platform as a Service (PaaS) closer to the origin of the data. Customers may have large datasets uploaded to Amazon S3 while the ML training jobs are running in Azure. In that case, they can launch an Amazon EKS cluster in AWS to run Arc enabled machine learning with the same Jupyter Notebook and Azure ML SDK to train a model on AWS. The machine learning model can then be registered and deployed in Azure ML for inference.

Microsoft's investments in Azure Stack-based hardware and the Azure Arc platform become the critical differentiating factor. Azure is the only public cloud platform with hardware and software-based choices for implementing an enterprise hybrid cloud and multi-cloud strategy.

Read more from the original source:
Azure Arc Becomes The Foundation For Microsofts Hybrid And Multi-Cloud Strategy - Forbes
