
What is Artificial Intelligence (AI)? … – Techopedia

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits, several of which are described below.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions.

Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.



What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

AI is incorporated into a variety of different types of technology, and it has made its way into a number of application areas.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.


What is a quantum computer? Explained with a simple example.

by YK Sugi

Hi everyone!

The other day, I visited D-Wave Systems in Vancouver, Canada. It's a company that makes cutting-edge quantum computers.

I got to learn a lot about quantum computers there, so I'd like to share some of what I learned with you in this article.

The goal of this article is to give you an accurate intuition of what a quantum computer is using a simple example.

This article will not require you to have prior knowledge of either quantum physics or computer science to be able to understand it.

Okay, let's get started.

Edit (Feb 26, 2019): I recently published a video about the same topic on my YouTube channel. I would recommend watching it (click here) before or after reading this article because I have added some additional, more nuanced arguments in the video.

Here is a one-sentence summary of what a quantum computer is:

There is a lot to unpack in this sentence, so let me walk you through what it is exactly using a simple example.

To explain what a quantum computer is, I'll need to first explain a little bit about regular (non-quantum) computers.

Now, a regular computer stores information in a series of 0s and 1s.

Different kinds of information, such as numbers, text, and images can be represented this way.

Each unit in this series of 0s and 1s is called a bit. So, a bit can be set to either 0 or 1.

A quantum computer does not use bits to store information. Instead, it uses something called qubits.

Each qubit can not only be set to 1 or 0, but it can also be set to 1 and 0. But what does that mean exactly?

Let me explain this with a simple example. This is going to be a somewhat artificial example. But it's still going to be helpful in understanding how quantum computers work.

Now, suppose you're running a travel agency, and you need to move a group of people from one location to another.

To keep this simple, let's say that you need to move only 3 people for now: Alice, Becky, and Chris.

And suppose that you have booked 2 taxis for this purpose, and you want to figure out who gets into which taxi.

Also, suppose here that you're given information about who's friends with who, and who's enemies with who.

Here, let's say that Alice and Becky are friends, while Alice and Chris are enemies, and Becky and Chris are enemies.

And suppose that your goal here is to divide this group of 3 people into the two taxis to achieve the following two objectives: maximize the number of friend pairs sharing the same car, and minimize the number of enemy pairs sharing the same car.

Okay, so this is the basic premise of this problem. Let's first think about how we would solve this problem using a regular computer.

To solve this problem with a regular, non-quantum computer, you'll first need to figure out how to store the relevant information with bits.

Lets label the two taxis Taxi #1 and Taxi #0.

Then, you can represent who gets into which car with 3 bits.

For example, we can set the three bits to 0, 0, and 1 to represent: Alice gets into Taxi #0, Becky gets into Taxi #0, and Chris gets into Taxi #1.

Since there are two choices for each person, there are 2*2*2 = 8 ways to divide this group of people into two cars.

Here's a list of all possible configurations:

A | B | C
0 | 0 | 0
0 | 0 | 1
0 | 1 | 0
0 | 1 | 1
1 | 0 | 0
1 | 0 | 1
1 | 1 | 0
1 | 1 | 1

Using 3 bits, you can represent any one of these combinations.

Now, using a regular computer, how would we determine which configuration is the best solution?

To do this, let's define how we can compute the score for each configuration. This score will represent the extent to which each solution achieves the two objectives I mentioned earlier:

Let's simply define our score as follows:

(the score of a given configuration) = (# friend pairs sharing the same car) - (# enemy pairs sharing the same car)

For example, suppose that Alice, Becky, and Chris all get into Taxi #1. With three bits, this can be expressed as 111.

In this case, there is only one friend pair sharing the same car: Alice and Becky.

However, there are two enemy pairs sharing the same car: Alice and Chris, and Becky and Chris.

So, the total score of this configuration is 1-2 = -1.

With all of this setup, we can finally go about solving this problem.

With a regular computer, to find the best configuration, you'll need to essentially go through all configurations to see which one achieves the highest score.

So, you can think about constructing a table like this:

A | B | C | Score
0 | 0 | 0 | -1
0 | 0 | 1 |  1   <- one of the best solutions
0 | 1 | 0 | -1
0 | 1 | 1 | -1
1 | 0 | 0 | -1
1 | 0 | 1 | -1
1 | 1 | 0 |  1   <- the other best solution
1 | 1 | 1 | -1

As you can see, there are two correct solutions here: 001 and 110, both achieving the score of 1.
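The brute-force search described here can be sketched in a few lines of Python. The pair lists below encode the friend/enemy relationships from the example (Alice and Becky are friends; Chris is enemies with both):

```python
from itertools import product

FRIENDS = [("A", "B")]              # Alice and Becky are friends
ENEMIES = [("A", "C"), ("B", "C")]  # Chris is enemies with both

def score(config):
    """Score = (# friend pairs sharing a car) - (# enemy pairs sharing a car)."""
    taxi = dict(zip("ABC", config))
    same = lambda p, q: taxi[p] == taxi[q]
    return (sum(same(p, q) for p, q in FRIENDS)
            - sum(same(p, q) for p, q in ENEMIES))

# Brute force: check all 2**3 = 8 configurations.
scores = {cfg: score(cfg) for cfg in product((0, 1), repeat=3)}
best = max(scores.values())
winners = [cfg for cfg, s in scores.items() if s == best]
print(winners)  # [(0, 0, 1), (1, 1, 0)], each with score 1
```

Running this reproduces the table above: configurations 001 and 110 win with a score of 1.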

This particular problem is fairly simple, but it quickly becomes too difficult to solve with a regular computer as we increase the number of people.

We saw that with 3 people, we need to go through 8 possible configurations.

What if there are 4 people? In that case, we'll need to go through 2*2*2*2 = 16 configurations.

With n people, we'll need to go through 2^n configurations to find the best solution.

So, if there are 100 people, we'll need to go through 2^100, or roughly 10^30, configurations.

This is simply impossible to solve with a regular computer.
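You can check the size of this search space directly; Python's arbitrary-precision integers compute 2^100 exactly:

```python
# With n people there are 2**n configurations; Python integers handle this exactly.
n = 100
count = 2 ** n
print(count)           # 1267650600228229401496703205376
print(f"{count:.2e}")  # 1.27e+30
```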

How would we go about solving this problem with a quantum computer?

To think about that, let's go back to the case of dividing 3 people into two taxis.

As we saw earlier, there were 8 possible solutions to this problem:

A | B | C
0 | 0 | 0
0 | 0 | 1
0 | 1 | 0
0 | 1 | 1
1 | 0 | 0
1 | 0 | 1
1 | 1 | 0
1 | 1 | 1

With a regular computer, using 3 bits, we were able to represent only one of these solutions at a time, for example, 001.

However, with a quantum computer, using 3 qubits, we can represent all 8 of these solutions at the same time.

There are debates as to what it means exactly, but here's the way I think about it.

First, examine the first qubit out of these 3 qubits. When you set it to both 0 and 1, it's sort of like creating two parallel worlds. (Yes, it's strange, but just follow along here.)

In one of those parallel worlds, the qubit is set to 0. In the other one, it's set to 1.

Now, what if you set the second qubit to 0 and 1, too? Then, it's sort of like creating 4 parallel worlds.

In the first world, the two qubits are set to 00. In the second one, they are 01. In the third one, they are 10. In the fourth one, they are 11.

Similarly, if you set all three qubits to both 0 and 1, you'd be creating 8 parallel worlds: 000, 001, 010, 011, 100, 101, 110, and 111.

This is a strange way to think, but it is one of the correct ways to interpret how the qubits behave in the real world.

Now, when you apply some sort of computation on these three qubits, you are actually applying the same computation in all of those 8 parallel worlds at the same time.

So, instead of going through each of those potential solutions sequentially, we can compute the scores of all solutions at the same time.

With this particular example, in theory, your quantum computer would be able to find one of the best solutions in a few milliseconds. Again, that's 001 or 110 as we saw earlier:

A | B | C | Score
0 | 0 | 0 | -1
0 | 0 | 1 |  1   <- one of the best solutions
0 | 1 | 0 | -1
0 | 1 | 1 | -1
1 | 0 | 0 | -1
1 | 0 | 1 | -1
1 | 1 | 0 |  1   <- the other best solution
1 | 1 | 1 | -1

In reality, to solve this problem, you would need to give your quantum computer two things:

Given these two things, your quantum computer will spit out one of the best solutions in a few milliseconds. In this case, that's 001 or 110 with a score of 1.

Now, in theory, a quantum computer is able to find one of the best solutions every time it runs.

However, in reality, there are errors when running a quantum computer. So, instead of finding the best solution, it might find the second-best solution, the third best solution, and so on.

These errors become more prominent as the problem becomes more and more complex.

So, in practice, you will probably want to run the same operation on a quantum computer dozens of times or hundreds of times. Then pick the best result out of the many results you get.
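That repeat-and-pick-the-best strategy can be sketched classically. The noise model below (a 70% chance of hitting the optimal score on each run) is an invented assumption purely for illustration, not a real error model:

```python
import random

# Sketch with an invented noise model: each run returns the true best
# score (1) with 70% probability, and a worse score (-1) otherwise.
def noisy_quantum_run():
    return 1 if random.random() < 0.7 else -1

random.seed(0)  # deterministic for the example
results = [noisy_quantum_run() for _ in range(100)]
print(max(results))  # 1 -- at least one of the 100 runs found the best score
```

The chance that all 100 runs miss the best solution is vanishingly small, which is why repeating and keeping the best result works in practice.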

Even with the errors I mentioned, the quantum computer does not have the same scaling issue a regular computer suffers from.

When there are 3 people we need to divide into two cars, the number of operations we need to perform on a quantum computer is 1. This is because a quantum computer computes the score of all configurations at the same time.

When there are 4 people, the number of operations is still 1.

When there are 100 people, the number of operations is still 1. With a single operation, a quantum computer computes the scores of all 2^100 ≈ 10^30 = one million million million million million configurations at the same time.

As I mentioned earlier, in practice, it's probably best to run your quantum computer dozens of times or hundreds of times and pick the best result out of the many results you get.

However, its still much better than running the same problem on a regular computer and having to repeat the same type of computation one million million million million million times.

Special thanks to everyone at D-Wave Systems for patiently explaining all of this to me.

D-Wave recently launched a cloud environment for interacting with a quantum computer.

If you're a developer and would actually like to try using a quantum computer, it's probably the easiest way to do so.

It's called Leap, and it's at https://cloud.dwavesys.com/leap. You can use it for free to solve thousands of problems, and they also have easy-to-follow tutorials on getting started with quantum computers once you sign up.


Working at DeepMind | Glassdoor

Our success depends on many teams joining together for a shared goal. No single discipline has all the answers needed to build AI, and we've found that many exciting new ideas come from dedicated collaboration between different fields. Learn more about our dedicated teams below.

Research

Our research teams work on cutting-edge computer science, neuroscience, ethics, and public policy to responsibly pioneer new AI systems. Research scientists and engineers collaborate across DeepMind and with our partners to create systems that can benefit all parts of society.

Find out more here.

Engineering

Our engineers help accelerate our research by building, maintaining, and optimising the tools and environments we use. From developing bespoke environments to scaling research prototypes, our engineers enable us to perform safe and rigorous experimentation at scale.

Find out more here.

Science

Our multidisciplinary group of researchers and engineers collaborate with expert partners on a wide range of scientific problems. From protein folding to quantum chemistry, we're using AI to unlock some of the most fascinating challenges in the natural sciences.

Find out more here.

Ethics & Society

Our interdisciplinary group of policy experts, philosophers, and researchers work with other groups in academia, civil society, and the broader AI community to address the challenges of using new technologies, putting ethics into practice, and helping society address the impacts of AI.

Find out more here.

DeepMind for Google

Our researchers and engineers work with our partners at Google to apply our systems in the real world. This collaboration has already reduced Google's energy consumption and improved products that are in the hands of hundreds of millions of people around the world.

Find out more here.

Operations

Our dedicated teams for recruitment, people development, property and workplace, travel, executive support, events, communications, finance, legal, and public engagement work across the organisation to maintain, optimise, and nurture our culture and world-leading research.

Find out more here.


Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: "Stephen Hawking warns that rise of robots may be disastrous for mankind." And just as many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it's possible that we might also cede control.


artificial intelligence | Definition, Examples, and …

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as "jump" unless it previously had been presented with "jumped", whereas a program that is able to generalize can learn the "add -ed" rule and so form the past tense of "jump" based on experience with similar verbs.
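The contrast between rote learning and generalization can be sketched in a few lines of Python; the function names and stored verbs here are illustrative, not from the article:

```python
# Rote learning: a lookup table of forms the program has already been shown.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    """Only recalls past tenses it has memorized; fails on unseen verbs."""
    return rote_memory.get(verb)  # returns None for anything not yet seen

def generalized_past_tense(verb):
    """Applies the learned 'add -ed' rule to any regular verb."""
    return verb + "ed"

print(rote_past_tense("jump"))         # None -- never presented with "jumped"
print(generalized_past_tense("jump"))  # jumped
```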


Artificial Intelligence What it is and why it matters | SAS

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?


What is Artificial Intelligence? How Does AI Work? | Built In

"Can machines think?" - Alan Turing, 1950

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"

Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates. So much so, that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions" (Russell and Norvig viii).

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:

"AI is a computer system able to perform tasks that ordinarily require human intelligence... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules."


What Is Machine Learning? | How It Works, Techniques …

Supervised Learning

Supervised machine learning builds a model that makes predictions based on evidence in the presence of uncertainty. A supervised learning algorithm takes a known set of input data and known responses to the data (output) and trains a model to generate reasonable predictions for the response to new data. Use supervised learning if you have known data for the output you are trying to predict.

Supervised learning uses classification and regression techniques to develop predictive models.

Classification techniques predict discrete responsesfor example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, speech recognition, and credit scoring.

Use classification if your data can be tagged, categorized, or separated into specific groups or classes. For example, applications for handwriting recognition use classification to recognize letters and numbers. In image processing and computer vision, unsupervised pattern recognition techniques are used for object detection and image segmentation.

Common algorithms for performing classification include support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, Naïve Bayes, discriminant analysis, logistic regression, and neural networks.
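As a concrete sketch of one of these algorithms, here is a minimal k-nearest neighbor classifier (with k = 1) in plain Python. The "email" feature vectors and labels below are invented purely for illustration:

```python
def nearest_neighbor_classify(train, query):
    """1-NN: return the label of the training point closest to the query.

    train: list of (features, label) pairs; query: a feature tuple.
    Uses squared Euclidean distance, which preserves the nearest-neighbor order.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda fl: dist(fl[0], query))[1]

# Invented features (e.g. link density, spam-word frequency) and labels.
emails = [((0.9, 0.8), "spam"),
          ((0.1, 0.2), "genuine"),
          ((0.8, 0.7), "spam")]
print(nearest_neighbor_classify(emails, (0.85, 0.75)))  # spam
```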

Regression techniques predict continuous responsesfor example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.

Use regression techniques if you are working with a data range or if the nature of your response is a real number, such as temperature or the time until failure for a piece of equipment.

Common regression algorithms include linear model, nonlinear model, regularization, stepwise regression, boosted and bagged decision trees, neural networks, and adaptive neuro-fuzzy learning.
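A minimal regression sketch to go with this list: ordinary least squares for a single input variable, fit here to made-up temperature/power-demand numbers (the data and function name are illustrative assumptions):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one input feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Made-up readings: temperature (x) vs. power demand (y), lying on y = 2x + 80.
xs = [10, 20, 30, 40]
ys = [100, 120, 140, 160]
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 80.0
```

Given a new temperature, `a * x + b` predicts the continuous response, which is exactly what the regression techniques above do at larger scale.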


Qubits and Defining the Quantum Computer | HowStuffWorks

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.

Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
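One way to build intuition for this parallelism is a classical simulation: an n-qubit register in superposition is described by a vector of 2^n amplitudes, and measurement probabilities are the squared amplitudes (the Born rule). A minimal sketch:

```python
import math

# Classical simulation sketch: an n-qubit register is a vector of 2**n amplitudes.
n = 3
dim = 2 ** n                        # 8 basis states: 000, 001, ..., 111
amps = [1 / math.sqrt(dim)] * dim   # uniform superposition over all 8 states

# Born rule: probability of measuring a basis state is |amplitude|**2.
probs = [a * a for a in amps]
print(round(sum(probs), 6))  # 1.0   (probabilities always sum to 1)
print(round(probs[0], 6))    # 0.125 (each of the 8 outcomes equally likely)
```

Note the cost of simulating this classically doubles with every added qubit, which is precisely the scaling a real quantum computer sidesteps.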

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
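Entanglement can be sketched the same way. In the Bell state (|00> + |11>)/sqrt(2), the outcomes 01 and 10 have zero probability, so measuring one qubit fixes the value of the other, mirroring the correlated spins described above:

```python
import math

# Bell state (|00> + |11>)/sqrt(2): amplitudes over basis states 00, 01, 10, 11.
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
bell_probs = [a * a for a in bell]

# 01 and 10 can never be observed: the two qubits' outcomes are perfectly correlated.
print([round(p, 3) for p in bell_probs])  # [0.5, 0.0, 0.0, 0.5]
```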

Next, we'll look at some recent advancements in the field of quantum computing.
