Category Archives: Deep Mind
To Understand The Future of AI, Study Its Past – Forbes
Dr. Claude Shannon, one of the pioneers of the field of artificial intelligence, with an electronic mouse designed to navigate its way around a maze after only one 'training' run. May 10, 1952, at Bell Laboratories. (Photo by Keystone/Getty Images)
A schism lies at the heart of the field of artificial intelligence. Since its inception, the field has been defined by an intellectual tug-of-war between two opposing philosophies: connectionism and symbolism. These two camps have deeply divergent visions as to how to "solve" intelligence, with differing research agendas and sometimes bitter relations.
Today, connectionism dominates the world of AI. The emergence of deep learning, which is a quintessentially connectionist technique, has driven the worldwide explosion in AI activity and funding over the past decade. Deep learning's recent accomplishments have been nothing short of astonishing. Yet as deep learning spreads, its limitations are becoming increasingly evident.
If AI is to reach its full potential going forward, a reconciliation between connectionism and symbolism is essential. Thankfully, in both academic and commercial settings, research efforts that fuse these two traditionally opposed approaches are beginning to emerge. Such synthesis may well represent the future of artificial intelligence.
Symbolic approaches to AI seek to build systems that behave intelligently through the manipulation of symbols that map directly to concepts (for instance, words and numbers). Connectionist approaches, meanwhile, represent information and simulate intelligence via massive networks of interconnected processing units (commonly referred to as neural networks), rather than explicitly with symbols.
In many respects, connectionism and symbolism represent each other's yin and yang: each approach has core strengths that for the other are important weaknesses. Neural networks develop flexible, bottom-up intuition based on the data they are fed. Their millions of interconnected "neurons" allow them to be highly sensitive to gradations and ambiguities in input; their plasticity allows them to learn in response to new information.
But because they are not explicitly programmed by humans, neural networks are "black boxes": it is generally not possible to pinpoint, in terms that are meaningful to humans, why they make the decisions that they do. This lack of explainability is a fundamental impediment to the widespread use of connectionist methods in high-stakes real-world environments.
Symbolic systems do not have this problem. Because these systems operate with high-level symbols to which discrete meanings are attached, their logic and inner workings are human-readable. The tradeoff is that symbolic systems are more static and brittle. Their performance tends to break down when confronted with situations that they have not been explicitly programmed to handle. The real world is complex and heterogeneous, full of fuzzily defined concepts and novel situations. Symbolic AI is ill-suited to grapple with this complexity.
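To make the trade-off concrete, consider the following toy sketch; the task, vocabulary, and models are invented for illustration and do not come from the article. The symbolic classifier is fully human-readable but brittle outside its hand-coded rules, while the single-neuron learner adapts to data yet offers no human-readable rationale.

```python
import math

# --- Symbolic camp: explicit if-then rules over human-readable symbols ---
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def symbolic_sentiment(text):
    """Every decision traces to an explicit rule a human wrote."""
    words = set(text.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "unknown"  # brittle: fails on anything not explicitly programmed

# --- Connectionist camp in miniature: weights learned from data ---
VOCAB = sorted(POSITIVE | NEGATIVE | {"movie", "plot", "boring", "fun"})
IDX = {w: i for i, w in enumerate(VOCAB)}

def features(text):
    """Bag-of-words vector over the toy vocabulary."""
    v = [0.0] * len(VOCAB)
    for w in text.lower().split():
        if w in IDX:
            v[IDX[w]] = 1.0
    return v

def train(samples, epochs=200, lr=0.5):
    """A one-neuron 'network' (logistic regression) fit by gradient descent."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, label in samples:
            x = features(text)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label  # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

data = [("a good fun movie", 1), ("great plot", 1),
        ("awful boring plot", 0), ("a terrible movie", 0)]
w, b = train(data)
# The rules explain themselves; the learned weights generalize by degree
# but are just numbers, with no human-readable rationale attached.
```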
At its inception, the field of artificial intelligence was dominated by symbolism. As a serious academic discipline, artificial intelligence traces its roots to the summer of 1956, when a small group of academics (including future AI icons like Claude Shannon, Marvin Minsky and John McCarthy) organized a two-month research workshop on the topic at Dartmouth College. As is evident in the group's original research proposal from that summer, these AI pioneers' conception of intelligence was oriented around symbolic theories and methods.
Throughout the 1960s and into the 1970s, symbolic approaches to AI predominated. Famous early AI projects like ELIZA and SHRDLU are illustrative examples. These programs were designed to interact with humans using natural language (within carefully prescribed parameters). For instance, SHRDLU could successfully respond to human queries like "Is there a large block behind a pyramid?" or "What does the box contain?"
At the same time that symbolic AI research was showing early signs of promise, nascent efforts to explore connectionist paths to AI were shut down in dramatic fashion. In 1969, in response to early research on artificial neural networks, leading AI scholars Marvin Minsky and Seymour Papert published a landmark book called Perceptrons. The book set forth mathematical proofs that single-layer perceptrons were incapable of computing certain basic functions, such as the XOR operation.
Perceptrons' impact was sweeping: the AI research community took the analysis as authoritative evidence that connectionist methods were an unproductive path forward in AI. As a consequence, neural networks all but disappeared from the AI research agenda for over a decade.
Yet despite its early momentum, it would soon become clear that symbolic AI had profound shortcomings of its own.
Symbolic AI reached its mainstream zenith in the early 1980s with the proliferation of what were called expert systems: computer programs that, using extensive if-then logic, sought to codify the knowledge and decision-making of human experts in particular domains. These systems generated tremendous expectations and hype: startups like Teknowledge and Intellicorp raised millions and Fortune 500 companies invested billions in attempts to commercialize the technology.
Expert systems failed spectacularly to deliver on these expectations, due to the shortcomings noted above: their brittleness, inflexibility and inability to learn. By 1987 the market for expert systems had all but collapsed. An "AI winter" set in that would stretch into the new century.
Amid the ashes of the discredited symbolic AI paradigm, a revival of connectionist methods began to take shape in the late 1980s, a revival that has reached full bloom in the present day. In 1986 Geoffrey Hinton co-authored a landmark paper on backpropagation, a method for training neural networks that has become the foundation of modern deep learning. As early as 1989, Yann LeCun had built neural networks using backpropagation that could reliably read handwritten zip codes for the U.S. Postal Service.
But these early neural networks were impractical to train and could not scale. Through the 1990s and into the 2000s, Hinton, LeCun and other connectionist pioneers persisted in their work on neural networks in relative obscurity. In just the past decade, a confluence of technology developments (exponentially increased computing capabilities, larger data sets, and new types of microprocessors) has supercharged these connectionist methods first devised in the 1980s. These forces have catapulted neural networks out of the research lab to the center of the global economy.
Yet for all of its successes, deep learning has meaningful shortcomings. Connectionism is at heart a correlative methodology: it recognizes patterns in historical data and makes predictions accordingly, nothing more. Neural networks do not develop semantic models of their environment; they cannot reason or think abstractly; they have no meaningful understanding of their inputs and outputs. Because neural networks' inner workings are not semantically grounded, they are inscrutable to humans.
Importantly, these failings map directly onto symbolic AI's defining strengths: symbolic systems are human-readable and logic-based.
Recognizing the promise of a hybrid approach, AI researchers around the world have begun to pursue research efforts that represent a reconciliation of connectionist and symbolic methods.
DARPA
To take one example, in 2017 DARPA launched a program called Explainable Artificial Intelligence (XAI). XAI is providing funding to 13 research teams across the country to develop new AI methods that are more interpretable than traditional neural networks.
Some of these research teams are focused on incorporating symbolic elements into the architecture of neural networks. Other teams are going further still, developing purely symbolic AI methods.
Autonomous vehicles
Another example of the merits of a dual connectionist/symbolic approach comes from the development of autonomous vehicles.
A few years ago, it was not uncommon for AV researchers to speak of pursuing a purely connectionist approach to vehicle autonomy: developing an "end-to-end" neural network that would take raw sensor data as input and generate vehicle controls as output, with everything in between left to the opaque workings of the model.
As of 2016, prominent AV developers like Nvidia and Drive.ai were building end-to-end deep learning solutions. Yet as research efforts have progressed, consensus has developed across the industry that connectionist-only methods are not workable for the commercial deployment of AVs.
The reason is simple: for an activity as ubiquitous and safety-critical as driving, it is not practicable to use AI systems whose actions cannot be closely scrutinized and explained. Regulators across the country have made clear that an AV system's inability to account for its own decisions is a non-starter.
Today, the dominant (perhaps the exclusive) technological approach among AV programs is to combine neural networks with symbolic features in order to increase model transparency.
Most often, this is achieved by breaking the overall AV cognition pipeline into modules: e.g., perception, prediction, planning, actuation. Within a given module, neural networks are deployed in targeted ways. But layered on top of these individual modules is a symbolic framework that integrates the various components and validates the system's overall output, as sketched below.
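Here is a minimal sketch of that modular pattern; the module boundaries, names, and safety rules are illustrative assumptions, not any particular company's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str              # e.g. "pedestrian", "vehicle"
    distance_m: float
    closing_speed_mps: float

def perceive(sensor_frame):
    # Stand-in for a neural perception module (detector + tracker);
    # returns a canned detection for demonstration purposes.
    return [TrackedObject("pedestrian", distance_m=8.0, closing_speed_mps=5.0)]

def plan(objects):
    # Stand-in for a learned planning module proposing a control command.
    return {"throttle": 0.4, "brake": 0.0}

def symbolic_safety_layer(objects, command):
    """Human-readable rules that validate the learned modules' output.
    Unlike the neural components, every rejection here traces back to
    an explicit, auditable rule."""
    for obj in objects:
        time_to_contact = obj.distance_m / max(obj.closing_speed_mps, 0.1)
        if obj.kind == "pedestrian" and time_to_contact < 2.0:
            return {"throttle": 0.0, "brake": 1.0,
                    "reason": "rule: pedestrian time-to-contact < 2 s"}
    if command["throttle"] > 0 and command["brake"] > 0:
        return {"throttle": 0.0, "brake": 1.0,
                "reason": "rule: inconsistent throttle/brake command"}
    return command

objects = perceive(sensor_frame=None)
print(symbolic_safety_layer(objects, plan(objects)))
# -> {'throttle': 0.0, 'brake': 1.0, 'reason': 'rule: pedestrian time-to-contact < 2 s'}
```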
Academia
Finally, at leading academic institutions around the world, researchers are pioneering cutting-edge hybrid AI models to capitalize on the complementary strengths of the two paradigms. Notable examples include a 2018 research effort at DeepMind and a 2019 program led by Josh Tenenbaum at MIT.
In a fitting summary, NYU professor Brenden Lake said of the MIT research: "Neural pattern recognition allows the system to see, while symbolic programs allow the system to reason. Together, the approach goes beyond what current deep learning systems can do."
Taking a step back, we would do well to remember that the human mind, that original source of intelligence that has inspired the entire AI enterprise, is at once deeply connectionist and deeply symbolic.
Anatomically, thoughts and memories are not discretely represented but rather distributed in parallel across the brain's billions of interconnected neurons. At the same time, human intelligence is characterized at the level of consciousness by the ability to express and manipulate independently meaningful symbols. As philosopher Charles Sanders Peirce put it, "We think only in signs."
Any conception of human intelligence that lacked either a robust connectionist or a robust symbolic dimension would be woefully incomplete. The same may prove to be true of machine intelligence. As dazzling as the connectionist-driven advances in AI have been over the past decade, they may be but a prelude to what becomes possible when the discipline more fully harmonizes connectionism and symbolism.
Health strategies of Google, Amazon, Apple, and Microsoft – Business Insider
Dr. David Feinberg, the head of Google Health. Reuters
Over the past year, Google has gotten deeper into healthcare, hiring Dr. David Feinberg to head up the Google Health division.
A big Google health project is now drawing scrutiny. Google teamed up with the health system Ascension on "Project Nightingale," in which the hospital operator is using Google as its cloud provider and also working with the tech giant to build out tools the health system can use.
Business Insider reported on Monday that by 2020, records on 50 million Ascension patients will be on Google's cloud network. About 150 Google employees are able to access the data, according to documents seen by Business Insider.
The project drew concern from those inside the company, lawmakers, and the US Department of Health and Human Services about how the data is being handled. Google and Ascension said that the relationship followed health-privacy laws.
More broadly, Feinberg's team is now responsible for coordinating health initiatives across Google, including in the company's search-engine and map products, its Android smartphone operating system, and its more futuristic offerings in areas like artificial intelligence.
In a speech at a conference in October, Feinberg said one of his first main goals for the team would be overseeing how health-related searches come up, working with the Google Search team to improve them. According to documents reviewed by Business Insider, the team appears to be building a Patient Search tool to help medical providers sift through patient information.
Google Health is just one aspect of the healthcare strategy of its parent company, Alphabet. Within Google, Google Cloud is working to sign cloud contracts with healthcare systems. Mayo Clinic in September signed Google as its cloud and AI partner.
There's also Verily, the life-sciences arm of Alphabet, as well as Calico, its life-extension spin-off. Verily has its hands in projects spanning robotics, blood-sugar-tracking devices, and work on addiction treatment. Alphabet has also made investments in healthcare through its venture funds GV and CapitalG, as well as directly.
Google in November also reached a $2.1 billion deal to acquire Fitbit. The brand, best known for its fitness watches, also has a big business selling a health platform that combines coaching and fitness tracking with employers and health plans.
Beyond working with existing products, Feinberg's oversight includes the health team at Google AI, hardware components, and DeepMind Health. Both Google AI and DeepMind have pursued projects that analyze medical images like eye scans and scans of breast cancer cells, with the hope of aiding medical professionals in diagnosing and treating patients.
deep mind – Mathematics, Machine Learning & Computer Science
Conditional Probabilities
Let us consider a probability measure $\mathbb{P}$ of a measurable space $(\Omega, \mathcal{F})$. Further, let $A, B \in \mathcal{F}$ with $\mathbb{P}(B) > 0$, valid for the entire post.
[Figure: Venn diagram of a possible constellation of the sets $A$ and $B$]
Let us directly start with the formal definition of a conditional probability. Illustrations and explanations follow immediately afterwards.
Definition (Conditional Probability): Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $B \in \mathcal{F}$ with $\mathbb{P}(B) > 0$. The real value

$$\mathbb{P}(A \mid B) := \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)}, \qquad A \in \mathcal{F},$$

is the probability of $A$ given that $B$ has occurred. $\mathbb{P}(A \cap B)$ is the probability that both events $A$ and $B$ occur, and $B$ is the new basic set since all outcomes outside of $B$ are excluded by assumption.
A conditional probability, denoted by $\mathbb{P}(A \mid B)$, is a probability measure of an event $A$ occurring, given that another event $B$ has already occurred. That is, $\mathbb{P}(A \mid B)$ reflects the probability that both events $A$ and $B$ occur relative to the new basic set $B$.
The objective of $\mathbb{P}(\cdot \mid B)$ is two-fold:

1. to rescale the probabilities of all events relative to the new basic set $B$;
2. to treat $B$ as an event that has occurred with certainty.

The last bullet-point 2. actually means $\mathbb{P}(B \mid B) = 1$ since we know (by assumption, presumption, assertion or evidence) that $B$ has occurred. In particular, $B$ cannot be a null set since $\mathbb{P}(B) > 0$. Due to the additivity of a probability space we get $\mathbb{P}(B^c \mid B) = 0$ as $\mathbb{P}(B \mid B) + \mathbb{P}(B^c \mid B) = \mathbb{P}(\Omega \mid B) = 1$. The knowledge about $B$ might be interpreted as an additional piece of information that we have received over time.
The following examples are going to illustrate this very basic concept.
Example (Default Rates): Let us assume that $A$ represents the set of all defaulting companies in the world, and $B$ represents the set of all companies located in Germany. Hereby, we further assume $\mathbb{P}(B) > 0$. Let us further assume that the average probability of default $\mathbb{P}(A)$ is known. If we restrict the population to companies located in Germany, our estimate can be updated by this knowledge: the conditional default rate $\mathbb{P}(A \mid B)$ will, in general, differ from the worldwide average $\mathbb{P}(A)$.
As a motivation for the above example, S&P's 2018 Annual Global Corporate Default And Rating Transition Study and Creditreform's 2018 default study of German companies state such average default rates.
Example (Urn): An urn contains 3 white and 3 black balls. Two balls will be drawn successively, without putting the balls back into the urn. We are interested in the event

$A$ = {white ball in the second draw}.

The probability of $A$ obviously depends on the result of the first draw. Let $B$ = {white ball in the first draw}. We distinguish two cases as follows:

1. First ball white (event $B$): the urn then contains 2 white and 3 black balls, so $\mathbb{P}(A \mid B) = \frac{2}{5}$.
2. First ball black (event $B^c$): the urn then contains 3 white and 2 black balls, so $\mathbb{P}(A \mid B^c) = \frac{3}{5}$.

Notice that $\mathbb{P}(A) = \mathbb{P}(A \mid B)\,\mathbb{P}(B) + \mathbb{P}(A \mid B^c)\,\mathbb{P}(B^c) = \frac{2}{5} \cdot \frac{1}{2} + \frac{3}{5} \cdot \frac{1}{2} = \frac{1}{2}$. In addition, please realize that $A$ and $B$ / $B^c$ are not independent, since we have not put the first ball back into the urn.
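For readers who prefer an empirical check, the following short simulation (a sketch added here, not part of the original post) approximates both $\mathbb{P}(A)$ and $\mathbb{P}(A \mid B)$ by sampling.

```python
import random

def draw_two():
    """Draw two balls without replacement from an urn of 3 white, 3 black."""
    urn = ["white"] * 3 + ["black"] * 3
    random.shuffle(urn)
    return urn[0], urn[1]

N = 100_000
second_white = 0                      # counts event A
first_white = 0                       # counts event B
second_white_given_first_white = 0    # counts A within B
for _ in range(N):
    first, second = draw_two()
    second_white += second == "white"
    if first == "white":
        first_white += 1
        second_white_given_first_white += second == "white"

print("P(A)     ~", second_white / N)                              # ≈ 1/2
print("P(A | B) ~", second_white_given_first_white / first_white)  # ≈ 2/5
```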
Let us consider the probability measure derived from the conditional probability in more detail.
Theorem: Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $B \in \mathcal{F}$ with $\mathbb{P}(B) > 0$. The map

$$\mathbb{P}(\cdot \mid B): \mathcal{F} \to [0, 1], \qquad A \mapsto \mathbb{P}(A \mid B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)},$$

defines a probability measure on $(\Omega, \mathcal{F})$.
Proof: Apparently, $\mathbb{P}(A \mid B) \geq 0$ for all $A \in \mathcal{F}$ since $\mathbb{P}(A \cap B) \geq 0$ and $\mathbb{P}(B) > 0$. Further, $\mathbb{P}(\Omega \mid B) = \frac{\mathbb{P}(\Omega \cap B)}{\mathbb{P}(B)} = \frac{\mathbb{P}(B)}{\mathbb{P}(B)} = 1$. The $\sigma$-additivity follows, for pairwise disjoint $A_1, A_2, \ldots \in \mathcal{F}$, by

$$\mathbb{P}\left( \bigcup_{i=1}^{\infty} A_i \,\middle|\, B \right) = \frac{\mathbb{P}\left( \bigcup_{i=1}^{\infty} (A_i \cap B) \right)}{\mathbb{P}(B)} = \frac{\sum_{i=1}^{\infty} \mathbb{P}(A_i \cap B)}{\mathbb{P}(B)} = \sum_{i=1}^{\infty} \mathbb{P}(A_i \mid B). \qquad \square$$
As outlined in the first section of this post, the conditional probability $\mathbb{P}(A \mid B)$ is the probability that both events $A$ and $B$ occur relative to the new basic set $B$. Let us transform the conditional probability formula as follows:

$$\mathbb{P}(A \cap B) = \mathbb{P}(A \mid B)\,\mathbb{P}(B).$$

Notice that, by the same reasoning with the roles of $A$ and $B$ exchanged,

$$\mathbb{P}(A \cap B) = \mathbb{P}(B \cap A) = \mathbb{P}(B \mid A)\,\mathbb{P}(A).$$

Hence, we can conclude that

$$\mathbb{P}(A \mid B) = \frac{\mathbb{P}(B \mid A)\,\mathbb{P}(A)}{\mathbb{P}(B)}. \tag{1}$$

Formula (1) is also called Bayes' Rule or Bayes' Theorem.
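As a quick sanity check, here is Formula (1) in code, applied to the urn example's numbers; a minimal sketch added here, not part of the original post.

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B), valid for P(B) > 0."""
    return p_b_given_a * p_a / p_b

# Urn example: A = white on the second draw, B = white on the first draw.
# P(B|A) = P(A ∩ B)/P(A) = (1/5)/(1/2) = 2/5, P(A) = 1/2, P(B) = 1/2.
print(bayes(2/5, 1/2, 1/2))  # 0.4, matching P(A|B) = 2/5 from above
```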
Google absorbs DeepMind healthcare unit 10 months after …
Google has finally absorbed the healthcare unit of its artificial intelligence company DeepMind, the British firm it acquired for £400 million (about $500 million) in 2014.
The change means that DeepMind Health, the unit which focuses on using AI to improve medical care, is now part of Google's own dedicated healthcare unit. Google Health was created in November 2018, and is run by big-name healthcare CEO David Feinberg.
DeepMind's clinical lead, Dominic King, announced the change in a blogpost on Wednesday. King will continue to lead the team out of London.
It has taken some 10 months for the integration to happen.
It also comes one month after the DeepMind cofounder overseeing that division, Mustafa Suleyman, confirmed that he was on leave from the business for unspecified reasons. He has said he plans to return to DeepMind before the end of the year.
Suleyman spearheaded DeepMind's "applied" division, which focuses on the practical application of artificial intelligence in areas such as healthcare and energy. DeepMind's other cofounder and CEO, Demis Hassabis, is more focused on the academic side of the business and the firm's research efforts.
One source with knowledge of the matter said Google planned to take more control of DeepMind's "applied" division, leaving Suleyman's future role at the business unclear. The shift would essentially leave DeepMind as a research-only organization, with Google focused on commercializing its findings. "They've created a private university for AI in Britain," the person said.
DeepMind hinted as much in November, when it announced the Streams app would fall under Google's auspices.
DeepMind cofounder Mustafa Suleyman, who is on leave from the business. DeepMind
DeepMind declined to comment.
The integration sees DeepMind's health partnerships with Britain's state-funded health system, the NHS, continued under Google Health, something that may raise eyebrows. A New Scientist investigation in 2016 revealed that DeepMind, with its Streams app, had extensive access to 1.6 million patients' data in an arrangement with London's Royal Free Hospital. A UK regulator ruled that the data-sharing agreement was unlawful. The revelations triggered public outcry over worries that a US tech giant, Google, might gain access to confidential patient data for profit.
DeepMind's current NHS partnerships include Moorfields Eye Hospital, to detect eye disease, and University College Hospital, on cancer radiotherapy treatment. In the US, it has partnered with the US Department of Veterans Affairs on predicting patient deterioration. Dominic King, DeepMind's clinical lead, wrote in a post: "We see enormous potential in continuing, and scaling, our work with all three partners in the coming years as part of Google Health."
He added: "As has always been the case, our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions."
DeepMind Q&A Dataset – New York University
Hermann et al. (2015) created two awesome datasets using news articles for Q&A research. The datasets contain many documents (90k and 197k, respectively), and each document is accompanied by roughly four questions on average. Each question is a sentence with one missing word/phrase, which can be found in the accompanying document/context.
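To make the cloze setup concrete, here is an invented miniature record in the spirit of that format; the text, entity markers, and field names are illustrative assumptions, not an actual record from either dataset.

```python
# An invented miniature example of the cloze-style Q&A format described
# above; the text, entity markers, and field names are illustrative only.
sample = {
    "context": "@entity1 announced that @entity2 will lead its new "
               "research lab in @entity3 .",
    "question": "@placeholder will lead the new research lab",
    "answer": "@entity2",
}

def answer_in_context(example):
    # Defining property of the setup: the missing word/phrase can always
    # be found in the accompanying document/context.
    return example["answer"] in example["context"].split()

assert answer_in_context(sample)
```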
The original authors kindly released the scripts and accompanying documentation to generate the datasets (see here). Unfortunately, due to the instability of the Wayback Machine, it is often cumbersome to generate the datasets from scratch using the provided scripts. Furthermore, in certain parts of the world, accessing the Wayback Machine turns out to be far from straightforward.
I am making the generated datasets available here. This will hopefully put the datasets in the hands of a wider audience and lead to faster progress in Q&A research.
Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (pp. 1684-1692).
Working at DeepMind | Glassdoor
Our success depends on many teams joining together for a shared goal. No single discipline has all the answers needed to build AI, and we've found that many exciting new ideas come from dedicated collaboration between different fields. Learn more about our dedicated teams below.
Research
Our research teams work on cutting-edge computer science, neuroscience, ethics, and public policy to responsibly pioneer new AI systems. Research scientists and engineers collaborate across DeepMind and with our partners to create systems that can benefit all parts of society.
Engineering
Our engineers help accelerate our research by building, maintaining, and optimising the tools and environments we use. From developing bespoke environments to scaling research prototypes, our engineers enable us to perform safe and rigorous experimentation at scale.
Science
Our multidisciplinary group of researchers and engineers collaborates with expert partners on a wide range of scientific problems. From protein folding to quantum chemistry, we're using AI to unlock some of the most fascinating challenges in the natural sciences.
Ethics & Society
Our interdisciplinary group of policy experts, philosophers, and researchers works with other groups in academia, civil society, and the broader AI community to address the challenges of using new technologies, putting ethics into practice, and helping society manage the impacts of AI.
DeepMind for Google
Our researchers and engineers work with our partners at Google to apply our systems in the real world. This collaboration has already reduced Google's energy consumption and improved products that are in the hands of hundreds of millions of people around the world.
Operations
Our dedicated teams for recruitment, people development, property and workplace, travel, executive support, events, communications, finance, legal, and public engagement work across the organisation to maintain, optimise, and nurture our culture and world-leading research.