This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc.). But if, like me, you've never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.
Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting the torrent of information we receive through our sensory inputs and associating known items with their respective categories.
That picture, by the way, is an Indian civet, an endangered species that has nothing to do with cats, dogs, and rodents. It should be placed in its own separate category (viverrids). There you go. You now have a new bucket to place civets in, which includes this variant that was sighted recently in India.
While we still have much to learn about how the mind works, we are in the midst (or maybe still at the beginning) of an era of creating our own version of the human brain. After decades of research and development, researchers have managed to create deep neural networks that sometimes match or surpass human performance in specific tasks.
But one of the recurring themes in discussions about artificial intelligence is whether artificial neural networks used in deep learning work similarly to the biological neural networks of our brains. Many scientists agree that artificial neural networks are a very rough imitation of the brain's structure, and some believe that ANNs are statistical inference engines that do not mirror the many functions of the brain. The brain, they believe, contains many wonders that go beyond the mere connection of biological neurons.
A paper recently published in the peer-reviewed journal Neuron challenges this conventional view of the functions of the human brain. Titled "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks," the paper argues that, contrary to the beliefs of many scientists, the human brain is a brute-force big data processor that fits its parameters to the many examples it experiences. That's the kind of description usually reserved for deep neural networks.
Authored by researchers at Princeton University, the thought-provoking paper provides a different perspective on neural networks, analogies between ANNs and their biological counterparts, and future directions for creating more capable artificial intelligence systems.
Neuroscientists generally believe that the complex functionalities of the brain can be broken down into simple, interpretable models.
For instance, I can explain the complex mental process of my analysis of the civet picture (before I knew its name, of course) as follows: It's definitely not a bird, because it doesn't have feathers and wings. And it certainly isn't a fish. It's probably a mammal, given the furry coat. It could be a cat, given the pointy ears, but the neck is a bit too long and the body shape a bit weird. The snout is a bit rodent-like, but the legs are longer than those of most rodents. And finally, I would come to the conclusion that it's probably an esoteric species of cat. (In my defense, it is a very distant relative of felines, if you insist.)
Artificial neural networks, however, are often dismissed as uninterpretable black boxes. They do not provide rich explanations of their decision process. This is especially true of complex deep neural networks that are composed of hundreds (or thousands) of layers and millions (or billions) of parameters.
During their training phase, deep neural networks review millions of images and their associated labels, and then they mindlessly tune their millions of parameters to the patterns they extract from those images. These tuned parameters then allow them to determine which class a new image belongs to. They don't understand the higher-level concepts I just mentioned (neck, ear, nose, legs, etc.) and only look for statistical consistency among the pixels of an image.
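To make the idea of mindless parameter tuning concrete, here is a minimal sketch of the kind of loop involved. This is my own toy illustration, not code from the paper: a two-parameter logistic classifier on invented 2D data standing in for a deep network on images. The data, learning rate, and iteration count are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for labeled images: 2-feature points,
# class 0 clustered around -1, class 1 around +1.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient of the cross-entropy loss...
    b -= 0.1 * (p - y).mean()                # ...mindlessly nudging the parameters

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y).mean()
print(accuracy)
```

Nothing in the loop "understands" the classes; the parameters simply drift toward whatever values make the predictions match the labels, which is the sense in which the fitting is blind.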
The authors of "Direct Fit to Nature" acknowledge that neural networks, both biological and artificial, can differ considerably in their circuit architecture, learning rules, and objective functions.
"All networks, however, use an iterative optimization process to pursue an objective, given their input or environment, a process we refer to as 'direct fit,'" the researchers write. The term "direct fit" is inspired by the blind fitting process observed in evolution, an elegant but mindless optimization process in which different organisms adapt to their environment through a series of random genetic transformations carried out over a very long period.
"This framework undercuts the assumptions of traditional experimental approaches and makes unexpected contact with long-standing debates in developmental and ecological psychology," the authors write.
Another problem the artificial intelligence community faces is the tradeoff between interpretability and generalization. Scientists and researchers are constantly searching for new techniques and structures that can generalize AI capabilities across broader domains. And experience has shown that, when it comes to artificial neural networks, scale improves generalization. Advances in processing hardware and the availability of large compute resources have enabled researchers to create and train very large neural networks in reasonable timeframes. And these networks have proven to be remarkably better at performing complex tasks such as computer vision and natural language processing.
The problem with artificial neural networks, however, is that the larger they get, the more opaque they become. With their logic spread across millions of parameters, they become much harder to interpret than a simple regression model that assigns a single coefficient to each feature. Simplifying the structure of artificial neural networks (e.g., reducing the number of layers or variables) will make it easier to interpret how they map different input features to their outcomes. But simpler models are also less capable of dealing with the complex and messy data found in nature.
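The contrast can be made concrete with a toy example of my own (the feature names and sizes are invented for illustration): a linear regression yields one readable coefficient per feature, while even one small hidden layer multiplies the parameter count far past what a human can inspect.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))  # three hypothetical features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

# A regression model: one coefficient per feature, directly interpretable.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["length", "weight", "age"], coef.round(1))))

# A tiny one-hidden-layer network on the same three inputs already has
# 3*64 (input weights) + 64 (hidden biases) + 64 (output weights) + 1 (output bias)
# parameters, none of which maps to a feature on its own.
hidden = 64
n_params = 3 * hidden + hidden + hidden + 1
print(n_params)  # 321
```

And 321 parameters is still microscopic next to the millions or billions in the deep networks the article describes; the opacity only grows with scale.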
"We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world," the authors of "Direct Fit to Nature" write.
Say you want to create an AI system that detects chairs in images and videos. Ideally, you would provide the algorithm with a few images of chairs, and it would be able to detect all types of normal as well as wacky and funky ones.
Are these chairs?
This is one of the long-sought goals of artificial intelligence: creating models that can extrapolate well. This means that, given a few examples of a problem domain, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before.
When dealing with simple (mostly artificial) problem domains, it might be possible to reach this level of extrapolation by tuning a deep neural network to a small set of training data. For instance, such levels of generalization might be achievable in domains with limited features, such as sales forecasting and inventory management. (But as we've seen in these pages, even these simple AI models might fall apart when a fundamental change comes to their environment.)
But when it comes to messy and unstructured data such as images and text, small-data approaches tend to fail. In images, every pixel effectively becomes a variable, so analyzing a set of 100×100-pixel images becomes a problem with 10,000 dimensions, each having thousands or millions of possibilities.
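The arithmetic is easy to check: flattening even a small grayscale image hands the model one input variable per pixel (a generic NumPy sketch, not tied to any particular model).

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(100, 100))  # one 100x100 grayscale "photo"
features = image.reshape(-1)                   # one input variable per pixel
print(features.shape)  # (10000,)
```

A color image triples that count, and higher resolutions grow it quadratically, which is why a handful of examples cannot pin down such a space.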
"In cases in which there are complex nonlinearities and interactions among variables at different parts of the parameter space, extrapolation from such limited data is bound to fail," the Princeton researchers write.
The human brain, many cognitive scientists believe, can rely on implicit generative rules without being exposed to rich data from the environment. Artificial neural networks, the popular belief goes, have no such capability. This is the belief that the authors of "Direct Fit to Nature" challenge.
"Dense sampling of the problem space can flip the problem of prediction on its head, turning an extrapolation-based problem into an interpolation-based problem," the researchers note.
In essence, with enough samples, you will be able to capture a large enough area of the problem domain. This makes it possible to interpolate between samples with simple computations without the need to extract abstract rules to predict the outcome of situations that fall outside the domain of the training examples.
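A small numerical sketch (my own, under the assumption that a sine curve stands in for the "problem domain") shows the flip the researchers describe: densely sample a nonlinear function, and simple neighbor-based interpolation predicts accurately inside the sampled zone without ever extracting the underlying rule, while the same mindless procedure fails outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Densely sample a nonlinear "world" on [0, 2*pi]: this is the interpolation zone.
x_train = np.sort(rng.uniform(0.0, 2 * np.pi, 5000))
y_train = np.sin(x_train)

# Inside the zone, predictions come from neighboring samples; no abstract
# rule ("this is a sine wave") is ever extracted.
x_inside = np.array([1.0, 2.5, 4.0])
err_inside = np.abs(np.interp(x_inside, x_train, y_train) - np.sin(x_inside)).max()

# Outside the zone, the same procedure breaks down
# (np.interp just clamps to the boundary value).
x_outside = np.array([8.0, 10.0])
err_outside = np.abs(np.interp(x_outside, x_train, y_train) - np.sin(x_outside)).max()

print(err_inside < 1e-3, err_outside > 0.5)  # True True
```

With enough samples, "prediction" reduces to filling in the gaps between known points, which is exactly the interpolation-based regime the paper argues both ANNs and brains operate in.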
"When the data structure is complex and multidimensional, a mindless direct-fit model, capable of interpolation-based prediction within a real-world parameter space, is preferable to a traditional ideal-fit explicit model that fails to explain much variance in the data," the authors of "Direct Fit to Nature" write.
Extrapolation (left) tries to extract rules from big data and apply them to the entire problem space. Interpolation (right) relies on rich sampling of the problem space to calculate the spaces between samples.
In tandem with advances in computing hardware, the availability of very large data sets has enabled the creation of direct-fit artificial neural networks in the past decade. The internet is rich with all sorts of data from various domains. Scientists create vast deep learning data sets from Wikipedia, social media networks, image repositories, and more. The advent of the internet of things (IoT) has also enabled rich sampling from physical environments (roads, buildings, weather, bodies, etc.).
In many types of applications (e.g., supervised learning algorithms), the gathered data still requires a lot of manual labor to associate each sample with its outcome. Nonetheless, the availability of big data has made it possible to apply the direct-fit approach to complex domains that can't be represented with a few samples and general rules.
One argument against this approach is the long-tail problem, often described as edge cases. For instance, in image classification, one of the outstanding problems is that popular training data sets such as ImageNet provide millions of pictures of different types of objects. But since most of the pictures were taken under ideal lighting conditions and from conventional angles, deep neural networks trained on these data sets fail to recognize those objects in rare positions.
ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)
"The long tail does not pertain to new examples per se, but to low-frequency or odd examples (e.g. a strange view of a chair, or a chair shaped like an unrelated object) or to familiar examples appearing in a new context (like driving in a blizzard or with a flat tire)," Uri Hasson, professor at the Department of Psychology and Princeton Neuroscience Institute, and Sam Nastase, postdoctoral researcher at the Princeton Neuroscience Institute, co-authors of the paper, told TechTalks in written comments. "Note that biological organisms, including people, like ANNs, are bad at extrapolating to contexts they never experienced; e.g. many people fail spectacularly when driving in snow for the first time."
Many developers try to make their deep learning models more robust by blindly adding more samples to the training data set, hoping to cover all possible situations. This usually doesn't solve the problem, because the sampling techniques don't widen the distribution of the data set, and edge cases remain uncovered by the easily collected samples. The solution, Hasson and Nastase argue, is to expand the interpolation zone by providing "a more ecological, embodied sampling regime" for artificial neural networks that currently perform poorly in the tail of the distribution.
"For example, many of the oddities in classical human visual psychophysics are trivially resolved by allowing the observer to simply move and actively sample the environment (something essentially all biological organisms do)," they say. "That is, the long-tail phenomenon is in part a sampling deficiency. However, the solution isn't necessarily just more samples (which will in large part come from the body of the distribution), but will instead require the more sophisticated sampling observed in biological organisms (e.g. novelty seeking)."
This observation is in line with recent research that shows employing a more diverse sampling methodology can in fact improve the performance of computer vision systems.
In fact, the need for sampling from the long tail also applies to the human brain. For instance, consider one of the oft-mentioned criticisms against self-driving cars which posits that their abilities are limited to the environments theyve been trained in.
"Even the most experienced drivers can find themselves in a new context where they are not sure how to act. The point is to not train a foolproof car, but a self-driving car that can drive, like humans, in 99 percent of the contexts. Given the diversity of driving contexts, this is not easy, but perhaps doable," Hasson and Nastase say. "We often overestimate the generalization capacity of biological neural networks, including humans. But most biological neural networks are fairly brittle; consider for example that raising ocean temperatures 2 degrees will wreak havoc on entire ecosystems."
Many scientists criticize AI systems that rely on very large neural networks by pointing out that the human brain is very resource-efficient. The brain is a three-pound mass of matter that uses a little over 10 watts of power. Deep neural networks, however, often require very large servers that can consume megawatts of power.
But hardware aside, comparing the components of the brain to artificial neural networks paints a different picture. The largest deep neural networks are composed of a few billion parameters. The human brain, in contrast, contains approximately 1,000 trillion synapses, the biological equivalent of ANN parameters. Moreover, the brain is a highly parallel system, which makes it very hard to compare its functionality to that of ANNs.
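For a rough sense of scale (both figures are the article's order-of-magnitude estimates, not precise counts):

```python
ann_parameters = 5e9   # "a few billion" parameters in the largest ANNs of the time
brain_synapses = 1e15  # ~1,000 trillion synapses
print(brain_synapses / ann_parameters)  # 200000.0
```

On these estimates, the brain has on the order of a hundred thousand times more tunable connections than the largest artificial networks it is being compared to.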
"Although the brain is certainly subject to wiring and metabolic constraints, we should not commit to an argument for scarcity of computational resources as long as we poorly understand the computational machinery in question," the Princeton researchers write in their paper.
Another argument is that, in contrast to ANNs, the biological neural network of the human brain has very poor input mechanisms and doesn't have the capacity to ingest and process very large amounts of data. This, the argument goes, makes it impossible for human brains to learn new tasks without extracting the underlying rules.
"To be fair, calculating the input entering the brain is complicated. But we often underestimate the huge amount of data that we process. For example, we may be exposed to thousands of visual exemplars of many daily categories a year, and each category may be sampled at thousands of views in each encounter, resulting in a rich training set for the visual system. Similarly, with regard to language, studies estimate that a child is exposed to several million words per year," the authors of the paper write.
One thing that can't be denied, however, is that humans do in fact extract rules from their environment and develop abstract thoughts and concepts that they use to process and analyze new information. This complex symbol manipulation enables humans to compare and draw analogies between different tasks and perform efficient transfer learning. Understanding and applying causality also remain among the unique features of the human brain.
"It is certainly the case that humans can learn abstract rules and extrapolate to new contexts in a way that exceeds modern ANNs. Calculus is perhaps the best example of learning to apply rules across different contexts. Discovering natural laws in physics is another example, where you learn a very general rule from a set of limited observations," Hasson and Nastase say.
These are the kind of capabilities that emerge not from the activations and interactions of a single neural network but are the result of the accumulated knowledge across many minds and generations.
This is one area where direct-fit models fall short, Hasson and Nastase acknowledge. In cognitive science, this is described as System 1 and System 2 thinking. System 1 refers to the kind of tasks that can be learned by rote, such as recognizing faces, walking, running, and driving. You can perform most of these capabilities subconsciously, while also performing some other task (e.g., walking and talking to someone else at the same time, or driving and listening to the radio). System 2, however, requires concentration and conscious thinking (can you solve a differential equation while jogging?).
"In the paper, we distinguish fast and automatic System 1 capacities from the slow and deliberate cognitive functions," Hasson and Nastase say. "While direct fit allows the brain to be competent while being blind to the solution it learned (similar to all evolved functional solutions in biology), and while it explains the ability of System 1 to learn to perceive and act across many contexts, it still doesn't fully explain a subset of human functions attributed to System 2, which seems to gain some explicit understanding of the underlying structure of the world."
So what do we need to develop AI algorithms that have System 2 capabilities? This is one area where there's much debate in the research community. Some scientists, including deep learning pioneer Yoshua Bengio, believe that pure neural network-based systems will eventually lead to System 2-level AI. New research in the field shows that advanced neural network structures manifest the kind of symbol-manipulation capabilities that were previously thought to be off-limits for deep learning.
The authors of "Direct Fit to Nature" support the pure neural network-based approach. In their paper, they write: "Although the human mind inspires us to touch the stars, it is grounded in the mindless billions of direct-fit parameters of System 1. Therefore, direct-fit interpolation is not the end goal but rather the starting point for understanding the architecture of higher-order cognition. There is no other substrate from which System 2 could arise."
An alternative view is the creation of hybrid systems that incorporate classic symbolic AI with neural networks. The area has drawn much attention in the past year, and there are several projects that show that rule-based AI and neural networks can complement each other to create systems that are stronger than the sum of their parts.
"Although non-neural symbolic computing, in the vein of von Neumann's model of a control unit and arithmetic logic units, is useful in its own right and may be relevant at some level of description, the human System 2 is a product of biological evolution and emerges from neural networks," Hasson and Nastase wrote in their comments to TechTalks.
In their paper, Hasson and Nastase expand on some of the possible components that might develop higher capabilities for neural networks. One interesting suggestion is providing a physical body for neural networks to experience and explore the world like other living beings.
"Integrating a network into a body that allows it to interact with objects in the world is necessary for facilitating learning in new environments," Hasson and Nastase said. "Asking a language model to learn the meaning of words from the adjacent words in text corpora exposes the network to a highly restrictive and narrow context. If the network has a body and can interact with objects and people in a way that relates to the words, it is likely to get a better sense of the meaning of words in context. Counterintuitively, imposing these sorts of limitations (e.g. a body) on a neural network can force the neural network to learn more useful representations."
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
Read the original post:
Research: Artificial neural networks are more similar to the brain than we thought - TNW