Can mysticism solve the mind-body problem? – The Stute
I just spent a week at a symposium on the mind-body problem, the deepest of all mysteries. The mind-body problem, which encompasses consciousness, free will, and the meaning of life, concerns who we really are. Are we matter, which just happens to give rise to mind? Or could mind be the basis of reality, as many sages have insisted?
The weeklong powwow, called "Physics, Experience and Metaphysics," took place at Esalen Institute, a retreat center in Big Sur, California. Fifteen men and women representing physics, psychology, philosophy, religious studies, and other fields sat in a room overlooking the Pacific and swapped mind-body ideas.
What made the conference unusual, at least for me, was the emphasis on what were called "exceptional experiences," including mystical visions. During a mystical experience, as defined by psychologist William James, you think you know that you are encountering ultimate reality. The meeting's abstract asked whether exceptional experiences can move us forward toward answering the deep unresolved questions of mind and matter and their place in nature.
My colleagues sought to account for mystical visions with a variety of frameworks, involving quantum mechanics, information theory, Hinduism, Buddhism, Jungian psychology, or combinations of the above. These perspectives diverge from conventional materialism, which insists that matter is primary. I kept finding myself playing the role of skeptic, pushing back against my colleagues' assertions. Here are points I made, or tried to make, at the meeting.
The Mystical Diversity Problem. Many scholars have tried constructing metaphysical systems out of mystical visions. They often focus on insights that share certain features, notably a sense of oneness with all things, plus feelings of love and bliss. Those fortunate enough to have these experiences often come away convinced that a loving God or spirit underlies everything, and there is no death, only transformation.
That's a consoling thought. But as William James pointed out, many mystical visions are melancholic or diabolical. You feel profound alienation and emptiness, accompanied by feelings of horror and despair. The immense diversity of mystical experiences thwarts efforts to construct a mystical metaphysics.
The Neo-Geocentrism Problem. Mystics often insist that mind, not matter, is the fundamental stuff of reality, or that mind and matter are two aspects of an underlying ur-stuff. This non-materialist outlook, I think it's fair to say, was the majority view at Esalen, and it has become increasingly popular among prominent mind-body theorists, such as neuroscientist Christof Koch and philosopher David Chalmers.
I call this view neo-geocentrism because it revives the ancient assumption that the universe revolves around us. Geocentrism reflected our innate narcissism and anthropomorphism, and so do modern theories that make mind (as far as we know, a uniquely terrestrial phenomenon) central to the cosmos. The shift away from geocentrism centuries ago was one of humanity's greatest triumphs, and neo-geocentrism, I fear, represents a step back toward darkness.
The Ineffability Problem. An irony lurks within efforts to create a mystical metaphysics. Mystics often warn that their experiences are impossible to describe, or "ineffable," as William James put it. So there is something contradictory about trying to construct an explanatory system involving mysticism. My mystical experiences have reinforced my conviction, spelled out in my book Mind-Body Problems, that there can be no final, definitive solution to the question of who we really are.
The Beauty Problem. Esalen, which is breathtakingly beautiful, made me recall a comment by the physicist and atheist Steven Weinberg: "I have to admit that sometimes nature seems more beautiful than strictly necessary." Our world is filled with so much pain and injustice that I cannot believe in a loving God. This is the problem of evil. But the flip side of the problem of evil is the problem of beauty. Beauty, love and friendship, and our hard-won, halting moral progress, make it hard for me to believe that life is just an accident.
So to answer the question posed by my headline: No, mysticism cannot solve the mind-body problem, the mystery of our existence. Quite the contrary. Mysticism rubs our face in the mystery. I don't believe in little miracles, like resurrections or angelic visitations, but I believe in the Big Miracle within which we dwell every moment of our lives, and which no theory or theology will ever explain away.
John Horgan directs the Stevens Center for Science Writings, which is part of the College of Arts & Letters. This column is adapted from one published on his Scientific American blog, Cross-check.
How quieting the mind can benefit your life – Bangor Daily News
Linda Coan O'Kresik | BDN
Angela Fileccia leads a class at Om Land Yoga in the Brewer studio.
"Breathing in, I know I am breathing in. Breathing out, I know that I am breathing out." – Thich Nhat Hanh
In the past two decades, the practice of mindfulness and meditation has gone from being a spiritual practice done at ashrams in India, where it originated thousands of years ago, to the mainstream, as celebrities tout its benefits. Headlines boast about mindfulness' effectiveness at reducing stress, improving sleep and even helping with weight loss. Meditation retreats in beautiful, exotic locations promise a chance to reset and dive deep, while phone applications make it easy to zen out on the go.
But what is meditation really? What happens when the mind is quiet, and instead of chatter, it focuses on the simple yet profound act of breathing? In and out.
The Merriam-Webster dictionary defines mindfulness as "the practice of maintaining a nonjudgmental state of heightened or complete awareness of one's thoughts, emotions or experiences on a moment-to-moment basis."
Simply put, it is being aware of the present moment and accepting it for what it is.
"Informal mindfulness walking entails noticing what is happening as you move from one place to another," said Rebecca MacAulay, assistant professor of psychology at the University of Maine in Orono.
Walking from the car to the grocery store, you may become aware of the sky and notice: is it gray? Is it blue?
"Then perhaps you notice the sensation of your legs, your torso, your muscles moving as you're moving forward," MacAulay said. "You're noticing the temperature on your skin, and you're noticing the temptation to perhaps judge [that temperature]."
It's quieting that judgment that's key.
"Our brains are constantly categorizing things; mindfulness stops the categorizing and helps us accept the moment for what it is," MacAulay said. "You're not trying to change it, and you're not trying to influence how you feel, you're simply aware."
Not to mention, with the invention and ever-increasing popularity of social media platforms, many people of all ages are focused on capturing the perfect photo of an activity rather than experiencing it.
"I think about Snapchat or the selfies, we are missing the important moments in our lives because we're thinking about taking a picture or telling someone about it later," MacAulay said. "But then, you're no longer there. You've left the room, and you're thinking about the future."
Focus or concentration meditation, where the user thinks more about deep breathing than perhaps a mantra or statement, is believed to activate the frontal-lobe circuitry, the area of the brain focused on attention and cognitive control.
The benefit?
"It is improving our ability to not give in; by creating freedom from distraction, you are reinforcing the neurocircuitry that makes us better able to focus," MacAulay said.
That means practicing meditation on a regular basis, even briefly, can make you less likely to curse at the driver who cuts you off or respond negatively to an unplanned change.
"It's removing reactivity," MacAulay said. "In clinical psychology, we work on pressing pause if someone is feeling strong emotions. With mindfulness [practices], they're all encouraging you to be aware and in the moment."
Focused meditation is just one of many types of mindfulness, and each carries different benefits. Just as important, though far less researched, MacAulay said, is knowing that mindfulness doesn't work for everyone.
"One of the things that's probably less talked about and less understood is who doesn't mindfulness work for," she said. "It is becoming a panacea of sorts, and soon mindfulness is going to become the cure-all for everything."
In their book "Altered Traits," authors Daniel Goleman and Richard J. Davidson argue that while there are lasting results to meditation, it is not just about hours spent sitting on a pillow. To receive the long-term benefits, practitioners need to make sure they are seeking master teachers and those well-trained in giving feedback and encouraging non-attachment.
Still, MacAulay says the benefits are undeniable.
Other types of meditation include loving-kindness meditation, which aims to help the practitioner cultivate a sense of love and kindness toward everything.
Body scanning meditation, on the other hand, encourages people to scan their bodies one area at a time looking for tension. Once noticed, meditators then try to release those areas and let go of the pressure they initially felt.
Still other meditation practices, such as mindful walking and many styles of yoga, including Vinyasa and Kundalini, combine physical activity with deep breathing.
"With walking meditations, often you will start walking very slowly; you may even start off by standing, noticing that moment, then taking very slow steps," MacAulay said. "Usually it's done in some form of a circle or back and forth and involves you noticing what it's like to make each movement [required to move forward]. What it's like to lift your foot, what you feel as the thigh muscles engage."
While meditation classes and retreats in Maine abound, MacAulay said in the coming year she and her Ph.D. students will focus their research on making sure marginalized communities in the state have access to the skills needed to bring about more mindfulness.
Older adults, she argues, or those in rural communities or in low socio-economic areas, may not have the resources to attend workshops or training sessions about mindfulness. But its those communities, she said, that could benefit greatly.
"I recently returned from a mindfulness-based stress reduction training, [but] being able to [do] that is a luxury," MacAulay said. "I think what researchers need to look at is how we can get these things out in more rural areas, to older adults, more economically disadvantaged adults; it is extremely needed in stress management."
In the coming months, she and her students will begin working on that very idea.
"If you think about meditation, it can be a bit esoteric; for example, the brain naturally starts wondering, 'Am I doing this right?' It can be really hard to pick up on your own," MacAulay said. "We want to make it more accessible and look at whether we can teach these skills in workshops where we will boil down some of the components and make them more accessible."
No. 1 tip on her list? Just try it.
And then, let go of any pretense that it will be easy.
"When you first try meditation, you may notice your mind straying, and that's normal," MacAulay said. "We spend our lives thinking; our minds want to carry us away because that's how the brain works. With meditation, you're starting to quiet those networks, the self-referential component of the brain that causes us to ruminate; mindfulness can help quiet that. But it takes time, and it takes practice, so don't get discouraged."
Interested in learning more or trying out meditation for yourself? Check out one of these Maine-based classes or practice virtually from anywhere.
Bangor:
The Blue Heron Wellness Center offers drop-in meditation classes as well as other energy/mind-focused workshops. theblueheronwellnesscenter.com
Om Land Yoga offers many different styles of yoga as well as a "Mindfulness of Yoga: Overcoming barriers to joy" class. omlandyoga.com
Unitarian Universalist Society of Bangor offers a mindfulness meditation group that meets regularly during the month. uubangor.org
Midcoast:
The Midcoast Center for Community Health and Wellness offers several mindfulness programs, including Mindfulness-Based Stress Reduction. midcoasthealth.com/wellness/mindfulness
The Dancing Elephant in Rockland offers Buddhist and mindfulness teachings as well as a mindfulness eating group and a meditation and knitting group. rocklandyoga.com
The Haven in Camden offers meditation retreats and courses, including Hemi-Sync, a binaural technology developed by Robert Monroe, who founded the nonprofit Monroe Institute. gohaven.org
Northern Light Zen Center in Topsham offers meditation practice, training workshops and Zen retreats led by Zen Masters and Master Dharma Teachers of the Kwan Um School of Zen. nlzc.info
Northern Maine:
Araya Wellness offers public, semi-private and private meditation classes in Presque Isle and Mars Hill. arayawellness.com
Portland:
The Portland Zen Meditation Center offers regular Zen meditation classes, as well as community meetings to discuss group issues and individual practice. portlandzencenter.com
Nagaloka Buddhist Center teaches two types of meditation: mindfulness of breathing and metta bhavana. http://www.nagalokabuddhistcenter.org
The Mindfulness Center of Maine in Saco offers workshops, courses and consultations about mindfulness, meditation and personal growth. mindfulnesscenter.org
Vajra Vidya Portland is a Tibetan Buddhist meditation center offering retreats, classes and weekly study groups for those just starting with meditation, as well as continuing practitioners. portlandmainebuddhism.org
Open Heart Sangha in the Portland area is a sitting and walking meditation group that follows the teaching of Zen master Thich Nhat Hanh. openheartsangha.org
Online:
Several free and paid apps are available for those interested in practicing meditation anytime, anywhere, including Calm, Headspace, buddhify, Simple Habit, Insight Timer and 10% Happier. Many apps are geared toward meditation skeptics, those on the go, or anyone looking to start or continue a mindfulness practice.
Review: ‘The Lodge’ is a slow-burn attack on the mind (Includes first-hand account) – Digital Journal
Divorce can already be complicated, but even more so when children are involved. Things can become all the more difficult if one half of the former couple is involved in a new relationship, especially if it's grown serious and steps are being taken toward making the arrangement more permanent. Hurt feelings are almost inevitable, but in some cases, it's much more than that. Trying to navigate all of these things at once can be challenging, and forcing the situation can be disastrous. In "The Lodge," a father is determined to move on with his new love interest, but his children feel differently.
Richard's wife (Alicia Silverstone) was not coping well with their separation and his subsequent engagement. Therefore, when tragedy befalls their mother, Richard's children, Aidan (Jaeden Martell) and Mia (Lia McHugh), not only blame him but also his new girlfriend, Grace (Riley Keough). After several months, Richard (Richard Armitage) insists they must move forward and plans a family getaway over Christmas break for all four of them. Grace is excited to get to know the kids, but the feeling is not mutual. Returning to work for a few days, Richard leaves them in Grace's care. But an unexpected snowstorm results in possible cabin fever as Grace slowly unravels, leaving everyone at the mercy of some sinister ghost from her past.
Going to a cabin in the woods, particularly in winter, almost never ends well in movies. A blizzard, mudslide or sheer distance from civilization can completely cut off vacationers from supplies and assistance. Secluded from the rest of the world and reliant only on each other for survival, one uninvited or unstable guest can turn the whole trip into a nightmare. These retreats do not always turn fatal, but sometimes death isn't the scariest outcome. Trapped with little hope of immediate rescue, one should always remember: don't poke the bear.
None of the characters are especially innocent in this narrative. Richard disregards his children's feelings and thrusts them into an undesirable situation far before they're ready. Grace similarly expects too much too soon, while also not ensuring her medication is safely stowed. The kids certainly make it worse in their adolescent, naïve desire to alienate Grace and make her feel unwelcome. The result of their mistakes is horrific and completely preventable. But all must live with the consequences of their actions, however long that may be.
Writers/directors Severin Fiala and Veronika Franz previously disturbed audiences with their debut feature, "Goodnight Mommy." Macabre torture gives way to emotional turmoil in this bleak, modern-day gothic film. The thriller keeps viewers on the edge of their seat as their sympathy jumps from one character to the next. The narrative progresses slowly, allowing the unrest to settle over the picture and dig its claws deep into everyone's psyche. While the psychological war being waged inside the cabin is harrowing thanks to terrific performances by the actors, it feels like the sense of isolation could've been heightened or portrayed better. While it's obvious they're trapped and alone, the unyielding weather keeping them imprisoned together in the house, and the fear it should induce, doesn't ever really get its due. The environment is a great and forceful personality that should be utilized in a story such as this rather than just pointed to as needed. Nonetheless, this is an intense family drama with a fittingly dark conclusion.
Directors: Severin Fiala and Veronika Franz
Starring: Richard Armitage, Riley Keough and Jaeden Martell
Is The Recent Criticism For OpenAI by MIT Technology Review Unfair? – Analytics India Magazine
OpenAI had earned plenty of plaudits for its transparent and collaborative culture, but the research organization received a drubbing in MIT Technology Review for allegedly breaching the principles it was founded upon. The caustic article exposed a misalignment between the startup's magnanimous mission and how it operates behind closed doors.
Although some doubts were raised about its mission at the time of Microsoft's billion-dollar investment in it last year (a view that was expressed by Elon Musk, who incidentally was part of the founding team), the latest revelations have sent shockwaves through the tech industry.
Speaking anonymously, some employees said they felt that the energy and sense of purpose the startup began with had dissipated. Instead, their accounts suggest that the San Francisco-based startup is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
Not only was OpenAI's culture called into question; the article also implied that it may be capitalizing on panic around the existential risk from AI. The calculated release of some of its studies seemed to follow a pattern that suggested as much.
As the dust settles on the hype surrounding this story, it behoves us to reflect on these revelations, albeit through a broader lens. The startups approach, though not without error, seemed unique, especially when seen from the vantage point of big tech companies who were just venturing into the world of AI.
OpenAI has been conducting research spanning a wide range of disciplines that pursues novel ways of looking at existing problems. But unlike bigger tech companies who keep their researchers close to them, OpenAI is on a mission to collaborate with other research outfits by making its findings open to the public.
The startup, which aims to push AI as far as it will go, strongly believes that this cannot happen when researchers work in silos. According to it, if more people get together to reach a collective goal, the end result will trounce anything that could have been accomplished by a single person working in secret. It currently has 89 repositories on GitHub, opening itself to the software development website's 40 million users. These projects offer a chance to explore research aimed at the future, which would eventually be handed over to anyone who wants it, for free.
Such a largely open and unfettered research process is likely to accelerate the progress of AI, taking the world deeper into what it once considered science fiction. In fact, in just four years, the startup has grown to become one of the leading AI research labs in the world today as it continues to democratize AI research.
While this has spawned a slew of experimental research projects, the startups long-range goal has been to create an artificial general intelligence or AGI. This is a machine with the learning and reasoning abilities of a human; a technology that augments rather than replaces human capabilities.
The idea is that even though existing AI systems have proven superior to human intelligence at specific tasks, the applications of narrow AI, which gave us breakthrough technologies like digital voice assistants and facial recognition systems, are still limited. Projected to advance the continuum of narrow AI, AGI is seen as the next frontier in technology.
In theory, AGI would be able to make better decisions than humans. According to OpenAI, it can impact modern industries, including healthcare, education, and manufacturing, and address some of the most pressing issues the world is facing today.
While naysayers may question the feasibility of such an ambitious mission, AGI has created a new standard for AI and its development could mean that we may soon arrive at solutions to seemingly intractable problems.
This has pushed the notion of openness further and has driven top tech companies to share a lot of their advanced AI research and collaborate on projects to build a secure AI.
For instance, Google open-sourced its AI engine TensorFlow in 2015. This allowed experimentation with machine learning (ML) on decentralised data. It also launched a new cloud-based AI Platform that allowed users to collaborate on ML projects. Furthermore, it acquired a startup called DeepMind, which is much like OpenAI in its pursuit to develop advanced AI.
This has also led to a race to set up research facilities focused on advancing AI, and Facebook joined in with its investment in a blue-sky AI lab. Furthermore, Microsoft's co-founder Paul Allen also established the non-profit Allen Institute for Artificial Intelligence to conduct high-impact AI research.
With the objective of promoting and developing AI to drive many tasks of the future, such studies have already made significant headway. Soon, they could help machines understand natural language and give them the power to learn organically, eventually helping them acquire the ability to think like a human.
In such a scenario, funding, or the lack of it, should not curb efforts to democratize AI. According to reports, DeepMind has been running at massive losses, to the tune of $570 million in 2019, up from $154 million three years earlier. However, the deep coffers of Alphabet, which owns DeepMind, would ensure that its cogs are well-oiled.
The same could not have been said about OpenAI, which, having started off as a non-profit venture, transitioned into a for-profit company to secure additional funding. Since then, it has attracted an impressive list of Silicon Valley investors, including LinkedIn co-founder Reid Hoffman, PayPal co-founder Peter Thiel, founding partner of Y Combinator Jessica Livingston, former CTO of Stripe Greg Brockman, and even former CEO of Infosys Vishal Sikka.
What is more, having started with nine researchers, OpenAI now has an eclectic mix of the best researchers of our time, including Ilya Sutskever, an expert on ML who previously worked on Google Brain. Furthermore, this collaborative effort has also attracted a group of young, talented AI researchers from universities like Stanford, Berkeley, University of California, and New York University.
This cadre of bold thinkers and dreamers who probably make up the smartest people in most rooms will likely foster innovation that promises to transform the world in the years to come.
The messy, secretive reality behind OpenAIs bid to save the world – MIT Technology Review
Every year, OpenAI's employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It's mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.
In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet's DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.
Above all, it is lionized for its mission. Its goal is to be the first to create AGI: a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.
The implication is that AGI could easily run amok if the technology's development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.
OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to "build value for everyone rather than shareholders." Its charter, a document so sacred that employees' pay is tied to how well they adhere to it, further declares that OpenAI's "primary fiduciary duty is to humanity." Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
But three days at OpenAI's office, and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field, suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation "Can machines think?" Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.
"It is one of the most fundamental questions of all intellectual history, right?" says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. "It's like, do we understand the origin of the universe? Do we understand matter?"
The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It's not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.
But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries, if indeed it's possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s, and again in the late '80s and early '90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. "The field felt like a backwater," says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn't the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.
The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAIs CEO.)
But more than anything, OpenAI's nonprofit status made a statement. "It'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest," the announcement said. "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world." Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.
In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. "It was a beacon of hope," says Chip Huyen, a machine learning expert who has closely followed the lab's journey.
At the intersection of 18th and Folsom Streets in San Francisco, OpenAI's office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters "PIONEER BUILDING," the remnants of its bygone owner, the Pioneer Truck Factory, wrap around the corner in faded red paint.
Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space Im restricted to during my visit. Im forbidden to visit the second and third floors, which house everyones desks, several robots, and pretty much everything interesting. When its time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. "We've never given someone so much access before," he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.
Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a "focused, quiet childhood." He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.
Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It's easy to appreciate his charisma as a leader. Recounting memorable passages from the books he's read, he zeroes in on the Valley's favorite narrative, America's race to the moon. ("One story I really love is the story of the janitor," he says, referencing a famous yet probably apocryphal tale. "Kennedy goes up to him and asks him, 'What are you doing?' and he says, 'Oh, I'm helping put a man on the moon!'") There's also the transcontinental railroad ("It was actually the last megaproject done entirely by hand, a project of immense scale that was totally risky") and Thomas Edison's incandescent lightbulb ("A committee of distinguished experts said 'It's never gonna work,' and one year later he shipped").
Brockman is aware of the gamble OpenAI has taken on, and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It's the price of daring greatly.
Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small, formed through a tight web of connections, and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.
Musk played no small part in building a collective mythology. "The way he presented it to me was 'Look, I get it. AGI might be far away, but what if it's not?'" recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. "'What if it's even just a 1% or 0.1% chance that it's happening in the next five to 10 years? Shouldn't we think about it very carefully?' That resonated with me," he says.
But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn't clear the team itself knew either. "Our goal right now is to do the best thing there is to do," Brockman said. "It's a little vague."
Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI's members. After two years, at Brockman's request, Daniela joined too. "Imagine, we started with nothing," Brockman says. "We just had this ideal that we wanted AGI to go well."
By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that "in order to stay relevant," Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money, while somehow also staying true to the mission.
Unbeknownst to the public (and most employees), it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab's core values but subtly shifted the language to reflect the new reality. Alongside its commitment to avoid enabling uses of AI or AGI that "harm humanity or unduly concentrate power," it also stressed the need for resources. "We anticipate needing to marshal substantial resources to fulfill our mission," it said, "but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
"We spent a long time internally iterating with employees to get the whole company bought into a set of principles," Brockman says. "Things that had to stay invariant even if we changed our structure."
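To get a feel for what a 3.4-month doubling time implies, here is a back-of-the-envelope sketch (the doubling figure is the one cited in the paragraph above; the exact multiples are simple arithmetic, not reported by OpenAI):

```python
# Growth implied by compute doubling every 3.4 months.
DOUBLING_MONTHS = 3.4

def compute_growth(months: float) -> float:
    """Factor by which compute grows over `months`, given the doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

# Roughly 11.5x more compute after one year, and over 100,000x after five,
# which is why a nonprofit budget could not keep pace.
print(f"1 year:  {compute_growth(12):.1f}x")
print(f"5 years: {compute_growth(60):,.0f}x")
```

At that rate, matching the frontier for even a few years requires capital growing by orders of magnitude, which is the "exponential ramp-up" Brockman describes.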
That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a "capped profit" arm: a for-profit with a 100-fold limit on investors' returns, albeit overseen by a board that's part of a nonprofit entity. Shortly after, it announced Microsoft's billion-dollar investment (though it didn't reveal that this was split between cash and credits to Azure, Microsoft's cloud computing platform).
Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: "Early investors in Google have received a roughly 20x return on their capital," they wrote. "Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google ... but you don't want to unduly concentrate power? How will this work? What exactly is power, if not the concentration of resources?"
The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. "Can I trust OpenAI?" one question asked. "Yes," began the answer, followed by a paragraph of explanation.
The charter is the backbone of OpenAI. It serves as the springboard for all the lab's strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company's existence. ("By the way," he clarifies halfway through one recitation, "I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It's not like I was reading this before the meeting.")
How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? "As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren't imaginable today." How will you structure yourself to evenly distribute AGI? "I think a utility is the best analogy for the vision that we have. But again, it's all subject to the charter." How do you compete to reach AGI first without compromising safety? "I think there is absolutely this important balancing act, and our best shot at that is what's in the charter."
For Brockman, rigid adherence to the document is what makes OpenAI's structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn't mind; in fact, he agrees with the mentality. It's the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.
In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of effective altruism. They crack jokes using machine-learning terminology to describe their lives: "What is your life a function of?" "What are you optimizing for?" "Everything is basically a minmax function." To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)
But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee's absorption of the mission. Alongside columns like "engineering expertise" and "research direction," in a spreadsheet tab titled "Unified Technical Ladder," the last column outlines the culture-related expectations for every level. Level 3: "You understand and internalize the OpenAI charter." Level 5: "You ensure all projects you and your teammates work on are consistent with the charter." Level 7: "You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same."
The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.
But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.
The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? "It seemed like OpenAI was trying to capitalize off of panic around AI," says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
By May, OpenAI had revised its stance and announced plans for a "staged release." Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm's potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, "no strong evidence of misuse so far."
Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn't been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that safety and security concerns would gradually oblige the lab to "reduce our traditional publishing in the future."
This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. "I think that is definitely part of the success-story framing," said Miles Brundage, a policy research scientist, highlighting something in a Google doc. "The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial."
But OpenAI's media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab's big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm's length.
This hasn't stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind's AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI's achievement. I was not compensated for this.)
And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab's influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: "In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI," says a line under the "Policy" section. "Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message." Another, under "Strategy," reads, "Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to."
There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
But little did people know this wasn't the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.
There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it's just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won't be enough.
Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.
Brockman and Sutskever deny that this is their sole strategy, but the lab's tightly guarded research suggests otherwise. A team called "Foresight" runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab's all-in, compute-driven strategy is the best approach.
For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn't know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was "sniffing around."
In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. "We expect that safety and security concerns will reduce our traditional publishing in the future," the section states, "while increasing the importance of sharing safety, policy, and standards research." The spokesperson also added: "Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild."
One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren't allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.
The man driving OpenAI's strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.
Amodei divides the lab's strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor's "portfolio of bets." Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.
As in an investor's portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it's important to keep an open mind. "Pure language is a direction that the field and even some of us were somewhat skeptical of," he says. "But now it's like, 'Wow, this is really promising.'"
Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI's latest top-secret project has supposedly already begun.
The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2's sentence constructions or a robot's movements.
Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. "At some point we're going to build AGI, and by that time I want to feel good about these systems operating in the world," he says. "Anything where I don't currently feel good, I create and recruit a team to focus on that thing."
For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.
"We're in the awkward position of: we don't know what AGI looks like," he says. "We don't know when it's going to happen." Then, with careful self-awareness, he adds: "The mind of any given person is limited. The best thing I've found is hiring other safety researchers who often have visions which are different than the natural thing I might've thought of. I want that kind of variation and diversity because that's the only way that you catch everything."
The thing is, OpenAI actually has little variation and diversity, a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk's startup working on computer-brain interfaces, shares the same building and dining room.
According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. "There are also two women on the executive team and the leadership team is 30% women," she said, though she didn't specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)
In fairness, this lack of diversity is typical in AI. Last year a report from the New York-based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. "There is definitely still a lot of work to be done across academia and industry," OpenAI's spokesperson said. "Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program."
Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York-based company, the city just had too little diversity.
But if diversity is a problem for the AI industry in general, it's something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.
Nor is it at all clear just how OpenAI plans to "distribute the benefits of AGI to all of humanity," as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited "significant unresolved issues" regarding the way in which it would be implemented.) "This is my biggest problem with OpenAI," says a former employee, who spoke on condition of anonymity.
"They are using sophisticated technical practices to try to answer social problems with AI," echoes Britt Paris of Rutgers. "It seems like they don't really have the capabilities to actually understand the social. They just understand that that's a sort of a lucrative place to be positioning themselves right now."
Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. "How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need," he says. "I don't think that that strategy is likely to succeed."
The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to "make sure that we are understanding the ramifications."
Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn't functionally change OpenAI's approach to research. Microsoft was well aligned with the lab's values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.
For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didnt even know what promises, if any, had been made to Microsoft.
But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman's message is clear: OpenAI needs to make money in order to do research, not the other way around.
This is a hard but necessary trade-off, the leadership has said, one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.
But the truth is that OpenAI faces this trade-off not only because it's not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy, not because it's seen as the only way to AGI, but because it seems like the fastest.
Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there's still time for it to change.
Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn't omit from this profile. "I guess in my opinion, there's problems," she begins hesitantly. "Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out."
"But to me, it feels like they are doing something a little bit right," she says. "I got a sense that the folks there are earnestly trying."
Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn't think it was possible to "bake ethics in" from the very beginning when developing AI, he intended it to mean that ethical questions couldn't be solved from the beginning, not that they couldn't be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not on a farm, but "on a hobby farm." Brockman considers this distinction important.
In addition, we have clarified that while OpenAI did indeed "shed its nonprofit status," a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We've also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).
The messy, secretive reality behind OpenAIs bid to save the world - MIT Technology Review
The Mind-Altering Power of Deep Animal Connection – Sierra Magazine
Excerpted from Our Wild Calling by Richard Louv, © Richard Louv. Reprinted by permission of Algonquin Books. All Rights Reserved.
One morning Lisa Donahue walked into her dining room and saw her six-year-old son, Aidan, and their large retriever, Jack, stretched out together on the dining room carpet. Both were facing away from Donahue. The boy was stroking the dog's side. Then she heard her son say quietly, matter-of-factly, "Mommy, I don't have a heart anymore."
Startled, she asked her son what he meant.
"My heart is in Jack."
She watched them for a while, in the silence and peace.
This permeability of the heart (or soul or spirit or neurological connection) occurs naturally when we're very young. Some people continue to experience it throughout their life, though they may have no words to describe it. They experience it with their companion animals and, if receptive and given a chance, with wild animals, too.
Each animal we encounter has the potential to become part of us or part of who we could become. If we meet them halfway.
Indigenous traditions are fully accustomed to this approach to physical and spiritual existence. The American transcendentalists of the 19th century also saw the divine in nature. That movement's leader, Ralph Waldo Emerson, wrote of "the great nature in which we rest, as the earth lies in the soft arms of the atmosphere; that Unity, that Over-soul, within which every man's particular being is contained and made one with all other; that common heart."
More recently, nature essayist Barry Lopez, in "A Literature of Place," wrote, "If you're intimate with a place, a place with whose history you're familiar, and you establish an ethical conversation with it, the implication that follows is this: the place knows you're there. It feels you. You will not be forgotten, cut off, abandoned. Our attachment to the natural world is a fundamental human defense against loneliness." Lopez was primarily describing the ways land shapes our inner landscape. Animals, wild and domestic, also do this.
*
We live in fragile worlds. Two are familiar. The first world is the outer habitat of land, air, water, and flesh, the one that supports biological needs of humans and other animals. The second world is our highly individualized and private inner life.
Then there is a mysterious third world, the shared habitat of the heart. This is the deep connection between a person and another animal. It is the permeability of empathy. It is the connection that extends from within us, across the mysterious between, and into the other being. If we're lucky, we feel something almost indescribable in return. We can learn to enter this habitat at will. This transportive leap can change our lives and the lives around us for the better.
These definitions are imprecise, not an exact map but more of a metaphorical guide to thinking about our relationship with the natural world. The naturalness of this border-defying communion, as Aidan experienced with his dog, tends to fade when childhood ends. A teenager, before the demands and realism of adulthood set in, may still yearn for such encounters, even subconsciously. What if more young people experienced such transcendent, mind-altering encounters with urban birds or suburban coyotes or a rescue dog? Might a so-called at-risk teenager, or any of us, experiencing such a rite of passage set out on a different path to the future?
A few years ago, I had coffee with my friend Scott Reed at a local bookstore. By profession, Scott builds relationships. He works as a community organizer, often through churches, in poor neighborhoods around the United States. Scott is fascinated by transformative encounters. Paraphrasing 20th-century German Jewish religious philosopher Martin Buber, Scott said, "The soul is over there, not in heart or head. But over there." Buber's mysticism focuses on the encounter and dialogue between humans: "When two people relate to each other authentically and humanly, God is the electricity that surges between them."
"This is what Buber called the sacredness of the I-Thou relationship," Scott explained. "The divine is in you as well as me, and you discover it in relationship."
The I-Thou relationship is quite different from the more common I-It relationship, which is based on what one can get from another person. In his famous 1923 essay, Ich und Du, Buber writes, "No purpose intervenes between I and You, no greed and no anticipation; and longing itself is changed as it plunges from the dream into appearance." Buber is primarily focused on the power of relationship between human beings and between humans and a Western definition of God. But his description of the I-Thou relationship might also be applied to the relationship between a human and a member of another species.
Recently diagnosed with aggressive stage 3 cancer, Scott continues to work and travel. He described how one evening when he'd returned from a long trip, even before his family could welcome him, his large dog leaped up and did what he had never done: pushed Scott back and held him, wouldn't let him go. "What was that?" he asked. He felt it was something older and larger than recognition or affection. "That relationship is in the web of life I sense when I'm in nature. My breathing is easier there, the oxygen is plentiful, the smell of the leaves, the breath of life, all of it is connected."
This essential connection or communion with other creatures, this habitat of the heart, is fragile. It needs nourishment to survive, as do they and we.
The heart is a useful metaphor, and perhaps more. An emerging area of neurological inquiry suggests that the heart is a mindful muscle; it resides in a complex physiological part of our body where we feel emotions in ways not yet fully understood. Living more in the moment, as other animals likely do, we are more mindful, more heartful. The heart, in reality or as metaphor, does not exist in isolation. It exists in its own habitat, which contains it but extends beyond self-awareness to other hearts.
In other contexts, this space of connection goes by other names. In the arts, the word lacuna describes the seemingly empty but powerful space in a story; in music, it is the pause or passage in which no notes are played, allowing the listener to feel or project meaning.
Michelle Brenner, a pioneering conflict manager in Australia, prefers the word liminality, a concept developed in the early 20th century to describe the threshold stage between a previous and a new way of perceiving one's identity, sometimes referring to the between stage in an initiation. "In some cultures," she writes, "the liminal space is seen as sacred, to be respected and is holy, something out of this world. . . . In other cultures, it creates anxious uncertainty, fear and disapproval." This betweenness can be found everywhere in nature: between the seasons, at the river's edge, between bioregions, at the borders of things, between two living beings, and, in Brenner's words, in "the undecided moments when we are neither here nor there."
There are as many descriptions of the place of connection as there are cultures, including especially those of Indigenous peoples. It is at once strange and familiar.
In human relations, love alters reality. We go mad with love. Limerence is the word for that. The chemical reaction that accompanies human love is measurable but defies full explanation. So it is with our deepest bond with other animals.
A friend who spends most of her waking hours in New York City once told me about an encounter with a pigeon (a pigeon! she emphasized) that left her speechless. As she walked to work, she passed the bird on the sidewalk. They looked at each other, and she felt transported. She used that word. Transported. My friend is not a person inclined to seek a shift in consciousness, but there she was on the sidewalk, with that pigeon. In that moment she felt inexplicably touched, elevated. She felt as if she had entered that bird's world, and it had entered hers.
"It's like an altered state. But without drugs," she said.
And unlike drugs, its generally free of charge, and with no known negative side effects. Depending on the animal.
Original post:
The Mind-Altering Power of Deep Animal Connection - Sierra Magazine
Google Health, the company’s newest product area, has ballooned to more than 500 employees – CNBC
Google's health care projects, which were once scattered across the company, are now starting to come together under one team working out of the Palo Alto offices formerly occupied by Nest, Google's smart home group, according to several current and former employees.
Google Health, which represents the first major new product area at Google since hardware, began to organize in 2018, and now numbers more than 500 people working under David Feinberg, who joined the company in early 2019. Most of these people were reassigned from other groups within Google, although the company has been hiring and currently has over a dozen open roles.
Google and its parent company, Alphabet, are counting on new businesses as growth slows in its core digital advertising business. Alphabet CEO Sundar Pichai, who was recently promoted from Google's CEO to run the whole conglomerate, has said health care offers the biggest potential for Alphabet to use artificial intelligence to improve outcomes over the next five to 10 years.
Google's health efforts date back more than a decade to 2006, when it attempted to create a repository of health records and data. Back then, it aimed to connect doctors and hospitals and help consumers aggregate their medical data. However, those early attempts failed in the market, and the company terminated this first "Google Health" product in 2012. Google then spent several years developing artificial intelligence to analyze imaging scans and other patient documents and identify diseases, with the intent of predicting outcomes and reducing costs. It also experimented with other ideas, like adding an option for people searching for medical information to talk to a doctor.
The new Google Health unit is exploring some new ideas, such as helping doctors search medical records and improving health-related Google search results for consumers, but primarily consolidates existing teams that have been working in health for a while.
Google's not the only tech giant working on new efforts centered around the health industry. Amazon, Apple, Facebook and Microsoft have all ramped up efforts in recent years, and have been building out their own teams.
In just over a year under Feinberg's leadership, Google Health has grown to more than 500 employees, according to the company's internal directory and people familiar with the company. These people asked for anonymity as they're not authorized to comment publicly about the company's plans.
Many of these Google Health employees have come over from other groups, including Medical Brain, which involves using voice recognition software to help doctors take notes; and DeepMind's health division, which was folded into Google Health in November 2018 and has worked with the U.K.'s National Health Service to alert doctors when patients are experiencing acute kidney injury.
The business model for Google Health is still a work in progress, but its leadership and organizational structure provided some clues as to the company's areas of interest.
Feinberg is high up in Google's internal org chart and has the ear of the top Google execs including Pichai. He reports to Jeff Dean, the company's AI lead and one of its earliest employees.
Dean co-founded Google Brain in 2010, which catapulted the company's deep learning technology into medical analysis. Some of the first health-related projects out of Google Brain included a new computer-based model to screen for signs of diabetic retinopathy in eye scans, and an algorithm to detect breast cancer in X-rays. In 2019, Dean took the helm of the company's AI unit, reporting to Pichai.
Feinberg stood out in interviews for the job because he had helped motivate Geisinger, the Pennsylvania health system he previously led, to start thinking more deeply about preventative health and not just treating the sick, according to people familiar with the hiring process. During his tenure at Geisinger, the hospital experimented with giving away healthy food to people with chronic conditions, including diabetes. It also pushed for more patients to have genetic tests to screen for diseases before they became too advanced to treat.
Feinberg works closely with Google Cloud CEO Thomas Kurian, who has named health care as one of the biggest industry verticals for the business as it attempts to catch up with cloud front-runners Amazon and Microsoft.
Another key player at Google Health is Paul Muret, who had been an internal advocate for forming Google Health before Feinberg was hired, say two people who worked there. Muret is a veteran of the company who worked as a vice president of engineering for analytics, followed by video and apps. He's now listed on LinkedIn as a product leader for "AI and Health," and people in the organization say he's in charge on the product side.
The company is now staffing up its team with health industry execs to show that it's not just a group of Silicon Valley techies tinkering with artificial intelligence.
For instance, Feinberg helped recruit Karen DeSalvo as Google's chief health officer. DeSalvo, who was the health commissioner of New Orleans, played a major role in rebuilding the city's health systems in the wake of Hurricane Katrina. Like Feinberg, she's been a big advocate of the idea that there's more to health than just health care. She's pushed for hospitals to consider whether patients have access to transportation services, healthy food and a support system before sending them home.
Google Health has also absorbed a small group from Nest that was looking into home-health monitoring, which would be particularly beneficial for seniors who are hoping to live independently. That group was led by former Nest CTO Yoky Matsuoka, sources say, but she recently left Alphabet, and has reportedly been working as a fellow at Panasonic. Matsuoka co-founded Google's R&D arm, now called X, in 2011, and worked at Apple in between her stints at Google.
She's not the only high-profile departure. A top business development leader, Virginia McFerran, who came from insurance giant UnitedHealth Group, has also left the company. To replace her, the team brought over Matt Klainer, a vice president from the consumer communications products group as its business development lead for Google Health.
Google's parent company, Alphabet, has a number of health-related "Other Bet" businesses that will remain independent from Google Health, including Verily, the life sciences group, and Calico, which is focused on aging.
Recently promoted Alphabet CEO Sundar Pichai stressed that the setup was intentional during the company's most recent earnings call with investors, implying that Alphabet was not planning to consolidate all of its health efforts under one leader anytime soon.
"Our thesis has always been to apply these deep computer science capabilities across Google and our Other Bets to grow and develop into new areas," noted Pichai, when describing the company's work in health.
"The Alphabet structure allows us to have a portfolio of different businesses with different time horizons, without trying to stretch a single management team across different areas," he continued.
--CNBC's Jennifer Elias contributed to this report
Visit link:
Google Health, the company's newest product area, has ballooned to more than 500 employees - CNBC
Ryan Evans: Every invention begins with a curious mind – Akron Beacon Journal
From the simplest devices to the most powerful machines, we depend daily on modern inventions to improve our lives.
From an early age, I've been curious about how they all work. What makes letters jump from your computer keyboard to your screen when you type? What makes your car's engine start when you simply turn a key or push a button? How does your microwave heat up your soup without hot coils or a burner? For as long as I can remember, my curious mind has not only prompted me to ask questions like these, it's demanded I seek answers. And what I find most fascinating is that none of these things existed until someone actually thought of them.
Earlier this week, we celebrated National Inventors Day and, with it, the power of the human mind to make something from nothing. Look around the room you're in: every object you see, including the room itself, was designed and developed because someone turned an idea into reality. I've been an engineer for 20 years, and I can tell you from experience that nothing is more satisfying than starting with a blank page, working through a problem and bringing a solution to life.
Where I work at The Timken Co., we design and develop critical components for practically every machine with rotating parts. I lead our research and development (R&D) team, where our mission is to apply our specialized skills to conduct research and develop technical solutions for the materials, engineering and manufacturing processes in support of our bearings business. This means we're involved in practically every industry that moves our world forward. From airplanes and automobiles to food processing and wind power, we play a significant role inside the machines that matter most to people around the world.
We have tremendous brain power and specialized talent on our team. This is essential because we need people who are able to explore new areas of knowledge and then apply what they've learned to generate new business opportunities for our company. Our focus, as a team, is to work on things Timken doesn't already know how to do, which requires both free-spirited creative thinking and cooperation with implementation and operations teams.
Let's look at our work with bearings for electric vehicles, for example. They present some fundamental design challenges, like ensuring lubricants work at high speeds. Our traditional vehicle-testing methods wouldn't cut it because the bearings on these vehicles spin so fast. Our team determined that we needed to use our aerospace test rigs to effectively evaluate prototype designs. That's where our wide-ranging knowledge comes into play. Our deep experience in aerospace is helping us get ahead in developing solutions for electric vehicles.
Much of our work today is focused on designing lighter, more durable bearings to help drive efficiency, improve performance and increase the lifecycle of parts used in electric vehicles and a variety of applications. We work closely with our customers to better understand what keeps them up at night. We collaborate with universities around the world to both share our knowledge and seek diverse points of view on the challenges were trying to solve. In the end, we deliver solutions that make equipment both safer and more sustainable.
The best part of our job in R&D is that we deal in a world of open-ended challenges, where there is usually not a right answer; instead, our goal is to find the best answer. We have to first understand our customers' needs, identify and plan the things we need to do and then figure out how to do them. We have the freedom to explore a world of possibilities and exercise our curious minds; it's an inventor's mentality. It's how the Thomas Edisons, Marie Curies and George Washington Carvers approached their work. They were able to not only come up with great ideas, but push the most valuable and beneficial ones to a finish. And this is how future generations of inventors will continue to move our world forward.
Read more:
Ryan Evans: Every invention begins with a curious mind - Akron Beacon Journal
Five Principles of Success – Thrive Global
Here are the five principles of success you have been looking for, applicable to sales, business, self-help, health, relationships, finances, career, and spirituality.
Know Your Outcome
Knowing your outcome means connecting with your purpose and seeing the highest vision of yourself, so that your outcome is so inspiring that motivation and determination are just side effects. Knowing your outcome is about seeing, hearing, and feeling the result now, taking those sensory inputs from the future and accessing them today. You may find that when you carry these sights, sounds, and feelings from your highest vision of yourself, you stay inspired and determined, and nothing will change the outcome. Knowing your outcome is more complicated than just writing it down. You can visualize, meditate, and bring in divine energy to give this outcome life energy. When an outcome is nurtured into existence, this is real spiritual alchemy. An outcome only comes to life when you have clarity, certainty, and a gut feeling. Clarity comes from knowing deep down that the mission is for the highest good of all. The way you find clarity is through journaling, meditating, asking the universe for guidance, and deep self-reflection, until a spark lights a fire in your heart center that carries a fierce passion impossible to extinguish.
Take Action
Taking action is simple when you have clarity on your outcome. You may discover that by taking time to breathe deeply and rewinding the picture of your outcome all the way back to now, you will see the step-by-step plan that got you to the outcome. It is essential to build a step-by-step plan that can be broken down into a daily routine, and to take action through daily steps that move you toward your goals. These goals need to be specific, measurable, attainable, realistic and bound to a timeline. SMART goals create the foundation of success, and achieving that success is done by taking action.
Awareness
Being aware of what's happening around you is critical because awareness brings insight. Insight shines a light on progress or setbacks and creates solutions out of a struggle. Awareness of the situations around the goal, and most of all of the deeper levels of the self, is important. Knowing how you are thinking, feeling, and acting is the awareness that precipitates massive results, because once you know yourself on a deeper level, you can adapt.
Flexibility
Adapting is changing to an ever-changing, dynamic situation that moves like eddies in a river. Staying in one place in the middle of the river is resisting the chance to flow in the direction of easy success. A boulder is either rolled out of the way or dissolved by the persistent current of a river. Success is only possible when you are flowing in a current of soul purpose that rides the waves of deep fulfillment.
Psychology of Excellence
The psychology of excellence is the next step after flexibility because excellence is easily attained with the persistence of crashing waves on a beach. Becoming excellent is a process with highs and lows, trial and error, and never giving up. Just like waves in the ocean crashing on land, there are times of significant effort and accomplishment: triumph, wins, and success. When the tide recedes back into the source, the sand exposes all the imperfections of the beach. You can see where the seawater didn't grind the sand perfectly. With the water receded all the way out, the failures are brought to the surface, and maximum flexibility is applied by the effort of the next wave, and the next wave, until tiny rocks are smashed into perfect sand.
Read the original:
Five Principles of Success - Thrive Global
Get Lost in the Mind of Yheti with New Album ‘The Party Has Changed’ [Album Review] – EDM Identity
Yheti transcends bass music bounds with ethereal soundscapes and wobbly bass on his newest album, The Party Has Changed.
Ohio native Yheti is one of the rising stars already making quite an impact on the bass music scene. Gaining support from the likes of Space Jesus and G Jones, to name a few, his transcendent soundscapes and unparalleled musical vision have swept listeners away.
Now, Yheti has blessed the psychedelic bass world yet again with his most recent album, The Party Has Changed. This 11-track, 31-minute work holds an extraordinary range of sounds and auditory spectacles that are sure to leave fans in awe.
With tunes such as "Signals from Above" and "All Over Body Hug," his latest release has already set a new standard of creativity in electronic music. The Party Has Changed brings a refreshing and unparalleled new take on the future of what bass music could be, and we are eager to dive in.
Listen to The Party Has Changed on Spotify below, download or stream the album on your favorite platform, and read on for a full review of this wild new release!
The opening tune immerses the listener in an almost dreamlike state; with only a basic percussion beat to hold onto, we journey through the playful world of soundscapes Yheti masterfully creates. By the end of the song, we find ourselves in a deeper, almost tribal environment before the song fades away with the sound of birds.
We are reintroduced to that environment at the beginning of "A Little Bit Goes A Long Way." The track builds with a playful sample of a complicated flute melody before a bouncy bassline introduces itself. As the bassline progresses, the flute melody evolves as if telling a story, and then, as swiftly as this track began, it fades away, leading into the next song.
The third track of the album commands your full attention as it effortlessly introduces the perfect balance of wobble and weight. "Signals from Above" pairs uniquely shrill melodies with deep, sinking basslines that are sure to awe.
The next track, titled "Weird Trumpet," is a playful rework of muted trumpet samples layered over energetic beats. While its name lends truth to the overall aura of the song, the track itself holds its own persona.
From there, "Inside a Simulation" almost immediately juxtaposes the percussion-heavy nature that the past four songs flaunt. The tune is as refreshing as it is immersive, and it will draw in listeners with its multifaceted ambiance.
Yheti seems to be playing around with repetitive and minimally syllabic phrases on this new song. "Yo" challenges the listener to pay attention not to the melody or the bass but rather to the percussion and simple beat.
"All Over Body Hug" is an absolute beast of a tune. In a thick, almost viscous manner, this uncommonly slow song asks the audience not only to listen but to feel the song. Each hit of the bass creates a level of anticipation that is to be admired.
Up next for its turn on the aux is the initially deceptive "Text from a Star." As it builds in an almost UK-style drum and bass manner, the heavy tune evolves into something much more. As it transitions into an exposé of its complicated melody, the piece explores the many different ways a song can be expressed.
While the theme of complicated flute melodies and heavy percussion remains strong throughout the work, it is presented in a new light on "I Lost You." Yheti's tribal sounds could even be described as "world bass" as he leads us through this new atmosphere. Then, almost immediately, the flute follows us into the bass-heavy tune "Life." Beginning in a deep, almost menacing manner, the track evolves slowly to a lighter, more childlike energy before ending.
Finishing off the album is the song "Pushing Towards the Light." Clocking in at just a minute and a half, the track is a perfectly unique way to end such a wide-ranging album. It serves as almost a farewell to the mystical world Yheti's created in The Party Has Changed.
When all is said and done, Yheti's newest addition to his discography is sure to make waves in the bass music community this upcoming summer. With festival season revving up and fans making new summer playlists, I'm confident we'll be hearing the wild sounds of Yheti all around.
More:
Get Lost in the Mind of Yheti with New Album 'The Party Has Changed' [Album Review] - EDM Identity