Category Archives: Deep Mind
The U.S. Is In A Bear Market. There Could Be A Recession. But This Is Not 2008. – Forbes
The COVID-19 outbreak has put the U.S. stock market in a bear market, but that doesn't mean we're headed for a repeat of the 2008 financial crisis.
It happened on March 11th: after 11 years, the S&P 500 once again entered a bear market. Defined as a decline of 20% or more from recent highs, this is the first bear market for the popular stock index since the 2008 financial crisis.
There's been a lot of talk recently comparing what's going on in the financial markets as a result of the COVID-19 outbreak to the 2008 financial crisis. While there have been some similarities, most notably daily bouts of extreme volatility and periods of unmitigated selloffs, the events that precipitated the current situation, the economic climate prior to the outbreak, and the reasons for the selloff are wholly different.
Although this is the first official bear market for the S&P 500 since the financial crisis, we came very close a little over a year ago. At the end of 2018, the S&P was down over 19% from recent highs, on the cusp of falling into a bear market. But instead of falling further, the S&P rebounded very quickly: following a loss of over 9% in December 2018, the S&P finished January 2019 up 8% and closed out a blockbuster year up over 30%.
We're now in what has become the fastest bear market in history. It took only 16 days for the S&P 500 to fall over 20% from the high on February 19th. While the drop was fast and deep, with the possibility of getting deeper before it's over, the circumstances are much different than in 2008, and there's an argument that we'll be able to recover faster because of it.
Let's not lose sight of what sparked the 2008 financial crisis: a housing bubble fueled by incredibly risky (not to mention predatory) lending practices designed to be offset through highly leveraged financial instruments known as credit default swaps. As foreclosures mounted, the pressure on the financial system grew until the U.S. government was able to pull the economy back from the brink of collapse. Obviously, this is an abbreviated version of events, but the point is that in 2008, there was a bubble and a systemic failure.
Fast-forward to 2020 and we find ourselves in a much different, though still challenging, situation. China, the world's second-largest economy and biggest exporter of goods, was effectively shut down due to the spread of the virus. This alone had a significant impact on the economy: global supply chains were disrupted and demand for goods and services slowed as people stayed home. But for reasons unknown, the financial markets didn't seem to be overly concerned about what was happening abroad.
As the virus began to spread to the U.S., concerns mounted about what social distancing, closures, and working from home would mean for our service-driven economy. When this outbreak finally passes, there will be some pent-up demand for goods, but for a service economy, there isn't always a way to recoup lost sales: you're not going to get two haircuts next month because you couldn't get one this month.
The spat over oil prices and production between Russia and Saudi Arabia was essentially the equivalent of pouring gasoline over a flame: the markets ignited with fresh worry over the economic outlook, triggering selloffs in both the stock and bond markets (which doesn't happen often), showing even patient investors that sometimes there's nowhere to hide during periods of heightened uncertainty.
The concern over what the spread of the coronavirus means for the global economy in the short term is justified. When there are issues on both the supply and demand side of the equation, earnings and growth will suffer. However, it's important to recall the condition in which the U.S. entered this global crisis: unemployment was at historically low levels and, though valuations were still high in the stock market, the economy was on rather solid ground.
The bear market was caused by the COVID-19 outbreak and not a systemic collapse of the financial system. This is a really important point to keep in mind, as it's highly probable that whenever this passes, the economy will be able to recover rather quickly. There are signs that the government will make efforts to help ensure that hourly workers and those not able to work remotely will receive some sort of aid, which will help limit the extent of the downturn and help boost the speed of the recovery.
This is not to say that the world will go back to the way it was right before the outbreak. Companies may look to diversify their supply chains and move production away from China, which would impact profits, jobs, and growth.
The current interest rate environment will likely become a greater challenge as time goes on. On Sunday evening, the Federal Reserve cut the federal funds rate to zero in its second emergency rate cut in less than two weeks. It seems like only a matter of time until parts of the yield curve go negative. That would be uncharted territory in the United States. Negative-yielding debt is now prevalent across Asia and Europe, but it's never happened here.
Given the impact the coronavirus will have on the economy during the outbreak, it's more likely than not that the U.S. will enter a recession, which is commonly defined as two or more consecutive quarters of negative GDP growth. That said, it's also possible that by the time we officially enter a recession, we're already coming out of it, as demand picks up and we go back to our normal habits.
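To make that shorthand concrete, here is a minimal sketch of the two-consecutive-quarters rule; the quarterly growth figures are hypothetical, purely for illustration.

```python
def in_recession(quarterly_gdp_growth):
    """Return True once two or more consecutive quarters show negative GDP growth,
    the shorthand definition of a recession used above."""
    consecutive = 0
    for growth in quarterly_gdp_growth:
        consecutive = consecutive + 1 if growth < 0 else 0
        if consecutive >= 2:
            return True
    return False

# Hypothetical quarter-over-quarter growth rates, not real data
print(in_recession([0.005, -0.003, -0.011, 0.002]))  # True
```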
Long-term investors need to keep in mind that they're not investing for the next six months or even six years; many investors have decades left in the stock market, and their success will ride on the compounded gains over this whole period. Even though the stock market reacts to news in an instant, stocks are also valued on expectations of future growth and earnings. For example, if a stock is trading at 15 times its expected earnings, then next year's earnings only represent 1/15th of the stock's value. Harder times in the short term will impact valuations, but you have to see the light at the end of the tunnel.
In the interim, it's critical for investors to stay the course and focus on what they can control during periods of market volatility. As of the writing of this article, the returns of the S&P 500 for the past two days are as follows: March 12th, 2020: -9.51%; March 13th, 2020: +9.29%. That's a swing of nearly 19 percentage points over just two days!
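As a quick back-of-the-envelope check of those two figures (a sketch using only the returns quoted above, not a data feed):

```python
returns = {"2020-03-12": -0.0951, "2020-03-13": 0.0929}  # S&P 500 daily returns cited above

swing = returns["2020-03-13"] - returns["2020-03-12"]                # gap between the two daily moves
net = (1 + returns["2020-03-12"]) * (1 + returns["2020-03-13"]) - 1  # compounded two-day change

print(f"swing: {swing:.1%}")            # ~18.8 percentage points, the 'nearly 19' above
print(f"net two-day move: {net:.1%}")   # still about -1.1% despite the rebound
```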
This highlights my last and perhaps most important message: not missing out on the market's recovery. The market can turn on a dime, something we've all lost a few of over the past several weeks.
Decoding the Future Trajectory of Healthcare with AI – ReadWrite
Artificial intelligence (AI) is becoming more sophisticated by the day, delivering greater efficiency and speed at lower cost. Nearly every sector has been reaping benefits from AI in recent years, and the healthcare industry is no exception. Here is a look at the future trajectory of healthcare with AI.
The impact of artificial intelligence on the healthcare industry through machine learning (ML) and natural language processing (NLP) is transforming care delivery. Additionally, patients are expected to gain far greater access to their health-related information than before through applications such as smart wearable devices and mobile electronic medical records (EMR).
Personalized healthcare will empower patients to take the wheel of their own well-being, facilitate high-end care, and extend better patient-provider communication to underprivileged areas.
For instance, IBM Watson Health is helping healthcare organizations apply cognitive technology to vast amounts of data to power diagnosis and health-related insight.
In addition, Google's DeepMind Health is collaborating with researchers, clinicians, and patients to solve real-world healthcare problems. The company has also combined systems neuroscience with machine learning to build strong general-purpose learning algorithms into neural networks that mimic the human brain.
Companies are working to develop AI technology that solves existing challenges, especially within the healthcare space. A strong focus on funding and launching AI healthcare programs played a significant role in Microsoft Corporation's decision to launch a five-year, US$40 million program known as AI for Health in January 2019.
The Microsoft program will use artificial intelligence tools to tackle some of the greatest healthcare challenges, including global health crises, treatment, and disease diagnosis. Microsoft has also ensured that academic, non-profit, and research organizations have access to the technology, technical experts, and resources needed to leverage AI for care delivery and research.
In January 2020, these factors influenced Takeda Pharmaceutical Company and MIT's School of Engineering to join hands for three years to drive innovation and the application of AI in the healthcare industry and drug development.
AI applications are centered on three main investment areas: diagnostics, engagement, and digitization. With the rapid advancement of technology, there are exciting breakthroughs in incorporating AI into medical services.
The most interesting aspect of AI is robots. Robots are not only replacing trained medical staff in some tasks but also making them more efficient in several areas. Robots help control costs while potentially providing better care and performing accurate surgery in confined spaces.
China and the U.S. have started investing in the development of robots to support doctors. In November 2017, a robot in China passed a medical licensing exam using only an AI brain. Also, it was the first-ever semi-automated operating robot that was used to suture blood vessels as fine as 0.03 mm.
To prevent the coronavirus from spreading, American doctors are relying on a robot that can measure a patient's vitals. In addition, robots are being used for recovery and consulting assistance and as transport units. These robots are showing significant potential to revolutionize medical procedures in the future.
Precision medicine is an emerging approach to disease prevention and treatment. The precision medicine approach allows researchers and doctors to predict more accurately which treatment and prevention strategies will work.
The advent of precision medicine technology has allowed healthcare to actively track a patient's physiology in real time, capture multi-dimensional data, and create predictive algorithms that use collective learnings to calculate individual outcomes.
In recent years, there has been an immense focus on enabling direct-to-consumer genomics. Now, companies are aiming to create patient-centric products spanning digitization and genomics, such as the ordering of complex testing in clinics.
In January 2020, ixLayer, a start-up based in San Francisco, launched a one-of-its-kind precision health testing platform to enhance the delivery of diagnostic testing and to simplify the complex relationship among physicians, precision health tests, and patients.
Personal health monitoring is a promising example of AI in healthcare. With the emergence of advanced AI and Internet of Medical Things (IoMT), demand for consumer-oriented products such as smart wearables for monitoring well-being is growing significantly.
Owing to the rapid proliferation of smart wearables and mobile apps, enterprises are introducing varied options to monitor personal health.
In October 2019, Gali Health, a health technology company, introduced its Gali AI-powered personal health assistant for people suffering from inflammatory bowel diseases (IBD). It offers health tracking and analytical tools, medically-vetted educational resources, and emotional support to the IBD community.
Similarly, start-ups are coming forward with innovative devices integrated with state-of-the-art AI technology to meet the growing demand for personal health monitoring.
In recent years, AI has been used in numerous ways to support medical imaging of all kinds. At present, the biggest use for AI is to assist in the analysis of images and to perform single, narrow recognition tasks.
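To give a sense of what such a single, narrow recognition task looks like in code, here is a minimal, hypothetical sketch that classifies one image with a general-purpose pretrained network; the file name is invented, and a real clinical system would use a model trained and validated on medical imaging data.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# General-purpose ImageNet classifier standing in for a purpose-built medical model
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("scan_sample.png").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)

top_prob, top_class = probabilities.topk(1)
print(f"Predicted class index {top_class.item()} with probability {top_prob.item():.2f}")
```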
In the United States, AI is considered highly valuable in enhancing business operations and patient care. It has the greatest impact on patient care by improving the accuracy of clinical outcomes and medical diagnosis.
A strong presence of leading market players in the country is bolstering the demand for medical imaging in hospitals and research centers.
In January 2020, Hitachi Healthcare Americas announced a new dedicated R&D center in North America that will leverage advances in machine learning and artificial intelligence to bring about the next generation of medical imaging technology.
With a plethora of issues driven by the growing rate of chronic disease and an aging population, the need for innovative solutions in the healthcare industry is on an upswing.
Unleashing AI's complete potential in the healthcare industry is not an easy task. Healthcare providers and AI developers will have to work together to tackle the obstacles on the path toward integrating new technologies.
Clearing these hurdles will require a combination of technological refinement and shifting mindsets. As the AI trend becomes more deep-rooted, it is giving rise to a now-ubiquitous question: will AI replace doctors and medical professionals, especially radiologists and physicians? More likely, it will increase the efficiency of medical professionals.
Initiatives by IBM Watson and Google's DeepMind may soon unlock some critical answers. However, while AI aims to mimic the human brain in healthcare, human judgment and intuition cannot be substituted.
Even though AI is augmenting the industry's existing capabilities, it is unlikely to fully replace human intervention. An AI-skilled workforce will displace only those who don't want to embrace the technology.
Healthcare is a dynamic industry with significant opportunities. However, uncertainty, cost concerns, and complexity are making it an unnerving one.
The best opportunity for healthcare in the near future lies in hybrid models, in which clinicians and physicians are supported in treatment planning, diagnosis, and identifying risk factors. Also, with the growth of the geriatric population and the rise of health-related concerns across the globe, the overall burden of disease management has increased.
Patients are also expecting better treatment and care. With growing innovation in the healthcare industry around improved diagnosis and treatment, AI has gained acceptance among patients and doctors.
In order to develop better medical technology, entrepreneurs, healthcare service providers, investors, policy developers, and patients are coming together.
These factors are set to make for a brighter future for AI in the healthcare industry. It is extremely likely that there will be widespread use and massive advancement of AI-integrated technology in the next few years. Moreover, healthcare providers are expected to invest in adequate IT infrastructure solutions and data centers to support new technological development.
Healthcare companies should continually integrate new technologies to build strong value and to keep patients' attention.
The insights presented in the article are based on a recent research study on Global Artificial Intelligence In Healthcare Market by Future Market Insights.
Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.
Gardening: Five things to keep in mind with ground cover plants – Bournemouth Echo
Hannah Stephenson looks at ground cover plants that'll help suppress weeds and add colour and form to borders.
As spring begins, it may be a good time to think about how to reduce the weeding you have to do in future years - and good ground cover is essential if you want to lessen the backbreaking work of constant weeding.
Densely-planted areas should keep weeds at bay because the shade they create stops weed seeds from germinating. Well-chosen ground cover plants can also give a softer appearance to hard surfaces such as brickwork and paving.
Here are a few pointers to keep in mind...
Use the right plants
There are many obvious perennials to use, which will quickly take up space and add colour to the border, such as the wild geranium (cranesbill), but some need more work than others. Many make excellent permanent edging, such as Alchemilla mollis, Bergenia cordifolia 'Purpurea' and Saxifraga x urbium.
Check first, however, that such plants will be suitable for your soil. And remember that deciduous ground cover plants lose their leaves in autumn, so if you are using them in abundance you may find yourself with some ugly gaps in your border during the winter months.
Research fast-growers
Other quick-growing varieties include Persicaria affinis, which thrives in sun or light shade and provides up to a 24-inch evergreen carpet, with pinkish-purple flowers emerging in summer.
Helianthemum 'Praecox' produces small yellow flowers above a 6-inch evergreen carpet of grey-green leaves, flowering between June and August.
Other relatively trouble-free ground cover plants include Ajuga reptans, astilbe, Calluna vulgaris, Erica carnea and Euonymus fortunei 'Emerald Gaiety'.
Watch out for invasive species
Vigorous ground cover has its pros and cons. The advantage is that vigorous plants will fill an area quickly at less cost - but if you ever decide to change your scheme, they may be difficult to get rid of.
Periwinkle (vinca) is extremely hard to get rid of; snow-in-summer will fill a sunny site in just one season but will also take over everything else. And lily-of-the-valley, while wonderfully fragrant, has deep vein-like roots which are almost impossible to eradicate should you wish to do so.
Perennial geraniums are also quite vigorous, but they are easier to contain and do provide some welcome colour during the summer.
Fill difficult spots
If you have a difficult spot to fill, such as a steep bank in shade, Ajuga reptans (bugle) is a strong evergreen, which carries spikes of blue flowers from late spring to midsummer. It spreads quickly, forming a carpet which will easily act as ground cover under shrubs and trees.
Other shade-lovers that make good ground cover include heucheras, which come in a range of colours from almost black to acid green. Their flowers are also a magnet to bees.
Infill with reliable favourites
When planting a border and graduating it from taller plants at the back to lower-growing varieties at the front, you can always add interest by placing a number of taller species forward into the middle ground. This applies particularly to tall hardy perennials such as lupins and delphiniums.
Good infill plants, once you have established the framework of your border, include potentilla, rosemary, spiraea, cistus, acanthus and rudbeckia, while at the lowest level towards the front of the border you could use dwarf hebes, alchemilla, epimedium and lamium.
Plant carefully enough and you should soon have a riot of colour, without the need for too much hoeing or hand-weeding.
Mind Against return on Afterlife with Walking Away and Bloom – The Groove Cartel
The Italian duo Mind Against returns on Tale Of Us' Afterlife with the two-track EP Walking Away.
After a successful edition of Afterlife in Dubai, the label's debut event in the UAE, the Tale Of Us imprint presented its 35th installment yesterday. Produced by the Italian duo Mind Against, Walking Away and Bloom mark their return to the label after the success of Days Gone in 2018.
Kicking off with Walking Away, a long-played ID that has made people dance all around the world, the record features elements of deep and progressive house, blending minimal and techno-oriented beats and creating a unique sound suspended in a limbo between lo-fi and underground music. The hypnotic vocal of Port St. Willow does the rest of the job, bringing the listener into a new dimension inside his soul. Centered around uplifting harmonic elements and a resonant drum pattern, Walking Away strikes that special balance of light and dark that we're used to from Mind Against.
On the B-side, Bloom hits the darkest string of our emotions, delivering a minimal, deep and mysterious feeling, heightened by the soft touch of the main piano melody. It is a melancholy, downtempo interpretation of Mind Against's vision, with the percussion restrained and sitting deep below atmospheric pads and piano chords, eventually arriving at a gradual and subtle peak.
Mind Against reach euphoric heights once again, this time by a patient and reflective path, yet delivering two songs that will hardly go unnoticed.
You can download both records here or buy your physical vinyl copy as well.
Fit in my 40s: Mamma mia! Can I really work out by singing Abba? – The Guardian
Mark De-Lisser is a voice coach, lately a media one, making heart-warming reality shows in which he builds community choirs to beat loneliness, or the solitude of dementia, or just to soften the condition of being human. Many benefits are psychological, in a space beyond (but passing through) mindfulness, where you make your heart sing by singing. But there are physical benefits, too, from posture to breathing to a muscular skeletal reboot, which is why I'm in De-Lisser's studio, though I want my heart to sing as much as the next man.
In its anteroom are photos of Mark in front of his choirs: one catches my eye, a bunch of middle-aged women, all in green evening dress like bridesmaids to a bride determined to outshine them, looking absolutely pleased as punch to be standing next to him. I bet they're singing Abba, I thought. No way on God's Earth am I singing an Abba song.
Deeper breath, he starts. Singing encourages you to use deeper breath. It's like exercise: you release endorphins in the body. There are, he says, benefits to this, including clearing toxins out of the lungs, and thereby the blood. There is a whole deep-breathing culture, spun out of yoga, with precisely this in mind: the benefit here is that you also get to make a noise.
But first, work on your posture: if you store a lot of tension in your shoulders, it makes your voice strained and weak. We spend quite a bit of time relaxing, teaching my arms how to go limp. It is work a person could do on their own, and quite engrossing, provided you can accept relaxation as an activity in its own right (this is what often stands between a person and self-improvement, I've decided; not a lack of willpower, just an inability to pause).
Of course, at some point I have to sing. What's your favourite song? asks De-Lisser. My mind is blank. Everyone's mind is blank at that point. OK, now it's worse than blank; it's an abyss. What were you listening to on the way here? No Children, by the Mountain Goats. It's the most complicated song ever. I could probably play it, he says (he is extremely musical, needless to say, a chorister since he was a child). But is there anything else you know? Waterloo! I know the words to Waterloo!
And that's how I came to be singing Abba, in a soundproofed room in Croydon. The first rendition was a washout, thin and breathy and peculiarly whiny. The second was a bit more disciplined in terms of breathing: you have to breathe, deeply, wherever the line allows, and this does force you into taking authentic big breaths, whether you like it or not. For the third, Mark tried tirelessly to persuade me to go louder, to try and reach the cheap seats, and I had to say (this is a pure manners question), I don't agree with being incredibly loud unless you're incredibly good. Who decides what's good? he says, which is calming without being all that persuasive.
To improve at singing, you would need a coach, and for more than one lesson; to improve at breathing, though, one song a day, at full volume, is inexplicably energising.
People tend to want to sound like other people. You have to strip away the mimicry before you can hear your natural sound and tone, Mark De-Lisser says.
How Megan Thee Stallion Turned ‘Hot’ Into a State of Mind – The New York Times
The summer of 2019 had a sound, a meme, a hashtag and an entire mood set by a then 24-year-old rapper from Houston named Megan Thee Stallion. #HotGirlSummer started out as a tweet that morphed into a meme that became a chart-topping track featuring Nicki Minaj that catapulted the artist into a national spotlight, with legions of fans she has nicknamed her Hotties. The genius of Hot Girl Summer is that it was much more than a song: it was a feeling, propagated by social media, particularly Instagram and TikTok, of freedom and abandon that could contain everything from a performance of Megan twerking while wearing a particularly bright pair of lime green chaps to a photograph of Tom Hanks smiling beatifically while wearing a white dress shirt tied in a knot.
Megan Thee Stallion, born Megan Pete, has been making music and music videos since high school, steadily building a following by releasing songs and freestyling on local radio shows. In 2016, she appeared against a city skyline in a compilation video of local Texas rappers, distinguishing herself with her cool demeanor and laser-precision flow. She later refined her style in a sun-drenched YouTube video called Stalli Freestyle, which collected millions of views, and her EP Tina Snow. Her first album, Fever, followed last year and her latest, Suga, began streaming this month.
I caught up with her on a cold night in February, fresh off a performance on The Tonight Show Starring Jimmy Fallon, at the 21 Club, a midtown Manhattan restaurant. Megan, a health-administration student at Texas Southern University, arrived in sweatpants and a Dragon Ball Z T-shirt, along with 4oe, her gunmetal gray French bulldog puppy, to talk about the aftermath of going viral, the sanctuary of alter egos and what she hopes to do by the time she turns 40.
Jenna Wortham: As a 90s baby, do you feel as if your career and music are Internet-first?
Megan Thee Stallion: The main reason I am where I am today is because of the Internet, but the crazy thing is I didn't grow up online. My mom played UGK, Biggie and Lil Kim, and my dad was a big Three 6 Mafia fan, so the music I was listening to was already grown. When I got old enough to curse and rap, I was thinking, What would Biggie think about this? Would Pimp C like this? Even though my career does really well because of the Internet, my style is not new. I watched old DVDs of people in Houston and videos of guys in a circle, rapping, freestyling with each other. Showcases. I was thinking, I have to rap my ass off. Every time I had the opportunity to go somewhere and freestyle, I would do that, because that's what I was looking at.
I wonder if it's that combination (the expertise and focus and being so deep in that rap tradition, combined with the savviness of new mediums like YouTube and Instagram) that drew people in. Definitely. I had to put it out on the Internet every day and hope that it caught on. At first I wasn't thinking about going viral. I wanted people to hear me rapping because that's what I like to do. I already had a big following in high school and college. One day, in 2013, my best friends and I made a twerk video, and it went viral at the school. One of our teachers called us into a meeting, and she pulled up the video on a projector and lectured us. She was like, Is this how you want to be represented? And we looked at each other, like, Yeah!
When I was in high school, I wanted that life to be completely different than my Megan Thee Stallion life, but I couldn't hide it. I was always Thee Stallion, so I've always been making videos as me. I was shooting videos like every week. My classmates would ask me for pictures. I refused to get an Instagram at first. And then like everybody in high school was on Instagram, so I was like, [big dramatic sigh] Guess I'll get it.
It worked out; you're really good at Instagram! I'm not even trying. I don't like my Instagram to look like it's a commercial. I want you to come to my page and feel like I'm still your classmate. I do post when I'm taking a quiz because I want my Hotties to know I'm still going to school. I want people to look at my page and think, This is real life.
It seems as if you have a good sense of personal boundaries. I know you have alter egos that you step into when you perform. Do they help with that? Those are mostly for my music. In my real life, there's only two of me in my head: Megan and Megan Thee Stallion. Megan is the nerd who wants to watch anime, stay in bed and crack jokes with everyone. Megan Thee Stallion is when I have to perform. My alter egos are my emotions. Tina Snow is when I'm feeling confident. She's based on Tony Snow, Pimp C's alias. And right now I'm doing Suga; she's sweet and vulnerable. She's me telling people it's OK to mess up.
How are you thinking about your legacy? You recently filed to trademark Hot Girl Summer. Most artists have difficulty trademarking their original ideas. Getting something trademarked is a long process. But Hot Girl Summer is my thing; it's not like LeBron trying to get Taco Tuesday. I saw other companies were using it, and I was like, Thank you for your support, but I have to secure this, because this is mine.
I think the duality that you possess (a person who can be seen in a string bikini, drinking Hennessy on a yacht one day and hosting a beach cleanup the next) empowered a lot of women to realize they can also be multiple selves online and off. Hot Girl Summer isn't about being reckless; it's about leaning into all of the parts of yourself. It has been a whole two seasons since the song came out last August. How do you reflect on that period? I don't think you could think about summer without thinking about Hot Girl Summer. The whole Hot Girl aesthetic: I think people felt comfortable seeing someone doing whatever they wanted to do. That's why a lot of women appreciated it. This year, I'm working to show people what being a hot girl really is like. Do you know what a candy striper is? Eventually, I want to open an assisted-living facility in Houston, but before that, I want to get girls together to go to different homes or hospitals. Those people don't have anybody, and I think it'd be really cute to have a Hot Girl come visit you and volunteer.
All my life I've been a person who's had my hands in a lot of things. I was a bill collector at one point. I was a bartender. When I started rapping and making money, I was like, I'm going to use this to do the things I really want to do: finish school and start my business. I know I'm going to be an artist, and I'm going to do something in the medical field. I don't want to look up and be 40, and be like, Damn, I wish I would've done that.
Jenna Wortham is a staff writer for the magazine and co-host of the podcast Still Processing. She previously wrote about a reboot of the show The L Word for a Screenland column. Arielle Bobb-Willis is a photographer from New York who was recently featured in Aperture's The New Black Vanguard. This is her first assignment for the magazine.
This interview has been edited and condensed.
Stylist: E.J. King. Hair: Kellon Williams.
Additional design and development by Jacky Myint.
What Lies Beneath – Earth Island Journal
Long story short: There is an extraordinary world beneath us. Places of severity we can't see and know little about. It is into these dark worlds, deep inside the earth, that author Robert Macfarlane journeys in search of knowledge in Underland.
In this sequel to his bestseller The Old Ways, nearly ten years in the making, Macfarlane explores our relationship with darkness, burial, and what lies beneath the surface of both place and mind. Like the brilliant professor you had in college, and with an unsparing eye for detail, he explores the subterranean spaces of our 1.9 billion-year-old planet with storybook clarity. But his primary interest is the relationships that exist between landscape and the human heart.
The 425-page tome is divided into three sections (Seeing, Hiding, and Haunting), and some of the chapters expose a lidar-like map of the underworld that is not an easy read. He guides us to millennia-old burial sites in Britain, a dark matter research station a half-mile below Yorkshire, which is dedicated to understanding the birth of the universe, and remote Arctic cave-art sites on Norway's northern coasts.
Macfarlane examines not only the physical dimensions of this underworld, but also its manifestation in human imaginations in our mythologies and literature. In the underworld three tasks recur across cultures and epochs: to shelter what is precious, to yield what is valuable, and to dispose of what is harmful, he writes.
The blood of the book rises when he goes underground, at times moving along by squirm, the sense of the rock as a hand pressing down first on the skull, then the back, then the whole of the body, a moment spent briefly in its grip.
He joins spelunkers pinballing around caves, enjoying a camaraderie that doesn't require words. In other deep places, he joins thought-provoking scientists alive to the idea of living in the moment. If we're not exploring, we're not doing anything. We're just waiting, a physicist tells Macfarlane.
In the Epping Forest bordering London, fungal networks divaricate woodland soil, joining individual trees into intercommunicating forests, a cooperative system in which trees talk to one another. At the burial sites in the Mendip Hills of Somerset, where human bodies from the Neolithic era rest, Macfarlane ponders how we are often more tender to the dead than to the living. Traversing the catacombs beneath Paris, he reflects on Victor Hugo's words in Les Miserables: Paris has another Paris under herself. Limestone quarrying began under the city in the twelfth century; Paris was literally built from its own underland. But the City of Lights also needed to store its dead, so the underworld became Les Catacombs.
In the Slovenian Highlands Macfarlane ventures along a deep, mile-long cave system atop glacial ice, which served as ideal geology for guerrilla war during World War II. Mountains were seen no longer as solid structures, but as honeycombs that could be opened, he writes. A good descent was rock fall that didn't hit you, gas that didn't asphyxiate, shoulder-to-the-wall holes that didn't trap you.
Deep time is the chronology of the Underland. The timespans in this realm can stretch millions of years. And yet, geology knows no such word as forever. Deep time runs forward as well as back. It's a dynamic earth cycle: mineral becomes animal becomes rock, and in deep time supplies calcium for new organisms to build their bodies.
But Underland isn't just about inspiring awe at places and histories unknown. It is, in essence, an exploration of the fragility of our existence on Earth. Macfarlane highlights in the book what he calls Anthropocene unburials: Reindeer buried in glacial ice a few lifetimes ago are now turning up replete with anthrax spores; an American Cold War missile base containing toxic chemicals, sealed under Greenland's ice 50 years ago, is now moving up towards the surface; heatwaves in Britain are causing the imprints of ancient burial barrows to come into view.
These unburials, he points out, reveal the terrible harm we are doing our world. What will survive of us is plastic, swine bones, and lead-207, the stable isotope at the end of the uranium-235 decay chain, he writes.
It may all seem a stretch, but there it is. Macfarlane could probably get a free beer in any bar in his native England telling any one of these stories.
Britain is ahead of many of its competitors in technology startups – The Economist
Mar 12th 2020
AS A DERIVATIVES trader with Credit Suisse, Nikolay Storonsky was used to gambling, but his riskiest bet was to quit the markets in 2013 and set up Revolut, a fintech startup. It paid off. Last month Revolut raised $500m, becoming Europes most highly valued fintech company, with a valuation of $5.5bn.
Revolut's rise mirrors Britain's unicorn scene. A unicorn is defined as a privately held startup valued at more than $1bn in a financing round, initial public offering or acquisition. According to Dealroom.co, a data-analytics firm, Britain has created 63 such companies in the past ten years. That is still far behind the giants, America and China, which have added 820 and 224 respectively, but it is more than twice as many as Germany's 29 and almost five times as many as France's 13.
More interesting than these numbers is a step-change in the rate of growth. Between 2009 and 2013, Britain averaged about two new unicorns a year. Since then the figure has quadrupled. Part of that may be down to overall market optimism in recent years around anything tech-related. But investors may also have worked out how to navigate the valley of death, in which promising innovations would either disappear without being commercialised, or end up being swallowed by dragons. That was the fate of DeepMind, an artificial-intelligence startup, when Google bought it in 2014.
A few British unicorns, such as Graphcore, which designs specialised chips for artificial intelligence, are pure tech companies. But for most, computing is not the product, even if tech is central to the process. Finance, making up nearly a third of Britain's unicorns, is the biggest sector, with companies like Revolut, Monzo and OakNorth (all upstart banks) and TransferWise (a money-transfer service). Retail, with ten unicorns (such as Deliveroo and Ocado, which deliver cooked and supermarket food, respectively), and health (such as Oxford Nanopore, a gene-sequencing company) are also success stories. Some, such as BrewDog, a beer-maker, have nothing to do with technology at all.
The financial crisis may have been partly responsible for the uptick in unicorn production, particularly in finance, because it pushed talent out of established City banks and into entrepreneurship. When Zar Amrolia and Alex Gerko, two maths PhDs at Deutsche Bank, realised the bank's spending on compliance would dwarf that on research, they left. In 2015 they set up XTX Markets, an algorithmic foreign-exchange company that is now the first non-bank to make the list of the ten largest currency houses by trading volume. Mr Storonsky decided to give up the trading floor to start Revolut because it just wasn't as fun as it used to be. In 2013 tech overtook finance as the preferred destination of MBA graduates from London Business School.
The government has tried to help as well. David Cameron, prime minister from 2010 to 2016, was keen to increase incentives and cut regulatory burdens for startups. The enterprise investment scheme (EIS), which was introduced in 1994 to give startup investors tax rebates and loss relief if investments fail, was extended from companies with fewer than 50 employees to those with fewer than 250, and from investments of £2m ($2.6m) to £10m. A new seed EIS offered larger tax relief for smaller companies. Nick Jenkins, founder of Moonpig, an online greeting-card firm, says the EIS incentives served as a catalyst, getting enough startups going to persuade venture-capital firms to pay attention to what was going on in Britain. In 2019 firms in London received $9.7bn in venture-capital funding, more than Berlin, Paris, Amsterdam and Madrid combined.
It was also Mr Cameron who called the referendum that led to Britain's decision to leave the European Union. That dismayed many startups, since the EU's freedom-of-movement rules make it easy to attract workers from across the continent. TechUK, a trade body, has given a cautious welcome to the government's plans for a new, points-based system, announced last month and due to launch next year. Ministers hope it will maintain Britain's attractiveness to the sorts of skilled workers that startups need. Tech firms also worry that vital data flows between Britain and Europe could be hampered if a trade deal is not negotiated by the end of the year.
There are other clouds on the horizon. Even before the covid-19 outbreak crashed the markets, investors had been cooling on unicorns, many of which have posted persistent losses as they have tried to boost customer numbers. Financial startups in particular could suddenly find life much harder if any of the big incumbent banks can manage to create similarly slick services or apps.
One question is how large British startups can become. In The Social Network, a film depicting the rise of Facebook, Sean Parker, Facebook's first president, tells the site's founder, Mark Zuckerberg, that a million dollars isn't cool; you know what's cool? The answer is a billion dollars. That was ten years ago. Today, quite a lot of British unicorns are billion-dollar cool. But America's and China's home-grown champions are bigger still (Airbnb, for instance, was valued at $35bn in 2019; Didi Chuxing, a Chinese ride-hailing service, hit $62bn in the same year).
Britain has a long way to go before it can boast of any startups approaching that size. But the past five years have demonstrated that the country can indeed breed unicorns. The next challenge is to turn them into dragons, and to keep other dragons from gobbling them all up.
This article appeared in the Britain section of the print edition under the headline "Unicorn lead"
The Robots Are Coming – Boston Review
In the overhyped age of deep learning, rumors of thinking robots are greatly exaggerated. Still, we cannot leave decisions about even this sort of AI in the hands of those who stand to profit from its use.
Editors' Note: The philosopher Kenneth A. Taylor passed away suddenly this winter. Boston Review is proud to publish this essay, which grows out of talks Ken gave throughout 2019, in collaboration with his estate. Preceding it is an introductory note by Ken's colleague, John Perry.
In memoriam Ken Taylor
On December 2, 2019, a few weeks after his sixty-fifth birthday, Ken Taylor announced to all of his Facebook friends that the book he had been working on for years, Referring to the World, finally existed in an almost complete draft. That same day, while at home in the evening, Ken died suddenly and unexpectedly. He is survived by his wife, Claire Yoshida; son, Kiyoshi Taylor; parents, Sam and Seretha Taylor; brother, Daniel; and sister, Diane.
Ken was an extraordinary individual. He truly was larger than life. Whatever the task at hand, whether it was explaining some point in the philosophy of language, coaching Kiyoshi's little league team, chairing the Stanford Philosophy department and its Symbolic Systems Program, debating at Stanford's Academic Senate, or serving as president of the Pacific Division of the American Philosophical Association (APA), Ken went at it with ferocious energy. He put incredible effort into teaching. He was one of the last Stanford professors to always wear a tie when he taught, to show his respect for the students who make it possible for philosophers to earn a living doing what we like to do. His death leaves a huge gap in the lives of his family, his friends, his colleagues, and the Stanford community.
Ken went to college at Notre Dame. He entered the School of Engineering, but it didn't quite satisfy his interests, so he shifted to the Program of Liberal Studies and became its first African American graduate. Ken came from a religious family, and never lost interest in the questions with which religion deals. But by the time he graduated he had become a naturalistic philosopher; his senior essay was on Kant and Darwin.
Ken was clearly very much the same person at Notre Dame that we knew much later. Here is a memory from Katherine Tillman, a professor in the Liberal Studies Program:
This is how I remember our beloved and brilliant Ken Taylor: always with his hand up in class, always with that curious, questioning look on his face. He would shift a little in his chair and make a stab at what was on his mind to say. Then he would formulate it several more times in questions, one after the other, until he felt he got it just right. And he would listen hard, to his classmates, to his teachers, to whomever could shed some light on what it was he wanted to know. He wouldn't give up, though he might lean back in his chair, fold his arms, and continue with that perplexed look on his face. He would ask questions about everything. Requiescat in pace.
From Notre Dame Taylor went to the University of Chicago; there his interests solidified in the philosophy of language. His dissertation was on reference, the theory of how words refer to things in the world; his advisor was the philosopher of language Leonard Linsky. We managed to lure Taylor to Stanford in 1995, after stops at Middlebury, the University of North Carolina, Wesleyan, the University of Maryland, and Rutgers.
In 2004 Taylor and I launched the public radio program Philosophy Talk, billed as the program that questions everything, except your intelligence. The theme song is Nice Work if You Can Get It, which expresses the way Ken and I both felt about philosophy. The program dealt with all sorts of topics. We found ourselves reading up on every philosopher we discussed, from Plato to Sartre to Rawls, and on every topic with a philosophical dimension, from terrorism and misogyny to democracy and genetic engineering. I grew pretty tired of this after a few years. I had learned all I wanted to know about important philosophers and topics. I couldn't wait after each Sunday's show to get back to my world: the philosophy of language and mind. But Ken seemed to love it more and more with each passing year. He loved to think; he loved forming opinions, theories, hypotheses and criticisms on every possible topic; and he loved talking about them with the parade of distinguished guests that joined us.
Until the turn of the century Ken's publications lay pretty solidly in the philosophy of language and mind and closely related areas. But later we began to find things like How to Vanquish the Still Lingering Shadow of God and How to Hume a Hegel-Kant: A Program for the Naturalization of Normative Consciousness. Normativity, the connection between reason, duty, and life, is a somewhat more basic issue in philosophy than proper names. By the time of his 2017 APA presidential address, Charting the Landscape of Reason, it seemed to me that Ken had clearly gone far beyond issues of reference, and not only on Sunday morning for Philosophy Talk. He had found a broader and more natural home for his active, searching, and creative mind. He had become a philosopher who had interesting things to say not only about the most basic issues in our field but all sorts of wider concerns. His Facebook page included a steady stream of thoughtful short essays on social, political, and economic issues. As the essay below shows, he could bring philosophy, cognitive science, and common sense to bear on such issues, and wasn't afraid to make radical suggestions.
Some of us are now finishing the references and preparing an index for Referring to the World, to be published by Oxford University Press. His next book was to be The Natural History of Normativity. He died as he was consolidating the results of thirty-five years of exciting productive thinking on reference, and beginning what should have been many, many more productive and exciting years spent illuminating reason and normativity, interpreting the great philosophers of the past, and using his wisdom to shed light on social issuesfrom robots to all sort of other things.
His loss was not just the loss of a family member, friend, mentor and colleague to those who knew him, but the loss, for the whole world, of what would have been an illuminating and important body of philosophical and practical thinking. His powerful and humane intellect will be sorely missed.
John Perry
Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery, by automatons in human form, it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.
John Stuart Mill, On Liberty (1859)
Some believe that we are on the cusp of a new age. The day is coming when practically anything that a human can do (at least anything that the labor market is willing to pay a human being a decent wage to do) will soon be doable more efficiently and cost effectively by some AI-driven automated device. If and when that day does arrive, those who own the means of production will feel ever increasing pressure to discard human workers in favor of an artificially intelligent work force. They are likely to do so as unhesitatingly as they have always set aside outmoded technology in the past.
We are very unlikely to be inundated anytime soon with a race of thinking robotsat least not if we mean by thinking that peculiar thing that we humans do, done in precisely the way that we humans do it.
To be sure, technology has disrupted labor markets before. But until now, even the most far-reaching of those disruptions have been relatively easy to adjust to and manage. That is because new technologies have heretofore tended to displace workers from old jobs that either no longer needed to be done, or at least no longer needed to be done by humans, into either entirely new jobs that were created by the new technology, or into old jobs for which the new technology, directly or indirectly, caused increased demand.
This time things may be radically different. Thanks primarily to AI's presumed potential to equal or surpass every human cognitive achievement or capacity, it may be that many humans will be driven out of the labor market altogether.
Yet it is not necessarily time to panic. Skepticism about the impact of AI is surely warranted on inductive grounds alone. Way back in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, an event that launched the first AI revolution, the assembled gaggle of AI pioneers (all ten of them) breathlessly anticipated that the mystery of fully general artificial intelligence could be solved within a couple of decades at most. In 1961, Marvin Minsky, for example, was confidently proclaiming, We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. Well over a half century later, we are still waiting for the revolution to be fully achieved.
AI has come a long way since those early days: it is now a very big deal. It is a major focus of academic research, and not just among computer scientists. Linguists, psychologists, the legal establishment, the medical establishment, and a whole host of others have gotten into the act in a very big way. AI may soon be talking to us in flawless and idiomatic English, counseling us on fundamental life choices, deciding who gets imprisoned for how long, and diagnosing our most debilitating diseases. AI is also big business. The worldwide investment in AI technology, which stood at something like $12 billion in 2018, will top $200 billion by 2025. Governments are hopping on the AI bandwagon. The Chinese envision the development of a trillion-dollar domestic AI industry in the relatively near term. They clearly believe that the nation that dominates AI will dominate the world. And yet, a sober look at the current state of AI suggests that its promise and potential may still be a tad oversold.
Excessive hype is not confined to the distant past. One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled deep learning variety, rather than by AI of the more or less passé logic-based symbolic processing variety, affectionately known in some quarters, and derisively in others, as GOFAI (Good Old-Fashioned Artificial Intelligence).
It was mostly logic-based, symbolic processing GOFAI that so fired the imaginations of the founders of AI back in 1956. Admittedly, to the extent that you measure success by where time, money, and intellectual energy are currently being invested, GOFAI looks to be something of a dead letter. I don't want to rehash the once hot theoretical and philosophical debates over which approach to AI (logic-based symbolic processing, or neural nets and deep learning) is the more intellectually satisfying approach. Especially back in the '80s and '90s, those debates raged with what passes in the academic domain as white-hot intensity. They no longer do, but not because they were decisively settled in favor of deep learning and neural nets more generally. It's more that machine learning approaches, mostly in the form of deep learning, have recently achieved many impressive results. Of course, these successes may not be due entirely to the anti-GOFAI character of these approaches. Even GOFAI has gotten into the machine learning act with, for example, Bayesian networks. The more relevant divide may be between probabilistic approaches of various sorts and logic-based approaches.
It is important to distinguish AI-as-engineering from AI-as-cognitive-science. The former is where the real money turns out to be.
However exactly you divide up the AI landscape, it is important to distinguish what I call AI-as-engineering from what I call AI-as-cognitive-science. AI-as-engineering isn't particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things. The strategy of engineering machines that do things that are in some sense intelligent, even if they do what they do in their own way, is a perfectly fine way to pursue artificial intelligence. AI-as-cognitive-science, on the other hand, takes as its primary goal that of understanding and perhaps reverse engineering the human mind. AI pretty much began its life by being in this business, perhaps because human intelligence was the only robust model of intelligence it had to work with. But these days, AI-as-engineering is where the real money turns out to be.
Though there is certainly value in AI-as-engineering, I confess to still having a hankering for AI-as-cognitive-science. And that explains why I myself still feel the pull of the old logic-based symbolic processing approach. Whatever its failings, GOFAI had as one of its primary goals that of reverse engineering the human mind. Many decades later, though we have definitely made some progress, we still haven't gotten all that far with that particular endeavor. When it comes to that daunting task, just about all the newfangled probability and statistics-based approaches to AI (most especially deep learning, but even approaches that have more in common with GOFAI, like Bayesian nets) strike me as, if not exactly nonstarters, then at best only a very small part of the truth. Probably the complete answer will involve some synthesis of older approaches and newer approaches, and perhaps even approaches we haven't even thought of yet. Unfortunately, however, although there are a few voices starting to sing such an ecumenical tune, neither ecumenicalism nor intellectual modesty is exactly the rage these days.
Back when the competition over competing AI paradigms was still a matter of intense theoretical and philosophical dispute, one of the advantages often claimed on behalf of artificial neural nets over logic-based symbolic approaches was that the former but not the latter were directly neuronally inspired. By directly modeling its computational atoms and computational networks on neurons and their interconnections, the thought went, artificial neural nets were bound to be truer to how the actual human brain does its computing than its logic-based symbolic processing competitor could ever hope to be.
Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world.
This is not the occasion to debate such claims at length. My own hunch is that there is little reason to believe that deep learning actually holds the key to finally unlocking the mystery of general-purpose, humanlike intelligence. Despite being neuronally inspired, many of the most notable successes of the deep learning paradigm depend crucially on the ability of deep learning architectures to do something that the human brain isn't all that good at: extracting highly predictive, though not necessarily deeply explanatory, patterns on the basis of being trained up, via either supervised or unsupervised learning, on huge data sets consisting, from the machine's-eye point of view, of a plethora of weakly correlated feature bundles, without the aid of any top-down direction or built-in worldly knowledge. That is an extraordinarily valuable and computationally powerful technique for AI-as-engineering. And it is perfectly suited to the age of massive data, since the successes of deep learning wouldn't be possible without big data.
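To make that kind of bottom-up pattern extraction concrete, here is a minimal sketch, entirely my own illustration rather than anything drawn from a deployed system: a small neural network trained on a large synthetic table of weakly informative features, with no built-in worldly knowledge. The data set, feature counts, and network shape are all invented for illustration.

```python
# Minimal illustration (assumed, synthetic setup): a small neural network
# extracting predictive patterns from many weakly correlated features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A large table of examples, each a bundle of weakly informative features.
X, y = make_classification(n_samples=50_000, n_features=200,
                           n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: fit weights that predict the label from the features,
# with no top-down direction about what any feature "means".
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```

Nothing about the sketch requires insight into the domain; given enough labeled examples, the pattern comes out of the optimization, which is precisely the point made above.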
It's not that we humans are pikers at pattern extraction. As a species, we do remarkably well at it, in fact. But I doubt that the capacity for statistical analysis of huge data sets is the core competence on which all other aspects of human cognition are ultimately built. But here's the thing. Once you've invented a really cool new hammer, which deep learning very much is, it's a very natural human tendency to start looking for nails to hammer everywhere. Once you are on the lookout for nails everywhere, you can expect to find a lot more of them than you might have at first thought, and you are apt to find some of them in some pretty surprising places.
But if it's really AI-as-cognitive-science that you are interested in, it's important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind. You can't let your obsession with your cool new hammer make you lose sight of the fact that in some domains, the human mind seems to deploy quite a different trick from the main sorts of tricks at the core not only of deep learning but also of other statistical paradigms (some of which, again, are card-carrying members of the GOFAI family). In particular, the human mind is often able to learn quite a lot from relatively little and comparatively impoverished data. This remarkable fact has led some to conjecture that the human mind must come antecedently equipped with a great deal of endogenous, special-purpose, task-specific cognitive structure and content. If true, that alone would suffice to make the human mind rather unlike your typical deep learning architecture.
Indeed, deep learning takes quite the opposite approach. A deep learning network may be trained up to represent words as points in a micro-featural vector space of, say, three hundred dimensions, and on the basis of such representations it might learn, after many epochs of training on a really huge data set, to make the sort of pragmatic inferences (from, say, "John ate some of the cake" to "John did not eat all of the cake") that humans make quickly, easily, and naturally, without the focused training required by deep learning and similar approaches. The point is that deep learning can learn to do various cool things, things that one might once have thought only human beings can do, and although it can do some of those things quite well, it still seems highly unlikely that it does those cool things in precisely the way that we humans do.
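What "words as points in a vector space" amounts to can be shown in a few lines. The sketch below is purely illustrative and assumes nothing about any particular system: the tiny vocabulary is mine, and the vectors are random stand-ins for the embeddings a real network would learn from an enormous corpus.

```python
# Illustrative only: random stand-ins for learned 300-dimensional word vectors.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["john", "ate", "some", "all", "of", "the", "cake"]
embeddings = {w: rng.normal(size=300) for w in vocab}

def cosine(u, v):
    """Similarity of two word vectors: the geometry a trained network reasons over."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A sentence becomes a sequence of points in the space; a trained network maps
# such sequences to judgments such as "some" pragmatically implying "not all".
sentence = [embeddings[w] for w in "john ate some of the cake".split()]
print(round(cosine(embeddings["some"], embeddings["all"]), 3))
```

With random vectors the similarity score means nothing; only after training on huge corpora do the distances and directions in the space come to track anything like usage or meaning.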
I stress again, though, that if you are not primarily interested in AI-as-cognitive-science, but solely in AI-as-engineering, you are free to care not one whit whether deep learning architectures and their cousins hold the ultimate key to understanding human cognition in all its manifestations. You are free to embrace and exploit the fact that such architectures are not just good, but extraordinarily good, at what they do, at least when they are given large enough data sets to work with. Still, in thinking about the future of AI, especially in light of both our darkest dystopian nightmares and our brightest utopian dreams, it really does matter whether we are envisioning a future shaped by AI-as-engineering or by AI-as-cognitive-science. If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots, at least not if we mean by thinking that peculiar thing that we humans do, done in precisely the way that we humans do it.
Once you've invented a new hammer, which deep learning very much is, it's a very natural human tendency to start looking for nails to hammer everywhere.
Deep learning and its cousins may do what they do better than we could possibly do what they do. But that doesn't imply that they do what we do better than we do what we do. If so, then, at the very least, we needn't fear, at least not yet, that AI will radically outpace humans in our most characteristically human modes of cognition. Nor should we expect the imminent arrival of the so-called singularity, in which human intelligence and machine intelligence somehow merge to create a superintelligence that surpasses the limits of each. Given that we still haven't managed to understand the full bag of tricks our amazing minds deploy, we haven't the slightest clue as to what such a merger would even plausibly consist in.
Nonetheless, it would still be a major mistake to lapse into a false sense of security about the potential impact of AI on the human world. Even if current AI is far from being the holy grail of a science of mind that finally allows us to reverse engineer the mind, it will still allow us to engineer extraordinarily powerful cognitive networks, as I will call them, in which human intelligence and artificial intelligence of some kind or other play quite distinctive roles. Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing what I will call the division of cognitive labor between human and artificial intelligence within engineered cognitive networks will be with us to stay. It will almost certainly be a rather fraught and urgent matter, and this will be thanks in large measure to the power of AI-as-engineering rather than to the power of AI-as-cognitive-science.
Indeed, there is a distinct possibility that AI-as-engineering may eventually reduce the role of human cognitive labor within future cognitive networks to the bare minimum. It is that possibility, not the possibility of the so-called singularity or the possibility that we will soon be surrounded by a race of free, autonomous, creative, or conscious robots chafing at our undeserved dominance over them, that should now and for the foreseeable future worry us most. Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world. It will not necessarily do so by superseding human intelligence, but simply by displacing a great deal of it within various engineered cognitive networks. And if that's right, it simply won't take the arrival of anything close to full-scale super AI, as we might call it, to radically disrupt, for good or for ill, the built cognitive world.
Start with the fact that much of the cognitive work that humans are currently tasked to do within extant cognitive networks doesn't come close to requiring the full range of human cognitive capacities to begin with. A human mind is an awesome cognitive instrument, one of the most powerful instruments that nature has seen fit to evolve. (At least on our own lovely little planet! Who knows what sorts of minds evolution has managed to design on the millions upon millions of mind-infested planets that must be out there somewhere?) But stop and ask yourself: how much of the cognitive power of her amazing human mind does a coffee-house barista, say, really use in her daily work?
Not much, I would wager. And precisely for that reason, it's not hard to imagine coffee houses of the future in which more and more of the cognitive labor that needs doing is done by AI finely tuned to the cognitive loads it will need to carry within such networks. More generally, it is abundantly clear that much of the cognitive labor that needs doing within our total cognitive economy, and that now happens to be performed by humans, is cognitive labor for which we humans are often vastly overqualified. It would be hard to lament the offloading of such cognitive labor onto AI technology.
Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing the division of cognitive labor between human and artificial intelligence will be with us to stay.
But there is also a flip side. The twenty-first-century economy is already a highly data-driven economy. It is likely to become a great deal more so, thanks, among other things, to the emergence of the internet of things. The built environment will soon be even more replete with so-called smart devices. And these smart devices will constantly be collecting, analyzing, and sharing reams and reams of data on every human being who interacts with them. It will not be just the usual suspects, like our computers, smartphones, or smartwatches, that are so engaged. It will be our cars, our refrigerators, indeed every system or appliance in every building in the world. There will be data-collecting monitors of every sort: heart monitors, sleep monitors, baby monitors. There will be smart roads and smart train tracks. There will be smart bridges that constantly monitor their own state and automatically alert the transportation department when they need repair. Perhaps they will shut themselves down and spontaneously reroute traffic while they are waiting for the repair crews to arrive. It will require an extraordinary amount of cognitive labor to keep such a built environment running smoothly. And for much of that cognitive labor, we humans are vastly underqualified. Try, for example, running a data mining operation using nothing but human brain power. You'll see pretty quickly that human brains are not at all the right tool for the job, I would wager.
Perhaps what should really worry us, I am suggesting, is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack. AI-as-engineering may give us the power to design cognitive networks in which each node is exquisitely fine-tuned to the cognitive load it is tasked to carry. Since distinctively human intelligence will often be either too much or too little for the task at hand, future cognitive networks may assign very little cognitive labor to humans. And that is precisely how it might come about that the demand for human cognitive labor within the overall economy may be substantially diminished. How should we think about the advance of AI in light of its capacity to allow us to re-imagine and re-engineer our cognitive networks in this way? That is the question I address in the remainder of this essay.
There may be lessons to be learned from the ways that we have coped with disruptive technological innovations of the past. So perhaps we should begin by looking backward rather than forward. The first thing to say is that many innovations of the past are now widely seen as good things, at least on balance. They often spared humans work that paid dead-end wages, or work that was dirty and dangerous, or work that was the source of mind-numbing drudgery.
What should really worry us is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack.
But we should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come. Even looking backward, we can see that new and disruptive technologies have sometimes been the culprit in increasing rather than decreasing the drudgery and oppressiveness of work. They have also served to rob work of a sense of meaning and purpose. The assembly line is perhaps the prime example. The rise of the assembly line doubtless played a vital role in making the mass production and distribution of all manner of goods possible. It made the factory worker vastly more productive than, say, the craftsman of old. In so doing, it increased the market for mass-produced goods, while simultaneously diminishing the market for the craftsman's handcrafted goods. As such, it played a major role in increasing living standards for many. But it also had the downside effect of turning many human agents into mere appendages within a vast, impersonal, and relentless mechanism of production.
All things considered, it would be hard to deny that trading in skilled craftsmanship for unskilled or semiskilled factory labor was a good thing. I do not intend to relitigate that choice here. But it is worth asking whether all things really were considered, and considered not just by those who owned the means of production but collectively by all the relevant stakeholders. I am no historian of political economy. But I venture the conjecture that the answer to that question is a resounding no. More likely than not, disruptive technological change was simply foisted on society as a whole, primarily by those who owned and controlled the means of production, and primarily to serve their own profit, with little, if any, intentionality or democratic deliberation and participation on the part of a broader range of stakeholders.
Given the disruptive potential of even AI-as-engineering, we cannot afford to leave decisions about its future development and deployment solely in the hands of those who stand to make vast profits from its use. This time around, we have to find a way to ensure that all relevant stakeholders are involved and that we are more intentional and deliberative in our decision making than we were about the disruptive technologies of the past.
I am not necessarily advocating the sort of socialism that would require the means of production to be collectively owned or regulated. But even if we aren't willing to go so far as collectively seizing the machines, as it were, we must get past the point of treating not just AI but all technology as a thing unto itself, with a life of its own, whose development and deployment are entirely independent of our collective will. Technology is never self-developing or self-deploying. Technology is always and only developed and deployed by humans, in various political, social, and economic contexts. Ultimately, it is and must be entirely up to us, and up to us collectively, whether, how, and to what end it is developed and deployed. As soon as we lose sight of the fact that it is up to us collectively to determine whether AI is developed and deployed in a way that enhances the human world rather than diminishes it, it becomes all too easy to give in to either utopian cheerleading or dystopian fearmongering. We need to discipline ourselves not to give in to either prematurely. Only such discipline will afford us the space to consider various tradeoffs deliberatively, reflectively, and intentionally.
We should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come.
Utopian cheerleaders for AI often blithely insist that it is more likely to decrease rather than increase the amount of dirt, danger, or drudgery to which human workers are subject. As long as AI is not turned against us (and why should we think that it would be?), it will not eliminate the work for which we humans are best suited, but only the work that would be better left to machines in the first place.
I do not mean to dismiss this as an entirely unreasonable thought. Think of coal mining. Time was when coal mining was extraordinarily dangerous and dirty work. Over 100,000 coal miners died in mining accidents in the U.S. alone during the twentieth century, not to mention the toll of black lung disease they suffered. Thanks largely to automation and computer technology, including robotics and AI, your average twenty-first-century coal industry worker relies a lot more on his or her brains than on mere brawn and is subject to a lot less danger and dirt than earlier generations of coal miners were. Moreover, it takes far fewer coal miners to extract more coal than the coal miners of old could possibly hope to extract.
To be sure, thanks to certain other forces having nothing to do with the AI revolution, the number of people dedicated to extracting coal from the earth will likely diminish even further in the relatively near term. But that just goes to show that even if we could manage to tame AI's effect on the future of human work, we've still got plenty of other disruptive challenges to face as we begin to re-imagine and re-engineer the made human world. And that gives us even more reason to be intentional, reflective, and deliberative in thinking about the development and deployment of new technologies. Whatever one technology can do on its own to disrupt the human world, the interactive effects of multiple apparently independent technologies can greatly amplify the total level of disruption to which we may be subject.
I suppose that, if we had to choose, utopian cheerleading would at least feel more satisfying and uplifting than dystopian fearmongering. But we shouldn't let whatever utopian buzz we fall into while contemplating the future blind us to the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall in the first place. The point is that that boundary is likely to be drawn, erased, and redrawn by the progress of AI. And as our conception of the proper boundary evolves, our conception of what we humans are here for is likely to evolve right along with it.
The upshot is clear. If it is only relative to our sense of where the boundary is properly drawn that we could possibly know whether to embrace or recoil from the future, then we are currently in no position to judge on behalf of our future selves which outcomes are to be embraced and which are to be feared. Nor, perhaps, are we entitled to insist that our current sense of where the boundary should be drawn remain fixed for all time and circumstances.
To drive this last point home, it will help to consider three different cognitive networks in which AI already plays, or soon can be expected to play, a significant role: the air traffic control system, the medical diagnostic and treatment system, and what I'll call the ground traffic control system. My goal in doing so is to examine some subtle ways in which our sense of proper boundaries may shift.
We cannot afford to leave decisions about the future development and deployment even of AI-as-engineering solely in the hands of those who stand to make vast profits from its use.
Begin with the air traffic control system, one of the more developed systems in which brain power and computer power have been jointly engineered to cooperate in systematically discharging a variety of complex cognitive burdens. The system has steadily evolved over many decades into one in which a surprising amount of cognitive work is done by software rather than humans. To be sure, there are still many humans involved. Human pilots sit in every cockpit and human brains monitor every air traffic control panel. But it is fair to say that humans, especially human pilots, no longer really fly airplanes on their own within this vast cognitive network. It's really the system as a whole that does the flying. Indeed, it's only on certain occasions, and on an as-needed basis, that the human beings within the system are called upon to do anything at all. Otherwise, they are mostly along for the ride.
This particular human-computer cognitive network works extremely well for the most part. It is extraordinarily safe in comparison with travel by automobile. And it is getting safer all the time. Its ever-increasing safety would seem to be in large measure due to the fact that more and more of the cognitive labor done within the system is being offloaded onto machine intelligence and taken away from human intelligence. Indeed, I would hazard the guess that almost no increases in safety have resulted from taking burdens away from algorithms and machines and giving them to humans instead.
To be sure, this trend started long before AI had reached anything like its current level of sophistication. But with the coming of age of AI-as-engineering, you can expect that the trend will only accelerate. For example, starting in the 1970s, decades of effort went into building human-designed rules meant to provide guidance to pilots as to which maneuvers, executed in which order, would enable them to avoid any possible or pending mid-air collision. In more recent years, engineers have been using AI techniques to help design a new collision avoidance system that will make possible a significant increase in air safety. The secret to the new system is that instead of leaving the discovery of optimal rules of the airways to human ingenuity, the problem has been turned over to the machines. The new system uses computational techniques to derive an optimized decision logic that better deals with various sources of uncertainty and better balances competing system objectives than anything we humans would be likely to think up on our own. The new system, called the Airborne Collision Avoidance System X (ACAS X), promises to pay considerable dividends by reducing both the risk of mid-air collision and the need for alerts that call for corrective maneuvers in the first place.
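The general flavor of deriving a decision logic rather than hand-writing one can be suggested with a toy sketch. To be clear, this is not the actual ACAS X logic or its encounter model; the states, costs, and probabilities below are invented, and the point is only to show optimization under uncertainty producing a lookup-table policy instead of human-authored rules.

```python
# Toy illustration (invented numbers, not ACAS X): derive an advisory policy
# by minimizing expected cost under an uncertain encounter model.
ALTS = range(-3, 4)                      # vertical separation buckets (own minus intruder)
ACTIONS = ("descend", "level", "climb")
ALERT_COST, COLLISION_COST = 1.0, 1000.0

def cost(sep, action):
    c = 0.0 if action == "level" else ALERT_COST   # penalize unnecessary alerts
    if sep == 0:
        c += COLLISION_COST                        # heavily penalize near-collision
    return c

def transitions(sep, action):
    own = {"descend": -1, "level": 0, "climb": 1}[action]
    # Intruder motion is uncertain: it may descend, hold, or climb.
    return [(p, max(-3, min(3, sep + own - intr)))
            for p, intr in ((0.25, -1), (0.5, 0), (0.25, 1))]

# Finite-horizon backward induction yields a simple lookup-table policy.
V = {s: 0.0 for s in ALTS}
policy = {}
for _ in range(10):
    newV = {}
    for s in ALTS:
        q = {a: cost(s, a) + sum(p * V[n] for p, n in transitions(s, a))
             for a in ACTIONS}
        policy[s] = min(q, key=q.get)
        newV[s] = q[policy[s]]
    V = newV

print(policy)   # e.g., climb when the intruder is below, descend when it is above
```

Even in this crude form, the advisory table falls out of the cost model and the uncertainty model rather than from anyone's intuition about the airways, which is the shift the paragraph above describes.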
In all likelihood, the system will not be foolproof; probably no system ever will be. But in comparison with automobile travel, air travel is already extraordinarily safe. It's not because the physics makes flying inherently safer than driving. Indeed, there was a time when flying was much riskier than it currently is. What makes air travel so much safer is primarily the difference between the cognitive networks within which each operates. In the ground traffic control system, almost none of the cognitive labor has been offloaded onto intelligent machines. Within the air traffic control system, a great deal of it has.
To be sure, every now and then, the flight system will call on a human pilot to execute a certain maneuver. When it does, the system typically isn't asking for anything like expert opinion from the human. Though it may sometimes need to do that, in the course of its routine, day-to-day operations the system relies hardly at all on the ingenuity or intuition of human beings, including human pilots. When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault. Humans too often fail to respond, or they respond with the wrong maneuver, or they execute the needed maneuver in an untimely fashion.
Utopian buzz may serve to blind us to the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall.
I have focused on the air traffic control system because it is a relatively mature and stable cognitive network in which a robust balance between human and machine cognitive labor has been achieved over time. Given its robustness and stability and the degree of safety it provides, it's pretty hard to imagine anyone feeling any nostalgia for the days when the task of navigating the airways fell more squarely on the shoulders of human beings and less squarely on machines. On the other hand, it is not at all hard to imagine a future in which the cognitive role of humans is reduced even further, if not entirely eliminated. No one would now dream of traveling on an airplane that wasn't furnished with the latest radar system or the latest collision avoidance software. Perhaps the day will soon come when no one would dream of traveling on an airplane piloted by, of all things, a human being rather than by a robotic AI pilot.
I suspect that what is true of the air traffic control system may eventually be true of many of the cognitive networks in which human and machine intelligence systematically interact. We may find that the cognitive labor once assigned to the human nodes has been given over to intelligent machines for narrow economic reasons alone, especially if we fail to engage in collective decision making that is intentional, deliberative, and reflective, and thereby leave ourselves at the mercy of the short-term economic interests of those who currently own and control the means of production.
We may comfort ourselves that even in such an eventuality, what is left to us humans will be cognitive work of very high value, finely suited to the distinctive capacities of human beings. But I do not know what would now assure us of the inevitability of such an outcome. Indeed, it may turn out that there isn't really all that much that needs doing within such networks that is best done by human brains at all. It may be, for example, that within most engineered cognitive networks, the human brains that still have a place will mostly be along for the ride. Both possibilities are, I think, genuinely live options. And if I had to place a bet, I would bet that for the foreseeable future the total landscape of engineered cognitive networks will increasingly contain networks of both kinds.
In fact, the two systems I mentioned earlier, the medical diagnostic and treatment system and the ground transportation system, already provide evidence for my conjecture. Start with the medical diagnostic and treatment system. Note that a great deal of medical diagnosis involves expertise at interpreting the results of various forms of medical imaging. As things currently stand, it is mostly human beings who do the interpreting. But an impressive variety of machine learning algorithms that can do at least as well as humans are being developed at a rapid pace. For example, CheXNet, developed at Stanford, promises to equal or exceed the performance of human radiologists in diagnosing a wide variety of different diseases from X-ray scans. Partly because of the success of CheXNet and other machine learning algorithms, Geoffrey Hinton, a founding father of deep learning, has come to regard radiologists as an endangered species. On his view, medical schools ought to stop training radiologists beginning right now.
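For the curious, here is a bare-bones sketch of the kind of model a system like CheXNet is reported to be: a DenseNet-121 convolutional network with a multi-label output head, one probability per radiological finding. This is my own illustration, not the Stanford code; the label count, preprocessing values, and absence of trained weights are assumptions, so its outputs mean nothing until the model has actually been trained on labeled X-rays.

```python
# Illustrative sketch (not the actual CheXNet code): a DenseNet-121 backbone
# with a 14-way multi-label head, as the CheXNet paper is reported to use.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_FINDINGS = 14  # one output per finding (assumed label set)

model = models.densenet121()  # untrained weights in this sketch
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def findings(image_path: str) -> torch.Tensor:
    """Return a vector of per-finding probabilities for one chest X-ray."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return torch.sigmoid(logits).squeeze(0)
```

The interesting point for the argument is how ordinary this machinery is: the same pattern-extraction recipe discussed earlier, pointed at pixels and diagnostic labels instead of words or game positions.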
Even if Hinton is right, that doesn't mean that all the cognitive work done by the medical diagnostic and treatment system will soon be done by intelligent machines. Though human-centered radiology may soon come to seem quaint and outmoded, there is, I think, no plausible short- to medium-term future in which human doctors are completely written out of the medical diagnostic and treatment system. For one thing, though the machines beat humans at diagnosis, we still outperform the machines when it comes to treatment, perhaps because humans are much better at things like empathy than any AI system is now or is likely to be anytime soon. Still, even if human doctors are never fully eliminated from the diagnostic and treatment cognitive network, it is likely that their enduring roles within such networks will evolve so much that the human doctors of tomorrow will bear little resemblance to the human doctors of today.
We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst.
By contrast, there is a quite plausible near- to medium-term future in which human beings within the ground traffic control system are gradually reduced to the status of passengers. Someday in the not terribly distant future, our automobiles, buses, trucks, and trains will likely be part of a highly interconnected ground transportation system in which much of the cognitive labor is done by intelligent machines rather than human brains. The system will involve smart vehicles in many different configurations, each loaded with advanced sensors that allow them to collect, analyze, and act on huge stores of data, in coordination with each other, the smart roadways on which they travel, and perhaps some centralized information hub that constantly monitors the whole. Within this system, our vehicles will navigate the roadways and railways safely and smoothly with very little guidance from humans. Humans will be able to direct the system to get this or that cargo or passenger from here to there. But the details will be left to the system to work out without much, if any, human intervention.
Such a development, if and when it comes to full fruition, will no doubt be accompanied by quantum leaps in safety and efficiency. But it would also no doubt be a source of a possibly permanent and steep decrease in the net demand for human labor of the sort referred to at the outset. All around the world, many millions of human beings make their living by driving things from one place to another. Labor of this sort has traditionally been rather secure. It cannot possibly be outsourced to foreign competitors: you cannot transport beer from Colorado to Ohio by hiring a low-wage driver operating a truck in Beijing. But it may soon be the case that we can outsource such work after all. Not to foreign laborers, but to intelligent machines, right here in our midst!
I end where I began. The robots are coming. Eventually, they may come for every one of us. Walls will not contain them. We cannot outrun them. Nor will running faster than the next human being suffice to save us from them. Not in the long run. They are relentless, never breaking pace, never stopping to savor their latest prey before moving on to the next.
If we cannot stop or reverse the robot invasion of the built human world, we must turn and face them. We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst. Should we seek to regulate their development and deployment? Should we accept the inevitability that we will lose much work to them? If so, perhaps we should rethink the very basis of our economy. Nor is it merely questions of money that we must face. There are also questions of meaning. What exactly will we do with ourselves if there is no longer any economic demand for human cognitive labor? How shall we find meaning and purpose in a world without work?
These are the sorts of questions that the robot invasion will force us to confront. It should be striking that these are also the questions presaged in my epigraph from Mill. Over a century before the rise of AI, Mill realized that the most urgent question raised by the rise of automation would not be whether automata could perform certain tasks faster or cheaper or more reliably than human beings might. Instead, the most urgent question is what we humans would become in the process of substituting machine labor for human labor. Would such a substitution enhance us or diminish us? That has, in fact, always been the most urgent question raised by disruptive technologies, though we have seldom recognized it.
This time around, may we face the urgent question head on. And may we do so collectively, deliberatively, reflectively, and intentionally.
Read more here:
The Robots Are Coming - Boston Review
Google is building COVID-19 screening website as Trump declares national emergency – VentureBeat
Alphabet's Verily is creating a website for people to screen for coronavirus and help them find testing sites at Target, Walgreens, CVS, and Walmart locations, according to Google. The initiative will begin in the San Francisco Bay Area, with hopes of expanding coverage to more areas in the future. The move is part of a public-private partnership to dispense COVID-19 testing to millions of Americans in the weeks ahead, according to Vice President Mike Pence. Testing to confirm COVID-19 cases has been a critical part of response plans in other countries around the world.
The news was announced as President Trump declared a national emergency today in a White House press conference. Trump, Pence, and administration officials implied the Google screening website would serve people nationwide. Positive coronavirus cases have now been found in all 50 states, and on Wednesday COVID-19 was declared a global pandemic by the World Health Organization.
The federal government will point people to the Google website, where they can fill out a screening questionnaire stating their symptoms and risk factors and, if necessary, will be directed to a drive-through testing location. Automated machines will then be used to return results in 24 to 36 hours.
About 1,700 engineers are working on the website, President Trump said today. VentureBeat reached out to Google for more information about the website.
Dr. Deborah Birx said that nasal swab samples can be delivered to doctors' offices and hospitals, then picked up by companies like Quest Diagnostics.
"The important piece in this all is, they've gone from a machine that may have a lower throughput, to the potential to have automated extraction," she said. "It's really key for the laboratory people; it's an automated extraction of the RNA that then runs in an automated way on the machine with no one touching it. And the result comes out of the other end." She said that going from sample to machine to results removes the manual procedures that were slowing down testing and thus delaying results.
In emergency executive actions announced by Trump today, the Department of Education will waive interest on student loans held by federal government agencies, and he instructed the secretary of energy to buy crude oil for the strategic reserve.
At midnight tonight, the United States will suspend travel from Europe, and U.S. citizens traveling into the country will be asked to take part in a voluntary 14-day quarantine.
Tech giants like Apple, Amazon, Facebook, Google, and Microsoft took part in a teleconference with White House CTO Michael Kratsios to discuss how artificial intelligence and tech can help combat coronavirus. A White House statement said the discussion touched on issues like the creation of tools. Public health officials and authorities from China to Singapore and beyond have used AI as part of solutions to detect and fight coronavirus since the novel disease emerged in December 2019.
Earlier this year, Google's DeepMind also released structure predictions of proteins associated with the virus that causes COVID-19, generated with the latest version of the AlphaFold system.
"These structure predictions have not yet been experimentally verified, but the hope is that by accelerating their release they may contribute to the scientific community's understanding of how the virus functions and experimental work in developing future treatments," CEO Sundar Pichai said in a blog post last week.
Upon questioning by reporters at the press conference, Trump refused to take responsibility for the heretofore slow U.S. response to the pandemic. He also evaded questions about whether he needs to be tested for COVID-19, despite the fact that days ago he was in close proximity to Fabio Wajngarten, press secretary to Brazilian president Jair Bolsonaro, who has since tested positive. Eventually, after multiple reporters pressed him on the issue, Trump said he would get tested, but not, he said, because of his contact with the two men. Miami Mayor Francis Suarez was also in contact with the two men at Trump's Mar-a-Lago resort and today tested positive for coronavirus.
Throughout the press conference, President Trump, Vice President Pence, and a roster of scientists and executives shook hands and touched the same mic.
See the article here:
Google is building COVID-19 screening website as Trump declares national emergency - VentureBeat