Category Archives: Deep Mind
A deep sleep to awaken your body – The New Indian Express
Express News Service
CHENNAI: Yoga is an ancient science that acts as a restorative therapy for overall relaxation of the mind and body. Yoga Nidra is a unique recuperative technique wherein sleep is used as a meditation process for healing purposes. Also known as yogic sleep, it's a guided process wherein experts direct practitioners into a deep state of relaxation that stands on the edge of waking and sleeping, which calms the nervous system.
Yoga Nidra does not involve performing asanas; instead it's about relaxing, getting into a meditative state of mind and going into conscious sleep. While in meditation one is awake, in yogic sleep it's possible to enter a state of bliss which is deeply healing; the mind and body are relaxed while the consciousness remains awake.
Best practice for working professionals
Stress and anxiety have become a normal part of working professionals' lives, putting untold strain on the autonomic nervous system. Stress and anxiety not only affect bodily functions such as breathing, blood flow, heartbeat and digestion but also disturb sleep and decrease focus, creativity, clarity and concentration. All these problems have a direct and indirect effect on productivity. With Yoga Nidra, the body releases melatonin, a hormone with antioxidant properties, into the bloodstream, which helps manage blood pressure, digestion, stress levels and immune function, and also induces restful sleep. The relaxation achieved through Yoga Nidra helps improve concentration, clarity, focus and productivity.
Benefits of Yoga Nidra
Improves cognitive abilities: Yoga calms the mind and body and slows down nervous system activity. Thus, stress is reduced, and its physical and mental symptoms, such as muscle tension and headaches, are also released. Yoga Nidra also enhances one's cognitive abilities, since the mind stops stressing and instead focuses on thinking clearly, be it for problem-solving, creative thinking and so on. Not being overwhelmed by stress, the mind is free to function at its full capability. Hence, yogic sleep also stalls the cognitive ageing of the brain, resulting in improved attention span and memory, which are important for carrying out everyday activities.
Improves focus and clarity of mind and encourages a state of mindfulness: When one is not stressed, it becomes easier to have a clear mind, to focus on relevant matters and to be mindful. The quality of mindfulness is born of accepting the present moment without judgment or worry and living it fully. This quality is another positive effect of the Yoga Nidra practice. Integrating mindfulness into everyday life allows one to live with a clear, calm purpose that fosters a good quality of life.
Growing self-confidence and esteem: It has been noted that with the regular practice of Yoga Nidra, a person's self-esteem and confidence can be thoroughly improved. An essential step in guided meditation and Yoga Nidra is the setting of intentions, or sankalpas, for oneself, which are essentially goals one desires to fulfil. Achieving a goal is exhilarating and does wonders for one's confidence and esteem, and Yoga Nidra propels one to do exactly that.
Improves the quality of sleep and overall health: Yogic sleep is immensely effective in improving sleep quality and regularising sleep patterns. Since one is less stressed, once one makes a sankalpa to sleep, one does so effortlessly, faster and with regularity. A good night's rest signals the absence of sleep disorders, which are both causes and symptoms of many diseases. Through frequent practice of Yoga Nidra, one enhances one's sleep cycles, and it stands to reason that one's health improves as well: blood pressure and cholesterol levels are lowered, immune and nervous system function improves, and there is plenty of energy at one's disposal.
Diminishes symptoms of stress, anxiety, depression, chronic pain and PTSD: Psychological disorders are primarily born of an unquiet mind and heightened negative emotions. Since Yoga Nidra calms the mind and releases pent-up emotions, it mitigates stress, freeing the person to think more clearly and work better. Hence, Yoga Nidra has been adopted the world over to treat anxiety, depression, chronic pain and post-traumatic stress disorder (PTSD), among others.
These disorders can be alleviated when the teacher directs practitioners into a deep state of relaxation, giving the mind and body the opportunity to rest, recover, and recuperate. Since Yoga Nidra also reduces inflammation by improving immune function, most aches and pains are dealt with effectively as well.
The mental health of practitioners has long appeared to benefit from this ancient therapy. A study of the impact of Yoga Nidra on the mental health of college professors found that the intervention group showed better results than the control group, which had no exposure to yogic sleep. Some studies describe the practice as a simple, effective treatment for insomnia and sleep disorders.
How to practise Yoga Nidra?
Practising Yoga Nidra requires a bit of patience initially. Wear comfortable workout clothes and lie down on a yoga mat in the Savasana pose, with eyes closed. It's best to choose a dark corner with no distractions in order to induce the required peace of mind.
Many people practise it right before turning in for the night, as they believe it improves the quality of their sleep. Yoga Nidra can completely change one's life with regular practice. So if a relaxed, stress-free life is what you desire, consider adding this practice to your everyday routine. It can be the secret of a successful and balanced professional life.
(The writer is founder of Divine Soul Yoga)
‘I don’t really trust papers out of top AI labs anymore’ – Analytics India Magazine
The role of scientific research in pushing the frontiers of artificial intelligence cannot be overstated. The researchers working at MIT's Computer Science and Artificial Intelligence Laboratory, the Stanford Artificial Intelligence Laboratory, Oxford University and many other top labs are shaping the future of humanity. In addition, most top AI labs, even private players such as DeepMind and OpenAI, publish on preprint servers to democratise and share knowledge.
But, how useful are these papers for the community at large?
Recently, a Reddit user published a post titled "I don't really trust papers out of Top Labs anymore." In the post, the user asked: "Why should the AI community trust these papers published by a handful of corporations and the occasional universities? Why should I trust that your ideas are even any good? I can't check them; I can't apply them to my own projects."
Citing the research paper titled "An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems," the Reddit user said: "It's 18 pages of talking through this pretty convoluted evolutionary and multitask learning algorithm; it's pretty interesting, solves a bunch of problems. But two notes. One, the big number they cite as the success metric is 99.43 on CIFAR-10, against a SotA of 99.40."
The Reddit user also referred to a chart towards the end of the paper that details how many TPU core-hours were used for just the training regimens that resulted in the final results.
The total is 17,810 core-hours. Let's assume that someone who doesn't work at Google would have to use on-demand pricing of USD 3.22 per hour. This means that these trained models cost about USD 57,348.
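As a quick back-of-the-envelope check of those figures, the arithmetic works out as follows (a minimal sketch, assuming the USD 3.22 per core-hour on-demand rate quoted in the post):

```python
# Rough cost estimate for the TPU core-hours cited above, assuming the
# on-demand rate of USD 3.22 per core-hour quoted in the Reddit post.
core_hours = 17_810
usd_per_core_hour = 3.22
print(f"Estimated training cost: USD {core_hours * usd_per_core_hour:,.2f}")
# Estimated training cost: USD 57,348.20
```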
"Strictly speaking, throwing enough compute at a general enough genetic algorithm will eventually produce arbitrarily good performance," he said. So while you can read the paper and collect interesting ideas about how to use genetic algorithms to accomplish multitask learning, having each new task leverage learned weights from previous tasks by defining modifications to a subset of components of a pre-existing model, few researchers could afford to reproduce or build on the result.
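For readers who want a concrete picture of what such an approach looks like, here is a minimal, hypothetical sketch of evolutionary task introduction: a new task is accommodated by mutating only a small subset of an existing model's components, so previously learned components are reused. This is a toy illustration of the general idea, not the paper's actual algorithm; the component representation and the fitness function are invented for the example.

```python
import copy
import random

random.seed(0)

def make_model(n_components=8):
    # Toy "model": a list of component parameters (stand-ins for network modules).
    return [random.random() for _ in range(n_components)]

def mutate(parent, n_changes=2):
    # Copy the parent and re-initialise only a few components, so the rest
    # of the previously learned model is reused unchanged.
    child = copy.deepcopy(parent)
    for i in random.sample(range(len(child)), k=n_changes):
        child[i] = random.random()
    return child

def fitness(model, task_target):
    # Toy stand-in for validation performance on the new task.
    return -sum((p - t) ** 2 for p, t in zip(model, task_target))

def introduce_task(parent, task_target, generations=20, offspring=8):
    # Evolve variants of the existing model until the best one fits the new task.
    best = parent
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(offspring)] + [best]
        best = max(candidates, key=lambda m: fitness(m, task_target))
    return best

old_task_model = make_model()
new_task_model = introduce_task(old_task_model, task_target=[0.5] * 8)
print(round(fitness(new_task_model, [0.5] * 8), 4))
```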
Jathan Sadowski, a senior fellow at Emerging Tech Lab, responded: "AI/ML research at places like Google and OpenAI is based on spending absurd amounts of money, compute, and electricity to brute force arbitrary improvements. The inequality, the trade-offs, the waste, all for incremental progress toward a bad future."
The Reddit post has been a source of much debate on social media. Many pointed out that there should be a new journal for papers whose results can be replicated in under eight hours on a single GPU.
"Findings that can't be replicated are intrinsically less reliable. And the fact that the ML community is maturing towards decent scientific practices instead of anecdotes is a positive sign," said Leon Derczynski, assistant professor at IT University of Copenhagen.
The replication crisis has been gripping the scientific community for ages. The AI domain is also grappling with it, mostly because researchers often don't share their source code. A replication crisis occurs when scientific studies are difficult or impossible to reproduce.
According to a 2016 Nature survey, more than 70 percent of researchers have tried and failed to reproduce another scientist's experiments. Further, more than 50 percent have failed to reproduce their own experiments.
Reproducibility is the basis of quality assurance in science as it enables past findings to be independently verified.
The scientific and research community strongly believes that withholding important aspects of studies, especially in domains where larger public good and societal well-being are concerned, does a great disservice.
According to the 2020 State of AI report, only 15 percent of AI studies share their code, and industry researchers are often the culprits. The report criticises OpenAI and DeepMind, two of the world's best AI research labs, for not open-sourcing their code.
In 2020, Google Health published a paper in Nature that described how AI was leveraged to look for signs of breast cancer in medical images. But Google drew flak as it provided little information about its code and how it was tested. Many questioned the viability of the paper, and a group of 31 researchers published another paper in Nature titled "Transparency and reproducibility in artificial intelligence." Benjamin Haibe-Kains, one of the paper's authors, called Google's paper an advertisement for cool technology with no practical use.
However, things are changing. NeurIPS now asks authors/researchers to produce a reproducibility checklist along with their submissions. This checklist consists of information such as the number of models trained, computing power used, and links to code and datasets. Another initiative called the Papers with Code project was started with a mission to create free and open-source ML papers, code and evaluation tables.
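For illustration only, the kind of information such a checklist captures could be recorded as simply as this (the field names and values below are hypothetical placeholders, not NeurIPS's actual form):

```python
# Hypothetical reproducibility metadata for a submission; field names and
# values are illustrative placeholders, not NeurIPS's actual checklist.
reproducibility_checklist = {
    "models_trained": 12,
    "compute": {"hardware": "8x V100 GPUs", "total_gpu_hours": 1440},
    "code_url": "https://example.com/paper-code",         # placeholder URL
    "dataset_urls": ["https://example.com/dataset-v1"],   # placeholder URL
    "random_seeds_reported": True,
}
```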
A Deep Dive Into The World’s Most Popular Personality Test: The MBTI – mindbodygreen.com
Being what's probably the most popular personality assessment in the world today, the MBTI has, of course, come up against some criticism. In addition to the research's mixed results when it comes to the assessment's reliability, Hallett explains some of the main criticisms also include that people's results can change, or that people can feel "boxed-in" by the results.
In a 1993 paper titled "Measuring the MBTI and Coming Up Short," David Pittenger, Ph.D., a professor of psychology at Marshall University, reviews the research on the Myers-Briggs test and raises questions about its underlying concepts. "The MBTI reminds us of the obvious truth that all people are not alike, but then claims that every person can be fit neatly into one of 16 boxes," he writes. "I believe that MBTI attempts to force the complexities of human personality into an artificial and limiting classification scheme. The focus on the 'typing' of people reduces the attention paid to the unique qualities and potential of each individual."
To that, Hackston and Nardi explain that these types are about preferences, and your type doesn't suggest you can't move outside your own preferences. Nardi says you can think of it like whether you're left or right-handed. "If I'm right-handed, that doesn't mean I don't use my left hand, or I don't use my hands together," he explains.
Hackston notes the results are meant to be more of a "springboard" for understanding your preferences so you can recognize your own patterns and actively choose to "go against your type" when situations call for it.
Some experts also do not respect the work of Carl Jung, Katharine Cook Briggs, or Isabel Briggs Myers. Jung, for one thing, has received plenty of criticism, given how much of his theories were based on his own dreams and ideas as opposed to scientific fact. Cook Briggs and Briggs Myers were also not trained psychologists or mental health professionals, though Nardi points out that this particular criticism is "actually incredibly sexist because, at the time, it was very difficult for women to become psychologists or even get into college."
Another criticism of the MBTI is using it to assess or predict performance in the workplace, which Hackston, Hallett, and Nardi all agree is not what this assessment is intended for. "It's not about performance; it's about preference. No personality assessment should be used for hiring, and in some states, it's actually illegal to use it that way," Nardi notes.
Light Reading VS Deep Reading: What You Read Matters … – Learning Mind
As information becomes more and more web-based, so too does the attention of the younger generations.
We rely on the internet to give us everything, from news to research, and it is having a huge impact on what we read and how we read it.
We have moved away from the deep reading of different texts and journals and towards the skimming, light reading of web posts and websites in order to get the information that we want.
This affects the way that our brains process and relay information. Instead of taking in each element of the text we read, we simply find the pieces of information that serve our agendas, rather than developing our brains to help us recall the information later.
When we read deeply, we read much slower; we take in the details of different sensory descriptions and we become much more immersed in the things we read.
Whereas with light reading, we read much faster; we look for the pieces of information that we want and we don't really look at everything else around it. This doesn't give us all of the information available from the text, and it doesn't exercise our brains as much as deeper reading does.
Deep reading activates the centres in your brain which are responsible for speech, hearing, and vision, and helps them to work together to create an image in our heads.
Reading in this way also develops our ability to perceive and use language, and gives us a greater ability to create more complex sentence structures and fuller descriptions.
When we read deeply, we also take in the information much better than in the case of light reading. The information is stored in the brain when we deep read and is ready to be recalled later on.
Deep reading has also been shown to make us nicer. As we read, we develop the ability to articulate and understand our own emotions much better than before, and this helps us to understand emotions in others as well.
Reading things such as poems, novels, and academic reports can massively develop your writing abilities. Make sure to take your time and soak in all of the available information to create a full picture, rather than skim-reading the information.
Put these types of writing into your life rather than getting information only from online blogs and television shows, which make your brain switch off almost immediately. Even though they are entertaining, they won't develop your writing ability whatsoever.
Unfortunately, in the modern world, reading has become an unpopular pastime, but the benefits of real reading are massive and shouldn't be ignored.
Next time you're writing a paper, a short story, or even just have some free time, take some time to read and read deeply, taking in all the information and really enjoying it.
Contributing writer at Learning Mind
Francesca Forsythe is a professional writer who holds a dual award Master's degree in European Law and Philosophy of Law from Leiden University. She has written for several websites on a range of subjects across lifestyle, relationships, and health & fitness, as well as academic pieces in her fields of study.
The hype around DeepMinds new AI model misses whats actually cool about it – MIT Technology Review
"Nature is trying to tell us something here, which is this doesn't really work, but the field is so believing its own press clippings that it just can't see that," he adds.
Even de Freitas's DeepMind colleagues Jackie Kay and Scott Reed, who worked with him on Gato, were more circumspect when I asked them directly about his claims. When asked whether Gato was heading toward AGI, they wouldn't be drawn. "I don't actually think it's really feasible to make predictions with these kinds of things. I try to avoid that. It's like predicting the stock market," said Kay.
Reed said the question was a difficult one: "I think most machine-learning people will studiously avoid answering. Very hard to predict, but, you know, hopefully we get there someday."
In a way, the fact that DeepMind called Gato a "generalist" might have made it a victim of the AI sector's excessive hype around AGI. The AI systems of today are called "narrow," meaning they can only do a specific, restricted set of tasks, such as generate text.
Some technologists, including some at DeepMind, think that one day humans will develop broader AI systems that will be able to function as well as or even better than humans. Though some call this artificial general intelligence, others say it is like "belief in magic." Many top researchers, such as Meta's chief AI scientist Yann LeCun, question whether it is even possible at all.
Gato is a generalist in the sense that it can do many different things at the same time. But that is a world apart from a general AI that can meaningfully adapt to new tasks that are different from what the model was trained on, says MIT's Andreas: "We're still quite far from being able to do that."
Making models bigger will also not address the issue that models don't have lifelong learning, which would mean that if taught something once, they would understand all the implications and use it to inform all the other decisions they make, he says.
The hype around tools like Gato is harmful for the general development of AI, argues Emmanuel Kahembwe, an AI and robotics researcher and part of the Black in AI organization cofounded by Timnit Gebru. "There are many interesting topics that are left to the side, that are underfunded, that deserve more attention, but that's not what the big tech companies and the bulk of researchers in such tech companies are interested in," he says.
Tech companies ought to take a step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI projects for good.
"AGI speaks to something deeply human: the idea that we can become more than we are, by building tools that propel us to greatness," he says. "And that's really nice, except it also is a way to distract us from the fact that we have real problems that face us today that we should be trying to address using AI."
DNAStar Beefs up Protein Structure Prediction With DeepMind-Powered NovaFold AI – GenomeWeb
CHICAGO – This month, DNAStar formally launched NovaFold AI, an update to its NovaFold protein structure prediction software that incorporates the AlphaFold2 artificial intelligence system from Google sister company DeepMind Technologies. The company had introduced a beta version of the product, dubbed NovaFold AI powered by AlphaFold2, in February.
Steve Darnell, team leader of DNAStar's structural biology and protein team, called NovaFold AI a cloud version of the AlphaFold 2 pipeline. The offering lets users access the AlphaFold protein structure prediction method from within DNAStar's NovaCloud Services interface of the firm's Protean 3D visualization and analysis suite.
He said that AlphaFold 2 is designed to help researchers without a bioinformatics background predict structures.
"This release allows customers to more tightly integrate the analysis with running the prediction," added DNAStar General Manager Shawn Grass.
Madison, Wisconsin-based DNAStar sees NovaFold AI as a growth opportunity.
"The structure market has been a little more challenging for us to get into, because when people think of us, they think of just DNA. They don't think of us in the protein area," Grass said. "Our hope is with the excitement and buzz right now about structure prediction that this will give us an opportunity to expand our reach."
DNAStar, which has been in business since 1984, is actually quite active beyond DNA.
Last year, the privately held firm introduced version 17.3 of its Lasergene software for DNA, RNA, and protein sequence assembly and analysis. That release added viral genome analysis workflow support for Pacific Biosciences and Oxford Nanopore long-read sequences, including data from PCR-amplified fragments generated according to ARTIC network protocols. The update was also optimized for genomic analysis and variant identification to support COVID-19 research.
Lasergene, which serves the molecular biology and genomics markets, is one of DNAStar's two product lines. It includes a series of applications for tasks like genome visualization, sequence assembly, sequence alignment, and in silico cloning. DNAStar's molecular visualization and analysis platform, Protean 3D, is technically part of Lasergene as well.
"Protean 3D has a tight integration between its sequence representations and its structural representations through a selection model," Darnell explained.
Lasergene products are "general tools" aimed at anyone in molecular biology or genomics looking to manage pipelines and workflows, according to Grass. He said that a fair number of customers mistakenly refer to the company as Lasergene because the product name is well known in some circles.
The other DNAStar product line, for structural biology applications including structure prediction, docking, and antibody modeling, is called Nova.
NovaFold is the commercialization of I-TASSER, protein structure prediction software developed by University of Michigan bioinformatician Yang Zhang. In 2015, DNAStar licensed the exclusive commercial rights to I-TASSER and markets the technology as part of Protean 3D.
The firm also licenses technology for its NovaDock protein-protein docking software from Cancer Research UK. Darnell said that the company's products represent a mixture of DNAStar-developed tools, freely available external tools like AlphaSeek, and licensed technology.
DNAStar is not directly partnering with DeepMind, which makes some of its AlphaFold technology available through open-source channels.
Grass said that DNAStar has not made any additions or other changes to the AI that AlphaFold provides. "It's something we are always looking at, but right now, we felt it was best to leave it just how it was," he said.
What the company does offer with the new release is the ability to add AlphaFold structure predictions as templates to the core NovaFold software to guide the modeling process toward these predictions. "What we also bring to the table is the additional downstream analysis outside of just coordinate generation," according to Darnell.
Most of the tools in both product lines are integrated on DNAStar's NovaCloud infrastructure, which the firm has been building up for the last five or six years.
DeepMind's AlphaFold had the highest score in the 14th and most recent Critical Assessment of Structure Prediction (CASP) competition in 2020, in which entrants are given the amino acid sequences for about 100 proteins to then predict their structures.
"It certainly excels wonderfully at fold recognition and modeling," Darnell said. He added that AlphaFold will be entered in CASP 15 this summer, a competition that will have a greater focus on predicting multimers.
That is a direction DNAStar wants to go in with NovaFold 2.
"The direct structure prediction of multimers is definitely where I believe the field is certainly looking forward toward, as well as different and better approaches for doing multi-domain proteins," Darnell said. "A tool like AlphaFold can identify and actually model different domains in isolation within a multi-domain sequence."
Darnell also expects to see more activity around protein-protein docking technology in subsequent years.
This Week’s Awesome Tech Stories From Around the Web (Through May 28) – Singularity Hub
ROBOTICS
Dyson Reveals Its Big Bet: Robots – Jasper Jolly | The Guardian
Dyson has signaled it is placing a big bet on producing robots capable of household chores by 2030, as it looks to move beyond the vacuum cleaners, fans and dryers that made its founder one of the wealthiest British businessmen. The company, founded by billionaire Sir James Dyson, on Wednesday published photographs of robot arms being used in household settings, including cleaning furniture, a claw picking up plates, and a hand-like machine picking up a teddy bear.
The Big New Idea for Making Self-Driving Cars That Can Go Anywhere – Will Douglas Heaven | MIT Technology Review
When [the car veered to the side], Kendall grabbed the wheel for a few seconds to correct it. The car veered again; Kendall corrected it. It took less than 20 minutes for the car to learn to stay on the road by itself, he says. This was the first time that reinforcement learning, an AI technique that trains a neural network to perform a task via trial and error, had been used to teach a car to drive from scratch on a real road.
Quantum Internet Inches Closer With Advance in Data Teleportation – Cade Metz | The New York Times
When data travels this way, without actually traveling the distance between the nodes, it cannot be lost. Information can be fed into one side of the connection and then appear on the other, Dr. Hanson said. The information also cannot be intercepted. A future quantum internet, powered by quantum teleportation, could provide a new kind of encryption that is theoretically unbreakable.
Accused of Cheating by an Algorithm, and a Professor She Had Never Met – Kashmir Hill | The New York Times
Suddenly [during the pandemic], millions of people were forced to take bar exams, tests and quizzes alone at home on their laptops. To prevent the temptation to cheat, and catch those who did, remote proctoring companies offered web browser extensions that detect keystrokes and cursor movements, collect audio from a computer's microphone, and record the screen and the feed from a computer's camera, bringing surveillance methods used by law enforcement, employers and domestic abusers into an academic setting.
Walmart Is Expanding Its Drone Deliveries to Reach 4 Million Households – Mitchell Clark | The Verge
It sounds like Walmart's not just trying to expand the program's footprint; the company also wants to increase the number of packages it's delivering via drone. In its press release, the company says it's completed hundreds of deliveries within a matter of months. With the expansion, it says it'll have the ability to do more than a million drone deliveries a year.
Tiny Robot Crab Doctors Could Roam the Human Body One Day – Monisha Ravisetti | CNET
Northwestern University researchers announced on Wednesday their quite adorable prototype of a crab-shaped mini-robot. It can run. It can jump. It's tiny enough to fit inside the 'o' in this sentence. And it's record-breaking. The team calls it the smallest remote-controlled walking robot ever constructed.
The Hype Around DeepMind's New AI Model Misses What's Actually Cool About It – Melissa Heikkilä | MIT Technology Review
Unsurprisingly, de Freitas's announcement triggered breathless press coverage that DeepMind is on the verge of human-level artificial intelligence. This is not the first time hype has outstripped reality. Other exciting new AI models, such as OpenAI's text generator GPT-3 and image generator DALL-E, have generated similarly grand claims. For many in the field, this kind of feverish discourse overshadows other important research areas in AI.
Could Nuclear Clocks Drive a Technological Revolution? – Ethan Siegel | Big Think
Today, atomic clocks play an essential role in telecommunications, financial transactions, computers, GPS satellite navigation technologies as well as a variety of scientific applications. We can synchronize clocks around the globe with ~nanosecond precision. But still, there are limits to what we can do, and those are set by the physical limits of atoms. Yet there's a tremendous hope for surpassing all current limits by more than an order of magnitude: nuclear clocks. Here's the science of how it all works.
Niantic Positions Itself as a Capable Rival to Apple, Meta in Coming AR Wars – Mark Sullivan | Fast Company
About once a decade for the last 70 years, a new computing platform arrives and changes the way we work, play, communicate with each other, and lead our lives, Niantic founder John Hanke said of AR during his keynote Tuesday in San Francisco. We're now at the beginning of another one of those shifts, and it could be the most consequential one yet. This transition will truly blend the real and the digital world.
Scientists CRISPR'd Tomatoes to Make Them Full of Vitamin D – Ed Cara | Gizmodo
The tomatoes of the future could help boost your levels of the sunshine vitamin. Researchers in the UK say they've developed genetically edited tomatoes that can produce high levels of vitamin D with just an hour of ultraviolet light exposure. These edited tomatoes would ideally help provide a rich and plant-based source of the essential nutrient, which is commonly lacking in much of the population.
World Builders Put Happy Face on Superintelligent AI – Eliza Strickland | IEEE Spectrum
One of the biggest challenges in a world-building competition that asked teams to imagine a positive future with superintelligent AI: Make it plausible. We're not trying to push utopia, [the Future of Life Institute's Ann Yelizarova] says, noting that the worlds built for the contest are not perfect places with zero conflicts or struggles. We're just trying to show futures that are not dystopian, so people have something to work toward, she says.
Humans Could Go Extinct. Here's How and Who's Trying to Stop It – Erin Carson | CNET
The end of the world is such a great concept for giving shape to history, says [Oxford's] Anders Sandberg. We want to know how it ends. We want there to be a meaning or a tragedy or a comedy. Maybe a laugh track at the end of the universe. It turns out, scientists, scholars, policy experts and more are studying this question, trying to decipher how humanity's end could come about, and whether there's anything that can be done to prevent it.
Image Credit: niloy tesla / Unsplash
Review: The Mind and the Moon, by Daniel Bergner – The New York Times
It is with great skill that Bergner places Caroline's story in the context of the history of modern psychiatry. It's hard to do justice to the sweep of the larger story he tells, but probably the most shocking part is the utter randomness that has characterized so much of the modern search for psycho-pharmaceuticals, combined with the utterly devastating side effects they can have. Bergner tracks the history of treatments like lithium, S.S.R.I.s and antipsychotics. In many cases, researchers only stumbled across the drugs' potential to ameliorate symptoms. Of lithium, he writes that 19th-century doctors used it to treat kidney stones. Later it was among the ingredients in 7-Up. Even though lithium was approved by the F.D.A. for psychiatric use in 1970, no one had more than a vague concept of how the drug worked neurologically, Bergner notes, and they still don't.
Bergner interviews a group of researchers who, despite the accidental origins of numerous pharmaceuticals, strive today to develop them into substances that will truly improve people's lives. This is an interesting set of interviewees, all dedicated, hardworking, highly knowledgeable scientists, who frankly acknowledge how poor the efficacy of many drugs is, how much of a toll they can take on people who use them and how little we know about how the brain actually works.
Bergner's subjects, as well as the scientists and clinicians he interviews, also attest to the fuzziness of many diagnostic and behavioral boundaries. Standard diagnoses often collapse what some scientists believe are different conditions into one, whereas other diagnoses wall off conditions that are perhaps not so different at all. It's possible that psychosis, for example, is not really one disorder but dozens of them.
Where the history of drug development has been astonishingly haphazard, and our grasp of brain function is disturbingly low-level, the history of psycho-pharmaceutical marketing has been clever and effective. I still recall when an undergraduate friend confidently told me that her recent bout with depression had resulted from a chemical imbalance in her brain. I was dazzled by the explanation. It made her sadness cleaner, more easily resolved, less unglamorous.
It turns out that we had both signed on to the chemical imbalance theory, which proposed, in the 1960s, that depression could result from a deficiency of neurotransmitters. This ultimately evolved into the idea that too many or too few neurochemicals could cause different kinds of mental illness, such as psychosis. Biology became ascendant in our understanding of psychiatric conditions, which led to a vision of medicalized mental health that one of Bergner's scientists calls a house of cards. The idea that S.S.R.I.s, for example, could further our understanding of disorders, the scientist observed, was like saying, "I have pain, so I must have an aspirin deficiency."
Deep roots? Try changing with the times | Letters To Editor | santafenewmexican.com – Santa Fe New Mexican
Ranking Kendrick Lamar's 'The Heart Part 5' Deepfakes From Least To Most Bizarre – Okayplayer
Ahead of the release of his upcoming album, Mr. Morale & The Big Steppers, Kendrick Lamar has released another installment of his "The Heart" series. "The Heart Part 5" debuted on Sunday; accompanying its release was a pretty surreal music video in which Lamar, against a red backdrop, transforms into different Black celebrities as he performs the song. Through the use of deepfake technology (the use of AI to replace the likeness of one person with another in video and other digital media), Kendrick Lamar becomes O.J. Simpson, Kanye West, Jussie Smollett, Will Smith, Kobe Bryant, and Nipsey Hussle throughout the six-minute-long video.
Now, deepfakes are inherently weird, and this is surely the first time we've seen them used in such a way by a mainstream rap artist. As inventive as it is, there is a bizarreness to the music video, as Lamar's face isn't just replaced with the faces of living (and dead) Black celebrities; he raps some of his verses while donning those faces, too.
But which deepfakes were the most bizarre? How about the least? Well, rather than try to interpret every second of the music video (Kendrick fans on Twitter are already going above and beyond on that front), we've done the real hard work of ranking the deepfakes used in "The Heart Part 5" from least to most bizarre.
As with all six deepfakes, they're all pretty accurate in their likeness, especially Jussie Smollett's. Although he's one of the most unexpected of the bunch, he's not the most bizarre.
Considering Kanye is actually a rapper, it's not too unnerving seeing his face rapping Kendrick's lyrics. However, Kendrick's hair paired with Kanye's face might be the most bizarre if our ranking were based solely on that.
Is it just me, or does the deepfake of Will Smith just look like a light-skin André 3000?
The Kobe deepfake is one of two depicting deceased figures in the video, which adds to the bizarreness of it all. But the likeness just feels too uncanny.
It's the one that starts everything off, and it's so unexpected. The moment Kendrick covered his face only for a deepfake of O.J. to appear, I had to scroll back a few seconds to make sure I wasn't losing my mind. That, paired with the fact that it's O.J., is why this deepfake is one of the most bizarre ones from the video.
What makes the Nipsey deepfake the most bizarre is that not only is this the other deceased figure used in the video, but Kendrick also raps from the perspective of the late rapper when he dons his face. As Nipsey, he directs a few lines at the Crenshaw rapper's brother Sam Asghedom:
And Sam, I'll be watchin' over you
Make sure my kids watch all my interviews
Make sure you live all the dreams we produce
Keep that genius in your brain on the move
And he also exonerates Nipsey's killer ("I forgive you, just know your soul's in question"), which, depending on how you view it, could be seen as taking one's creative license a little too far. But there's no denying how eerie it is to see Kendrick transform into Nipsey and essentially stay as him until the song comes to a close.