Category Archives: Deep Mind

Deep Conditioners You Should Have In Stock For Winter – The List

If you want salon-level results at home and don't mind splurging, Byrdie recommends the Christophe Robin Regenerating Hair Mask with Prickly Pear Seed Oil ($69): it is perfect for color-treated hair, reparative, and adds ridiculous shine.

If you want a more affordable option, you can always get the cult-favorite Aussie 3 Minute Miracle Moist Deep Conditioner ($2.99), as recommended by Refinery29. It has jojoba oil and aloe to moisturize the driest tresses without breaking the bank. If you have dry, curly hair, Vogue recommends Briogeo Don't Despair, Repair! Deep Conditioning Mask ($38) because it has a blend of B-vitamins and collagen to restore dry hair that needs extra hydration. This deep conditioning mask doesn't have sulfates, silicones, or parabens.

Those with fine, thin hair may want to skip this one because their hair is prone to becoming greasy. Still, damage has to be repaired. That's why Olaplex No. 5 Bond Maintenance Conditioner ($28) is perfect for fine hair, according to Cosmopolitan: it has less protein and moisture, which means it won't weigh your hair down or make it look flat.

It can be a struggle to maintain blonde without the brassiness. That's why InStyle loves It's a 10 Five Minutes Hair Repair for Blondes ($20.99). It has hydrolyzed collagen to restore hydration and magnolia extracts to enhance and brighten color during the dry months.

Originally posted here:
Deep Conditioners You Should Have In Stock For Winter - The List

Council Post: Reflecting On The Cost Of Our Dream To Build General AI – Analytics India Magazine

It is a long-standing joke among industry experts that while AI may crunch massive amounts of data, write code that runs huge machinery, or even author a book, it would still fail at tasks that a three-year-old human child can accomplish. This is also why AI systems still have a long way to go before they can truly be called intelligent.

Hubert Dreyfus, a well-known philosopher, was one of the staunchest critics of overestimating the capabilities of computers and AI. He wrote three books, Alchemy and AI, What Computers Can't Do, and Mind over Machine, in which he critically assessed the progress of AI. One of his arguments was that humans learn from implied knowledge, and such a capability cannot be incorporated into a machine.

Having said that, there have been tremendous advancements in the human endeavour to move away from narrow AI towards the coveted general AI. Many new models, such as GPT-3, DALL-E, LaMDA and Switch Transformer, are extremely powerful, with billions and even trillions of parameters that let them multi-task. But the fact is, we are still far from reaching the goal.

The powerful language models and the newer zero-shot text-to-image generation models are all marching fast towards the intended goal of performing tasks for which they were not trained, each outdoing the previous one in applications and uses.

DeepMind, one of the best-known AI research labs (owned by Alphabet), has made achieving AGI its ultimate goal. Interestingly, this year the lab published a paper titled "Reward Is Enough", in which the authors suggested that techniques like reward maximisation can help machines develop behaviour that exhibits abilities associated with intelligence. They further concluded that reward maximisation, and reinforcement learning by extension, could help achieve artificial general intelligence.

Let's take a closer look at GPT-3, created by DeepMind's closest competitor, OpenAI, which caused a major buzz in the scientific community. It was widely considered a massive breakthrough on the path to general AI.

GPT-3 leverages NLP to imitate human conversations. Trained on one of the biggest datasets ever assembled, its 175 billion parameters make it so powerful that it can complete a paragraph based on a few input words. Moreover, unlike typical narrow AI models, GPT-3 can perform tasks beyond generating human-like text: translating between languages, handling reading-comprehension tasks without additional input, and even writing code!
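
To make "a few input words" concrete, here is a minimal sketch of the few-shot prompting pattern GPT-3 popularised. GPT-3 itself sits behind OpenAI's paid API, so the small open-source GPT-2 stands in here via Hugging Face; the model choice, prompt, and settings are assumptions for illustration, and GPT-2's output will be far rougher than GPT-3's.

```python
# Few-shot prompting sketch: no training step, the "examples" live
# entirely in the prompt. GPT-2 is a stand-in assumption for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "English: Good morning -> French: Bonjour\n"
    "English: Thank you -> French: Merci\n"
    "English: See you tomorrow -> French:"
)

result = generator(prompt, max_new_tokens=8, num_return_sequences=1)
print(result[0]["generated_text"])
```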

On the flip side, GPT-3 does not have common sense. It learns almost blindly from the material it was trained on, scrounged from pages across the internet. Due to its lack of common sense, GPT-3 can pick up biased, racist and sexist ideas from the internet and rehash them [Note: OpenAI is working to mitigate bias and toxicity in many ways, but the problem is not eliminated yet]. Additionally, the transformer lacks causal reasoning and is unable to generalise correctly beyond its training set, leaving it far from general AI.

But, as we know, this quest will go on. The journey is fraught with billions of computational operations, enormous energy consumption, and billions of dollars of expense to build and train these AGI models. Most ignore, or fail to comprehend, the consequences for the environment.

Climate change has become a fundamental crisis of our time, and AI plays a dual role in it. It can help reduce the effects of the climate crisis through solutions like smart grid design or low-emission infrastructure. But, on the other hand, it can undo sustainability efforts, with carbon emissions that are hard to ignore.

Estimates suggest that training a single large AI model generates close to 300 tonnes of carbon dioxide, roughly five times the lifetime emissions of an average car. AI requires ever more computational power, and the data centres that store large amounts of AI data consume high levels of energy. The high-powered GPUs required to train advanced AI systems need to run for days at a time, consuming vast amounts of energy and generating carbon emissions. All of this raises the ethical price of running an AI model.

GPT-3 is one of the largest language models. Neural Architecture Search, a process used to optimise transformer models, can require more than 270,000 hours of training and around 3,000 times the energy of a single training run, so much so that the work has to be split over dozens of chips and stretched over months. And if the input is massive, the output is worse: a 2019 study found that training an AI language-processing system generates anywhere between 1,400 and 78,000 pounds of CO2 emissions, the upper end being equivalent to 125 round-trip flights between New York and Beijing.

Sure, the performance is better, but at what cost? Carbontracker suggested that training GPT-3 just once requires the same amount of power that 126 homes in Denmark use in a year, and produces emissions equivalent to driving a car to the moon and back.

GPT-3 isn't the only large language model on the market today. Microsoft, Google, and Facebook are working on, and have released papers about, more complex models involving images and powerful search that go far beyond language, to create multi-tasking and multi-modal models.

OpenAI identified that, since 2012, the amount of computing power used in training large models has been increasing exponentially, with a 3.4-month doubling time. If this holds, one can only imagine the energy consumption and carbon emissions we will incur before reaching AGI.
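
As a back-of-the-envelope illustration of what a 3.4-month doubling time implies, the sketch below compounds it over a few horizons; the time spans chosen are arbitrary assumptions.

```python
# Compounding a 3.4-month doubling time (the figure cited from OpenAI's
# analysis) over a few illustrative horizons.
doubling_time_months = 3.4

for years in (1, 2, 5):
    doublings = years * 12 / doubling_time_months
    print(f"{years} year(s): ~{2 ** doublings:,.0f}x more training compute")

# Roughly: 1 year ≈ 12x, 2 years ≈ 133x, 5 years ≈ 200,000x.
```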

AI could become one of the most significant contributors to climate change if this trend continues.

Mitigating this entails employing efficient techniques for data processing and search, and training models on specialised hardware, such as AI accelerators, that is more efficient per watt than general-purpose chips. Google published a paper on Switch Transformers, which use more efficient sparse neural nets, facilitating the creation of larger models without a proportional increase in computational cost. Researchers such as Lasse Wolff Anthony, who has worked on AI power usage, have suggested that large companies train their models in greener countries such as Estonia or Sweden; given the availability of greener energy supplies, a model's carbon footprint can be reduced by more than 60 times.
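
A rough sketch of why training location matters: emissions are approximately energy consumed times the grid's carbon intensity. The energy figure and intensity values below are illustrative assumptions, not measurements from the article.

```python
# Training emissions ≈ energy used × grid carbon intensity, so the same
# run has a wildly different footprint depending on where it happens.
# The 1,000 MWh figure and the intensities are assumed values.
TRAINING_ENERGY_MWH = 1_000

grid_kg_co2_per_mwh = {
    "coal-heavy grid": 900,
    "EU average grid": 250,
    "hydro/wind-rich grid (e.g. Sweden)": 15,
}

for grid, intensity in grid_kg_co2_per_mwh.items():
    tonnes = TRAINING_ENERGY_MWH * intensity / 1_000
    print(f"{grid}: ~{tonnes:,.0f} tonnes CO2")

# 900 / 15 = 60, the scale of the "more than 60 times" reduction
# the researchers attribute to siting alone.
```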

The solutions arent many as of now, but attempts are being made to devise them. It is important that we have conversations about it. While innovation is the basis on which a society moves forward, we must also be conscious of the cost such innovation brings. The need of the hour is to strike a balance between the two.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.

See the rest here:
Council Post: Reflecting On The Cost Of Our Dream To Build General AI - Analytics India Magazine

Signs Your Visceral Fat is Making You Sick Eat This Not That – Eat This, Not That

Having too much abdominal fat is one of the most underrated health issues, and it's not talked about enough. Unlike the fat that you can see and pinch, visceral fat lies deep within your belly and wraps around your organs. It's highly dangerous because it can lead to significant health issues: type 2 diabetes, an increased risk of several cancers, and higher chances of a fatty liver. Dr. Sepehr Lalezari, a surgeon and weight-loss specialist with Dignity Health St. Mary in Long Beach, says, "There are various ways to get an idea of dangerous levels of visceral fat, but an easy way to get a rough estimate is waist size. For men a waist of >40 in is a sign to lose weight, and for women that number is 35 inches." Eat This, Not That! Health talked to doctors who explained various ways visceral fat can make you sick and what to do about it. Read on, and to ensure your health and the health of others, don't miss these Sure Signs You've Already Had COVID.

Dr. Sherry Ross, MD, OB/GYN and Women's Health Expert at Providence Saint John's Health Center in Santa Monica, CA, explains, "When present, visceral fat is stored deep in the abdomen in men and women. Men tend to have more visceral fat and tend to store all their fat in their upper bodies. Women seem to have evolved to have a higher amount of subcutaneous body fat, usually stored in the hips, buttocks and thighs. Fat below the waist is considered less healthy than that found above the waist."

Dr. Jonathan Adam Fialkow, cardiologist and lipidologist at Baptist Health's Miami Cardiac & Vascular Institute, explains, "People who deposit energy (calories) in their visceral fat are at increased risk for diabetes and heart disease. This is often a hormonal consequence of overeating processed and refined foods, including sugars. Visceral fat is metabolically active."

Dr. Lalezari states, "Increased waist circumference is an early indicator of increasing visceral fat. Obesity, particularly abdominal obesity, is associated with a slew of diseases as well as resistance to the effects of insulin often leading to type 2 diabetes mellitus. Insulin resistance, the associated hyperinsulinemia, high blood sugars, and cellular mediators may also lead to damage of the lining of our blood vessels, an abnormal lipid profile, high blood pressure, and inflammation! And why does all this matter? It's because the combination of these things promote the development of atherosclerotic cardiovascular disease (CVD) which leads to devastating heart attacks and strokes!"

According to Dr. Lalezari, "Obesity is linked to many diseases including heart disease, diabetes, depression, high blood pressure, sleep apnea, the list really goes on and on. Increased weight makes us feel tired as our bodies are in a state of constant inflammation. We tend to feel tired, fatigued, our minds are cloudy and we just don't feel like ourselves. Just the other day a patient told me that after losing 50lbs three months after weight loss surgery, it was the first time in years she had felt like her true self. It almost brought me to tears as she told me about all her struggles with obesity. Obesity not only affects the body but also the mind and soul. But there are ways to beat the sickness and improve our overall well being, you just need to find the right partner to help guide you through the obstacles we all face with weight loss. As I always tell my patients, weight loss is a journey and we will 'journey strong' together!"

Julie Bednarski, MHSc, PHEc, RD, founder and CEO of Healthy Crunch, states, "Having some body fat is perfectly healthy and normal, but too much visceral fat around your midsection is concerning because it sits close to many vital organs, which might put you at risk for health complications including type 2 diabetes and heart disease. A quick way to determine how much visceral fat a person might be carrying is to measure the size of their waist. A woman whose waist measures 35 inches or more is likely to have excess visceral fat. This might increase your risk of developing health conditions linked to excess visceral fat." And to get through this pandemic at your healthiest, don't miss these 35 Places You're Most Likely to Catch COVID.

The rest is here:
Signs Your Visceral Fat is Making You Sick Eat This Not That - Eat This, Not That

5 Tools to Cope With Anxiety When Therapy Isn't an Option – The Cut

Nervous State

A series about coping with our most anxious year yet.

Photo-Illustration: by The Cut; Photos: Getty Images

While it might feel like everyone you know is in therapy, more than a third of Americans live in an area with a shortage of mental-health professionals, and the need for therapy has only increased during the pandemic. Add in everything that goes into actually finding a therapist, which we often breeze over, including the time needed to find one and the money it takes to keep attending, especially with an out-of-network provider, and it's easy to see that not everyone who needs therapy actually has access to it. So what can you do when systemic barriers stand in the way of therapy, especially when it comes to managing anxiety?

Instead of fighting our anxiety, Dr. Wendy Suzuki, a psychology professor at the Center for Neural Science at New York University, believes we should better understand it. "Anxiety, at its core, is protective. It is actually critical for our survival," Dr. Suzuki tells me. Her recent book, Good Anxiety: Harnessing the Power of the Most Misunderstood Emotion, is devoted to reconsidering the role of anxiety in our lives and what we can do to live with it. Here are the five tools Dr. Suzuki recommends to cope.

In moments of distress, when a panic attack may be setting in, slowing down your breathing is critical. "The science behind deep breathing is that it is activating your natural de-stressing part of your nervous system," says Dr. Suzuki. "It's called the parasympathetic nervous system, or the rest-and-digest system." This is the counterpart to the better-known fight-or-flight part of our nervous system. The best way to activate it? Deep breathing. While there are many different breathing patterns, Dr. Suzuki suggests a box-breathing method because of how simple it is: four counts in, four counts hold, four counts out, and four counts hold. You can do breathing exercises anywhere, even in the middle of an anxiety-inducing situation, without anyone knowing.

You're probably tired of hearing this, but exercise really does work. (A lot of other people apparently need to hear it, too: Dr. Suzuki's 2018 TED Talk about how moving your body can boost your mood has over 31 million views and counting.) "It's like giving yourself a wonderful neurochemical bubble bath," she says. Even if it's just a walk around your home or outside, you release chemicals that help your mood, among them dopamine, serotonin, noradrenaline, and endorphins, which help make you feel happier, less stressed, and less anxious. "It's not a magical thing. This is neuroscience that you are taking advantage of here," says Dr. Suzuki. Especially when you feel anxious and you can remove yourself from the situation, it's helpful to step out of your mind and focus on your body.

Something that therapy offers is the chance to talk and have someone really listen. Even without it, we can teach ourselves and others that same skill. Practicing mindful listening (and having someone mindfully listen to you) is the easiest recommendation that Dr. Suzuki can give: sometimes just being able to vent to a good friend who will really listen to what is going on in your life can be a huge relief. Starting this practice takes little more than reaching out to a friend and asking to chat. Explain that you will listen to what they're feeling, without interrupting or offering advice, and that you think they can do the same for you.

"It is an act of caring, to listen deeply and without judgment. You are there as a sounding board and to be supportive. You don't have to solve their problem. That is not the point," says Dr. Suzuki. "The point is to listen and to take in their situation so they feel heard."

One way to better manage your anxiety is to understand what provokes it. Most of the anxiety-inducing situations we deal with are not out-of-the-blue surprises but situations that have been stressors in our lives for a long time. "That's great because we can prepare for them and identify them," says Dr. Suzuki. In doing that, you can better prepare yourself to encounter them and have tools already in place. If you tend to get anxious whenever you have a job interview, it could be helpful to look back and try to find the moment when your anxiety began. Was it before the interview, during, or after? By understanding when and why your anxiety started, you can go into the experience next time with a plan to mitigate the stress.

"You can apply those kinds of approaches to every single anxiety-provoking situation in your life. You might think, 'Oh my God, that takes a lot of preparation.' Yes, it does. But once you do that, you start to learn those approaches, and they become more automatic for you," says Dr. Suzuki. "And they are powerful."

Dr. Suzuki's book is called Good Anxiety for a reason: she wants us to understand that anxiety isn't always bad. It can inform us of our values and offer protection, but it takes self-analysis and time to do so. "A mind-set is just a belief. So how do you shift your mind-set? You have to believe in what you're shifting it to," says Dr. Suzuki.

Think of it this way: You might feel overwhelmed trying to make sense of the new Omicron variant, but your anxiety about it can also help keep you safe. Your anxiety doesn't have to be your enemy in this situation. Dr. Suzuki suggests using it to your benefit by researching the best way to protect yourself. "Shifting your mind-set includes a wonderful opportunity to mitigate the fear of Omicron by educating yourself as much as you can, with sources that you trust, for what your best strategies to travel safely are," says Dr. Suzuki.

While tools for dealing with anxiety on your own can be helpful, there are times when you might need outside help. Hotlines like the Samaritan Helpline, the National Alliance on Mental Illness Helpline, or the 24/7 Crisis Texting Line can help you during a crisis. For less urgent situations, there are warmlines, service lines that offer emotional support and are staffed by volunteers who have experienced mental-health struggles of their own. Other free resources come in the form of apps, such as Mindshift for meditation and mindfulness exercises, iBreathe for breathing exercises, and Moods for tracking how you feel each day.

Read the original here:
5 Tools to Cope With Anxiety When Therapy Isn't an Option - The Cut

DeepMind Releases Weather Forecasting AI Deep Generative Models of Rainfall – InfoQ.com

DeepMind open-sourced a dataset and trained model snapshot for Deep Generative Models of Rainfall (DGMR), an AI system for short-term precipitation forecasts. In evaluations conducted by 58 expert meteorologists comparing it to other existing methods, DGMR was ranked first in accuracy and usefulness in 89% of test cases.

The model and several experiments were described in an article published in Nature. DeepMind developed DGMR in collaboration with the UK Met Office to perform nowcasting: short-term, high-resolution predictions of precipitation. Using a deep learning technique called generative models, DGMR learns to generate "radar movies": given a short series of radar images of rainfall, it learns to predict future radar images, thus predicting the amount and location of future precipitation. According to DeepMind:

We think this is an exciting area of research and we hope our paper will serve as a foundation for new work by providing data and verification methods that make it possible to both provide competitive verification and operational utility. We also hope this collaboration with the Met Office will promote greater integration of machine learning and environmental science, and better support decision-making in our changing climate.

Nowcasts are often used for assisting decision-making in many areas, such as air traffic control and energy management; thus, their accuracy has economic and safety implications. Current methods, such as STEPS and PySTEPS, often use numeric approaches to solve the physics equations that describe weather behavior. These systems model the uncertainty of their predictions by producing ensembles of predictions. More recently, researchers have developed deep learning models that are trained on datasets of radar observations; however, the DeepMind team note that these models have limited operational usefulness, as they are "unable to provide simultaneously consistent predictions across multiple spatial and temporal aggregations."

The DGMR model is based on a conditional generative adversarial network (GAN). The generator network takes in four observed radar frames as context and generates output predictions for the next 18 frames. The generator is trained along with two discriminator networks which learn to tell the difference between real radar data and generated data; one discriminator focuses on spatial consistency within frames, and the other on temporal consistency across a sequence of frames. The entire system is trained on historical data from radar observations in the UK, from the years 2016 to 2019. The trained model can generate a prediction in "just over a second" using a single NVIDIA V100 GPU.
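
To make the wiring concrete, here is a minimal PyTorch sketch of a DGMR-style conditional GAN: a generator mapping four context frames to 18 future frames, plus separate spatial and temporal discriminators. All layer choices and sizes are assumptions for illustration; the real DGMR networks are far larger and use latent noise and recurrent convolutional blocks.

```python
# Minimal sketch of a DGMR-style conditional GAN for nowcasting.
# Shapes and layers are illustrative assumptions, not DeepMind's model.
import torch
import torch.nn as nn

CONTEXT, HORIZON, H, W = 4, 18, 64, 64  # 4 input frames -> 18 predictions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Treat context frames as input channels; emit one channel per
        # future frame.
        self.net = nn.Sequential(
            nn.Conv2d(CONTEXT, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, HORIZON, 3, padding=1),
        )

    def forward(self, context):          # (B, 4, H, W)
        return self.net(context)         # (B, 18, H, W)

class SpatialDiscriminator(nn.Module):
    """Judges individual frames for spatial realism."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, frame):            # (B, 1, H, W)
        return self.net(frame)

class TemporalDiscriminator(nn.Module):
    """Judges whole sequences for consistency across frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, (3, 4, 4), stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, seq):              # (B, 1, T, H, W)
        return self.net(seq)

context = torch.randn(2, CONTEXT, H, W)
print(Generator()(context).shape)        # torch.Size([2, 18, 64, 64])
```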

To evaluate DGMR's performance, DeepMind compared it to three baseline models: PySTEPS, UNet, and MetNet. Besides the general ranking of accuracy and value, a group of expert meteorologists also judged the models' predictions for a single "meteorologically challenging event". In this case study, 93% indicated DGMR's results as their first choice. The DeepMind team also evaluated the models on several metrics, including critical success index (CSI), radially averaged power spectral density (PSD), and continuous ranked probability score (CRPS); on these metrics, DGMR compared "competitively" to the baselines.
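
As an illustration of one of those metrics, the critical success index treats a forecast as a hit, miss, or false alarm at a given rain-rate threshold. The 1 mm/h threshold and the synthetic fields below are assumptions for the sketch.

```python
# Critical success index (CSI) for nowcast verification:
#   CSI = hits / (hits + misses + false alarms)
import numpy as np

def csi(prediction, observation, threshold=1.0):
    pred_rain = prediction >= threshold
    obs_rain = observation >= threshold
    hits = np.sum(pred_rain & obs_rain)
    misses = np.sum(~pred_rain & obs_rain)
    false_alarms = np.sum(pred_rain & ~obs_rain)
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

rng = np.random.default_rng(0)
pred = rng.gamma(2.0, 1.0, size=(256, 256))  # fake rain-rate field (mm/h)
obs = rng.gamma(2.0, 1.0, size=(256, 256))
print(f"CSI at 1 mm/h: {csi(pred, obs):.3f}")
```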

AI models for weather forecasting are an active research area. InfoQ previously covered an AI model for predicting electrical outages caused by storms, as well as a model for solving partial differential equations, which could be used for modeling climate. Google recently announced MetNet-2, which "substantially improves on the performance" of MetNet.

In a discussion about DGMR on Reddit, one commenter questioned the usefulness of the approach. Another pointed out,

The GAN is basically just hallucinating plausible details on top of the L1 prediction, but the fact is, this still leads to a higher predictive skill and value! Is the method really garbage if it has higher predictive performance on multiple metrics than other leading deep networks and statistical baselines? Furthermore, there is a ton of research into avoiding GAN mode-dropping that can be integrated into this baseline approach. That seems like a pretty promising way to gain even more performance!

The trained DGMR model and dataset are available on GitHub.

Read the original:
DeepMind Releases Weather Forecasting AI Deep Generative Models of Rainfall - InfoQ.com

‘Deep canvassing’ has helped support gay and trans rights. What about abortion? : Shots – Health News – NPR

Planned Parenthood volunteer Sarah Mahoney checks a list of addresses in Windham, Maine, to see which door to knock on next. Patty Wight/Maine Public Radio

It's Saturday, and Sarah Mahoney is one of several Planned Parenthood volunteers knocking on doors in Windham, Maine, a politically moderate town not far from Portland.

No one answers at the first couple of houses. But as Mahoney heads up the street, she sees a woman out for a walk.

"Hey! We're out canvassing," she says. "Would you mind having a conversation with us?"

Mahoney wants to talk about abortion, not a typical topic of conversation, especially with a stranger.

But the woman, Kerry Kelchner, agrees to talk.

If this were typical door-to-door canvassing, Mahoney might ask Kelchner about a political candidate, remind her to vote and then be on her way.

But Mahoney is deep canvassing, a technique that employs longer conversations to move opinions on hot-button issues.

Planned Parenthood in Maine has deployed the strategy for several years amid what it says are increasing threats to reproductive rights. This year alone, states have enacted more than 100 restrictions on abortion, including one in Texas that bans most abortions after six weeks. This month, the US Supreme Court heard arguments in a case about a Mississippi law that could lead to the overturning of Roe v. Wade, the landmark 1973 ruling that established a constitutional right to abortion.

Although state law in Maine protects abortion rights even if Roe v. Wade is overturned, abortion opponents have gained traction in the state in recent years. So volunteers like Mahoney start conversations. And they can get quite personal.

Mahoney first assesses Kelchner's baseline attitude on abortion access on a scale of 0 to 10. A 10 means the interviewee believes anyone should be able to get an abortion for any reason.

Kelchner says she's a 7.

Next, Mahoney asks Kelchner a series of questions to better understand her values.

"Can you tell me a little bit about what shaped your views on abortion?" she asks. "Have you known anybody who's had an abortion, a friend or a family member?"

"My mother," says Kelchner. She explains her parents were young when she was born, and they weren't ready for another baby.

Then Mahoney, who's 60, shares that she also had an abortion. "I was in my early 20s," she says. "I was a little conflicted about it, and I wanted to have a family. I knew I wanted to have a family, but I was in no way ready to do that."

Mahoney points out that she and Kelchner have similar views on what an unplanned pregnancy can mean. Then she asks her opening question again, to see whether Kelchner's feelings about abortion access have shifted on the 0-to-10 scale.

"Still around 7," Kelchner says.

Mahoney probes further: "What would be the circumstances where you would say, 'No they shouldn't have the right to have an abortion?'"

Kelchner pauses. "That's a good question."

They talk more. Ultimately, Kelchner can't think of any circumstance in which she believes someone should be denied an abortion.

"There should be no judgment," she concludes.

"So that would be a 10?" Mahoney asks.

"Yep," says Kelchner.

Planned Parenthood organizer Katie McClelland (L) gives a pep talk to volunteers outside the library in Windham, Maine, on Oct. 16. The volunteers then fan out to different neighborhoods to engage in "deep canvassing." The goal is to have four conversations with voters over the next two hours. Patty Wight/Maine Public Radio

In the five years that she's been deep canvassing for Planned Parenthood, Mahoney says, she hasn't had a single unpleasant conversation.

"What we've found doing this is that it is an effective way to change minds about abortion," says Amy Cookson, director of external communications for Planned Parenthood of Northern New England.

Cookson says Planned Parenthood started deep canvassing in Maine in 2015, after Paul LePage, an anti-abortion Republican, won a second term as governor. Gay rights advocates in California had used deep canvassing on the same-sex marriage issue, and she wondered: "Can it work around abortion stigma?"

The technique has been used to garner support for gay marriage, transgender rights, police reform efforts, and for Biden in the 2020 election.

Joshua Kalla, a political scientist at Yale University, has conducted research that found the technique can change people's deeply held beliefs. The crucial elements are that canvassers listen without judgment and share their own stories.

"So whether the person had an abortion and is talking about their abortion story," says Kalla, "or whether the person is an ally and is talking about a friend or family member who had an abortion and is sharing that story, the effects seem to be quite similar."

Kalla has also studied Planned Parenthood's efforts in Maine and says the group has added something else that's effective: moral reframing. Canvassers listen for the moral values a voter emphasizes and then incorporate those values into the story they share.

But deep canvassing is not exclusively a progressive tactic, Kalla says. Conservative groups can use it, too, and he thinks that would improve political discourse: "You know, it would be good for American society if the way we had political conversation was more grounded, and listening to the other side, and being nonjudgmental, and being curious."

Planned Parenthood volunteer Sarah Mahoney talks to a voter about abortion access outside a home in Windham, Maine, on Oct. 16, 2021. Patty Wight/Maine Public Radio

Back in Windham, Mahoney continues to walk through the neighborhood. She meets a man outside his apartment building who gives only his first name, Chris. He says he's a 4 on the abortion access scale. He opposes abortion except in cases of sexual assault. Chris tells Mahoney he had a daughter when he was 15.

"Do you talk about, I'm curious, birth control and abortion?" Mahoney asks.

"I do with her a lot," Chris says. She's a teenager, he says, and he's not sure what he'd do if she got pregnant accidentally.

"It's her own life," he says. "I don't know if I would even try to change her mind. Because it's her decision."

As the conversation goes on, Chris seems as though he supports access to abortion. But at the end, he doesn't budge on his rating.

Mahoney says that's OK. Some people won't change their minds right away.

"The worst way to think about this is that it's some kind of Jedi mind trick," she says, "and I'm going to let them talk about themselves and then pow! I'm going to change their mind."

What Mahoney wants most from these conversations is for people to think more deeply about the nuances around abortion, and to identify common ground: "I just feel like we all need to be taking steps to hear one another and move towards each other, instead of just diving into this divisive, contrary, hostile, Red-and-Blue world."

Because of the success Planned Parenthood in Maine has had with deep canvassing, it has trained volunteers in other states, including Texas and Kansas. Next year, Kansas voters will cast ballots on a referendum question that seeks to revoke abortion access as a fundamental right.

This story comes from NPR's health reporting partnership with Maine Public Radio and KHN.

Visit link:
'Deep canvassing' has helped support gay and trans rights. What about abortion? : Shots - Health News - NPR

Video: Dive Into ‘The Deep’ with Mark Matthews & His Custom Painted Shark Bike – Pinkbike.com

Dark, rainy conditions provided the perfect backdrop for hitting some of my favourite lines on my shark-inspired, custom painted Marin Rift Zone. The details on this bike turned out incredible!

I wanted something bright and bold that pops in photos. The bike needed shark attack vibes, blood-stained water, and any shark-themed ideas the artists could think of. I shared all these requirements with my friends at Fresh Paints of Whistler and asked to keep it a surprise. Having no idea what kind of epic design they would come up with, I trusted they would do something ridiculously awesome.

THE BUILD

THE RIDE

In order to stay aligned with the shark theme, I wanted my riding and the atmosphere to echo the feelings of a shark attack.

Sessioning the big, floaty step-up jump was a highlight of this shoot for me. I've always dreamed of a flowy jump like this that's carved out of the natural landscape. I finished hand-building it just in time for filming. An entire trail will eventually run through the gully where it's built, and I'm documenting the process on YouTube! Subscribers can vote on what I build next. This isn't the last you'll see of this zone.

We started stacking shots in October. It was a weather battle, but this helped shape the mood of the video. November rainfall broke records in many BC communities and the locations we filmed on Vancouver Island were no exception. Many days were cut short due to lack of light or too much rain, but we powered through and were rewarded with a moody, dark feel that emulates the deep ocean vibes we were after.

Scott and Jarrett are always a treat to work with. We have collaborated on a handful of large projects now and we have a good creative flow going. Thanks to my epic team for making this video possible, Scott and Brad at Marin Canada, and a huge thank you to Marin Bikes for supporting rad ideas like this!

Supported by: Marin Bikes
Director: Scott Bell and Mark Matthews
Cinematography and Post Production: Scott Bell
Photography: Jarrett Lindal

The full photo album is on Jarrett's website here.

Originally posted here:
Video: Dive Into 'The Deep' with Mark Matthews & His Custom Painted Shark Bike - Pinkbike.com

At six, I realised the truth about Santa. How deep did the lies go? – The Guardian

Christmas was always such a magical time for me when I was young, and the beginning of December 1970, filled with excitement and anticipation, was no different. I was six and though I had already figured out there was no Santa, I didn't quite understand how presents materialised in the pillowcase annually hung from the post of my upper bunk bed. My parents were adamant about Santa's existence, but my friends and older brothers had confirmed the awful, heart-wrenching, nihilistic truth of my suspicions.

There were a lot of other existential questions in my mind that year. What was death? Did people seriously spend eternity in a box buried underground? What if they woke up? At school, the alternative of an eternity in heaven was presented by our overtly Christian teacher and, on balance, heaven definitely sounded preferable to an afterlife of maggot-ridden decomposition. The caveat of complete faith and devotion to a bearded man who floated on a cloud seemed a small price to pay for everlasting bliss. God even looked a lot like Santa, only his beard was more straggly and his suit less fun. Maybe God delivered the presents. Sorted. Roll on Christmas.

Then came the curve ball. I remember, that December, looking at a photograph in my mum and dad's bedroom. I stared in shock. I asked who was in the picture. "That's Rabindranath Tagore," replied my mum. "He wrote plays, songs and poems." My mouth dropped open at this tall, white-bearded figure, who the great pandit-ji Ravi Shankar would later in life tell me looked like the sun. How many people out there have this look? I wondered. There's God, Santa and now this dude. All with huge beards and a wise grin. It was disconcerting. Which one delivered the presents?

That December my mum also started explaining Hinduism to me. I know it was then because I remember what I was practising on the piano. Suddenly, there were lots more gods, but the beards varied hugely. Many had no beards at all. There were also goddesses, which confused me because the only female I'd heard of in Christianity ran around naked in a garden, tempting a man to follow her, with an apple. Also, with Hinduism you were cremated after death, which seemed altogether less boring.

I told my mum we were being presented with an alternative perspective at school, of eternal damnation or heavenly bliss as opposed to the less intimidating magical stories of Krishna and Ganesh at home. She said that Hinduism accepted all other faiths and everything was really about being a good person. That helped a lot, because I'd heard Santa only gave presents to kids who were good. So even if there was no Santa, whoever was going to give me the presents felt my moral fibre was important. Everything seemed to tie up. Roll on Christmas.

That year I could not wait to see the TV animation of Rudolph the Red-Nosed Reindeer. I'd watched it the year before and it was the most magical thing I'd ever seen. I loved Rudolph. I could already play the theme tune on the piano, and Rudolph himself was simply fantastic.

Then it came, and I was so disappointed and unmoved. Rudolph had lost his magic. If there was no Santa, I realised, then Rudolph couldn't possibly be real or meaningful. Just like the Lone Ranger was fiction too. People were making this stuff up. How deep did the lies go?

Christmas finally came and I waited up in bed the whole night. Who should I expect? Could I be wrong about Santa? Was he real after all? Or would I be visited by some other bloke with a longer, more flowing beard? Or should I expect someone blue with eight arms and an elephant trunk? I'd seen one of those on the living-room wall and I'd been told that was also God. Definitely not the one Miss Churchill talked about in class though. Hmm.

So, early on Christmas morning, when Dad ran giggling into the room and slapped a pillowcase full of gifts on my bed, I just shouted: "Dad? What are you doing?"

Belief is such a strange thing, I learned that Christmas. If we want to believe something, we seem to ignore reality till we have no choice. I do miss that magical world, though: the world before dad ran into my room with no beard or extra arms, before I discovered that the lies you hear as an adult are far less innocent and well-meaning than those accompanied by marvellous, warm, cosy dreams.

To mark Coventry's tenure as UK City of Culture, and the 60th anniversary of Coventry Cathedral, Nitin Sawhney has been commissioned to create a new site-specific performance in response to Benjamin Britten's War Requiem. Ghosts in the Ruins takes place on 27-29 January. Tickets are available at coventry2021.co.uk

Link:
At six, I realised the truth about Santa. How deep did the lies go? - The Guardian

2021 was the year of monster AI models – MIT Technology Review

What does it mean for a model to be large? The size of a model (a trained neural network) is measured by the number of parameters it has. These are the values in the network that get tweaked over and over again during training and are then used to make the model's predictions. Roughly speaking, the more parameters a model has, the more information it can soak up from its training data, and the more accurate its predictions about fresh data will be.
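
As a concrete illustration of what gets counted, the toy PyTorch network below (an assumption purely for scale) tallies its trainable values the same way the model sizes quoted here are.

```python
# Counting the trainable values in a small network. The toy architecture
# is an illustrative assumption; GPT-3 has 175 billion such values.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),  # weights 512*2048 + biases 2048
    nn.ReLU(),
    nn.Linear(2048, 512),  # weights 2048*512 + biases 512
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # 2,099,712
```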

GPT-3 has 175 billion parameters, over 100 times more than its predecessor, GPT-2. But GPT-3 is dwarfed by the class of 2021. Jurassic-1, a commercially available large language model launched by US startup AI21 Labs in September, edged out GPT-3 with 178 billion parameters. Gopher, a new model released by DeepMind in December, has 280 billion parameters. Megatron-Turing NLG has 530 billion. Google's Switch-Transformer and GLaM models have 1 trillion and 1.2 trillion parameters, respectively.

The trend is not just in the US. This year the Chinese tech giant Huawei built a 200-billion-parameter language model called PanGu. Inspur, another Chinese firm, built Yuan 1.0, a 245-billion-parameter model. Baidu and Peng Cheng Laboratory, a research institute in Shenzhen, announced PCL-BAIDU Wenxin, a model with 280 billion parameters that Baidu is already using in a variety of applications, including internet search, news feeds, and smart speakers. And the Beijing Academy of AI announced Wu Dao 2.0, which has 1.75 trillion parameters.

Meanwhile, South Korean internet search firm Naver announced a model called HyperCLOVA, with 204 billion parameters.

Every one of these is a notable feat of engineering. For a start, training a model with more than 100 billion parameters is a complex plumbing problem: hundreds of individual GPUs (the hardware of choice for training deep neural networks) must be connected and synchronized, and the training data must be split into chunks and distributed between them in the right order at the right time.
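
Here is a toy sketch of the data-parallel piece of that plumbing, with plain NumPy standing in for GPUs: the batch is split into chunks, each worker computes a local gradient, and the gradients are averaged before a shared update. The sizes and the linear model are assumptions; real systems layer model and pipeline parallelism on top of this.

```python
# Data parallelism in miniature: split the batch, compute local gradients,
# average them (an "all-reduce"), apply one shared update. All values are
# synthetic stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_WORKERS, BATCH, DIM = 4, 32, 8
w = rng.normal(size=DIM)                   # shared model weights

x = rng.normal(size=(BATCH, DIM))          # one global batch
y = x @ rng.normal(size=DIM)               # synthetic targets

# Step 1: one chunk of the batch per worker.
chunks = zip(np.array_split(x, N_WORKERS), np.array_split(y, N_WORKERS))

# Step 2: each worker computes a local gradient of the squared error.
grads = [2 * xc.T @ (xc @ w - yc) / len(xc) for xc, yc in chunks]

# Step 3: synchronize, average, and update the weights everywhere.
w -= 0.01 * np.mean(grads, axis=0)
```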

Large language models have become prestige projects that showcase a company's technical prowess. Yet few of these new models move the research forward beyond repeating the demonstration that scaling up gets good results.

There are a handful of innovations. Once trained, Google's Switch-Transformer and GLaM use a fraction of their parameters to make predictions, so they save computing power. PCL-Baidu Wenxin combines a GPT-3-style model with a knowledge graph, a technique used in old-school symbolic AI to store facts. And alongside Gopher, DeepMind released RETRO, a language model with only 7 billion parameters that competes with others 25 times its size by cross-referencing a database of documents when it generates text. This makes RETRO less costly to train than its giant rivals.
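
A toy sketch of the retrieval idea behind RETRO: fetch the most similar passage from an external database and condition generation on it, so knowledge can live in the database rather than in parameters. The word-overlap similarity and three-document corpus below are stand-in assumptions, not RETRO's actual learned-embedding nearest-neighbour search over a vast token database.

```python
# Retrieval-augmented generation in miniature. The similarity function
# and corpus are illustrative stand-ins for RETRO's real machinery.

def similarity(a: str, b: str) -> float:
    """Stand-in for embedding similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

corpus = [
    "Nowcasting predicts rain over the next two hours.",
    "RETRO retrieves documents to aid text generation.",
    "Switch Transformers route tokens to sparse experts.",
]

query = "How does RETRO use retrieval?"
retrieved = max(corpus, key=lambda doc: similarity(query, doc))

# A RETRO-style model would now attend over the query plus the passage.
print(f"Retrieved context: {retrieved}")
```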

Excerpt from:
2021 was the year of monster AI models - MIT Technology Review

2022 technology trend review, part two: AI and graphs – ZDNet

AI has many manifestations, ranging from hardware to applications in domains such as healthcare, and from futuristic models to ethics.

In the spirit of the last couple of years, we review developments in what we have identified as the key technology drivers for the 2020s in the world of databases, data management and AI. We are looking back at 2021, trying to identify patterns that will shape 2022.

Today we pick up where part one of our review left off, covering AI and knowledge graphs.

In principle, we try to approach AI holistically. To take into account positives and negatives, from the shiny to the mundane, and from hardware to software. Hardware has been an ongoing story within the broader story of AI for the last few years, and we feel it's a good place to start our tour.

For the last couple of years, we have been keeping an eye on the growing list of "AI chip" vendors, i.e. companies that have set out to develop new hardware architectures from the ground up, aimed specifically at AI workloads. All of them are looking to get a piece of a seemingly ever-growing pie: as AI keeps expanding, these workloads keep growing, and servicing them as fast and as economically as possible is an obvious goal.

Nvidia continues to dominate this market. Nvidia was already in the market long before AI workloads started booming and had the acumen and the reflexes to capitalize on this by building a hardware and software ecosystem. Its 2020 move to make Arm a part of this ecosystem is under regulatory scrutiny. However, Nvidia did not remain idle in 2021.

Out of a slew of announcements made at Nvidia's GTC event in November 2021, the ones that bring something new at the hardware level have to do with what we would argue characterizes AI's focus in 2021 at large: inference and the edge. Nvidia introduced a number of improvements for the Triton Inference Server. It also introduced the Nvidia A2 Tensor Core GPU, a low-power, small-footprint accelerator for AI inference at the edge that Nvidia claims offers up to 20X more inference performance than CPUs.

And what about the upstarts? SambaNova claims to now be "the world's best-funded AI startup" after a whopping $676M in Series D funding, surpassing $5B in valuation. SambaNova's philosophy is to offer "AI as a service", now including GPT language models, and it looks like 2021 was by and large a go-to-market year for them.

Xilinx, for its part, claims to achieve dramatic speed-ups of neural nets versus Nvidia GPUs. Cerebras claims to 'absolutely dominate' high-end compute and scored some hefty funding too. Graphcore is competing with Nvidia (and Google) in MLPerf results. Tenstorrent hired legendary chip designer Jim Keller. Blaize raised $71M to bring edge AI to industrial applications. Flex Logix scored $55 million in venture backing, bringing its total haul to $82 million. Last but not least, we have a new horse in the race in NeuReality, ways to mix and match deployment in ONNX and TVM, and the promise of using AI to design AI chips. If that's not booming innovation, we don't know what is.

According to the Linux Foundation's State of the Edge report, digital health care, manufacturing, and retail businesses are particularly likely to expand their use of edge computing by 2028. No wonder AI hardware, frameworks, and applications aimed at the edge are proliferating too.

TinyML, the art and science of producing machine learning models frugal enough to work at the edge, is seeing rapid growth and building out an ecosystem. Edge Impulse, a startup that wants to bring machine learning at the edge to everyone, just announced its $34M Series B funding. Edge applications are coming, and AI and its hardware will be a big part of that.

Something we called in 2020, that was prominent in 2021 and will be with us for years to come, is so-called MLOps: bringing machine learning to production. In 2021, people tried to give names to various phenomena pertaining to MLOps, slice and dice the MLOps domain, apply data version control and continuous machine learning, as well as the equivalent of test-driven development for data, among other things. The emphasis is shifting from shiny new models to perhaps more mundane but practical aspects such as data quality and data pipeline management, and MLOps will continue to grow.

The other thing that's likely to continue to grow, both in sheer size and in number, is large language models (LLMs). Some people think that LLMs can internalize basic forms of language, whether it's biology, chemistry, or human language, and that we're about to see unusual applications of LLMs grow. Others, not so much. Either way, LLMs are proliferating.

In addition to the "usual suspects" (OpenAI with its GPT-3, DeepMind with its latest RETRO LLM, and Google with its ever-expanding array of LLMs), Nvidia has now teamed up with Microsoft on the Megatron LLM. But that's not all.

Recently, EleutherAI, a collective of independent AI researchers, open-sourced its 6-billion-parameter GPT-J model. In addition, if you are interested in languages beyond English, we now have a large European language model fluent in English, German, French, Spanish, and Italian by Aleph Alpha. Wu Dao is a Chinese LLM which is also the largest LLM, with 1.75 trillion parameters, and HyperCLOVA is a Korean LLM with 204 billion parameters. Plus, there are always other, slightly older and smaller open source LLMs such as GPT-2 or BERT and its many variations.

Beyond LLMs, both DeepMind and Google have hinted at revolutionary architectures for AI models, with Perceiver and Pathways, respectively. Pathways has been criticized for being rather vague; however, we would venture to speculate that it could be based on Perceiver. And since we're in future-tech territory, it would be an omission not to mention DeepMind's Neural Algorithmic Reasoning, a research direction promising to marry classic computer science algorithms with deep learning.

No tour of AI, however condensed, would be complete without at least an honorary mention of AI ethics. AI ethics has remained top of mind in 2021, and we have seen people ranging from FTC commissioners to industry practitioners each trying to address it in their own way. And let's not forget the ongoing boom of AI applications in healthcare, an area in which ethics should be a top priority with or without AI.

We have been avid proponents of graphs of all shapes and sizes (knowledge graphs, graph databases, graph analytics, data science and AI) for a long time. So it is with mixed feelings that we report from this front. On the one hand, we have not seen much innovation, except perhaps in one area: graph neural networks. DeepMind's Neural Algorithmic Reasoning leverages GNNs, too.

On the other hand, that's not necessarily a bad thing, for two reasons. First, there is major uptake of the technology in the mainstream. By 2025, graph technologies will be used in 80% of data and analytics innovations, up from 10% in 2021, facilitating rapid decision-making, Gartner predicts. Reporting on use cases from the likes of BMW, IKEA, Siemens Energy, Wells Fargo, and UBS is no longer news, and that's a good thing. Yes, there are challenges associated with building and maintaining knowledge graphs, but these challenges are, for the most part, well understood.

As we have noted, knowledge graphs are practically a 20-year-old technology whose time in the limelight seems to have come. The ways to build knowledge graphs are well known, as are the challenges that lie therein. It's no coincidence that some of the most in-demand skills and areas for development in knowledge graphs are around using natural language processing and visual interfaces to build and maintain knowledge graphs, as well as ways to expand from single-user to multi-user scenarios.

And to tie this conversation to the broader picture of AI, where it belongs: common challenges seem to be around operationalization and building the right expertise in teams, as those skills are in very high demand. Another important touchpoint is the hybrid AI direction, which is about infusing knowledge into machine learning. Leaders such as Intel's Gadi Singer, LinkedIn's Mike Dillinger and the Hybrid Intelligence Centre's Frank van Harmelen all point towards the importance of knowledge organization in the form of knowledge graphs for the future of AI.

Knowledge Graphs, Graph Databases and Graph AI are all converging

There is also another important touchpoint between the broader picture in AI and knowledge graphs: data meshes and data fabrics. You'd be excused for mixing those two up, given the plethora of data-related terms flying around these days. Simplistically, let's just say that a data fabric is meant to serve as the technical substrate for the data mesh notion of decentralized data management in organizations. That is actually a very good match for knowledge graph technology, and a few vendors in that space have identified this and positioned themselves accordingly. Even Informatica seems to have noticed.

And what about the substrate for building knowledge graphs, namely graph databases? The phrase that seems to characterize 2021 for graph databases would be "go to market". It's been a good year for graph databases. A graph database, Neo4j, made the Top 20 in DB-Engines for the first time. Neo4j also announced the general availability of its Aura managed cloud service and raised a $325 million Series F funding round, the biggest in database history, bringing its valuation to over $2 billion.

The graph database space saw a series of funding rounds and an upcoming IPO. TigerGraph scored a $105M Series C, Katana Graph a $28.5M Series A, Memgraph $9.34M in seed funding, and TerminusDB $3.6M. In the meantime, Bitnine, makers of Agens Graph, started working on its IPO, the first in the market.

On the technical front, GraphQL is still growing in adoption, either as part of a broader ecosystem or as the central component in a data architecture. The bridging of the two graph database worlds in terms of models, RDF and LPG, is still a work in progress, but one that has seen some interesting developments in 2021.

We don't expect the world's honeymoon with graphs and graph databases to last forever, and after the hype, disillusionment will inevitably follow at some point. But we are confident that this technology is foundational and will find its place despite hiccups.

See the original post:
2022 technology trend review, part two: AI and graphs - ZDNet