Category Archives: Deep Mind

Global Mindfulness Meditation Apps Market 2021 Industry Insights and Major Players are Deep Relax, Smiling Mind, Inner Explorer, Inc. Radford…

Global Mindfulness Meditation Apps Market is a review that has been added to the MarketandResearch.biz database. The report covers an in-depth overview and description of the product, defines the industry scope, and elaborates on the market outlook and growth status through 2027. From this report, organizations will learn the current and future market outlook in developed and emerging markets. The report examines the market from multiple perspectives with the help of Porter's five forces analysis.

The report highlights the segment expected to dominate the global Mindfulness Meditation Apps market and the regions expected to see the fastest growth over the forecast period 2021-2027. The report studies all phases of the market in detail to give a review of current market operations.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketandresearch.biz/sample-request/173878

The report includes an opportunity analysis based on various analytical tools and historical data. To better explain the reasoning behind its growth estimates, it provides detailed profiles of the industry's leading and emerging players, along with their plans, product specifications, and development activity. The key players are focusing on growth to increase profitability and extend product life.

On the basis of product type of market:

The study explores the key applications/end-users of the market:

Some of the key players considered in the study are:

On the basis of region, the market is segmented into countries:

ACCESS FULL REPORT: https://www.marketandresearch.biz/report/173878/global-mindfulness-meditation-apps-market-2021-by-company-regions-type-and-application-forecast-to-2026

The report gives detailed information on key factors, including drivers, restraints, opportunities, and industry-specific challenges, affecting the growth of the global Mindfulness Meditation Apps market. The review helps in analyzing and forecasting the size of the market in terms of value and volume, and includes forecasts of market segment sizes, by value, for the main regions.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

Contact Us
Mark Stone
Head of Business Development
Phone: +1-201-465-4211
Email: sales@marketandresearch.biz

See more here:
Global Mindfulness Meditation Apps Market 2021 Industry Insights and Major Players are Deep Relax, Smiling Mind, Inner Explorer, Inc. Radford...

Google Proposes ARDMs: Efficient Autoregressive Models That Learn to Generate in any Order – Synced

Deep generative models that apply a likelihood function to a data distribution have made impressive progress in modelling different sources of data such as images, text and video. One popular model type is the autoregressive model (ARM), which, although effective, requires a pre-specified order for its data generation. ARMs consequently may not be the best choice for generation tasks involving data without a natural ordering, such as images.

In a new paper, a Google Research team proposes Autoregressive Diffusion Models (ARDMs), a model class that encompasses and generalizes order-agnostic autoregressive models and discrete diffusion models. ARDMs do not require causal masking of model representations and can be trained with an efficient objective that scales favourably to high-dimensional data.

The team summarises the main contributions of their work as:

The researchers explain that from an engineering perspective, the main challenge in parameterizing an ARM is the need to enforce the triangular or causal dependence. To address this, they took inspiration from modern diffusion-based generative models, deriving an objective that is only optimized for a single step at a time. In this way, a different objective for an order-agnostic ARM could be derived.

The team then leveraged an important property of this parametrization, that the distribution over multiple variables is predicted at the same time, to enable the parallel and independent generation of variables.
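The single-step objective described above can be illustrated with a minimal sketch (the `predict_fn` interface and all names here are hypothetical, not from the paper): sample a random generation order and a random step, mask the not-yet-generated positions, and score the model only on predicting those, reweighting so the estimate stays unbiased.

```python
import random

def order_agnostic_step(tokens, predict_fn):
    """One training step of an order-agnostic ARM (illustrative sketch).

    predict_fn takes a partially masked sequence (None = unknown) and
    returns a per-position loss, e.g. a negative log-likelihood.
    """
    D = len(tokens)
    order = list(range(D))
    random.shuffle(order)                  # a random generation order
    t = random.randrange(D)                # a single step, sampled uniformly
    known = set(order[:t])                 # positions already "generated"
    masked = [x if i in known else None for i, x in enumerate(tokens)]
    losses = predict_fn(masked)            # model predicts all unknowns in parallel
    unknown = [i for i in range(D) if i not in known]
    # Average over the unknown positions, then rescale by D so the
    # single sampled step is an unbiased estimate of the full objective.
    return D * sum(losses[i] for i in unknown) / len(unknown)
```

Because only one randomly chosen step is optimized per update, the training cost does not grow with the number of generation steps, which is what lets the objective scale to high-dimensional data.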

The researchers also identified an interesting property of upscale ARDM training: complexity is not changed by modelling multiple stages. This enabled them to experiment with adding an arbitrary number of stages during training without any increase in computational complexity.

The team applied two methods to the parametrization of the upscaling distributions: direct parametrization, which requires only distribution parameter outputs that are relevant for the current stage, making it efficient; and data parametrization, which can automatically compute the appropriate probabilities for experimentation with new downscaling processes, but may be expensive as a high number of classes are involved.

In their empirical study, the team compared ARDMs to other order-agnostic generative models, evaluating performance on a character modelling task using the text8 dataset. As expected, the proposed ARDMs performed competitively with existing generative models, and outperformed competing approaches on per-image lossless compression.

Overall, the study validates the effectiveness of the proposed ARDMs as a new class of models at the intersection of autoregressive and discrete diffusion models, whose benefits are summarized as:

The paper Autoregressive Diffusion Models is on arXiv.

Author: Hecate He | Editor: Michael Sarazen

We know you don't want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.


See the original post:
Google Proposes ARDMs: Efficient Autoregressive Models That Learn to Generate in any Order - Synced

Here’s what’s big for AI this year, according to the annual State of AI report – Morning Brew

It's that time of year again: Pumpkin spice permeates the air, Thanksgiving flight prices are on the rise, and the annual State of AI report is fresh off the presses.

Spoiler alert: Transformers are in, semiconductors are hot, and AI + structural biology make a great couple.

Quick recap: Every year, Emerging Tech Brew digs into the report, which has been published by UK investors Nathan Benaich and Ian Hogarth since 2018. The authors' 2020 predictions netted a ~68% accuracy rate: Nearly six out of eight became a reality, including a new structural-biology breakthrough from DeepMind and Nvidia's failure to complete its Arm acquisition.

This year, the report covered top takeaways for research, talent, and politics. We'll break down a few of them...

Biology: AI is taking over the structural-biology world, yielding new insights about both cellular machinery and protein shapes, which could lead not only to drug discovery, but also to a better understanding of the human body and how diseases work.

Archi-tech-ture: Transformers, a popular architecture used to create models for natural language processing, are now being applied to many other machine learning use cases, like protein structure prediction and computer vision.

Large language models: The powerful and controversial AI tools, which underpin many of the services we use every day, are scaling up and out. And individual countries are looking to create their own dedicated LLMs.

AI talent: Academic organizations don't have enough compute resources to accomplish their projects, but 88% of top AI faculty have received Big Tech funding, the researchers wrote.

AI safety: Although AI safety is making headlines, fewer than 50 researchers are working on it full-time at the largest AI labs, according to the report, meaning there are a lot more people whose full-time job it is to build this tech rather than think about potential consequences.

Chips on chips: Semiconductors are hot, hot, hot, and the global chip shortage is only increasing demand. Startups in the space are accelerating in a big way, and both countries and corporations are looking to make the semi supply chain more efficient.

We'll see you back here next year to see how these sectors of AI move forward, and, of course, which of the report's predictions came true.

Read more:
Here's what's big for AI this year, according to the annual State of AI report - Morning Brew

Google DeepMind AI: Heres how it can bring a revolution in weather forecasting – Yahoo Singapore News


Google's DeepMind AI has a brand new way to make weather forecasts with the highest accuracy. Get ready to say goodbye to the regular way of weather forecasting.

All about weather forecasting

For us, checking the weather is as simple as opening our phones or asking Siri or Alexa. However, the science of weather forecasting is not that simple. Meteorologists across the world have been trying hard to come up with high-accuracy predictions. After all, it's not just about predicting when and where it will be sunny, raining, or cloudy. It also includes long-term predictions of the climate, and it greatly affects policy and conservation. Meteorologists predict the near-term weather by nowcasting, i.e., on an hourly basis. Nowcasting is harder as it is susceptible to several variable factors.

"Great work from our team DeepMind. Understanding and predicting weather has been a topic that has intrigued and challenged humanity. Great to see learning-based AI systems making progress in this important area," tweeted Pushmeet Kohli, a research lead at the firm.

More on DeepMind AI

According to Google's DeepMind AI team, using machine learning makes the task easier. With the help of AI, researchers can predict the location and likelihood of rainfall for the next 90 minutes. The system, developed by Google and the UK's meteorological office, is an expert at predicting short-term weather changes. Moreover, DeepMind has immense experience with neural networks and with investigating how proteins fold. "Our teams research and build safe AI systems. We're committed to solving intelligence, to advance science and benefit humanity," states DeepMind's official website.

Additionally, the new research published in Nature describes how the new prediction model, DGMR, or Deep Generative Model of Rainfall, can predict changes within 90 minutes. "Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods," stated the researchers in the study. DGMR uses the computational capacity of the neural network and picks out the uncertainty in machine learning. Hence, DeepMind AI can give the most reliable and accurate weather predictions.

This article Google DeepMind AI: Here's how it can bring a revolution in weather forecasting appeared first on BreezyScroll.

Read more on BreezyScroll.

Read more here:
Google DeepMind AI: Heres how it can bring a revolution in weather forecasting - Yahoo Singapore News

Now AI Tells You If It Will Pour In The Next Two Hours – Analytics India Magazine

Alphabet Inc.'s AI subsidiary DeepMind has developed a deep-learning tool called DGMR (Deep Generative Models of Rain) for forecasting rain up to two hours ahead of time. It teamed up with the Met Office (the UK's national weather service) and claims that this can be an important step in the science of precipitation nowcasting.

The company has made the data used for training available on GitHub with a pre-trained model for the UK. The report of the study has been published in the journal Nature.

To train and evaluate the nowcasting models in the UK, the research used:

The World Meteorological Organisation defines nowcasting as a weather forecasting method in the short term of up to two hours. Weather prediction capabilities such as this can have a tremendous impact in sectors where weather plays a vital role in decision making.

Due to technological advancements in weather forecasting capabilities, high-resolution radar data is now available at high frequency (as frequent as every five minutes at one km resolution).

Advanced deep learning methods already exist in nowcasting, but they come with their own set of challenges. Without constraints, they can produce blurry nowcasts at longer lead times, often causing inaccurate predictions on medium-to-heavy rain events.

DeepMind trained its AI on radar data: the model analyses the past 20 minutes of observed radar and then makes predictions for the upcoming 90 minutes.

Other such methods have often performed poorly on medium-to-heavy rain events, but DeepMind's tool focuses precisely on them. This is important, as it is usually heavy rain that seriously impacts the economy and the people.

DeepMind uses deep generative models (DGMs), which essentially learn probability distributions of data and allow for easy generation of samples from their learned distributions. They can simulate many samples from the conditional distribution of future radar given past radar. What makes DGMs such a powerful tool is their ability to learn from observational data as well as to represent uncertainty across multiple spatial and temporal scales.
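The sampling idea can be sketched with a toy stand-in (the Gaussian "model", the function name, and the numbers are invented for illustration; DeepMind's DGMR is a trained deep network, not this): draw many plausible futures from the conditional distribution and use the spread of the ensemble as the uncertainty estimate.

```python
import random
import statistics

def sample_nowcasts(past_radar, n_samples=100, seed=0):
    """Toy generative nowcaster: returns an ensemble summary.

    A real DGM maps past radar frames to a learned distribution over
    future frames; here a Gaussian around the last observed rainfall
    rate stands in for that distribution.
    """
    rng = random.Random(seed)
    last = past_radar[-1]
    # Each sample is one plausible future rainfall rate (mm/h), clipped at 0.
    samples = [max(0.0, rng.gauss(last, 0.5)) for _ in range(n_samples)]
    return {
        "mean": statistics.mean(samples),     # best single guess
        "spread": statistics.stdev(samples),  # ensemble uncertainty
    }

forecast = sample_nowcasts([1.2, 1.5, 2.0])
```

The design point is that a generative model gives you a distribution rather than one blurry average frame: sharp individual samples, with disagreement between samples serving as the uncertainty signal.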

Image Source: DeepMind | Comparison of DGMR with radar data and other two forecasting techniques (PySTEPS and UNet) for heavy rainfall over the eastern US in April 2019


Google, which acquired DeepMind in 2014, has been conducting different forms of research in precipitation forecasting recently. In 2020, it presented MetNet: A Neural Weather Model for Precipitation Forecasting, a DNN (deep neural network) capable of predicting future precipitation at one km resolution over 2-minute intervals, at timescales of up to 8 hours into the future. The inputs to the network are sourced automatically from radar stations and satellite networks without the need for human annotation. The output is a probability distribution that can be used to infer the most likely precipitation rates, along with the associated uncertainties in each geographical region.

Just a few months before bringing out this research, DeepMind came out with another study titled Machine Learning for Precipitation Nowcasting from Radar Images. This research also looked into the development of machine learning models for precipitation forecasting. It made "highly localised physics-free predictions that apply to the immediate future," Google said. Focusing on 0-6 hour forecasts, the research was able to generate forecasts with a 1 km resolution and a total latency of 5-10 minutes.

Though challenges remain and research in this area is still in its nascent stage, machine learning integrated with environmental science can have a crucial impact on decision making in today's always-dynamic climate.

Sreejani Bhattacharyya is a journalist with a postgraduate degree in economics. When not writing, she is found reading on geopolitics, economy and philosophy. She can be reached at [emailprotected]

Excerpt from:
Now AI Tells You If It Will Pour In The Next Two Hours - Analytics India Magazine

Global Blockchain Technology in Healthcare Market (2021 to 2026) – Featuring Accenture, Capgemini and DeepMind Health Among Others -…

DUBLIN--(BUSINESS WIRE)--The "Global Blockchain Technology in Healthcare Market (2021-2026) by Application, End-Use, Geography, Competitive Analysis and the Impact of Covid-19 with Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.

The Global Blockchain Technology in Healthcare Market is estimated to be USD 12.45 Bn in 2021 and is expected to reach USD 55.83 Bn by 2026, growing at a CAGR of 35%.

The major factor contributing to the growth of the market is the increasing focus on improving patient engagement and delivering patient-centric care. In addition, the increasing penetration of high-speed network technologies enabling blockchain as a service, and the reduced risk of counterfeit drugs, are factors contributing to the growth of the market. The factors hindering the market are technical challenges pertaining to scalability and a lack of awareness in emerging countries. Rising government initiatives, emerging investment, and partnerships across the industry for integrating blockchain in the healthcare sector are anticipated to create lucrative opportunities.

Recent Developments

1. Cleveland Clinic, IBM, Aetna, and Anthem have partnered to form a blockchain health firm, called Avaneer Health. - 9th June 2021

2. Aetna, Anthem, Health Care Service Corporation (HCSC), PNC Bank, and IBM announced a new collaboration, to design and create a network using blockchain technology and to improve transparency and interoperability in the healthcare industry. The aim is to create an inclusive blockchain network that can benefit multiple members of the healthcare ecosystem in a highly secure and shared environment. - 24th January 2019

Competitive Quadrant

The report includes Competitive Quadrant, a proprietary tool to analyse and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors for categorizing the players into four categories. Some of these factors considered for analysis are financial performance over the last 3 years, growth strategies, innovation score, new product launches, investments, growth in market share, etc.

Why buy this report?

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/x7xyc8

See more here:
Global Blockchain Technology in Healthcare Market (2021 to 2026) - Featuring Accenture, Capgemini and DeepMind Health Among Others -...

10 Most Popular Google AI Projects that Everyone Should Know – Analytics Insight

Google is an absolute giant in the IT world. It creates software tools for almost any imaginable area of activity that exists today. Every complex problem today has a solution provided by Google, be it a smart voice assistant or an intelligent shopping list. The tech industry has now become more exciting than ever. In this article, we talk about the essential Google AI projects that we should know about to understand their relevance and features.

TensorFlow: It is undoubtedly the most popular Google AI project. It is a free and open platform for machine learning implementations. It not only allows robust and independent ML production but also provides research power for experimental purposes, and enables simple, high-level layers for model creation. The data and tools processed through TensorFlow can be accessed at any time and from any location.

Dopamine: It is a platform for prototyping reinforcement learning algorithms. Reinforcement learning algorithms are concerned with how a software agent should behave in a given situation. Dopamine is a TensorFlow-based platform that enables users to freely experiment with reinforcement learning algorithms. It's dependable and adaptable, so attempting to create new things is simple and enjoyable.

Google Open Source: Open Source is one of the most attractive philosophies of the current century because nobody likes secured and secret coding. Google stimulates the creation of unique and useful projects with this tool. Code-In challenges, competition, and widespread popularization are some of the few features and facilities provided by Google Open Source.

AdaNet: AdaNet is a TensorFlow-based system that enables the automated learning of high-level models with little interaction from an expert. It learns the structure of a neural network using its AdaNet algorithm and gives learning guarantees. The most important feature of this network is that it provides a framework for enhancing ensemble learning to obtain more advanced models.

Magenta: It is one of those rare applications that portrays the influence of artificial intelligence in creative fields. It focuses on generating art and music by using deep learning and reinforcement learning. Magenta focuses on developing solutions and simplifying complex problems for artists and musicians.

Kubeflow: Kubeflow is among the most significant Google AI projects. It is a machine learning toolkit that focuses on simplifying machine learning deployment. Kubeflow users can deploy open-source, top-notch machine learning systems. The project has a thriving community of developers and professionals where users can ask questions, share their work, and discuss other related topics.

DeepMind Lab: Google's DeepMind Lab provides a three-dimensional platform for researching and developing machine learning and AI systems. Its simple API allows users to experiment with various AI architectures. Researchers leverage DeepMind Lab to train and develop learning agents; it includes a variety of puzzles for deep reinforcement learning.

Bullet Physics: Bullet Physics is one of Google AI's most special initiatives. It is a software development kit that focuses on body dynamics, collisions, and interactions between rigid and soft bodies. The Bullet SDK is used for machine learning and physical simulations, and it also includes robotics technology, accessible from Python via the PyBullet package.

Cloud AI: Cloud AI works in large systems. It provides the ability to interact with more advanced technologies, not just basic ML solutions. Cloud AI ties in with other successful Google projects, like Cloud ML, which is a set of machine learning tools for specific operations.

Colaboratory: It is very demonstrative and supports various add-ons and instruments. It is excellent for remote computing and can provide shared access to files under development. Like other Google documents, it allows several people to work in the same files at the same time.

Visit link:
10 Most Popular Google AI Projects that Everyone Should Know - Analytics Insight

How this company is using data-driven drug discovery to fight disease – The Globe and Mail


It can take, on average, more than a decade and about $1-billion for a new pharmaceutical drug to make its way from the lab to the prescription pad.

Just five in 5,000 drugs that enter preclinical testing advance to human clinical trials. From there, only about one in five of those drugs is approved for human use, according to a review by the California Biomedical Research Association.

"There are many reasons why it takes so long and costs so much money," says Naheed Kurji, president and chief executive officer of Toronto-based Cyclica Inc., an artificial intelligence (AI)-driven biotech drug discovery platform. "When you take a drug, and you place it into a complex biological system like a human or an animal, it's interacting with upwards of 300 proteins. And those other proteins are not known, initially. They're oftentimes undesirable and they can lead to side effects."


These side effects are one of the main reasons only one in 5,000 potential drugs ever makes it to a medicine cabinet.

Cyclica harnesses AI and machine learning, along with a vast library of global human genome discovery, to model potential protein interactions and drastically speed up the drug discovery process.

"We are building the biotech pipeline of the future," Mr. Kurji says.

The seed of Cyclica was planted in 2011 at an MBA business case competition at the University of Toronto's Rotman School of Management, presented by company co-founder Jason Mitakidis.

The proposal won the competition hands down, says Mr. Kurji, who was in the audience that day. Cyclica launched in 2013. Mr. Kurji joined shortly after as co-founder and chief financial officer and became president and CEO when Mr. Mitakidis left the company in 2016.

From humble beginnings in a basement office with a small team of co-op students, today Cyclica has more than 70 employees and advisers at its headquarters in Toronto, a team in the U.S. and another in the United Kingdom. The company has consultants all over the world and partnerships with biotech players in Brazil, Singapore, Korea, China, the U.S., Europe, the U.K., India and Australia, among others.

Disease is most often a malfunctioning of a biological protein in the human body. Computational techniques have been used for decades to pinpoint these biological drivers of disease, the malfunctioning proteins, and then find a molecular key that could be turned into medicine to address the malfunction. But those earlier efforts were limited.


"The techniques that they were using were too slow, they were too expensive, and the quality of the predictions just was not that high," Mr. Kurji says.

Then three things happened that drastically changed the landscape, he says: First, the Human Genome Project produced reams of data on genetics and the genome. Second, the cloud made available unprecedented computational horsepower. And third, AI and machine learning began to take hold.

A field of about 15 companies in the space when Cyclica launched has grown to more than 400 worldwide today.

Cyclica has two platforms powered by the Google Cloud: Ligand Design and Ligand Express.

The underlying technology of these platforms is an AI-driven database of all publicly available known protein structures, as well as third-party proprietary data that Cyclica has acquired. Recently, the company integrated Google DeepMind's AlphaFold 2 protein structure database as well.

After pinpointing the malfunctioning protein that is the root cause of disease, the next step in drug development is to identify a molecule that will bind with that protein to address the malfunction.


Cyclica's platforms can investigate molecules by matching them against all the proteins in the human body, explains Andreas Windemuth, the company's chief scientific officer.

Traditionally, this research takes a target-based approach, examining the molecule for the one function it is hoped to affect.

"What our platform does is really provide a panoramic view of the molecule," he says.

Cyclica's database makes available approximately 85 per cent of the human proteome, the collection of all human proteins, as well as those of other species.

"We're sort of packaging all the knowledge about the drug-protein binding into our AI model, and that can then be applied for discovering drugs," Dr. Windemuth says.
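Abstractly, this proteome-wide "panoramic" screening can be sketched as follows (the function, the toy set-overlap scorer, and the protein names are invented for illustration; Cyclica's actual models are proprietary): score one molecule against every protein in the proteome and rank the predicted interactions, so off-target bindings surface early.

```python
def panoramic_screen(molecule, proteome, score_fn, top_k=3):
    """Score a molecule against every protein, not just one target,
    and return the strongest predicted interactions."""
    scores = {name: score_fn(molecule, pocket)
              for name, pocket in proteome.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Toy data: a molecule's features vs. each protein's binding-pocket features.
toy_proteome = {"EGFR": {"a", "b"}, "ALK": {"b", "c"}, "KRAS": {"d"}}
hits = panoramic_screen({"b", "c"}, toy_proteome,
                        score_fn=lambda mol, pocket: len(mol & pocket))
```

In a real system the scorer would be a learned binding-affinity model; the point of the sketch is the shape of the computation, ranking a molecule against the whole proteome rather than a single target.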

The AI system keeps getting better over time as more data are added, adds Stephen MacKinnon, Cyclica's vice-president of research and development, and it operates much faster than other forms of prediction.


"That's what allows us to extrapolate those predictions to many, many more proteins: not just predict for that one target protein in the tunnel, but for all the proteins in the cell," Dr. MacKinnon explains.

Cyclica co-founder and CEO Naheed Kurji in his home office in Toronto on Sept. 30.

Peter Power/The Globe and Mail

In short, Cyclica's AI-driven platforms can test thousands of proteins and millions of molecules in a fraction of the time.

Dr. Windemuth says the hope is that by speeding up and streamlining the drug discovery process, development costs will decrease and, ultimately, the cost of drugs to consumers will go down as well.

"Every month [in development] is worth many millions of dollars and the failure rate is enormous," he says. "We can make it faster, and we can reduce the failure rate."

Cyclica has switched gears from its initial focus of licensing its technology to the pharmaceutical industry. The company now sometimes partners with early-stage biotech companies working on a specific disease, becoming investors and using their technology to advance drug development, or with academic groups looking to commercialize their research.

But the primary focus is their own drug discovery pipeline.


"We recognized that to capture the value that our platform was creating, we wouldn't do that through just revenue-generating deals with Big Pharma. We had to ideate, create and invent our own drug discovery pipeline," Mr. Kurji says.

The company recently collaborated with researchers at the university formerly known as Ryerson, the University of Toronto and the Vector Institute to explore existing drugs that might be repurposed to treat symptoms of COVID-19. The results, which identified a drug currently used to treat lung cancer, are currently being submitted for peer review.

Over the past three years, Cyclica has created about eight companies and has more than 80 programs in its portfolio. None is in the clinical phase yet, Mr. Kurji says.

"There's no AI and drug discovery company that has a drug that has gone through the clinical [phase] to market approval. It's still too soon," he says. "It's a space that's only eight years old, but there's been a substantial amount of progress across the industry."

CDKL5 Deficiency Disorder (CDD) is a rare genetic condition that affects one in every 40,000 to 60,000 children born.

A genetic form of epilepsy, CDD affects mostly girls and it can have devastating symptoms that include the onset of severe seizures as early as a week after birth.


"It is honestly devastating for the child because it stops all the developmental process," says Cleber Trujillo, the lead senior neuroscientist at Stemonix, a subsidiary of Vyant Bio Inc., a biotech drug discovery company based in New Jersey. "They can be really frequent, several times per day, these seizures."

The disorder is caused by a mutation in CDKL5, or cyclin-dependent kinase-like 5, the gene responsible for creating a protein necessary for normal brain development and function. The exact reason for the mutation is unknown and there is no treatment or cure.

Cyclica and Vyant Bio recently announced a strategic collaboration to use Cyclica's AI-driven platform to identify potential pathways to the treatment of the disorder.

"Vyant has exceptionally good models for the disease activity," Dr. MacKinnon says. And Cyclica has an AI-driven database of global human genome information that helps researchers such as Vyant Bio to identify and model potential target proteins that can be used to build a drug to treat the disorder.

"This really exemplifies partnership, as the researchers coming to us have a good sense of the biology, have these good models for how a disease exists in a cell, and we work together to come up with drugs, or drug candidates, that will likely have these effects on the systems that they're looking to achieve for therapeutic outcomes," Dr. MacKinnon says.

The aim is to find target molecules, Dr. Trujillo explains, and then search or screen for compounds that can interact with the target to improve the cell's biology.

Cyclica's biotech pipeline means researchers don't start from scratch when looking for proteomes that could potentially work, he says.

"It's really hard to find a drug from billions of different possibilities," Dr. Trujillo says. "They can create a list that we think are the top candidates."

"If we can, in collaboration [with Cyclica], narrow down and join efforts on the biology side or the modelling side, with their expertise, I feel that we can accelerate and make better models and find better compounds."

CDD is a rare disorder but one that is becoming more prevalent, owing largely to a better understanding of the disorder and better screening, he says.

The disorder significantly shortens the lives of sufferers, Dr. Trujillo says, whether from the disease itself or the severe seizures that can cause massive neurological damage.

"It's devastating for the family and caregivers, also," he says.

Read more:
How this company is using data-driven drug discovery to fight disease - The Globe and Mail

DeepMind tells Google it has no idea how to make AI less toxic – The Next Web


Opening the black box. Reducing the massive power consumption it takes to train deep learning models. Unlocking the secret to sentience. These are among the loftiest outstanding problems in artificial intelligence. Whoever has the talent and budget to solve them will be handsomely rewarded with gobs and gobs of money.

But there's an even greater challenge stymieing the machine learning community, and it's starting to make the world's smartest developers look a bit silly. We can't get the machines to stop being racist, xenophobic, bigoted, and misogynistic.

Nearly every big tech outfit and several billion-dollar non-profits are heavily invested in solving AI's toxicity problem. And, according to the latest study on the subject, we're not really getting anywhere.

The problem: Text generators, such as OpenAI's GPT-3, are toxic. Currently, OpenAI has to limit GPT-3's usage because, without myriad filters in place, it's almost certain to generate offensive text.

In essence, numerous researchers have learned that text generators trained on unmitigated datasets (such as those containing conversations from Reddit) tend towards bigotry.

It's pretty easy to reckon why: a massive percentage of human discourse on the internet is biased with bigotry towards minority groups.

Background: It didn't seem like toxicity was going to be an insurmountable problem back when deep learning exploded in 2014.

We all remember that time Google's AI mistook a turtle for a gun, right? That's very unlikely to happen now. Computer vision's gotten much better in the interim.

But progress has been less forthcoming in the field of NLP (natural language processing).

Simply put, the only way to stop a system such as GPT-3 from spewing out toxic language is to block it from doing so. But this solution has its own problems.
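The blunt intervention described above can be sketched in a few lines: refuse any output containing a blocklisted term. The blocklist and sentences here are invented for illustration (real systems use learned toxicity classifiers rather than word lists), but the sketch shows the side effect at issue — benign text about a group gets suppressed along with toxic text:

```python
# Naive blocklist filter. Identity terms end up on the list because they
# co-occur with toxic text in the training data; the blocklist below is
# a hypothetical example, not any vendor's actual list.
BLOCKLIST = {"gay", "muslim"}

def is_blocked(text: str) -> bool:
    # Normalize words by stripping trailing punctuation and lowercasing,
    # then check for any overlap with the blocklist.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

print(is_blocked("Gay people exist."))           # True -- benign, still blocked
print(is_blocked("The weather is nice today."))  # False
```

A filter like this never says anything offensive, but only because it refuses to talk about entire communities at all — which is exactly the coverage cost the DeepMind study quantifies.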

Whats new: DeepMind, the creators of AlphaGo and a Google sister company under the Alphabet umbrella, recently conducted a study of state-of-the-art toxicity interventions for NLP agents.

The results were discouraging.

Per a preprint paper from the DeepMind research team:

"We demonstrate that while basic intervention strategies can effectively optimize previously established automatic metrics on the REALTOXICITYPROMPTS dataset, this comes at the cost of reduced LM (language model) coverage for both texts about, and dialects of, marginalized groups.

"Additionally, we find that human raters often disagree with high automatic toxicity scores after strong toxicity reduction interventions, highlighting further the nuances involved in careful evaluation of LM toxicity."

The researchers ran the intervention paradigms through their paces and compared their efficacy with that of human evaluators.

A group of paid study participants evaluated text generated by state-of-the-art text generators and rated its output for toxicity. When the researchers compared the humans' assessments to the machine's, they found a large discrepancy.

AI may have a superhuman ability to generate toxic language but, like most bigots, it has no clue what the heck it's talking about. Intervention techniques failed to identify toxic output with the same accuracy as humans.
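The comparison boils down to measuring how often a thresholded automatic toxicity score agrees with human judgments. A toy version (the scores and labels below are invented; the actual study used the RealToxicityPrompts dataset and paid annotators):

```python
def agreement(auto_scores, human_labels, threshold=0.5):
    """Fraction of examples where the thresholded automatic score
    matches the human toxic/not-toxic judgment."""
    preds = [score >= threshold for score in auto_scores]
    matches = sum(p == h for p, h in zip(preds, human_labels))
    return matches / len(human_labels)

auto = [0.9, 0.8, 0.2, 0.7, 0.1]            # automatic toxicity scores
human = [True, False, False, False, False]  # human "is this toxic?" labels

print(agreement(auto, human))  # 0.6 -- the metric over-flags relative to humans
```

Low agreement after strong interventions is the study's core finding: the automatic metric keeps flagging text that human raters consider fine.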

Quick take: This is a big deal. Text generators are poised to become ubiquitous in the business world. But if we can't make them non-offensive, they can't be deployed.

Right now, a text generator that can't tell the difference between a phrase such as "gay people exist" and "gay people shouldn't exist" isn't very useful. Especially when the current solution to keeping it from generating text like the latter is to block it from using any language related to the LGBTQ+ community.

Blocking references to minorities as a method to solve toxic language is the NLP equivalent of a sign that says "for use by straight whites only."

The scary part is that DeepMind, one of the world's most talented AI labs, conducted this study and then forwarded the results to Jigsaw. That's Google's crack problem-solving team. It's been unsuccessfully trying to solve this problem since 2016.

The near future doesn't look bright for NLP.

You can read the whole paper here.

Follow this link:
DeepMind tells Google it has no idea how to make AI less toxic - The Next Web

There are some important things to keep in mind for first time home buyers – KTBS

If you're one of the many who just bought or is about to buy your first home, this is for you. There is a lot to do before you move in and as soon as you get the keys and many people don't know where to start.

Fortunately, Bailey Carson, Head of Everyday Services at Angi and a home care expert, is here to offer some tips on how to prepare for day one and what to do to get your house to feel like home.

Carson said, "buying your first home is a big milestone and a large investment. If this is your first time, you may be feeling a little overwhelmed by what to do next. I'd start by booking a cleaner to do a deep clean before you move in. You'll also want to transfer utilities right away, as well as turn on your hot water heater and re-key locks."

"Find the circuit breaker and your water and gas shut-off valves so you know where to go in case of an emergency. Consider adding a security system and getting that set up from day one as well. A lot of people get anxious during their first night in a new home and a security system can help put you at ease.

"As soon as you have access to your new home, you'll want to stock your bathrooms. The last thing you want on move-in day is to be searching for toilet paper, soap, and towels. You'll also want to think about having some essentials on hand like trash and recycling bins, cleaning supplies, and paper towels. This will make that first day all the more smooth," said Carson.

If you're planning to sleep in your new home right away, make sure your toiletries, towels and bedding are all easy to access so you can rinse off and get some rest after a long day. If you need one, include a curtain rod, shower curtain and liner, and rings or hooks in the bathroom box so you don't flood your bathroom on day one.

Carson added, "for fall move-in dates, there are several projects you should take on right away before winter comes. Try cleaning out your dryer vent to prevent any fires. Also, weatherstripping may need to be replaced and cracks may need to be sealed. Also wrap insulation around any outdoor faucets and pipes and finally, if you have a fireplace, definitely bring in a pro to inspect it and also clean it before you start using it."

If you'll be moving in during the winter, make sure the sprinkler system and hoses are drained and stored away. You don't want them freezing throughout the winter. Keep an eye on the basement or crawl space for leaks and regularly inspect your roof, gutters, and downspouts. Clean out your drains and exhausts and cover your AC unit while it's not in use.

Carson added, "maintenance projects may not seem like the most fun thing to do when you get into your new home, but they will help you get started on the right foot. Also think about your home inspection report. If any major issues came up in that, you'll want to get those fixed right away. Or, if you had to waive a home inspection, that's a great thing to do even once the buying process is complete, to identify any of those major risk areas."

Do you have questions about your home projects or home care business? Tweet your question using #AskingAngi, and you may get some tips in an upcoming segment.

See the original post here:
There are some important things to keep in mind for first time home buyers - KTBS