Category Archives: Deep Mind

Why DeepMind Acquired This Robotics Startup – Analytics India Magazine

Earlier this week, Alphabet-owned DeepMind acquired the physics simulation platform MuJoCo, which stands for Multi-Joint Dynamics with Contact.

Following the acquisition, the DeepMind Robotics Simulation team, which has long used MuJoCo, plans to fully open-source the platform in 2022 and make it freely available to support research everywhere.

Check out MuJoCo's GitHub repository here; it will be the platform's future home. For now, you can download the latest version, MuJoCo 2.1.0, for free from its website.

MuJoCo was first developed by Emo Todorov for Roboti and was available as a commercial product from 2015 to 2021. Having acquired MuJoCo, DeepMind is now making it freely available to everyone; the financial details of the transaction have not been disclosed.

Post-acquisition, Roboti will continue to support existing paid licenses until they expire. In addition, the legacy MuJoCo release (versions 2.0 and earlier) will remain available for download, with a free activation key file, valid until October 2031.

MuJoCo is a physics engine that aims to facilitate research and development in robotics, graphics, biomechanics, animation, and other domains requiring fast and accurate simulation. It is one of the first full-featured simulators designed from scratch for model-based optimisation, particularly through contacts.

The platform makes it possible to scale up computationally intensive techniques such as optimal control, physically consistent state estimation, system identification and automated mechanism design and apply them to complex dynamical systems in contact-rich behaviours. Plus, it has more traditional applications such as testing and validating control schemes before deployment on physical robots, interactive scientific visualisation, virtual environments, animation, and gaming.

DeepMind's MuJoCo is not alone; other simulation platforms include Facebook's Habitat 2.0 and AI2's ManipulaTHOR. What sets MuJoCo apart, however, is its contact model, which accurately and efficiently captures the salient features of contacting objects. Like other rigid-body simulators, it avoids the fine details of deformations at the contact site and often runs much faster than real time.

Unlike other simulators, MuJoCo resolves contact forces using the convex Gauss principle, the DeepMind Robotics Simulation team said. The convexity ensures unique solutions and well-defined inverse dynamics. The model is also flexible, providing multiple parameters that can be tuned to approximate a wide range of contact phenomena.

Further, the DeepMind team said that their platform is based on real physics and takes no shortcuts. According to them, many simulators were originally designed for purposes like gaming and cinema, and they sometimes take shortcuts that prioritise stability over accuracy. For example, they may ignore gyroscopic forces or directly modify velocities.

In the context of optimisation, that can be particularly harmful. In contrast, MuJoCo is a second-order continuous-time simulator implementing the full equations of motion. In other words, MuJoCo closely adheres to the equations that govern our world: non-trivial physical phenomena like Newton's Cradle, and unintuitive ones like the Dzhanibekov effect, emerge naturally.

The team also said that the MuJoCo core engine is written in pure C, making it easily portable to various architectures. In addition, the platform provides fast and convenient computation of commonly used quantities such as kinematic Jacobians and inertia matrices.
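To give a concrete sense of those convenience computations, here is a minimal sketch using the mujoco Python bindings; the tiny one-joint model and the body name are invented for the example, so treat this as an illustration rather than official usage.

```python
import numpy as np
import mujoco  # Python bindings for the MuJoCo engine

# A tiny, hypothetical MJCF model: a single hinged body ("arm").
XML = """
<mujoco>
  <worldbody>
    <body name="arm" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.04" fromto="0 0 0 0.5 0 0"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)  # run kinematics/dynamics once

# Dense joint-space inertia matrix (MuJoCo stores it sparsely in data.qM).
M = np.zeros((model.nv, model.nv))
mujoco.mj_fullM(model, M, data.qM)

# Translational and rotational Jacobians of the "arm" body's frame.
jacp = np.zeros((3, model.nv))
jacr = np.zeros((3, model.nv))
body_id = mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_BODY, "arm")
mujoco.mj_jacBody(model, data, jacp, jacr, body_id)

print("inertia matrix:\n", M)
print("position Jacobian:\n", jacp)
```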

MuJoCo offers powerful scene descriptions. It uses cascading defaults to avoid repeated values, and contains elements for real-world robotic components like tendons, actuators, equality constraints, motion-capture markers, and sensors. DeepMind also plans to standardise MJCF as an open format to extend its usefulness beyond the MuJoCo ecosystem.
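As an illustration of what MJCF and its cascading defaults look like in practice, here is a small, made-up model definition loaded through the Python bindings; all names and parameter values below are invented for the example.

```python
import mujoco

# A small MJCF sketch showing cascading defaults, an actuator, and a sensor.
MJCF = """
<mujoco>
  <default>
    <geom rgba="0.8 0.6 0.4 1" friction="1 0.005 0.0001"/>
    <default class="slim">
      <geom size="0.02"/>   <!-- inherits rgba/friction, overrides size -->
    </default>
  </default>
  <worldbody>
    <body name="link" pos="0 0 0.5">
      <joint name="slide" type="slide" axis="0 0 1"/>
      <geom class="slim" type="sphere"/>
    </body>
  </worldbody>
  <actuator>
    <motor name="lift" joint="slide" gear="10"/>
  </actuator>
  <sensor>
    <jointpos name="slide_pos" joint="slide"/>
  </sensor>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF)
print(model.ngeom, model.nu, model.nsensor)  # 1 geom, 1 actuator, 1 sensor
```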

Besides this, MuJoCo includes two powerful features that support musculoskeletal models of humans and animals. It captures the complexity of biological muscles, including activation states and force-length-velocity curves.

DeepMind has been heavily investing in robotics research. Recently, it introduced RGB-Stacking, a new benchmark for vision-based robotic manipulation.

The recent acquisition comes at a time when there is a dearth of data in robotics research, one of the reasons DeepMind's arch-rival OpenAI shut down its robotics arm indefinitely. But this has not stopped DeepMind, whose teams are working around the paucity of data in a big way with a technique called sim-to-real.

Now, with the acquisition of MuJoCo, open-sourcing the library looks like a smart move for the company, and one that is sure to benefit the robotics ecosystem as a whole.

See the rest here:
Why DeepMind Acquired This Robotics Startup - Analytics India Magazine

Deeper Is Not Necessarily Better: Princeton U & Intel’s 12-Layer Parallel Networks Achieve Performance Competitive With SOTA Deep Networks -…

While it is generally accepted that network depth is responsible for the high performance of today's deep learning (DL) models, adding depth also brings downsides such as increased latency and computational burden, which can bottleneck progress in DL. Is it possible to achieve similarly high performance without deep networks?

In the new paper Non-deep Networks, a research team from Princeton University and Intel Labs argues that it is, proposing ParNet (Parallel Networks), a novel non-deep architecture that achieves performance competitive with its state-of-the-art deep counterparts.

The team summarizes their study's contributions as:

The main design feature of ParNet is its use of parallel subnetworks or substructures (referred to as streams in the paper) that process features at different resolutions. The features from different streams are fused at a later stage in the network and used for downstream tasks. This approach enables ParNet to function effectively with a network depth of only 12 layers, orders of magnitude lower than, for example, ResNet models, which in extreme cases can include up to 1,000 layers.

A key ParNet component is the RepVGG-SSE, a modified RepVGG block with a purpose-built Skip-Squeeze-Excitation module. ParNet also contains a downsampling block that reduces resolution and increases width to enable multi-scale processing, and a fusion block that combines information from multiple resolutions.
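The authors' exact block design should be taken from their paper and code, but a rough PyTorch sketch of the Skip-Squeeze-Excitation idea described above might look like the following; the layer sizes, normalisation and activation choices here are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SkipSqueezeExcitation(nn.Module):
    """Squeeze-and-Excitation applied on a skip branch, as described for ParNet."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # "squeeze" to 1x1
        self.fc = nn.Conv2d(channels, channels, 1)   # single 1x1 conv (assumed)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(self.fc(self.pool(x)))         # per-channel weights
        return x * w                                 # re-weight the skip branch

class RepVGGSSEBlock(nn.Module):
    """A RepVGG-style block (3x3 + 1x1 branches) plus the SSE skip branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.conv1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.sse = SkipSqueezeExcitation(channels)
        self.act = nn.SiLU()                         # activation choice is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv3(x) + self.conv1(x) + self.sse(x))

block = RepVGGSSEBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)       # torch.Size([1, 64, 32, 32])
```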

In their empirical study, the team compared the proposed ParNet with state-of-the-art deep neural network baselines such as ResNet110 and DenseNet on large-scale visual recognition benchmarks including ImageNet, CIFAR and MS-COCO.

The results show that a ParNet with a depth of just 12 layers was able to achieve top-1 accuracies of over 80 percent on ImageNet, 96 percent on CIFAR10, and 81 percent on CIFAR100. The team also demonstrated a detection network with a 12-layer backbone that achieved an average precision of 48 percent on MS-COCO, the large-scale object detection, segmentation and captioning dataset.

Overall, the study provides the first empirical proof that non-deep networks can perform competitively with their deep counterparts on large-scale visual recognition benchmarks. The team hopes their work can contribute to the development of neural networks that are a better fit for future multi-chip processors.

The code is available on the project's GitHub. The paper Non-deep Networks is on arXiv.

Author: Hecate He | Editor: Michael Sarazen



Go here to read the rest:
Deeper Is Not Necessarily Better: Princeton U & Intel's 12-Layer Parallel Networks Achieve Performance Competitive With SOTA Deep Networks -...

How AI is reinventing what computers are – MIT Technology Review

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there's something remarkable going on.

Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a neural engine, also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it's changing how we think about computing.

What does that mean? Well, computers haven't changed much in 40 or 50 years. They're smaller and faster, but they're still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they're programmed, and how they're used. Ultimately, it will change what they are for.

"The core of computing is changing from number-crunching to decision-making," says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is "freeing computers from their boxes."

The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore's Law.

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it's available when and where it's needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google's tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people's photos and natural-language search queries. Google's sister company DeepMind uses them to train its AIs.

In the last couple of years, Google has made TPUs available to other companies, and these chips, as well as similar ones being developed by others, are becoming the default inside the world's data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm (a type of AI that learns how to solve a task through trial and error) to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of, but they worked. This kind of AI could one day develop better, more efficient chips.

The second change concerns how computers are told what to do. "For the past 40 years we have been programming computers; for the next 40 we will be training them," says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It's a fundamentally different way of thinking.
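To make the contrast concrete, here is a toy sketch (not from the article) of the two approaches to the same task, deciding whether a point lies above a line: one with a hand-written rule, one with a simple model that learns the rule from labelled examples (a small linear classifier stands in for the neural network).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))      # 200 random 2-D points
y = (X[:, 1] > X[:, 0]).astype(float)      # label: is the point above the line y = x?

# 1) The "programming" approach: a human writes the rule explicitly.
def rule_based(point):
    return 1.0 if point[1] > point[0] else 0.0

# 2) The "training" approach: a tiny linear model learns the rule from examples.
w, b = np.zeros(2), 0.0
for _ in range(2000):                       # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean()

learned = ((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5).astype(float)
print("learned model agrees with the hand-written rule on",
      (learned == y).mean() * 100, "percent of points")
```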

Read the rest here:
How AI is reinventing what computers are - MIT Technology Review

Incorporating This Into Your Daily Routine Can Bolster Your Brain Health & Mood – mindbodygreen.com

Spirituality and the brain: What's the connection? We'll admit, the neuroscience has been a bit limited (even though research has gotten closer to mapping the specific brain circuit responsible for spirituality), but Lisa Miller, Ph.D., an award-winning researcher in spirituality and psychology and the author of The Awakened Brain, is on the case.

Specifically, she combed through MRI scans of participants who have struggled with feelings of sadness (the blues) to assess whether a sense of spirituality had any effect on their mental well-being, and, frankly, the results are astounding. "People who [had] a spiritual response to suffering showed entirely different brains," she says on this episode of the mindbodygreen podcast. "They showed not thinning but thickening across the regions of perception and reflection, the parietal, precuneus, and occipital [regions]."

In other words, a sense of spirituality can have a huge impact on your brain health and mood. The question becomes: How do you incorporate spirituality into your everyday life? According to Miller, a deep sense of awareness is not tied to religion per se; rather, the ability to connect spiritually is innate within us. "We are all born with this capacity to see into the deeper nature of life, but the muscle has been left to atrophy in the great majority of people in our country," she says.

Below, she offers her personal tips to flex those spiritual muscles.

See the original post here:
Incorporating This Into Your Daily Routine Can Bolster Your Brain Health & Mood - mindbodygreen.com

The Ideal Color To Surround Yourself With Right Now, According To Astrologers – mindbodygreen.com

Scorpio season can be a heavy and intense time, especially if Scorpio placements in your birth chart are scarce and you're not used to its qualities. To ease into it, and even harness its potent and mysterious energy, consider incorporating Scorpio colors into your life from October 23 to November 22.

Don't shy away from your dark-colored clothes at this time, especially blacks and deep reds. (Just in time for Halloween, right?) And it doesn't have to stop at your wardrobe! Maybe you opt for some new decor in your home or office, swapping out a brightly colored piece of art for a darker, more brooding one.

If you normally shy away from looks like dark lipstick, (vegan) leather, and things of the like, now's the time to embrace them. On that same note, you might also want to avoid lighter colors like pastels, which don't complement Scorpio's palette. Libra, for example, is associated with pinks and blues, and we're leaving that energy behind come October 23.

Read more:
The Ideal Color To Surround Yourself With Right Now, According To Astrologers - mindbodygreen.com

A.I. Predicts the Shapes of Molecules to Come – The New …

For some years now John McGeehan, a biologist and the director of the Center for Enzyme Innovation in Portsmouth, England, has been searching for a molecule that could break down the 150 million tons of soda bottles and other plastic waste strewn across the globe.

Working with researchers on both sides of the Atlantic, he has found a few good options. But his task is that of the most demanding locksmith: to pinpoint the chemical compounds that on their own will twist and fold into the microscopic shape that can fit perfectly into the molecules of a plastic bottle and split them apart, like a key opening a door.

Determining the exact chemical contents of any given enzyme is a fairly simple challenge these days. But identifying its three-dimensional shape can involve years of biochemical experimentation. So last fall, after reading that an artificial intelligence lab in London called DeepMind had built a system that automatically predicts the shapes of enzymes and other proteins, Dr. McGeehan asked the lab if it could help with his project.

Toward the end of one workweek, he sent DeepMind a list of seven enzymes. The following Monday, the lab returned shapes for all seven. "This moved us a year ahead of where we were, if not two," Dr. McGeehan said.

Now, any biochemist can speed their work in much the same way. On Thursday, DeepMind released the predicted shapes of more than 350,000 proteins, the microscopic mechanisms that drive the behavior of bacteria, viruses, the human body and all other living things. This new database includes the three-dimensional structures for all proteins expressed by the human genome, as well as those for proteins that appear in 20 other organisms, including the mouse, the fruit fly and the E. coli bacterium.

This vast and detailed biological map, which provides roughly 250,000 shapes that were previously unknown, may accelerate the ability to understand diseases, develop new medicines and repurpose existing drugs. It may also lead to new kinds of biological tools, like an enzyme that efficiently breaks down plastic bottles and converts them into materials that are easily reused and recycled.

"This can take you ahead in time, influence the way you are thinking about problems and help solve them faster," said Gira Bhabha, an assistant professor in the department of cell biology at New York University. "Whether you study neuroscience or immunology, whatever your field of biology, this can be useful."

This new knowledge is its own sort of key: If scientists can determine the shape of a protein, they can determine how other molecules will bind to it. This might reveal, say, how bacteria resist antibiotics and how to counter that resistance. Bacteria resist antibiotics by expressing certain proteins; if scientists were able to identify the shapes of these proteins, they could develop new antibiotics or new medicines that suppress them.

In the past, pinpointing the shape of a protein required months, years or even decades of trial-and-error experiments involving X-rays, microscopes and other tools on the lab bench. But DeepMind can significantly shrink the timeline with its A.I. technology, known as AlphaFold.

When Dr. McGeehan sent DeepMind his list of seven enzymes, he told the lab that he had already identified shapes for two of them, but he did not say which two. This was a way of testing how well the system worked; AlphaFold passed the test, correctly predicting both shapes.

It was even more remarkable, Dr. McGeehan said, that the predictions arrived within days. He later learned that AlphaFold had in fact completed the task in just a few hours.

AlphaFold predicts protein structures using what is called a neural network, a mathematical system that can learn tasks by analyzing vast amounts of data, in this case thousands of known proteins and their physical shapes, and extrapolating into the unknown.

This is the same technology that identifies the commands you bark into your smartphone, recognizes faces in the photos you post to Facebook and translates one language into another on Google Translate and other services. But many experts believe AlphaFold is one of the technology's most powerful applications.

"It shows that A.I. can do useful things amid the complexity of the real world," said Jack Clark, one of the authors of the A.I. Index, an effort to track the progress of artificial intelligence technology across the globe.

As Dr. McGeehan discovered, it can be remarkably accurate. AlphaFold can predict the shape of a protein with an accuracy that rivals physical experiments about 63 percent of the time, according to independent benchmark tests that compare its predictions to known protein structures. Most experts had assumed that a technology this powerful was still years away.

"I thought it would take another 10 years," said Randy Read, a professor at the University of Cambridge. "This was a complete change."

But the system's accuracy does vary, so some of the predictions in DeepMind's database will be less useful than others. Each prediction in the database comes with a confidence score indicating how accurate it is likely to be. DeepMind researchers estimate that the system provides a good prediction about 95 percent of the time.
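For readers who want to work with those confidence scores, here is a small, hypothetical sketch that reads a structure file downloaded from the public AlphaFold database, where per-residue confidence (pLDDT) is stored in the B-factor field, and reports how much of the protein is predicted with high confidence; the file name is a placeholder.

```python
from Bio.PDB import PDBParser   # pip install biopython

def plddt_summary(pdb_path: str, threshold: float = 70.0):
    """Report mean pLDDT and the fraction of residues above a confidence threshold."""
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    scores = []
    for residue in structure.get_residues():
        atoms = list(residue.get_atoms())
        if atoms:                                  # AlphaFold stores pLDDT per residue,
            scores.append(atoms[0].get_bfactor())  # repeated on every atom's B-factor
    confident = sum(s >= threshold for s in scores)
    return sum(scores) / len(scores), confident / len(scores)

# Example: a (hypothetical) local copy of an AlphaFold DB model file.
mean_plddt, frac_confident = plddt_summary("AF-P12345-F1-model_v1.pdb")
print(f"mean pLDDT {mean_plddt:.1f}; {frac_confident:.0%} of residues >= 70")
```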

As a result, the system cannot completely replace physical experiments. It is used alongside work on the lab bench, helping scientists determine which experiments they should run and filling the gaps when experiments are unsuccessful. Using AlphaFold, researchers at the University of Colorado Boulder recently helped identify a protein structure they had struggled to identify for more than a decade.

DeepMind has opted to freely share its database of protein structures rather than sell access, with the hope of spurring progress across the biological sciences. "We are interested in maximum impact," said Demis Hassabis, chief executive and co-founder of DeepMind, which is owned by the same parent company as Google but operates more like a research lab than a commercial business.

Some scientists have compared DeepMind's new database to the Human Genome Project. Completed in 2003, the Human Genome Project provided a map of all human genes. Now, DeepMind has provided a map of the roughly 20,000 proteins expressed by the human genome, another step toward understanding how our bodies work and how we can respond when things go wrong.

The hope is also that the technology will continue to evolve. A lab at the University of Washington has built a similar system called RoseTTAFold, and like DeepMind, it has openly shared the computer code that drives its system. Anyone can use the technology, and anyone can work to improve it.

Even before DeepMind began openly sharing its technology and data, AlphaFold was feeding a wide range of projects. University of Colorado researchers are using the technology to understand how bacteria like E. coli and salmonella develop a resistance to antibiotics, and to develop ways of combating this resistance. At the University of California, San Francisco, researchers have used the tool to improve their understanding of the coronavirus.

The coronavirus wreaks havoc on the body through 26 different proteins. With help from AlphaFold, the researchers have improved their understanding of one key protein and are hoping the technology can help increase their understanding of the other 25.

If this comes too late to have an impact on the current pandemic, it could help in preparing for the next one. "A better understanding of these proteins will help us not only target this virus but other viruses," said Kliment Verba, one of the researchers in San Francisco.

The possibilities are myriad. After DeepMind gave Dr. McGeehan shapes for seven enzymes that could potentially rid the world of plastic waste, he sent the lab a list of 93 more. "They're working on these now," he said.

Continued here:
A.I. Predicts the Shapes of Molecules to Come - The New ...

These weird virtual creatures evolve their bodies to solve problems – MIT Technology Review

"It's already known that certain bodies accelerate learning," says Bongard. "This work shows AI that can search for such bodies." Bongard's lab has developed robot bodies that are adapted to particular tasks, such as giving callus-like coatings to feet to reduce wear and tear. Gupta and his colleagues extend this idea, says Bongard. They show that the right body can also speed up changes in the robot's brain.

Ultimately, this technique could reverse the way we think of building physical robots, says Gupta. Instead of starting with a fixed body configuration and then training the robot to do a particular task, you could use DERL to let the optimal body plan for that task evolve and then build that.
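The real DERL pipeline couples evolutionary search over morphologies with reinforcement-learning training of every candidate body, but the outer loop can be sketched in a few lines; everything below, from the toy body representation to the scoring function, is a simplified stand-in rather than the paper's implementation.

```python
import random

def random_body():
    """A toy 'morphology': number of limbs and limb length."""
    return {"limbs": random.randint(2, 8), "length": random.uniform(0.1, 1.0)}

def mutate(body):
    child = dict(body)
    child["limbs"] = max(2, child["limbs"] + random.choice([-1, 0, 1]))
    child["length"] = min(1.0, max(0.1, child["length"] + random.gauss(0, 0.05)))
    return child

def evaluate(body):
    """Stand-in for 'train a controller in simulation and return its task reward'."""
    return -abs(body["limbs"] - 6) - abs(body["length"] - 0.4)

population = [random_body() for _ in range(32)]
for generation in range(50):
    scored = sorted(population, key=evaluate, reverse=True)
    survivors = scored[: len(scored) // 2]              # keep the fitter half
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=evaluate)
print("best body plan found:", best)
```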

Gupta's unimals are part of a broad shift in how researchers are thinking about AI. Instead of training AIs on specific tasks, such as playing Go or analyzing a medical scan, researchers are starting to drop bots into virtual sandboxes, such as POET, OpenAI's virtual hide-and-seek arena, and DeepMind's virtual playground XLand, and getting them to learn how to solve multiple tasks in ever-changing, open-ended training dojos. Instead of mastering a single challenge, AIs trained in this way learn general skills.

For Gupta, free-form exploration will be key for the next generation of AIs. "We need truly open-ended environments to create intelligent agents," he says.

Originally posted here:
These weird virtual creatures evolve their bodies to solve problems - MIT Technology Review

As Great Resignation Draws On, Smaller Teams and Larger Workloads, Concerns Over Retention Top of Mind for Leaders – Business Wire

NEW YORK--(BUSINESS WIRE)--On the heels of the latest labor turnover survey from the U.S. government, ExecOnline, the pioneer of online leadership development for enterprises, released new survey findings that highlight the top challenges leaders say affect them most today. Results from the survey, which polled thousands of senior-level executives participating in ExecOnline leadership development programs, reveal that in addition to issues related to a burned-out and depleted workforce, managers are also struggling with today's post-remote, hybrid work environment.

As the Great Resignation draws on with record numbers of workers quitting their jobs, the company's latest survey found that 52 percent of leaders said managing workloads with smaller teams is now their top challenge, up from 38 percent who said so one year ago.

The results also revealed that fewer leaders now feel supporting their teams' well-being is a top challenge, a trend likely tied to the rollout of new HR programs and benefits for employees working from home and dealing with changes in their personal and professional lives. Specifically, 61 percent of leaders in Q2 2020, the beginning of the pandemic, said that supporting their teams' well-being was their top challenge, but in Q3 2021 that number dropped to 41 percent.

Even with this decrease, retention and burnout still rank high in leaders' thinking about their organizations' post-pandemic plans. Specifically, 56 percent of respondents said they are moderately or extremely concerned about burnout, a figure that has remained relatively unchanged over the past year, and 43 percent are concerned about retention.

"Burnout and fatigue are not issues that are going away, and in part are fueling what's behind the Great Resignation," said ExecOnline Co-Founder and CEO Stephen Bailey. "Now is not the time for leaders to simply check the box when it comes to offering wellness support. As we return to the workplace and continue to embrace flexible work, organizations must prioritize developing leaders who can effectively and empathetically manage their teams' well-being and the challenges that are preventing them from performing at their very best."

Leadership Capabilities Decrease as Return to Work Plans Accelerate

In line with the downward trend of leaders who say supporting their teams' well-being is a top challenge, many leadership capabilities are seemingly on the downswing according to their employees. On average, confidence in critical capabilities such as change leadership, strategic leadership and team leadership was at an all-time high (67 percent) in Q1 2021; however, by Q3, confidence in those capabilities had decreased to 61 percent, the lowest reported in a year. Interestingly, leadership capabilities were on the rise through Q1 2021 as the workforce acclimated to remote work, but declined as part of the global workforce began returning to the office and leaders began navigating new challenges related to a post-remote work environment.

"Today's leaders have the added responsibility of ensuring their teams are adequately prepared for and able to thrive in a post-remote work environment, a role that also now requires skills and capabilities that foster a deep knowledge of communication, empathy, and connectivity," continued Bailey. "This is a critical moment for companies to invest in developing senior leaders with these capabilities or risk an even greater fallout from the impact of the pandemic and the Great Resignation."

Post-Pandemic Work Environments Will Require Clear Communication, Empathy, Inclusive Leadership Abilities

The pandemic and the ensuing shift to a work-from-home model changed the basic structure and needs of workforces. Similarly, as more employees return to the workplace, they are looking for different capabilities and skills from their managers that will help them better navigate a future hybrid work environment.

For future hybrid work success, 51 percent said their leaders will need to be able to demonstrate empathy to support workers' needs; 44 percent want leaders to have skills that foster inclusivity; and 32 percent feel leaders will need skills that show an understanding of diversity and equity issues.

ExecOnline's survey did uncover good news on the diversity, equity and inclusion front: 69 percent of survey respondents said their senior leadership exhibits capabilities that match diversity leadership, up from the 65 percent who said this one year ago, and the only leadership capability to see an increase.

"The future of work is less about where we work and more about how we work. Now more than ever, it's important for teams to see their managers and senior leaders addressing topics and issues related to the challenges of today's business environment," added Bailey.

ExecOnline polls thousands of its program participants five times a year to gauge top trends in leadership development, culture, and barriers to effective work.

About ExecOnline

As the pioneer of online leadership development for enterprises, ExecOnline has delivered transformational learning experiences to corporate leaders at over 500 global organizations since 2012. Through partnerships with elite business schools such as Berkeley Haas, Chicago Booth, Columbia, UVA Darden School of Business, Tuck at Dartmouth, Duke CE, IMD, Ivey, MIT-Sloan, Stanford GSB, Wharton and Yale, ExecOnline consistently provides top-tier leadership courses. Named a Forbes Technology Company to Watch, ExecOnline's proprietary online ecosystem combines the engagement of on-campus study with the convenience of online education, through dynamic, high-impact experiences tailored to the unique strategic, innovation and operational concerns of corporate executives. Follow ExecOnline on LinkedIn and Twitter. Visit execonline.com to learn more.

Link:
As Great Resignation Draws On, Smaller Teams and Larger Workloads, Concerns Over Retention Top of Mind for Leaders - Business Wire

Global Mindfulness Meditation Apps Market 2021 Industry Insights and Major Players are Deep Relax, Smiling Mind, Inner Explorer, Inc. Radford…

Global Mindfulness Meditation Apps Market is a study that has been added to the MarketandResearch.biz database. The report covers an in-depth overview, product description and industry scope, and elaborates on the market outlook and growth status through 2027. In this report, companies will find the current and future market outlook in developed and emerging markets. The report also examines the market from various perspectives with the help of Porter's Five Forces analysis.

The report highlights the segment expected to dominate the global Mindfulness Meditation Apps market and the regions expected to see the fastest growth during the forecast period 2021-2027. It studies all phases of the market to provide a review of current market dynamics.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketandresearch.biz/sample-request/173878

The report includes an opportunity analysis using various analytical tools and historical data. To better explain the reasoning behind growth forecasts, it provides detailed profiles of the industry's top and emerging players, along with their plans, product specifications, and development activity. The key players are focusing on growth to increase profitability and product life.

On the basis of product type, the market is segmented into:

The study explores the key applications/end-users of the market:

Some of the key players considered in the study are:

On the basis of region, the market is segmented into countries:

ACCESS FULL REPORT: https://www.marketandresearch.biz/report/173878/global-mindfulness-meditation-apps-market-2021-by-company-regions-type-and-application-forecast-to-2026

The report provides detailed information on key factors, including drivers, restraints, opportunities, and industry-specific challenges affecting the growth of the global Mindfulness Meditation Apps market. The study helps in analysing and forecasting the size of the market in terms of value and volume, and includes forecasts of market segment sizes, by value, across key regions.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

Contact Us: Mark Stone, Head of Business Development | Phone: +1-201-465-4211 | Email: sales@marketandresearch.biz

See more here:
Global Mindfulness Meditation Apps Market 2021 Industry Insights and Major Players are Deep Relax, Smiling Mind, Inner Explorer, Inc. Radford...

Google Proposes ARDMs: Efficient Autoregressive Models That Learn to Generate in any Order – Synced

Deep generative models that apply a likelihood function to a data distribution have made impressive progress in modelling different sources of data such as images, text and video. One popular model type is autoregressive models (ARMs), which, although effective, require a pre-specified order for their data generation. ARMs consequently may not be the best choice for generation tasks involving certain types of data, such as images.

In a new paper, a Google Research team proposes Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models and discrete diffusion models. ARDMs do not require causal masking of model representations and can be trained using an efficient objective that scales favourably to high-dimensional data.

The team summarises the main contributions of their work as:

The researchers explain that, from an engineering perspective, the main challenge in parameterizing an ARM is the need to enforce triangular or causal dependence. To address this, they took inspiration from modern diffusion-based generative models, deriving an objective that is optimized for only a single step at a time. In this way, they could derive a different objective for an order-agnostic ARM.
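A rough sketch of such a single-step, order-agnostic training objective is shown below in PyTorch; the masking scheme and loss weighting follow the general recipe the paper describes, but the tiny model and the exact formulation here are simplified stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: sequences of D discrete variables with K classes; a tiny network
# (a real ARDM would be far larger) predicts logits for every position from a
# masked version of the input.
B, D, K, MASK = 16, 32, 27, 27             # MASK is an extra "unknown" token id
model = nn.Sequential(nn.Embedding(K + 1, 64), nn.Flatten(),
                      nn.Linear(64 * D, D * K))

def ardm_step(x):                           # x: (B, D) integer tensor
    t = torch.randint(1, D + 1, (B, 1))     # sample a step t uniformly per example
    ranks = torch.argsort(torch.rand(B, D), dim=1)   # a random generation order
    known = ranks < (t - 1)                 # first t-1 variables in the order are observed
    x_in = torch.where(known, x, torch.full_like(x, MASK))
    logits = model(x_in).view(B, D, K)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (B, D)
    n_masked = (~known).sum(dim=1).clamp(min=1)
    # Average over the masked positions (the 1/(D-t+1) weighting), scaled by D.
    loss = (D * (ce * (~known)).sum(dim=1) / n_masked).mean()
    return loss

x = torch.randint(0, K, (B, D))
print(ardm_step(x))                         # one training step's loss
```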

The team then leveraged an important property of this parametrization, namely that the distribution over multiple variables is predicted at the same time, to enable parallel and independent generation of variables.

The researchers also identified an interesting property of upscale ARDM training: complexity is not changed by modelling multiple stages. This enabled them to experiment with adding an arbitrary number of stages during training without any increase in computational complexity.

The team applied two methods to the parametrization of the upscaling distributions: direct parametrization, which requires only the distribution parameter outputs relevant to the current stage, making it efficient; and data parametrization, which can automatically compute the appropriate probabilities for experimentation with new downscaling processes, but may be expensive when a large number of classes is involved.

In their empirical study, the team compared ARDMs to other order-agnostic generative models, evaluating performance on a character modelling task using the text8 dataset. As expected, the proposed ARDMs performed competitively with existing generative models, and outperformed competing approaches on per-image lossless compression.

Overall, the study validates the effectiveness of the proposed ARDMs as a new class of models at the intersection of autoregressive and discrete diffusion models, whose benefits are summarized as:

The paper Autoregressive Diffusion Models is on arXiv.

Author: Hecate He | Editor: Michael Sarazen



See the original post:
Google Proposes ARDMs: Efficient Autoregressive Models That Learn to Generate in any Order - Synced