Category Archives: Deep Mind

The Number One Voice Certain To Drive Your Leadership Off The Deep End – Forbes

The Number One Voice Certain to Drive Your Leadership Off the Deep End

By: Rachel Tenenbaum

"You are not the voice of the mind; you are just the one that hears it." – Michael Singer

In one of the most astute and celebrated quotes in the personal and professional development realm, Michael Singer speaks a truth that transforms lives, leadership, and minds in a single sentence.

The first time I read this quote, I resisted it, feeling that it was a radically preposterous claim: Who am I if not the voice of the mind?

Years later, it has helped an astounding number of my clients' lives, and it's a premise I continue to explore and reflect on regularly. And I am not alone.

Multiple TED Talk speakers, authors, and coaches like Shirzad Chamine and Tara Mohr, psychologist Ethan Kross, and psychologist/NYT bestselling author Rick Hanson are a few of the many who are cultivating a greater awareness of this pivotal life- and leadership-transforming truth: this voice is not you but instead the militant teachers, demoralizing colleagues or leaders, hypercritical parents, or detrimental societal norms from your past.

I argue that we do not have one naysaying voice, but several that chatter incessantly: analytical commentators who thrive most when they instigate and influence nearly every move.

[Related: Don't Let Your Negative Nellie Keep You From Moving Forward]

Activated by the areas of the brain that constantly scan for threats, these voices intend to keep you safe. They insist their advice makes life easy, but instead of being helpful, these voices are like tweens claiming they are expert drivers: give them the wheel and you are bound to crash, and likely incite a pileup on the nearest interstate.

These voices do not belong in the boardroom, at home, or in your car, and they certainly should not dictate your decision-making.

These voices drive (and are driven by) fear, insecurity, self-righteousness, resentment, ego, and so much more. They bully, delay, and stop you from leading. They undermine your authority and creativity, hijack your confidence, and destabilize your effectiveness.

For some, they judge or belittle, and are the echo that seeds endless doubt:

Who do you think you are?

Do you really think they will buy that pitch?

Your team is sliding, your tactics are terrible, and it's all your fault.

For others, they are the voices that bully and push, convincing you that the only way to achieve your goals is to bulldoze over everyone, ignore policies, and reject the guidance of others. The aftermath leaves a destructive mess, and undermines your actual objectives.

No matter the approach, they sabotage rational executive function, crippling your ability to see and think clearly. Your capacity to reason and to reflect declines and your ability to creatively problem-solve retreats with it.

[Related: Why Confidence Matters in Your Work and Your Life]

These voices create divides - within ourselves, and with others. For many, they have been around for so long, they've seamlessly woven themselves into the fabric of our minds and our lives.

Research estimates that we have approximately 50,000 to 70,000 thoughts a day, and 80% of those come from the negative voices, habitually undermining our self-confidence and self-authority. Without critical awareness, when we blindly listen to these voices and believe we are them, they win, every time.

Knowing these voices will not disappear, how do you live, lead, and even harness the best of these voices?

First, pause and take a breath.

Then, detach and acknowledge their presence.

And finally, question with sincere curiosity.

Since the voices are linked to the emotional areas of the brain, that conscious pause and breath begin to quiet the anxious activity, enabling you to create distance between yourself and the chatter. As you detach, you transform the conditioned, reflexive judgment (of yourself and others) into healthy discernment. You begin to see the voices for what they are: mere thoughts muddied in layers of fear and reactionary theories.

With time, this mindful awareness coupled with sincere curiosity enables you to not only identify the reflexive drivers, but also the kernels of wisdom and insight buried beneath.

"Your team is sliding, your tactics are terrible, and it's all your fault" becomes "the team needs a pick-me-up, and my reflexive berating is not helping. We need to reconnect to our team's values and goals, and creatively assess this issue together."

"Do you really think they will buy that pitch?" which only begs a black-and-white answer with no room for vision, becomes "what would take this pitch to the next level?" Perhaps you even realize that nothing else is needed, when you look with clear sight.

And finally, that bulldozing style diminishes, turning into a strategic approach, fostering collaboration and long-term buy-in.

These skills - the ability to pause, detach, listen, observe, and accurately question and discern - are essential for every leader to cultivate; they are also the fundamental building blocks of mindfulness.

One of the most powerful, underutilized accelerators of leadership, mindfulness not only trains each of these critical skills but also enables you to assess who should be driving the car of your life. By mindfully placing the strong, intentional, wise, and aware voice at the helm, you'll achieve more, react less, and lead with greater clarity.

[Related: Thoughtful Leadership and I]

A certified professional coach and facilitator, Rachel Tenenbaum, PCC, CNTC, helps individuals understand and optimize their brains. Her focus is on leadership development and personal development in and outside organizations. She offers a global community meditation on Sundays - you can find more information here.


Consumer Companies Bail on Non-Core Assets: "Deep is the New Wide" – Mergers & Acquisitions

Consumer sector players are increasingly eager to part with assets seen as non-core in a market that rewards specialization, says Kearney partner Bahige El-Rayes. "Deep is the new wide," El-Rayes says. "It used to be about diversified products; now it's about consistency, agility. In many cases consumer [companies] realized they don't need everything they have."

That realization is widely felt. Three-quarters of sector executives at the top 20 consumer companies surveyed by global strategy and management consulting firm Kearney said that they could carry out a major divestiture. The rationale? Strategic restructuring to strengthen balance sheets and enable growth.

The novelty in the survey data, released today, is in the type of growth corporates seek on the back of disposals. Rather than deploying capital in megadeals that fundamentally reorient strategy, executives tell Kearney that they want to pursue small, discrete acquisitions that give them exposure to a market without weighing in as a potential drag on the balance sheet should markets shift.

The word that defines this is "optionality," El-Rayes explains. "Companies are increasingly like venture capitalists; they need to be more agile and less certain in terms of trends and capability."

To capture upside from an emerging technology, for instance, executives are now thinking: "There are a ton of technologies. I want to be positioned but don't want a billion-dollar investment. I want a footprint but not too much where I'll be exposed in a few years."

Grocery shoppers are clamoring for personalized nutrition and shopping options that improve gut health, but how companies position themselves for a possibly transient shift in preferences is less clear.

"Health and food are coming together, and it's not certain if it's a fad or not," is how El-Rayes frames the scenario. "Is that something that consumer packaged goods companies have a role to play in, or are they the big evil? Who is the culprit for the processed sugar? Where are the growth opportunities? Should packaged goods companies double down on that?"

Rather than seeking a transformational merger of equals, corporates could play the possible outcomes by acquiring a small health food company. The parent can drive reverse integration such that the values and positioning of the fresh-focused target permeate the whole company, driving organizational change without the price tag and legacy assets of a larger deal.

Mondelez's 2019 acquisition of a majority stake in Perfect Snack owner Perfect Brands is a zeitgeist transaction: the maker of Oreos and Cadbury eggs acquired organic, non-GMO clean snacks in the deal. Meanwhile, Cargill invested $75 million in textured vegetable protein maker Puris the same year, bringing its total investment to $100 million.

This preference for option value is evident in trending valuations. Deal multiples for small and midsize deals are increasing relative to large deals, which have cumulatively fallen 34 percent since 2018. When asked what size targets they plan to acquire, approximately 60 percent of respondents pointed to targets of $500 million and below.

While divestitures are top of mind in consumer sector boardrooms across the globe, so are the depths of redeployment. Kearney, working with Dealogic data, projects consumer M&A to rebound off 2020 lows should first quarter transaction rates continue apace.

El-Rayes is co-author of the report Forged in Crisis, Poised to Innovate, and leads the UK and Ireland consumer practice of Kearney.


Bengio Team Proposes Flow Network-Based Generative Models That Learn a Stochastic Policy From a Sequence of Actions – Synced

For standard reinforcement learning (RL) algorithms, the maximization of expected return is achieved by selecting the single highest-reward sequence of actions. But for tasks in a combinatorial domain such as drug molecule synthesis, where exploration is important, the desired goal is no longer simply to generate the single highest-reward sequence of actions, but rather to sample a diverse set of high-return solutions.

To address this specific machine learning problem, a research team from Mila, McGill University, Université de Montréal, DeepMind and Microsoft has proposed GFlowNet, a novel flow-network-based generative method that can turn a given positive reward into a generative policy that samples with a probability proportional to the return.

The team summarizes their contributions as:

In graph theory, a flow network is a directed graph with sources and sinks, where each edge has a capacity and receives a flow. The motivating task for this flow network is iterative black-box optimization, where the agent must compute a reward for a large batch of candidates at each round. The idea behind the proposed GFlowNet is to view the probability assigned to an action given a state as the flow associated with a network whose nodes are states and whose outgoing edges are deterministic transitions driven by actions.

The team defines a flow network with a single source, where the sinks of the network correspond to the terminal states. Given the graph structure and the outflow of the sinks, they attempt to calculate a valid flow between nodes. Notably, such a construction corresponds to a generative model, and the researchers prove rigorously that, if we follow the flow, the process terminates at a sink (a terminal state) with probability proportional to the return.
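In the paper's framing, this amounts to a flow-matching condition: for every state other than the root, total inflow must equal the state's reward plus its total outflow. A sketch of the condition and the induced policy, writing F(s, a) for the flow on the edge leaving state s via action a, T for the deterministic transition function, and R(s') for the reward (zero at non-terminal states):

```latex
% Flow-matching condition for every non-root state s':
\sum_{s,a \,:\, T(s,a)=s'} F(s,a) \;=\; R(s') + \sum_{a' \in \mathcal{A}(s')} F(s',a')
% The generative policy follows the flow proportionally:
\pi(a \mid s) = \frac{F(s,a)}{\sum_{a' \in \mathcal{A}(s)} F(s,a')}
```

When these conditions hold, sampling actions from this policy terminates at a sink x with probability proportional to R(x).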

Based on the above theoretical results, the researchers then create a learning algorithm. They propose approximating the flows such that the flow conditions are satisfied at convergence, given enough capacity in the flow estimator. This yields an objective function for GFlowNet whose minimization achieves their desideratum: a generative policy that samples with a probability proportional to the return.
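Concretely, the paper casts the flow-matching condition as a squared difference of log-sums over each state visited in a trajectory. A sketch consistent with the definitions above, where the neural network estimates the log flow, tau is a sampled trajectory, and epsilon is a constant that balances the influence of large and small flows:

```latex
\mathcal{L}_{\theta,\epsilon}(\tau) = \sum_{s' \in \tau,\; s' \neq s_0}
\Bigl( \log\Bigl[\epsilon + \sum_{s,a\,:\,T(s,a)=s'} e^{F^{\log}_\theta(s,a)}\Bigr]
     - \log\Bigl[\epsilon + R(s') + \sum_{a' \in \mathcal{A}(s')} e^{F^{\log}_\theta(s',a')}\Bigr] \Bigr)^{2}
```

Minimizing this loss drives inflow and reward-plus-outflow to agree at every visited state.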

The team conducted various experiments to evaluate the performance of the proposed GFlowNet. For example, in an experiment that generates small molecules, the team reported the empirical distribution of rewards and the average reward of the top-k as a function of learning.

Compared to the baseline MARS (Xie et al., 2021), GFlowNet found more high-reward molecules. The results also show that for both GFlowNet and MARS, the more molecules are visited, the better they become, with a slow convergence towards the proxy's max reward.

Overall, GFlowNet achieves competitive results against baseline methods on the molecule synthesis domain task and performs well on a simple domain where there are many modes to the reward function. The research team believes GFlowNet can serve as an alternative approach for turning an energy function into a fast generative model.

The implementations are available on the project GitHub. The paper "Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation" is on arXiv.

Author: Hecate He | Editor: Michael Sarazen, Chain Zhang




AI in Healthcare Market Drivers, Challenges, Opportunities and Competitive Strategy Over 2021-2031 | Nuance Communications, Inc., DeepMind…

Scope of Trending Report:

Global AI in Healthcare Market: The report provides a valuable source of insightful data for business strategists and a competitive analysis of the AI in Healthcare market. The main aim of this report is to help the user understand the market's definition, segmentation, potential, influential trends, and the challenges it faces. This report will aid users in understanding the market in depth.

The report on AI in Healthcare market offers an overview of several major countries spread across various geographic regions over the globe. The report concentrates on recognizing various market developments, dynamics, growth drivers and factors hampering the market growth. Further, the report delivers comprehensive insights into numerous growth opportunities and challenges based on various types of products, applications, end users and countries, among others.

Download a FREE sample copy of this report: https://www.insightslice.com/request-sample/489

We provide detailed product mapping and investigation of various market scenarios. Our expert analysts provide a thorough analysis and breakdown of the market presence of key market leaders. We strive to stay updated with the recent developments and follow the latest company news related to the industry players operating in the global AI in Healthcare market. This helps us to comprehensively analyze the individual standing of the companies as well as the competitive landscape. Our vendor landscape analysis offers a complete study to help you gain the upper hand in the competition.

The major manufacturers covered in this report: Nuance Communications, Inc., DeepMind Technologies Limited, IBM Corporation, Intel Corporation, Microsoft, and NVIDIA Corporation.

Scope of the report: AI in Healthcare Market

The research takes a closer look at prominent factors driving the growth rate of the prominent product categories across major geographies. Furthermore, the study covers sales, gross margin, consumption capacity, spending power and customer preference across various countries. The report offers clear indications of how the AI in Healthcare market is expected to witness numerous exciting opportunities in the years to come. Critical aspects including the growing requirement, demand and supply status, customer preference, and distribution channels are presented through resources such as charts, tables, and infographics.

Request For Customization: https://www.insightslice.com/request-customization/489

The report answers questions such as:

COVID-19 Impact Analysis on the AI in Healthcare Market

Given the scale of the pandemic, technology will play a crucial role in addressing every facet of COVID-19. A gradual increase in the number of AI in Healthcare use cases is also surging demand. Major applications of AI in Healthcare systems include security assessment and identity verification. In many countries, law enforcement agencies and organizations have shifted from legacy systems to AI in Healthcare solutions to reduce the overall spread of COVID-19.

The AI in Healthcare market report covers the following regions:

* North America: U.S., Canada, Mexico
* South America: Brazil, Venezuela, Argentina, Ecuador, Peru, Colombia, Costa Rica
* Europe: U.K., Germany, Italy, France, Netherlands, Belgium, Spain, Denmark
* APAC: China, Japan, Australia, South Korea, India, Taiwan, Malaysia, Hong Kong
* The Middle East and Africa: Israel, South Africa, Saudi Arabia

How insightSLICE Is Different From Other Market Research Companies:

InsightSLICE is a prominent market research and consulting firm offering action-ready syndicated research reports, custom market analysis, consulting services, and competitive analysis through various recommendations related to emerging market trends, technologies, and potential absolute dollar opportunities.

Note: * The discount is offered at the Standard Price of the report.

Ask For Discount Before Purchasing This Business Report @ https://www.insightslice.com/request-discount/489

The study on Global AI in Healthcare Market provides crucial insights such as:

About Us:

We are a team of research analysts and management consultants with a common vision: to assist individuals and organizations in achieving their short- and long-term strategic goals by extending quality research services. insightSLICE was founded to support established companies, start-ups and non-profit organizations across various industries, including Packaging, Automotive, Healthcare, Chemicals & Materials, Industrial Automation, Consumer Goods, Electronics & Semiconductor, IT & Telecom and Energy, among others. Our in-house team of seasoned analysts holds considerable experience in the research industry.

Contact Info: 422 Larkfield Ctr #1001, Santa Rosa, CA 95403-1408 | info@insightslice.com | +1 (707) 736-6633


It’s more than just skin-deep: Feel and look amazing with Bubble Skincare | Sponsored – Harvard Crimson

*Sabrina is a fictional character whose experiences are meant to represent young people around the country.

In a world where young people are constantly bombarded with unrealistic expectations, information overload, and knowledge of the world's issues, it is a stressful time for teens and young adults to grow up. Furthermore, as their skin begins to change, they are met with a skincare industry that is often confusing, misinforming, and overwhelming. Upon realizing that this industry did not have a high-quality, accessible, and affordable option for teens and young people, Bubble Skincare sought to change this by creating a brand that would focus not only on helping people take care of their skin, but also on making sure that people are empowered to take care of their mental health. Sabrina, a fictional freshman in college, just started using Bubble Skincare for her night routine, and has already begun to see a difference in her skin and to feel more confident.

Whether you already have some understanding of skin care or are completely new to it, whether you have dry skin, oily skin, sensitive skin, or anything in between, and no matter your identity, Bubble Skincare has something for you to start looking and feeling better. Furthermore, not only will you set a foundation for healthy skin and mind as you continue to grow and thrive, you will also help others, with Bubble Skincare donating 1 percent of proceeds to organizations that provide mental health support to teens.

Put your best face forward with Bubble - treat both your skin and yourself with love!

The Crimson's news and opinion teams, including writers, editors, photographers, and designers, were not involved in the production of this article.


NVIDIA and the battle for the future of AI chips – Wired.co.uk

An AI chip is any processor that has been optimised to run machine learning workloads, via programming frameworks such as Google's TensorFlow and Facebook's PyTorch. AI chips don't necessarily do all the work when training or running a deep-learning model, but operate as accelerators by quickly churning through the most intense workloads. For example, NVIDIA's AI-system-in-a-box, the DGX A100, uses eight of its own A100 Ampere GPUs as accelerators, but also features a 128-core AMD CPU.

"AI isn't new, but we previously lacked the computing power to make deep learning models possible, leaving researchers waiting on the hardware to catch up to their ideas. GPUs came in and opened the doors," says Rodrigo Liang, co-founder and CEO of SambaNova, another startup making AI chips.

In 2012, a researcher at the University of Toronto, Alex Krizhevsky, walloped other competitors in the annual ImageNet computer vision challenge, which pits researchers against each other to develop algorithms that can identify images or objects within them. Krizhevsky used deep learning powered by GPUs to beat hand-coded efforts for the first time. By 2015, all the top results at ImageNet contests were using GPUs.

Deep learning research exploded. Offering 20x or more performance boosts, NVIDIA's technology worked so well that when British chip startup Graphcore's co-founders set up shop, they couldn't get a meeting with investors. "What we heard from VCs was: what's AI?" says co-founder and CTO Simon Knowles, recalling a trip to California to seek funding in 2015. "It was really surprising." A few months later, at the beginning of 2016, that had all changed. "Then, everyone was hot for AI," Knowles says. However, they were not hot for chips. A new chip architecture wasn't deemed necessary; NVIDIA had the industry covered.

What's in a name?

GPU, IPU, RPU - they're all used to churn through datasets for deep learning, but the names do reflect differences in architecture.

Graphcore

Graphcore's Colossus MK2 IPU is massively parallel, with processors operating independently, a technique called multiple instruction, multiple data (MIMD). Software is written sequentially, but neural network algorithms need to do everything at once. To address this, one solution is to lay out all the data and its constraints, "like declaring the structure of the problem," says Graphcore CTO Simon Knowles. "It's a graph" - hence the name of his company.

But, in May 2016, Google changed everything, with what Cerebras' Feldman calls a "swashbuckling strategic decision": announcing it had developed its own chips for AI applications. These were called Tensor Processing Units (TPUs), and were designed to work with the company's TensorFlow machine learning programming framework. Knowles says the move sent a signal to investors that perhaps there was a market for new processor designs. "Suddenly all the VCs were like: where are those crazy Brits?" he says. Since then, Graphcore has raised $710 million (£515 million).

NVIDIA's rivals argue that GPUs were designed for graphics rather than machine learning, and that though their massive processing capabilities mean they work better than CPUs for AI tasks, their market dominance has only lasted this long due to careful optimisation and complex layers of software. "NVIDIA has done a fabulous job hiding the complexity of a GPU," says Graphcore co-founder and CEO Nigel Toon. "It works because of the software libraries they've created, the frameworks and the optimisations that allow the complexity to be hidden. It's a really heavy lifting job that NVIDIA has undertaken there."

But forget GPUs, the argument goes, and you might design an AI chip from scratch that has an entirely new architecture. There are plenty to choose from. Google's TPUs are application-specific integrated circuits (ASICs), designed for specific workloads; Cerebras makes a Wafer-Scale Engine, a behemoth chip 56 times larger than any other; IBM and BrainChip make neuromorphic chips, modelled on the human brain; and Mythic and Graphcore both make Intelligence Processing Units (IPUs), though their designs differ. There are plenty more.

But Bryan Catanzaro, NVIDIA's vice president of applied deep learning research, argues the many chips are simply variations of AI accelerators, the name given to any hardware that boosts AI. "We talk about a GPU or TPU or an IPU or whatever, but people get too attached to those letters," he says. "We call our GPU that because of the history of what we've done, but the GPU has always been about accelerated computing, and the nature of the workloads people care about is in flux."

Can anyone compete? NVIDIA dominates the core benchmark, MLPerf, which is the gold standard for deep-learning chips, though benchmarks are tricky beasts. Analyst Karl Freund of Cambrian AI Research notes that MLPerf, a benchmarking tool designed by academics and industry players including Google, is dominated by Google and NVIDIA, but that startups usually don't bother to complete all of it because the costs of setting up a system are better spent elsewhere.

NVIDIA does bother, and annually bests Google's TPU. "Google invented MLPerf to show how good their TPU was," says Marc Hamilton, head of solutions architecture and engineering at NVIDIA. "Jensen [Huang] said it would be really nice if we show Google every time they ran the MLPerf benchmark how our GPUs were just a little bit faster than the TPU."

To ensure it came out on top for one version of the benchmark, NVIDIA upgraded an in-house supercomputer from 36 DGX boxes to a whopping 96. That required recabling the entire system. To do it quickly enough, they simply cut through the cables - which Hamilton says was about a million dollars' worth of kit - and had new equipment shipped in. This may serve to highlight the bonkers behaviour driven by benchmarks, but it also inspired a redesign of DGX: the current-generation blocks can now be combined in groups of 20 without any rewiring.


Accelerating Deep Learning on the JVM with Apache Spark and NVIDIA GPUs – InfoQ.com

Key Takeaways

Many large enterprises and AWS customers are interested in adopting deep learning with business use cases ranging from customer service (including object detection from images and video streams, sentiment analysis) to fraud detection and collaboration. However, until recently, there were multiple difficulties with implementing deep learning in enterprise applications:

In this tutorial we share how the combination of the Deep Java Library (DJL), Apache Spark 3.x, and NVIDIA GPU computing simplifies deep learning pipelines while improving performance and reducing costs. In this post, you learn about the following:

Data processing and deep learning are often split into two pipelines, one for ETL processing, and one for model training. Enabling deep learning frameworks to integrate with ETL jobs allows for more streamlined ETL/DL pipelines.

Apache Spark has emerged as the standard framework for large-scale, distributed data analytics processing. Apache Spark's popularity comes from its easy-to-use APIs and high-performance big data processing. Spark is integrated with high-level operators and libraries for SQL, stream processing, machine learning (ML), and graph processing.

Many developers are looking for an efficient and easy way to integrate their deep learning (DL) applications with Spark. However, there is no official support for DL in Spark. There are libraries that try to solve this problem, such as TensorFlowOnSpark, Elephas, and CERN, but most of them are engine-dependent. Also, most deep learning frameworks (PyTorch, TensorFlow, Apache MXNet) do not have good support for the Java Virtual Machine (JVM), which Spark runs on.

In this section, we'll walk through several DL use cases for different industries using Scala.

Machine learning and deep learning have many applications in the financial industry. J.P. Morgan summarized six initiatives for their machine learning applications: Anomaly Detection, Intelligent Pricing, News Analytics, Quantitative Client Intelligence, Smart Documents, and Virtual Assistants. This indicates deep learning has a place in many business areas of financial institutions. A good example comes from Monzo bank, a fast-growing UK-based challenger bank, which reached 3 million customers in 2019. They successfully automated 30% to 50% of potential user enquiries by applying recurrent neural networks (RNNs) to their users' sequential event data.

Customer experience is an important topic for most financial institutions. Another example of applying deep learning to improve customer experience is Mastercard, a first-tier global payment solution company. Mastercard successfully built a deep learning-based customer propensity recommendation system with Apache Spark and its credit card transaction data. Such a recommender can offer better and more suitable goods and services to customers, potentially benefiting the customer, the merchants and Mastercard. Before this project, Mastercard built a Spark ML recommendation pipeline with traditional machine learning methods (i.e., matrix factorization with Alternating Least Squares, or ALS) on data consisting of over 1.4 billion transactions. To determine whether new deep learning methods could improve the performance of the existing recommender system, they benchmarked two deep learning methods: Neural Collaborative Filtering, and Wide and Deep. Both achieved a significant improvement compared to the traditional ALS implementation.

Financial systems require very high fault-tolerance and security levels, and Java is widely used in these companies to achieve better stability. Since financial systems also face the challenge of huge amounts of data (1.4 billion transactions), big data pipelines like Apache Spark are a natural choice to process the data. The combination of Java/Scala with Apache Spark is predominant in these fields.

As data continues to grow, a new type of company mines and analyzes business data, serving as a third party that helps clients extract valuable information from their data. This data is typically system logs, anonymous non-sensitive customer information, and sales and transaction records. As an example, TalkingData is a data intelligence service provider that offers data products and services to give businesses insights on consumer behavior, preferences, and trends. One of TalkingData's core services is leveraging machine learning and deep learning models to predict consumer behaviors (e.g., the likelihood of a particular group to buy a house or a car) and use these insights for targeted advertising. Currently, TalkingData is using a Scala-based big data pipeline to process hundreds of millions of records a day. They built a deep learning model and used it across a Spark cluster to do distributed inference tasks. Compared to single-machine inference, the Spark cluster reduced the total inference time from 8 hours to less than 3 hours. They chose DJL with Spark for the following reasons:

For the online retail industry, recommendations and ads are important for providing a better customer experience and revenue. The data sizes are usually enormous, and retailers need a big data pipeline to clean the data and extract the valuable information. Apache Spark is a natural fit to help deal with these tasks.

Today more and more companies are taking a personalized approach to content and marketing. Amazon Retail used Apache Spark on Amazon EMR to achieve this goal. They created a multi-label classification model to understand customer action propensity across thousands of product categories and used these propensities to create a personalized experience for customers. Amazon Retail built a Scala-based big data pipeline to consume hundreds of millions of records and used DJL to do DL inference on their model.

As shown above, many companies and institutions are using Apache Spark for their deep learning tasks. However, with the growing size and complexity of their deep learning models, developers are leveraging GPUs for their training and inference jobs. CPU-only computational power on Apache Spark is not sufficient to handle large models.

GPUs, with their massively parallel architecture, are driving the advancement of deep learning (DL) in the past several years. With GPUs, you can exploit data parallelism through columnar data processing instead of traditional row-based reading designed initially for CPUs. This provides higher performance and cost savings.

Apache Spark 3.0 represents a key milestone in this advancement, combining GPU acceleration with large-scale distributed data processing and analytics. Spark 3.0 can now schedule GPU-accelerated ML and DL applications on Spark clusters with GPUs. Spark conveys these resource requests to the underlying cluster manager. Also, when combined with the RAPIDS Accelerator for Apache Spark, Spark can now accelerate SQL and DataFrame data processing with GPUs without code changes. Because this functionality allows you to run distributed ETL, DL training, and inference at scale, it helps accelerate big data pipelines to leverage DL applications.

In Spark 3.0, you can now have a single pipeline, from data ingestion to data preparation to model training on a GPU-powered cluster.

Before Apache Spark 3.0, using GPUs was difficult. Users had to manually assign NVIDIA GPU devices to a Spark job and hardcode all configurations for every executor/task to leverage different GPUs on a single machine. Because the Apache Hadoop 3.1 Yarn cluster manager allows GPU coordination among different machines, Apache Spark can now work alongside it to help pass the device arrangement to different tasks. Users can simply specify the number of GPUs to use and how those GPUs should be shared between tasks. Spark handles the assignment and coordination of the tasks.
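As a minimal sketch of what that specification looks like in application code (the configuration keys are Spark 3.x's documented resource-scheduling properties; the amounts and the discovery-script path here are illustrative, not from the original tutorial):

```scala
import org.apache.spark.sql.SparkSession

// Request GPUs through Spark 3.x resource scheduling.
val spark = SparkSession.builder()
  .appName("gpu-pipeline")
  .config("spark.executor.resource.gpu.amount", "1")   // one GPU per executor
  .config("spark.task.resource.gpu.amount", "0.5")     // two tasks share each GPU
  .config("spark.executor.resource.gpu.discoveryScript",
    "/opt/spark/scripts/getGpusResources.sh")          // script that reports GPU addresses
  .getOrCreate()
```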

To get the most out of it, let's discuss the following two components:

The RAPIDS Accelerator for Apache Spark combines the power of the RAPIDS library and the scale of the Spark distributed computing framework. In addition, RAPIDS integration with ML/DL frameworks enables the acceleration of model training and tuning. This allows data scientists and ML engineers to have a unified, GPU-accelerated pipeline for ETL and analytics, while ML and DL applications leverage the same GPU infrastructure, removing bottlenecks, increasing performance, and simplifying clusters.

Apache Spark-accelerated end-to-end ML platform stack

NVIDIA worked with the Apache Spark community to add GPU acceleration on several leading platforms, including Google Cloud, Databricks, Cloudera and Amazon EMR, making it easy and cost-effective to launch scalable, cloud-managed Apache Spark clusters with GPU acceleration.

For its experiments comparing CPU versus GPU performance for Spark 3.0.1 on AWS EMR, the NVIDIA RAPIDS Accelerator team used 10 TB of simulated data and queries designed to mimic large-scale ETL at a retail company (similar to TPC-DS). This comparison was run both on a CPU cluster and on a GPU cluster, with 3 TB of TPC-DS data stored on AWS S3. The CPU cluster consisted of 8 instances of m5d.2xlarge as workers and 1 instance of m5d.xlarge as a master. The GPU cluster consisted of 8 instances of g4dn.2xlarge as workers, each with one NVIDIA T4 GPU (the most cost-effective GPU instance in the cloud for ML), and 1 instance of m5d.xlarge as a master. The CPU cluster costs $3.91 per hour and the GPU cluster costs $6.24 per hour.

In this experiment, the RAPIDS Accelerator team used a query similar to TPC-DS query 97. Query 97 calculates counts of promotional sales and total sales, and their ratio, from the web channel for a particular item category and month to customers in a given time zone. You can see from the Spark physical plan and DAG for query 97 shown below that every line of the physical plan has a GPU prefix attached to it, meaning that every operation of that query runs entirely on the GPU.

Spark SQL query 97 DAG
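To give a feel for the shape of such a query, here is a simplified, hypothetical promo-ratio query in Spark SQL; it is not the actual TPC-DS query 97, and the table and column names are invented for illustration:

```scala
// Hypothetical simplified query: promotional vs. total web-channel sales.
// Table and column names are illustrative, not the real TPC-DS schema.
val promoRatio = spark.sql("""
  SELECT item_category,
         SUM(CASE WHEN promo_flag = 'Y' THEN sales_amount ELSE 0 END) AS promo_sales,
         SUM(sales_amount) AS total_sales,
         SUM(CASE WHEN promo_flag = 'Y' THEN sales_amount ELSE 0 END)
           / SUM(sales_amount) AS promo_ratio
  FROM web_sales
  WHERE sale_month = '2000-12'
  GROUP BY item_category
""")
promoRatio.show()
```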

With this query running almost completely on the GPU, processing time was sped up by a factor of up to 2.6x, with 39% cost savings compared to running the job on the Spark CPU cluster. Note that there was no tuning and there were no code changes for this query.

Improvements in query time and total costs.

In addition, the NVIDIA RAPIDS Accelerator team has run queries with Spark windowing operators on EMR and seen speedups of up to 30x on GPU versus CPU on large datasets.

Deep Java Library (DJL) is a deep learning framework written in Java, supporting both training and inference. DJL is built on top of modern deep learning engines (TensorFlow, PyTorch, MXNet, etc.). It provides a viable solution for users who are interested in Scala/Java or are looking for a way to integrate DL into their Scala-based big data pipeline. DJL aims to make open-source deep learning tools accessible to developers and data engineers who primarily use Java/Scala, via familiar concepts and intuitive APIs. You can easily use DJL to train your model or deploy a model trained in Python from a variety of engines without any additional conversion.

By combining Spark 3.x, the Rapids Accelerator for Spark and DJL, users can now build an end-to-end GPU accelerated Scala-based big data + DL pipeline using Apache Spark.

Now let's walk through an example using Apache Spark 3.0 with GPUs for an image classification task. This example shows a common image classification task on Apache Spark for online retail. It can be used for content filtering, such as eliminating inappropriate images that merchants have uploaded. The full project is available in the DJL demo repository.

For full setup information, refer to the Gradle project setup. The following section highlights some key components you need to know.

First, we'll import the Spark dependencies. Spark SQL and ML libraries are used to store and process the images.

Next, we import the DJL-related dependencies. We use the DJL API and PyTorch packages, which provide the core DJL features and load a DL engine to run inference. We also leverage pytorch-native-cu101 to run on GPUs with CUDA 10.1.
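In Scala, the corresponding imports look roughly like this (a sketch; the demo repository's exact set may differ):

```scala
// Spark SQL types used for loading images and as the model input type
import org.apache.spark.sql.{Row, SparkSession}
// DJL core API: device selection, model criteria, model-zoo loading, classification output
import ai.djl.Device
import ai.djl.modality.Classifications
import ai.djl.repository.zoo.{Criteria, ModelZoo}
```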

1.2 Load model

To load a model in DJL, we provide a URL (e.g., file://, hdfs://, s3://, https://) hosting the model. The model will be downloaded and imported from that URL.

The input type here is a Row in Spark SQL. The output type is a Classification result. We also defined a Translator (not shown in this document) named MyTranslator that deals with preprocessing and post-processing work. The model we load here is a pre-trained PyTorch ResNet18 model from torchvision.
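A sketch of the loading step using DJL's Criteria builder (the model URL is a placeholder, and MyTranslator is the pre/post-processing class described above but not shown):

```scala
val criteria = Criteria.builder()
  .setTypes(classOf[Row], classOf[Classifications]) // input: Spark Row; output: classifications
  .optModelUrls("https://example.com/models/resnet18.zip") // placeholder model URL
  .optTranslator(new MyTranslator())                // pre/post-processing (not shown)
  .build()
val model = ModelZoo.loadModel(criteria)
```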

In the main function, we download images and store them in HDFS. After that, we create a SparkSession and use the built-in Spark image loading mechanism to load all images into Spark SQL. After this step, we use mapPartitions to fetch the GPU information.

As shown in the following, TaskContext.resources()("gpu") stores the GPU assigned to this partition. We can pass the GPU id to the model loader to place the model on that particular GPU. This step ensures that all GPUs on a single machine are properly used. To run inference, call predictor.predict(row).
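Putting the pieces together, a partition-level inference sketch might look like the following, with df being the image DataFrame loaded earlier (the structure follows the description above; the URL and helper names remain placeholders):

```scala
import org.apache.spark.TaskContext
import spark.implicits._ // provides the Encoder for the String output

val predictions = df.mapPartitions { rows =>
  // GPU address assigned to this task by Spark's resource scheduler
  val gpuId = TaskContext.get().resources()("gpu").addresses.head.toInt
  val criteria = Criteria.builder()
    .setTypes(classOf[Row], classOf[Classifications])
    .optModelUrls("https://example.com/models/resnet18.zip") // placeholder model URL
    .optTranslator(new MyTranslator())
    .optDevice(Device.gpu(gpuId)) // load the model onto the assigned GPU
    .build()
  val predictor = ModelZoo.loadModel(criteria).newPredictor()
  rows.map(row => predictor.predict(row).toString)
}
predictions.show()
```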

Next, we run ./gradlew jar to bundle everything we need into a single jar and run it in a Spark cluster.

With EMR release version 6.2.0 and later, you can quickly and easily create scalable and secure clusters with Apache Spark 3.x, the RAPIDS Accelerator, and NVIDIA GPU-powered Amazon EC2 instances. (To set up a cluster using the EMR console, follow the instructions in this article.)

To set up a Spark cluster using the AWS CLI, create a GPU cluster with three instances using the command below. To run the command successfully, you'll need to change myKey to your EC2 pem key name. The --region flag can also be removed if you have that preconfigured in your AWS CLI.

We use the g3s.xlarge instance type for testing purposes. You can choose from a variety of GPU instances that are available in AWS. The total run time for the cluster setup is around 10 to 15 minutes.

Now, we can run the distributed inference job on Spark. You can choose to do it on the EMR console or from the command line.

The following command tells Spark to run on a YARN cluster and to use a setup script to find GPUs on different devices. The GPU amount per task is set to 0.5, which means that two tasks share one GPU. You may also need to set the CPU number accordingly to ensure they match. For example, if you have an 8-core CPU and you set spark.task.cpus to 2, four tasks can run in parallel on a single machine.
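A sketch of such a submission (the configuration keys are Spark 3.x's standard GPU-scheduling properties; the class name, jar path, and discovery-script path are placeholders, not the demo's exact values):

```
spark-submit \
  --master yarn \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.5 \
  --conf spark.task.cpus=2 \
  --conf spark.executor.resource.gpu.discoveryScript=/usr/lib/spark/scripts/gpu/getGpusResources.sh \
  --class com.example.ImageClassificationExample \
  build/libs/image-classification-example.jar
```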

To achieve the best performance, you can set spark.task.resource.gpu.amount to 0.25, which allows four tasks to share the same GPU. This helps to maximize the performance because all cores in the GPU and CPU are used. Without a balanced setup, some cores will be in an idle state, which wastes resources.

This script takes around 4 to 6 minutes to finish, and you will get a printout inference result as output.

DL on Spark is growing rapidly with more applications and toolkits. Users can build their own DL with NVIDIA GPUs for better performance. Please check out the link below for more information about DJL and the Rapids Accelerator for Spark:

Haoxuan Wang is a data scientist and software developer at Barclays, and a community member of DJL (djl.ai). He is keen on building advanced data solutions for the bank by applying innovative ideas. His main technical interests are natural language processing, graph neural networks and distributed systems. He was awarded a master's degree (distinction) in data science from University College London (UCL) in 2019.

Qing Lan is a software development engineer who is passionate about efficient architectural design for modern software and application systems, with a focus on parallel computing and distributed system design. He currently works on deep learning acceleration and deep learning framework optimization.

Carol McDonald works in technical marketing focusing on Spark and data science. Carol has experience in many roles, including technical marketing, software architecture and development, training, technology evangelism, and developer outreach for companies including NVIDIA, Sun, and IBM. Carol writes industry architectures, best practices, patterns, prototypes, tutorials, demos, blog posts, whitepapers, and ebooks. She has traveled worldwide, speaking and giving hands-on labs, and has developed complex, mission-critical applications in the banking, health insurance, and telecom industries. Carol holds an MS in computer science from the University of Tennessee and a BS in geology from Vanderbilt University. Carol is fluent in English, French, and German.


Review: Bo Burnham's Inside is a successful depiction of a lonely mind – Los Angeles Times

So 2020 happened, and we were stuck inside for a year. Many grew and became aware of the person they were, becoming friends with themselves, while others did quite the opposite, becoming distanced from and barely acquainted with themselves.

In Bo Burnham's Inside, we reap the fruits of his mind, finding the bare bones of what it means to be a person and how that is ever-changing. Burnham manages to make even seemingly meaningless topics delve into something bigger. He sings about FaceTiming with your mom, a white woman's Instagram, and even the internet as a whole, in a carnival-barker tone.

This connects to technology, and how being indoors connects you to the outside world but makes you lonelier than ever. There's also overstimulation's lessening of the human experience.

Music, humor, and intelligence are at the forefront, but behind all of the knee-slapping content, you are left vulnerable. There's almost an inability to write about such a raw experience.

Not only were we blessed with Inside, we also got a taste of the creative process, with clips of setting up the camera and adjusting the lighting that only get more and more impressive. It is visually gorgeous.

Netflix describes Inside as a comedy special, a label that does it a disservice; the only funny thing about it is the irony of conveying such an accurate message in the totally un-"#deep" manner that Bo has joked so much about.

Bo's previous specials, Make Happy and what., each culminated in a final number diving deep into the introspection and mental instability that he seems to have perfected into longer, clearer, and less fearful exploration for much of the second half of Inside.

Inside provides a place of comfort while simultaneously giving you a panic attack as you watch. The meticulous details stand out, such as choosing the correct genre of song to go with the lyrics.

Emotionally devastating: what a privilege it was. It is frightening that a piece of art like this could go unnoticed, especially with the lack of featuring and pushing by Netflix.

What is remarkable about Bo Burnham's work in Inside is that it presents as a greatest-hits album from an artist of 20 years, all while feeling current and progressive in the same breath. To make the audience feel comfortable and familiar while challenging them with new forms and modalities is worth taking note of, and highly commendable.

So 2020 happened, and because of it, we got to see Bo Burnham live up to his potential and create a deranged masterpiece that every artist could've hoped to create in a time like this.



AI in Europe: Who’s leading the way and where is it heading? – Siliconrepublic.com

Ireland may be the big adopter of AI in the EU, but a new report from Forrester suggests Europe is still slightly behind other regions.

Artificial intelligence is often slated as the technology that will transform the way we live and do business. But some have embraced it more than others.

Among EU countries, Ireland has the highest share of businesses using AI applications.

That's according to European Commission data from 2020, which found that 23pc of enterprises in Ireland used any of these four AI applications: analysing big data internally using machine learning; analysing big data using natural language processing, generation or speech recognition; using a chatbot or virtual agent; or using service robots.

Overall, 7pc of enterprises in the EU with at least 10 people employed used at least one of these AI applications in 2020.

Behind Ireland, the countries with the widest uptake of AI tech were Malta (19pc), Finland (12pc) and Denmark (11pc). At the other end of the scale were Cyprus (3pc), Hungary (3pc), Slovenia (3pc) and Latvia (2pc).

A recent report from research and advisory company Forrester said there's a widespread perception that data privacy regulations, ethical concerns and reluctance to adopt cutting-edge tech have resulted in European companies being less advanced in terms of AI adoption than companies in other regions.

A 2020 survey it conducted with responses from data decision-makers in France, Germany and the UK confirmed that there is a lag, but the gap may not be as wide as many perceive it to be.

However, compared with people from other parts of the world, European respondents were less bullish overall about the benefits of AI, according to Forrester.

While 31pc of North American decision-makers surveyed said the benefits of AI were increased automation and improved operational effectiveness, only 28pc of European respondents said the same.

One-third of those in North America said it could also increase revenue growth and improve customer experiences, but only 27pc of those in Europe agreed.

Forrester added that while Europe produces AI excellence, it has trouble scaling start-ups.

Large European companies including Airbus, Bosch, Rolls-Royce and Siemens have been innovating with AI, and Europe has been the birthplace of start-ups such as DeepMind and Featurespace.

However, many start-ups have been acquired by companies outside of the region (with Google snapping up UK-based DeepMind, for example) or have migrated their headquarters to the US.

But the EU is keen to give AI a boost. The European Commission aims to reach an annual investment of €20bn over the course of this decade to help Europe become a global leader in this area of tech. At the same time, it is focusing on making AI ethical and human-centred.

Much like how it took the baton on data protection laws, the European Commission is hoping to set new oversight standards in a bid to create trustworthy AI.

Earlier this year, it outlined a new set of proposals that would classify different AI applications depending on their risks and implement varying degrees of restrictions.

So while other regions may be slightly ahead of Europe when it comes to AI uptake, Forrester's report said that European companies are not lagging far behind, and the bloc is certainly leading the way in terms of its focus on ethics and trustworthy AI.


Alma Allen's biomorphic sculptures have minds of their own – Wallpaper*

Alma Allen's biomorphic sculptures have minds of their own

In a bold takeover of Kasmins gallery and sculpture garden in New York, American artist Alma Allen introduces his latest series of curious creatures in bronze

If you didn't know they were rendered in static bronze, you might be forgiven for thinking that Alma Allen's sculptures were alive. Imbued with organic, surreal and creature-like characteristics, they appear to be growing or evolving; blink and you might find them somewhere else.

Allen's biomorphic sculptures can currently be found both indoors and outdoors at Kasmin, New York. In the gallery's 514 West 28th Street location, Allen is presenting more than 20 small-scale bronzes. Hyper-polished almost to the point of liquidity, these works are both lifeforms in their own right and proposals for future large-scale works. Atop Kasmin's elevated and newly rewilded urban garden, the artist's monumental outdoor sculptures can be experienced by all those who walk the adjacent New York High Line.

Allen's deep affinity with the natural world stems from a childhood spent in Utah, where proximity to the desert allowed him the chance to roam, whittle wood, and hand-carve stones that he stumbled upon. "The sculptures are often in the act of doing something: they are going away, or leaving, or interacting with something invisible," Allen has previously said. "Even though they seem static as objects, they are not static in my mind. In my mind, they are part of a much larger universe. They are interacting with each other as well, with works I made 20 years ago."

Allen begins his process by instinctively hand-sculpting intimately scaled model clay or wax forms. It's a gradual emergence, as the artist works and reworks until each has a life of its own. The artist casts and finishes the sculptures at his own foundry, on site at his studio in the hills of Tepoztlán, Mexico.

Installation view of Alma Allen's exhibition at Kasmin gallery's 514 West 28th Street location. Photography: Diego Flores

The forms and shapes of Allen's work are only half the story; a great deal exists on the surface. The artist's expressive and tactile finishes involve welding smaller pieces together, brazing, polishing, and developing chemical patinas until surfaces almost resemble paintings, or even rippling landscapes.

Kasmin is in the process of imagining new ways to bring Allen's sculpture to the public. The gallery is partnering with Membit, an augmented reality platform, to create an "art anywhere" experience. Through this, users are able to introduce a 3D image of one of Allen's sculptures into their own environment, whether at home, in a local wilderness, or in a public space.

Here, small-scale meets monumental, exhibiting the versatility and ambition of Allen's work. Whether occupying the clean white walls of the gallery or reaching for the skyline in Kasmin's urban garden, Allen's sculptures feel very much at home.

