Category Archives: Deep Mind

Terence Crawford has next foe in mind after impressive knockout win – New York Post

Terence Crawford's latest opponent had not only never been knocked down, but Crawford couldn't recall seeing him even hurt.

Then again, he'd never been in a ring with a fighter like Crawford before.

Crawford dropped Egidijus Kavaliauskas three times before stopping him in the ninth round Saturday night to remain unbeaten and defend his welterweight title at Madison Square Garden.

"I wanted to give the crowd a knockout," Crawford said. "When I started letting my hands go, I started landing more fatal shots."

Crawford knocked down the challenger once in the seventh round and twice more in the ninth before referee Ricky Gonzalez stopped it at 44 seconds of the round.

Crawford (36-0, 27 KOs) absorbed perhaps more shots than usual but seemed to enjoy getting to show he has power, too, letting out a big smile as Kavaliauskas returned to his corner looking frustrated after one round late in the fight.

"I thought I had to entertain you all for a little bit," Crawford said. "He's a strong fighter, durable, and I thought I'd give the crowd something to cheer for."

Kavaliauskas (21-1-1), a Lithuanian who was the mandatory challenger for Crawford's WBO belt, had some good moments in the first few rounds before Crawford took control midway through the fight and then poured it on late.

Crawford fought cautiously at the outset, and Kavaliauskas showed why there was reason to when he landed a big right early in the third round and then a couple more punches inside as Crawford tried to hold on. Crawford ended up going to a knee, but Kavaliauskas wasn't credited with a knockdown, the referee apparently determining Crawford had been pushed down.

Crawford said afterward he wasn't hurt by that shot, and it wasn't long before he was the one doing more damage.

Kavaliauskas kept throwing big punches that drove Crawford backward when they landed, but Crawford used his speed advantage to slip out of the way of many of them while landing his own combinations.

Crawford took a hard shot early in the seventh but then began answering and finally caught Kavaliauskas with a looping right near the ear that sent him to the canvas.

Crawford finished it two rounds later, first using a three-punch combination to set up a right uppercut that sent Kavaliauskas to the canvas. He got up but Crawford then threw a right hook that returned the two-time Olympian to the canvas and the fight was immediately waved off.

The 32-year-old Crawford bristled this week when asked if getting in tougher fights would earn him extra appreciation, saying all that mattered was winning. But this fight certainly appeared harder than the skilled Nebraska native's first three after moving up to welterweight, all stoppages, after he won all four major belts at 140.

He's still searching for better opposition in the deep 147-pound division, and promoter Bob Arum indicated Crawford may look next to veteran Shawn Porter, who is coming off a competitive loss to Errol Spence Jr. in a unification bout in September.

A Crawford-Spence bout would likely be the most attractive possible, but Spence was injured in a car accident and it's unknown when he can fight again. That could leave Porter as the next choice.

"Porter is the next best guy," Arum said. "He proved himself with Spence."

Crawford said he's ready for whichever fighter is next.

"I'll fight anybody. I've been saying that for I don't know how long," Crawford said.

Earlier, Teofimo Lopez won a lightweight belt with a second-round stoppage of Richard Commey, and Michael Conlan beat Olympic rival Vladimir Nikitin.

Lopez (15-0, 12 KOs) was spectacular in his first title fight, wobbling Commey with a left hand early in the second round and then flooring him with a hard right hand. He finished the fight with a barrage of punches in the corner and perhaps next moves on to a 135-pound unification bout with two-time Olympic gold medalist Vasiliy Lomachenko.

Conlan (13-0, 7 KOs) had lost to Nikitin twice as an amateur, including in the 2016 Olympic quarterfinals. He blasted the international boxing federation for being corrupt after the decision was announced and extended his middle finger to the judges at ringside.

He had also lost a close fight to Nikitin in 2013 but the judges saw this one as no contest, giving Conlan a lopsided decision by scores of 100-90, 99-91 and 98-92.

What Are Normalising Flows And Why Should We Care – Analytics India Magazine

Machine learning developers and researchers are constantly in pursuit of a well-defined probabilistic model that correctly describes the processes that produce data. A central need across machine learning is the tools and theory to build better-specified models that lead to even better insights into data.

One such attempt has been made by Danilo Rezende in the form of normalising flows. Today, building probability distributions as normalising flows is an active area of ML research.

Normalizing flows operate by pushing an initial density through a series of transformations to produce a richer, more multimodal distribution, like a fluid flowing through a set of tubes. Flows can also be used for joint generative and predictive modelling by making them the core component of a hybrid model.

Normalizing flows provide a general way of constructing flexible probability distributions over continuous random variables.

Let x be a D-dimensional real vector, and suppose we would like to define a joint distribution over x. The main idea of flow-based modelling is to express x as a transformation T of a real vector u sampled from a simpler base distribution.
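The change-of-variables idea behind flow-based models can be sketched in a few lines. The following is a minimal, hypothetical illustration (the affine choice T(u) = a·u + b and the names `forward`, `inverse`, `flow_log_prob` are our own, not from any particular library): the density of x = T(u) is the base density at T⁻¹(x), corrected by the Jacobian of the transformation.

```python
import math

# Base distribution: log-density of a standard normal, log p_u(u)
def base_log_prob(u):
    return -0.5 * (u * u + math.log(2.0 * math.pi))

# A toy invertible transformation T(u) = a*u + b (a != 0),
# chosen purely for illustration
A, B = 2.0, 1.0

def forward(u):        # T
    return A * u + B

def inverse(x):        # T^{-1}
    return (x - B) / A

# Change of variables:
#   log p_x(x) = log p_u(T^{-1}(x)) - log |dT/du|
def flow_log_prob(x):
    return base_log_prob(inverse(x)) - math.log(abs(A))

# u = 0 maps to x = 1, so the density there is the base density at 0,
# shrunk by the stretch factor |a| = 2
print(round(flow_log_prob(1.0), 4))  # -1.6121
```

In a real flow, T would be a learned neural transformation (and u a D-dimensional vector), but the bookkeeping is the same: evaluate the base density at the inverse image and subtract the log-determinant of the Jacobian.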

According to the Google Brain team, the key idea behind normalising flows can be summarised as follows:

The flow can be thought of as an architecture whose last layer is a (generalised) linear model operating on the features, and these features' distribution can be viewed as a regulariser on the feature space. In turn, flows are effective in any application requiring a probabilistic model with either of those capabilities.

Normalizing flows, due to their ability to be expressive while still allowing for exact likelihood calculations, are often used for probabilistic modelling of data. They have two primitive operations: density calculation and sampling.

For example, invertible ResNets have been explored for classification with residual flows and have seen a first big improvement. The improvement can be something as significant as a reduction of the model's memory footprint, by obviating the need to store activations for backpropagation.

This achievement may help one understand to what degree discarding information is crucial to deep learning's success.

Normalizing flows allow us to control the complexity of the posterior at run-time by simply increasing the flow length of the sequence.
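As a rough sketch of what increasing the flow length means, one can compose several invertible layers; each layer then contributes one Jacobian term to the log-density. The classes below are illustrative stand-ins (not any library's API), using 1-D affine layers over a standard-normal base:

```python
import math
import random

class AffineLayer:
    """One invertible layer x = a*u + b; its log |det Jacobian| is log|a|."""
    def __init__(self, a, b):
        assert a != 0.0
        self.a, self.b = a, b
    def forward(self, u):
        return self.a * u + self.b
    def inverse(self, x):
        return (x - self.b) / self.a
    def log_abs_det(self):
        return math.log(abs(self.a))

class Flow:
    """A composition of invertible layers over a standard-normal base."""
    def __init__(self, layers):
        self.layers = layers
    def sample(self, rng):
        x = rng.gauss(0.0, 1.0)        # draw from the base distribution
        for layer in self.layers:      # push it through each transformation
            x = layer.forward(x)
        return x
    def log_prob(self, x):
        log_det = 0.0
        for layer in reversed(self.layers):   # invert in reverse order
            log_det += layer.log_abs_det()
            x = layer.inverse(x)
        # log p_x(x) = log p_u(u) - sum over layers of log |det J|
        return -0.5 * (x * x + math.log(2.0 * math.pi)) - log_det

# Two layers compose to x = 2u + 3; a longer flow just appends more layers
flow = Flow([AffineLayer(2.0, 0.0), AffineLayer(1.0, 3.0)])
print(round(flow.log_prob(3.0), 4))  # -1.6121
```

A composition of affine layers is of course still affine, so in practice each layer would be a neural transformation; the point of the sketch is the two primitive operations, sampling (forward through every layer) and density calculation (inverse through every layer, accumulating log-determinants).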

Rippel and Adams (2013) were the first to recognise that parameterizing flows with deep neural networks could result in quite general and expressive distribution classes.

Like with deep neural networks, normalizing the intermediate representations is crucial for maintaining stable gradients throughout the flow.

Normalizing flows can also be integrated into traditional Markov chain Monte Carlo (MCMC) sampling by using the flow to reparameterize the target distribution. Since the efficiency of Monte Carlo methods depends drastically on the target distribution, normalizing flows can make it easier to explore.

Normalizing flows can be thought of as implementing a generalised reparameterization trick, as they leverage a transformation of a fixed distribution to draw samples from a distribution of interest.

Generative modelling, for instance, has been a popular application of flows in machine learning.

In a paper titled "Normalizing Flows for Probabilistic Modeling and Inference," researchers from DeepMind investigated the state of flow models in detail.

They have listed the kinds of flow models that have been in use, their evolution, and their significance in domains like reinforcement learning, imitation learning, and image, audio and text classification, among many others.

The authors also write that while "many flow designs and specific implementations will inevitably become out-of-date as work on normalizing flows continues, we have attempted to isolate foundational ideas that will continue to guide the field well into the future."

The large-scale adoption of normalising flows in place of conventional probabilistic models is advantageous because, unlike other probabilistic models that require approximate inference as they scale, flows usually admit analytical calculations and exact sampling even in high dimensions.

However, the obstacles that are currently preventing wider application of normalizing flows are similar to those faced by any probabilistic models. With the way research is accelerating, the team at DeepMind are optimistic about the future of flow models.

Artificial Intelligence Job Demand Could Live Up to Hype – Dice Insights

Anyone who's worked in technology knows that certain buzzwords rip through the industry every few years, sending executives into a fever. Artificial intelligence, Big Data, Hadoop, and Web 2.0 (please, let's do our best to forget that last one) are just a few of the more notable. But which ones will translate into actual opportunities and jobs for all the technologists out there?

If the hype doesn't match the actual industry impact, then many thousands of workers will have pursued a particular technology or discipline for nothing. But if the hype is justified, then folks can build satisfying careers (and make a lot of money). The stakes couldn't be higher.

As we head into 2020, one thing is pretty clear: Artificial intelligence (A.I.) seems like one of those much-hyped terms that might actually translate into a really robust sub-industry. For example, LinkedIn's 2020 Emerging Jobs Report (PDF) puts "Artificial Intelligence Specialist" as its number-one emerging job, with 74 percent annual growth over the past four years.

That outpaced robotics engineer (40 percent annual growth during the same four-year period), data scientist (37 percent annual growth), full stack engineer (35 percent annual growth), and site reliability engineer (34 percent growth). (In order to arrive at its conclusions, LinkedIn crunched data from all of its public profiles over the past five years.)

Sounds pretty solid, right? Even so, the A.I. industry comes with a relatively high bar to entry, which could restrict the pipeline of talent for the next few years. Employers want A.I. experts skilled in machine learning, deep learning, Python, natural language processing, and platforms such as TensorFlow. Those are skills that take quite some time to learn, to put it mildly, and demand a pretty strong background in programming and mathematics.

There's also the issue of company buy-in. Executives love buzzwords, but they often balk at the cost of spinning up the related technology. At this year's Wall Street Journal Future of Everything Festival, Arvind Krishna, IBM's senior vice president of cloud and cognitive software, suggested that projects tend to die once companies realize they'll need to spend a lot of time prepping the necessary datasets: "And so you run out of patience along the way, because you spend your first year just collecting and cleansing the data."

Plus, existing A.I. initiatives have a mixed track record so far. Uber's attempt to build a self-driving car platform has hit some snags, to put it mildly; IBM's much-hyped Watson platform has failed to meet some hospitals' expectations for successful healthcare data analysis; and some analysts and pundits think that even well-monetized projects such as Google's DeepMind haven't either scaled or commercialized.

Nonetheless, the future seems pretty bright for artificial intelligence and machine-learning initiatives. Even if some high-profile projects crash and burn, it's clear from the data that companies are rapidly hiring various types of employees with A.I. skills clusters. According to Burning Glass, which analyzes millions of job postings from across the U.S., jobs that involve A.I. are projected to grow 40.1 percent over the next decade; the median salary for these positions is $105,007 (for those with a PhD, it drifts up to $112,300).

Positions associated with A.I. skills clusters include:

If you work in any of these roles, A.I. and machine learning tools and techniques will likely become a part of your workflow over the next several years. That means it's important to learn as much as possible about A.I. Fortunately, there are a lot of resources online that can help you out, including a Google crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. HackerNoon also offers an interesting breakdown of machine learning and artificial intelligence.

Google Isn’t Looking To Revolutionize Health Care, It Just Wants To Improve On The Status Quo – Newsweek

To appreciate how much Alphabet, the parent company of Google, is betting on health care, here are a few of the initiatives and subsidiaries the company has formed or acquired: Verily (new technologies for diagnosing, managing and treating diseases), Google Fit (tracking and encouraging healthier lifestyles), Calico (research aimed at treating and even slowing aging), DeepMind Health (applying AI to health and health care), Senosis (turning smartphones into health monitors) and the recently acquired Fitbit (activity tracking).

While these efforts have to a large extent worked independently of one another, Google seems determined to unify them under its "Google Health" initiative. If the company succeeds in creating an interwoven set of software, data and hardware tools and services, it could become one of the most influential players in health care. Unlike Amazon, which seems aimed at creating an alternative to what today's health care organizations offer, Google appears determined to become an essential supplier to those organizations.

Google's most visible efforts so far have centered on using artificial intelligence and other advanced data-analysis technologies to organize and make use of patient data at large health care organizations. Virtually everyone in health care agrees that somewhere hidden away in patients' records are new insights into how to catch, manage and treat illness more effectively and at lower cost. "Google is focused on getting the value out of what's in health care organizations' data," says Maia Hightower, the University of Utah's chief medical information officer.

Google has already gotten into hot water over its contracts to handle patient data for hospitals. Although such sharing doesn't appear to violate patient confidentiality regulations (health care organizations are allowed to pass data to the health care suppliers they work with, as long as they don't further share it), many people object to their personal medical information ending up in the hands of a company known for gathering data on its users and re-selling it to others. Some patients and privacy advocacy groups charge that giving Google access to medical records is a violation of privacy.

Google's ventures in health monitoring have been less controversial. Verily, for instance, makes devices that monitor the blood-sugar levels of people with diabetes, and it embeds tiny electronic circuits in contact lenses, to track eye disease, and even in baby diapers, to alert parents of the need for a change. Fitbit and Google Fit fall into this category as well.

One of Google's biggest and possibly most ambitious health-related efforts is simply that of returning better information when someone Googles a health concern. That may sound like old hat, but Google wants such searches to provide a critical first step in getting a diagnosis, finding the right medical help and learning about good self-care practices. Compared to the hodge-podge of good and dubious information that pops up in health searches now, that would be a big step forward.

Opinion | Frankenstein monsters will not be taking our jobs anytime soon – Livemint

Herbert Simon of Carnegie Mellon, John McCarthy, and others are credited with having founded the field of artificial intelligence (AI) on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". According to computer scientists Stuart Russell and Peter Norvig, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". The goal of "machine learning" is that an ideal AI computer program should have the ability to change itself to take actions that maximize its chance of success at performing a task.

Sharp minds are at work at places such as Alphabet's (Google's) DeepMind, who claim that their final goal is to reach "artificial general intelligence", or AGI. Others talk about the concept of a "singularity", the moment in time when AGI becomes smart enough to be an intelligence that can make itself better without human intervention. In other words, machine learning will completely take control away from its human creators.

While I defer to these great minds, I would still argue that both those concepts are hollow. The danger lies in assuming that AGI is human intelligence, since human intelligence is not "general". Learning is not intelligence. To expect that learning machines can make themselves more intelligent (as opposed to more efficient at performing tasks) is far-fetched.

In my opinion, all that AI has been able to do until now is take algorithmic concepts that have long been known, and efficiently apply these to large volumes of data. To be specific, these fall in the areas of pattern-matching and predictive analyses. Hence the term "data scientist", and the huge number of job openings for those who understand some of the basic concepts of statistics, such as regression analysis that checks how variables are linked, the Box-Jenkins model that studies data in a time series, and the Bayes equation used for estimating the probability of various outcomes.

Philosophers have held that while tasks can be automated, there is one thing that a soulless machine can never do, and that is having living "consciousness". If you doubt this, then simply ask yourself who is listening to these words as you read them to yourself. Is it your human "learning" neurons, or some other, larger field of consciousness into which words and thoughts like these come and go and are understood? If a voice arises in your head that disagrees with what you are reading, who is it that is aware of the voice?

It would behoove us then to better understand where consciousness exists. I recognize that I am now venturing onto thin ice. There is no consensus between philosophers and scientists or indeed within academia on what consciousness is. The one thing everyone does agree on is that the phenomenon does, in fact, exist. Nonetheless, in a recent article in Scientific American, Christof Koch cites neurological studies that seem to have established that conscious awareness exists in the highly integrated and complex cerebral cortex in our brains, and is not to be found in the more primitive cerebellum, which governs our motor activities. Mankind has the largest cerebral cortex relative to other forms of life on earth. According to Koch, very little happens to consciousness if a cerebellum has been operated upon by a surgeon. This is because the cerebellum, unlike the cerebrum, is exceedingly uniform and parallel.

So far, so good. The finding relates to many philosophical schools of thought that say that man is the most evolved and sentient of all beings, at least on earth, and is the only animal capable of realizing that he, in fact, possesses the faculty of consciousness. Eastern philosophers would have it that the recognition of this consciousness as being both a limited aspect as well as the full expression of an all-pervading universal consciousness is the (spiritual) goal of life.

Switching to consciousness in information technology, there are two rival schools of thought, one called the global neuronal workspace (GNW), and the other called integrated information theory (IIT), posited by Koch and his collaborators. GNW holds that consciousness arises from information being processed in a specific manner. It says that AI programs process a sparse, shared repository of information; all the while, this information is also concurrently shared by a host of subsidiary processes in the system. According to GNW, once such a sparse set of information leaves the AI program's processing space and is replaced by another set of sparse information, the new information can also simultaneously be broadcast to the subsidiary processes, which can suo motu make changes to handle their own subsidiary tasks. It is at this point, according to GNW, that the information becomes "conscious".

In contrast, IIT has an "outside-in" view of consciousness, since it starts at the experience and works backwards from there to find the conscious "experiencer". Each experience is unique and exists only for the experiencer. IIT theorists postulate that any complex and interconnected mechanism whose structure encodes a set of cause-and-effect relationships will have these properties, and so will have some level of consciousness. In other words, it will feel like something from the inside. However, if the mechanism is anything like our cerebellum, it would lack integration and complexity, and will not be aware of anything. IIT says that consciousness is an intrinsic causal power associated with complex mechanisms such as the cerebral cortex. Programming for consciousness will never create a conscious computer.

So, consciousness cannot be computed; it has to be built into the structure of the system. This will take decades, as we still need to observe and probe the vast groups of highly heterogeneous and dissimilar neurons that make up the cerebral cortex of our brain to further isolate and understand the precise signifiers of consciousness. It will be quite a while yet before we have a Frankenstein's monster to deal with.

Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India

DeepMind co-founder moves to Google as the AI lab positions itself for the future – The Verge

The personnel changes at Alphabet continue, this time with Mustafa Suleyman, one of the three co-founders of the company's influential AI lab DeepMind, moving to Google.

Suleyman announced the news on Twitter, saying that after "a wonderful decade" at DeepMind, he would be joining Google to work with the company's head of AI, Jeff Dean, and its chief legal officer, Kent Walker. The exact details of Suleyman's new role are unclear, but a representative for the company told The Verge it would involve work on AI policy.

The move is notable, though, as it was reported earlier this year that Suleyman had been placed on leave from DeepMind. (DeepMind disputed these reports, saying it was a mutual decision intended to give Suleyman "time out ... after 10 hectic years.") Some speculated that Suleyman's move was the fallout of reported tensions between DeepMind and Google, as the former struggled to commercialize its technology.

Although DeepMind has achieved a number of research milestones in the AI world, most notably the success of its AlphaGo program in 2016, the lab has also recorded significant financial losses. In 2018, it doubled its revenues to £102.8 million ($135 million), but its expenditures also rose to £470.2 million ($618 million) and it recorded a total debt of more than £1 billion ($1.3 billion).

Suleyman, who founded DeepMind in 2010 along with Demis Hassabis (now CEO) and Shane Legg (now chief scientist), had spearheaded the company's health team, which offered the lab one avenue to monetize its research. DeepMind's engineers designed a number of health algorithms that broke new ground, and its team built an assistant app for nurses and doctors that promised to save time and money. But the venture was also criticized strongly for its mishandling of UK medical data, and in 2018 it was absorbed into Google Health.

In addition to this, Suleyman also led the DeepMind for Google team, which aimed to put the company's research to practical use in Google products, delivering tangible commercial benefits like improved battery life on Android devices and a more natural voice for Google Assistant.

It's difficult to parse the meaning behind Suleyman's move to Google without more details on his new role, but it's clear that DeepMind is still working out how to position itself for the future, as highlighted by the publication of a blog post by Hassabis timed with the announcement of Suleyman's departure.

In the post, Hassabis charts the journey of DeepMind from unlikely start-up to major scientific organization. And although he highlights collaborations the lab has made with other parts of Alphabet, he ultimately focuses on the fundamental breakthroughs and grand challenges that DeepMind hopes to tackle, most notably using artificial intelligence to augment scientific research. It seems clear that long-term research, not short-term profits, is still the priority for DeepMind's scientists.

Stopping a Mars mission from messing with the mind – Axios

What's happening: IBM, Airbus and the German Aerospace Center just launched CIMON-2, an upgraded robotic assistant that can read a person's tone of voice, to the International Space Station.

Researchers are also studying how the brain and body might change during long trips in space, affecting a person's cognition.

The big picture: "From Mars, the Earth is seen as a dot, basically a small dot; greenish, blue dot. So everything that is important to you, your history, your family, your culture, your country, becomes an insignificant point in the universe," University of California, San Francisco psychiatrist Nick Kanas told Axios in August.

What's next: NASA may consider using its Gateway, the small space station the agency plans to place in orbit around the Moon in the coming years, as a simulation for a Mars mission in space.

Go deeper: Where to hunt for life on Mars

Editor's note: This piece was corrected to show Nick Kanas is a psychiatrist (not a psychologist).

Feldman: Impeachment articles are ‘high crimes’ Founders had in mind | TheHill – The Hill

The articles of impeachment under consideration by the House clearly allege high crimes and misdemeanors under the Constitution. Apart from the factual truth of allegations, the articles comport with the definition of impeachable conduct. Start with abuse of office for personal advantage or gain, directly aimed at distorting the electoral process. For the Framers, this conduct was the classic form of a high crime or misdemeanor.

Their words demonstrate as much. At the Constitutional Convention in 1787, George Mason of Virginia warned of the danger that presidential electors could be corrupted by the candidates. Corruption meant the conferral of improper benefits for personal gain. James Madison worried about the presidency being used for a scheme of "peculation", in other words, self-dealing or embezzlement for personal advantage.

What is more, the two impeachment trials best known to the Framers both involved abuse of office for improper personal gain. Warren Hastings, who was the former British governor general of Bengal, was undergoing his impeachment even as the Framers met in Philadelphia. George Mason referred to Hastings at the convention. Hastings was impeached for, among other things, corruption, peculation, and extortion.

The basic claim against him was that he had solicited and received bribes and gifts from people in Bengal while in office as governor general. Lord Macclesfield, who was the treasurer of England, was impeached in 1725, for taking payments to sell offices. The articles of impeachment charged Lord Macclesfield with seeking personal gain under color of office, that is, while he occupied the official role of the treasurer of England.

The takeaway from this historical evidence is that abuse of office for personal gain is the archetype of a high crime and misdemeanor under the Constitution. Thus, if the facts show that President Trump sought his own personal gain when he solicited Ukrainian President Volodymyr Zelensky to announce investigations of the Bidens and of CrowdStrike, then he has undoubtedly committed an impeachable offense.

As for obstruction of Congress, the violation lies in President Trump issuing a blanket denial of the authority of Congress to engage in its impeachment investigation and his direction to all executive branch officials not to appear before Congress or cooperate. The precedent for considering this conduct lies in the article of impeachment adopted by the House Judiciary Committee charging Richard Nixon with obstruction of Congress. However, even President Nixon did not entirely stonewall Congress. In fact, he partially cooperated with the congressional inquiry. The charge against President Trump is for conduct that exceeds that of President Nixon when it comes to obstruction of Congress.

Under constitutional logic, the basis for this article of impeachment is fundamentally about the separation of powers. The Constitution gives the House of Representatives the sole power of impeachment. That means the House holds the power and authority to oversee the conduct of the president to determine whether or not he should be impeached. Thus, by issuing a blanket refusal to cooperate in the impeachment investigation, President Trump denied the House its constitutional authority.

The only remedy under the Constitution for this denial of congressional authority is impeachment. Consider that the executive branch cannot be responsible for presidential oversight. Consider further that the judiciary likely could not compel presidential participation in an impeachment inquiry, since impeachment is a power conferred on Congress, not the judiciary. It follows that impeachment itself is the clearly defined remedy for presidential refusal of the granted impeachment authority.

If the president had the authority to reject an impeachment inquiry, he would literally be above the law. He would not be subject to control by the other two branches if he committed wrongdoing. The name the Framers had for such an unimpeachable officer was a monarch. Obstruction of Congress is impeachable under the Constitution because it undercuts the basic structure of democracy that is founded in this country.

Noah Feldman is the Felix Frankfurter Professor at Harvard Law School. He testified before the House Judiciary Committee on the impeachment of President Trump this month. He is a columnist for Bloomberg Opinion and hosts the Deep Background podcast. He also served as a law clerk for Supreme Court Justice David Souter and is the author of numerous books including The Three Lives of James Madison: Genius, Partisan, President.

Read more:

Feldman: Impeachment articles are 'high crimes' Founders had in mind | TheHill - The Hill

For the Holidays, the Gift of Self-Care – The New York Times

Most of us already know that self-care is good for us. Research shows that people who practice self-care have better quality of life, are admitted less frequently to a hospital, and live longer than those who report poor self-care.

While self-care is a simple concept, it can be remarkably difficult to enact. It may feel selfish or too time-consuming to focus on your own needs, and many of us don't know where or how to start. Haemin Sunim suggests a simple five-step plan to give yourself the gift of self-care this holiday season.

Start by just taking a deep breath. Become mindful of your breathing. You'll notice that when you begin, your breathing is shorter and more shallow, but as you continue, your breathing becomes deeper. Take just a few minutes each day to focus on your breathing. "As my breathing becomes much deeper and I'm paying attention to it, I feel much more centered and calm," Haemin Sunim said. "I feel I can manage whatever is happening right now."

Acceptance of ourselves, our feelings and of life's imperfections is a common theme in Love for Imperfect Things. The path to self-care starts with acceptance, especially of our struggles. "If we accept the struggling self, our state of mind will soon undergo a change," Haemin Sunim writes. "When we regard our difficult emotions as a problem and try to overcome them, we only struggle more. In contrast, when we accept them, strangely enough our mind stops struggling and suddenly grows quiet." Rather than trying to change or control difficult emotions from the inside, allow them to be there, and your mind will rest.

Begin to practice acceptance through a simple writing exercise. Write down the situation you must accept and all that you are feeling. Write down the things in your life that are weighing on you, and the things you need to do. "Rather than trying to carry those heavy burdens in your heart or your head, you see clearly on paper what it is you need to do," Haemin Sunim said. Whether the issue is work, family demands or holiday stress, the goal is to leave it all on the paper. Now go to bed, and when you wake up, choose the easiest task on the list to complete. "In the morning, rather than resisting, I will simply do the easiest thing I can do from the list," Haemin Sunim said. "Once I finish the easiest task, it's much easier to work on the second."

More here:

For the Holidays, the Gift of Self-Care - The New York Times

AI Index 2019 assesses global AI research, investment, and impact – VentureBeat

Leaders in the AI community came together to release the 2019 AI Index report today, an annual attempt to examine the biggest trends shaping the AI industry, breakthrough research, and AI's impact on society.

It also examines trends like AI hiring practices, private investment, AI research contributions by nation, researchers leaving academia for industry, and how much of a role AI plays in specific industries. The report also notes strides in reducing the time and computing cost required to train AI systems, two of the biggest hindrances to AI adoption.

"In a year and a half, the time required to train a large image classification system on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds in July 2019," the report reads.
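Taken at face value, the figures quoted above imply a speedup of more than two orders of magnitude. A quick back-of-the-envelope check, using only the numbers from the report:

```python
# Rough speedup implied by the report's training-time figures.
# Illustrative arithmetic only, using the two numbers quoted above.
oct_2017_seconds = 3 * 60 * 60   # about three hours in October 2017
jul_2019_seconds = 88            # about 88 seconds in July 2019

speedup = oct_2017_seconds / jul_2019_seconds
print(f"~{speedup:.0f}x faster")  # → ~123x faster
```

The exact factor depends on the model and hardware benchmarked, which the excerpt does not specify; the point is simply the order of magnitude.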


The report is compiled by the Stanford Human-Centered AI Institute in collaboration with people from OpenAI. It originated in 2016 as part of AI 100, a century-long Stanford study of AI's progress and impact.

"What we set out to do was to be religious about the quality and objectivity of the data," Stanford University professor emeritus and steering committee chair Yoav Shoham told VentureBeat in a phone interview.

Shoham has been on the AI Index steering committee since the beginning and acted as chair of a group that put the report together. Others include MIT economist Erik Brynjolfsson, Partnership on AI executive director Terah Lyons, and others from SRI International, Harvard University, OpenAI, and the McKinsey Global Institute.

The work is intended to help the general public understand progress in the field and to inform policymakers and business decision makers about how their countries rank compared with other nations.

Now in its third year, the report has three times more data sources than at its launch, authors told VentureBeat, and for the first time comes with a Global AI Vibrancy tool, a way to compare countries across 34 axes.

Shoham called it premature to make national AI rankings, as some previous works have done.

"It's tempting to just do a ranking of countries, just measure some things, add a bunch of numbers, and say, you know, U.S. is number one and China is number two, and what have you," he said. "We didn't want to do that because when you do that, you distort things, and there's so many dimensions you could look at. And eventually, it's a good idea to have something like a ranking, but we think it's way premature to do it."

The Global AI Vibrancy tool offers the choice of measuring by overall numbers as well as by per capita trends, which helps surface hot spots such as Israel, which produces more per capita deep learning research than any other country, or advanced AI leaders like Finland and Singapore.
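The per-capita normalization behind that distinction is easy to illustrate. A minimal sketch, using real country names but invented paper counts and populations (these are not the report's figures):

```python
# Hypothetical paper counts and populations (in millions) -- invented
# numbers purely to illustrate per-capita normalization, not report data.
papers = {"US": 25000, "Israel": 1200, "Finland": 600}
population_m = {"US": 330, "Israel": 9, "Finland": 5.5}

# Papers per million residents for each country.
per_capita = {c: papers[c] / population_m[c] for c in papers}

# Rank countries by per-capita output, highest first.
ranking = sorted(per_capita, key=per_capita.get, reverse=True)
print(ranking)  # → ['Israel', 'Finland', 'US']
```

Ranked by raw totals, the US would come first; normalizing by population is what lets small, research-dense countries like Israel lead, which is the comparison the Vibrancy tool makes possible.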

Earlier this year a consultancy firm working with the United Nations determined roughly 30 nations currently have national AI strategies.

For example, according to Elsevier's Scopus, which looks at publication rates for repositories like arXiv, Europe produces more AI research papers than any other part of the world, but Israel has the highest per capita deep learning research and the United States produces the most-cited AI research.

Corporate or industry affiliation with AI research is growing, and is most likely to occur in U.S., China, Japan, France, Germany, and the U.K.

"Ten years ago, 20 years ago, all innovation happened in academia, and then industry picked up bits and pieces of it, perfected it and commercialized it. That's no longer true. The lines are blurred and people cross over," Shoham said. "I think the leading academic institutions are coming to terms that this is the new normal."

Though 60% of PhD candidates go to industry over academia today, compared with 20% in 2004, academic research still outpunches government and corporate papers: it makes up 92% of AI publications from China, 90% from Europe, and 85% from the U.S., according to the report.

The report also assesses progress in benchmarks and methods used to track AI across disciplines such as image classification, translation, and event recognition in videos (via benchmarks like ActivityNet).

In some regards, Shoham says, progress results are mixed, as some AI systems that achieve high scores on a benchmark may prove more brittle than those results indicate.

Shoham looks to conversational AI, his field of research, for an example. Some systems may perform well on a benchmark like Stanford's SQuAD question-answering test but appear to be overfit to narrow tasks.

"The thing is these are highly specialized tasks and domains, and as soon as you go out of domain, the performance drops dramatically and the committee knows it," Shoham said. "There's a lot to be excited about genuinely, including all these systems that I mentioned, but we're quite far away from human-level understanding of language right now. So we try to be nuanced about that in the report."

The report also cites instances of human-level performance by AI systems, such as DeepMind's AlphaStar beating a human in StarCraft II and the detection of diabetic retinopathy in images of eyes using deep learning.

Here is the original post:

AI Index 2019 assesses global AI research, investment, and impact - VentureBeat