Category Archives: Machine Learning

Intel + Cornell Pioneering Work in the Science of Smell – insideBIGDATA

Nature Machine Intelligence published a joint paper from researchers at Intel Labs and Cornell University demonstrating the ability of Intel's neuromorphic test chip, Loihi, to learn and recognize 10 hazardous chemicals, even in the presence of significant noise and occlusion. The work demonstrates how neuromorphic computing could be used to detect smells that are precursors to explosives, narcotics and more.

Loihi learned each new odor from a single example without disrupting the previously learned smells, requiring up to 3000x fewer training samples per class compared to a deep learning solution and demonstrating superior recognition accuracy. The research shows how the self-learning, low-power, and brain-like properties of neuromorphic chips combined with algorithms derived from neuroscience could be the answer to creating electronic nose systems that recognize odors under real-world conditions more effectively than conventional solutions.

"We are developing neural algorithms on Loihi that mimic what happens in your brain when you smell something," said Nabil Imam, senior research scientist in Intel's Neuromorphic Computing Lab. "This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi's potential to provide important sensing capabilities that could benefit various industries."

Intel Labs is driving computer-science research that contributes to a third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.



Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, the marine insurance industry is the first data science profession to try to predict accidents and estimate future risk. Yet the old ways no longer work; new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour such as location, speed, maps and weather can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.
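
As a rough illustration of the kind of model explanation mentioned above, the sketch below applies SHAP to a toy accident-risk classifier. The feature names, data, and model are invented for illustration and are not Windward's actual pipeline.

```python
# Illustrative sketch (not Windward's pipeline): explaining a toy
# ship-accident risk model with SHAP. Features and labels are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "speed_knots": rng.uniform(0, 25, 500),
    "draught_change_m": rng.normal(0, 1, 500),
    "hours_in_storm": rng.exponential(2, 500),
    "port_calls_90d": rng.integers(0, 30, 500),
})
# Toy label: risk rises with storm exposure and erratic draught changes
y = (0.3 * X["hours_in_storm"] + np.abs(X["draught_change_m"])
     + rng.normal(0, 1, 500) > 2.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global view of which features drive risk
```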

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc. in Bioinformatics. Yonit also holds a BSc. in computer science and biology from Tel Aviv University.


Return On Artificial Intelligence: The Challenge And The Opportunity – Forbes

Moving up the charts with AI

There is increasing awareness that the greatest problems with artificial intelligence are not primarily technical, but rather concern how to achieve value from the technology. This was already a growing problem in the booming economy of the last several years, and it is a much more pressing issue in the current pandemic-driven recessionary economic climate.

Older AI technologies like natural language processing, and newer ones like deep learning, work well for the most part and are capable of providing considerable value to organizations that implement them. The challenges are with large-scale implementation and deployment of AI, which are necessary to achieve value. There is substantial evidence of this in surveys.

In an MIT Sloan Management Review/BCG survey, seven out of 10 companies surveyed report minimal or no impact from AI so far. Among the 90% of companies that have made some investment in AI, fewer than 2 out of 5 report business gains from AI in the past three years. This number improves to 3 out of 5 when we include companies that have made significant investments in AI. Even so, this means 40% of organizations making significant investments in AI do not report business gains from AI.

NewVantage Partners 2019 Big Data and AI Executive survey: Firms report ongoing interest and an active embrace of AI technologies and solutions, with 91.5% of firms reporting ongoing investment in AI. But only 14.6% of firms report that they have deployed AI capabilities into widespread production. Perhaps as a result, the percentage of respondents agreeing that their pace of investment in AI and big data was accelerating fell from 92% in 2018 to 52% in 2019.

Deloitte 2018 State of Enterprise AI survey: The top three challenges with AI were implementation issues, integrating AI into the company's roles and functions, and data issues, all factors involved in large-scale deployment.

In a 2018 McKinsey Global Survey of AI, most respondents whose companies have deployed AI in a specific function report achieving moderate or significant value from that use, but only 21 percent of respondents report embedding AI into multiple business units or functions.

In short, AI has not yet achieved much return on investment. It has yet to substantially improve the lives of workers, the productivity and performance of organizations, or the effective functions of societies. It is capable of doing all these things, but is being held back from its potential impact by a series of factors I will describe below.

What's Holding AI Back

I'll describe the factors that are preventing AI from having a substantial return in terms of the letters of our new organization: the ROAI Institute. Although it primarily stands for return on artificial intelligence, it also works to describe the missing or critical ingredients for a successful return:

Reengineering: The business process reengineering movement of the 1980s and early '90s, for which I wrote the first article and book (admittedly first by only a few weeks in both cases), described an opportunity for substantial change in broad business processes based on the capabilities of information technology. Then the technology catalyst was enterprise systems and the Internet; now it's artificial intelligence and business analytics.

There is a great opportunity, thus far only rarely pursued, to redesign business processes and tasks around AI. Since AI thus far is a relatively narrow technology, task redesign is more feasible now, and essential if organizations are to derive value from AI. Process and task design has become a question of what machines will do versus what tasks are best suited to humans.

We are not condemned to narrow task redesign forever, however. Combinations of multiple AI technologies can lead to change in entire end-to-end processes: new product and service development, customer service, order management, procure-to-pay, and the like.

Organizations need to embrace this new form of reengineering while avoiding the problems that derailed the movement in the past; I called it "The Fad that Forgot People." Forgetting people, and their interactions with AI, would also lead to the derailing of AI technology as a vehicle for positive change.

Organization and Culture: AI is the child of big data and analytics, and is likely to be subject to the same organization and culture issues as its parents. Unfortunately, there are plenty of survey results suggesting that firms are struggling to achieve data-driven cultures.

The 2019 NewVantage Partners survey of large U.S. firms I cite above found that only 31.0% of companies say they are data-driven. This number has declined from 37.1% in 2017 and 32.4% in 2018. 28% said in 2019 that they have a data culture. 77% reported that business adoption of big data and AI initiatives remains a major challenge. Executives cited multiple factors (organizational alignment, agility, resistance), with 95% stemming from cultural challenges (people and process), and only 5% relating to technology.

A 2019 Deloitte survey of US executives on their perspectives on analytical insights found that most executives (63%) do not believe their companies are analytics-driven. 37% say their companies are either analytical competitors (10%) or analytical companies (27%). And 67% of executives say they are not comfortable accessing or using data from their tools and resources; even among companies with strong data-driven cultures, 37% express discomfort.

The absence of a data-driven culture affects AI as much as any technology. It means that the company and its leaders are unlikely to be motivated or knowledgeable about AI, and hence unlikely to build the necessary AI capabilities to succeed. Even if AI applications are successfully developed, they may not be broadly implemented or adopted by users. In addition to culture, AI systems may be a poor fit with an organization for reasons of organizational structure, strategy, or badly-executed change management. In short, the organizational and cultural dimension is critical for any firm seeking to achieve return on AI.

Algorithms and Data: Algorithms are, of course, the key technical feature of most AI systems, at least those based on machine learning. And it's impossible to separate data from algorithms, since machine learning algorithms learn from data. In fact, the greatest impediment to effective algorithms is insufficient, poor-quality, or unlabeled data. Other algorithm-related challenges for AI implementation include:

Investment: One key driver of the lack of return from AI is the simple failure to invest enough. Survey data suggest most companies don't invest much yet, and I mentioned one survey above suggesting that investment levels have peaked in many large firms. And the issue is not just the level of investment, but also how the investments are being managed. Few companies demand ROI analysis both before and after implementation; they apparently view AI as experimental, even though the most common version of it (supervised machine learning) has been available for over fifty years. The same companies may not plan for increased investment at the deployment stage, typically one or two orders of magnitude more than a pilot, focusing only on pre-deployment AI applications.

Of course, with any technology it can be difficult to attribute revenue or profit gains to the application. Smart companies seek intermediate measures of effectiveness, including user behavior changes, task performance, process changes, and so forth, that would precede improvements in financial outcomes. But it's rare for companies to measure even these.

A Program of Research and Structured Action

Along with several other veterans of big data and AI, I am forming the Return on AI Institute, which will carry out programs of research and structured action, including surveys, case studies, workshops, methodologies, and guidelines for projects and programs. The ROAI Institute is a benefit corporation that will be supported by companies and organizations that want to get more value out of their AI investments.

Our focus will be less on AI technology (though technological breakthroughs and trends will be considered for their potential to improve returns) and more on the factors defined in this article that improve deployment, organizational change, and financial and social returns. We will also focus on the important social dimension of AI in our work: is it improving work or the quality of life, solving social or healthcare problems, or making government bodies more responsive? Those types of benefits will be described in our work in addition to the financial ones.

Our research and recommendations will address topics such as:

Please contact me at tdavenport@babson.edu if you care about these issues with regard to your own organization and are interested in approaches to them. AI is a powerful and potentially beneficial technology, but its benefits won't be realized without considerable attention to ROAI.


Noble.AI Contributes to TensorFlow, Google’s Open-Source AI Library and the Most Popular Deep Learning – AiThority

Noble.AI, whose artificial intelligence (AI) software is purpose-built for engineers, scientists, and researchers and enables them to innovate and make discoveries faster, announced that it had completed contributions to TensorFlow, the world's most popular open-source framework for deep learning, created by Google.

"Part of Noble's mission is building AI that's accessible to engineers, scientists and researchers, anytime and anywhere, without needing to learn or re-skill into computer science or AI theory," said Dr. Matthew C. Levy, Founder and CEO of Noble.AI. He continued, "The reason why we're making this symbolic contribution open-source is so people have greater access to tools amenable to R&D problems."


TensorFlow is an end-to-end open source platform for machine learning originally developed by the Google Brain team. Today it is used by more than 60,000 GitHub developers and has achieved more than 140,000 stars and 80,000 forks of the codebase.


Noble.AI's specific contribution helps augment the sparse-matrix capabilities of TensorFlow. Often, matrices represent mathematical operations that need to be performed on input data, such as calculating the temporal derivative of time-series data. In many common physics and R&D scenarios these matrices are sparsely populated: a tiny fraction, often less than one percent, of all elements in the matrix are non-zero. In this setting, storing the entire matrix in a computer's memory is cumbersome and often impossible altogether at industrial R&D scale. In these cases, it often becomes advantageous to use sparse matrix operations.
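
As a rough illustration of the general idea (not Noble.AI's specific contribution), the sketch below builds a sparse finite-difference operator with TensorFlow's built-in sparse ops and applies it to a time series to approximate a temporal derivative; the signal and sizes are invented.

```python
# Minimal sketch: applying a sparse finite-difference operator to a time series
# using TensorFlow's sparse ops. Illustrative only.
import tensorflow as tf

n = 1000   # number of time samples
dt = 0.01  # sampling interval

# Forward-difference operator D (shape (n-1) x n): (x[i+1] - x[i]) / dt.
# Only two non-zero entries per row, so a dense matrix would waste memory.
indices, values = [], []
for i in range(n - 1):
    indices += [[i, i], [i, i + 1]]
    values += [-1.0 / dt, 1.0 / dt]

D = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=[n - 1, n])
D = tf.sparse.reorder(D)

t = tf.linspace(0.0, (n - 1) * dt, n)
x = tf.reshape(tf.sin(t), [n, 1])              # example signal

dx_dt = tf.sparse.sparse_dense_matmul(D, x)    # approximate temporal derivative
print(dx_dt[:5, 0])                            # roughly cos(t) for small t
```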



PSD2: How machine learning reduces friction and satisfies SCA – The Paypers

Andy Renshaw, Feedzai: It crosses borders but doesn't have a passport. It's meant to protect people but can make them angry. It's competitive by nature but doesn't want you to fail. What is it?

If the PSD2 regulations and Strong Customer Authentication (SCA) feel like a riddle to you, you're not alone. SCA places strict two-factor authentication requirements upon financial institutions (FIs) at a time when FIs are facing stiff competition for customers. On top of that, the variety of payment types, along with the sheer number of transactions, continues to increase.

According to UK Finance, debit card transactions have outnumbered cash transactions since 2017, while mobile banking has surged over the past year, particularly for contactless payments. The number of contactless payment transactions per customer is growing; this increase in transactions also raises the potential for customer friction.

The number of transactions isn't the only thing that's shown an exponential increase; the speed at which FIs must process them has too. Customers expect to send, receive, and access money with the swipe of a screen. Driven by customer expectations, instant payments are gaining traction across the globe with no sign of slowing down.

Considering the sheer number of transactions combined with the need to authenticate payments in real-time, the demands placed on FIs can create a real dilemma. In this competitive environment, how can organisations reduce fraud and satisfy regulations without increasing customer friction?

For countries that fall under PSD2's regulation, the answer lies in the one known way to avoid customer friction while meeting the regulatory requirement: keep fraud rates at or below SCA exemption thresholds.

How machine learning keeps fraud rates below the exemption threshold to bypass SCA requirements

Demonstrating sufficiently low fraud rates allows financial institutions to bypass the SCA requirement. The logic behind this is simple: if the FI's systems can prevent fraud at such high rates, they've demonstrated that those systems are secure without additional authentication.

SCA exemption thresholds are:

Exemption threshold value   Remote electronic card-based payment   Remote electronic credit transfers
EUR 500                     below 0.01% fraud rate                 below 0.01% fraud rate
EUR 250                     below 0.06% fraud rate                 below 0.01% fraud rate
EUR 100                     below 0.13% fraud rate                 below 0.015% fraud rate
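
As a simple illustration of how these thresholds work, the sketch below maps a transaction amount and a provider's fraud rate to whether the transaction-risk-analysis exemption could apply. It is a simplification for illustration only, not a rules engine or regulatory guidance.

```python
# Illustrative lookup for the PSD2 transaction-risk-analysis exemption,
# following the thresholds in the table above. Simplified for illustration.

# (max transaction value EUR, required rate for cards, for credit transfers)
TRA_THRESHOLDS = [
    (100.0, 0.0013, 0.00015),
    (250.0, 0.0006, 0.0001),
    (500.0, 0.0001, 0.0001),
]

def sca_exemption_possible(amount_eur: float, provider_fraud_rate: float,
                           payment_type: str = "card") -> bool:
    """Return True if the provider's fraud rate allows a TRA exemption
    for a transaction of this size; otherwise SCA is required."""
    for max_amount, card_rate, transfer_rate in TRA_THRESHOLDS:
        if amount_eur <= max_amount:
            limit = card_rate if payment_type == "card" else transfer_rate
            return provider_fraud_rate <= limit
    return False  # above EUR 500 the exemption does not apply

print(sca_exemption_possible(240.0, 0.0005, "card"))      # True: below 0.06%
print(sca_exemption_possible(240.0, 0.0005, "transfer"))  # False: needs below 0.01%
```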

Looking at these numbers, you might think that achieving SCA exemption thresholds is impossible. After all, bank transfer scams rose 40% in the first six months of 2019. But state-of-the-art technology rises to the challenge of increased fraud. Artificial intelligence, and more specifically machine learning, makes achieving SCA exemption thresholds possible.

How machine learning achieves SCA exemption threshold values

Every transaction has hundreds of data points, called entities. Entities include time, date, location, device, card, cardless, sender, receiver, merchant, customer age; the possibilities are almost endless. When data is cleaned and connected, meaning it doesn't live in siloed systems, the power of machine learning to provide actionable insights on that data is historically unprecedented.

Robust machine learning technology uses both rules and models and learns from both historical and real-time profiles of virtually every data point or entity in a transaction. The more data we feed the machine, the better it gets at learning fraud patterns. Over time, the machine learns to accurately score transactions in less than a second without the need for customer authentication.
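
As a toy illustration of scoring a transaction with a model trained on historical entities, consider the sketch below. The features, data, and model are invented; a production platform of the kind described here would combine models with rules and streaming profiles.

```python
# Toy sketch of real-time transaction scoring from historical transaction
# features ("entities"). Invented features and labels, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000
X_hist = np.column_stack([
    rng.uniform(1, 2000, n),        # amount (EUR)
    rng.integers(0, 24, n),         # hour of day
    rng.integers(0, 2, n),          # new-device flag
    rng.uniform(0, 5000, n),        # distance from usual location (km)
])
# Toy labels: fraud more likely for large amounts on new devices far from home
y_hist = ((X_hist[:, 0] > 1500) & (X_hist[:, 2] == 1) & (X_hist[:, 3] > 1000)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hist, y_hist)

incoming = np.array([[1800.0, 3, 1, 4200.0]])   # one new transaction
risk = model.predict_proba(incoming)[0, 1]       # scored in well under a second
print(f"fraud risk score: {risk:.3f}")
```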

Machine learning creates streamlined and flexible workflows

Of course, sometimes authentication is inevitable. For example, if a customer who generally initiates transactions in Brighton suddenly initiates a transaction from Mumbai without a travel note on the account, authentication should be required. But if machine learning platforms have flexible data science environments that embed authentication steps seamlessly into the transaction workflow, the experience can be as customer-centric as possible.

Streamlined workflows must extend to the fraud analyst's job

Flexible workflows aren't just important to instant payments; they're important to all payments. And they can't just be a back-end experience in the data science environment. Fraud analysts need flexibility in their workflows too. They're under pressure to make decisions quickly and accurately, which means they need a full view of the customer, not just the transaction.

Information provided at a transactional level doesn't allow analysts to connect all the dots. In this scenario, analysts are left opening up several case managers in an attempt to piece together a complete and accurate fraud picture. It's time-consuming and ultimately costly, not to mention the wear and tear on employee satisfaction. But some machine learning risk platforms can show both authentication and fraud decisions at the customer level, ensuring analysts have a 360-degree view of the customer.

Machine learning prevents instant payments from becoming instant losses

Instant payments can provide immediate customer satisfaction, but also instant fraud losses. Scoring transactions in real time means institutions can increase the security around the payments going through their systems before it's too late.

Real-time transaction scoring requires a colossal amount of processing power because it can't use batch processing, an otherwise efficient method for dealing with high volumes of data. That's because the lag time between when a customer transacts and when a batch is processed makes batch processing incongruent with instant payments. Therefore, scoring transactions in real time requires supercomputers with super processing powers. The costs associated with this make hosting systems on the cloud more practical than hosting at the FI's premises, often referred to as on-prem. Of course, FIs need to consider other factors, including cybersecurity concerns, before determining where they should host their machine learning platform.

Providing exceptional customer experiences by keeping fraud at or below PSD2's SCA thresholds can seem like a magic trick, but it's not. It's the combined intelligence of humans and machines that provides the most effective method we have today to curb and prevent fraud losses. It's how we solve the friction-security puzzle and deliver customer satisfaction while satisfying SCA.

About Andy Renshaw

Andy Renshaw, Vice President of Banking Solutions at Feedzai, has over 20 years of experience in banking and the financial services industry, leading large programs and teams in fraud management and AML. Prior to joining Feedzai, Andy held roles in global financial institutions such as Lloyds Banking Group, Citibank, and Capital One, where he helped fight against the ever-evolving financial crime landscape as a technical expert, fraud prevention expert, and a lead product owner for fraud transformation.

About Feedzai

Feedzai is the market leader in fighting fraud with AI. We're coding the future of commerce with today's most advanced risk management platform, powered by big data and machine learning. Founded and developed by data scientists and aerospace engineers, Feedzai has one mission: to make banking and commerce safe. The world's largest banks, processors, and retailers use Feedzai's fraud prevention and anti-money laundering products to manage risk while improving customer experience.


Neural networks facilitate optimization in the search for new materials – MIT News

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD '19, Sahasrajit Ramesh, and graduate student Chenru Duan.

The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they are really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics.

To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. Each such study could consume hours to days of work.

Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network about the relationship between the materials' chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.

This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. This kind of process of finding the best solutions in situations where improving one factor tends to worsen the other is known as finding a Pareto front, a graph of the points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.
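
For readers who want to see the idea concretely, the sketch below extracts a Pareto front from a set of invented candidates with two conflicting objectives (say, solubility and energy density, both to be maximized). It illustrates the concept only and is not the MIT team's code.

```python
# Minimal sketch of extracting a Pareto front for two conflicting objectives,
# both to be maximized. Invented data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
solubility = rng.uniform(0, 1, 200)
energy_density = 1.0 - solubility + rng.normal(0, 0.1, 200)  # roughly anti-correlated
candidates = np.column_stack([solubility, energy_density])

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return a boolean mask of points not dominated by any other point
    (higher is better in every column)."""
    mask = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        others = np.delete(points, i, axis=0)
        # i is dominated if some other point is >= in both objectives and > in one
        dominated = np.any(np.all(others >= points[i], axis=1) &
                           np.any(others > points[i], axis=1))
        mask[i] = not dominated
    return mask

front = candidates[pareto_front(candidates)]
print(f"{len(front)} of {len(candidates)} candidates lie on the Pareto front")
```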

Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.

In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight). But increasing solubility tends to decrease the energy density, and vice versa.

Not only was the neural network able to rapidly come up with promising candidates, it was also able to assign levels of confidence to its different predictions through each iteration, which helped to allow the refinement of the sample selection at each step. "We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail," Kulik says.

The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.

Through that process, the neural net "both gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we've already characterized can further improve on what we already know," she says.

Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. "We do view it as the framework that can be applied to any materials design challenge where you're really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you're trying to improve, but improving that worsens another. And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery."

For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.

"This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences," she says. "But the long-term significance of the work is in the methodology itself, because of things that might not be possible at all otherwise. You start to realize that even with parallel computations, these are cases where we wouldn't have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to."

"This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications," says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses how to do machine learning when there are multiple objectives. Kulik's approach uses leading-edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes.

Schatz says this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.

The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.


Machine learning teams with antibody science on COVID-19 treatment discovery – AI in Healthcare

Two data scientists say they've created AI algorithms that can do in a week what biological researchers might otherwise spend years trying to pull off in a laboratory: discover antibody-based treatments that have a fighting chance to beat back COVID-19.

In fact, studies have shown it takes an average of five years and half a billion dollars to find and fine-tune antibodies in a lab, Andrew Satz and Brett Averso, both execs of a 12-member startup called EVQLV, explain.

Speaking with their alma mater, Columbia University's Data Science Institute, Satz and Averso say their machine-learning algorithms can help by cutting the chances of costly experimental failures in the lab.

"We fail in the computer as much as possible to reduce the possibility of downstream failure in the laboratory," Satz tells the institute's news division. "[T]hat shaves a significant amount of time from laborious and time-consuming work."


Natural Language Processing is an Untapped AI Tool for Innovation – Yahoo Finance

Natural language processing (NLP) will improve processes including technology landscaping, competitive analysis, and weak signal detection

BOSTON, March 26, 2020 /PRNewswire/ --Innovation leaders are seeking ways to use artificial intelligence (AI) effectively to extract value and leverage data for maximum impact. Lux considers natural language processing (NLP) and topic modeling the AI tools of choice. These tools have the potential to accelerate the front end of innovation across many industries, but remain underutilized. According to Lux Research's new whitepaper, "Improving the Front End of Innovation with Artificial Intelligence and Machine Learning," NLP can improve processes including technology landscaping, competitive analysis, and weak signal detection.


NLP enables rapid analysis of huge volumes of text, which is where most of the data driving innovation lives.

"When utilized effectively, machine learning can quickly mine data to produce actionable insights, significantly decreasing the time it takes for a comprehensive analysis to be performed. An analysis that would have previously taken weeks can now be reduced to days," said Kevin See, Ph.D., VP of Digital Products for Lux Research.

The speed conferred through NLP is enabled by the comprehensiveness of topic modeling, which extracts important concepts from text while eliminating the human assumption and bias associated with it. "Previously, an investigation was hindered by either the limited knowledge or bias of the primary investigator, both of which are mitigated when using machine learning. A beneficial technology or idea is less likely to be missed due to an error in human judgement," explained See.

There are many relevant applications that use machine learning to leverage speed and comprehensiveness in innovation. Landscaping is used to build a taxonomy that defines the trends for key areas of innovation under a specific topic. Concept similarity can take one piece of content and find other relevant articles, patents, or news to accelerate the innovation process. Topic modeling can also be used for competitive portfolio analysis when applied to a corporation instead of a technology, or for weak signal detection when applied to large data sets like news or Twitter.
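
As a minimal illustration of the concept-similarity idea described above, the sketch below ranks a small invented corpus against a seed description using TF-IDF and cosine similarity, one basic approach; it does not represent Lux Research's actual tooling.

```python
# Simple sketch of "concept similarity": given one seed document, rank other
# documents (patents, articles, news) by textual similarity. Invented corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "solid-state battery electrolyte for electric vehicles",
    "flow battery chemistry for grid-scale energy storage",
    "natural language processing for patent landscaping",
    "machine learning models for weak signal detection in news",
]
seed = "new electrolyte materials for high energy density batteries"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus + [seed])

# Last row is the seed; compare it against every document in the corpus
scores = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()
for text, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```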

When defining a successful AI and machine learning strategy, there are a few key points to consider, including whether you'll buy or build your technology, what data sources you'll use, and how you'll leverage experts to define and interpret the data. It's also important to adopt a culture of acceptance of these tools so that valued human resources see them as an asset to their skills rather than as competition. "The confidence and speed AI and machine learning bring to the decision-making process is enabling innovation to happen at a more rapid pace than ever before, but don't think this means humans are no longer needed," said See. People are still necessary to define the starting points of an analysis, label topics, and extract insights from the data collected. "It's clear that a collaboration between humans and machines can generate better results, benefiting all involved," See continued.

For more information, download a copy of Lux Research's whitepaper here.

About Lux Research


Lux Research is a leading provider of tech-enabled research and advisory services, helping clients drive growth through technology innovation. A pioneer in the research industry, Lux uniquely combines technical expertise and business insights with a proprietary intelligence platform, using advanced analytics and data science to surface true leading indicators. With quality data derived from primary research, fact-based analysis, and opinions that challenge traditional thinking, Lux empowers clients to make more informed decisions today to ensure future success.

For more information, visit http://www.luxresearchinc.com, read our blog, connect on LinkedIn, or follow @LuxResearch.

Contact Information: Jess Bonner, press@luxresearchinc.com, (617) 502-3219


SOURCE Lux Research


Coronavirus lockdown: 10 free online computer science courses from Harvard, Princeton & other top universities to study – Gadgets Now

As India fights the spread of coronavirus disease with 21 days of lockdown, it may be a good idea to utilise the extra time at home to learn something new. There are lots of free online computer science courses from top universities like Harvard, Princeton, Stanford, MIT and others available online which you can start anytime and learn at your own pace. Class Central, a platform for free online courses, lists out thousands of courses in computer science, business, data science, humanities and more. Here are 10 free online computer science courses from Harvard, Princeton & other top universities that you may want to consider to upskill yourself and make the most of the lockdown period. (Note that only basic or introductory courses are listed and there are thousands of free online courses available which you can try.)


Udacity offers free tech training to laid-off workers due to the coronavirus pandemic – CNBC

A nanodegree in autonomous vehicles is just one of 40 programs that Udacity is offering for free to workers laid off in the wake of the COVID-19 pandemic.

Udacity

Online learning platform Udacity is responding to the COVID-19 pandemic by offering free tech training to workers laid off as a result of the crisis.

On Thursday the Mountain View, California-based company revealed that in the wake of layoffs and furloughs by major U.S. corporations, including Marriott International, Hilton Hotels and GE Aviation, it will offer its courses, known as nanodegrees, for free to individuals in the U.S. who have been let go because of the coronavirus. The average price for an individual signing up for a nanodegree is about $400 a month, and the degrees take anywhere from four to six months to complete, according to the company.

The hope is that while individuals wait to go back to work, or in the event that the layoff is permanent, they can get training in fields that are driving so much of today's digital transformation. Udacity's courses include artificial intelligence, machine learning, digital marketing, product management, data analysis, cloud computing, and autonomous vehicles, among others.

Gabe Dalporto, CEO of Udacity, said that over the past few weeks, as he and his senior leadership team heard projections of skyrocketing unemployment numbers as a result of COVID-19, he felt the need to act. "I think those reports were a giant wake-up call for everybody," he says. "This [virus] will create disruption across the board and in many industries, and we wanted to do our part to help."


Dalporto says Udacity is funding the scholarships completely and that displaced workers can apply for them at udacity.com/pledge-to-americas-workers beginning March 26. Udacity will take the first 50 eligible applicants from each company that applies, and within 48 hours individuals should be able to begin the coursework. Dalporto says the offer will be good for the first 20 companies that apply and that "after that we'll evaluate and figure out how many more scholarships we are going to fund."

The company also announced this week that any individual, regardless of whether they've been laid off, can enroll for free in any one of Udacity's 40 different nanodegree programs. Users will get the first month free when they enroll in a monthly subscription, but Dalporto pointed out that many students can complete a course in a month if they dedicate enough time to it.

Udacity's offerings at this time underscore the growing disconnect between the skills workers have and the talent that organizations need today and in the years ahead. The company recently signed a deal with Royal Dutch Shell, for instance, to provide training in artificial intelligence. Shell says about 2,000 of its 82,000 employees have either expressed interest in the AI offerings or have been approached by their managers about taking the courses on everything from Python programming to training neural networks. Shell says the training is completely voluntary.


And as more workers lose their jobs in the wake of the COVID-19 pandemic, it will be even more crucial that they're able to reenter the job market armed with the skills companies are looking for. According to the World Economic Forum's Future of Jobs report, at least 54% of all employees will need reskilling and upskilling by 2022. Yet only 30% of employees at risk of job displacement because of technological change received any training over the past year.

"America is facing a massive shortage of workers with the right technical skills, and as employers, retraining your existing workforce to address that shortage is the most efficient, cost-effective way to fill those gaps in an organization," Dalporto says. "The great irony in the world right now is that at the same time that a lot of people are going to lose their jobs, there are areas in corporations where managers just can't hire enough people for jobs in data analytics, cloud computing and AI."

Dalporto, who grew up in West Virginia, says he sees this point vividly every time he revisits his hometown. "When I go back, I see so many businesses and companies boarded up and people laid off because they didn't keep pace with automation and people didn't upskill," he says. As a result, many of these workers wind up in minimum-wage jobs, and that "just creates a lot of pain for them and their families," he adds. What's happening now is only fueling that cycle, one that Dalporto says can be minimized with the right action.

"Laying people off is never an easy decision, but companies have to move the conversation beyond how many weeks of severance they're going to offer," he says. "We have to be asking how are we going to help them get the skills they need to be successful in their careers moving forward when this is all behind us."
