Category Archives: Machine Learning
Five Strategies for Putting AI at the Center of Digital Transformation – Knowledge@Wharton
Across industries, companies are applying artificial intelligence to their businesses, with mixed results. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI, writes Wharton professor of operations, information and decisions Kartik Hosanagar in this opinion piece. Hosanagar is faculty director of Wharton AI for Business, a new Analytics at Wharton initiative that will support students through research, curriculum, and experiential learning to investigate AI applications. He also designed and instructs Wharton Online's Artificial Intelligence for Business course.
While many people perceive artificial intelligence to be the technology of the future, AI is already here. Many companies across a range of industries have been applying AI to improve their businesses, from Spotify using machine learning for music recommendations to smart home devices like Google Home and Amazon Alexa. That said, there have also been some early failures, such as Microsoft's social-learning chatbot, Tay, which turned anti-social after interacting with hostile Twitter followers, and IBM Watson's inability to deliver results in personalized health care. What separates the AI projects that succeed from the ones that don't often has to do with the business strategies organizations follow when applying AI. The following strategies can help business leaders not only effectively apply AI in their organizations, but succeed in adapting it to innovate, compete and excel.
1. View AI as a tool, not a goal.
One pitfall companies might encounter in the process of starting new AI initiatives is that the concentrated focus and excitement around AI might lead to AI being viewed as a goal in and of itself. But executives should be cautious about developing a strategy specifically for AI, and instead focus on the role AI can play in supporting the broader strategy of the company. A recent report from MIT Sloan Management Review and Boston Consulting Group calls this working "backward from strategy, not forward from AI."
As such, instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve. For example, machine learning algorithms bring distinct strengths in terms of their predictive power given high-quality training data. Companies can start by looking for existing challenges that could benefit from these strengths, as those areas are likely to be ones where applying AI is not only possible, but could actually disproportionately benefit the business.
The application of machine learning algorithms for credit card fraud detection is one example of where AI's particular strengths make it a very valuable tool in assisting with a longstanding problem. In the past, fraudulent transactions were generally only identified after the fact. However, AI allows banks to detect and block fraud in real time. Because banks already had large volumes of data on past fraudulent transactions and their characteristics, the raw material from which to train machine learning algorithms is readily available. Moreover, predicting whether particular transactions are fraudulent and blocking them in real time is precisely the type of repetitive task that an algorithm can do at a speed and scale that humans cannot match.
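To make the pattern concrete, here is a minimal sketch of a fraud classifier of the kind described: a model trained on historical, labeled transactions that then scores new transactions in real time. The features, synthetic data, and decision threshold are illustrative assumptions, not any bank's actual setup.

```python
# A minimal sketch: train on historical labeled transactions, then score new
# ones as they arrive. Features, data, and threshold are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: amount, hour of day, distance from home, merchant risk.
X = rng.normal(size=(10_000, 4))
y = (rng.random(10_000) < 0.02).astype(int)  # ~2% of past transactions were fraud

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

def should_block(transaction: np.ndarray, threshold: float = 0.9) -> bool:
    """Block the transaction if the model's fraud probability exceeds the threshold."""
    fraud_probability = model.predict_proba(transaction.reshape(1, -1))[0, 1]
    return fraud_probability >= threshold

print(should_block(X_test[0]))
```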
2. Take a portfolio approach.
Over the long term, viewing AI as a tool and finding AI applications that are particularly well matched with business strategy will be most valuable. However, I wouldn't recommend that companies pool all their AI resources into a single, large, moonshot project when they are first getting started. Rather, I advocate taking a portfolio approach to AI projects that includes both quick wins and long-term projects. This approach will allow companies to gain experience with AI and build consensus internally, which can then support the success of larger, more strategic and transformative projects later down the line.
Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects.
The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy. Rather than looking for simple ways to optimize the employee experience, long-term projects should involve rethinking entire end-to-end processes and potentially even coming up with new visions for what otherwise standard customer experiences could look like. For example, a long-term project for a car insurance company could involve creating a fully automated claims process in which customers can photograph the damage to their car and use an app to settle their claims. Building systems like this that improve efficiency and create seamless new customer experiences requires technical skills and consensus on AI, which earlier quick wins will help to build.
3. Reskill and invest in your talent.
In addition to developing skills through quick wins, companies should take a structured approach to growing their talent base, with a focus both on reskilling internal employees and on hiring external experts. Focusing on growing the talent base is particularly important given that most engineers in a company would have been trained in computer science before the recent interest in machine learning. As such, the skills needed for embarking on AI projects are unlikely to exist in sufficient numbers in most companies, making reskilling particularly important.
In its early days of working with AI, Google launched an internal training program in which employees were invited to spend six months working in a machine learning team with a mentor. At the end of this time, Google distributed these experts into product teams across the company in order to ensure that the entire organization could benefit from AI-related reskilling. Many new online courses now make it economical to reskill employees in AI.
The MIT Sloan Management Review-BCG report mentioned above also found that, in addition to developing talent in producing AI technologies, an equally important area is that of consuming AI technologies. Managers, in particular, need to have skills to consult AI tools and act on recommendations or insights from these tools. This is because AI systems are unlikely to automate entire processes from the get-go. Rather, AI is likely to be used in situations where humans remain in the loop. Managers will need basic statistical knowledge in order to understand the limitations and capabilities of modern machine learning and to decide when to lean on machine learning models.
4. Focus on the long term.
Given that AI is a new field, it is largely inevitable that companies will experience early failures. Early failures should not discourage companies from continuing to invest in AI. Rather, companies should be aware of, and resist, the tendency to retreat after an early failure.
Historically, many companies have stumbled in their early initiatives with new technologies, such as when working with the internet and with cloud and mobile computing. The companies that retreated, that stopped or scaled back their efforts after initial failures, tended to be in a worse position long term than those that persisted. I anticipate that a similar trend will occur with AI technologies. That is, many companies will fail in their early AI efforts, but AI itself is here to stay. The companies that persist and learn to use AI well will get ahead, while those that avoid AI after their early failures will end up lagging behind.
5. Address AI-specific risks and biases aggressively.
Companies should be aware of new risks that AI can pose and proactively manage these risks from the outset. Initiating AI projects without an awareness of these unique risks can lead to unintended negative impacts on society, as well as leave the organizations themselves susceptible to additional reputational, legal, and regulatory risks (as discussed in my book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control).
There have been many recent cases where AI technologies have discriminated against historically disadvantaged groups. For example, mortgage algorithms have been shown to have a racial bias, and an algorithm created by Amazon to assist with hiring was shown to have a gender bias, though this was actually caught by Amazon itself before the algorithm was used. This type of bias in algorithms is thought to occur because, like humans, algorithms are products of both nature and nurture. While nature is the logic of the algorithm itself, nurture is the data that algorithms are trained on. These datasets are usually compilations of human behaviors, oftentimes specific choices or judgments that human decision-makers have previously made on the topic in question, such as which employees to hire or which loan applications to approve. The datasets are therefore made up of biased decisions from humans themselves, which the algorithms learn from and incorporate. As such, it is important to note that algorithms are generally not creating wholly new biases, but rather learning from the historical biases of humans and exacerbating them by applying them on a much larger, and therefore even more damaging, scale.
AI shouldn't be abandoned given that the alternative, human decision-makers, are biased too. Rather, companies should be aware of the kinds of social harms that can result from AI technologies and rigorously audit their algorithms to catch biases before they negatively impact society. Proceeding with AI initiatives without an awareness of these social risks can lead to reputational, legal, and regulatory risks for firms, and most importantly can have extremely damaging impacts on society.
Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch
Historians interested in the way events and people were chronicled in the old days once had to sort through card catalogs for old papers, then microfiche scans, then digital listings; modern advances, however, can index them down to each individual word and photo. A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning.
Led by Ben Lee, a researcher from the University of Washington occupying the Library's Innovator in Residence position, the Newspaper Navigator collects and surfaces data from images on some 16 million pages of newspapers throughout American history.
Lee and his colleagues were inspired by work already being done in Chronicling America, an ongoing digitization effort for old newspapers and other such print materials. While that work used optical character recognition to scan the contents of all the papers, there was also a crowdsourced project in which people identified and outlined images for further analysis. Volunteers drew boxes around images relating to World War I, then transcribed the captions and categorized the pictures.
This limited effort set the team thinking.
"I loved it because it emphasized the visual nature of the pages. Seeing the visual diversity of the content coming out of the project, I just thought it was so cool, and I wondered what it would be like to chronicle content like this from all over America," Lee told TechCrunch.
He also realized that what the volunteers had created was in fact an ideal set of training data for a machine learning system. "The question was, could we use this stuff to create an object detection model to go through every newspaper, to throw open the treasure chest?"
The answer, happily, was yes. Using the initial human-powered work of outlining images and captions as training data, they built an AI agent that could do so on its own. After the usual tweaking and optimizing, they set it loose on the full Chronicling America database of newspaper scans.
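For readers curious what that looks like in practice, here is a hedged sketch of this kind of fine-tuning: a pretrained object detector retrained on crowdsourced bounding boxes. The model choice, category list, and dummy data are assumptions for illustration, not the project's actual configuration.

```python
# Sketch: fine-tune a pretrained detector on volunteer-drawn boxes. The model,
# categories, and dummy tensors are assumptions, not Newspaper Navigator's setup.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 7  # e.g. background + photo, illustration, map, cartoon, headline, ad

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One dummy scanned page with one volunteer-drawn box labeled as class 1.
images = [torch.rand(3, 600, 400)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 300.0, 280.0]]),
            "labels": torch.tensor([1])}]

loss_dict = model(images, targets)   # training mode returns component losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```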
"It ran for 19 days nonstop, definitely the largest computing job I've ever run," said Lee. But the results are remarkable: millions of images spanning three centuries (from 1789 to 1963) and organized with metadata pulled from their own captions. The team describes their work in a paper you can read here.
Assuming the captions are at all accurate, these images, until recently only accessible by trudging through the archives date by date and document by document, can be searched for by their contents, like any other corpus.
Looking for pictures of the president in 1870? No need to browse dozens of papers looking for potential hits and double-checking the contents in the caption; just search Newspaper Navigator for "president 1870." Or if you want editorial cartoons from the World War II era, you can just get all illustrations from a date range. (The team has already zipped up the photos into yearly packages and plans other collections.)
Here are a few examples of newspaper pages with the machine learning system's determinations overlaid on them (warning: plenty of hat ads and racism):
That's fun for a few minutes for casual browsers, but the key thing is what it opens up for researchers and other sets of documents. The team is throwing a "data jam" today to celebrate the release of the data set and tools, during which they hope to both discover and enable new applications.
"Hopefully it will be a great way to get people together to think of creative ways the data set can be used," said Lee. "The idea I'm really excited by from a machine learning perspective is trying to build out a user interface where people can build their own data set. Political cartoons or fashion ads, just let users define what they're interested in and train a classifier based on that."
A sample of what you might get if you asked for maps from the Civil War era.
In other words, Newspaper Navigator's AI agent could be the parent for a whole brood of more specific ones that could be used to scan and digitize other collections. That's actually the plan within the Library of Congress, where the digital collections team has been delighted by the possibilities brought up by Newspaper Navigator, and machine learning in general.
"One of the things we're interested in is how computation can expand the way we're enabling search and discovery," said Kate Zwaard. "Because we have OCR, you can find things it would have taken months or weeks to find. The Library's book collection has all these beautiful plates and illustrations. But if you want to know, like, what pictures are there of the Madonna and child, some are categorized, but others are inside books that aren't catalogued."
That could change in a hurry with an image-and-caption AI systematically poring over them.
Newspaper Navigator, the code behind it and all the images and results from it are completely public domain, free to use or modify for any purpose. You can dive into the code at the project's GitHub.
Could quantum machine learning hold the key to treating COVID-19? – Tech Wire Asia
Sundar Pichai, CEO of Alphabet, with one of Google's quantum computers. Source: AFP PHOTO / GOOGLE/HANDOUT
Scientific researchers are hard at work around the planet, feverishly crunching data using the world's most powerful supercomputers in the hopes of a speedier breakthrough in finding a vaccine for the novel coronavirus.
Researchers at Penn State University think that they have hit upon a solution that could greatly accelerate the process of discovering a COVID-19 treatment, employing an innovative hybrid branch of research known as quantum machine learning.
When it comes to a computer science-driven approach to identifying a cure, most methodologies harness machine learning to screen different compounds one at a time to see if they might bond with the virus's main protease, or protein.
This process is arduous and time-consuming, even though the most powerful computers are condensing years (maybe decades) of drug testing into less than two years' time. "Discovering any new drug that can cure a disease is like finding a needle in a haystack," said lead researcher Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering at Penn State.
It is also incredibly expensive. Ghosh says the current pipeline for discovering new drugs can take between five and ten years from the concept stage to being released to the market, and could cost billions in the process.
"High-performance computing such as supercomputers and artificial intelligence (AI) can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates," he elaborated.
"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly."
Quantum machine learning is an emerging field that combines elements of machine learning with quantum physics. Ghosh and his doctoral students had in the past developed a toolset for solving a specific set of problems known as combinatorial optimization problems, using quantum computing.
Drug discovery computation aligns with combinatorial optimization problems, allowing the researchers to tap the same toolset in the hopes of speeding up the process of discovering a cure, in a more cost-effective fashion.
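Those combinatorial optimization problems are commonly expressed in QUBO form (quadratic unconstrained binary optimization), the input format that quantum annealers and algorithms like QAOA expect. The toy solver below is a classical brute-force stand-in meant only to show the shape of the problem; the Q matrix and its drug-fragment interpretation are invented for illustration.

```python
# Classical brute-force QUBO solver: minimize x^T Q x over binary vectors x.
# A quantum annealer or QAOA circuit would replace the exhaustive loop.
import itertools
import numpy as np

def solve_qubo(Q: np.ndarray):
    n = Q.shape[0]
    best_x, best_energy = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        energy = float(x @ Q @ x)
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

# Invented example: diagonal terms score individual molecular fragments;
# off-diagonal terms penalize or reward selecting two fragments together.
Q = np.array([[-1.0,  0.5,  0.0],
              [ 0.5, -2.0,  1.0],
              [ 0.0,  1.0, -1.5]])
print(solve_qubo(Q))  # -> (array([1, 0, 1]), -2.5)
```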
"Artificial intelligence for drug discovery is a very new area," Ghosh said. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit in resolving this grave challenge."
Joe Devanesan | @thecrystalcrown
Joe's interest in tech began when, as a child, he first saw footage of the Apollo space missions. He still holds out hope to either see the first man on the moon, or Jetsons-style flying cars in his lifetime.
Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights
Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (AI), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.
This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.
The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models, which process and learn from the patterns and structures in vast quantities of data, into applications running in production, to unlock real business value while ensuring compliance with corporate governance standards.
To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.
It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set; they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable with taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.
As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those they had available in 2015, and this is set to continue evolving as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how they evolve over time.
Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.
Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is a greater demand than ever for data to be audited, and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.
This not only makes the data and metadata used in models more complex, but it also makes the interactions between the constituent pieces of data far more complex. This means machine learning engineers need to put the right infrastructure in place to ensure the right data and metadata is accessible, all while making sure it is properly organized.
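As a rough illustration of what such infrastructure can look like at its simplest, the sketch below logs every dataset entering a pipeline to an append-only ledger with its source and a content hash, so a model run can later be traced back to the exact bytes it consumed. File and field names are hypothetical.

```python
# Minimal lineage ledger: every dataset is registered with its source and a
# content hash, so silent changes to a file are detectable after the fact.
import hashlib
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class LineageRecord:
    path: str           # where the dataset lives
    source: str         # upstream system it came from
    sha256: str         # content hash of the exact bytes
    ingested_at: float  # unix timestamp

def register_dataset(path: str, source: str,
                     ledger: str = "lineage.jsonl") -> LineageRecord:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = LineageRecord(path, source, digest, time.time())
    with open(ledger, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```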
In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.
IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.
This means that, as time goes on and machine learning capabilities continue to develop, we'll see machine learning engineers gain more tools for cleaning up the information their programs use, and thus be able to spend more time on building the ML programs themselves.
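A hedged sketch of what basic automated screening looks like today follows; the column names and the valid-range rule are illustrative assumptions.

```python
# Screen a dataset for duplicates, missing values, and out-of-range rows
# before it reaches a model. Columns and ranges here are assumptions.
import pandas as pd

def screen(df: pd.DataFrame) -> pd.DataFrame:
    print({
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_per_column": df.isna().sum().to_dict(),
    })
    cleaned = df.drop_duplicates().dropna()
    # Hypothetical domain rule: ages outside 0-120 are data-entry errors.
    return cleaned[cleaned["age"].between(0, 120)]

df = pd.DataFrame({"age": [34, 34, -3, None, 58],
                   "spend": [10.0, 10.0, 5.0, 7.5, None]})
print(screen(df))  # one clean row survives
```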
Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model, and to reproduce the same experiment with the exact same results regardless of time and location. This involves a great level of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If one of these changes, then the result will change.
To add to this complexity, it's also necessary to maintain reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live: the results will be the same whether the model is run today or in two years.
Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.
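In the absence of a universal framework, a common stopgap is to snapshot the three components named above yourself. The sketch below records the code version, a hash of the data, and the experiment configuration, then pins the random seeds; the paths, config dict, and use of git are assumptions for illustration.

```python
# Snapshot an experiment: code version, data hash, config, and seeds.
# A real system would also pin library versions and hardware details.
import hashlib
import json
import random
import subprocess

import numpy as np

def snapshot_experiment(data_path: str, config: dict, seed: int = 42) -> dict:
    random.seed(seed)        # pin sources of nondeterminism
    np.random.seed(seed)
    with open(data_path, "rb") as f:
        data_digest = hashlib.sha256(f.read()).hexdigest()
    manifest = {
        "code": subprocess.check_output(           # current git commit
            ["git", "rev-parse", "HEAD"], text=True).strip(),
        "data_sha256": data_digest,
        "config": config,                          # hyperparameters etc.
        "seed": seed,
    }
    with open("manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```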
It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model, but will also have to develop the right tools to monitor it.
The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when creating it. Without monitoring and intervention after deployment, it's likely that a model can end up being rendered dysfunctional or produce skewed results by unexpected data. Without accurate monitoring, results can slowly drift away from what is expected as input data becomes misaligned with the data the model was trained on, producing less and less effective or logical results.
Adversarial attacks on models, often far more sophisticated than hostile tweets aimed at a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon these models, this challenge is only going to grow in prominence for machine learning engineers going forward.
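One widely used monitoring primitive is a statistical drift check on each input feature. The sketch below compares a live feature distribution against the training distribution with a two-sample Kolmogorov-Smirnov test; the threshold and synthetic data are illustrative.

```python
# Alert when the live distribution of a feature no longer matches training.
import numpy as np
from scipy.stats import ks_2samp

def input_drifted(train_feature: np.ndarray, live_feature: np.ndarray,
                  alpha: float = 0.01) -> bool:
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha  # True: distributions have likely diverged

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 5_000)
live = rng.normal(0.6, 1.0, 5_000)  # the world has shifted since training
print(input_drifted(train, live))   # True: time to investigate or retrain
```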
One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined, and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.
Challenges such as data quality may be problems we can make major progress towards solving in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's of little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.
Alex Housley is CEO of Seldon.
How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends
The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.
There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Breast reconstructive surgeries have enabled breast cancer survivors to partake in rebuilding their glands.
All these feats were unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses on its journey into the future.
However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.
Source: Deloitte Insights, 2020 Global Health Care Outlook
In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:
Do you understand the concept of the LACE index?
Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: length of stay of the patient in the hospital, acuity of admission, concurrent diseases (comorbidities), and emergency room visits.
The LACE index is widely accepted as a quality of care barometer and is famously based on the theory of machine learning. Using the past health records of the patients, the concept helps to predict their future state of health. It enables medical professionals to allocate resources on time to reduce the mortality rate.
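For the curious, here is a sketch of the calculation using one commonly published LACE point scheme; exact bands and cut-offs vary between implementations, so treat the numbers below as an assumption rather than a canonical definition.

```python
# LACE score sketch: Length of stay, Acuity, Comorbidity, Emergency visits.
# Point bands follow one published scheme; implementations differ.
def lace_score(length_of_stay_days: int, emergent_admission: bool,
               charlson_index: int, ed_visits_last_6_months: int) -> int:
    if length_of_stay_days < 1:       l = 0
    elif length_of_stay_days <= 3:    l = length_of_stay_days
    elif length_of_stay_days <= 6:    l = 4
    elif length_of_stay_days <= 13:   l = 5
    else:                             l = 7
    a = 3 if emergent_admission else 0          # admitted via the ER
    c = 5 if charlson_index >= 4 else charlson_index
    e = min(ed_visits_last_6_months, 4)
    return l + a + c + e

score = lace_score(5, True, 2, 1)
print(score, "high readmission risk" if score >= 10 else "lower risk")
```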
This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:
From the initial screening of drug compounds to calculating the success rates of a specific medicine based on physiological factors of the patients: the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying this technology to personalize drug combinations to cure blood cancer.
Machine learning has also given birth to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on the patients. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.
Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.
In addition, the technology is also used to monitor biological metrics of the volunteers and the possible long-run harms of the clinical trials. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.
Every human body functions differently. Reactions to a food item, medicine, or season differ. That is why we have allergies. When such is the case, why is customizing the treatment options based on the patient's medical data still such an odd thought?
Machine learning helps medical professionals determine the risk for each patient, depending on their symptoms, past medical records, and family history using micro-bio sensors. These minute gadgets monitor patient health and flag abnormalities without bias, thus enabling more sophisticated capabilities of measuring health.
Cisco reports that machine-to-machine connection in global healthcare is growing at a rate of 30% CAGR, the highest of any industry.
Machine learning is mainly used to mine and analyze patient data to find patterns and diagnose many medical conditions, one of them being skin cancer.
Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process. It relies on long clinical screenings, comprising a biopsy, dermoscopy, and histopathological examination.
But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of the moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.
The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start the treatment faster than usual. Where experts correctly identified malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.
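To give a sense of what sits under the hood of tools in this category (this is not Moleanalyzer's published implementation), binary benign/malignant image classifiers are commonly built by transfer learning: reusing an ImageNet-pretrained backbone and retraining only the final layer, as in the hedged sketch below with stand-in data.

```python
# Generic transfer-learning sketch for a benign/malignant lesion classifier.
# NOT Moleanalyzer's implementation; data tensors here are stand-ins.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: benign, malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train head only
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)   # stand-in batch of dermoscopy photos
labels = torch.randint(0, 2, (8,))    # stand-in benign(0)/malignant(1) labels

loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```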
Healthcare providers ideally have to submit reports to the government containing the necessary records of patients treated at their hospitals.
Compliance policies are continually evolving, which is why it is even more critical for hospital sites to check that they are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources, using different methods, and format it correctly.
"For data managers, comparing patient data from various clinics to ensure they are compliant could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government," says Dr. Nick Oberheiden, Founder and Attorney, Oberheiden P.C.
The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?
Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India
All applicants will be able to join the AWS Machine Learning Foundations Course. While applications are currently open, enrollment for the course begins on May 19.
This course will provide an understanding of software engineering and AWS machine learning concepts, including production-level coding and practice in object-oriented programming. Students will also learn about deep learning techniques and their applications using AWS DeepComposer.
"A major reason behind the increasing uptake of such niche courses among the modern-age learners has to do with the growing relevance of technology across all spheres the world over. In its wake, many high-value job roles are coming up that require a person to possess immense technical proficiency and knowledge in order to assume them. And machine learning is one of the key components of the ongoing AI revolution driving digital transformation worldwide," said Gabriel Dalporto, CEO of Udacity.
The top 325 performers in the foundation course will be awarded a scholarship to join Udacity's Machine Learning Engineer Nanodegree program. In this advanced course, the students will work on ML tools from AWS. This includes real-time projects that are focused on specific machine learning skills.
The Nanodegree program scholarship will begin on August 19.
Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International
The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.
The impact of Covid-19 on the energy system was discussed in an online climate change workshop that also considered how machine learning can help electricity planning in Africa.
This year's International Conference on Learning Representations event included a workshop held by Climate Change AI, a group of academics and artificial intelligence industry representatives, which considered how machine learning can help tackle climate change.
Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. "The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe," said Steffen, who added that the coronavirus pandemic has cast into doubt the implementation of such progressive policies.
The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.
Lobbying
"Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies," said Steffen. "In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure."
However, the ETH Zürich researcher said any temptation to impose environmental conditions on bail-outs for carbon-intensive industries should be resisted. "While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last," he said. "It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies."
Steffen shared with pv magazine readers an article published in Joule, which can be accessed here and which articulates his arguments about how Covid-19 could affect the energy transition.
Covid-19 in the U.K.
The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of Open Climate Fix, a London-based, not-for-profit greenhouse gas emission reduction research laboratory.
"The crisis has reduced overall electricity use in the U.K.," said Kelly. "Residential use has increased but this has not offset reductions in commercial and industrial loads."
Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated the fall-off makes managing grid functions such as voltage regulation more challenging.
Open Climate Fix's Kelly noted even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: "On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes after everyone finished clapping for healthcare workers and went about the rest of their evenings."
Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by fossil fuel philanthropic organization the Rockefeller Foundation, aims to use data to improve the planning and operation of electricity systems in developing countries.
E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: "Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data."
Fobi said: "We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions."
The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.
Modeling
"When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology," added Fobi. "This makes demand estimation crucial to efficient planning."
Unlike many traditional modeling approaches, machine learning is scalable and transferable. RIT's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models to make predictions that guide electrification efforts in countries which are not as far down the track.
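A hedged illustration of that transfer pattern follows: fit a demand model on a data-rich country's records, then predict for settlements elsewhere. The features and synthetic data are invented for the sketch and are not e-Guide's actual inputs.

```python
# Train a demand model where data is rich; predict where it is scarce.
# Features and synthetic data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Assumed features per settlement: population, nightlight intensity,
# distance to nearest road, share of commercial buildings.
X_kenya = rng.random((2_000, 4))
y_kenya = 50 * X_kenya[:, 0] + 30 * X_kenya[:, 1] + rng.normal(0, 2, 2_000)  # kWh/day

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_kenya, y_kenya)

X_new_country = rng.random((5, 4))   # settlements awaiting electrification
print(model.predict(X_new_country))  # estimated daily demand, kWh
```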
Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.
Rural power
Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst, and co-funded by the Energy and Economic Growth program administered by Oxford Policy Management, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.
Though good quality data is often not readily available for Africa, Williams added, it does exist.
"We have spent years developing trusting relationships with utilities," said the RIT academic. "Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data ... We can't do machine learning without high-quality data, and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector but capacity building is crucial."
By Dustin Zubke
This article was amended on 06/05/20 to indicate the Energy and Economic Growth program is administered by Oxford Policy Management, rather than U.S. university Berkeley, as previously stated.
Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…
Originally published in Medium, April 28, 2020
The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.
In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that.
Let's unpack.
Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.
The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.
But this sounds a lot like a normal software engineer.
Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes of many decades' tenure, who don't have the time (or will) to understand the space.
The Struggle is Real – 3 Considerations to Make Machine Learning More Effective – Martechcube
Can machine learning enable success? As marketers, we all want to provide great customer experiences, scale our programs, drive improved outcomes, and, yes, be more efficient. It's hard for me to think of any marketer that doesn't want these things. The key is identifying the best way to do so. Machine learning solutions are often discussed as a popular approach because they are based on data and, in theory, the intelligence can be applied quickly. A simple example is providing someone the next best offer based on a prior action. Yet, many of us can feel stuck getting the most out of machine learning initiatives. Here are some of the common roadblocks:
For machine learning to be a go-to approach, here are a few simple considerations that I have seen make a big difference when evaluating or using machine learning tools.
1. It all starts with the quality and diversity of your data. If your data is a mess (not unified) or is incomplete (only a few sources), your machine learning output will be suboptimal. That's why you have to get data collection right. It is important to include first-party and third-party data or any key sources where your organization is interacting with a buyer (ex: call center data). Make sure the data is tagged correctly and then is normalized and enriched. Even the best models will generate undesirable outputs if the input dataset is incomplete or the data has not been standardized.
2. It helps to have a hypothesis. You don't want to just start wading around in data wondering what might be interesting. Machine learning algorithms look for patterns in the data, but you're likely aware of many of these patterns; it's just too much data for humans to sift through. Start with a feasible premise and then let that drive the model. Sometimes teams are looking for big wins, which are great, but it's important to have a mix of small wins too: incremental improvements are impactful as well.
3. Set your technology up to be flexible. Your business is not static, so it is smart to anticipate and plan for change. It is important to select machine learning solutions that can adapt to whatever new challenge or technology could be brought in.
In summary, machine learning will be part of your strategy at some point if it is not already. All of us want to be able to automate manual tasks, discover insights that will improve the business, and ultimately work more efficiently. It is just like building a house: we all want that impressive kitchen, but if the foundation is shaky no one is using the stove to make a soufflé. Make sure you have a very thoughtful data strategy that looks to the fundamental need of any machine learning algorithm, good input data, and your results will definitely be better. And remember, it is never too late to get this going. If you need help, check this out.
Heidi Bullock is the Chief Marketing Officer at Tealium. She has immense expertise in marketing SaaS products, particularly in product marketing and revenue generation across the customer life cycle (acquisition marketing, up-sell, cross-sell).
Microsoft: This is how to protect your machine-learning applications – TechRepublic
Understanding failures and attacks can help us build safer AI applications.
Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.
The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.
It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.
Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.
Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.
Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.
Attacks on ML systems are failures of the underlying models. They are responding in unexpected, and possibly detrimental ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper talks about two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where there's an unsafe element in the ML model being used that appears correct but delivers bad outcomes.
By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.
The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.
What are these attacks? The first, perturbation attacks, modify queries to change the response to one the attackers desire. That's matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often include important intellectual property, and some attacks like model inversion aim to extract that data. Similarly, a membership inference attack will try to determine whether specific data was in the initial training set. Closely related is the concept of model stealing, using queries to extract the model.
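As a concrete illustration of the first category, the sketch below implements the best-known perturbation technique, the fast gradient sign method (FGSM): nudge each input value in the direction that increases the model's loss, bounded by a small epsilon so the change stays subtle. The stand-in model and data are assumptions; the method applies to any differentiable model.

```python
# FGSM perturbation attack sketch; the tiny classifier is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by +/- epsilon in the loss-increasing direction.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)       # a legitimate input image
y = torch.tensor([3])              # its true label
x_adv = fgsm_attack(model, x, y)   # looks the same to a human, not to the model
print((x_adv - x).abs().max())     # perturbation stays within epsilon
```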
Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.
While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.
It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly they provide an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, and controlling access to inputs and outputs, working with trusted applications and services.
Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.
If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.