Category Archives: Machine Learning
Leveraging AI and machine learning in RAN automation – Ericsson
The left side of Figure 3 illustrates how the task of efficiently operating a RAN to best utilize the deployed resources (base stations or frequencies) can be divided into different control loops acting according to different time scales and with different scopes. A successful RAN automation solution will require the use of AI/ML technologies [6] in all of these control loops to ensure functionality that can work autonomously in different deployments and environments in an optimal way.
The two fastest control loops (purple and orange) are related to traditional RRM. Examples include scheduling and link adaptation in the purple (layer 1 and 2) control loop and bearer management and handover in the orange (layer 3) control loop. Functionality in these control loops has already been autonomous for quite some time, with the decision-making based on internal data for scheduling and handover in a timeframe ranging from milliseconds (ms) to several hundred ms, for example. From an architecture perspective, these control loops are implemented in the RAN network function domain shown in Figure 3.
The slower control loops shown on the left side of Figure 3 represent network design (dark green) and network optimization and assurance (light green). In contrast to the two fast control loops, these slower loops are to a large degree manual at present. Network design covers activities related to the design and deployment of the full RAN, while network optimization and assurance covers observation and optimization of the deployed functionality: the performance of a certain functionality is observed, and the exposed configuration parameters are changed to alter the behavior of the deployed functionality so that it assures the intents in the specific environment where it has been deployed. From an architecture perspective, these control loops are implemented in the RAN automation application domain [7].
The green control loops encompass the bulk of the manual work that will disappear as a result of RAN automation, which explains why AI/ML is already being implemented in those loops [8]. It would, however, be a mistake to restrict the RAN automation solution to just the green control loops. AI/ML also makes it possible to enhance the functionality in the purple and orange control loops to make them more adaptive and robust for deployment in different environments. This, in turn, minimizes the amount of configuration optimization that is needed in the light-green control loop.
While the control loops in Figure 3 are all internal to the RAN domain, some of the functionality in a robust RAN automation solution will depend on resources from other domains. That functionality would be implemented as part of the RAN automation application domain. The RAN automation platform domain will provide the services required for cross-domain interaction.
One example of RAN automation functionality in the RAN automation application domain is the automated deployment and configuration of ERAN. In ERAN deployments, AI/ML is used to cluster basebands that share radio coverage and should therefore be configured to coordinate functionality such as scheduling [8]. To do this, data from several network functions needs to be analyzed to understand which of them share radio coverage. This process requires topology and inventory information that will be made available to the rApps through the services exposed by the network automation platform over R1.
The outcome of the clustering is a configuration of the basebands that should coordinate, as well as a request for resources from the transport domain. This information can also be obtained from services that transport automation applications expose through the R1 framework. When designing the rApp for clustering, it is beneficial to have detailed knowledge of how the coordination functionality is implemented in the RAN network function, in order to understand how the clustering analysis in the rApp should be performed.
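As a rough illustration of the clustering step described above (not Ericsson's implementation), basebands can be grouped by treating measured coverage overlap between pairs as edges in a graph and taking connected components; the baseband IDs, overlap scores and threshold below are invented.

```python
# Hypothetical sketch: group basebands that share radio coverage by building
# a graph from pairwise overlap scores and extracting connected components.

def cluster_basebands(overlaps, threshold=0.2):
    """overlaps: dict mapping (baseband_a, baseband_b) -> overlap score in [0, 1].
    Returns a list of clusters (sets of baseband IDs) that should coordinate."""
    # Build adjacency only for pairs whose overlap exceeds the threshold.
    adjacency = {}
    for (a, b), score in overlaps.items():
        adjacency.setdefault(a, set())
        adjacency.setdefault(b, set())
        if score >= threshold:
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Connected components via iterative depth-first search.
    clusters, seen = [], set()
    for node in adjacency:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        clusters.append(component)
    return clusters

example = {("bb1", "bb2"): 0.6, ("bb2", "bb3"): 0.4, ("bb4", "bb5"): 0.05}
print(cluster_basebands(example))  # bb1-bb3 coordinate; bb4 and bb5 stand alone
```

In a real rApp the overlap scores would come from topology, inventory and measurement data exposed over R1, and the threshold would itself be a tunable or learned quantity.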
An example of RAN automation functionality in the network function domain is AI/ML-based link adaptation, where AI/ML-based functionality optimizes the selection of the modulation and coding scheme for either maximum throughput or minimum delay, removing the block error rate target parameter and thereby the need for configuration-based optimization. Another example is secondary carrier prediction [8], where AI/ML is used to learn coverage relations between different carriers for a certain deployment. Both of these examples use data that is internal to the network function.
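The link-adaptation example above can be caricatured as a sequential decision problem: pick a modulation-and-coding-scheme (MCS) index, observe whether the transport block decoded, and learn which index maximizes throughput rather than steering toward a fixed block-error-rate target. The sketch below uses a simple epsilon-greedy bandit on an invented reward model; it is not Ericsson's implementation, and all rates and probabilities are made up.

```python
import random

# Illustrative only: an epsilon-greedy learner that picks the MCS index with
# the best observed average throughput (rate x decode success).

MCS_RATES = [1.0, 2.0, 3.0, 4.5]  # nominal bits/symbol per MCS index (invented)

def run_link_adaptation(success_probs, steps=2000, epsilon=0.1, seed=0):
    """success_probs[i]: chance a transport block sent at MCS i decodes correctly.
    Returns the MCS index with the best observed average throughput."""
    rng = random.Random(seed)
    totals = [0.0] * len(MCS_RATES)   # cumulative throughput per MCS
    counts = [0] * len(MCS_RATES)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(MCS_RATES))      # explore a random MCS
        else:                                        # exploit the best estimate
            arm = max(range(len(MCS_RATES)),
                      key=lambda i: totals[i] / counts[i] if counts[i] else float("inf"))
        # Reward is the nominal rate when the block decodes, zero otherwise.
        reward = MCS_RATES[arm] if rng.random() < success_probs[arm] else 0.0
        totals[arm] += reward
        counts[arm] += 1
    return max(range(len(MCS_RATES)), key=lambda i: totals[i] / max(counts[i], 1))

# Channel where the highest MCS fails often: expected throughputs are
# 1.0*0.99, 2.0*0.95, 3.0*0.80, 4.5*0.30, so MCS index 2 is optimal.
print("chosen MCS:", run_link_adaptation([0.99, 0.95, 0.80, 0.30]))
```

The point of the sketch is the removal of the configured target: the learner converges on whichever MCS maximizes realized throughput in its environment, which is the behavior the article attributes to the AI/ML-based function.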
As the objective of RAN automation is to replace the manual work of developing, installing, deploying, managing, optimizing and retiring RAN functions, it is certain to have a significant impact on the way that the LCM of RAN software works. Specifically, as AI/ML has proven to be an efficient tool to develop functionality for RAN automation, different options for training and inference of ML models will drive corresponding options for the LCM of software with AI/ML-based functionality.
Figure 4 presents a process view of the LCM of RAN components, ranging from the initial idea for a RAN component to its eventual retirement. A RAN component is defined as either a pure software entity or a hardware/software (physical network function) entity. As the different steps in the LCM structure include the manual work associated with RAN operations, it is a useful model to describe how RAN automation changes the processes, reduces the manual effort and improves the quality and performance of the RAN.
Publisher Discovery relaunches affiliate analysis tools with AI and machine learning – Pro News Report
(ProNewsReport Editorial): London, United Kingdom, Oct 26, 2021 (Issuewire.com) - Publisher Discovery is excited to announce the launch of a brand-new AI-driven platform incorporating advanced machine learning technology. The development of Publisher Discovery's competitor analysis tools followed initial trialling of the in-network application in conjunction with Affiliate Future, a leading UK affiliate network. This gave their merchants access to the first application of AI and machine learning in the management and recruitment of new affiliates.
Results from the in-network technology have shown hugely increased user efficiency in internal affiliate recruitment, saving hours of account management time each week. The effectiveness of the technology led to a Highly Commended award in the Best Use of AI category at this year's Performance Marketing Awards in London.
"This really proved the power of AI in affiliate recruitment," Publisher Discovery CEO Tom Bourne explained. "The ability to use AI to analyse the network data, and from that to match the best affiliates to the right merchant programmes, has proven its worth in recruitment time savings as well as in commercial terms."
John Vickers of Affiliate Future, which trialled the initial installation of Publisher Discovery's Cloudfind app, said: "This has been a great proof of concept for us and has helped our clients to achieve some impressive results really quickly. We've been really looking forward to adding the new tools analysing the external competitors, and initial tests have shown the same impressive results."
The new platform uses AI and machine learning to help advertisers analyse their competitors' affiliate programmes, understand more about the publishers and find their best affiliates to recruit. It provides a much simplified and far more intuitive platform, enabling quick searches to add to your recruitment process.
The technology launch has been complemented by an upgraded UI in the platform and is reflected in the recently launched new branding and website.
Publisher Discovery showed these new technologies at Affiliate Summit East in New York last week and will be giving live demonstrations on their stand at PerformanceIn.live later this year. You can read more on the website at publisherdiscovery.com.
Machine Learning as a Service Market Size is Projected to Reach 22.10 Bn in 2027, Growing with 39.2% CAGR Says Brandessence Market Research -…
LONDON, Oct. 26, 2021 /PRNewswire/ -- Brandessence Market Research has published a new report finding that "the machine learning as a service market reached USD 2.27 billion in 2020. The market is likely to grow at 39.2%, reaching USD 22.10 billion in 2027". The COVID-19 pandemic has pushed many firms out of their comfort zones and strongly towards digitalization, increased data analysis and subsequent data automation. Machine learning as a service promises major prospects as migration to the cloud emerges as a key trend, thanks to increased interest in the arena from key tech players, the falling cost of data storage and the tremendous growth of data insights.
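The growth figures quoted above can be sanity-checked with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) - 1; the small gap between the implied rate and the headline 39.2% comes down to rounding in the reported start and end values.

```python
# Back-of-the-envelope check of the report's figures:
# USD 2.27 bn in 2020 growing to USD 22.10 bn in 2027 (7 years).
start, end, years = 2.27, 22.10, 7

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")           # close to the ~39% the report cites

projected = start * (1 + 0.392) ** years     # forward projection at 39.2% CAGR
print(f"2027 value at 39.2% CAGR: {projected:.2f} bn")
```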
Machine Learning as a Service Market: Expert Analysis
"The fast learning pace of machine-learning models has accelerated a shift towards a subscription-based model, which has enabled cost-effective pricing for end consumers. The model increasingly offers core services like natural language processing, general machine learning and computer vision algorithms. The increasing adoption of IoT, automation and cloud-based services will continue to drive adoption of cost-effective data servicing centers. The growth of big data generators, supercomputers and highly effective systems for gaining new insights is key to growth for the machine learning as a service market," report leading analysts at Brandessence Market Research.
Request a Sample Report: https://brandessenceresearch.com/requestSample/PostId/1669
Machine Learning as a Service Market: Segment Analysis
The machine learning as a service market report is divided by application into network analytics and automated traffic management. Among these, the growth of augmented reality, risk analytics, predictive maintenance, fraud detection and others continues to drive strong growth for the network analytics segment. Machine learning is essential to the growth of both automated traffic management and network analytics. Furthermore, with the advent of 5G technology, the outlook for machine learning remains promising as large datasets continue to drive growth in cloud, gaming, analytics and more.
Automated traffic management for automated electric vehicles is another promising prospect. Smart cities, the growth of smart infrastructure and the growing demand for interconnected automated vehicles all present growth opportunities for players in the machine learning as a service market.
The report categorizes end-users of machine learning as a service into small, medium and large organizations. Subscription-based models remain a promising prospect for small organizations, as increasing cloud deployment and the growing importance of digitalization and analytics continue to drive growth of the market. Machine learning as a service also remains appealing to medium-sized and large organizations alike; the model is likely to see strong demand from all end-users, with large organizations likely to hold the largest share of total revenues thanks to their sizable demand.
Machine Learning as a Service Market: Competitive Analysis
The machine learning as a service market is a fragmented landscape, with many small players leading innovation thanks to technologies like TinyML. The landscape continues to grow through innovation, as open-ended operating systems drive down costs and create new opportunities for players in the market. Some key players in the machine learning as a service market are IBM Corporation, FICO, Microsoft, Amazon Web Services, Google Inc., Hewlett Packard Enterprise, AT&T, BigML Inc., Ersatz Labs, SAS Institute Inc., Yottamine Analytics and Prediction Labs Ltd.
In December 2020, HPE started offering HPE GreenLake, a cloud platform with a pay-per-use model that caters to the most demanding and data-intensive workloads. The platform uses AI and ML to create new products and experiences through a hybrid cloud model.
Eurobank, a major Greek bank, has expanded its use of FICO compliance solutions following a new European Union regulation. The bank operates in six countries, and its new machine-learning solution offers new capabilities such as real-time KYC and AML checks and processes like digital onboarding.
Request the methodology of this report: https://brandessenceresearch.com/requestMethodology/PostId/1669
Machine Learning Trends
Advances in machine learning have moved the traditional model from AI-based analytics towards advanced image processing, making machine learning a key tool for healthcare imaging diagnostics. Furthermore, machine learning remains a frontrunner in new technologies such as OpenAI's models, which can generate new visual designs based on text input. The growing advancement of such models remains a promising prospect for global manufacturing, distribution, packaging and sales alike.
The advancement of machine learning in healthcare remains a promising prospect. Machine learning, with the help of data analytics and visual lab results, promises a new era in the understanding of genetic sequencing. AI algorithms are trained to look at data using multi-modal techniques like optical character recognition and machine vision to improve medical diagnosis.
The growth of new innovations like TinyML, models that run on hardware-constrained devices powering refrigerators, cars and utility meters, also holds new promise for players in the machine learning as a service market. These tiny algorithms can capture gestures and common sounds, like a baby crying or gunshots, for applications such as tracking environmental conditions, asset location and orientation, and even vital medical signs. The increasing storage, battery life and functionality of TinyML devices remain a major promise for a wide variety of commercial applications.
Machine learning continues to drive demand for data labeling, a subset of its core functionality. Increasing demand for human-led data labeling has driven low-cost data labeling services, mostly arising in the Asia-Pacific region. Semi-supervised and novice data labeling services present a challenge in applications like voice-assistant-led commercial functions. The growth of these applications has also driven demand for automated data labeling, improvements in which continue to make way for new processes like PlatformOps, MLOps and DataOps, now collectively referred to as 'XOps'.
Machine learning and AI also promise to team up with employees to take the burden of mundane responsibilities off their shoulders. For example, sales employees often need to look up information such as a customer's name, address and other important details. Today, online assistants built into customer management systems deliver this information to employees so they can begin a smooth conversation about consumer concerns. Machine learning systems are also advancing troubleshooting and sales through text-based and voice-based online assistants. This not only saves companies significant amounts of money but also reduces downtime and prepares them for digital adoption. Young, tech-savvy consumers increasingly prefer to contact customer service through online portals, saving companies money on customer service while reporting higher satisfaction.
Key Benefits of the Global Machine Learning as a Service Industry Report
Get Complete Access of Research Report: https://brandessenceresearch.com/technology-and-media/machine-learning-as-a-service-market
Related Reports:
Robotic Process Automation Industry Projected to Reach $18,339.95 Mn in 2027
Platform as a Service Market Forecast 2021-2027
Mobility as a Service Market Forecast 2021-2027
UCaaS Market Size Will Reach USD 32.28 Bn by 2027
Brandessence Market Research & Consulting Pvt ltd.
Brandessence Market Research publishes market research reports & business insights produced by highly qualified and experienced industry analysts. Our research reports are available across a wide range of industry verticals, including aviation, food & beverage, healthcare, ICT, construction, chemicals and more. Brandessence Market Research reports are best suited for senior executives, business development managers, marketing managers, consultants, CEOs, CIOs, COOs, directors, governments, agencies, organizations and Ph.D. students. We have a delivery center in Pune, India, and our sales office is in London.
Website: https://brandessenceresearch.com
Blog: Digital Map Companies
Contacts: Mr. Vishal Sawant Email: [emailprotected] Email: [emailprotected] Corporate Sales: +44-2038074155 Asia Office: +917447409162
SOURCE Brandessence Market Research And Consulting Private Limited
Machine Learning Reveals Aggression Symptoms in Childhood ADHD – Technology Networks
Child psychiatric disorders, such as oppositional defiant disorder and attention-deficit/hyperactivity disorder (ADHD), can feature outbursts of anger and physical aggression. A better understanding of what drives these symptoms could help inform treatment strategies. Yale researchers have now used a machine learning-based approach to uncover disruptions of brain connectivity in children displaying aggression.
While previous research has focused on specific brain regions, the new study identifies patterns of neural connections across the entire brain that are linked to aggressive behavior in children. The findings, published in the journal Molecular Psychiatry, build on a novel model of brain functioning, called the connectome, that describes this pattern of brain-wide connections. "Maladaptive aggression can result in harm to self or others. This challenging behavior is one of the main reasons for referrals to child mental health services," said Denis Sukhodolsky, senior author and associate professor in the Yale Child Study Center. "Connectome-based modeling offers a new account of brain networks involved in aggressive behavior."
For the study, which is the first of its kind, researchers collected fMRI (functional magnetic resonance imaging) data while children performed an emotional face perception task in which they observed faces making calm or fearful expressions. Seeing faces that express emotion can engage brain states relevant to emotion generation and regulation, both of which have been linked to aggressive behavior, researchers said. The scientists then applied machine learning analyses to identify neural connections that distinguished children with and without histories of aggressive behavior.
They found that patterns in brain networks involved in social and emotional processes, such as feeling frustrated with homework or understanding why a friend is upset, predicted aggressive behavior. To confirm these findings, the researchers then tested them in a separate dataset and found that the same brain networks predicted aggression. In particular, abnormal connectivity to the dorsolateral prefrontal cortex, a key region involved in the regulation of emotions and of higher cognitive functions like attention and decision-making, emerged as a consistent predictor of aggression when tested in subgroups of children with aggressive behavior and disorders such as anxiety, ADHD and autism.
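Connectome-based predictive modeling of the kind described above can be sketched in a few steps: select edges whose strength correlates with the behavioral measure in training subjects, summarize each subject by the summed strength of the selected edges, fit a linear model, and evaluate the prediction in held-out subjects. The following is an illustrative reconstruction on synthetic data, not the authors' code; the subject counts, thresholds and signal structure are all invented.

```python
import numpy as np

# Synthetic stand-in for fMRI connectivity data: each subject has a vector of
# "edge strengths", and a behavioral score driven by a few signal edges.
rng = np.random.default_rng(42)
n_subjects, n_edges, n_signal = 200, 300, 5

edges = rng.standard_normal((n_subjects, n_edges))
score = edges[:, :n_signal].sum(axis=1) + rng.standard_normal(n_subjects)

train, test = slice(0, 100), slice(100, 200)

# 1) Edge selection: keep edges correlated with the score in training data.
r = np.array([np.corrcoef(edges[train, j], score[train])[0, 1]
              for j in range(n_edges)])
selected = np.abs(r) > 0.25

# 2) Fit a one-parameter linear model on the summed strength of selected edges.
strength_train = edges[train][:, selected].sum(axis=1)
slope, intercept = np.polyfit(strength_train, score[train], 1)

# 3) Predict held-out subjects and check generalization, as the study did
#    with its separate replication dataset.
strength_test = edges[test][:, selected].sum(axis=1)
predicted = slope * strength_test + intercept
r_test = np.corrcoef(predicted, score[test])[0, 1]
print(f"held-out prediction correlation: {r_test:.2f}")
```

The held-out evaluation is the crucial step: a network pattern only counts as a candidate marker if it predicts the behavior in subjects the model never saw, which mirrors the replication test reported in the article.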
These neural connections to the dorsolateral prefrontal cortex could represent a marker of aggression that is common across several childhood psychiatric disorders. "This study suggests that the robustness of these large-scale brain networks and their connectivity with the prefrontal cortex may represent a neural marker of aggression that can be leveraged in clinical studies," said Karim Ibrahim, associate research scientist at the Yale Child Study Center and first author of the paper. "The human functional connectome describes the vast interconnectedness of the brain. Understanding the connectome is on the frontier of neuroscience because it can provide us with valuable information for developing brain biomarkers of psychiatric disorders."
Added Sukhodolsky: "This connectome model of aggression could also help us develop clinical interventions that can improve the coordination among these brain networks and hubs like the prefrontal cortex. Such interventions could include teaching the emotion regulation skills necessary for modulating negative emotions such as frustration and anger."
Reference: Ibrahim K, Noble S, He G, et al. Large-scale functional brain networks of maladaptive childhood aggression identified by connectome-based predictive modeling. Mol Psychiatry. Published online October 25, 2021:1-15. doi:10.1038/s41380-021-01317-5
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
AT&T and H2O.ai Launch Co-Developed Artificial Intelligence Feature Store with Industry-First Capabilities – Yahoo Finance
H2O AI Feature Store, currently in production use at AT&T, delivers a repository for collaborating on, sharing, reusing and discovering machine learning features to speed AI project deployments and improve ROI, and is now available to any company or organization
DALLAS and MOUNTAIN VIEW, Calif., Oct. 28, 2021 /PRNewswire/ --
What's the news? AT&T and H2O.ai have jointly built an artificial intelligence (AI) feature store to manage and reuse data and machine learning engineering capabilities. The AI Feature Store houses and distributes the features that data scientists, developers and engineers need to build AI models. It is in production at AT&T, meeting the high levels of performance, reliability and scalability that AT&T's scale demands. Today, AT&T and H2O.ai are announcing that the same solution in production at AT&T, including all its industry-first capabilities, will be available as the "H2O AI Feature Store" to any company or organization.
What is a feature store? Data scientists and AI experts use data engineering tools to create "features," which are combinations of relevant data and derived data that predict an outcome (e.g., churn, likelihood to buy, demand forecasting). Building features is time-consuming work, and typically data scientists build features from scratch every time they start a new project. Data scientists and AI experts spend up to 80% of their time on feature engineering, and because teams have not had a way to share this work, the same work is repeated by teams throughout the organization. It is also important that features are available for both training and real-time inference, to avoid the training-serving skew that causes model performance problems and contributes to project failure. Feature stores allow data scientists to build more accurate features and deploy them in production in hours instead of months. Until now, there was no place to store and access features from previous projects. As data and AI are, and will continue to be, important to every business, demand is growing to make these features reusable. Feature stores are seen as a critical component of the infrastructure stack for machine learning because they solve the hardest problem with operationalizing machine learning: building and serving machine learning data to production.
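The workflow described above (compute a feature once, register it with metadata, then serve identical values to both training and real-time inference) can be illustrated with a toy in-memory registry. This is an invented sketch of the general feature-store idea, not the H2O AI Feature Store API; every name below is hypothetical.

```python
import time

# Toy feature store: a shared registry where engineered features are stored
# once and retrieved identically for training and for online inference,
# which is how a feature store avoids training-serving skew.

class FeatureStore:
    def __init__(self):
        self._features = {}   # feature name -> {"values": ..., "meta": ...}

    def register(self, name, values, description, owner):
        """Publish a computed feature so other teams can discover and reuse it."""
        self._features[name] = {
            "values": dict(values),          # entity_id -> feature value
            "meta": {"description": description, "owner": owner,
                     "registered_at": time.time()},
        }

    def get_training_frame(self, names, entity_ids):
        """Fetch a feature matrix (rows = entities) for model training."""
        return [[self._features[n]["values"][e] for n in names]
                for e in entity_ids]

    def get_online(self, names, entity_id):
        """Fetch the same feature values for one entity at inference time."""
        return [self._features[n]["values"][entity_id] for n in names]

store = FeatureStore()
store.register("days_since_last_order", {"cust1": 3, "cust2": 41},
               "Days since the customer's last order", owner="growth-team")
store.register("avg_basket_value", {"cust1": 52.0, "cust2": 17.5},
               "Average basket value over 90 days", owner="growth-team")

print(store.get_training_frame(["days_since_last_order", "avg_basket_value"],
                               ["cust1", "cust2"]))   # [[3, 52.0], [41, 17.5]]
print(store.get_online(["days_since_last_order", "avg_basket_value"], "cust1"))
```

A production system adds what this sketch omits: versioning, access control, low-latency online serving, and the pipeline integrations the announcement describes.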
How is AT&T using its feature store? AT&T carries more than 465 petabytes of data traffic across its global network on an average day. When you add in the data generated internally from our different applications, in our stores, among our field technicians, and across other parts of our business, turning data into actionable intelligence as quickly as possible is vital to our success. AT&T's implementation of the AI Feature Store has been instrumental in helping turn this massive trove of data into actionable intelligence.
Who will use the H2O AI Feature Store? We know other organizations feel the same way about making their own data actionable. H2O.ai, the leading AI cloud platform provider, has co-developed the feature store with us, and now together we are offering the production-tested feature store as a software platform for other companies and organizations to use with their own data. From financial services to health organizations and pharmaceutical makers, retail, software developers and more, we know the demand for reliable, easy-to-use, and secure feature stores is booming. Any organization currently using AI or planning to use AI will want to consider the value of a feature store. We expect customers to use the H2O AI Feature Store for forecasting, personalization and recommendation engines, dynamic pricing optimization, supply chain optimization, logistics and transportation optimization, and more. We are using the feature store at AT&T for network optimization, fraud prevention, tax calculations and predictive maintenance.
The H2O AI Feature Store offers industry-first capabilities, including integration with multiple data and machine learning pipelines, which can be applied to an on-premise data lake or by leveraging cloud and SaaS providers.
The H2O AI Feature Store also includes Automatic Feature Recommendations, an industry first, which lets data scientists select the features they want to update and improve, and receive recommendations for doing so. The H2O AI Feature Store recommends new features and feature updates to improve AI model performance; the data scientists review the suggested updates and accept the recommendations they want to include.
What are people saying?
"Feature stores are one of the hottest areas of AI development right now, because being able to reuse and repurpose data engineering tools is critical as those tools become increasingly complex and expensive to build," said Andy Markus, Chief Data Officer, AT&T. "These storehouses are vital not only to our own work, but to other businesses, as well. With our expertise in managing and analyzing huge data flows, combined with H2O.ai's deep AI expertise, we understand what business customers are looking for in this space and our Feature Store offering meets this need."
"Data is a team sport and collaboration with domain experts is key to discovering and sharing features. Feature Stores are the digital 'water coolers' for data science," said Sri Ambati, CEO and founder of H2O.ai. "We are building AI right into the Feature Store and have taken an open, modular and scalable approach to tightly integrate into the diverse feature engineering pipelines while preserving sub-millisecond latencies needed to react to fast-changing business conditions. AI-powered feature stores focus on discoverability and reuse by automatically recommending highly predictive features to our customers using FeatureRank. AT&T has built a world-class data and AI team and we are privileged to collaborate with them on their AI journey."
To learn more about H2O AI Feature Store please visit http://www.h2o.ai/feature-store and sign up to join our preview program or for a demo.
Please join AT&T and H2O.ai on October 28 at 2:00 p.m. CT at the AT&T Business Summit for a discussion on the future of AI as a Service. Register at https://register-bizsummit.att.com
About AT&T Communications
We help family, friends and neighbors connect in meaningful ways every day. From the first phone call 140+ years ago to mobile video streaming, we @ATT innovate to improve lives. AT&T Communications is part of AT&T Inc. (NYSE:T). For more information, please visit us at att.com.
About H2O.ai
H2O.ai is the leading AI cloud company, on a mission to democratize AI for everyone. Customers use the H2O AI Hybrid Cloud platform to rapidly solve complex business problems and accelerate the discovery of new ideas. H2O.ai is the trusted AI provider to more than 20,000 global organizations, including AT&T, Allergan, Bon Secours Mercy Health, Capital One, Commonwealth Bank of Australia, GlaxoSmithKline, Hitachi, Kaiser Permanente, Procter & Gamble, PayPal, PwC, Reckitt, Unilever and Walgreens, over half of the Fortune 500 and one million data scientists. Goldman Sachs, NVIDIA and Wells Fargo are not only customers and partners, but strategic investors in the company. H2O.ai's customers have honored the company with a Net Promoter Score (NPS) of 78, the highest in the industry, based on breadth of technology and deep employee expertise. The world's top 20 Kaggle Grandmasters (the community of best-in-the-world machine learning practitioners and data scientists) are employees of H2O.ai. A strong AI for Good ethos to make the world a better place and Responsible AI drive the company's purpose. Please join our movement at http://www.h2O.ai.
View original content to download multimedia:https://www.prnewswire.com/news-releases/att-and-h2oai-launch-co-developed-artificial-intelligence-feature-store-with-industry-first-capabilities-301410998.html
SOURCE AT&T Communications
Before Machines Can Be Autonomous, Humans Must Work to Ensure Their Safety – University of Virginia
As a self-driving car travels down a dark, rural road, a deer lingering among the trees up ahead looks poised to dart into the car's path. Will the vehicle know exactly what to do to keep everyone safe?
Some computer scientists and engineers aren't so sure. But researchers at the University of Virginia School of Engineering and Applied Science are hard at work developing methods they hope will bring greater confidence to the machine-learning world, not only for self-driving cars, but also for planes that can land on their own and drones that can deliver your groceries.
The heart of the problem stems from the fact that key functions of the software that guides self-driving cars and other machines through their autonomous motions are not written by humans; instead, those functions are the product of machine learning. Machine-learned functions are expressed in a form that makes it essentially impossible for humans to understand the rules and logic they encode, which makes it very difficult to evaluate whether the software is safe and in the best interest of humanity.
Researchers in UVA's Leading Engineering for Safe Software Lab (the LESS Lab, as it's commonly known) are working to develop the methods necessary to give society the confidence to trust emerging autonomous systems.
The team's researchers are Matthew B. Dwyer, Robert Thomson Distinguished Professor; Sebastian Elbaum, Anita Jones Faculty Fellow and Professor; Lu Feng, assistant professor; Yonghwi Kwon, John Knight Career Enhancement Assistant Professor; Mary Lou Soffa, Owens R. Cheatham Professor of Sciences; and Kevin Sullivan, associate professor. All hold appointments in the UVA Engineering Department of Computer Science. Feng holds a joint appointment in the Department of Engineering Systems and Environment.
Since its creation in 2018, the LESS Lab has rapidly grown, supporting more than 20 graduate students, producing more than 50 publications and obtaining competitive external awards totaling more than $10 million in research funding. The awards have come from agencies such as the National Science Foundation, the Defense Advanced Research Projects Agency, the Air Force Office of Scientific Research and the Army Research Office.
The lab's growth trajectory matches the scope and urgency of the problem these researchers are trying to solve.
An inflection point in the rapid rise of machine learning happened just a decade ago when computer vision researchers won the ImageNet Large Scale Visual Recognition Challenge to identify objects in photos using a machine-learning solution. Google took notice and quickly moved to capitalize on the use of data-driven algorithms.
Other tech companies followed suit, and public demand for machine-learning applications snowballed. Last year, Forbes estimated that the global machine-learning market grew at a compound annual growth rate of 44% and is on track to become a $21 billion market by 2024.
But as the technology ramped up, computer scientists started sounding the alarm that mathematical methods to validate and verify the software were lagging.
Government agencies like the Defense Advanced Research Projects Agency responded to the concerns. In 2017, DARPA launched the Assured Autonomy program to develop mathematically verifiable approaches for assuring an acceptable level of safety with data-driven, machine-learning algorithms.
UVA Engineering's Department of Computer Science also took action, extending its expertise with a strategic cluster of hires in software engineering, programming languages and cyber-physical systems. Experts combined efforts in cross-cutting research collaborations, including the LESS Lab, to focus on solving a global software problem in need of an urgent solution.
By this time, the self-driving car industry was firmly in the spotlight and the problems with machine learning were becoming more evident. A fatal Uber crash in 2018 was recorded as the first human death from an autonomous vehicle.
"The progress toward practical autonomous driving was really fast and dramatic, shaped like a hockey stick curve, and much faster than the rate of growth for the techniques that can ensure their safety," Elbaum said.
This August, tech blog Engadget reported that one self-driving car company, Waymo, had put in 20 million miles of on-the-road testing. Yet, devastating failures continue to occur; as the software has gotten more and more complex, no amount of real-world testing would be enough to find all the bugs.
And the software failures that have occurred have made many wary of on-the-road testing.
"Even if a simulation works 100 times, would you jump into an autonomous car and let it drive you around? Probably not," Dwyer said. "Probably you want some significant on-the-street testing to further increase your confidence. And at the same time, you don't want to be the one in the car when that test goes on."
And then there is the need to anticipate and test every single obstacle that might come up in the 4-million-mile public roads network.
"Think about the complexities of a particular scenario, like driving on the highway with hauler trucks on either side so that an autonomous vehicle needs to navigate crosswinds at high speeds and around curves," Dwyer said. "Getting a physical setup that matches that challenging scenario would be difficult to do in real-world testing. But in simulation, that gets much easier."
And that's one area where LESS Lab researchers are making real headway. They are developing sophisticated virtual reality simulations that can accurately create difficult scenarios that might otherwise be impossible to test on the road.
"There is a huge gap between testing in the real world versus testing in simulation," Elbaum said. "We want to close that gap and create methods in which we can expose autonomous systems to the many complex scenarios they will face. This will substantially reduce testing time and cost, and it is a safer way to test."
Having simulations so accurate they can stand in for real-world driving would be the equivalent of rocket fuel for the huge amount of sophisticated testing it will take to fix the timeline and human cost problem. But it is just half of the solution.
The other half is developing mathematical guarantees that can prove the software will do what it is supposed to do all the time. And that will require a whole new set of mathematical frameworks.
Before machine learning, engineers would write explicit, step-by-step instructions for the computer to follow. The logic was deterministic and absolute, so humans could use formal mathematical rules that existed to test the code and guarantee it worked.
"So if you really wanted a property that the autonomous system only makes sharp turns when there is an obstacle in front of it, before machine learning, mechanical engineers would say, 'I want this to be true,' and software engineers would build that rule into the software," Dwyer said.
Today, the computer continuously improves a probability that an algorithm will produce an expected outcome by being fed images of examples to learn from. The process is no longer based on absolutes, following rules a human can understand.
"With machine learning, you can give an autonomous system examples of sharp turns only with obstacles in front of it," Dwyer said. "But that doesn't mean that the computer will learn and write a program that guarantees that is always true. Previously, humans checked the rules they codified into systems. Now we need a new way to check that a program the machine wrote observes the rules."
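To make the contrast concrete, here is a small, hypothetical Python sketch; the property, controllers, and numeric values are invented for illustration. With a hand-written controller, the safety rule is visible in the code itself; a learned controller's internal logic cannot be read that way, so one option is to check the property from the outside with a runtime monitor.

```python
# Property (invented for illustration): the system makes sharp turns
# only when there is an obstacle ahead.

def is_sharp(turn_angle_deg):
    return abs(turn_angle_deg) > 30

def rule_based_controller(obstacle_ahead):
    """Pre-ML style: the rule 'sharp turns only with an obstacle' is
    written explicitly into the code, so a human can inspect it."""
    return 45.0 if obstacle_ahead else 5.0

def monitor(controller, obstacle_ahead):
    """A runtime check one could wrap around any controller, including a
    learned one whose internals are opaque to humans."""
    angle = controller(obstacle_ahead)
    assert not (is_sharp(angle) and not obstacle_ahead), "property violated"
    return angle

print(monitor(rule_based_controller, obstacle_ahead=False))  # 5.0
print(monitor(rule_based_controller, obstacle_ahead=True))   # 45.0
```

A monitor like this only catches violations as they happen; the mathematical guarantees discussed below aim to rule them out in advance.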
Elbaum stresses that there are still a lot of open-ended questions that need to be answered in the quest for concrete, tangible methods. "We are playing catch-up to ensure these systems do what they are supposed to do," he said.
That is why the combined strength of the LESS Lab's faculty and students is critically important for speeding up the discovery. Just as important is the commitment of the LESS Lab to working collectively and in concert with other UVA experts on machine learning and cyber-physical systems to enable a future where people can trust autonomous systems.
The lab's mission could not be more relevant if we are ever to realize the promise of self-driving cars, much less eliminate the fears of a deeply skeptical public.
"If your autonomous car is driving on a sunny day down the road with no other cars, it should work," Dwyer said. "If it's rainy and it's at night and it's crowded and there is a deer on the side of the road, it should also work, and in every other scenario.
"We want to make it work all the time, for everybody, and in every context."
Read more here:
Before Machines Can Be Autonomous, Humans Must Work to Ensure Their Safety - University of Virginia
Machine Learning Tutorial | Machine Learning with Python …
Machine Learning tutorial provides basic and advanced concepts of machine learning. Our machine learning tutorial is designed for students and working professionals.
Machine learning is a growing technology which enables computers to learn automatically from past data. Machine learning uses various algorithms for building mathematical models and making predictions using historical data or information. Currently, it is being used for various tasks such as image recognition, speech recognition, email filtering, Facebook auto-tagging, recommender system, and many more.
This machine learning tutorial gives you an introduction to machine learning along with the wide range of machine learning techniques such as Supervised, Unsupervised, and Reinforcement learning. You will learn about regression and classification models, clustering methods, hidden Markov models, and various sequential models.
In the real world, we are surrounded by humans who can learn from their experiences, and by computers and machines that simply follow our instructions. But can a machine also learn from experience or past data, as a human does? This is where machine learning comes in.
Machine learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms that allow a computer to learn on its own from data and past experience. The term machine learning was first coined by Arthur Samuel in 1959. We can define it in a summarized way as:
With the help of sample historical data, known as training data, machine learning algorithms build a mathematical model that helps in making predictions or decisions without being explicitly programmed. Machine learning brings computer science and statistics together to create predictive models. It constructs or uses algorithms that learn from historical data: the more information we provide, the better the performance.
A machine has the ability to learn if it can improve its performance by gaining more data.
A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it. The accuracy of the predicted output depends on the amount of data, as a larger amount of data helps build a better model that predicts the output more accurately.
Suppose we have a complex problem that requires making predictions. Instead of writing code for it directly, we can feed data to generic algorithms, which build the logic from the data and predict the output. Machine learning has changed our way of thinking about such problems. The block diagram below explains the working of a machine learning algorithm:
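This train-then-predict workflow can be sketched in a few lines of Python. The example below is a toy illustration with invented data: "training" fits a straight line to historical (input, output) pairs by ordinary least squares, and "prediction" applies the fitted model to a new input.

```python
def train(data):
    """Build a simple mathematical model y = a*x + b from historical
    (x, y) pairs using ordinary least squares."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predict(model, x):
    """Apply the trained model to new, unseen input."""
    a, b = model
    return a * x + b

# Invented historical data following y = 2x + 1.
history = [(1, 3), (2, 5), (3, 7), (4, 9)]
model = train(history)
print(predict(model, 5))  # predicts 11.0 for the unseen input x = 5
```

No prediction rule was written by hand here; the model's behavior came entirely from the data it was trained on, which is the point of the paragraph above.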
The need for machine learning is increasing day by day, because it is capable of doing tasks that are too complex for a person to implement directly. As humans, we have limitations: we cannot manually process huge amounts of data. For that we need computer systems, and machine learning makes things easy for us.
We can train machine learning algorithms by providing them with large amounts of data and letting them explore the data, construct models, and predict the required output automatically. The performance of a machine learning algorithm depends on the amount of data, and it can be measured by a cost function. With the help of machine learning, we can save both time and money.
The importance of machine learning can be easily understood from its use cases. Currently, machine learning is used in self-driving cars, cyber fraud detection, face recognition, friend suggestions on Facebook, and so on. Various top companies such as Netflix and Amazon have built machine learning models that use vast amounts of data to analyze user interests and recommend products accordingly.
Following are some key points which show the importance of Machine Learning:
At a broad level, machine learning can be classified into three types: supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning is a type of machine learning method in which we provide sample labeled data to the machine learning system in order to train it, and on that basis, it predicts the output.
The system creates a model using labeled data to understand the datasets and learn about each one. Once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.
The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, much as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.
Supervised learning can be further grouped into two categories of algorithms: regression and classification.
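The spam-filtering example above can be illustrated with a toy Python sketch. The feature choices and data are invented, and a simple nearest-neighbour rule stands in for a real classifier; the key point is that the labels ("spam" / "not spam") supervise the training.

```python
import math

# Each email is a labeled feature vector: (link count, ALL-CAPS word count).
# The labels are what make this *supervised* learning.
training_data = [
    ((8, 5), "spam"),
    ((7, 9), "spam"),
    ((1, 0), "not spam"),
    ((0, 1), "not spam"),
]

def classify(features):
    """1-nearest-neighbour: predict the label of the closest training example."""
    nearest = min(training_data, key=lambda item: math.dist(item[0], features))
    return nearest[1]

print(classify((6, 7)))  # near the spam examples -> "spam"
print(classify((0, 0)))  # near the legitimate examples -> "not spam"
```

Testing the model on sample inputs it has not seen before, as described above, is exactly what the two `classify` calls do.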
Unsupervised learning is a learning method in which a machine learns without any supervision.
The machine is trained with a set of data that has not been labeled, classified, or categorized, and the algorithm must act on that data without any supervision. The goal of unsupervised learning is to restructure the input data into new features or groups of objects with similar patterns.
In unsupervised learning, we don't have a predetermined result; the machine tries to find useful insights from large amounts of data. It can be further classified into two categories of algorithms: clustering and association.
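As a minimal illustration of clustering, here is a tiny one-dimensional k-means in Python. No labels are provided (the data is invented); the algorithm groups the inputs by similarity entirely on its own.

```python
def kmeans_1d(points, iters=10):
    """Group 1-D points into two clusters without any labels."""
    c1, c2 = min(points), max(points)  # crude initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid ...
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # ... then move each centroid to the mean of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([g1, g2], key=min)

low, high = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.7])
print(low)   # the cluster of small values
print(high)  # the cluster of large values
```

Notice that, unlike the supervised example earlier, nothing in the input says which group any point belongs to; the structure is discovered from the data itself.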
Reinforcement learning is a feedback-based learning method, in which a learning agent gets a reward for each right action and gets a penalty for each wrong action. The agent learns automatically with these feedbacks and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of an agent is to get the most reward points, and hence, it improves its performance.
A robotic dog that automatically learns the movement of its limbs is an example of reinforcement learning.
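The reward-and-penalty loop described above can be sketched with a minimal Q-learning agent in Python. The environment is invented for illustration: a five-cell corridor where the agent earns a reward for reaching the right end and a small penalty for every other step.

```python
import random

random.seed(0)
n_states, actions = 5, (-1, +1)  # actions: move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):  # training episodes
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        # Reward for reaching the goal, penalty for every other step.
        reward = 1.0 if s2 == n_states - 1 else -0.1
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: move right (+1) from every non-goal cell.
policy = [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
print(policy)
```

The agent is never told which action is correct; it discovers the right-moving policy purely from the rewards and penalties it accumulates, which is the defining trait of reinforcement learning.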
Some 40-50 years ago, machine learning was science fiction, but today it is part of our daily life, from self-driving cars to Amazon's virtual assistant Alexa. However, the idea behind machine learning is old and has a long history. Below are some milestones in the history of machine learning:
Machine learning research has now advanced greatly, and machine learning is present everywhere around us, in self-driving cars, Amazon Alexa, chatbots, recommender systems, and much more. It includes supervised, unsupervised, and reinforcement learning, with algorithms such as clustering, classification, decision trees, and SVMs.
Modern machine learning models can be used for making various predictions, including weather prediction, disease prediction, stock market analysis, etc.
Before learning machine learning, you should have basic knowledge of the following so that you can easily understand its concepts:
Our machine learning tutorial is designed to help beginners and professionals.
We assure you that you will not find any difficulty while learning from our machine learning tutorial. But if there is any mistake in this tutorial, kindly report the problem or error via the contact form so that we can improve it.
Machine learning in the cloud is helping businesses innovate – MIT Technology Review
In the past decade, machine learning has become a familiar technology for improving the efficiency and accuracy of processes like recommendations, supply chain forecasting, developing chatbots, image and text search, and automated customer service functions, to name a few. Machine learning today is becoming even more pervasive, impacting every market segment and industry, including manufacturing, SaaS platforms, health care, reservations and customer support routing, natural language processing (NLP) tasks such as intelligent document processing, and even food services.
Take the case of Domino's Pizza, which has been using machine learning tools to improve efficiencies in pizza production. "Domino's had a project called Project 3/10, which aimed to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order," says Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. "If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use predictive machine learning models to achieve that."
The recent rise of machine learning across diverse industries has been driven by improvements in other technological areas, says Saha, not the least of which is the increasing compute power in cloud data centers.
"Over the last few years," explains Saha, "the amount of total compute that can be thrown at machine learning problems has been doubling almost every four months. That's 5 to 6 times more than Moore's Law." As a result, a lot of functions that once could only be done by humans, things like detecting an object or understanding speech, are being performed by computers and machine learning models.
The current machine learning use cases that help companies optimize the value of their data to perform tasks and improve products are just the beginning, Saha says.
"Machine learning is just going to get more pervasive," he says. "Companies will see that they're able to fundamentally transform the way they do business. They'll see they are fundamentally transforming the customer experience, and they will embrace machine learning."
AWS Machine Learning Infrastructure
Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
Our topic today is machine learning in the cloud. Across all industries, the exponential increase of data collection demands faster and novel ways to analyze data, but also learn from it to make better business decisions. This is how machine learning in the cloud helps fuel innovation for enterprises, from startups to legacy players.
Two words for you: data innovation. My guest is Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. He has held executive roles at NVIDIA and Intel. This episode of Business Lab is produced in association with AWS. Welcome, Bratin.
Dr. Bratin Saha: Thank you for having me, Laurel. It's great to be here.
Laurel: Off the top, could you give some examples of how AWS customers are using machine learning to solve their business problems?
Bratin: Let's start with the definition of what we mean by machine learning. Machine learning is a process where a computer and an algorithm can use data, usually historical data, to understand patterns, and then use that information to make predictions about the future. Businesses have been using machine learning to do a variety of things, like personalizing recommendations, improving supply chain forecasting, making chatbots, using it in health care, and so on.
For example, Autodesk was able to use the machine learning infrastructure we have for their chatbots to improve their ability to handle requests by almost five times. They were able to use the improved chatbots to address more than 100,000 customer questions per month.
Then there's NerdWallet, a personal finance startup that did not personalize the recommendations it gave to customers based on their preferences. They're now using AWS machine learning services to tailor recommendations to what a person actually wants to see, which has significantly improved their business.
Then we have customers like Thomson Reuters. Thomson Reuters is one of the world's most trusted providers of answers, with teams of experts. They use machine learning to mine data to connect and organize information to make it easier for them to provide answers to questions.
In the financial sector, we have seen a lot of uptake in machine learning applications. One company, for example, a payment service provider, was able to build a fraud detection model in just 30 minutes.
The reason I'm giving you so many examples is to show how machine learning is becoming pervasive. It's going across geos, going across market segments, and being used by companies of all kinds. I have a few other examples I want to share to show how machine learning is also touching industries like manufacturing, food delivery, and so on.
Domino's Pizza, for example, had a project called Project 3/10, where they wanted to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order. If you want to hit those goals, you have to be able to predict when a pizza order will come in. They trained machine learning models on the history of orders, used those models to predict when an order would come in, deployed this to many stores, and were able to hit the targets.
Machine learning has become pervasive in how our customers are doing business. It's starting to be adopted in virtually every industry. We have more than several hundred thousand customers using our machine learning services. One of our machine learning services, Amazon SageMaker, has been one of the fastest growing services in AWS history.
Laurel: Just to recap, customers can use machine learning services to solve a number of problems. Some of the high-level problems would be a recommendation engine, image search, text search, and customer service, but then, also, to improve the quality of the product itself.
I like the Domino's Pizza example. Everyone understands how a pizza business may work. But if the goal is to turn pizzas around as quickly as possible, to increase customer satisfaction, Domino's had to be in a place to collect data and analyze that historic data: when orders came in, how quickly they turned around those orders, how often people ordered, what they ordered, et cetera. That was what the prediction model was based on, correct?
Bratin: Yes. You asked a question about how we think about machine learning services. If you look at the AWS machine learning stack, we think about it as a three-layered service. The bottom layer is the machine learning infrastructure.
What I mean by this is when you have a model, you are training the model to predict something. Then the predictions are where you do this thing called inference. At the bottom layer, we provide the most optimized infrastructure, so customers can build their own machine learning systems.
Then there's a layer on top of that, where customers come and tell us, "You know what? I just want to be focused on the machine learning. I don't want to build a machine learning infrastructure." This is where Amazon SageMaker comes in.
Then there's a layer on top of that, which is what we call AI services, where we have pre-trained models that can be used for many use cases.
So, we look at machine learning as three layers. Different customers use services at different layers, based on what they want, based on the kind of data science expertise they have, and based on the kind of investments they want to make.
The other part of our view goes back to what you mentioned at the beginning, which is data and innovation. Machine learning is fundamentally about gaining insights from data, and using those insights to make predictions about the future. Then you use those predictions to derive business value.
In the case of Domino's Pizza, there is data around historical order patterns that can be used to predict future order patterns. The business value there is improving customer service by getting orders ready in time. Another example is Freddy's Frozen Custard, which used machine learning to customize menus. As a result of that, they were able to get a double-digit increase in sales. So, it's really about having data, and then using machine learning to gain insights from that data. Once you've gained insights from that data, you use those insights to drive better business outcomes. This goes back to what you mentioned at the beginning: you start with data and then you use machine learning to innovate on top of it.
Laurel: What are some of the challenges organizations have as they start their machine learning journeys?
Bratin: The first thing is to collect data and make sure it is structured well: clean data that doesn't have a lot of anomalies. Then, because machine learning models typically get better if you can train them with more and more data, you need to continue collecting vast amounts of data. We often see customers create data lakes in the cloud, on Amazon S3, for example. So, the first step is getting your data in order and then potentially creating data lakes in the cloud that you can use to feed your data-based innovation.
The next step is to get the right infrastructure in place. That is where some customers say, "Look, I want to just build the whole infrastructure myself," but the vast majority of customers say, "Look, I just want to be able to use a managed service because I don't want to have to invest in building the infrastructure and maintaining the infrastructure, and so on."
The next is to choose a business case. If you haven't done machine learning before, then you want to get started with a business case that leads to a good business outcome. Often what can happen with machine learning is that you see it's cool and do some really cool demos, but those don't translate into business outcomes, so you start experiments and you don't really get the support that you need.
Finally, you need commitment because machine learning is a very iterative process. You're training a model. The first model you train may not get you the results you desire. There's a process of experimentation and iteration that you have to go through, and it can take you a few months to get results. So, putting together a team and giving them the support they need is the final part.
If I had to put this in terms of a sequence of steps, it's important to have data and a data culture. It's important in most cases for customers to choose a managed service to build and train their models in the cloud, simply because you get storage and compute a lot more easily. The third is to choose a use case that is going to have business value, so that your company knows this is something that you want to deploy at scale. And then, finally, be patient and be willing to experiment and iterate, because it often takes a little bit of time to get the data you need to train the models well and actually get the business value.
Laurel: Right, because it's not something that happens overnight.
Bratin: It does not happen overnight.
Laurel: How do companies prepare to take advantage of data? Because, like you said, this is a four-step process, but you still have to have patience at the end to be iterative and experimental. For example, do you have ideas on how companies can think about their data in ways that makes them better prepared to see success, perhaps with their first experiment, and then perhaps be a little bit more adventurous as they try other data sets or other ways of approaching the data?
Bratin: Yes. Companies usually start with a use case where they have a history of having good data. What I mean by a history of having good data is that they have a record of transactions that have been made, and most of the records are accurate. For example, you don't have a lot of empty record transactions.
Typically, we have seen that the level of data maturity varies between different parts of a company. You start with the part of a company where the data culture is a lot more prevalent. You start from there so that you have a record of historical transactions that you stored. You really want to have fairly dense data to use to train your models.
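As a minimal illustration of the data-cleaning step Saha describes, the sketch below (with invented record fields) filters out the kind of "empty record transactions" he mentions before they reach a training set.

```python
# Invented order records; None marks a missing field (an "empty" value).
records = [
    {"order_id": 1, "items": 2, "total": 18.50},
    {"order_id": 2, "items": None, "total": 9.99},   # anomalous record
    {"order_id": 3, "items": 1, "total": None},      # anomalous record
    {"order_id": 4, "items": 3, "total": 27.00},
]

def is_clean(record):
    """Keep only records with every field present."""
    return all(value is not None for value in record.values())

training_set = [r for r in records if is_clean(r)]
print(len(training_set))  # only the 2 complete records survive
```

Real pipelines do far more (deduplication, outlier handling, schema checks), but the principle is the same: dense, accurate historical records are what make the models trainable.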
Laurel: Why is now the right time for companies to start thinking about deploying machine learning in the cloud?
Bratin: I think there is a confluence of factors happening now. One is that machine learning over the last five years has really taken off. That is because the amount of compute available has been increasing at a very fast rate. If you go back to the IT revolution, the IT revolution was driven by Moore's Law. Under Moore's Law, compute doubled every 18 months.
Over the last few years, the amount of total compute has been doubling almost every four months. That's five times more than Moore's Law. The amount of progress we have seen in the last four to five years has been really amazing. As a result, a lot of functions that once could only be done by humans, like detecting an object or understanding speech, are being performed by computers and machine learning models. As a result of that, a lot of capabilities are getting unleashed. That is what has led to this enormous increase in the applicability of machine learning: you can use it for personalization, you can use it in health care and finance, you can use it for tasks like churn prediction, fraud detection, and so on.
One reason that now is a good time to get started on machine learning in the cloud is just the enormous amount of progress in the last few years that is unleashing these new capabilities that were previously not possible.
The second reason is that a lot of the machine learning services being built in the cloud are making machine learning accessible to a lot more people. Even if you look at four to five years ago, machine learning was something that only very expert practitioners could do and only a handful of companies were able to do because they had expert practitioners. Today, we have more than a hundred thousand customers using our machine learning services. That tells you that machine learning has been democratized to a large extent, so that many more companies can start using machine learning and transforming their business.
Then comes the third reason, which is that you have amazing capabilities that are now possible, and you have cloud-based tools that are democratizing these capabilities. The easiest way to get access to these tools and capabilities is through the cloud because, first, it provides the foundation of compute and data. Machine learning is, at its core, about throwing a lot of compute at data. In the cloud, you get access to the latest compute. You pay as you go, and you don't have to make huge upfront investments to set up compute farms. You also get all the storage, security, privacy, encryption, and so on: all of the core infrastructure that is needed to get machine learning going.
Laurel: So Bratin, how does AWS innovate to help organizations with machine learning, model training, and inference?
Bratin: At AWS, everything we do works back from the customer and figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make it cheaper for customers to do machine learning and faster for customers to do machine learning. There we have two AWS innovations. One is Inferentia and the other is Trainium. These are custom chips that we designed at AWS that are purpose-built for inference, which is the process of making machine learning predictions, and for training. Inferentia today provides the lowest cost inference instances in the cloud. And Trainium, when it becomes available later this year, will be providing the most powerful and the most cost-effective training instances in the cloud.
We have a number of customers using Inferentia today. Autodesk uses Inferentia to host their chatbot models, and they were able to improve the cost and latencies by almost five times. Airbnb has over four million hosts who welcome more than 900 million guests in almost every country. Airbnb saw a two-times improvement in throughput by using the Inferentia instances, which means that they were able to serve almost twice as many requests for customer support than they would otherwise have been able to do. Another company called Sprinklr develops a SaaS customer experience platform, and they have an AI-driven unified customer experience management platform. They were able to deploy the natural language processing models in Inferentia, and they saw significant performance improvements as well.
Even internally, our Alexa team was able to move their inferences over from GPUs to Inferentia-based systems, and they saw more than a 50% improvement in cost. So, we have that at the lowest layer of the infrastructure. On top of that, we have the managed services, where we are innovating so that customers become a lot more productive. That is where we have SageMaker Studio, the first fully integrated development environment for machine learning, which offers tools like debuggers, profilers, and explainability, and a host of other tools, like a visual data preparation tool, that make customers a lot more productive. At the top of it, we have AI services, where we provide pre-trained models for use cases like search and document processing (Kendra for search, Textract for document processing, image and video recognition), where we are innovating to make it easier for customers to address these use cases right out of the box.
Laurel: So, there are some benefits, for sure, to machine learning services in the cloud, like improved customer service, improved quality, and, hopefully, increased profit. But what key performance indicators are important for the success of machine learning projects, and why are these particular indicators so important?
Bratin: We are working back from the customer, working back from the pain points based on what customers tell us, and inventing on behalf of the customers to see how we can innovate to make it easier for them to do machine learning. One part of machine learning, as I mentioned, is predictions. Often, the big cost in machine learning in terms of infrastructure is in the inference. That is why we came out with Inferentia, which are today the most cost-effective machine learning instances in the cloud. So, we are innovating at the hardware level.
We also announced Trainium. That will be the most powerful and the most cost-effective training instance in the cloud. So, we are first innovating at the infrastructure layer so that we can provide customers with the most cost-effective compute.
Next, we have been looking at the pain points of what it takes to build an ML service. You need data collection services, you need a way to set up a distributed infrastructure, you need a way to set up an inference system and be able to auto scale it, and so on. We have been thinking a lot about how to build this infrastructure and how to innovate around these customer needs.
Then we have been looking at some of the use cases. So, for a lot of these use cases, whether it be search, or object recognition and detection, or intelligent document processing, we have services that customers can directly use. And we continue to innovate on behalf of them. I'm sure we'll come up with a lot more features this year and next to see how we can make it easier for our customers to use machine learning.
Laurel: What key performance indicators are important for the success of machine learning projects? We talked a little bit about how you like to improve customer service and quality, and of course increase profit, but to assign a KPI to a machine learning model, that's something a bit different. And why are they so important?
Bratin: To assign the KPIs, you need to work back from your use case. So, let's say you want to use machine learning to reduce fraud. Your overall KPI is, what was the reduction in fraud? Or let's say you want to use it for churn reduction. You are running a business, your customers are coming, but a certain number of them are churning off. You then want to start with, how do I reduce my customer churn by some percent? So, you start with the top-level KPI, which is a business outcome that you want to achieve, and how to get an improvement in that business outcome.
Let's take the churn prediction example. At the end of the day, what is happening is that you have a machine learning model that is using data, and the training it has had, to make certain predictions about which customers are going to churn. That boils down, then, to the accuracy of the model. If the model says 100 people are going to churn, how many of them actually churn? So, that becomes a question of accuracy. And then you also want to look at how well the machine learning model detected all the cases.
So, there are two aspects of quality that you're looking for. One is, of the things that the model predicted, how many of them actually happened? Let's say this model predicted these 100 customers are going to churn. How many of them actually churn? And let's just say 95 of them actually churn. So, you have a 95% precision there. The other aspect is, suppose you're running this business and you have 1,000 customers. And let's say in a particular year, 200 of them churned. How many of those 200 did the model predict would actually churn? That is called recall, which is, given the total set, how much is the machine learning model able to predict? So, fundamentally, you start from this business metric, which is what is the outcome I want to get, and then you can convert this down into model accuracy metrics in terms of precision, which is how accurate was the model in predicting certain things, and then recall, which is how exhaustive or how comprehensive was the model in detecting all situations.
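The precision and recall arithmetic described above can be sketched in a few lines. This is a minimal illustration using the hypothetical churn numbers from the conversation (100 flagged, 95 correct, 200 actual churners), not any AWS tooling:

```python
# Precision and recall for the hypothetical churn example.
# The business has 1,000 customers; 200 of them actually churned.
# The model flagged 100 customers as likely to churn, 95 of them correctly.

predicted_churn = 100   # customers the model flagged
true_positives = 95     # flagged customers who really churned
actual_churn = 200      # customers who churned overall

# Precision: of the model's predictions, how many happened?
precision = true_positives / predicted_churn

# Recall: of everything that happened, how much did the model catch?
recall = true_positives / actual_churn

print(f"precision = {precision:.2%}")  # 95.00%
print(f"recall    = {recall:.2%}")     # 47.50%
```

Note how the two metrics can diverge: this model is very precise, yet it misses more than half of the customers who actually churn, which is exactly why both numbers matter before drilling down to infrastructure-level KPIs.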
So, at a high level, these are the things you're looking for. And then you'll go down to lower-level metrics. The models are running on certain instances, on certain pieces of compute: what was the infrastructure cost, and how do I reduce those costs? These services, for example, are being used to handle surges during Prime Day or Black Friday, and so on. So, then you get to those lower-level metrics, such as, am I able to handle surges in traffic? It's really a hierarchical set of KPIs. Start with the business metric, get down to the model metrics, and then get down to the infrastructure metrics.
Laurel: When you think about machine learning in the cloud in the next three to five years, what are you seeing? What are you thinking about? What can companies do now to prepare for what will come?
Bratin: I think what will happen is that machine learning will get more pervasive, because customers will see that they're able to fundamentally transform the way they do business. Companies will see that they are fundamentally transforming the customer experience, and they will embrace machine learning. We have seen that at Amazon as well; we have a long history of investing in machine learning. We have been doing this for more than 20 years, and we have changed how we serve customers with Amazon.com, Alexa, Amazon Go, and Prime. And now, with AWS, we have taken the knowledge we have gained over the past two decades of deploying machine learning at scale and are making it available to our customers. So, I do think we will see a much more rapid uptake of machine learning.
Then we'll see a lot of broad use cases. Intelligent document processing, for example: a lot of paper-based processing will become automated, because a machine learning model is now able to scan those documents and infer information from them, inferring semantic information, not just the syntax. If you think of paper-based processes, whether it's loan processing or mortgage processing, a lot of that will get automated. Then, we are also seeing businesses get a lot more efficient through personalization and forecasting, such as supply chain forecasting, demand forecasting, and so on.
We are seeing a lot of uptake of machine learning in health care. GE, for example, uses a machine learning service for radiology. They use machine learning to scan radiology images to determine which ones are more serious, and therefore which patients you want to get in early. We are also seeing potential and opportunity for using machine learning in genomics for precision medicine. So, I do think a lot of innovation is going to happen with machine learning in health care.
We'll see a lot of machine learning in manufacturing. A lot of manufacturing processes will become more efficient, get automated, and become safer because of machine learning.
So, over the next five to 10 years, pick any domain. In sports, the NFL, NASCAR, and the Bundesliga are all using our machine learning services. The NFL uses Amazon SageMaker to give their fans a more immersive experience through Next Gen Stats. The Bundesliga uses our machine learning services to make a range of predictions and provide a much more immersive experience. Same with NASCAR. NASCAR has a lot of historical data from their races, and they're using it to train models that provide a much more immersive experience for their viewers, because they can predict much more easily what's going to happen. So, sports, entertainment, financial services, health care, manufacturing: I think we'll see a lot more uptake of machine learning, making the world a smarter, healthier, and safer place.
Laurel: What a great conversation. Thank you very much, Bratin, for joining us on Business Lab.
Bratin: Thank you. Thank you for having me. It was really nice talking to you.
Laurel: That was Dr. Bratin Saha, Vice President and General Manager of Machine Learning Services for Amazon AI, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
Read the rest here:
Machine learning in the cloud is helping businesses innovate - MIT Technology Review
Learn about machine learning and the fundamentals of AI with free Raspberry Pi course – Geeky Gadgets
On this four-week course from the Raspberry Pi Foundation, you'll learn about different types of machine learning and use online tools to train your own AI models. You'll delve into the problems that ML can help to solve, discuss how AI is changing the world, and think about the ethics of collecting data to train an ML model. For teachers and educators it's particularly important to have a good foundational knowledge of AI and ML, as they need to teach young people what these technologies are and how they impact their lives. (We've also got a free seminar series about teaching these topics.)
The first week of this course will guide you through how you can use machine learning to label data, whether to work out if a comment is positive or negative or to identify the contents of an image. Then you'll look at algorithms that create models giving a numerical output, such as predicting house prices based on information about the house and its surroundings. You'll also explore other types of machine learning that are designed to discover connections and groupings in data that humans would likely miss, giving you a deeper understanding of how machine learning can be used.
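The "numerical output" kind of model mentioned above can be illustrated with a tiny regression sketch. The floor areas and prices here are invented for illustration, and the one-variable least-squares fit is our own minimal example, not material from the course:

```python
# Minimal one-variable linear regression: predict house price from floor area.
# Data are made up and chosen to be perfectly linear, so the fit is exact.

areas = [50, 70, 90, 110, 130]      # floor area in square metres
prices = [150, 190, 230, 270, 310]  # price in thousands

n = len(areas)
mean_a = sum(areas) / n
mean_p = sum(prices) / n

# Ordinary least-squares slope and intercept.
slope = sum((a - mean_a) * (p - mean_p) for a, p in zip(areas, prices)) / \
        sum((a - mean_a) ** 2 for a in areas)
intercept = mean_p - slope * mean_a

def predict(area):
    """Predicted price (in thousands) for a given floor area."""
    return intercept + slope * area

print(predict(100))  # 250.0 on this perfectly linear toy data
```

Real house-price models use many more features and a library rather than hand-rolled formulas, but the idea is the same: the model learns parameters from labeled examples and then outputs a number for unseen inputs.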
To register for the course for free, jump over to the official course page by following the link below.
Source: RPiF
Here is the original post:
Learn about machine learning and the fundamentals of AI with free Raspberry Pi course - Geeky Gadgets
Sleepy Hollow Teen Receives National Scholarship for Development of New Machine Learning Techniques – River Journal Staff
Owen Dugan
Owen Dugan awarded $10,000 as a 2021 Davidson Fellow Scholarship Winner
The Davidson Fellows Scholarship Program has announced the 2021 scholarship winners. Among the honorees is 18-year-old Owen Dugan of Sleepy Hollow. Only twenty students across the country are recognized as scholarship winners each year.
"I am honored to be a Davidson Fellow, to have my work nationally recognized, and to join the Davidson Fellows community," said Dugan.
For his project, Dugan developed several new techniques to improve and expand the scope of OccamNet, a new interpretable neural network architecture, with the goal of increasing adoption of interpretable and reliable machine learning techniques.
"The 2021 Davidson Fellows Scholarship recipients have risen to the challenges of a global pandemic to complete significant projects within their fields of study," said Bob Davidson, founder of the Davidson Institute. "To be awarded this recognition, these students have shown immense skill and work ethic, and they should be commended as they continue their educational and research journeys while working to solve some of the world's most vexing problems."
The 2021 Davidson Fellows were honored during a virtual ceremony in September 2021. The ceremony can be viewed online at https://www.davidsongifted.org/gifted-programs/fellows-scholarship/fellows/fellows-ceremony/.
The Davidson Fellows Scholarship program offers $50,000, $25,000, and $10,000 college scholarships to students 18 or younger who have completed significant projects with the potential to benefit society in the fields of science, technology, engineering, mathematics, literature, and music. The Davidson Fellows Scholarship has provided more than $8.6 million in scholarship funds to 386 students since its inception in 2001, and has been named one of the most prestigious undergraduate scholarships by U.S. News & World Report. It is a program of the Davidson Institute, a national nonprofit organization headquartered in Reno, Nev., that supports profoundly gifted youth.
Read the original here:
Sleepy Hollow Teen Receives National Scholarship for Development of New Machine Learning Techniques - River Journal Staff