Category Archives: Machine Learning

Forget Machine Learning, Constraint Solvers are What the Enterprise Needs – RTInsights

Constraint solvers take a set of hard and soft constraints in an organization and formulate the most effective plan, taking into account real-time problems.

When a business looks to implement an artificial intelligence strategy, even proper expertise can be too narrow. It's what has led many businesses to deploy machine learning or neural networks to solve problems that require other forms of AI, like constraint solvers.

They are the best solution for businesses that have timetabling, assignment or efficiency issues.

In a Red Hat webinar, principal software engineer Geoffrey De Smet ran through three use cases for constraint solvers.

Vehicle Routing

Efficient delivery management is something Amazon has seemingly perfected, so much so that it's now an annoyance to wait 3-5 days for an item to be delivered. Using Red Hat's OptaPlanner, businesses can improve vehicle routing by 9 to 18 percent by optimizing routes and ensuring drivers are able to deliver an optimal amount of goods.

To start, OptaPlanner takes in all the necessary constraints, like truck capacity and driver specialization. It also takes into account regional laws, like the amount of time a driver is legally allowed to drive per day, and creates a route for all drivers in the organization.
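
To make the shape of the problem concrete, here is a toy Python sketch of capacity-constrained routing. It is not OptaPlanner (a Java library that uses metaheuristics), just a greedy nearest-neighbour heuristic over invented stops and demands:

```python
import math

# Toy sketch of capacity-constrained routing (OptaPlanner's real solver
# uses metaheuristics; this is just a greedy nearest-neighbour heuristic
# to make the inputs and outputs concrete).
DEPOT = (0.0, 0.0)

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(stops, truck_capacity):
    """stops: list of ((x, y), demand). Returns the ordered route."""
    assert all(demand <= truck_capacity for _, demand in stops)
    remaining = list(stops)
    route, position, load = [DEPOT], DEPOT, 0
    while remaining:
        location, demand = min(remaining, key=lambda s: distance(position, s[0]))
        if load + demand > truck_capacity:   # hard constraint: capacity
            route.append(DEPOT)              # return to the depot to unload
            position, load = DEPOT, 0
            continue
        remaining.remove((location, demand))
        route.append(location)
        position, load = location, load + demand
    route.append(DEPOT)
    return route

# Hypothetical stops: ((x, y), demand)
print(plan_route([((1, 2), 3), ((4, 0), 5), ((2, 5), 4)], truck_capacity=8))
```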

In a practical case, De Smet said Red Hat's constraint solver saved a technical vehicle routing company over $100 million per year. Driving time was reduced by 25 percent, and the business was able to reduce its headcount by 10,000.

"The benefits [of OptaPlanner] are to reduce cost, improve customer satisfaction and employee well-being, and save the planet," said De Smet. "The nice thing about some of these is they're complementary; for example, reducing travel time also reduces fuel consumption."

Employee timetabling

Knowing who is covering what shift can be an infuriating task for managers, with all the requests for time off, illness and mandatory days off. In a workplace where 9 to 5 isn't the norm, it can be even harder to keep track of it all.

Red Hat's OptaPlanner is able to take all of the hard constraints (two days off per week, no more than eight-hour shifts) and soft constraints (should have up to 10 hours of rest between shifts) and formulate a timetable that takes all of that into account. When someone asks for a day off, OptaPlanner is able to reassign workers in real time.
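
The hard/soft split can be illustrated with a small Python sketch: any hard violation outranks all soft penalties, so scores compare lexicographically. The constraints and shift data below are invented for illustration; the real product is a Java library:

```python
# Toy hard/soft scoring in the spirit of constraint-solver scoring
# (illustrative only). A schedule maps an employee to a list of
# (day, shift_hours, rest_hours_before_shift) tuples.

def score(schedule):
    hard, soft = 0, 0
    for employee, shifts in schedule.items():
        days_worked = {day for day, _, _ in shifts}
        if len(days_worked) > 5:          # hard: two days off per week
            hard -= 1
        for day, hours, rest_before in shifts:
            if hours > 8:                 # hard: no shift over eight hours
                hard -= 1
            if rest_before < 10:          # soft: prefer 10h rest between shifts
                soft -= 10 - rest_before
    return hard, soft                     # compared lexicographically: hard first

plan_a = {"alice": [(1, 8, 12), (2, 8, 9)]}   # one soft violation
plan_b = {"alice": [(1, 9, 12), (2, 8, 12)]}  # one hard violation
print(score(plan_a), score(plan_b))  # (0, -1) beats (-1, 0)
```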

De Smet said this is useful for jobs that need to run 24/7, like hospitals, the police force, security firms, and international call centers. According to Red Hat's simulation, it should improve employee well-being by 19 to 85 percent, alongside improvements in retention and customer satisfaction.

Task assignment

Even within a single business department, there are skills only a few employees have. For instance, in a call center, only a few will be able to speak fluently in both English and French. To avoid customer annoyance, it is imperative for employees with the right skill-set to be assigned correctly.

With OptaPlanner, managers are able to add employee skills and have the AI assign employees correctly. Using the call center example again, a bilingual advisor may take all calls in French on one day when there's high demand for it, but on other days handle a mix of French and English.

For customer support, the constraint solver would be able to assign a problem to the correct advisor, or the next best one, before the customer is connected, thus avoiding giving out the wrong advice or having to pass the customer on to another advisor.
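
One classical way to solve this kind of assignment problem, shown here purely as an illustration rather than as OptaPlanner's internal method, is the Hungarian algorithm, available in SciPy. The calls, advisors and skills below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy skill-based assignment: rows are incoming calls, columns are
# advisors; the cost penalises assigning a call to an advisor who
# lacks the required language. All data here is invented.
calls = ["fr", "en", "fr"]
advisors = {"ana": {"en"}, "ben": {"en", "fr"}, "chloe": {"fr"}}

names = list(advisors)
cost = np.array([[0 if lang in advisors[name] else 100  # 100 = skill mismatch
                  for name in names] for lang in calls])

rows, cols = linear_sum_assignment(cost)   # minimise total mismatch cost
for call, advisor in zip(rows, cols):
    print(f"call {call} ({calls[call]}) -> {names[advisor]}")
```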

In the webinar, De Smet said that while the constraint solver is a valuable asset for businesses looking to reduce costs, this shouldn't be their only aim.

Without having all stakeholders involved in the implementation, the AI could end up harming other areas of the business, like customer satisfaction or employee retention. This echoes a warning analysts commonly give about AI implementation: it needs to come from a genuine desire to improve the business to get the best outcome.

Read the rest here:
Forget Machine Learning, Constraint Solvers are What the Enterprise Needs - RTInsights

Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core – The Register

MIT boffins have devised a software-based tool for predicting how processors will perform when executing code for specific applications.

In three papers released over the past seven months, ten computer scientists describe Ithemal (Instruction THroughput Estimator using MAchine Learning), a tool for predicting the number of processor clock cycles necessary to execute an instruction sequence when looped in steady state, along with a supporting benchmark suite and algorithm.

Throughput stats matter to compiler designers and performance engineers, but it isn't practical to make such measurements on-demand, according to MIT computer scientists Saman Amarasinghe, Eric Atkinson, Ajay Brahmakshatriya, Michael Carbin, Yishen Chen, Charith Mendis, Yewen Pu, Alex Renda, Ondrej Sykora, and Cambridge Yang.

So most systems rely on analytical models for their predictions. LLVM offers a command-line tool called llvm-mca that presents a model for throughput estimation, and Intel offers a closed-source machine code analyzer called IACA (Intel Architecture Code Analyzer), which takes advantage of the company's internal knowledge about its processors.

Michael Carbin, a co-author of the research and an assistant professor and AI researcher at MIT, told the MIT News Service on Monday that performance model design is something of a black art, made more difficult by Intel's omission of certain proprietary details from its processor documentation.

The Ithemal paper [PDF], presented in June at the International Conference on Machine Learning, explains that these hand-crafted models tend to be an order of magnitude faster than measuring the throughput of basic blocks (sequences of instructions without branches or jumps). But building these models is a tedious, manual process that's prone to errors, particularly when processor details aren't entirely disclosed.

Using a neural network, Ithemal can learn to predict throughput using a set of labelled data. It relies on what the researchers describe as "a hierarchical multiscale recurrent neural network" to create its prediction model.
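
The papers describe the model in detail; the drastically simplified Python sketch below conveys only the basic idea of regressing cycle counts from labelled basic blocks, using a single LSTM in place of the authors' hierarchical multiscale network. Token ids and cycle counts are invented:

```python
import torch
import torch.nn as nn

# Simplified sketch of the idea behind Ithemal (the real model is a
# hierarchical multiscale RNN; this uses one LSTM): embed tokenised
# instructions and regress the block's steady-state cycle count.
class ThroughputModel(nn.Module):
    def __init__(self, vocab_size=1024, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # predicted clock cycles

    def forward(self, token_ids):              # (batch, seq_len) of int64
        _, (h, _) = self.lstm(self.embed(token_ids))
        return self.head(h[-1]).squeeze(-1)    # (batch,) cycle estimates

# Hypothetical batch: two basic blocks, already tokenised and padded,
# with measured throughputs as the regression targets.
model = ThroughputModel()
blocks = torch.randint(0, 1024, (2, 20))
loss = nn.functional.l1_loss(model(blocks), torch.tensor([12.0, 37.0]))
loss.backward()                                # one training step's gradients
```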

"We show that Ithemals learned model is significantly more accurate than the analytical models, dropping the mean absolute percent error by more than 50 per cent across all benchmarks, while still delivering fast estimation speeds," the paper explains.

A second paper, presented in November at the IEEE International Symposium on Workload Characterization, "BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models," describes the BHive benchmark for evaluating Ithemal and competing models: IACA, llvm-mca, and OSACA (Open Source Architecture Code Analyzer). It found Ithemal outperformed the other models except on vectorized basic blocks.

And in December at the NeurIPS conference, the boffins presented a third paper, "Compiler Auto-Vectorization with Imitation Learning," which describes a way to automatically generate compiler optimizations that outperform LLVM's SLP vectorizer.

The academics argue that their work shows the value of machine learning in the context of performance analysis.

"Ithemal demonstrates that future compilation and performance engineering tools can be augmented with datadriven approaches to improve their performance and portability, while minimizing developer effort," the paper concludes.

Excerpt from:
Finally, a good use for AI: Machine-learning tool guesstimates how well your code will run on a CPU core - The Register

Cloud as the enabler of AI’s competitive advantage – Finextra

Data is just the beginning. Financial institutions (FIs) are now hyper-focused on surfacing meaningful, timely, and actionable insights from proprietary and third-party data. Technologies such as the cloud and artificial intelligence (AI) are forming new partnerships between humans and machines.

The barriers to entry have fallen, and FIs are no longer only testing and experimenting with machine learning (ML), a subset of AI that allows computers to perform tasks without explicit instructions by relying on patterns. ML is now being deployed in key departments such as risk management, pre-trade analytics and portfolio optimisation.

Finextra spoke to Geoffrey Horrell, director of applied innovation at Refinitiv's London Lab, and Joe Rothermich, Refinitiv's head of labs - Americas, about their recent report, Smarter Humans. Smarter Machines: Insights from the Refinitiv 2019 Artificial Intelligence/Machine Learning Global Study, how ML processes can be deployed in the cloud, and how the cloud has become an enabler of competitive advantage.

The AI explosion

Rothermich starts off by comparing AI to the explosion of the Internet, "when suddenly you had the ability to quickly scale up servers and create websites. I think we are starting to see that with data, AI and machine learning algorithms."

"In the past, there used to be a huge barrier to entry, and although the machine learning algorithms haven't changed dramatically since the early 2000s, we now have the ability to test out new ideas, train models and implement them in production systems easily."

Traditional infrastructure prevents scalability and digital transformation, Rothermich explains. His team was one of the early adopters of Hadoop in financial services, and he recalls how building the infrastructure and prepping the data required substantial up-front investment in time and equipment.

The industry has moved to the cloud in order to make data accessible immediately, so algorithms can be written and tested at a faster rate, which in turn lowers the cost of production. Refinitiv Labs uses an extensive breadth of data across all asset classes that has been extensively curated and enriched and, as a result, is now ML-ready.

Providing the productivity edge

Access to this real-time data in the cloud allows clients to receive new insights at a faster rate, for use in risk assessment, transaction analysis and regulatory reporting, for example. Rothermich discusses how data such as accounting data, market data and text mined from news, events, filings and transcripts is used to predict the likelihood of a company defaulting on its debt within a year.

Rothermich adds that recent research using deep learning has allowed the model to generalise better, not be tied to fixed vocabularies, and even adapt to multiple languages. Refinitiv Labs is conducting research in other areas, such as M&A prediction, to combine fundamental and text data to predict financial events.
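
As a rough illustration of the general pattern, not Refinitiv Labs' actual model, combining fundamentals with mined text can be as simple as a scikit-learn pipeline. All data below is invented:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Illustrative sketch: combine an accounting ratio with text mined
# from news to score one-year default risk. Hypothetical data.
train = pd.DataFrame({
    "debt_to_equity": [0.4, 2.9, 1.1, 3.5],
    "news": ["record profits", "missed coupon payment",
             "stable outlook", "covenant breach feared"],
    "defaulted": [0, 1, 0, 1],
})

features = ColumnTransformer([
    ("fundamentals", "passthrough", ["debt_to_equity"]),
    ("text", TfidfVectorizer(), "news"),     # single text column
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression())])
model.fit(train[["debt_to_equity", "news"]], train["defaulted"])
print(model.predict_proba(train[["debt_to_equity", "news"]])[:, 1])
```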

Financial use cases

The growth of easy-to-use cloud infrastructure, the open source Python ecosystem and capabilities that help with machine learning workflows allows FIs to test out new databases or new computing infrastructures easily.

"Implementing deep learning requires a lot more compute power and a lot more training data. We are working on this by using the cloud to scale up and conduct these experiments, leveraging machine learning frameworks without up-front investments in time and cost being an issue," Rothermich says.

Returning to risk management, the scalability of the cloud also allows FIs to process massive amounts of data and obtain a response rapidly. From a regulatory standpoint, however, there are issues around data: experiments must be traceable, and it must be possible to prove there are no biases.

But what type of risk use case is machine learning being used to address? Horrell extends the credit risk example to answer this question, stating that with investment risk, it's about getting a much more real-time view of the probability of default, compared to traditional credit ratings, which tend to be lagging indicators.

"By the time you see substantial deterioration in a company's fortunes that equates to a credit rating downgrade, the damage to the investment is already done. We know that there is more unstructured information out there that would give an early indicator of different kinds of financial distress, or other leading indicators of a higher probability of distress."

"You can incorporate additional types of information using machine learning, but different models for different data sets must be maintained, and many, many test iterations must be run through. You also have to have a large capacity to handle the data, and to backtest it to see whether that additional unstructured data can give you that early indication that there might be a problem with the company," Horrell explains.

Sharing and parallelising with the cloud

While the cloud helps smaller teams become more agile when setting up a project and allows for faster experimentation, it also allows FIs of all sizes to change direction when required, enhancing creativity and productivity.

In the front office, new horizons have opened up in terms of the types of data financial services institutions can now analyse to power their investment and trading strategies. The rise of alternative data feeds into that, and the cloud creates many opportunities to look at this data.

The cloud can handle the scale of these datasets and provide the techniques and ML approaches to make sense of them and help FIs find completely new ways of generating investment ideas.

Rothermich explains that sharing code, resources and data is a lot easier in the cloud, and some of the tasks completed during a machine learning research project are easily parallelisable and simple to scale up if the cloud resource is there.

On parallelisation, Horrell adds that because of the flexibility of the cloud, the technology can be applied in areas where it normally would not be. For instance, multiple risk models can be run, and data can be analysed in different ways from a risk point of view.
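
The parallelisation pattern itself is straightforward to sketch. The example below fans a grid of hypothetical risk-model configurations out across local worker processes; on the cloud, the same pattern scales across a pool of instances:

```python
from concurrent.futures import ProcessPoolExecutor

# Minimal sketch of the parallel-experiment pattern described above.
def backtest(params):
    # Placeholder for a real backtest; the score formula is invented.
    lookback, threshold = params
    return {"lookback": lookback, "threshold": threshold,
            "score": 1.0 / (lookback * threshold)}

# Hypothetical hyperparameter grid for a risk model.
grid = [(lb, th) for lb in (30, 90, 180) for th in (0.1, 0.5)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(backtest, grid))   # runs in parallel
    print(max(results, key=lambda r: r["score"]))
```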

Rothermich highlights that hedge funds he has spoken to revealed that one of the biggest tasks they face is evaluating new datasets, in addition to ingesting, mapping and validating that data. The cloud's capacity for data has helped with loading up and merging new content sets and new, alternative datasets. This form of rapid data onboarding and evaluation gives FIs an informational edge.

Democracy vs. data

While there is a definite democratisation emerging, with anyone being able to access data in the cloud, Horrell adds that ultimately, you cannot do data science without the data. The better the quality of your data, the better the quality of your results.

Read Smarter Humans. Smarter Machines: Insights from the Refinitiv 2019 Artificial Intelligence/Machine Learning Global Study here.

More:
Cloud as the enabler of AI's competitive advantage - Finextra

Dell’s Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels – PCWorld

This business workhorse has a lot to like.

Dell Latitude 9510 hands-on: The three best features

Dell's Latitude 9510 has three features we especially love: The integrated 5G, the Dell Optimizer Utility that tunes the laptop to your preferences, and the thin bezels around the huge display.


The Dell Latitude 9510 is a new breed of corporate laptop. Inspired in part by the company's powerful and much-loved Dell XPS 15, it's the first model in an ultra-premium business line packed with the best of the best, tuned for business users.

Announced January 2 and unveiled Monday at CES in Las Vegas, the Latitude 9510 weighs just 3.2 pounds and promises up to 30 hours of battery life. PCWorld had a chance to delve into the guts of the Latitude 9510, learning more about what's in it and how it was built. Here are the coolest things we saw:

The Dell Latitude 9510 is shown disassembled, with (top, left to right) the magnesium bottom panel, the aluminum display lid, and the internals; and (bottom) the array of ports, speaker chambers, keyboard, and other small parts.

The thin bezels around the 15.6-inch screen (see top of story) are the biggest hint that the Latitude 9510 took inspiration from its cousin, the XPS 15. Despite the size of the screen, the Latitude 9510 is amazingly compact. And yet, Dell managed to squeeze in a camera above the display, thanks to a teeny, tiny sliver of a module.

A closer look at the motherboard of the Dell Latitude 9510 shows the 52Wh battery and the areas around the periphery where Dell put the 5G antennas.

The Latitude 9510 is one of the first laptops we've seen with integrated 5G networking. The challenge of 5G in laptops is integrating all the antennas you need within a metal chassis that's decidedly radio-unfriendly.

Dell made some careful choices, arraying the antennas around the edges of the laptop and inserting plastic pieces strategically to improve reception. Two of the antennas, for instance, are placed underneath the plastic speaker components and plastic speaker grille.

The Dell Latitude 9510 incorporated plastic speaker panels to allow reception for the 5G antennas underneath.

Not ready for 5G? No worries. Dell also offers the Latitude 9510 with Wi-Fi 6, the latest wireless networking standard.

You are constantly asking your PC to do things for you, usually the same things, over and over. Dell's Optimizer software, which debuts on the Latitude 9510, analyzes your usage patterns and tries to save you time with routine tasks.

For instance, the Express Sign-In feature logs you in faster. The ExpressResponse feature learns which applications you fire up first and loads them faster for you. Express Charge watches your battery usage and will adjust settings to save battery, or step in with faster charging when you need some juice, pronto. Intelligent Audio will try to block out background noise so you can videoconference with less distraction.

The Dell Latitude 9510's advanced features and great looks should elevate corporate laptops in performance as well as style. It will come in clamshell and 2-in-1 versions, and is due to ship March 26. Pricing is not yet available.

Melissa Riofrio spent her formative journalistic years reviewing some of the biggest iron at PCWorld--desktops, laptops, storage, printers. As PCWorld's Executive Editor, she leads PCWorld's content direction and covers productivity laptops and Chromebooks.

Go here to read the rest:
Dell's Latitude 9510 shakes up corporate laptops with 5G, machine learning, and thin bezels - PCWorld

Here’s why digital marketing is as lucrative a career as data science and machine learning – Business Insider India

In an interview with Business Insider, Mayank Kumar, Founder & MD of upGrad, described how digital literacy is becoming a buzzword in the ecosystem. The demand for experienced marketers is being replaced by demand for data-driven marketers.

In fact, Kumar says that professionals with 10+ years of experience in traditional marketing or sales are feeling the palpable need to upskill, and to do so really fast.

As per LinkedIn, digital marketing specialist is one of the top 15 emerging job roles in India, with Mumbai, Bangalore and Delhi attracting the most talent. However, the role is no longer confined to traditional aspects of social media or content marketing. Marketers also have to acquire skills in Google Ads, Social Media Optimization, Google Analytics and Search Engine Optimization (SEO).

Nearly doubled salaries

They earn as much as data scientists and other techies who work in full stack development, which is one of the best-paying software roles.

"The top 20% of the transitioned learners graduated with an average hike of 177%, which is way above any industry benchmark. Those who were previously in profiles like software testing, software development, traditional marketing, sales and operations are now working with leading companies like HDFC Life, Facebook, IBM, Uber, Zomato, and Microsoft," upGrad said in a statement.

upGrad provides an industry connect for professionals who want to transition from their existing job roles.

"We started our in-house placement support team, which provides holistic placement services like resume building, interview preparation support and salary negotiation tips. As of today, we have over 300 corporates hiring from upGrad's talent pool, and we plan to add 50 new companies every quarter."

See also: This Indian startup gains as students from Tier 2 and 3 cities opt for online digital courses

Data scientists with three years' experience can earn 20 lakh rupees per annum

Here is the original post:
Here's why digital marketing is as lucrative a career as data science and machine learning - Business Insider India

CMSWire’s Top 10 AI and Machine Learning Articles of 2019 – CMSWire

Would you believe me if I told you artificial intelligence (AI) wrote this article?

With 2020 on the horizon, and with all the progress already made in AI and machine learning (ML), it probably wouldn't surprise you if that were indeed the case, which is bad news for writers like me (or not).

As we transition into a new year, it's worth noting that 73% of global consumers say they are open to businesses using AI if it makes life easier, and 83% of businesses say that AI is already a strategic priority for them. If that's not a recipe for even more progress in 2020 and beyond, then my name isn't CMSWire-Bot-927.

Today, we're looking back at the AI and ML articles which resonated with CMSWire's audience in 2019. Strap yourself in, because this list is about to blast you into the future.

ML and, more broadly, AI have become the tech industry's most important trends over the past 18 months. And despite the hype and, to some extent, fear surrounding the technology, many businesses are now embracing AI at an impressive speed.

Despite this progress, many of the pilot schemes are still highly experimental, and some organizations are struggling to understand how they can really embrace the technology.

As the business world grapples with the potential of AI and machine learning, new ethical challenges arise on a regular basis related to its use.

One area where tensions are being played out is talent management: a struggle between relying on human expertise and deferring decisions to machines so as to better understand employee needs, skills and career potential.

Marketing technology has evolved rapidly over the past decade, with one of the most exciting developments being the creation of publicly-available, cost-effective cognitive APIs by companies like Microsoft, IBM, Alphabet, Amazon and others. These APIs make it possible for businesses and organizations to tap into AI and ML technology for both customer-facing solutions as well as internal operations.

The workplace chatbots are coming! The workplace chatbots are coming!

OK, well, they're already here. And in a few years, there will be even more. According to Gartner, by 2021 the daily use of virtual assistants in the workplace will climb to 25%, up from less than 2% this year. Gartner also identified a workplace chatbot landscape of more than 1,000 vendors, so choosing a workplace chatbot won't be easy. IT leaders need to determine the capabilities they need from such a platform in the short term and select a vendor on that basis, according to Gartner.

High-quality metadata plays an outsized role in improving enterprise search results. But convincing people to consistently apply quality metadata has been an uphill battle for most companies. One solution that has been around for a long time now is to automate metadata's creation, using rules-based content auto-classification products.
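A minimal sketch of what rules-based auto-classification boils down to (the tags and keyword rules below are invented for illustration):

```python
# Toy rules-based auto-classification: tag documents with metadata by
# matching keyword rules, the long-standing alternative to asking
# authors to apply metadata by hand. Rules here are hypothetical.
RULES = {
    "invoice":  ["invoice", "amount due", "payment terms"],
    "contract": ["agreement", "party", "hereinafter"],
    "hr":       ["employee", "benefits", "leave policy"],
}

def classify(text):
    text = text.lower()
    hits = {tag: sum(kw in text for kw in kws) for tag, kws in RULES.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "unclassified"

print(classify("Invoice #42: amount due within 30 days"))  # -> invoice
```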

Although enterprise interest in bots seems to be at an all-time high, Gartner reports that 68% of customer service leaders believe bots and virtual assistants will become even more important in the next two years. As bots are called upon to perform a greater range of tasks, chatbots will increasingly rely on back-office bots to find information and complete transactions on behalf of customers.

If digital workplaces are being disrupted by the ongoing development of AI-driven apps, by 2021 those disruptors could in turn be disrupted. The emergence of a new form of AI, or a second wave of AI, known as augmented AI, is so significant that Gartner predicts by 2021 it will be creating up to $2.9 trillion of business value and 6.2 billion hours of worker productivity globally.

AI and ML took center stage at IBM Think this year; the show's major AI announcements served as a reminder that the company has some of the most differentiated and competitive services for implementing AI in enterprise operational processes on the market. But if Big Blue is to win the AI race against AWS, Microsoft and Google Cloud in 2019 and beyond, it must improve its developer strategy and strengthen its communications, especially in areas such as trusted AI and governance.

Sentiment analysis is the kind of tool a marketer dreams about. By gauging the public's opinion of an event or product through analysis of data on a scale no human could achieve, it gives your team the ability to figure out what people really think. Backed by a growing body of innovative research, sentiment-analysis tools have the ability to dramatically improve your ROI, yet many companies are overlooking them.
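
To make the idea concrete, here is a toy lexicon-based scorer; production sentiment tools use far richer models, but the interface is the same: text in, polarity score out. The word lists are invented:

```python
# Toy lexicon-based sentiment scorer (illustrative only).
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"broken", "slow", "terrible", "refund"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)   # normalise by review length

reviews = ["Love the new release, excellent and fast",
           "Terrible update, app is broken and slow"]
for r in reviews:
    print(f"{sentiment(r):+.2f}  {r}")
```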

Pop quiz: Can you define the differences between AI and automation?

I won't judge you if the answer is no. There's a blurry line between AI and automation, with the terms often used interchangeably, even in tech-forward professions. But there's a very real difference between the two, and it's one that's becoming ever more critical for organizations to understand.

Follow this link:

CMSWire's Top 10 AI and Machine Learning Articles of 2019 - CMSWire

Machine Learning in 2019 Was About Balancing Privacy and Progress – ITPro Today

The overall theme of the year was two-fold: how can this technology make our lives easier, and how can we protect privacy while enjoying those benefits? Natural language processing development continued, and enterprises increasingly looked to AI and machine learning in 2019 for automation. Meanwhile, consumers became more concerned about the privacy of all the data they're creating and enterprises are collecting, with consequences for businesses, especially those that rely on that data for various technological processes or must invest in ensuring its security.

This year was a big one for analytics, big data and artificial intelligence, but at the current pace of development, every subsequent year in this sector seems bigger than the last. Here are five of the leading stories in big data, AI and machine learning in 2019, with an eye to how they may continue to unfold in 2020.

Related: Prepare for Machine Learning in the Enterprise

The dominance of Amazon's digital personal assistant, Alexa, in the home is clear, but this fall's slew of new Alexa product announcements was a sign that the workplace is the logical next step. An Alexa-powered enterprise seems increasingly likely as Facebook, Google and Microsoft all put their own resources into advancing natural language processing for both voice-powered assistants and chatbots. The tech will become even more important if the growth of robotic process automation (see below) also continues and it emerges as another way to automate things in the enterprise space.

In 2019, it became increasingly clear that the enterprise is past preparing for the impact of machine learning on its operations; it is time for action for organizations that want to stay ahead of the enterprise machine learning curve. According to Gartner, seven out of 10 enterprises will be using some form of AI in the workplace by 2021.

The country's most populous state, and one that's home to many tech companies, finished negotiations for its GDPR-esque California Consumer Privacy Act in September, with the law taking effect on the first day of 2020. Many tech companies put up strong opposition to the CCPA, but Microsoft unexpectedly announced in November that it would apply the regulations to customers across the country. It's a sign that the tech giant anticipates the CCPA isn't the only law of its kind likely to take effect in the U.S., especially as the push for federal regulations continues. Microsoft recently announced a regulatory compliance dashboard in Azure and AI-powered recommendations in the Microsoft 365 admin center, including guidance for compliance with the European Union's General Data Protection Regulation.

The world beyond the United States continued to affect the adoption and use of machine learning and big data in this country in 2019. Visa issues affected not just talent acquisition, a challenge for the enterprise in taking AI and machine learning in 2019 from the organizational wishlist to implementation, but also research, as they hampered conference travel. China's own advancements in artificial intelligence, and the ethical issues related to data privacy that have emerged, could also affect policy and practices in the U.S., especially as things shift to 5G. Barring a sea change in China related to data collection and use, the country should continue to affect tech adoption here in the United States in 2020.

Robotic process automation, a group of technologies that let line-of-business users set up, launch and administer virtual workers without the IT department, is still a small sector in software: worldwide revenue was $850 million in 2018. However, it's also a quickly growing one, because it frees up workers from routine work and cuts labor costs. As automation becomes more robust, natural language processing continues to advance quickly and data quality improves, look for this sector's growth to continue in 2020, with big potential in IT and HR departments in particular. Robotic process automation is here to assume the standardized, routine tasks for any organization that generates or uses data.

Read more here:

Machine Learning in 2019 Was About Balancing Privacy and Progress - ITPro Today

Machine Learning | Blog | Microsoft Azure

Tuesday, November 5, 2019

Enterprises today are adopting artificial intelligence (AI) at a rapid pace to stay ahead of their competition, deliver innovation, improve customer experiences, and grow revenue. AI and machine learning applications are ushering in a new era of transformation across industries from skillsets to scale, efficiency, operations, and governance.

Monday, October 28, 2019

Azure Machine Learning is the center for all things machine learning on Azure, be it creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning. We recently made some amazing announcements on Azure Machine Learning, and in this post I'm taking a closer look at two of the most compelling capabilities that your business should consider while choosing a machine learning platform.

Wednesday, July 17, 2019

Today we are announcing the open sourcing of our recipe to pre-train BERT (Bidirectional Encoder Representations from Transformers) built by the Bing team, including code that works on Azure Machine Learning, so that customers can unlock the power of training custom versions of BERT-large models for their organization. This will enable developers and data scientists to build their own general-purpose language representation beyond BERT.

Tuesday, June 25, 2019

The next time you see your physician, consider the times you fill in a paper form. It may seem trivial, but the information could be crucial to making a better diagnosis. Now consider the other forms of healthcare data that permeate your life, and that of your doctor, nurses, and the clinicians working to keep patients thriving.

Monday, June 10, 2019

Data scientists have a dynamic role. They need environments that are fast and flexible while upholding their organization's security and compliance policies. Notebook Virtual Machine (VM), announced in May 2019, resolves these conflicting requirements while simplifying the overall experience for data scientists.

Thursday, June 6, 2019

Build more accurate forecasts with the release of new capabilities in automated machine learning. Have scenarios with gaps in training data, a need to apply contextual data to improve your forecast, or a need to apply lags to your features? Learn more about the new capabilities that can assist you.

Tuesday, June 4, 2019

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity all while sustaining model quality.
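
As a rough sketch of what submitting such a run can look like with the v1-era Azure ML Python SDK (the workspace config, dataset name, experiment name and label column below are all hypothetical, and parameter names vary across SDK versions):

```python
from azureml.core import Dataset, Experiment, Workspace
from azureml.train.automl import AutoMLConfig

# Assumes a config.json downloaded from the Azure portal and a
# registered tabular dataset; names here are invented.
ws = Workspace.from_config()
train_data = Dataset.get_by_name(ws, "customer-churn-train")

automl_config = AutoMLConfig(
    task="classification",          # regression and forecasting also supported
    training_data=train_data,
    label_column_name="churned",    # hypothetical target column
    primary_metric="AUC_weighted",
    n_cross_validations=5,
)

run = Experiment(ws, "automl-churn-demo").submit(automl_config)
run.wait_for_completion(show_output=True)
best_run, fitted_model = run.get_output()   # best model found by the sweep
```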

Wednesday, May 22, 2019

During Microsoft Build we announced the preview of the visual interface for Azure Machine Learning service. This new drag-and-drop workflow capability in Azure Machine Learning service simplifies the process of building, testing, and deploying machine learning models for customers who prefer a visual experience to a coding experience.

Thursday, May 9, 2019

Artificial intelligence (AI) has become the hottest topic in tech. Executives, business managers, analysts, engineers, developers, and data scientists all want to leverage the power of AI to gain better insights to their work and better predictions for accomplishing their goals.

Friday, May 3, 2019

With the exponential rise of data, we are undergoing a technology transformation as organizations realize the need for insight-driven decisions. Artificial intelligence (AI) and machine learning (ML) technologies can help harness this data to drive real business outcomes across industries. Azure AI and Azure Machine Learning service are leading customers to the world of ubiquitous insights and enabling intelligent applications, from product recommendations in retail and load forecasting in energy production to image processing in healthcare and predictive maintenance in manufacturing, among many others.

Original post:

Machine Learning | Blog | Microsoft Azure

AI and machine learning products – Cloud AI | Google Cloud

AI Platform Notebooks

An enterprise notebook service to launch projects in minutes

AI Platform Notebooks is a managed service whose integrated JupyterLab environment makes it easy to create instances that come pre-installed with the latest data science and ML frameworks and integrate with BigQuery, Cloud Dataproc, and Cloud Dataflow for easy development and deployment.

Preconfigured virtual machines for deep learning applications

Deep Learning VM Image makes it easy and fast to provision a VM with everything you need to get your deep learning project started on Google Cloud. You can launch Compute Engine instances pre-installed with popular ML frameworks like TensorFlow, PyTorch, or scikit-learn, and add Cloud TPU and GPU support with a single click.

Preconfigured and optimized containers for deep learning environments

Build your deep learning project quickly with a portable and consistent environment for developing, testing, and deploying your AI applications on Google Kubernetes Engine (GKE), AI Platform, Cloud Run, Compute Engine, Kubernetes, and Docker Swarm. Deep Learning Containers provide a consistent environment across Google Cloud services, making it easy to scale in the cloud or shift from on-premises.

Data preparation for machine learning model training

Use the AI Platform Data Labeling Service to have human labelers annotate a collection of data that you plan to use to train a custom machine learning model. You submit representative samples to human labelers, who annotate them with the "right answers" and return the dataset in a format suitable for training a machine learning model.

Distributed training with automatic hyperparameter tuning

Use AI Platform to run your TensorFlow, scikit-learn, and XGBoost training applications in the cloud. You can also use custom containers to run training jobs with other machine learning frameworks.
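A training application in this sense is ordinary Python; the sketch below is a minimal scikit-learn job of the kind you could package and submit, with nothing AI Platform-specific beyond exporting the trained model for later serving:

```python
# Minimal sketch of a training application (the service runs your
# packaged Python code in the cloud; dataset and filenames are
# illustrative).
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))

# Export the model so the job can upload it (e.g. to Cloud Storage)
# for serving on AI Platform Prediction.
joblib.dump(model, "model.joblib")
```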

Model hosting service with serverless scaling

Host your trained machine learning models in the cloud and use AI Platform Prediction to infer target values for new data.

Model optimization using ground truth labels

Sample the predictions from trained machine learning models that you have deployed to AI Platform and provide ground truth labels for your prediction input using the continuous evaluation capability. The Data Labeling Service compares your models' predictions with the ground truth labels to provide continual feedback on model performance.

Model evaluation and understanding using a code-free visual interface

Investigate model performance for a range of features in your dataset, optimization strategies, and even manipulations to individual datapoint values using the What-If Tool integrated with AI Platform.

Hardware designed for performance

Cloud TPUs are a family of hardware accelerators that Google designed and optimized specifically to speed up and scale up machine learning workloads for training and inference programmed with TensorFlow. Cloud TPUs are designed to deliver the best performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly.

The machine learning toolkit for Kubernetes

Kubeflow makes deployments of machine learning workflows on Kubernetes simple, portable, and scalable by providing a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.

See the original post here:

AI and machine learning products - Cloud AI | Google Cloud

TinyML as a Service and machine learning at the edge – Ericsson

This is the second post in a series about tiny machine learning (TinyML) at the deep IoT edge. Read our earlier introduction to TinyML as-a-Service to learn how it ranks with respect to traditional cloud-based machine learning and the embedded systems domain.

TinyML is an emerging concept (and community) for running ML inference on ultra-low-power (ULP, ~1mW) microcontrollers. TinyML as a Service will democratize TinyML, allowing manufacturers to start their AI business with TinyML running on microcontrollers.

In this article, we introduce the challenges behind the applicability of ML concepts within the IoT embedded world. Furthermore, we emphasize how these challenges are not simply due to the constraints added by the limited capabilities of embedded devices but are also evident where the computation capabilities of ML-based IoT deployments are empowered by additional resources confined at the network edge.

To summarize the nature of these challenges, we can say:

Below, we take a closer look at each of these challenges.

Edge computing promises higher performing service provisioning, both from a computational and a connectivity point of view.

Edge nodes support the latency requirements of mission critical communications thanks to their proximity to the end-devices, and enhanced hardware and software capabilities allow execution of increasingly complex and resource-demanding services in the edge nodes. There is growing attention, investments and R&D to make execution of ML tasks at the network edge easier. In fact, there are already several ML-dedicated "edge" hardware examples (e.g. Edge TPU by Google, Jetson Nano by Nvidia, Movidius by Intel) which confirm this.

Therefore, the question we are asking is: what are the issues that the edge computing paradigm has not been able to completely solve yet? And how can these issues undermine the applicability of ML concepts in IoT and edge computing scenarios?

We intend to focus on and analyze five areas in particular: (Note: Some areas we describe below may have solutions through other emerging types of edge computing but are not yet commonly available).

Figure 1

The web and the embedded worlds feature very heterogeneous characteristics. Figure 1 (above) depicts how this high heterogeneity is characterized, comparing qualitatively and quantitatively the capacities of the two paradigms from both a hardware and a software perspective. Web services can rely on powerful underlying CPU architectures with high memory and storage capabilities. From a software perspective, web technologies can be designed to choose from and benefit from a multitude of sophisticated operating systems (OS) and complex software tools.

On the other hand, embedded systems rely on the limited capacity of microcontroller units (MCUs) and CPUs that are much less powerful than general-purpose and consumer CPUs. The same applies to memory and storage, where 500KB of SRAM and a few MB of flash memory can already be considered generous. There have been several attempts to bring the flexibility of Linux-based systems to the embedded scenario (e.g. the Yocto Project), but most 32-bit MCU-based devices have only the capacity to run a real-time operating system, not a more complex distribution.

In simple terms, when Linux can run, system deployment is made easier since software portability becomes straightforward. Furthermore, even higher cross-platform software portability is made possible thanks to the wide support and usage of lightweight virtualization technologies such as containers. With almost no effort, developers can ship the same software functionality between entities running Linux distributions, as happens with cloud and edge.

The impossibility of running Linux and container-based virtualization on MCUs represents one of the most limiting issues and biggest challenges for current deployments. In typical "cloud-edge-embedded devices" scenarios, cloud and edge services are developed and deployed with hardware and software technologies that are fundamentally different from, and easier to manage than, embedded technologies.

TinyML as-a-Service tries to tackle this issue by taking advantage of alternative (and lightweight) software solutions.

Figure 2

In the previous section, we considered at a high level how the technological differences between the web and embedded domains can implicitly and significantly affect the execution of ML tasks on IoT devices. Here, we analyze how a big technological gap also exists in the availability of ML-dedicated hardware and software across web, edge, and embedded entities.

From a hardware perspective, during most of computing history there have been only a few types of processor, mostly general-purpose. Recently, the relentless growth of artificial intelligence (AI) has led to the optimization of ML tasks for existing chip designs such as graphics processing units (GPUs), as well as the design of new dedicated hardware such as application-specific integrated circuits (ASICs), chips designed exclusively for the execution of specific ML operations. The common thread connecting all these new devices is their usage at the edge. In fact, these credit-card-sized devices are designed to operate at the network edge.

At the beginning of this article we mentioned a few examples of this new family of devices (Edge TPU, Jetson Nano, Movidius). We foresee that in the near future even more chip and hardware manufacturers, big and small, will invest resources in the design and production of ML-dedicated hardware. However, at least so far, there has not been the same effort in the embedded world.

Such a lack of hardware availability somewhat undermines homogeneous and seamless "cloud-to-embedded" ML deployments. In many scenarios, software can help compensate for hardware deficiencies. However, the same boundaries that we find in the hardware sphere apply to the development of software tools. Today, in the web domain, there are hundreds of ML-oriented applications, and their number is growing constantly thanks also to open source initiatives that allow passionate developers all over the world to join efforts. The result is more effective, refined, and niche applications. However, porting these applications to embedded devices is not straightforward. The use of high-level programming languages (e.g., Python), as well as the large size of the software runtime (meaning both the runtime system and the runtime phase of the program lifecycle), are just some of the reasons why software portability is painful, if not impossible.

The main rationale behind the TinyML as-a-Service approach is precisely to break the existing wall between cloud/edge and embedded entities. However, expecting exactly the same ML experience in the embedded domain as in the web and enterprise world would be unrealistic. It is still an irrefutable fact that size matters. The execution of ML inference is the only operation that we reasonably foresee being executed on an IoT device. We are happy to leave all the other cumbersome ML tasks, such as data processing and training, to the more equipped and resourceful side of the scenario depicted in Figure 2.
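
One well-known route to such an inference-only artifact, shown here as an illustration rather than as part of TinyML as-a-Service itself, is converting a trained model with TensorFlow Lite and quantizing it to fit MCU memory budgets. The tiny Keras model below is invented:

```python
import tensorflow as tf

# Sketch of the standard path from a trained Keras model to a small,
# quantized artifact suitable for microcontroller inference.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# ... train the model here before converting ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)            # a few KB: small enough for MCU flash
print(f"{len(tflite_model)} bytes")
```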

In the next article, we will go through the different features that characterize TinyML as-a-Service and share the technological approach underlying the TinyML as-a-Service concept.

In the meantime, if you have not read it yet, we recommend reading our earlier introduction to TinyML as-a-Service.

The IoT world needs a complete ML experience. TinyML as-a-service can be one possible solution for making this enhanced experience possible, as well as expanding potential technology opportunities. Stay tuned!

Read the original:

TinyML as a Service and machine learning at the edge - Ericsson