Category Archives: Machine Learning

Smart condition monitoring with IIoT sensors and machine learning – eeNews Europe

Siemens has launched a wireless multisensor system for condition monitoring in the Industrial Internet of Things (IIoT).

Sitrans SCM IQ is an IIoT system for smart condition monitoring that transmits vibration and temperature data via Bluetooth to an industry gateway and on to the cloud.

This enables potential incidents to be detected and prevented at an early stage, reducing maintenance costs and downtimes, and increasing plant performance by up to ten percent.

The wireless, robust Sitrans MS200 multisensors form the hardware basis for installation on machinery such as pumps, gear units, compressors, and drive trains, where they collect vibration and temperature data. Via a Bluetooth connection, this data is sent to the Sitrans CC220 industry gateway where it is encrypted before being transmitted from there to the cloud, in this case the MindSphere industrial IoT-as-a-Service solution.

The Sitrans SCM IQ system's anomaly detection is based on machine learning. It constantly monitors and analyzes all sensor values and detects deviations from the normal operating state early. Anomaly notifications are sent via SMS and/or email, depending on the configuration and defined user group.
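
To make the idea concrete, here is a minimal sketch of sensor-based anomaly detection using an Isolation Forest: the model is fitted on readings from normal operation and then flags new samples that fall outside that envelope. The feature choices, values, and notification hook are illustrative assumptions, not Siemens' actual algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: vibration (mm/s RMS) and temperature (deg C) readings
# collected while the machine runs normally. Values are synthetic.
normal = np.column_stack([
    rng.normal(2.0, 0.3, 2000),   # vibration velocity
    rng.normal(55.0, 2.0, 2000),  # bearing temperature
])

# Train only on normal operation; the model learns its envelope.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# New readings: the second one drifts toward a bearing fault.
new_samples = np.array([[2.1, 56.0], [4.8, 71.0]])
flags = detector.predict(new_samples)  # +1 = normal, -1 = anomaly

for sample, flag in zip(new_samples, flags):
    if flag == -1:
        print(f"anomaly notification for reading {sample}")  # e.g., SMS/email hook
```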

An app documents anomalies in machinery behavior and makes them available to a specific circle of users. The Sitrans SCM IQ system comprises multisensors, gateway, and app, and can be used in all industrial plants with mechanical or rotating components. It is scheduled to be available from summer 2021.

The Sitrans MS200 multisensors feature a robust and compact industrial design with a high IP68 degree of protection. Bluetooth communication eliminates the need for cabling, which greatly simplifies installation and commissioning. The power supply is provided by replaceable industrial batteries, enabling a long service life.

The Sitrans CC220 industry gateway ensures secure communication between the multisensor and the cloud. It is suitable for cabinet installation and has an external Bluetooth antenna. The high sample rate transmission enables accurate and reliable data analysis.

See the article here:
Smart condition monitoring with IIoT sensors and machine learning - eeNews Europe

Twitter analysing harmful impacts of its AI, machine learning algorithms – Business Standard

In a bid to assess racial and gender bias in its artificial intelligence/machine learning systems, Twitter is starting a new initiative called Responsible Machine Learning.

Terming it a long journey in its early days, Twitter said the initiative will assess any "unintentional harms" caused by its algorithms.

"When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended," said Jutta Williams and Rumman Chowdhury from Twitter.

"These subtle shifts can then start to impact the people using Twitter and we want to make sure we're studying those changes and using them to build a better product," they said in a statement late on Thursday.

Twitter's 'Responsible ML' working group is interdisciplinary and is made up of people from across the company, including technical, research, trust and safety, and product teams.

"Leading this work is our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritise which issues to tackle first," the company elaborated.

Twitter said it will research and understand the impact of ML decisions, conduct in-depth analysis and studies to assess the existence of potential harms in the algorithms it uses.

Some of the tasks will include a gender and racial bias analysis of its image cropping (saliency) algorithm, a fairness assessment of its Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
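
For a flavor of what such an audit can look like, below is a minimal demographic-parity check on a cropping model's choices over paired images. The group setup, synthetic outcomes, and threshold are invented for illustration and do not reflect Twitter's actual methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

# For image pairs showing one person from each of two groups,
# record which person the (hypothetical) saliency model centered
# the crop on. 1 = group A chosen, 0 = group B chosen.
choices = rng.binomial(1, 0.58, size=10_000)  # synthetic outcomes

rate_a = choices.mean()
rate_b = 1 - rate_a
print(f"crop favors group A {rate_a:.1%} vs group B {rate_b:.1%}")

# Parity would put each rate near 50%; flag meaningful deviation.
if abs(rate_a - 0.5) > 0.02:
    print("potential systematic bias: investigate the model")
```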

"The most impactful applications of responsible ML will come from how we apply our learnings to build a better Twitter," the company said.

This may result in changing its product, such as removing an algorithm and giving people more control over the images they Tweet.

Twitter said it is also building explainable ML solutions so people can better understand its algorithms, what informs them, and how they impact what they see on the platform.


View original post here:
Twitter analysing harmful impacts of its AI, machine learning algorithms - Business Standard

Solving the basic problems of machine learning – Federal News Network


This week on Federal Tech Talk, host John Gilroy speaks with Michael Stonebraker, co-founder of Tamr. Stonebraker has been involved with technology since 1971. Fifty years in a profession can lead someone to plateau, but that has not been the case with Stonebraker.

In addition to his academic career, he has been involved in over a dozen tech startups, the latest being Tamr. He decided to throw in with Tamr because he thinks it solves the basic problem with machine learning.

According to Stonebraker, data scientists spend too much time cleaning up data and not enough time on analysis. Tamr's breakthrough technology helps reduce the time a data scientist needs before beginning a project.

His gravitas comes in part from winning the 2014 Turing Award from the Association for Computing Machinery, often described as the Nobel Prize of computer science.

In this humorous and entertaining interview, Stonebraker has no problem sharing controversial opinions about products like Hadoop and MapReduce.

He uses terms like "grok" and "intergalactic." It is obvious he is a master craftsman with databases and has completely adopted the nomenclature of state-of-the-art systems developers.

Go here to read the rest:
Solving the basic problems of machine learning - Federal News Network

Twitter Outlines Evolving Approach to Algorithms as Part of New ‘Responsible Machine Learning Initiative’ – Social Media Today

It's amazing how commonplace the term 'algorithm' has become, with machine learning systems now being used to filter information to us with increasing efficiency, in order to keep us engaged, keep us clicking, and keep us scrolling through our social media feeds for hours on end.

But algorithms have also become a source of rising concern in recent times, with the goals of the platforms feeding us such information often at odds with broader societal aims of increased connection and community. Indeed, various studies have found that the content that sparks the most engagement online is content that triggers a strong emotional response, with anger, for one, being a powerful driver. Given this, algorithms, whether intentionally or not, are effectively built to fuel division in service of the more practical business aim of maximizing engagement.

Sure, partisan news coverage also plays a part, as does existing bias and division. But algorithms have arguably incentivized divisive content to such a degree that it now largely defines, or at least influences, everything that we see.

If it feels like the world is more divided than ever, that's probably because it is, and it's likely due to the algorithms which, in effect, keep us angry all of the time.

Every platform is examining this, and the impacts of algorithms in various respects. Today, Twitter outlined its latest algorithmic research effort, the 'Responsible Machine Learning Initiative', which will monitor the impacts of algorithmic shifts with a view to removing negative elements, including bias, from how it applies machine learning systems.

As explained by Twitter:

"When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure were studying those changes and using them to build a better product."

The project will address four key pillars: taking responsibility for algorithmic decisions, equity and fairness of outcomes, transparency about decisions and how they were reached, and enabling agency and algorithmic choice.

The broader view is that by analyzing these elements, Twitter will be able to maximize engagement, in line with its ambitious growth targets, while also accounting for, and minimizing, potential societal harms. That may lead to difficult conflicts between the two aims, but Twitter's hoping that by instituting more specific guidance on how it applies machine learning, it can build a more beneficial, inclusive platform through its increased learning and development.

"The META team works to study how our systems work and uses those findings to improve the experience people have on Twitter. This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community."

The project will also include Twitter's ambitious 'BlueSky' initiative, which essentially aims to enable users to define their own algorithms at some point, as opposed to being guided by an overarching set of platform-wide rules.

"Were also building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them. Were currently in the early stages of exploring this and will share more soon."

That's a far broader-reaching project, with complexities that could make it impractical for day-to-day use by regular people. But the idea is that by exploring specific elements, Twitter will be able to make more informed, intelligent, and fair choices as to how it applies its machine-defined rules and systems.

It's good to see Twitter taking this element on, even with the many challenges it will face, and hopefully it will help the platform weed out some of the more concerning algorithmic elements and create a better, more inclusive, less divisive system.

But I have my doubts.

The desires of idealists will generally conflict with the demands of shareholders, and it seems likely that, at some stage, such investigations will lead to difficult choices that can only go one way or the other. But that's a longer-term concern; by addressing at least some of these aspects, Twitter can build a better system, even if it's not perfect.

At the least, it will provide more insight into the effects of algorithms, and what that means for social platforms in general.

Read more:
Twitter Outlines Evolving Approach to Algorithms as Part of New 'Responsible Machine Learning Initiative' - Social Media Today

Optimize manufacturing with AI, machine learning and digitalization – The Manufacturer

From production planning and mechanical engineering to project planning, advanced technologies and digitalization effectively mitigate challenges while optimizing key processes.

INFORM Software Corporation, a leading provider of AI-based optimization software that facilitates improved decision making, processes, and resource management across diverse industries, will be hosting three free webinars focused on optimizing manufacturing and assembly processes in sectors that include machine building, marine and aircraft engines, turbines, generators, fluid power and air motors, pumps, hydraulics, and industrial cranes with a high diversity of end products and complex planning processes. Find here a short video of the company's intelligent production planning solution.

The first webinar, scheduled for Tuesday, 1 June 2021, 17:00-17:30 CEST (11:00-11:30 AM EDT), is "Production planning of the future: The road to digitalization, AI and machine learning."

It will demonstrate how companies can use AI and machine learning to increase efficiency and planning reliability in their production. Attendees will learn how to capitalize on their data by making decisions with the assistance of intelligent solutions that increase planning reliability and schedule adherence over the long term; how machine learning can help accurately predict replenishment lead times, as sketched below; and how to optimally schedule production despite limited capacities.
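
As a toy example of the replenishment lead-time prediction mentioned above, the sketch below fits a regression model to synthetic order features. The features, data, and model choice are assumptions for illustration, not INFORM's product.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Synthetic order history: quantity, supplier distance (km),
# current supplier backlog (open orders). Invented for illustration.
n = 3000
X = np.column_stack([
    rng.integers(1, 500, n),
    rng.uniform(10, 2000, n),
    rng.integers(0, 50, n),
])
# Toy ground truth: lead time grows with all three, plus noise.
y = (2 + 0.01 * X[:, 0] + 0.002 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 1, n))

model = RandomForestRegressor(random_state=0).fit(X, y)
print("predicted lead time (days):",
      model.predict([[120, 850.0, 12]])[0].round(1))
```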

INFORM Software Corporation Chief Operating Officer Justin Newell noted, "By leveraging AI and machine learning, manufacturers can derive key benefits, including shorter throughput times, cost savings, improved materials management, optimized procurement, identification of critical paths within the supply chain, and the ability to mitigate potential problems."

The second webinar, on Tuesday, 8 June 2021, 17:00-17:30 CEST (11:00-11:30 AM EDT), will cover "Digitalization in mechanical engineering: Mastering complexity in production."

Its focus will be the growing challenges faced by mechanical and plant engineers stemming from increasing production variants, smaller batch sizes, and daily changes often introduced at short notice. Takeaways will include advice on how to master these daily challenges and leverage an intelligent planning solution to achieve delivery reliability, realistic scheduling, and agility within manufacturing and assembly planning processes.

The final webinar, on Tuesday, 15 June 2021, 17:00-17:30 CEST (11:00-11:30 AM EDT), will be "Increased transparency in project and assembly planning," focused on machine and plant engineering.

"Machine and plant engineering demands transparency, with up-to-date information, visualization of the impacts of postponements on production processes and orders, synchronization of work lists to capacities and materials, as well as a reduction in missing parts, residue, and routine work, all of which can be achieved using intelligent planning solutions," continued Newell. "Through this webinar, participants will learn how to achieve their project management goals while planning and delivering parallel projects on schedule. They will gain insights relating to centralized planning of all resources, from employees and facility space to materials and machines, and how to achieve real-time synchronization with their ERP systems," added Newell.

Register here for the free webinars.

Do the dates not work for you? Register anyway to receive the presentations after the webinars are held.

Read the original here:
Optimize manufacturing with AI, machine learning and digitalization - The Manufacturer

How machine learning can revolutionize the quality of hearing – TechHQ

As machine learning (ML) integrates itself into almost every industry, from automotive and healthcare to banking and manufacturing, the most exciting advancements look as if they are still yet to come. Machine learning, as a subset of artificial intelligence (AI), has been among the most significant technological developments in recent history, with few fields possessing the same potential to disrupt a wide range of industries.

And while many applications of ML technology go unseen, there are countless ways companies are harnessing its power in new and intriguing applications. That said, ML's revolutionary impact is perhaps most apparent when put to use on age-old problems.

Hearing loss is not a new condition by any means; people have suffered from it for centuries. The first electric hearing aid was designed in 1898 by Miller Reese Hutchison, and the first commercially manufactured hearing aids were introduced in 1913. With an estimated 48 million Americans experiencing some form of hearing loss, hearing aids can be a lifeline for many who struggle with the quality of their hearing.

And while it may seem hard to believe, today's most common hearing aids on the market can be painful to wear, having been designed 50-100 years ago. In response to a stagnant area of development, ML is being leveraged alongside deep learning and advanced signal processing techniques at a level of detail previously impossible.

Through the application of software-based solutions, ML algorithms can power hearing aids to detect, predict, and suppress unwanted background noise. Neural network models take structured and unstructured data and augment it with other data sets relating to the spectrum of age, language, and voice types. The data is then refined by being fed into neural network training, which begins a process of ongoing product improvement.
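
As a rough sketch of the noise-suppression idea, the toy model below learns a per-frequency mask that attenuates noise in a magnitude spectrum. The architecture, sizes, and synthetic data are illustrative assumptions, far simpler than anything shipped in a hearing aid.

```python
import torch
import torch.nn as nn

# Toy denoiser: given the magnitude spectrum of a noisy frame,
# predict a [0, 1] mask that attenuates noise-dominated bins.
N_BINS = 257  # e.g., one-sided FFT bins for a 512-sample frame

class MaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BINS, 128), nn.ReLU(),
            nn.Linear(128, N_BINS), nn.Sigmoid(),  # per-bin mask in [0, 1]
        )

    def forward(self, noisy_mag):
        return self.net(noisy_mag)

# Synthetic training pair: "clean speech" spectrum plus noise.
clean = torch.rand(1024, N_BINS)
noise = 0.5 * torch.rand(1024, N_BINS)
noisy = clean + noise
ideal_mask = (clean / (noisy + 1e-8)).clamp(0, 1)  # target: ideal ratio mask

model = MaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), ideal_mask)
    loss.backward()
    opt.step()

# Inference: apply the predicted mask to suppress noise.
denoised = model(noisy) * noisy
```

The "ongoing product improvement" described above would amount to retraining this kind of model as new labeled audio accumulates, then redeploying the updated weights to the device.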

In an interview with Forbes, Andre Esteva, head of medical AI at Salesforce, says that the limits of traditional approaches have been exposed by the manual processes involved: acquiring data, molding it into a usable format, and preparing basic algorithms before deploying them to devices. ML training protocols, on the other hand, Esteva says, automatically process data before updating themselves for redeployment.

The effect is a significant reduction in product feedback cycles and an increase in the range of capabilities available. "The beauty of this approach is that the underlying intelligence improves over time as the neural nets go through iterative training," added Esteva.

As of today, several companies are providing AI-powered hearing aids. The most recent is Whisper, a startup that has raised around US$50 million as it prepares to put its first product into production. Whisper's AI-powered hearing aids self-tune over time, continually improving their performance.

Elsewhere, MicroTech claims its Essentia Edge product scans environments to make changes that boost speech intelligibility, while Widex's Evoke hearing aids combine real-time input with previously learned sounds from users and millions of other listening domains. The goal of introducing machine learning technologies in healthcare is to enhance the experience of patients and users. As intelligent, innovative solutions continue to emerge in a field full of noise, the buzz around revolutionary tech seems to only get louder.

See the rest here:
How machine learning can revolutionize the quality of hearing - TechHQ

Savills, MRI Software Announce Expanded Partnership to Accelerate AI and Machine Learning Capabilities for Knowledge Cubed – KPVI News 6

NEW YORK, April 8, 2021 /PRNewswire/ -- Global real estate services firm Savills today announced that it has expanded its global partnership with MRI Software, a worldwide leader in real estate software.

As part of an extended agreement, Savills will expand its integration of MRI's artificial intelligence-powered data abstraction tool, MRI Contract Intelligence powered by Leverton AI, to include MRI ProLease, a cloud-based solution for lease administration, lease accounting, lease analysis and workplace management. The integrated capabilities will deliver enhanced real estate management applications to clients utilizing Savills' award-winning business intelligence platform, Knowledge Cubed.

"Corporate occupiers require access to data historically locked away in leases to effectively manage and reevaluate their portfolios," said Saurabh Abhyankar, MRI Software's chief product officer. "The expanded integration of Leverton AI automates and simplifies the complex data extraction process, enabling Savills clients to easily access and analyze data from leases, contracts and legal documents."

Savills and MRI will leverage a jointly developed data model to accelerate document abstraction and structuring for corporate occupiers. The proprietary machine-learning algorithm will allow smaller teams to quickly set up digital applications within Knowledge Cubed and highlight actionable insights to enable better management of real estate portfolios.
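
To show the shape of the document-abstraction task, here is a deliberately simple rule-based sketch that pulls a few fields from lease text. Real systems like the one described use trained ML models rather than hand-written patterns; every pattern, field name, and lease clause below is a hypothetical stand-in.

```python
import re

LEASE = """This Lease Agreement is made on January 5, 2021 between
Acme Properties LLC (Landlord) and Example Corp (Tenant) for premises
at 123 Main Street. Base rent shall be $12,500 per month. The term
expires on December 31, 2026."""

# Hypothetical patterns for a handful of common lease fields.
PATTERNS = {
    "monthly_rent": r"\$[\d,]+(?:\.\d{2})?\s+per month",
    "expiration":   r"expires on ([A-Z][a-z]+ \d{1,2}, \d{4})",
    "tenant":       r"and (.+?) \(Tenant\)",
}

def abstract_lease(text):
    """Return a dict of extracted fields (None when not found)."""
    out = {}
    for field, pattern in PATTERNS.items():
        m = re.search(pattern, text)
        out[field] = m.group(m.lastindex or 0) if m else None
    return out

print(abstract_lease(LEASE))
```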

"By integrating our algorithm within Knowledge Cubed applications, we are able to provide clients an unparalleled speed and scale advantage that helps analyze portfolios in real time with access to the source documents in one click," said Patrick McGrath, Savills chief information officer and head of client technologies.

The MRI partnership continues Savills' ongoing investment in innovative client technologies and data partnerships. Launched in 2016, Knowledge Cubed brings together key technologies (e.g., machine learning, cloud, IoT, big data, mobile apps, cybersecurity, and digital contracts) to help clients better understand and optimize global real estate and human capital investments.

During the last 12 months, Savills expanded and signed new partnerships with key partners such as Matterport, CoStar, and CompStak to further invest in the best-in-class technology and data platform designed for corporate occupiers.

About Savills Inc.

As one of the world's leading property advisors, Savills' services span the globe, with 39,000 experts working across 600 offices in the Americas, Europe, Asia Pacific, Africa and the Middle East. Sharply skilled and fiercely dedicated, the firm's integrated teams of consultants and brokers are experts in better real estate. With services in tenant representation, workforce and incentives strategy, workplace strategy and occupant experience, project management, and capital markets, Savills has elevated the potential of workplaces around the corner, and around the world, for 160 years and counting.

For more information, please visit Savills.us and follow us on LinkedIn, Twitter, Instagram and Facebook.

View original content to download multimedia:http://www.prnewswire.com/news-releases/savills-mri-software-announce-expanded-partnership-to-accelerate-ai-and-machine-learning-capabilities-for-knowledge-cubed-301265002.html

SOURCE Savills

Read more from the original source:
Savills, MRI Software Announce Expanded Partnership to Accelerate AI and Machine Learning Capabilities for Knowledge Cubed - KPVI News 6

PODCAST: rise of the machine (learning) – BlueNotes

Jason is working on a few of the complex processes we've been wanting to automate for some time now, and he's seeing some positive results.

"[We're] looking to automate the home loan process - very document driven - trying to condense that, trying to extract data they can send into our decision systems for me to make a decision," Jason says.

"The really exciting part is, in today's world, using the old school techniques [such as neural networks and gradient boosted models], we can make a decision after all those processes have been conducted within four seconds."

A faster decision means customers don't need to find supplementary documentation or spend time waiting for approval. They can get their answer and focus on what's important: getting into their new home.
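
For a loose illustration of the kind of "old school" model mentioned above, here is a small gradient boosted classifier making an approve/decline call on synthetic application features. The feature names, data, and decision rule are invented for the sketch and bear no relation to any bank's actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic loan applications: income, loan amount, credit score.
# All values and the approval rule are invented for illustration.
n = 5000
X = np.column_stack([
    rng.normal(80_000, 25_000, n),    # annual income
    rng.normal(400_000, 150_000, n),  # loan amount
    rng.integers(300, 851, n),        # credit score
])
# Toy ground truth: approve when repayments look serviceable.
y = ((X[:, 1] / np.maximum(X[:, 0], 1) < 6) & (X[:, 2] > 550)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Scoring one application is effectively instantaneous, which is
# how sub-second decisions become possible once document extraction
# feeds structured data into the model.
print("decision:", model.predict([[95_000, 350_000, 710]]))
```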

But it's not just the home loan process that's seen the benefit of new technologies. Our Institutional team has been using machine learning for the past few years, and Sreeram says even three years ago the team saw the promise the tool held. Now, they're seeing results.

"I'm excited because it is really good for our staff. You know, there's so much value added from an individual point of view because banking can be notoriously paper intensive," he says.

"This is a combination of technologies and capabilities. The machine now - the transfer of paper to image, the quality and accuracy of imaging, the ability to read, the ability to interpret and then the ability to process; this is coming together for the first time, at least in my career."

"We have seen cases where 50 per cent of the manual effort before has been [removed]. We have seen cases where our internal times have improved roughly 40 to 50 per cent. So I think it's absolutely made things better."

Although Sreeram reminds us that, as with any technology, this comes with its challenges and requires caution and careful governance.

Original post:
PODCAST: rise of the machine (learning) - BlueNotes

27 million galaxy morphologies quantified and cataloged with the help of machine learning | Penn Today – Penn Today

Research from Penn's Department of Physics and Astronomy has produced the largest catalog of galaxy morphology classification to date. Led by former postdocs Jesús Vega-Ferrero and Helena Domínguez Sánchez, who worked with professor Mariangela Bernardi, this catalog of 27 million galaxy morphologies provides key insights into the evolution of the universe. The study was published in Monthly Notices of the Royal Astronomical Society.

The researchers used data from the Dark Energy Survey (DES), an international research program whose goal is to image one-eighth of the sky to better understand dark energy's role in the accelerating expansion of the universe.

A byproduct of this survey is that the DES data contains many more images of distant galaxies than other surveys to date. "The DES images show us what galaxies looked like more than 6 billion years ago," says Bernardi.

And because DES has millions of high-quality images of astronomical objects, it's the perfect dataset for studying galaxy morphology. "Galaxy morphology is one of the key aspects of galaxy evolution. The shape and structure of galaxies has a lot of information about the way they were formed, and knowing their morphologies gives us clues as to the likely pathways for the formation of the galaxies," Domínguez Sánchez says.

Previously, the researchers had published a morphological catalog for more than 600,000 galaxies from the Sloan Digital Sky Survey (SDSS). To do this, they developed a convolutional neural network, a type of machine learning algorithm, that was able to automatically categorize whether a galaxy belonged to one of two major groups: spiral galaxies, which have a rotating disk where new stars are born, and elliptical galaxies, which are larger and made of older stars that move more randomly than their spiral counterparts.
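
A minimal sketch of a convolutional classifier of this kind, assuming small single-channel galaxy cutouts and two classes; the layer sizes, input resolution, and random stand-in data are placeholders rather than the network the team published.

```python
import torch
import torch.nn as nn

# Toy convolutional classifier for galaxy cutouts.
# Input: 1x64x64 image; output: logits for [elliptical, spiral].
class GalaxyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = GalaxyCNN()
images = torch.rand(8, 1, 64, 64)    # stand-in for survey cutouts
labels = torch.randint(0, 2, (8,))   # 0 = elliptical, 1 = spiral
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                      # one toy training step

# At inference time, softmax gives the per-galaxy probability of
# being spiral vs. elliptical, as reported in the catalog.
probs = torch.softmax(model(images), dim=1)
```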

But the catalog developed using the SDSS dataset was primarily made up of bright, nearby galaxies, says Vega-Ferrero. In their latest study, the researchers wanted to refine their neural network model to classify fainter, more distant galaxies. "We wanted to push the limits of morphological classification and trying to go beyond, to fainter objects or objects that are farther away," Vega-Ferrero says.

To do this, the researchers first had to train their neural network model to classify the more pixelated images from the DES dataset. They first created a training set with previously known morphological classifications, comprising 20,000 galaxies that overlapped between DES and SDSS. Then they created simulated versions of new galaxies, mimicking what the images would look like if they were farther away, using code developed by staff scientist Mike Jarvis.
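
The degradation step can be mimicked with a few standard image operations: shrink the apparent size, dim the flux, blur to imitate atmospheric seeing, and add noise. The function below is a rough sketch under those assumptions, not the DES simulation code developed by Jarvis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def simulate_distant(img, dim=0.3, blur_sigma=1.5, noise_std=0.02,
                     shrink=0.5):
    """Degrade a nearby-galaxy cutout so it resembles a fainter,
    more distant observation. All parameters are illustrative."""
    small = zoom(img, shrink)                     # apparent size shrinks
    faint = small * dim                           # surface brightness drops
    smeared = gaussian_filter(faint, blur_sigma)  # seeing/PSF blur
    noisy = smeared + np.random.normal(0, noise_std, smeared.shape)
    return np.clip(noisy, 0, 1)

bright_galaxy = np.random.rand(64, 64)  # stand-in for a bright SDSS image
training_example = simulate_distant(bright_galaxy)
```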

Once the model was trained and validated on both simulated and real galaxies, it was applied to the DES dataset, and the resulting catalog of 27 million galaxies includes information on the probability of an individual galaxy being elliptical or spiral. The researchers also found that their neural network was 97% accurate at classifying galaxy morphology, even for galaxies that were too faint to classify by eye.

"We pushed the limits by three orders of magnitude, to objects that are 1,000 times fainter than the original ones," Vega-Ferrero says. "That is why we were able to include so many more galaxies in the catalog."

"Catalogs like this are important for studying galaxy formation," Bernardi says about the significance of this latest publication. "This catalog will also be useful to see if the morphology and stellar populations tell similar stories about how galaxies formed."

For the latter point, Domínguez Sánchez is currently combining their morphological estimates with measures of the chemical composition, age, star-formation rate, mass, and distance of the same galaxies. Incorporating this information will allow the researchers to better study the relationship between galaxy morphology and star formation, work that will be crucial for a deeper understanding of galaxy evolution.

Bernardi says that there are a number of open questions about galaxy evolution that both this new catalog, and the methods developed to create it, can help address. The upcoming LSST/Rubin survey, for example, will use similar photometry methods to DES but will have the capability of imaging even more distant objects, providing an opportunity to gain even deeper understanding of the evolution of the universe.

Mariangela Bernardi is a professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania.

Helena Domínguez Sánchez is a former Penn postdoc and is currently a postdoctoral fellow at the Instituto de Ciencias del Espacio (ICE), which is part of the Consejo Superior de Investigaciones Científicas (CSIC).

Jesús Vega-Ferrero is a former Penn postdoc and currently a postdoctoral researcher at the Instituto de Física de Cantabria (IFCA), which is part of the Consejo Superior de Investigaciones Científicas (CSIC).

The Dark Energy Survey is supported by funding from the Department of Energy's Fermi National Accelerator Laboratory, the National Center for Supercomputing Applications, and the National Science Foundation's NOIRLab. A complete list of funding organizations and collaborating institutions is available on the Dark Energy Survey website.

This research was supported by NSF Grant AST-1816330.

Read the original here:
27 million galaxy morphologies quantified and cataloged with the help of machine learning | Penn Today - Penn Today

Rackspace Technology Works with Brave Software to Improve Machine Learning Functionality in the Web Browser – Yahoo Finance

SAN ANTONIO, April 08, 2021 (GLOBE NEWSWIRE) -- Rackspace Technology (NASDAQ: RXT), a leading end-to-end, multicloud technology solutions company, announced today its relationship with Brave Software, which provides a free, open-source, private and secure web browser for PC, Mac, and mobile environments.

Brave gives users a fast and private web experience, helps advertisers achieve better conversions, and increases publishers' revenue share. Its machine learning functionality helps match advertisements in Brave Ads to the content categories in which Brave users would have the most interest, while preserving user privacy.

Brave worked with AWS Premier Consulting Partner Onica, a Rackspace Technology company, to improve the scalability of Brave's software, increase the team's efficiency, and reduce infrastructure costs by 50 percent. Rackspace Technology used a wide range of AWS services to build cloud infrastructure tailored to Brave's needs.

Before working with Rackspace Technology, Brave's processes for training and deploying machine learning models were slow, involving manual steps spanning several days. Brave needed a more robust pipeline and fully automated processes.

"Working with Rackspace Technology and AWS was beneficial to the continued success and scaling of Brave," said Jimmy Secretan, VP of Services and Operations, Brave Software. "It substantially improved the way we created and deployed new models, which has helped us to be much more responsive to advertisers' needs."

"Rackspace Technology is one of only a few providers to have achieved AWS Machine Learning Competency status," said Jeff Deverter, CTO, Solutions at Rackspace Technology. "This unique combination of expertise in AWS services and machine learning made us an ideal partner for Brave."

To learn more about Rackspace Technology's work and capabilities, please visit http://www.rackspace.com.

About Rackspace Technology

Rackspace Technology is a leading end-to-end multicloud technology services company. We can design, build and operate our customers' cloud environments across all major technology platforms, irrespective of technology stack or deployment model. We partner with our customers at every stage of their cloud journey, enabling them to modernize applications, build new products and adopt innovative technologies.


Media Contact
Natalie Silva
Rackspace Technology Corporate Communications
publicrelations@rackspace.com

Read the rest here:
Rackspace Technology Works with Brave Software to Improve Machine Learning Functionality in the Web Browser - Yahoo Finance