Category Archives: Machine Learning

Researchers use AI to predict crime, biased policing in cities – Los Angeles Times

For once, algorithms that predict crime might be used to uncover bias in policing, instead of reinforcing it.

A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities, including Los Angeles.

Instead of justifying more aggressive policing in those areas, however, the researchers hope the technology will lead to changes in policy that result in more equitable, need-based resource allocation, including sending officials other than law enforcement to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour.

The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes.

Chattopadhyay and his colleagues said they wanted to ensure the system would not be abused.

"Rather than simply increasing the power of states by predicting the when and where of anticipated crime, our tools allow us to audit them for enforcement biases, and garner deep insight into the nature of the (intertwined) processes through which policing and crime co-evolve in urban spaces," their report said.

For decades, law enforcement agencies across the country have used digital technology for surveillance and prediction in the belief it would make policing more efficient and effective. But in practice, civil liberties advocates and others have argued that such policies are informed by biased data that contribute to increased patrols in Black and Latino neighborhoods or false accusations against people of color.

Chattopadhyay said previous efforts at crime prediction didn't always account for systemic biases in law enforcement and were often based on flawed assumptions about crime and its causes. Such algorithms gave undue weight to variables such as the presence of graffiti, he said. They focused on specific hot spots while failing to take into account the complex social systems of cities or the effects of police enforcement on crime, he said. The predictions sometimes led to police flooding certain neighborhoods with extra patrols.

His team's efforts have yielded promising results in some places. The tool predicted future crimes as much as one week in advance with roughly 90% accuracy, according to the report.

Running a separate model led to an equally important discovery, Chattopadhyay said. By comparing arrest data across neighborhoods of different socioeconomic levels, the researchers found that crime in wealthier parts of town led to more arrests in those areas, at the same time as arrests in disadvantaged neighborhoods declined.

But the opposite was not true: crime in poor neighborhoods didn't always lead to more arrests, suggesting biases in enforcement, the researchers concluded. The model is based on several years of data from Chicago, but researchers found similar results in seven other large cities: Los Angeles; Atlanta; Austin, Texas; Detroit; Philadelphia; Portland, Ore.; and San Francisco.

The danger with any kind of artificial intelligence used by law enforcement, the researchers said, lies in misinterpreting the results and creating a harmful feedback loop of sending more police to areas that might already feel over-policed but under-protected.

To avoid such pitfalls, the researchers decided to make their algorithm available for public audit so anyone can check whether it's being used appropriately, Chattopadhyay said.

"Often, the systems deployed are not very transparent, and so there's this fear that there's bias built in, and there's a real kind of risk, because the algorithms themselves or the machines might not be biased, but the input may be," Chattopadhyay said in a phone interview.

The model his team developed can be used to monitor police performance. "You can turn it around and audit biases," he said, "and audit whether policies are fair as well."

Most machine learning models in use by law enforcement today are built on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.

Given some of the criticism around the technology, some data scientists have become more mindful of potential bias.

"This is one of a number of growing research papers or models that's now trying to find some of that nuance and better understand the complexity of crime prediction and try to make it both more accurate but also address the controversy," Young, a professor of emergency medicine and informatics at UC Irvine, said of the just-published report.

Predictive policing can also be more effective, he said, if it's used to work with community members to solve problems.

Despite the study's promising findings, it's likely to raise some eyebrows in Los Angeles, where police critics and privacy advocates have long railed against the use of predictive algorithms.

In 2020, the Los Angeles Police Department stopped using a predictive-policing program called Pred-Pol that critics argued led to heavier policing in minority neighborhoods.

At the time, Police Chief Michel Moore insisted he ended the program because of budgetary problems brought on by the COVID-19 pandemic. He had previously said he disagreed with the view that Pred-Pol unfairly targeted Latino and Black neighborhoods. Later, Santa Cruz became the first city in the country to ban predictive policing outright.

Chattopadhyay said he sees how machine learning evokes "Minority Report," the Philip K. Dick story, later a film, set in a dystopian future in which people are hauled away by police for crimes they have yet to commit.

But the effect of the technology is only beginning to be felt, he said.

"There's no way of putting the cat back into the bag," he said.

Originally posted here:
Researchers use AI to predict crime, biased policing in cities - Los Angeles Times

Global Deep Learning Market Is Expected To Reach USD 68.71 Billion At A CAGR Of 41.5% And Forecast To 2027 – Digital Journal

Deep Learning Market Is Expected To Reach USD 68.71 Billion By 2027 At A CAGR Of 41.5 percent.

Maximize Market Research has published a report on the Deep Learning Market that provides a detailed analysis for the forecast period of 2021 to 2027.

Deep Learning Market Scope:

The report provides comprehensive market insights for industry stakeholders, including an explanation of complicated market data in simple language, the industry's history and present situation, and expected market size and trends. The research investigates all industry categories, with an emphasis on key companies such as market leaders, followers, and new entrants. The report includes a full PESTLE analysis for each country. A thorough picture of the competitive landscape of major competitors in the Deep Learning market by goods and services, revenue, financial situation, portfolio, growth plans, and geographical presence makes the study an investor's guide.

Request For Free Sample @https://www.maximizemarketresearch.com/request-sample/25018

Deep Learning Market Overview:

Deep learning, also known as deep structured learning, is a subclass of machine learning that uses layered computer models to analyze data. It is an essential component of data science, which uses statistics and prescriptive analytics to gather, evaluate, and understand massive volumes of data. It also involves the application of artificial intelligence (AI) to mimic how the human brain processes data, generates trends, and makes decisions. This technology is widely utilized in facial recognition software, natural language processing (NLP) and voice synthesis software, self-driving vehicles, and language translation services, and it performs several roles in commerce, healthcare, automobile, farming, military, and industrial settings.
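The "layered computer models" in the definition above can be illustrated with a minimal, hypothetical sketch (not drawn from any product discussed in this report): each layer transforms the output of the previous one, which is what makes the model "deep".

```python
import numpy as np

def relu(x):
    """Rectified linear unit: the nonlinearity applied between layers."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)

# Layer 1 maps 4 input features to 8 hidden units; layer 2 maps those to 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    """A stack of layers: each transforms the previous layer's output."""
    h = relu(x @ W1 + b1)   # hidden representation
    return h @ W2 + b2      # final prediction

x = rng.normal(size=(3, 4))   # a batch of 3 examples with 4 features each
print(forward(x).shape)       # (3, 1): one prediction per example
```

In a real system the weights would be learned from data rather than drawn at random, and the stack would be much deeper, but the layered structure is the same.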

Deep Learning Market Dynamics:

The rising usage of cloud-based services, as well as the large-scale generation of unstructured data, has raised the demand for deep learning solutions. Besides that, the growing number of robotic devices, such as Sophia, produced by Hanson Robotics, as well as the growing implementations of deep learning in recent years for image/speech recognition, data processing, and language explanations, are some of the key drivers of the deep learning industry. The increased efforts of key market participants in developing machine learning and deep learning techniques in the field are expected to drive market growth. Likewise, the rapid increase in the volume of data created in numerous end-use sectors is estimated to drive industry growth. Also, the increased need for human-machine interaction is creating new possibilities for software vendors to supply enhanced services and skills.

Furthermore, the predominance of deep learning incorporation with big data analytics, as well as the rapidly increasing need to boost processing capacity and reduce hardware costs, owing to deep learning algorithms' capacity to execute faster on a GPU than on a CPU, is culminating in public adoption of deep learning technologies across industries, which is estimated to drive the global growth.

Various setbacks are anticipated to hinder the overall market growth. The lack of standards and protocols, as well as a lack of technical expertise in deep learning, are limiting industry growth. Additionally, complex integrated systems, as well as the integration of deep learning solutions and software into legacy systems, are time-consuming processes that impede growth.

Deep Learning Market Regional Insights:

North America is anticipated to dominate the global Deep Learning market at the end of the forecast period. By 2027, North America is expected to have the greatest market share, of nearly 40 percent. This is due to increased investment in artificial intelligence and neural networks. The region's significant use of imaging and monitoring applications is expected to provide new growth opportunities over the forecast period. Likewise, the region is a modern technology pioneer, allowing enterprises to expedite the implementation of deep learning capability.

Deep Learning Market Segmentation:

By Component:

By Application:

By Architecture Industry:

By End-Use Industry:

Deep Learning Market Key Competitors:

To Get A Copy Of The Sample of the Deep Learning Market, Click Here @https://www.maximizemarketresearch.com/market-report/global-deep-learning-market/25018/

About Maximize Market Research:

Maximize Market Research is a multifaceted market research and consulting company with professionals from several industries. Some of the industries we cover include medical devices, pharmaceutical manufacturers, science and engineering, electronic components, industrial equipment, technology and communication, cars and automobiles, chemical products and substances, general merchandise, beverages, personal care, and automated systems. To mention a few, we provide market-verified industry estimations, technical trend analysis, crucial market research, strategic advice, competition analysis, production and demand analysis, and client impact studies.

Contact Maximize Market Research:

3rd Floor, Navale IT Park, Phase 2

Pune Bangalore Highway, Narhe,

Pune, Maharashtra 411041, India

[emailprotected]

Read the original here:
Global Deep Learning Market Is Expected To Reach USD 68.71 Billion At A CAGR Of 41.5% And Forecast To 2027 - Digital Journal

Europe Machine Learning Market Is Likely to Experience a Tremendous Growth in Near Future | Microsoft, Google Inc., IBM Watson, Amazon, Intel,…

Quadintel published a new report on the Europe Machine Learning Market. The research report consists of thorough information about demand, growth, opportunities, challenges, and restraints. In addition, it delivers an in-depth analysis of the structure and potential of global and regional industries.

The value of the machine learning market in Europe is expected to reach USD 3.96 Bn by 2023, expanding at a compound annual growth rate (CAGR) of 33.5% during 2018-2023.

Machine learning is the ability of computers to learn through experience to improve their performance. Separate hand-crafted algorithms and constant human intervention are not required to train the computer; it merely learns from its past experiences and examples. In recent times, this market has gained utmost importance due to the increased availability of data and the need to process that data to obtain meaningful insights. Europe stands in second position after North America in the machine learning market.
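The claim that the computer "merely learns from its past experiences and examples" can be made concrete with a deliberately tiny, hypothetical sketch: fitting a single weight to example data by gradient descent, with no hand-written rules.

```python
# Past experience: noiseless examples of the hidden rule y = 2x.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0      # the single parameter the machine must learn
lr = 0.05    # learning rate

for _ in range(200):                 # repeatedly revisit the examples
    for x, y in examples:
        grad = 2 * (w * x - y) * x   # gradient of the squared error (w*x - y)**2
        w -= lr * grad               # nudge w to reduce the error

print(round(w, 3))   # 2.0: the rule was recovered from examples alone
```

No one told the program that the answer was 2; it converged there purely by reducing its error on past examples, which is the essence of the learning described above.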

Request To Download Sample of This Strategic Report: https://www.quadintel.com/request-sample/europe-machine-learning-market/QI042

The market can be classified into four primary segments based on components, service, organization size and application.

Based on region, the market is segmented into the European Union five (EU5), rest of Europe.

Based on components, the market can be segmented into software tools, cloud and web-based application programming interfaces (APIs) and others.

Based on service, the sub-segments are composed of professional services and managed services.

Based on organization size, the sub-segments include small and medium enterprises (SMEs) and large enterprises.

Based on application, the market is divided into the sub-segments, banking, financial services and insurance (BFSI), automotive, healthcare, government and others.

The trend of supporting, educating, enforcing and steering the economy towards a machine learning-friendly environment is seen to be followed throughout Europe.

European countries are using machine learning technologies to make ultra-accurate, real-time forecasts of electricity demand and supply, successfully bridging the gap between additional renewable energy and excess power fed into the grid and thereby saving energy and cost.

Key growth factors

The world-class research facilities, the emerging start-up culture, and the innovation and commercialisation of machine intelligence technologies are giving thrust to the machine intelligence market in Europe. Amongst all regions, Europe has the largest share of intraregional data flow. This, together with machine learning technologies, is boosting the market in Europe. The extensive usage of machine learning technology across all facets of business in the economy is proving to be a big thrust to the machine learning market. Profound usage has been found in sectors such as agriculture, healthcare and media, for optimisation of prices and for carrying out predictive maintenance in manufacturing.

Threats and key players

Investors in Europe are more concerned about the ROI from investing in the machine learning market. The adoption of machine learning by start-ups in Europe has underdelivered, since research suggests that only 5% of the start-ups investing in machine learning end up with revenue of more than $50 Mn. Also, opportunities for external investments are bleak.

The machine learning market is in a stage of infancy; there is a gap between the skills required and those the workforce currently has, and it takes a considerable amount of time to pick up these skills. Europeans are also concerned about the penetration of machine learning into their lives and how it is going to impact employment across the region. Concerns surrounding these factors are hindering further developments in the machine learning market.

Given that machine intelligence depends on the easy availability of data, the practice of data minimisation and data privacy standards act as a barrier to the further development of the machine learning market in Europe.

The key players are Microsoft, Google Inc., IBM Watson, Amazon, Intel, Facebook and Apple.

DOWNLOAD FREE SAMPLE REPORThttps://www.quadintel.com/request-sample/europe-machine-learning-market/QI042

What is covered in the report?

1. Overview of the machine learning market in Europe.
2. Market drivers and challenges in the machine learning market in Europe.
3. Market trends in the machine learning market in Europe.
4. Historical, current and forecasted market size data for the machine learning market in Europe.
5. Historical, current and forecasted market size data for the components segment (software tools, cloud and web-based APIs and others).
6. Historical, current and forecasted market size data for the service segment (professional services and managed services).
7. Historical, current and forecasted market size data for the organisation size segment (SMEs and large enterprises).
8. Historical, current and forecasted market size data for the application segment (BFSI, automotive, healthcare, government and others).
9. Historical, current and forecasted regional (the European Union five (EU5), rest of Europe) market size data for the machine learning market.
10. Analysis of the machine learning market in Europe by value chain.
11. Analysis of the competitive landscape and profiles of major competitors operating in the market.

Why buy?

1. Understand the demand for machine learning to determine the viability of the market.
2. Determine the developed and emerging markets for machine learning.
3. Identify the challenge areas and address them.
4. Develop strategies based on the drivers, trends and highlights for each of the segments.
5. Evaluate the value chain to determine the workflow.
6. Recognize the key competitors of this market and respond accordingly.
7. Gain knowledge of the initiatives and growth strategies taken by the major companies and decide on the direction of further growth.

The report further discusses the market opportunity, compound annual growth rate (CAGR) growth rate, competition, new technology innovations, market players analysis, government guidelines, export and import (EXIM) analysis, historical revenues, future forecasts etc. in the following regions and/or countries:

North America (U.S. & Canada): Market Size, Y-O-Y Growth, Market Players Analysis & Opportunity Outlook
Latin America (Brazil, Mexico, Argentina, Rest of Latin America): Market Size, Y-O-Y Growth, Market Players Analysis & Opportunity Outlook
Europe (U.K., Germany, France, Italy, Spain, Hungary, Belgium, Netherlands & Luxembourg, NORDIC (Finland, Sweden, Norway, Denmark), Ireland, Switzerland, Austria, Poland, Turkey, Russia, Rest of Europe): Market Size, Y-O-Y Growth, Market Players Analysis & Opportunity Outlook
Asia-Pacific (China, India, Japan, South Korea, Singapore, Indonesia, Malaysia, Australia, New Zealand, Rest of Asia-Pacific): Market Size, Y-O-Y Growth, Market Players Analysis & Opportunity Outlook
Middle East and Africa (Israel, GCC (Saudi Arabia, UAE, Bahrain, Kuwait, Qatar, Oman), North Africa, South Africa, Rest of Middle East and Africa): Market Size, Y-O-Y Growth, Market Players Analysis & Opportunity Outlook

Request full Report Description, TOC, Table of Figure, Chart, etc. @ https://www.quadintel.com/request-sample/europe-machine-learning-market/QI042

Table of Contents:

About Quadintel:

We are the best market research reports provider in the industry. Quadintel believes in providing quality reports to clients to meet the top line and bottom line goals which will boost your market share in today's competitive environment. Quadintel is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us:

Quadintel
Email: sales@quadintel.com
Address: Office 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.quadintel.com/

Read the original here:
Europe Machine Learning Market Is Likely to Experience a Tremendous Growth in Near Future | Microsoft, Google Inc., IBM Watson, Amazon, Intel,...

TickerWin Releases Report on ‘How Blockchain is Improving the Efficiency of AI and Machine Learning’ – Yahoo Finance

HONG KONG, CHINA / ACCESSWIRE / July 2, 2022 / TickerWin, one of the leading market research companies, has released a report on 'How Blockchain is Improving the Efficiency of AI and Machine Learning'. AI, machine learning, and blockchain technologies have boosted all sectors.

The main aim of the financial sector has been to provide customer-centric solutions. User experience is a critical parameter, and for the new generation of customers, speed and ease of access without compromising security are essential. This generation loathes going to the bank, filling out documents, and printing and signing them, so the goal is to automate financial processes entirely and get rid of manual work completely. These technologies have enabled companies to process huge data sets and reach conclusions thanks to their ability to analyze patterns in real time, helping with quick decision-making. They improve effectiveness while working efficiently, making many banking processes both time-saving and cost-effective. New technologies increase employee productivity by 40-50% in many industries.

Blockchain is frequently mentioned in connection with cryptocurrencies. However, the banking industry is also implementing it to improve workflow dynamics. Blockchain technology provides highly secure transactions on both ends, which helps prevent fraud and eases compliance with audits and regulatory requirements. With the help of blockchain and DeFi, transfers, payments and investments can become faster and error-free. It is said that blockchain will impact the packaging sector with the highest intensity in 2022. Needless to say, blockchain and the security it provides are here to stay.
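The tamper-evidence property behind these fraud-prevention claims can be sketched in a few lines. This is a toy illustration, not any production blockchain: each block commits to the hash of its predecessor, so altering an earlier payment breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payment):
    """Each new block stores its predecessor's hash, chaining them together."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payment": payment})
    return chain

def verify(chain):
    """A tampered block invalidates every link that follows it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 10})
append_block(chain, {"from": "bob", "to": "carol", "amount": 4})
print(verify(chain))                    # True: the chain is intact

chain[0]["payment"]["amount"] = 1000    # a fraud attempt on an old payment
print(verify(chain))                    # False: the tampering is detected
```

Real blockchains add distributed consensus and cryptographic signatures on top, but this hash-linking is the mechanism that makes past transactions hard to falsify quietly.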

According to TickerWin's view, new technologies have reduced human error and made transactions safer, all for a better customer experience. By 2030, financial agencies will be able to reduce costs by 20-30%, saving trillions. Many Fin-Tech firms are continuously researching areas of AI that will help banks with fraud detection, customer service, credit service and loan decisions.

In addition, the e-shopping market has substantially increased in the last two years; there is a high demand for hassle-free digital payment options. Therefore, a majority of the e-shopping players have collaborated with Fin-Tech firms to create custom gateways and portals to ensure that the customers do not leave the site due to payment options. The smooth check-out process has become a crucial part of e-shopping sales as methods for a swift and effective payment process are essential to enhance conversion rates. According to a recent study, there is an increase of 5% in the global cross-border payment flow. Because of e-shopping, international transactions offer enormous growth potential for even small businesses as most people expect easy and simple payment solutions.

About TickerWin

TickerWin offers marketing research reports on industry trends, especially in the AI, Cloud Computing, AR/VR, Big Data, NFT, Cryptocurrency, and DeFi fields. It offers customers real-time visibility, transparency, and traceability by tracking a project's database throughout its complete lifecycle on an immutable ledger, with continuous insights.

Media Contact

Company: TickerWin Marketing Research Ltd
Contact: Ronald Luo
Address: Room 12C, 22/G, Sheung Wan Building, 345 Queen's Road Central, HKSAR
Email: support@tickerwin.com
Website: https://www.TickerWin.com

SOURCE: TickerWin Marketing Research Ltd

View source version on accesswire.com: https://www.accesswire.com/707438/TickerWin-Releases-Report-on-How-Blockchain-is-Improving-the-Efficiency-of-AI-and-Machine-Learning

Original post:
TickerWin Releases Report on 'How Blockchain is Improving the Efficiency of AI and Machine Learning' - Yahoo Finance

People who regularly talk to AI chatbots often start to believe they’re sentient, says CEO – The Register

In brief Numerous people start to believe they're interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that allows users to design their own virtual companions.

People can customize how their chatbots look and pay for extra features like certain personality traits on Replika. Millions have downloaded the app and many chat regularly with their made-up bots. Some even begin to think their digital pals are real, sentient entities.

"We're not talking about crazy people or people who are hallucinating or having delusions," the company's founder and CEO, Eugenia Kuyda, told Reuters. "They talk to AI and that's the experience they have."

A Google engineer made headlines last month when he said he believed one of the company's language models was conscious. Blake Lemoine was largely ridiculed, but he doesn't seem to be alone in anthropomorphizing AI.

These systems are not sentient, however, and instead trick humans into thinking they have some intelligence. They mimic language and regurgitate it somewhat randomly without having any understanding of language or the world they describe.

Still, Kuyda said humans can be swayed by the technology.

"We need to understand that [this] exists, just the way people believe in ghosts," Kuyda said. "People are building relationships and believing in something."

The European Union's AI Act, a proposal to regulate the technology, is still being debated and some experts are calling for a ban on automated lie detectors.

Private companies provide the technology to government officials to use at borders. AI algorithms detect and analyse things like a person's eye movement, facial expression, and tone to try and discern if someone might not be telling the truth. But activists and legal experts believe it should be banned in the EU under the upcoming AI Act.

"You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," Petra Molnar, an associate director of the nonprofit Refugee Law Lab, told Wired. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."

Trying to detect whether someone might be lying using visual and physical cues isn't exactly a science. Standard polygraph tests are shaky, and it's not clear that using more automated methods necessarily means it's more accurate. Using such risky technology on vulnerable people like refugees isn't ideal.

Surprise, surprise: AI algorithms designed to predict someone's age from images aren't always accurate.

In an attempt to crack down on young users lying about their age on social media platforms, Meta announced it was working with Yoti, a computer vision startup, to verify people's ages. Those who manually change their date of birth to register as over 18 have the option of uploading a video selfie, and Yoti's technology is then used to predict whether they look mature enough.

But its algorithms aren't always accurate. Reporters from CNN, who tested an online demo of a different version of the software on their own faces, found the results were hit or miss. Yoti's algorithms predicted a correct target age range for some, but in one case were off by several years, predicting someone looked 17-21 when they were actually in their mid-30s.

The system analyzing videos from Meta users reportedly struggles more with estimating the ages of teenagers from 13 to 17 who have darker skin tones. It's tricky for humans to guess someone's age just by looking at them, and machines probably don't fare much better.

Read more here:
People who regularly talk to AI chatbots often start to believe they're sentient, says CEO - The Register

Discover a promising engineering education at the University of North Carolina at Charlotte – Study International News

In the heart of North Carolina lies its urban research university: the University of North Carolina at Charlotte (UNC Charlotte). Here, 6,300 graduate students access a top-notch education through more than 175 exemplary graduate programmes. Together with UNC Charlotte, they are set to fuel the American innovation system to shape the future of North Carolina and beyond.

Its Department of Electrical and Computer Engineering (ECE) offers dynamic bachelor's, master's and doctoral programmes covering numerous engineering disciplines such as electronic and electrical systems, electromagnetics, information processing, communications and networking, and control systems, among others. These programmes have one thing in common: they strike a balance between theory and practical knowledge for a well-rounded education. Little wonder why engineering aspirants flock to UNC Charlotte.

The ECE's newest Master of Science in Computer Engineering (MSCpE) launched in the fall of 2021 with only 22 students. For the upcoming 2022 fall term, the MSCpE programme received over 200 applications. Such numbers are a testament to the promising engineering education provided by UNC Charlotte.

With UNC Charlotte's exemplary reputation and academic excellence, landing a highly successful internship is possible. Source: UNC Charlotte

MSCpE students gain advanced knowledge of current and future generations of computer hardware and software technologies. They explore three focus areas: computer architecture and hardware design, computer systems and applications software, and distributed and real-time systems. They pursue research that covers computer architecture; VHDL; hardware security and trust; cloud-native application architecture; AI; machine learning; the Internet of Things (IoT); robotics; computer networks; VLSI systems design; and heterogeneous computing, to name a few. Such in-depth learning develops highly sought-after experts in the field.

ECE aims to develop human and intellectual resources in electrical and computer engineering disciplines, including machine learning, AI, deep learning and computer vision. It regularly develops new courses and research projects so students can learn about various theories and their applications.

Associate Professor Dr. Jeremy Holleman's Machine Learning for IoT course is an excellent example. Here, students work on projects that teach them to build, train and deploy modern machine learning algorithms (neural networks) on battery-powered IoT devices based on microcontrollers (MCUs). They learn the principles of maximising performance and minimising cost, power and time while porting neural network-based learning models onto constrained hardware.
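One core technique behind porting neural networks onto constrained hardware of the kind this course targets is weight quantization. The sketch below is a hypothetical simplification of post-training int8 quantization (real MCU frameworks such as TensorFlow Lite for Microcontrollers add refinements like per-channel scales and zero points): weights are stored as 8-bit integers plus a single scale factor, cutting memory four-fold relative to float32.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction, used to check the accuracy cost."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(64, 64)).astype(np.float32)  # a stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes, q.nbytes)   # 16384 4096: int8 storage is 4x smaller
print(float(np.max(np.abs(w - w_hat))) < scale)   # True: error below one quantization step
```

On an MCU the int8 weights also enable faster integer arithmetic, which is where the power and latency savings mentioned above come from.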

The Department of Electrical and Computer Engineering regularly develops new courses and research projects so students can learn about various theories and their applications. Source: UNC Charlotte

At ECE, graduate students are encouraged to get involved with industrial internships while pursuing their master's or PhD. Plus, with UNC Charlotte's exemplary reputation and academic excellence, landing a highly successful internship is possible.

Graduate Shyamal Patel from Gujarat, India, can attest to this. In 2015, he joined the Master of Science in Electrical Engineering (MSEE) programme with a focus on power systems. Shortly after completing his master's, Patel landed a job at Smarter Grid Solutions, a company later acquired by Mitsubishi. His academic journey, however, was far from over.

In 2018, he returned to UNC Charlotte to pursue his PhD, working on the Department of Energy's sponsored research project on data-driven management techniques for distribution grids with high penetration. He would go on to win the 2022 Outstanding Graduate Research Assistant Award. The best part? Patel also received a job offer to work at Raleigh-based Hitachi Energy.

Many students can only dream of working, or even interning, at the globally renowned American automotive company Tesla, but graduate Xiwen Xu lived the dream. Xu landed the internship during her graduate studies in ECE, and shortly after her graduation, she received an offer for a full-time position. Such an achievement does not go unrecognised at ECE. In 2022, ECE awarded Xu the Outstanding Graduate Student award.

Meanwhile, graduate student Shobhit Aggarwal is currently pursuing his PhD in low power wide area networks for IoT applications. He has been interning with Oxit, a Charlotte startup company, for the last two years, working on the development of IoT solutions using state-of-the-art LPWAN technologies. In his free time, Aggarwal volunteers during the fall and spring terms.

These graduates are just the cream of the crop. Many ECE graduates who have completed thesis research and coursework on AI, deep learning, and hardware implementation of AI algorithms in the last two years have found employment with reputable companies such as Intel, Qualcomm, Facebook, Bank of America, the Electric Power Research Institute (EPRI), Nvidia and Siemens, among others. Discover how you can be one of these graduates here.

Follow UNC Charlotte on Facebook, Instagram, Twitter, YouTube and LinkedIn

View post:
Discover a promising engineering education at the University of North Carolina at Charlotte - Study International News

Neural network based successor representations to form cognitive maps of space and language | Scientific Reports – Nature.com


Originally posted here:
Neural network based successor representations to form cognitive maps of space and language | Scientific Reports - Nature.com

Importance of Machine Learning Algorithms in Predicting Early Revision Surgery – Physician’s Weekly

Revision total hip arthroplasty (THA) is a technically more difficult surgical procedure than primary THA and is associated with greater morbidity, mortality, and healthcare expenditures. As a result, a better knowledge of the risk factors for early revision is required in order to develop techniques that reduce the probability of patients needing early revision. For a study, researchers sought to create and verify new machine learning (ML) models for predicting early revision after primary THA.

A total of 7,397 patients who underwent primary THA were assessed, with 566 patients (6.6%) having confirmed early revision THA (<2 years after index THA). Electronic patient records were carefully evaluated for medical demographics, implant characteristics, and surgical factors related to early revision THA. Six machine learning methods were constructed to predict early revision THA, and their performance was evaluated using discrimination, calibration, and decision curve analysis.

The Charlson Comorbidity Index, a body mass index of more than 35 kg/m², and depression were the best predictors of early revision after primary THA. In addition, all six ML models performed well in discrimination (area under the curve >0.80), calibration, and decision curve analysis. The study used ML models to predict early revision surgery for individuals after primary THA. The findings revealed that all six candidate models perform well in discrimination, calibration, and decision curve analysis, underlining the potential of these models to aid patient-specific preoperative estimation of a greater risk of early revision THA in clinical practice.
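The study's data and models are not public, but the "discrimination" metric it reports, area under the ROC curve, has a simple interpretation: the probability that a randomly chosen early-revision patient receives a higher predicted risk than a randomly chosen non-revision patient. A minimal sketch with invented labels and risk scores:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """AUC via the Mann-Whitney interpretation: the probability that a random
    positive (early revision) is scored higher than a random negative,
    counting ties as one half."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Made-up predicted revision risks for 8 patients (1 = revised within 2 years)
y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.81, 0.30, 0.45, 0.52, 0.11, 0.74, 0.40, 0.25]
print(round(roc_auc(y, p), 3))  # -> 0.933
```

A model whose AUC exceeds 0.80, as all six candidates here did, separates revision from non-revision cases far better than chance (AUC 0.5).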

Reference: journals.lww.com/jaaos/Abstract/2022/06010/The_Utility_of_Machine_Learning_Algorithms_for_the.4.aspx

More here:
Importance of Machine Learning Algorithms in Predicting Early Revision Surgery - Physician's Weekly

How AI and Machine Learning Are Ready To Change the Game for Data Center Operations – Data Center Knowledge

Today's data centers face a challenge that, at first glance, looks almost impossible to resolve. While operations have never been busier, teams are pressured to reduce their facilities' energy consumption as part of corporate carbon reduction goals. And, as if that wasn't difficult enough, dramatically rising electricity prices are placing real stress on data center budgets.

With data centers focused on supporting the essential technology services that people increasingly demand in their personal and professional lives, it's not surprising that data center operations have never been busier. Driven by trends that show no sign of slowing down, we're seeing massively increased data usage associated with video, storage and compute demands, smart IoT integrations, and 5G connectivity rollouts. However, despite these escalating workloads, the unfortunate reality is that many of today's critical facilities simply aren't running efficiently enough.

Given that the average data center operates for over 20 years, this shouldn't really be a surprise. Efficiency is invariably tied to a facility's original design, based on expected IT loads that have long since been overtaken. At the same time, change is a constant factor, with platforms, equipment design, topologies, power density requirements and cooling demands all evolving with the continued drive for new applications. The result is a global data center infrastructure that regularly struggles to match current and planned IT loads to its critical infrastructure. This will only be exacerbated as data center demands increase, with analyst projections suggesting that workload volumes will continue growing at around 20% a year between now and 2025.

Traditional data center approaches are struggling to meet these escalating requirements. Prioritizing availability is largely achieved at efficiency's expense, with too much reliance still placed on operator experience and on trusting that assumptions are correct. Unfortunately, the evidence suggests that this model is no longer realistic. EkkoSense research reveals that an average of 15% of IT racks in data centers operate outside ASHRAE's temperature and humidity guidelines, and that customers strand up to 60% of their cooling capacity due to inefficiencies. And that's a problem, with the Uptime Institute estimating the global cost of inefficient cooling and airflow management at around $18bn. That's equivalent to some 150bn wasted kilowatt hours.
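Taken together, the $18bn and 150bn kWh figures imply an assumed electricity price of roughly $0.12 per kWh, which is easy to sanity-check:

```python
# Sanity check of the implied electricity price behind the Uptime Institute
# figures quoted above: cost of waste divided by wasted energy.
wasted_usd = 18e9    # ~$18bn attributed to inefficient cooling and airflow
wasted_kwh = 150e9   # ~150bn kWh of wasted energy
print(wasted_usd / wasted_kwh)  # -> 0.12 (USD per kWh)
```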

With 35% of the energy used in a data center going to support the cooling infrastructure, it's clear that traditional performance optimization approaches are missing a huge opportunity to unlock efficiency improvements. EkkoSense data indicates that a third of unplanned data center outages are triggered by thermal issues. Finding a different way to manage this problem can give operations teams a way to secure both availability and efficiency improvements.

Limitations of traditional monitoring

Unfortunately, only around 5% of M&E teams currently monitor and report their data center equipment temperatures on a rack-by-rack basis. Additionally, DCIM and traditional monitoring solutions can provide trend data and can be set up to raise alerts when breaches occur, but that is where they stop. They lack the analytics to provide deeper insight into the cause of issues, how to resolve them, and how to avoid them in the future.

Operations teams recognize that this kind of traditional monitoring has its limitations, but they also know that they simply don't have the resources and time to take the data they have and convert it from background noise into meaningful actions. The good news is that technology solutions are now available to help data centers tackle this problem.

It's time for data centers to go granular with machine learning and AI

The application of machine learning and AI creates a new paradigm for how to approach data center operations. Instead of being swamped by too much performance data, operations teams can now take advantage of machine learning to gather data at a much more granular level, meaning they can start to assess how their data center is performing in real time. The key is to make this accessible, and smart 3D visualizations are a great way of making it easy for data center teams to interpret performance data at a deeper level: for example, by showing changes and highlighting anomalies.

The next stage is to apply machine learning and AI analytics to provide actionable insights. By augmenting measured datasets with machine learning algorithms, data center teams can immediately benefit from easy-to-understand insights that support their real-time optimization decisions. The combination of granular data collection every five minutes and AI/machine learning analytics allows operations teams not just to see what is happening across their critical facilities, but also to find out why, and what exactly they should do about it.
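As a minimal sketch of the simplest rack-level check implied above (hypothetical sensor values, not EkkoSense's actual analytics), flagging inlet temperatures outside ASHRAE's recommended 18–27 °C envelope might look like:

```python
# Hypothetical rack-level thermal check: flag readings outside the ASHRAE
# recommended inlet-temperature envelope of 18-27 C. Sensor values are
# made up for illustration.
ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0

readings = {            # rack id -> latest inlet temperature (C)
    "rack-01": 22.5,
    "rack-02": 29.1,    # too hot: cooling shortfall or airflow issue
    "rack-03": 16.4,    # too cold: likely overcooled (stranded capacity)
    "rack-04": 25.0,
}

def out_of_envelope(temps, low=ASHRAE_LOW_C, high=ASHRAE_HIGH_C):
    """Return the racks whose latest reading breaches the envelope."""
    return {rack: t for rack, t in temps.items() if not low <= t <= high}

alerts = out_of_envelope(readings)
print(alerts)  # -> {'rack-02': 29.1, 'rack-03': 16.4}
```

The ML layer the article describes goes further than a threshold check, learning each rack's normal behavior and anomalies from five-minute time series, but the per-rack granularity is the prerequisite for both.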

AI- and machine learning-powered analytics can also uncover the insights required to recommend actionable changes across key areas such as optimum set points, floor grille layouts, cooling unit operation and fan speed adjustments. Thermal analysis will also indicate optimum rack locations. And because AI enables real-time visualizations, data center teams can quickly gain immediate performance feedback on any changes they action.

Helping data center operations to make an immediate difference

Given the pressure to reduce carbon emissions and minimize the impact of electricity price increases, data center teams need new levels of optimization support if they are to deliver against their reliability and efficiency goals.

Taking advantage of the latest machine learning and AI-powered data center optimization approaches can certainly make a difference by cutting cooling energy and usage, with results achievable within weeks. By bringing granular data to the forefront of their optimization plans, data center teams have already been able not only to remove thermal and power risk, but also to cut cooling energy consumption costs and carbon emissions by an average of 30%. It's hard to ignore the impact these kinds of savings can have, particularly during a period of rapid electricity price increases. The days of trading off risk and availability for optimization are a thing of the past, with the power of AI and machine learning at the forefront of data center operations.

Related: Scale Your Machine Learning with MLOps

Want to know more? Register for Wednesday's AFCOMwebinar on the subject here.

About the author

Tracy Collins is Vice President of EkkoSense Americas, the company that enables true M&E capacity planning for power, cooling and space. He was previously CEO at Simple Helix, a leading Alabama-based Tier III data center operator.

Tracy has over 25 years of in-depth data center industry experience, having previously served as Vice President of IT Solutions for Vertiv and, before that, with Emerson Network Power. In his role, Tracy is committed to challenging traditional approaches to data center management, particularly in terms of solving the optimization challenge of balancing increased data center workloads while also delivering against corporate energy saving targets.

Read the rest here:
How AI and Machine Learning Are Ready To Change the Game for Data Center Operations - Data Center Knowledge

The Global Machine learning as a Service Market size is expected to reach $36.2 billion by 2028, rising at a market growth of 31.6% CAGR during the…

New York, June 29, 2022 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Machine learning as a Service Market Size, Share & Industry Trends Analysis Report By End User, By Offering, By Organization Size, By Application, By Regional Outlook and Forecast, 2022–2028" - https://www.reportlinker.com/p06289268/?utm_source=GNW Machine learning as a service (MLaaS) refers to a group of cloud computing services that provide machine learning technologies. It is designed to include artificial intelligence (AI) and cognitive computing functionalities.

Increased demand for cloud computing, as well as growth connected with artificial intelligence and cognitive computing, are major growth drivers for the machine learning as a service industry. Growth in demand for cloud-based solutions such as cloud computing, a rise in the adoption of analytical solutions, growth of the artificial intelligence and cognitive computing market, increased application areas, and a scarcity of trained professionals are all influencing the machine learning as a service market.

As more businesses migrate their data from on-premise storage to cloud storage, the necessity for efficient data organization grows. Since MLaaS platforms are essentially cloud providers, they enable solutions to appropriately manage data for machine learning experiments and data pipelines, making it easier for data engineers to access and process the data.

For organizations, MLaaS providers offer capabilities like data visualization and predictive analytics. They also provide APIs for sentiment analysis, facial recognition, creditworthiness evaluations, corporate intelligence, and healthcare, among other things. The actual computations behind these services are abstracted away by MLaaS providers, so data scientists don't have to worry about them. For machine learning experimentation and model construction, some MLaaS providers even feature a drag-and-drop interface.

COVID-19 Impact

The COVID-19 pandemic has had a substantial impact on the health, economic, and social systems of numerous countries. It has resulted in millions of fatalities across the globe and has left economic and financial systems in tatters. Knowledge about individual-level susceptibility variables can help people better understand and cope with their psychological, emotional, and social well-being.

Artificial intelligence technology is likely to aid in the fight against the COVID-19 pandemic. COVID-19 cases are being tracked and traced in several countries utilizing population monitoring approaches enabled by machine learning and artificial intelligence. Researchers in South Korea, for example, track coronavirus cases using surveillance camera footage and geo-location data.

Market Growth Factors

Increased Demand for Cloud Computing and a Boom in Big Data

The industry is growing due to the increased acceptance of cloud computing technologies and the use of social media platforms. Cloud computing is now widely used by all companies that supply enterprise storage solutions. Data analysis is performed online using cloud storage, giving the advantage of evaluating real-time data collected on the cloud. Cloud computing enables data analysis from any location and at any time. Moreover, using the cloud to deploy machine learning allows businesses to get useful data, such as consumer behavior and purchasing trends, virtually from linked data warehouses, lowering infrastructure and storage costs. As a result, the machine learning as a service business is growing as cloud computing technology becomes more widely adopted.

Use of Machine Learning to Fuel Artificial Intelligence Systems

Machine learning is used to fuel reasoning, learning, and self-correction in artificial intelligence (AI) systems. Expert systems, speech recognition, and machine vision are examples of AI applications. The rise in the popularity of AI is due to current efforts such as big data infrastructure and cloud computing. Top companies across industries, including Google, Microsoft, and Amazon (Software & IT); Bloomberg, American Express (Financial Services); and Tesla and Ford (Automotive), have identified AI and cognitive computing as a key strategic driver and have begun investing in machine learning to develop more advanced systems. These top firms have also provided financial support to young start-ups in order to produce new creative technology.

Market Restraining Factors

Technical Restraints and Inaccuracies of ML

The ML platform provides a plethora of advantages that aid market expansion. However, several factors are projected to impede it. The presence of inaccuracies in these algorithms, which are sometimes immature and underdeveloped, is one of the market's primary constraining factors. In big data and machine learning manufacturing applications, precision is crucial. A minor flaw in an algorithm could result in incorrect items being produced, which would exorbitantly increase, rather than decrease, operational costs for the owner of the manufacturing unit.

End User Outlook

Based on End User, the market is segmented into IT & Telecom, BFSI, Manufacturing, Retail, Healthcare, Energy & Utilities, Public Sector, Aerospace & Defense, and Others. The retail segment garnered a substantial revenue share in the machine learning as a service market in 2021. E-commerce has proven to be a key force in the retail trade industry. Machine intelligence is used by retailers to collect data, evaluate it, and use it to provide customers with individualized shopping experiences. These are some of the factors that influence the retail industry's demand for this technology.

Offering Outlook

Based on Offering, the market is segmented into Services Only and Solution (Software Tools). The services only segment acquired the largest revenue share in the machine learning as a service market in 2021. The market for machine learning services is expected to grow due to factors such as an increase in application areas and growth connected with end-use industries in developing economies. To enhance the usage of machine learning services, industry participants are focusing on implementing technologically advanced solutions. The use of machine learning services in the healthcare business for cancer detection, as well as for checking ECGs and MRIs, is expanding the market. The benefits of machine learning services, such as cost reduction, demand forecasting, real-time data analysis, and increased cloud use, are projected to open up considerable prospects for the market.

Organization Size Outlook

Based on Organization Size, the market is segmented into Large Enterprises and Small & Medium Enterprises. The small and medium enterprises segment procured a substantial revenue share in the machine learning as a service market in 2021. This is because implementing machine learning lets SMEs optimize their processes on a tight budget. AI and machine learning are projected to be the major technologies that allow SMEs to save money on ICT and gain access to digital resources in the near future.

Application Outlook

Based on Application, the market is segmented into Marketing & Advertising, Fraud Detection & Risk Management, Computer Vision, Security & Surveillance, Predictive Analytics, Natural Language Processing, Augmented & Virtual Reality, and Others. The marketing and advertising segment acquired the largest revenue share in the machine learning as a service market in 2021. The goal of a recommendation system is to provide customers with products that they are currently interested in. The marketing workflow runs as follows: marketers develop, test, evaluate, and analyze hypotheses. Because information changes every second, this effort is time-consuming and labor-intensive, and the findings are occasionally wrong. Machine learning allows marketers to make quick decisions based on large amounts of data, and allows businesses to respond more quickly to changes in the quality of traffic generated by advertising efforts. As a result, the business can spend more time developing hypotheses rather than doing mundane tasks.
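As a toy illustration of the recommendation idea described above (made-up data, not any particular vendor's algorithm), an item-based recommender can score unseen products by their co-purchase similarity to items a customer already owns:

```python
import numpy as np

# Toy item-based recommender: rows are users, columns are products, entries
# are interactions (e.g. purchases). All data is invented for illustration.
interactions = np.array([
    [1, 1, 0, 0],   # user 0 bought items A and B
    [1, 1, 1, 0],   # user 1 bought items A, B and C
    [0, 0, 1, 1],   # user 2 bought items C and D
], dtype=float)

# Cosine similarity between item columns
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(user_row, k=1):
    """Score unseen items by their similarity to items the user already has."""
    scores = sim @ user_row
    scores[user_row > 0] = -np.inf      # never re-recommend owned items
    return np.argsort(scores)[::-1][:k]

print(recommend(interactions[0]))  # -> [2]: item C, bought alongside A and B
```

Production MLaaS recommendation APIs wrap far richer models behind a hosted endpoint, but the underlying "customers who bought X also bought Y" signal is the same.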

Regional Outlook

Based on Regions, the market is segmented into North America, Europe, Asia Pacific, and Latin America, Middle East & Africa. The Asia Pacific region garnered a significant revenue share in the machine learning as a service market in 2021. Leading companies are concentrating their efforts in Asia-Pacific to expand their operations, as the region is likely to see rapid development in the deployment of security services, particularly in the banking, financial services, and insurance (BFSI) sector. To provide better customer service, industry participants are realizing the significance of providing multi-modal platforms. The rise in AI application adoption is likely to be the primary trend driving market growth in this area. Furthermore, government organizations have taken important steps to accelerate the adoption of machine learning and related technologies in this region.

The major strategies followed by the market participants are Product Launches and Partnerships. Based on the analysis presented in the Cardinal matrix, Microsoft Corporation and Google LLC are the forerunners in the Machine learning as a Service Market. Companies such as Amazon Web Services, Inc., SAS Institute, Inc., and IBM Corporation are some of the key innovators in the market.

The market research report covers the analysis of key stakeholders of the market. Key companies profiled in the report include Hewlett-Packard Enterprise Company, Oracle Corporation, Google LLC, Amazon Web Services, Inc. (Amazon.com, Inc.), IBM Corporation, Microsoft Corporation, Fair Isaac Corporation (FICO), SAS Institute, Inc., Yottamine Analytics, LLC, and BigML.

Recent Strategies deployed in Machine learning as a Service Market

Partnerships, Collaborations and Agreements:

Mar-2022: Google entered into a partnership with BT, a British telecommunications company. Under the partnership, BT utilized a suite of Google Cloud products and services, including cloud infrastructure, machine learning (ML) and artificial intelligence (AI), data analytics, security, and API management, to offer excellent customer experiences, decrease costs and risks, and create new revenue streams. Google aimed to enable BT to access hundreds of new business use-cases to solidify its goals around digital offerings and developing hyper-personalized customer engagement.

Feb-2022: SAS entered into a partnership with TecCentric, a company providing customized IT solutions. SAS aimed to accelerate TecCentric's journey towards discovery with artificial intelligence (AI), machine learning (ML), and advanced analytics. Under the partnership, TecCentric aimed to work with SAS to customize services and solutions for a broad range of verticals, from the public sector to banking, education, healthcare, and more, granting them access to the complete analytics cycle with SAS's enhanced AI solution offering as well as its leading fraud and financial crimes analytics and reporting.

Feb-2022: Microsoft entered into a partnership with Tata Consultancy Services, an Indian company focusing on providing information technology services and consulting. Under the partnership, Tata Consultancy Services leveraged its software, TCS Intelligent Urban Exchange (IUX) and TCS Customer Intelligence & Insights (CI&I), to enable businesses in providing hyper-personalized customer experiences. CI&I and IUX are supported by artificial intelligence (AI), and machine learning, and assist in real-time data analytics. The CI&I software empowered retailers, banks, insurers, and other businesses to gather insights, predictions, and recommended actions in real-time to enhance the satisfaction of customers.

Jun-2021: Amazon Web Services entered into a partnership with Salesforce, a cloud-based software company. The partnership enabled customers to use the full set of Salesforce and AWS capabilities together to rapidly develop and deploy new business applications that facilitate digital transformation. Salesforce also embedded AWS services for voice, video, artificial intelligence (AI), and machine learning (ML) directly into new applications for sales, service, and industry-vertical use cases.

Apr-2021: Amazon formed a partnership with Basler, a company known for its line of area scan, line scan, and network cameras. The partnership began as Amazon launched a succession of services for industrial machine learning, including its Lookout for Vision cloud AI service for factory inspection. Customers can integrate the AWS Panorama SDK into their platforms and thus use a common architecture to perform multiple tasks across a broad range of performance and cost requirements. The integration of AWS Panorama enables customers to adopt and run machine learning applications on edge devices, with additional support for device management and accuracy tracking.

Dec-2020: IBM teamed up with Mila, the Quebec Artificial Intelligence Institute. Under the collaboration, the two organizations aimed to accelerate machine learning using Oríon, an open-source technology. By integrating Mila's open-source Oríon software with IBM's Watson Machine Learning Accelerator, IBM also enhanced the deployment of state-of-the-art algorithms, along with improved machine learning and deep learning capabilities for AI researchers and data scientists. IBM's Spectrum Computing team, based out of its Canada Lab, contributes substantially to Oríon's code base.

Oct-2020: SAS entered into a partnership with TMA Solutions, a software outsourcing company based in Vietnam. Under the partnership, SAS and TMA Solutions aimed to accelerate the growth of businesses in Vietnam through artificial intelligence (AI) and data analytics. SAS and TMA helped clients in Vietnam speed the deployment and expansion of advanced analytics and explore new ways to drive innovation in AI, especially in machine learning, computer vision, natural language processing (NLP), and other technologies.

Product Launches and Product Expansions:

May-2022: Hewlett Packard launched HPE Swarm Learning and the new Machine Learning (ML) Development System, two AI- and ML-based solutions. The new solutions increase model accuracy, ease AI infrastructure burdens, and improve data privacy standards. The company described the new tools as breakthrough AI solutions focused on fast-tracking insights at the edge, with applications ranging from identifying card fraud to diagnosing diseases.

Apr-2022: Hewlett Packard released the Machine Learning Development System (MLDS) and Swarm Learning, its new machine learning solutions. The two solutions focus on simplifying the burdens of AI development in an environment that increasingly involves large amounts of protected data and specialized hardware. The MLDS provides a full software and services stack, including a training platform (the HPE Machine Learning Development Environment), container management (Docker), cluster management (HPE Cluster Manager), and Red Hat Enterprise Linux.

May-2021: Google released Vertex AI, a managed machine learning platform that enables developers to more easily deploy and maintain their AI models. Engineers can use Vertex AI to manage video, image, text, and tabular datasets and to build machine learning pipelines that train and evaluate models using Google Cloud algorithms or custom training code. They can then deploy models for online or batch use cases, all on scalable managed infrastructure.

Mar-2021: Microsoft released updates to Azure Arc, its service that brings Azure products and management to multiple clouds, edge devices, and data centers, with auditing, compliance, and role-based access. Microsoft also made Azure Arc-enabled Kubernetes generally available. Azure Arc-enabled Machine Learning and Azure Arc-enabled Kubernetes are designed to help companies balance the advantages of the cloud with the need to keep apps and workloads on-premises for regulatory and operational reasons. The new services enable companies to deploy Kubernetes clusters and build machine learning models where the data lives, as well as manage applications and models from a single dashboard.

Jul-2020: Hewlett Packard released HPE Ezmeral, a new brand and software portfolio designed to help enterprises accelerate digital transformation across their organizations, from edge to cloud. The HPE Ezmeral portfolio spans container orchestration and management, AI/ML, and data analytics, as well as cost control, IT automation, AI-driven operations, and security.

Acquisitions and Mergers:

Jun-2021: Hewlett Packard completed the acquisition of Determined AI, a San Francisco-based startup offering a robust software stack for training AI models faster, at any scale, using its open-source machine learning (ML) platform. Hewlett Packard integrated Determined AI's software with its world-leading AI and high-performance computing (HPC) products to enable ML engineers to conveniently deploy and train machine learning models and deliver faster, more precise analysis from their data in almost every industry.

Scope of the Study

Market Segments covered in the Report:

By End User

IT & Telecom

BFSI

Manufacturing

Retail

Healthcare

Energy & Utilities

Public Sector

Aerospace & Defense

Others

By Offering

Services Only

Solution (Software Tools)

By Organization Size

Large Enterprises

Small & Medium Enterprises

By Application

Marketing & Advertising

Fraud Detection & Risk Management

Computer Vision

Security & Surveillance

Predictive Analytics

Natural Language Processing

Augmented & Virtual Reality

Others

By Geography

North America

  US

  Canada

  Mexico

  Rest of North America

Europe

  Germany

  UK

  France

  Russia

  Spain

  Italy

  Rest of Europe

Asia Pacific

  China

  Japan

  India

  South Korea

  Singapore

  Malaysia

  Rest of Asia Pacific

LAMEA

  Brazil

  Argentina

  UAE

  Saudi Arabia

  South Africa

  Nigeria

The Global Machine learning as a Service Market size is expected to reach $36.2 billion by 2028, rising at a market growth of 31.6% CAGR during the...