Category Archives: Machine Learning

Microsoft picks perfect time to dump its AI ethics team – The Register

Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine learning technology.

The decision to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January, which will continue rolling through the IT titan into next year.

The hit to this particular unit may remove some guardrails meant to ensure Microsoft's products that integrate machine learning features meet the mega-corp's standards for ethical use of AI. And it comes as discussion rages about the effects of controversial artificial intelligence models on society at large.

Baking AI ethics into the whole business as something for all employees to consider seems kinda like when Bill Gates told his engineers in 2002 to make security an organization-wide priority, which obviously went really well. You might think a dedicated team overseeing that internally would be helpful.

Platformer first reported the layoffs in the ethics and society group and cited unnamed current and former employees. The group was supposed to advise teams as Redmond accelerated the integration of AI technologies into a range of products from Edge and Bing to Teams, Skype, and Azure cloud services.

Microsoft still has in place its Office of Responsible AI, which works with the company's Aether Committee and Responsible AI Strategy in Engineering (RAISE) to spread responsible practices across operations in day-to-day work. That said, employees told the newsletter that the ethics and society team played a crucial role in ensuring those principles were directly reflected in how products were designed.

A Microsoft spokesperson told The Register that the impression that the layoffs meant the tech goliath is cutting its investment in responsible AI is wrong. The unit was key in helping to incubate a culture of responsible innovation as Microsoft got its AI efforts underway several years ago, we were told, and now Microsoft executives have adopted that culture and seeded it throughout the company.

"That initial work helped to spur the interdisciplinary way in which we work across research, policy, and engineering across Microsoft," the spokesperson said.

"Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes."

There are hundreds of people working on these issues across Microsoft "including net new, dedicated responsible AI teams that have since been established and grown significantly during this time, including the Office of Responsible AI, and a responsible AI team known as RAIL that is embedded in the engineering team responsible for our Azure OpenAI Service," they added.

By contrast, fewer than ten people on the ethics and society team were affected by the layoffs, and some were moved to other parts of the business, including the Office of Responsible AI and the RAIL unit.

According to the Platformer report, the team had been shrunk from about 30 people to seven through a reorganization within Microsoft in October 2022.

Team members lately had been investigating potential risks involved with Microsoft's integration of OpenAI's technologies across the organization. Unnamed sources reportedly said CEO Satya Nadella and CTO Kevin Scott were anxious to get those technologies integrated into products and out to users as fast as possible.

Microsoft is investing billions of dollars into OpenAI, a startup whose products include DALL-E 2 for generating images, GPT for text (OpenAI this week introduced its latest iteration, GPT-4), and Codex for developers. Meanwhile, OpenAI's ChatGPT is a chatbot trained on mountains of data from the internet and other sources that takes in prompts from humans ("Write a two-paragraph history of the Roman Empire," for example) and spits out a written response.

Microsoft also is integrating a new large language model into its Edge browser and Bing search engine in hopes of chipping away at Google's dominant position in search.

Since being opened up to the public in November 2022, ChatGPT has become the fastest app to reach 100 million users, crossing that mark in February. However, problems with the technology and with similar AI apps like Google's Bard cropped up fairly quickly, ranging from wrong answers to offensive language and gaslighting.

The rapid innovation and mainstreaming of these large language model AI systems is fuelling a larger debate about their impact on society.

Redmond will shed more light on its ongoing AI strategy during an event on March 16 hosted by Nadella and titled "The Future of Work with AI," which The Register will be covering.

See the article here:
Microsoft picks perfect time to dump its AI ethics team - The Register

How Machine Learning Optimizes the Supply Chain – Talking Logistics

The supply chain as we know it continues to evolve, due in part to lessons learned from the disruptions the industry has experienced over the last few years. As a result, more advanced supply chain technologies are focusing on the use of artificial intelligence, primarily machine learning (ML), in various areas of their operations, and it is proving to be one of the most profound technologies enabling significant improvements in supply chains.

ML allows technology to teach itself over time, so that it can improve predictions, recommendations and decisions, but how does it relate to the supply chain exactly? In theory, it is nothing new, but in terms of the supply chain, it continues to prove beneficial in the advancement of digitization through data cleaning and supply chain planning, procurement and execution.

ML can deliver several benefits for supply chain management, one of which addresses an issue plaguing many companies today: too much or too little inventory. The pandemic limited inventory due to backlogs, and now many companies are finding they have too much. With ML, organizations do not need to hold as much inventory because it optimizes the flow of products from one place to another. As a result, costs are reduced through quality improvement and waste reduction, and products arrive in the marketplace just in time for sale thanks to upstream optimization.

Dealing with suppliers is one of the most challenging parts of supply chain management (SCM). With ML, supplier relationship management becomes easier due to simpler, proven administrative practices. ML can be implemented to analyze the types of contracts, documentation and other areas that lead to the best outcomes from suppliers and use those as a basis for future agreements and administration. Stakeholders get more insight into meaningful information, allowing for continual improvement and easier problem solving.

Quality is vital to good SCM, as waste and faulty products create unnecessary rework and increase costs. ML can monitor how quality varies over time and suggest improvements. This doesn't just apply to materials and products; it can also track other areas such as shipping, supplier, and third-party quality.

ML isn't perfect, of course. It depends on reliable, high-quality, and timely information, and lack of access to good data can cause significant issues. Supply chain managers need a robust approach to collecting and analyzing their data.

All organizations in the supply chain should provide information in a consistent way, and, where possible, SCM software should integrate with supplier and manufacturer systems to automatically collect and process data. There will need to be some human interaction with ML, especially with the quality of the data being collected. Supply chain information should be checked and audited periodically to ensure quality.

Machine learning models should be tested and checked to make sure outputs and suggestions are aligned with business needs and expectations.

For retailers, stock level analysis through ML can identify when products are declining in popularity and are reaching the end of their life in the retail marketplace. Price analysis can be compared to costs in the supply chain and retail profit margins to establish the best combination of pricing and customer demand. Also, upstream delays can be identified, allowing for contingency planning or alternative sourcing, and retailers can lower storage costs due to not having to hold as much stock.

Food manufacturers can use ML to conduct an analysis of commodity prices and weather patterns to optimize harvesting. Manufacturers can also increase speed to market by optimizing contracts and reducing turnaround times with upstream organizations.

The industry continues to focus on supply chain technology's role moving forward, and leaders will only benefit by riding the wave that ML is creating.

Glenn Jones is SVP of Product Marketing at Blume Global. He has a proven track record of growing businesses by building and leading R&D and product marketing organizations to define, develop, position and sell highly innovative and high value enterprise solutions delivered in the cloud. He was formerly the COO of Sweetbridge, the CTO of Steelwedge Software and also held leadership positions at other supply chain software companies including Elementum, E2Open and i2 Technologies.

Read this article:
How Machine Learning Optimizes the Supply Chain - Talking Logistics

Machine Learning as a Service Market to Predicts Huge Growth by 2029: Google, BigML, FICO – EIN News

Machine Learning as a Service Market

Stay up to date with Machine Learning as a Service Market research offered by HTF MI.

Criag Francis

According to HTF Market Intelligence, the Global Machine Learning as a Service market is expected to witness a CAGR of 39.25% during the forecast period of 2023-2028. The market is segmented by Application (Network Analytics and Automated Traffic Management, Augmented Reality, Predictive Maintenance, Fraud Detection and Risk Analytics, Marketing and Advertising, Others) and by Geography (North America, South America, Europe, Asia Pacific, MEA). The Machine Learning as a Service market size is estimated to increase by USD 288.71 Million at a CAGR of 39.25% from 2023 to 2028. The report includes historic market data from 2017 to 2022E. Currently, the market value is pegged at USD 13.95 Million.

Get an Inside Scoop of Study, Request now for Sample Study @ https://www.htfmarketintelligence.com/sample-report/global-machine-learning-as-a-service-market

Definition: The Machine Learning as a Service (MLaaS) market refers to the provision of cloud-based platforms or services that enable organizations to leverage machine learning capabilities without the need for in-house expertise, infrastructure, or data storage. MLaaS providers offer a range of services, including tools for data preprocessing, model training and evaluation, deployment, and maintenance. The market also includes providers of pre-built machine learning models, APIs, and software development kits (SDKs) that enable developers to build intelligent applications and automate business processes. MLaaS can help organizations of all sizes to reduce the cost and complexity of adopting machine learning and accelerate their time-to-market for AI-powered solutions.

Market Trends: Growing Adoption of Machine Learning Services in Healthcare and Research Oriented Marketing Campaigns and Customer-centric Communication

Market Drivers: Lack of Technical Expertise to Deploy Machine Learning Services

Market Opportunities: Increasing Data Volume and Growing IoT Application and Consistent Retraining of Algorithms

The titled segments and sub-sections of the market are illuminated below. The study also explores the product types of the Machine Learning as a Service market.

Key applications/end-users of the Machine Learning as a Service market: Network Analytics and Automated Traffic Management, Augmented Reality, Predictive Maintenance, Fraud Detection and Risk Analytics, Marketing and Advertising, Others.

Book Latest Edition of Global Machine Learning as a Service Market Study @ https://www.htfmarketintelligence.com/buy-now?format=1&report=592

With this report you will learn: who the leading players are in the Machine Learning as a Service market; what you should look for in a Machine Learning as a Service offering; what trends are driving the market; and how market behaviour has changed over time, with a strategic viewpoint for examining the competition. Also included in the study are profiles of 15 Machine Learning as a Service vendors, pricing charts, financial outlooks, SWOT analyses, and product specification and comparison matrices, with recommended steps for evaluating and determining the latest product/service offerings.

List of players profiled in this report: Google [United States], IBM Corporation [United States], Microsoft Corporation [United States], Amazon Web Services [United States], BigML [United States], FICO [United States], Yottamine Analytics [United States], Ersatz Labs [United States], Predictron Labs [United Kingdom], H2O.ai [United States], AT&T [United States], Sift Science [United States]

Who should benefit most from this report's insights? Anyone directly or indirectly involved in the value chain of this industry who needs to be up to speed on the key players and major trends in the Machine Learning as a Service market; marketers and agencies doing due diligence in selecting a Machine Learning as a Service offering for large and enterprise-level organizations; analysts and vendors looking for current intelligence about this dynamic marketplace; and competitors who would like to benchmark themselves against market positions and standings in the current scenario.

Make an enquiry to understand outline of study and further possible customization in offering https://www.htfmarketintelligence.com/enquiry-before-buy/global-machine-learning-as-a-service-market

Quick snapshot and extracts from the TOC of the latest edition: overview of the Machine Learning as a Service market; market size (sales volume) comparison by type (2023-2028); market size (consumption) and market share comparison by application (2023-2028); market size (value) comparison by region (2023-2028); sales, revenue, and growth rate (2023-2028); competitive situation and current scenario analysis; strategic proposals for estimating the size of core business segments; players/suppliers, manufacturing base distribution, sales area, and product type; analysis of competitors, including all important parameters of Machine Learning as a Service; manufacturing cost analysis; and the latest innovative headway and supply chain pattern mapping of leading and emerging industry players.

Get Detailed TOC and Overview of Report @ https://www.htfmarketintelligence.com/report/global-machine-learning-as-a-service-market

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, MINT, BRICS, G7, Western/Eastern Europe, or Southeast Asia. We can also provide customized research services, as HTF MI holds a database repository that includes public organizations and millions of privately held companies with expertise across various industry domains.

Criag Francis
HTF Market Intelligence Consulting Pvt Ltd
+1 434-322-0091
sales@htfmarketintelligence.com
Visit us on social media: Facebook | Twitter | LinkedIn

Read more:
Machine Learning as a Service Market to Predicts Huge Growth by 2029: Google, BigML, FICO - EIN News

Machine Learning Helps Predict Food Crisis | News – Specialty Food Association

A team of researchers at New York University has developed a machine learning model that draws on news article content to predict which locations face food insecurity, according to NYU.

The model can help prioritize the allocation of emergency food assistance by uncovering the locations most in need.

"Our approach could drastically improve the prediction of food crisis outbreaks up to 12 months ahead of time, using both real-time news streams and a predictive model that is simple to interpret," said Samuel Fraiberger, a visiting researcher at NYU's Courant Institute of Mathematical Sciences, a data scientist at the World Bank, and an author of the study, which appears in the journal Science Advances, in a statement.

Researchers collected text from more than 11 million news articles covering roughly 40 food-insecure countries. They used their model to extract phrases and content related to food insecurity. Then they assessed the regions for food-insecurity factors, such as fatality counts, rainfall, vegetation, and food price changes, to determine the correlation between the news coverage and food insecurity's impact. The high correlation indicated that the news stories served as an accurate indicator of those factors.
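The correlation check at the heart of this approach can be sketched in a few lines. This is an illustrative sketch, not the authors' code; all of the numbers below are invented, and the real study uses far richer features than a single term count.

```python
import numpy as np

# Hypothetical monthly counts of food-insecurity phrases in one region's news,
# alongside a traditional indicator (food price change, %). All values invented.
news_mentions = np.array([3, 5, 9, 14, 21, 30], dtype=float)
price_change = np.array([0.1, 0.3, 0.8, 1.1, 1.9, 2.6])

# Pearson correlation: a value near 1 suggests the news text tracks the
# traditional measurement and can serve as an earlier proxy for it.
corr = np.corrcoef(news_mentions, price_change)[0, 1]
```

Because news is published well before official statistics are compiled, a high correlation like this is what lets the model act as an early-warning signal.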

"Traditional measurements of food insecurity risk factors, such as conflict severity indices or changes in food prices, are often incomplete, delayed, or outdated," said Lakshminarayanan Subramanian, a professor at the Courant Institute and one of the paper's authors. "Our approach takes advantage of the fact that risk factors triggering a food crisis are mentioned in the news prior to being observable with traditional measurements."

Related: Ukraine Export Helps Stabilize Cooking Oil Market; Giant Announces Sustainability Grant Program

Follow this link:
Machine Learning Helps Predict Food Crisis | News - Specialty Food Association

Researchers Crack Open Machine Learning’s Black Box to Shine a Light on Generalization – Hackster.io

Researchers from the Massachusetts Institute of Technology (MIT) and Brown University have taken steps to open up the "black box" of machine learning and say that the key to success may lie in generalization.

"This study provides one of the first theoretical analyses covering optimization, generalization, and approximation in deep networks and offers new insights into the properties that emerge during training," explains co-author Tomaso Poggio, the Eugene McDermott Professor at MIT. "Our results have the potential to advance our understanding of why deep learning works as well as it does."

Machine learning has proven outstanding at a range of tasks, from surprisingly convincing chatbots to autonomous vehicles. It comes, however, with a big caveat: it's not always clear how or why a machine learning system arrives at its outputs for a given input. Many networks operate as black boxes, performing unknowable transformations on incoming data, but the researchers' work is helping to open that box and shine a light within.

The team's work focused on two network types: fully-connected deep networks and convolutional neural networks (CNNs). A key part of their study involved investigating exactly what factors contribute to the state of "neural collapse," which occurs when a network's training maps multiple examples of a class to a single template.

"Our analysis shows that neural collapse emerges from the minimization of the square loss with highly expressive deep neural networks," explains co-author and post-doctoral researcher Akshay Rangamani. "It also highlights the key roles played by weight decay regularization and stochastic gradient descent in driving solutions towards neural collapse."

That understanding led to another finding, which flips recent studies of generalization on their head. "[Our study] validates the classical theory of generalization showing that traditional bounds are meaningful," explains postdoc Tomer Galanti, of findings which proved new norm-based generalization bounds for convolutional neural networks. "It also provides a theoretical explanation for the superior performance in many tasks of sparse networks, such as CNNs, with respect to dense networks."

The study found that sparse networks could offer performance orders of magnitude better than densely-connected networks, something the researchers claim has been "almost completely ignored by machine learning theory."

The team's work has been published in the journal Research under open-access terms.

Excerpt from:
Researchers Crack Open Machine Learning's Black Box to Shine a Light on Generalization - Hackster.io

Neural Networks vs. Deep Learning: How Are They Different? – MUO – MakeUseOf

Artificial intelligence has become an integral part of our daily lives in today's technology-driven world. Although some people use neural networks and deep learning interchangeably, their advancements, features, and applications vary.

So what are neural networks and deep learning models, and how do they differ?

Neural networks, also known as neural nets, are modeled after the human brain. They analyze complex data, complete mathematical operations, look for patterns, and use the information gathered to make predictions and classifications. And just like the brain, AI neural networks have a basic functional unit known as the neuron. These neurons, also called nodes, transfer information within the network.

A basic neural network has interconnected nodes in the input, hidden, and output layers. The input layer processes and analyzes information before sending it to the next layer.

The hidden layer receives data from the input layer or other hidden layers. Then, the hidden layer further processes and analyzes the data by applying a set of mathematical operations to transform and extract relevant features from the input data.

It is the output layer that delivers the final result using the extracted features. This layer may have one or more nodes, depending on the type of problem. For binary classification (a yes/no problem), the output layer will have a single node presenting a 1 or 0 result.
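The input-hidden-output flow described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from any particular framework; the layer sizes are arbitrary and the weights are random placeholders rather than trained values.

```python
import numpy as np

# Minimal sketch of a forward pass: 3 input features -> 4 hidden nodes
# -> 1 output node. Weights are random placeholders, not trained values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = np.tanh(x @ W1 + b1)            # hidden layer transforms the input
    return float(sigmoid(hidden @ W2 + b2))  # output layer: probability of "yes"

prob = forward(np.array([0.5, -1.2, 3.0]))
prediction = 1 if prob > 0.5 else 0          # single output node: 1 or 0
```

Each `@` is one layer passing its result to the next; the sigmoid on the single output node is what turns the extracted features into the 1-or-0 answer.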

There are different types of AI neural networks.

Feedforward neural networks, mostly used for facial recognition, transfer information in one direction. This means every node in one layer is linked to every node in the next layer, with information flowing unidirectionally until it reaches the output node. This is one of the simplest types of neural networks.

Recurrent neural networks are used for sequential data, like natural language and audio, and power text-to-speech applications on Android and iPhone. Unlike feedforward neural networks, which process information in one direction, recurrent neural networks feed the output of a processing neuron back into the network; this feedback aids sequential learning.

This return option is critical for times when the system releases wrong predictions. Recurrent neural networks can attempt to find the reason for incorrect outcomes and adjust accordingly.
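That feedback loop can be sketched as follows. This is a hedged toy example, not production RNN code: the shapes and weights are invented, and real implementations add trained parameters and output layers.

```python
import numpy as np

# Sketch of recurrence: the hidden state from one step is combined with the
# input at the next step, so information loops back into the network.
rng = np.random.default_rng(1)
Wx = rng.normal(scale=0.1, size=(5, 8))  # input -> hidden weights
Wh = rng.normal(scale=0.1, size=(8, 8))  # hidden -> hidden (the feedback path)
b = np.zeros(8)

def rnn_forward(sequence):
    h = np.zeros(8)                          # initial hidden state
    for x_t in sequence:                     # one step per sequence element
        h = np.tanh(x_t @ Wx + h @ Wh + b)   # current input + previous state
    return h                                 # summary of the whole sequence

sequence = rng.normal(size=(10, 5))          # e.g. 10 timesteps, 5 features each
h_final = rnn_forward(sequence)
```

The `h @ Wh` term is the "return option": because the previous state re-enters every step, earlier inputs can influence later predictions, and training can adjust `Wh` when outcomes are wrong.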

Traditional neural networks have been designed to process fixed-size inputs, but convolutional neural networks (CNNs) can process data of varying dimensions. CNNs are ideal for classifying visual data like images and videos of different resolutions and aspect ratios. They are also very useful for image recognition applications.

This neural network is also known as a transposed convolutional neural network. It is the opposite of a convolutional network.

In a convolutional neural network, input images are processed through convolutional layers to extract important features. This output is then processed through a series of connected layers, which carry out classificationassigning a name or label to an input image based on its features. This is useful for object identification and image segmentation.

However, in a deconvolutional neural network, the feature map that was formerly an output becomes the input. This feature map, a three-dimensional array of values, is upsampled to reconstruct an image with increased spatial resolution.

This neural network combines interconnected modules, each performing a specific subtask. Each module in a modular network consists of a neural network primed to tackle a subtask like speech recognition or language translation.

Modular neural networks are adaptable and useful for handling input with widely varying data.

Deep learning, a subcategory of machine learning, involves training neural networks to automatically learn and evolve independently without being programmed to do so.

Is deep learning artificial intelligence? Yes. It is the driving force behind many AI applications and automation services, helping users carry out tasks with little human intervention. ChatGPT is one of those AI applications with several practical uses.

A deep learning model has many hidden layers between its input and output layers. This allows the network to perform extremely complex operations and to learn continually as data representations pass through the layers.

Deep learning has been applied to image recognition, speech recognition, video synthesis, and drug discoveries. In addition, it has been applied to complex creations, like self-driving cars, which use deep learning algorithms to identify obstacles and perfectly navigate around them.

You must feed large amounts of labeled data into the network to train a deep learning model. This is when backpropagation occurs: the weights and biases of the network's neurons are adjusted until the model can accurately predict the output for new input data.
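As a hedged illustration of that training loop (an invented toy task, not any article's code), here is a tiny network learning XOR from labeled data, with hand-written backpropagation adjusting every weight and bias on each pass:

```python
import numpy as np

# Toy training loop: labeled XOR data, a 2-8-1 network, and hand-written
# backpropagation nudging every weight and bias against the prediction error.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    p = sigmoid(h @ W2 + b2)
    d_out = p - y                            # backprop: sigmoid + cross-entropy
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # gradient through tanh
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0, keepdims=True)
    W2 -= lr * d_W2; b2 -= lr * d_b2         # adjust weights and biases
    W1 -= lr * d_W1; b1 -= lr * d_b1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = float((preds == y).mean())
```

Deep learning frameworks automate exactly this backward pass across many more layers; the principle of propagating the error back and adjusting each layer's parameters is the same.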

Neural networks and deep learning models are subsets of machine learning. However, they differ in various ways.

Neural networks are usually made up of an input, hidden, and output layer. Meanwhile, deep learning models comprise several layers of neural networks.

Though deep learning models incorporate neural networks, they remain a concept different from neural networks. Applications of neural networks include pattern recognition, face identification, machine translation, and sequence recognition.

Meanwhile, you can use deep learning networks for customer relationship management, speech and language processing, image restoration, drug discovery, and more.

Neural networks require human intervention, as engineers must manually determine the hierarchy of features. However, deep learning models can automatically determine the hierarchy of features using labeled datasets and unstructured raw data.

Neural networks take less time to train but have lower accuracy compared to deep learning; deep learning is more complex. Also, neural networks are known to interpret tasks poorly despite completing them quickly.

Deep learning is a complex neural network that can classify and interpret raw data with little human intervention but requires more computational resources. Neural networks are a simpler subset of machine learning that can be trained using smaller datasets with fewer computational resources, but their ability to process complex data is limited.

Though used interchangeably, neural and deep learning networks are different. They have different methods of training and degrees of accuracy. Nonetheless, deep learning models are more advanced and produce results with higher accuracy, as they can learn independently with little human interference.

View original post here:
Neural Networks vs. Deep Learning: How Are They Different? - MUO - MakeUseOf

RiceNet Uses Machine Learning and Aerial Drones to Take the Tedium Out of Rice Production Estimation – Hackster.io

Read more from the original source:
RiceNet Uses Machine Learning and Aerial Drones to Take the Tedium Out of Rice Production Estimation - Hackster.io

Microsoft and Columbia Researchers Propose LLM-AUGMENTER: An AI System that Augments a Black-Box LLM with a Set of Plug-and-Play Modules – MarkTechPost

See more here:
Microsoft and Columbia Researchers Propose LLM-AUGMENTER: An AI System that Augments a Black-Box LLM with a Set of Plug-and-Play Modules -...

A.I. is the star of earnings calls as mentions skyrocket 77% with companies saying they'll use it for everything from medicine to cybersecurity – Fortune

See the original post:
A.I. is the star of earnings calls as mentions skyrocket 77% with companies saying they'll use it for everything from medicine to cybersecurity - Fortune

A New Deep Reinforcement Learning (DRL) Framework can React to Attackers in a Simulated Environment and Block 95% of Cyberattacks Before They Escalate – MarkTechPost

More here:
A New Deep Reinforcement Learning (DRL) Framework can React to Attackers in a Simulated Environment and Block 95% of Cyberattacks Before They Escalate...