Category Archives: Machine Learning

America’s AI in Retail Industry Report to 2026 – Machine Learning Technology is Expected to Grow Significantly – ResearchAndMarkets.com – Business…

DUBLIN--(BUSINESS WIRE)--The "America's AI in the Retail Market - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)" report has been added to ResearchAndMarkets.com's offering.

America's AI in the retail market is expected to register a CAGR of 30% during the forecast period, 2021 - 2026.

Companies Mentioned

Key Market Trends

Machine Learning Technology is Expected to Grow Significantly

Food and Grocery to Augment Significant Growth

Key Topics Covered:

1 INTRODUCTION

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Market Drivers

4.2.1 Hardware Advancement Acting as a Key Enabler for AI in Retail

4.2.2 Disruptive Developments in Retail, including AR, VR, IOT, and New Metrics

4.2.3 Rise of AI First Organizations

4.2.4 Need for Efficiency in Supply Chain Optimization

4.3 Market Restraints

4.3.1 Lack of Professionals, as well as In-house Knowledge for Cultural Readiness

4.4 Industry Value Chain Analysis

4.5 Porter's Five Forces Analysis

4.6 Industry Policies

4.7 Assessment of Impact of COVID-19 on the Industry

5 AI Adoption in the Retail Industry

5.1 AI Penetration with Retailers (Historical, Current, and Forecast)

5.2 AI penetration by Retailer Size (Large and Medium)

5.3 AI Use Cases in Operations

5.3.1 Logistics and Distribution

5.3.2 Planning and Procurement

5.3.3 Production

5.3.4 In-store Operations

5.3.5 Sales and Marketing

5.4 AI Retail Startups (Equity Funding vs Equity Deals)

5.5 Road Ahead for AI in Retail

6 MARKET SEGMENTATION

6.1 Channel

6.2 Solution

6.3 Application

6.4 Technology

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

8 INVESTMENT ANALYSIS

9 MARKET TRENDS AND FUTURE OPPORTUNITIES

For more information about this report visit https://www.researchandmarkets.com/r/kddpm3


Applied BioMath, LLC to Present on Machine Learning in Drug Discovery at Bio-IT World Conference and Expo – PR Newswire

CONCORD, Mass., April 27, 2022 /PRNewswire/ -- Applied BioMath (www.appliedbiomath.com), the industry leader in providing model-informed drug discovery and development (MID3) support to help accelerate and de-risk therapeutic research and development (R&D), today announced its participation in the Bio-IT World Conference and Expo occurring May 3-5, 2022 in Boston, MA.

Kas Subramanian, PhD, Executive Director of Modeling at Applied BioMath will present "Applications of Machine Learning in Preclinical Drug Discovery" within the conference track, AI for Drug Discovery and Development on Thursday, May 5, 2022 at 1:05 p.m. E.T. In this presentation, Dr. Subramanian will discuss how machine learning methods can improve efficiency in therapeutic R&D decision making. He will review case studies that demonstrate machine learning applications to target validation and lead optimization.

"Traditionally, therapeutic R&D requires experiments on many different targets, hits, leads, and candidates that are based on best guesses," said John Burke, PhD, Co-founder, President and CEO of Applied BioMath. "By utilizing artificial intelligence and machine learning, project teams can computationally work with more data to better inform experiments and develop better therapeutics."

To learn more about Applied BioMath's presence at the Bio-IT World Conference and Expo, please visit http://www.appliedbiomath.com/BioIT22.

About Applied BioMath

Founded in 2013, Applied BioMath's mission is to revolutionize drug invention. Applied BioMath applies biosimulation, including quantitative systems pharmacology, PKPD, bioinformatics, machine learning, clinical pharmacology, and software solutions to provide quantitative and predictive guidance to biotechnology and pharmaceutical companies to help accelerate and de-risk therapeutic research and development. Their approach employs proprietary algorithms and software to support groups worldwide in decision-making from early research through all phases of clinical trials. The Applied BioMath team leverages their decades of expertise in biology, mathematical modeling and analysis, high-performance computing, and industry experience to help groups better understand their therapeutic, its best-in-class parameters, competitive advantages, patients, and the best path forward into and in the clinic to increase the likelihood of clinical proof of concept and proof of mechanism, and decrease late-stage attrition rates. For more information about Applied BioMath and its services and software, visit www.appliedbiomath.com.

Applied BioMath and the Applied BioMath logo are registered trademarks of Applied BioMath, LLC.

Press Contact: Kristen Zannella ([emailprotected])

SOURCE Applied BioMath, LLC


How Artificial Intelligence Is Transforming Israeli Intelligence Collection – The National Interest Online

Intelligence is a profession as old as time, but new advances in artificial intelligence and machine learning are changing it like never before. As these new technologies mature, they are likely to have both predicted and unexpected implications for humanity, and intelligence collection will never be the same. In Israel, Unit 8200, which is the cyber unit of military intelligence, is leading the transformation in the Israel Defense Forces.

According to the commander of Unit 8200, a machine can use big data to create information better than humans, but it does not understand the context, has no emotions or ethics, and is unable to think outside the box. Therefore, instead of prioritizing between humans and machines, we should recognize that, for at least the foreseeable future, machines will not replace humans' role in intelligence decision-making. However, it is clear that intelligence professionals need to adapt how they conceptualize technology and mathematical thinking in the twenty-first century.

The first interesting development worth highlighting in Israeli intelligence is automatic translation. In recent years, we have seen unprecedented advancements in translation technology; algorithms based on neural networks have been successful in offering a highly accurate level of translation. The translation of languages, such as Arabic and Persian, into Hebrew allows intelligence analysts to have direct contact with raw material and eliminates the dependence that analysts had on the collection units themselves. In addition, it enables intelligence analysts to deal with big data repositories. This means that Israeli Military Intelligence has begun to integrate automatic translation engines into its work processes and is starting to give up some of its human translators. Instead, the military is having some of its intelligence personnel train the machines to raise the level of translation they provide.

A second development is in the area of identifying targets for attack in the war on terror. This process also relies on advanced algorithms in the field of machine learning, utilizing the ability to process vast amounts of information and cross-link many layers of geographic information to reveal anomalies in the data. The change appeared for the first time in the recent operation in Gaza (2021), in which Israeli military intelligence first used artificial intelligence in a campaign to identify many real-time terror targets.

In order to adopt these new technologies, intelligence units must change how they are organized and the work processes that they employ. Further, new intelligence roles must be defined. Here are two such roles. First, an information researcher: a person responsible for the analysis of the information, the acquisition of advanced tools for analyzing large data reservoirs, semantic research (NLP), data preparation, visualization of the information, and network analysis (SNA) or geospatial analysis. Second, an information specialist: a person responsible for defining the problem in terms of optimizing machine learning and defining business metrics, directing the collection operation, analyzing errors, and designing the product requirements.

The integration of artificial intelligence will change the way intelligence is handled in Israel long into the future, but unresolved challenges remain. For the time being, machines still do not know how to ask questions or summarize research insights at a sufficient level. In addition, imposing responsibility for attacking targets on a machine can lead to devastating consequences and create ethical dilemmas, as is evident in recent conflicts such as the Russo-Ukrainian War.

The response to these challenges will be gradual. It is likely that the changes today are just the tip of the iceberg in how artificial intelligence will alter the practice of intelligence collection and analysis. Most of the changes we are seeing today automate the intelligence process, but the next step is making processes more autonomous, raising questions and fears about who is in control. Therefore, it is more appropriate to first incorporate artificial intelligence components into non-life-threatening intelligence processes and create trust between cautious intelligence professionals and the machine, humans' new partner in the intelligence cycle.

Lt. Col. (res.) David Siman-Tov is a Senior Research Fellow at the Institute for National Security Studies (INSS) and deputy head of the Institute for the Research of the Methodology of Intelligence (IRMI) at the Israeli Intelligence Community Commemoration and Heritage Center.



Dynamic compensation of stray electric fields in an ion trap using machine learning and adaptive algorithm | Scientific Reports – Nature.com

A gradient descent algorithm (ADAM) and a deep learning network (MLOOP) were tested for compensating stray fields in different working regimes. The source code used for the experiments is available in ref. 28. The software controlled the voltages using the PXI-6713 DAQ and read the fluorescence counts from a photomultiplier tube (PMT) through a time-tagging counter (IDQ id800). All software was written in Python and interfaced with the DAQ hardware using the NI-DAQmx Python library. A total of 44 DC electrodes and the horizontal position of the cooling laser were tuned by the program, resulting in a total of 45 input parameters.

Figure 3 caption: Voltage deviation from the original starting point during optimization with ADAM. (a) Uncharged trap ("Gradient descent optimizer" section). (b) During UV charging ("Testing under poor trap conditions" section). The top graphs show the odd electrode numbers, corresponding to the top DC electrodes in Fig. 1b, and the bottom graphs show the even electrode numbers. The values were determined by subtracting the starting voltage from the voltage at each iteration, \(\Delta V = V_n - V_0\). Changes can be seen in almost all the electrodes of the trap.

Figure 4 caption: (a) MLOOP deep learning network. Differential Evolution explores the input space (blue points) and the neural network creates a model of the data and predicts an optimum (red points). The maximum photon count of the neural network points is 96 ± 1% higher than with manual optimization. Differential Evolution continues to explore the input space and has varied photon counts. The starting point for the process (found by manually adjusting the 4 voltage set weights) was at 33,700 counts/s, and the highest photon count found by the neural network was 66,200 counts/s. (b) and (c) Fluorescence versus laser frequency detuning from resonance for the initial setting and after different optimizations. It can be seen that the experimental values are very close to the theoretical Lorentzian fit [29,30,31]. This shows that the heating is low before and after optimization, and therefore the change in fluorescence can be used to infer the change in heating. The deviation from theory near resonance shown in (b) is a sign of small heating instability.

The first compensation test was performed with the ADAM gradient descent algorithm. This is a first-order optimizer that uses the biased first and second order moments of the gradient to update the inputs of an objective function, and was chosen for its fast convergence, versatility in multiple dimensions and tolerance to noise [23]. Our goal was to maximize the fluorescence of the ion, which was described by a function \(f(\vec{\alpha})\), where \(\vec{\alpha} = (\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_{45})\) represents the array of parameters to be optimized. To find the optimal \(\vec{\alpha}\), the algorithm needs to know the values of the partial derivatives for all input parameters. Because we do not have an analytic expression for \(f(\vec{\alpha})\), the values of its derivatives were estimated from experimental measurements by sequentially changing each input \(\alpha_i\) and reading the associated change in fluorescence \(f\). These data were used as inputs to ADAM for finding the optimal \(\vec{\alpha}\) which maximized \(f\).

Before running the automated compensation, we manually adjusted the 4 weights of the voltage sets used for compensation described in the previous section. We also tried to run ADAM to optimize these 4 parameters, but the increase in fluorescence was limited to 6%. After manual compensation, we ran ADAM on all 45 inputs with the algorithm parameters given in the source code [28]. Each iteration took 12 s, of which 9.8 s were photon readout (0.1 s \(\times\) 2 readouts per parameter, plus 2 \(\times\) 0.1 s readouts at the beginning and end of the iteration), and the rest of the time was the gradient computation. If the photon count dropped by more than 40% of its initial value, the algorithm terminated and applied the previously found optimum. This acted as the safety net for the program, ensuring the ion was not lost while optimizing the 45 inputs. We need this safety net because if the ion is heated past the capture range for the used cooling detuning, it will be ejected from the trap. In our implementation of the algorithm we removed the reduction in the step size of the optimization algorithm as iterations progressed. This step reduction, which is present in the standard version of ADAM, is not ideal when stray fields change with time, since the optimal values of the voltages also drift in time. The removal caused some fluctuations in the photon readout near the optimal settings. Adding to these fluctuations, other sources of noise, such as wavemeter laser locking [32] and mechanical drift in the trap environment, resulted in daily photon count variations of around 5%. Fluctuation in laser power was not a concern here since the power of the cooling laser was stabilized. Despite these fluctuations, and the fact that stray fields change every day, the algorithm demonstrated an increase in fluorescence collection of up to 78 ± 1% (Fig. 2b) when starting from a manually optimized configuration, in less than 10 iterations, or 120 s.

The ADAM algorithm was fast and reliable (the ion was never lost during optimization), even in extremely volatile conditions such as time-dependent charging and stray electric field buildup. Figure 3a shows a colourmap of the voltage and laser position adjustments, where most of the improvement came from adding the same voltage to all DC electrodes, indicating that the ion was not at the optimal height. The volatility of the ion-trap environment causes the fluorescence rate to oscillate around the optimal point. To get the best value, instead of using the values of the final iteration, the software saved all voltage combinations and applied the setting with the highest photon count after all iterations were finished. Despite picking the best value, it can be seen in Fig. 2b that the fluorescence for some iterations during the optimization is higher than the final point selected by the software. This is because when the settings are changed, the ion fluorescence rate may transiently increase and subsequently stabilize to a slightly lower value for the same voltage settings.
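
To make the loop described above concrete, here is a minimal Python sketch of a finite-difference ADAM fluorescence maximizer with a constant step size, the 40% safety net and the best-setting bookkeeping. It is not the released code from ref. 28; set_inputs and read_counts are hypothetical stand-ins for the DAQ and PMT interface, implemented here as a toy simulation so the sketch runs.

```python
import numpy as np

# Hypothetical stand-ins for the hardware (the real experiment drives a PXI-6713
# DAQ and integrates PMT clicks for 0.1 s); modelled as a toy fluorescence peak.
N_PARAMS = 45
_optimum = np.zeros(N_PARAMS)
_current = np.zeros(N_PARAMS)

def set_inputs(alpha):
    global _current
    _current = np.asarray(alpha, dtype=float)

def read_counts():
    f = 66000.0 * np.exp(-0.05 * np.sum((_current - _optimum) ** 2))
    return f + np.random.normal(0.0, 0.002 * f)        # small readout noise

def fd_gradient(alpha, delta=0.1):
    """Estimate df/d(alpha_i) with two readouts per parameter."""
    grad = np.zeros_like(alpha)
    for i in range(len(alpha)):
        for sign in (+1, -1):
            alpha[i] += sign * delta
            set_inputs(alpha)
            grad[i] += sign * read_counts() / (2 * delta)
            alpha[i] -= sign * delta                    # restore the input
    return grad

def adam_maximize(alpha0, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, iters=10):
    alpha = np.array(alpha0, dtype=float)
    m = np.zeros_like(alpha)
    v = np.zeros_like(alpha)
    set_inputs(alpha)
    f0 = read_counts()
    history = [(f0, alpha.copy())]
    for t in range(1, iters + 1):
        g = fd_gradient(alpha)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        # constant step size (no decay), since the optimal voltages drift in time
        alpha += lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
        set_inputs(alpha)
        f = read_counts()
        if f < 0.6 * f0:                                # safety net: abort on a 40% drop
            break
        history.append((f, alpha.copy()))
    best_f, best_alpha = max(history, key=lambda h: h[0])
    set_inputs(best_alpha)                              # apply the best setting seen
    return best_alpha, best_f

if __name__ == "__main__":
    start = np.random.uniform(-1, 1, N_PARAMS)          # manually pre-compensated point
    best, counts = adam_maximize(start)
    print(f"best photon count: {counts:.0f} counts/s")
```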

The second algorithm tested was a deep learning network using the Python-based optimization and experimental control package MLOOP [20]. MLOOP uses Differential Evolution [33] for exploring and sampling data. The blue points in Fig. 4a correspond to these samples, and it can be seen that even at the end of optimization they can have non-optimum fluorescence rates. MLOOP also trains a neural network using the data collected by Differential Evolution and creates an approximate model of the experimental system. It then uses this model to predict an optimum point. The red points in Fig. 4a show the optimum points predicted by the neural network model. It can be seen that this section starts later than Differential Evolution, as it requires some data for initial neural network training, and gradually finds the optimum and stays near it. For training of the neural network, the inbuilt ADAM optimizer is used to minimize the cost function. The sampling in MLOOP does not require a gradient calculation, which greatly improves the sampling time. Even though the sampling is fast, training the network to find an optimal point requires a minimum of 100 samples, and that makes MLOOP slower than ADAM. With our settings for MLOOP, each iteration took 0.7 s on average, and therefore 700 s was needed to take the 1000 samples shown in Fig. 4a.

In our test the neural network in MLOOP had 5 layers with 45 nodes each, all with Gaussian error correction. The neural network structure (number of layers and cells) was manually optimized and tested on a 45-dimensional positive definite quadratic function before being used for the experiment. Once the ion was trapped, positioned above the integrated mirror [22], and photon counts were read, the program started sampling 100 different voltage combinations around its initial point. Then the network started training on the initial data and making predictions for the voltages that maximise fluorescence. Since the ion trap setup is very sensitive to changes in the electric field, the voltages were allowed to move by a maximum of 1% of their previous value in each iteration to reduce the chance of losing the ion. As a step size value could not be explicitly defined, this percentage was chosen to make the changes similar to the step size used for ADAM.

A small percentage of our initial trials with the maximum change of a few percent (instead of 1%) led to an unstable ion during the parameter search sequence. This is because MLOOP is a global optimizer and can set the voltages to values far from the stable starting point. Since the ion trap is a complicated system that can only be modelled for a specific range of configurations, moving away from these settings can lead to unpredictable and usually unstable behavior. MLOOP also has an in-built mechanism that handles function noise using a predefined expected uncertainty. We set this uncertainty to the peak-to-peak noise of the photon readout when no optimization was running.
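
For reference, a sketch of how such an MLOOP (M-LOOP) neural-network optimization is typically wired up through the package's custom-interface pattern. This is not the authors' code: the boundaries, the photon-readout stand-in, the uncertainty value and the trust_region keyword (used here to mimic the 1% maximum move) are assumptions that should be checked against the M-LOOP documentation.

```python
import numpy as np
import mloop.interfaces as mli
import mloop.controllers as mlc

def read_photon_counts(voltages):
    """Hypothetical hook: apply the 45 inputs to the trap and integrate PMT counts.
    Replaced here by a toy peak so the sketch can run without hardware."""
    v = np.asarray(voltages)
    return 60000.0 * np.exp(-0.01 * np.sum(v ** 2)) + np.random.normal(0.0, 300.0)

class IonFluorescenceInterface(mli.Interface):
    def get_next_cost_dict(self, params_dict):
        params = params_dict['params']           # the 45 voltages / laser position
        counts = read_photon_counts(params)
        # M-LOOP minimizes cost, so return negative fluorescence; 'uncer' is set
        # to the measured peak-to-peak photon-readout noise (assumed value here).
        return {'cost': -counts, 'uncer': 300.0, 'bad': False}

interface = IonFluorescenceInterface()
controller = mlc.create_controller(
    interface,
    controller_type='neural_net',                # DE sampling + neural-network model
    num_params=45,
    min_boundary=[-10.0] * 45,                   # assumed voltage limits
    max_boundary=[10.0] * 45,
    trust_region=0.01,                           # ~1% maximum move per iteration (assumed keyword)
    max_num_runs=1000,
    cost_has_noise=True)
controller.optimize()
print('best parameters:', controller.best_params)
```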

Since MLOOP is a global optimizer, it was able to find optimum points different from the points found by ADAM. For trials where low numbers of initial training data points were used, these configurations proved to be unstable and in most cases resulted in the loss of the ion. Unstable states were also observed occasionally if the optimizer was run for too long. With moderate-size training sets, MLOOP was able to find voltage settings with fluorescence rates similar to or higher than the optimum points found by ADAM, as shown in Fig. 4a. Considering the long duration of the MLOOP iteration sequence and the possibility of finding unstable settings in volatile conditions, the test of optimization with induced changing stray fields (Testing under poor trap conditions) was performed only with the ADAM optimizer, as the gradient-based search method proved to be more robust against fluctuations in the ion environment.

To test the effectiveness of the protocols, the saturation power, \(P_{\text{sat}}\), was measured before and after the optimization process. \(P_{\text{sat}}\) is the laser power at which the fluorescence rate of a two-level system is half the fluorescence at infinite laser power. We also measured the overall detection efficiency \(\eta\), the fraction of emitted photons which resulted in detection events. Table 1 shows \(P_{\text{sat}}\) decreased (ion photon absorption was improved) using both ADAM and MLOOP. The detection efficiency was approximately the same for all runs, as expected.
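
The definition of \(P_{\text{sat}}\) above corresponds to the standard two-level saturation curve, which can be written (with \(R\) the fluorescence rate and \(R_{\max}\) its value at infinite power) as:

```latex
R(P) \;=\; R_{\max}\,\frac{P/P_{\text{sat}}}{1 + P/P_{\text{sat}}},
\qquad\text{so that}\qquad
R(P_{\text{sat}}) \;=\; \tfrac{1}{2}\,R_{\max}.
```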

Another test was done by measuring fluorescence versus laser detuning before and after optimization. Figure 4b shows that the measured values follow the expected Lorentzian profile [29,30,31] and associated linewidth before and after optimization. This indicates that the initial micromotion magnitude \(\beta\) was sufficiently small for fluorescence to be a good optimization proxy. A clear increase in fluorescence can be seen after optimizing the 44 electrodes individually, both with ADAM and MLOOP. The fit residual curve (the difference between the experimental values and the theoretical fit) shows that optimizing individual electrodes resulted in a slight increase in heating instability near the resonance.
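
For reference, the Lorentzian profile referred to here (refs. 29,30,31) has the standard power-broadened form, with \(\delta\) the laser detuning from resonance, \(\Gamma\) the natural linewidth and \(s_0\) the on-resonance saturation parameter:

```latex
R(\delta) \;=\; \frac{\Gamma}{2}\,\frac{s_0}{1 + s_0 + \left(2\delta/\Gamma\right)^{2}}
\;\propto\; \frac{1}{1 + \left(2\delta/\Gamma'\right)^{2}},
\qquad \Gamma' = \Gamma\sqrt{1 + s_0}.
```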

To test the live performance of the optimization protocol in a non-ideal situation, we deliberately charged the trap by shining 369.5 nm UV laser light onto the chip for 70 min. The power of the laser was \(200 \pm 15\ \mu\text{W}\) and the Gaussian diameter of the focus was \(120 \pm 10\ \mu\text{m}\). This process ejects electrons due to the photoelectric effect [34] and produces irregular and potentially unpredictable, slowly time-varying electric fields within the trap. The process charged the trap significantly and made a noticeable reduction in the photon count. The ADAM algorithm was then tested both during charging and after charging was stopped. In both cases an improvement of the fluorescence rate was observed.

The first experiment was performed to test the optimizing process after charging. In this test, starting with the optimal manual setting, the ADAM individual-electrode optimizer was able to obtain a 27% improvement in the fluorescence rate (blue points on the left side of Fig. 5a). Then charging was induced onto the trap for 70 min, and a clear decrease in photon count was seen that went even lower than the initial value (red points in Fig. 5a). At this point charging was stopped, ADAM was run again, and the fluorescence rate returned to the previous optimum, within the error, in approximately 12 min. During the second optimization, the fluorescence goes higher than the stable final value for some iterations before the final one. This is because of the same effect explained in the "Gradient descent optimizer" section: the fluorescence might spike right after a change but settle slightly lower after stabilizing. Looking at the changes of individual electrodes, shown in Fig. 3b, we see that the main electrodes adjusted were those around the ion and some throughout the trap. The change in the laser horizontal position was negligible.

Another experiment was done by running ADAM during continuous charging for real-time compensation. Since we induce charging via laser scattering from the trap, the collected photons come both from the ion and from the scattered laser, and fluctuations in the intensity of the scattered light confuse the optimizer. Despite this, the optimizer did not lose the ion, nor did it need to abort the process. Figure 5b shows that the fluorescence rate, even after a 70-min charging session, remained near the optimum value. After stopping the charging, the ion remained trapped for more than 8 h and was intentionally removed from the trap after this time.

Figure 5 caption: (a) Real-time compensation with ADAM of a laser-charging-induced stray electric field. The ion was optimized using ADAM (left blue points), then the photon count was recorded while charging for 70 min (red points), then re-optimized (right blue points). The initial improvement from manually optimized settings was 27%. The second optimization improved the fluorescence by 58% from the charged conditions and returned it, within the error, to the optimum value of the first optimization. (b) The trap was charged by shining a UV laser onto it to destabilize the ion while the individual electrodes were simultaneously optimized using ADAM for 70 min. The photon count fluctuates as a result of a combination of fluctuations of the cooling laser power, the algorithm search, and charging irregularities. The optimizer keeps the fluorescence at a photon count similar to the case of optimizing after the charging is stopped (third section of (a)).


Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Friday: What’s Next? – InvestorsObserver

Link Machine Learning (LML) gets a neutral rating from InvestorsObserver Friday. The crypto is up 37.96% to $0.006191928941 while the broader crypto market is up 236905.2%.

The Sentiment Score provides a quick, short-term look at the crypto's recent performance. This can be useful both for short-term investors looking to ride a rally and for longer-term investors trying to buy the dip.

Link Machine Learning price is currently above resistance. With support set around $0.0037498183340939 and resistance at $0.00596165217246732, Link Machine Learning is potentially in a volatile position if the rally burns out.

Link Machine Learning has traded on low volume recently. This means that today's volume is below its average volume over the past seven days.

Due to a lack of data, this crypto may be less suitable for some investors.




Deep Science: AI simulates economies and predicts which startups receive funding – TechCrunch

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column aims to collect some of the most relevant recent discoveries and papers (particularly in, but not limited to, artificial intelligence) and explain why they matter.

This week in AI, scientists conducted a fascinating experiment to predict how market-driven platforms like food delivery and ride-hailing businesses affect the overall economy when they're optimized for different objectives, like maximizing revenue. Elsewhere, demonstrating the versatility of AI, a team hailing from ETH Zurich developed a system that can read tree heights from satellite images, while a separate group of researchers tested a system to predict a startup's success from public web data.

The market-driven platform work builds on Salesforce's AI Economist, an open source research environment for understanding how AI could improve economic policy. In fact, some of the researchers behind the AI Economist were involved in the new work, which was detailed in a study originally published in March.

As the coauthors explained to TechCrunch via email, the goal was to investigate two-sided marketplaces like Amazon, DoorDash, Uber and TaskRabbit that enjoy larger market power due to surging demand and supply. Using reinforcement learning (a type of AI system that learns to solve a multi-level problem by trial and error), the researchers trained a system to understand the impact of interactions between platforms (e.g., Lyft) and consumers (e.g., riders).

"We use reinforcement learning to reason about how a platform would operate under different design objectives ... [Our] simulator enables evaluating reinforcement learning policies in diverse settings under different objectives and model assumptions," the coauthors told TechCrunch via email. "We explored a total of 15 different market settings, i.e., a combination of market structure, buyer knowledge about sellers, [economic] shock intensity and design objective."

Using their AI system, the researchers arrived at the conclusion that a platform designed to maximize revenue tends to raise fees and extract more profits from buyers and sellers during economic shocks, at the expense of social welfare. When platform fees are fixed (e.g., due to regulation), they found that a platform's revenue-maximizing incentive generally aligns with the welfare considerations of the overall economy.

The findings might not be Earth-shattering, but the coauthors believe the system, which they plan to open source, could provide a foundation for either a business or a policymaker to analyze a platform economy under different conditions, designs and regulatory considerations. "We adopt reinforcement learning as a methodology to describe strategic operations of platform businesses that optimize their pricing and matching in response to changes in the environment, either the economic shock or some regulation," they added. "This may give new insights about platform economies that go beyond this work or those that can be generated analytically."

Turning our attention from platform businesses to the venture capital that fuels them, researchers hailing from Skopai, a startup that uses AI to characterize companies based on criteria like technology, market and finances, claim to be able to predict the ability of a startup to attract investments using publicly available data. Relying on data from startup websites, social media and company registries, the coauthors say that they can obtain prediction results comparable to those that also make use of structured data available in private databases.

Applying AI to due diligence is nothing new. Correlation Ventures, EQT Ventures and Signalfire are among the firms currently using algorithms to inform their investments. Gartner predicts that 75% of VCs will use AI to make investment decisions by 2025, up from less than 5% today. But while some see the value in the technology, dangers lurk beneath the surface. In 2020, Harvard Business Review (HBR) found that an investment algorithm outperformed novice investors but exhibited biases, for example, frequently selecting white and male entrepreneurs. HBR noted that this reflects the real world, highlighting AI's tendency to amplify existing prejudices.

In more encouraging news, scientists at MIT, alongside researchers at Cornell and Microsoft, claim to have developed a computer vision algorithm, STEGO, that can identify images down to the individual pixel. While this might not sound significant, it's a vast improvement over the conventional method of teaching an algorithm to spot and classify objects in pictures and videos.

Traditionally, computer vision algorithms learn to recognize objects (e.g., trees, cars, tumors, etc.) by being shown many examples of the objects that have been labeled by humans. STEGO does away with this time-consuming, labor-intensive workflow by instead applying a class label to each pixel in the image. The system isn't perfect (it sometimes confuses grits with pasta, for example), but STEGO can successfully segment out things like roads, people and street signs, the researchers say.

On the topic of object recognition, it appears we're approaching the day when academic work like DALL-E 2, OpenAI's image-generating system, becomes productized. New research out of Columbia University shows a system called Opal that's designed to create featured images for news stories from text descriptions, guiding users through the process with visual prompts.

When they tested it with a group of users, the researchers said that those who tried Opal were more efficient at creating featured images for articles, producing over two times more usable results than users without it. It's not difficult to imagine a tool like Opal eventually making its way into content management systems like WordPress, perhaps as a plugin or extension.

"Given an article text, Opal guides users through a structured search for visual concepts and provides pipelines allowing users to illustrate based on an article's tone, subjects and intended illustration style," the coauthors wrote. "[Opal] generates diverse sets of editorial illustrations, graphic assets and concept ideas."


How is the Expectation-Maximization algorithm used in machine learning? – Analytics India Magazine

The expectation-maximization (EM) algorithm is an elegant algorithm that maximizes the likelihood function for problems with latent or hidden variables. As the name suggests, it alternates between two steps: an expectation step and a maximization step. This article explains the math behind the EM algorithm along with an implementation.

Let's try to understand how the combination of expectation and maximization helps decide the number of clusters to be formed, but before that we need to understand the concept of a latent variable.

A latent variable is a random variable that can be observed neither in the training phase nor in the test phase. These variables can't be measured on a quantitative scale. There are two reasons to use latent variables.

First, a latent variable can act as the direct cause of all the observed parameters; second, the resulting model is much simpler to work with and has the same efficiency, without reducing the flexibility of the model. There is one drawback of latent variables: models that contain them are harder to train.


The general form of the probability distribution arises from the observed variables. For variables that aren't directly observable, also known as latent variables, the expectation-maximization algorithm is used to predict their values by using the values of the observed variables. This algorithm is the building block of many unsupervised clustering algorithms in the field of machine learning. It has two major computational steps: the expectation (E) step, which estimates the values of the latent variables given the current parameter estimates, and the maximization (M) step, which re-estimates the parameters so as to maximize the likelihood given those estimates.

A high-level idea of how the EM algorithm functions, in the form of the standard update equations for a Gaussian mixture, is given below.
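
These updates follow the standard textbook presentation of EM for a Gaussian mixture (the model used later in this article) rather than being taken from the original listing. Here gamma_ik is the responsibility of component k for data point x_i, N is the number of data points and K the number of components.

```latex
% E-step: compute responsibilities given the current parameters
\gamma_{ik} \;=\; \frac{\pi_k\,\mathcal{N}(x_i \mid \mu_k, \Sigma_k)}
                       {\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_i \mid \mu_j, \Sigma_j)}

% M-step: re-estimate the parameters from the responsibilities
N_k = \sum_{i=1}^{N} \gamma_{ik}, \qquad
\pi_k = \frac{N_k}{N}, \qquad
\mu_k = \frac{1}{N_k}\sum_{i=1}^{N} \gamma_{ik}\, x_i, \qquad
\Sigma_k = \frac{1}{N_k}\sum_{i=1}^{N} \gamma_{ik}\,(x_i - \mu_k)(x_i - \mu_k)^{\top}
```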

So we have an understanding of how the EM algorithm functions, but to implement this algorithm in Python we need to understand the model that uses it to form clusters. Let's talk about the Gaussian Mixture Model.

The Gaussian Mixture Model is an important concept in machine learning that uses the idea of expectation-maximization. A Gaussian mixture is composed of several Gaussians, each identified by an index k, where k runs over the number of clusters to be formed. For each Gaussian k in the mixture, the following parameters are present: a mixing weight, a mean and a covariance.

The plot shows a Gaussian distribution for data with a mean of 4 and a variance of 0.25, i.e., a normal distribution. Using an iterative process, the model arrives at the final number of clusters with the help of these parameters, which determine the cluster stability.

Let's implement the concept of expectation-maximization in Python.

Import necessary libraries

Reading and analyzing the data

Using the famous wine data for this implementation.
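
The article's code listings did not survive extraction, so here is a minimal sketch of the import and data-loading steps. It assumes the "famous wine data" is the wine dataset bundled with scikit-learn (load_wine); the original article may have loaded it differently.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_wine
from sklearn.mixture import GaussianMixture

# Load the wine data into a DataFrame and take a first look at it
wine = load_wine()
df = pd.DataFrame(wine.data, columns=wine.feature_names)
df["target"] = wine.target
print(df.shape)
print(df.describe().T)
```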

Plotting a distribution

This plot helps to understand the distribution of the dependent variable over the independent variable.
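
A sketch of such a plot, using the DataFrame built above and the "alcohol" column as an example independent variable (the column used in the original article is not shown):

```python
# Kernel-density plot of one feature, split by the wine class (target)
ax = sns.kdeplot(data=df, x="alcohol", hue="target", fill=True)
ax.set_title("Distribution of alcohol content by wine class")
plt.show()
```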

Fitting the GMM
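
A sketch of the fitting step consistent with the surrounding description; the preprocessing and exact settings of the original are not shown, so the 0.73 figure quoted below should not be expected to reproduce exactly.

```python
# Fit Gaussian mixtures with different numbers of components and compare scores
X = df.drop(columns="target").values

for k in range(2, 9):
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
    gmm.fit(X)
    # score() returns the mean log-likelihood per sample; higher is better
    print(f"k={k}  mean log-likelihood={gmm.score(X):.2f}  BIC={gmm.bic(X):.0f}")

# Final model with the number of clusters chosen in the article (6)
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)
```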

The score function returns the average log-likelihood of the data; higher values indicate a better fit. The value is negative because the likelihood is a product of densities evaluated at the observations, and these densities take values smaller than one, so the logarithm is negative. Ignoring the sign and focusing on the magnitude, which is 0.73, indicates that the model fits well and that the number of clusters should be 6.

The Expectation-Maximization algorithm rests on the idea of computing the latent variables while taking the parameters as fixed and known, and then updating the parameters given those latent variables. The algorithm is inherently fast because it doesn't depend on computing gradients. With the hands-on implementation of this concept in this article, we can understand the expectation-maximization algorithm in machine learning.


Anticipating others’ behavior on the road | MIT News | Massachusetts Institute of Technology – MIT News

Humans may be one of the biggest roadblocks keeping fully autonomous vehicles off city streets.

If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians are going to do next.

Behavior prediction is a tough problem, however, and current artificial intelligence solutions are either too simplistic (they may assume pedestrians always walk in a straight line), too conservative (to avoid pedestrians, the robot just leaves the car in park), or can only forecast the next moves of one agent (roads typically carry many users at once).

MIT researchers have devised a deceptively simple solution to this complicated challenge. They break a multiagent behavior prediction problem into smaller pieces and tackle each one individually, so a computer can solve this complex task in real-time.

Their behavior-prediction framework first guesses the relationships between two road users (which car, cyclist, or pedestrian has the right of way, and which agent will yield) and uses those relationships to predict future trajectories for multiple agents.

These estimated trajectories were more accurate than those from other machine-learning models, compared to real traffic flow in an enormous dataset compiled by autonomous driving company Waymo. The MIT technique even outperformed Waymo's recently published model. And because the researchers broke the problem into simpler pieces, their technique used less memory.

"This is a very intuitive idea, but no one has fully explored it before, and it works quite well. The simplicity is definitely a plus. We are comparing our model with other state-of-the-art models in the field, including the one from Waymo, the leading company in this area, and our model achieves top performance on this challenging benchmark. This has a lot of potential for the future," says co-lead author Xin "Cyrus" Huang, a graduate student in the Department of Aeronautics and Astronautics and a research assistant in the lab of Brian Williams, professor of aeronautics and astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Huang and Williams on the paper are three researchers from Tsinghua University in China: co-lead author Qiao Sun, a research assistant; Junru Gu, a graduate student; and senior author Hang Zhao PhD '19, an assistant professor. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Multiple small models

The researchers' machine-learning method, called M2I, takes two inputs: past trajectories of the cars, cyclists, and pedestrians interacting in a traffic setting such as a four-way intersection, and a map with street locations, lane configurations, etc.

Using this information, a relation predictor infers which of two agents has the right of way first, classifying one as a passer and one as a yielder. Then a prediction model, known as a marginal predictor, guesses the trajectory for the passing agent, since this agent behaves independently.

A second prediction model, known as a conditional predictor, then guesses what the yielding agent will do based on the actions of the passing agent. The system predicts a number of different trajectories for the yielder and passer, computes the probability of each one individually, and then selects the six joint results with the highest likelihood of occurring.

M2I outputs a prediction of how these agents will move through traffic for the next eight seconds. In one example, their method caused a vehicle to slow down so a pedestrian could cross the street, then speed up when they cleared the intersection. In another example, the vehicle waited until several cars had passed before turning from a side street onto a busy, main road.

While this initial research focuses on interactions between two agents, M2I could infer relationships among many agents and then guess their trajectories by linking multiple marginal and conditional predictors.
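
A schematic Python sketch of how the pieces described above fit together at inference time. This is not the authors' released code; relation_predictor, marginal_predictor and conditional_predictor are hypothetical stand-ins for the trained networks.

```python
def m2i_predict(past_trajectories, map_features,
                relation_predictor, marginal_predictor, conditional_predictor,
                num_joint=6):
    """Two-agent M2I-style inference: classify passer/yielder, predict the passer
    marginally, predict the yielder conditioned on the passer, and keep the six
    most likely joint futures."""
    # 1. Relation prediction: which agent has the right of way (passer) and which yields
    passer, yielder = relation_predictor(past_trajectories, map_features)

    # 2. Marginal prediction for the passer, which behaves independently
    passer_trajs, passer_probs = marginal_predictor(passer, past_trajectories, map_features)

    # 3. Conditional prediction for the yielder, given each candidate passer future
    joint_candidates = []
    for p_traj, p_prob in zip(passer_trajs, passer_probs):
        yielder_trajs, yielder_probs = conditional_predictor(
            yielder, p_traj, past_trajectories, map_features)
        for y_traj, y_prob in zip(yielder_trajs, yielder_probs):
            joint_candidates.append((p_prob * y_prob, p_traj, y_traj))

    # 4. Keep the num_joint joint futures with the highest estimated likelihood
    joint_candidates.sort(key=lambda c: c[0], reverse=True)
    return joint_candidates[:num_joint]
```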

Real-world driving tests

The researchers trained the models using the Waymo Open Motion Dataset, which contains millions of real traffic scenes involving vehicles, pedestrians, and cyclists recorded by lidar (light detection and ranging) sensors and cameras mounted on the company's autonomous vehicles. They focused specifically on cases with multiple agents.

To determine accuracy, they compared each method's six prediction samples, weighted by their confidence levels, to the actual trajectories followed by the cars, cyclists, and pedestrians in a scene. Their method was the most accurate. It also outperformed the baseline models on a metric known as overlap rate; if two trajectories overlap, that indicates a collision. M2I had the lowest overlap rate.

"Rather than just building a more complex model to solve this problem, we took an approach that is more like how a human thinks when they reason about interactions with others. A human does not reason about all hundreds of combinations of future behaviors. We make decisions quite fast," Huang says.

Another advantage of M2I is that, because it breaks the problem down into smaller pieces, it is easier for a user to understand the model's decision-making. In the long run, that could help users put more trust in autonomous vehicles, says Huang.

But the framework can't account for cases where two agents are mutually influencing each other, like when two vehicles each nudge forward at a four-way stop because the drivers aren't sure who should be yielding.

They plan to address this limitation in future work. They also want to use their method to simulate realistic interactions between road users, which could be used to verify planning algorithms for self-driving cars or create huge amounts of synthetic driving data to improve model performance.

"Predicting future trajectories of multiple, interacting agents is under-explored and extremely challenging for enabling full autonomy in complex scenes. M2I provides a highly promising prediction method with the relation predictor to discriminate agents predicted marginally or conditionally, which significantly simplifies the problem," wrote Masayoshi Tomizuka, the Cheryl and John Neerhout, Jr. Distinguished Professor of Mechanical Engineering at the University of California at Berkeley, and Wei Zhan, an assistant professional researcher, in an email. "The prediction model can capture the inherent relation and interactions of the agents to achieve the state-of-the-art performance." The two colleagues were not involved in the research.

This research is supported, in part, by the Qualcomm Innovation Fellowship. Toyota Research Institute also provided funds to support this work.


Mawf is a free machine learning-powered plugin synth from the company behind TikTok – MusicRadar

In their first foray into the music production world, Bytedance - the company behind social media platform TikTok - have announced the development of a curious new plugin.

Mawf uses machine learning to 'morph' incoming audio signals into emulations of real instruments in your DAW. The plugin's ML synthesis engine can also run on MIDI input alone. This means you can use Mawf as an effect to colour an existing sound, or use it as a virtual instrument by itself. In its beta version, Mawf offers models of three instruments: saxophone, trumpet and the khlui, a Thai flute.

The developers behind Mawf used machine learning to analyse recordings of professional musicians playing instruments. The ML models extracted expressive changes in the instrument's sound that were linked to variations in pitch and amplitude. Mawf then uses these trained models to approximate the sound of these instruments based on input provided by the user. They're keen to distinguish this from physical modelling synthesis, which requires "specialised equations" for each instrument modelled. Mawf needs only a solo recording of the target instrument in order to imitate it.

Though the range of modelled instruments is a little small, Mawf does feature some interesting additions. There are built-in compressor, chorus and reverb effects, and a number of Control Modes that adjust how Mawf's synth engine is triggered, allowing the user to control the pitch of the processed audio through MIDI.

In a statement on their website, the developers commented on the current limitations of the technology: "Like the first ever analogue synthesiser, expect some funky bleeps and bloops from Mawf. ML for audio synthesis is a new technology no one has really perfected yet."

Mawf can be downloaded for free by users outside of the U.S., but beta testing is limited to the first 500 sign-ups, so we suggest moving fast if you'd like to snag a copy.

Visit Mawf's website to find out more.


All You Need to Know about the Growing Role of Machine Learning in Cybersecurity – CIO Applications

ML can help security teams perform better, smarter, and faster by providing advanced analytics to solve real-world problems, such as using ML UEBA to detect user-based threats.

Fremont, CA: Machine learning (ML) and artificial intelligence (AI) are popular buzzwords in the cybersecurity industry. Security teams urgently require more automated methods to detect threats and malicious user activity, and machine learning promises a brighter future. Melissa Ruzzi offers some pointers on how to bring it into your organization.

Cybersecurity is undergoing massive technological and operational shifts, and data science is a key component driving these future innovations. Machine learning (ML) can play a critical role in extracting insights from data in the cyber security space.

To capitalize on ML's automated innovation, security teams must first identify the best opportunities for implementing these technologies. Correctly deploying ML is critical to achieving a meaningful impact in improving an organization's capability of detecting and responding to emerging and ever-evolving cyber threats.

Driving an AI-powered Future

The use of machine learning to transform security operations is a new approach, and data-driven capabilities will continue to evolve in the coming years. Now is the time for organizations to understand how these technologies can be deployed to achieve greater threat detection and protection outcomes in order to secure their future against a growing threat surface.

Machine Learning and the Attack Surface

Because of the proliferation of cloud storage, mobile devices, teleworking, distance learning, and the Internet of Things, the threat surface has grown exponentially, increasing the number of suspicious activities that are not necessarily related to threats. The difficulty is exacerbated by the large number of suspicious events flagged by most security monitoring tools. Teams are finding it increasingly difficult to keep up with suspicious activity analysis and identify emerging threats in a crowded threat landscape.

This is where ML comes into play. From the perspective of a security professional, there is a strong need for ML and AI. They're looking for ways to automate the detection of threats and of malicious behavior. Moving away from manual methods frees up time and resources, allowing security teams to concentrate on other tasks. With ML, they can go beyond deterministic, rule-based approaches that require prior knowledge of fixed patterns.
