Machine Learning Improves Performance of the Advanced Light Source

Synchrotrons, such as the Advanced Light Source (ALS) at the Department of Energy's Lawrence Berkeley National Laboratory, can generate light across a wide range of frequencies by accelerating electrons until they emit a controlled beam of light. Scientists use these controlled, uniform light beams to peer into materials and learn more about biology, chemistry, physics, environmental science, and, of course, materials science.

The more intense and uniform a synchrotrons light beam is, the more information scientists can get from their experiments. And over the years, researchers have devised ways to upgrade their synchrotrons to produce brighter, more-consistent light beams that let them make more complex and detailed studies across a broad range of sample types.

This image shows the profile of an electron beam at Berkeley Lab's Advanced Light Source synchrotron represented as pixels measured by a charge-coupled-device (CCD) sensor. Some experiments require that the light-beam size remain stable on time scales ranging from less than a second to hours to ensure reliable data. (Image: Lawrence Berkeley National Laboratory)

But some light-beam properties still fluctuate, posing a challenge for certain experiments.

That changed recently when a large team of researchers at Berkeley Lab and UC Berkeley developed a way to use machine learning to improve the stability of the synchrotron light beam's size. An algorithm makes adjustments that largely cancel out these fluctuations, reducing them from a level of a few percent down to 0.4%, with sub-micron (below one millionth of a meter) precision.

Machine learning, a form of artificial intelligence, uses a computer to analyze a set of data to build predictive programs that solve complex problems. The machine learning used at the ALS is referred to as a neural network because it recognizes patterns in data in a way loosely resembling the way a human brain does.

This chart shows how vertical beam-size stability greatly improves when a neural network is implemented during Advanced Light Source operations. When the so-called feed-forward correction is used, fluctuations in the vertical beam size are stabilized down to the sub-percent level (see yellow-highlighted section) from levels that otherwise range to several percent. (Credit: Lawrence Berkeley National Laboratory)

During development, researchers fed electron-beam data from the ALS, including the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The neural network recognized patterns in this data and identified how different device parameters affected the width of the electron beam. The machine-learning algorithm also recommended adjustments to the magnets to improve the electron beam.
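As a rough illustration of that training step, the sketch below fits a small neural-network regressor that maps device settings to a measured beam size. Everything here is assumed for illustration: the data is synthetic, the 35-parameter input simply echoes the figure quoted later in the article, and the real ALS pipeline is not described in this level of detail.

```python
# Minimal sketch (not the ALS code): fit a neural-network regressor that maps
# magnetic-device settings to a measured vertical beam size. The data below is
# synthetic; at the ALS the inputs would be archived insertion-device positions
# and the target would be the beam size recorded by the diagnostics.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N_SAMPLES, N_PARAMS = 5000, 35        # roughly 35 device parameters, per the article

# Stand-in for logged device settings (e.g., gaps and phases), scaled to [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(N_SAMPLES, N_PARAMS))
# Assumed smooth, nonlinear beam-size response plus measurement noise
y = (50.0 + 5.0 * np.tanh(X[:, 0] * X[:, 1]) + 2.0 * np.sin(X[:, 2])
     + rng.normal(0.0, 0.3, N_SAMPLES))

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)     # learn the settings -> beam-size mapping

# The trained model can then estimate how a new combination of settings would
# change the beam size before it is ever applied to the machine.
x_new = rng.uniform(-1.0, 1.0, size=(1, N_PARAMS))
print("predicted beam size:", model.predict(scaler.transform(x_new))[0])
```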

The machine-learning technique suggested changes to the way the magnets in the ALS are constantly adjusted, compensating in real time for fluctuations in the various beams the ALS can create simultaneously. In fact, the improvements refine adjustments first made back in 1993.

The algorithm-directed ALS can now make corrections at a rate of up to 10 times per second, though three times a second appears to be adequate for improving performance at this stage.
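A minimal sketch of that feed-forward idea is shown below, assuming the regressor from the previous sketch; the read_device_settings() and apply_correction() functions are placeholders for the real ALS control-system interface, which the article does not describe.

```python
# Hypothetical feed-forward loop: a few times per second, read the current device
# settings, predict how far the beam size would drift from its target, and command
# a compensating correction before the drift shows up in the delivered light.
import time

TARGET_SIZE = 50.0   # desired vertical beam size (arb. units, assumed)
RATE_HZ = 3.0        # about three corrections per second, per the article
GAIN = 0.5           # fraction of the predicted error removed per step (assumed)

def feed_forward_loop(model, scaler, read_device_settings, apply_correction):
    while True:
        settings = read_device_settings()                    # current magnet parameters
        predicted = model.predict(scaler.transform([settings]))[0]
        error = predicted - TARGET_SIZE                      # anticipated beam-size drift
        apply_correction(-GAIN * error)                      # cancel it in advance
        time.sleep(1.0 / RATE_HZ)
```

Because such a scheme acts on predicted rather than measured errors, its effectiveness depends directly on how well the neural network has learned the machine's behavior.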

An exterior view of the Advanced Light Source dome that houses dozens of beamlines. (Credit: Roy Kaltschmidt/Berkeley Lab)

The changes narrowed the focus of the light beams from around 100 microns down to below 10 microns. Scientists already know that the newest upgrade reduced artifacts in images from X-ray microscopes that use the light beam. This makes the ALS suitable for advanced X-ray techniques such as ptychography, which can resolve the structure of samples down to the level of nanometers, and X-ray photon correlation spectroscopy (XPCS), which lets scientists study rapid changes in highly concentrated materials that don't have a uniform structure.

"Machine learning fundamentally requires two things: The problem needs to be reproducible, and you need huge amounts of data," says Simon Leemann, leader of the machine-learning project at the ALS. "We realized we could put all of our data to use and have an algorithm recognize patterns. The problem consisted of roughly 35 parameters, way too complex for us to figure out ourselves."
