While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available, along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphics processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
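The computation each neuron performs can be sketched in a few lines of Python. This is a minimal illustration: the ReLU activation and the particular weights chosen here are illustrative assumptions, not anything specified in the text.

```python
def relu(z):
    # A common choice of nonlinear activation function:
    # zero for negative inputs, identity otherwise.
    return max(0.0, z)

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs, followed by the nonlinear activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(z)

# Example: one neuron with three inputs.
out = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.05)
# out is 0.15: the weighted sum 0.2 - 0.3 + 0.2 + 0.05, passed through ReLU.
```

In a real network this output would in turn feed the inputs of neurons in the next layer.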


For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
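To make the connection concrete, here is a matrix multiply written as nothing but nested multiply-and-accumulate operations. This plain-Python sketch shows the arithmetic; it is not how optimized libraries actually implement it.

```python
def matmul(A, B):
    # Multiply an m x n matrix A by an n x p matrix B using
    # explicit multiply-and-accumulate operations.
    m, n, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            acc = 0.0
            for k in range(n):
                acc += A[i][k] * B[k][j]  # one multiply-and-accumulate
            C[i][j] = acc
    return C

C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# C is [[19.0, 22.0], [43.0, 50.0]]
```

Each entry of the result is the accumulated sum of pairwise products from one row and one column, and a large network performs billions of these per inference.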

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources (along with time, money, and energy) at the problem.

As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together: the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.


The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine them. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components (photodetectors) to measure the two output beams. They don't measure the electric-field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.

Why is that relation important? To understand it requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
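The identity at the heart of this scheme is easy to check numerically. The sketch below is an idealized model of the signal chain just described, not a simulation of the actual hardware.

```python
import math

def optical_multiply(x, y):
    # Beam splitter: two output fields, (x + y)/sqrt(2) and (x - y)/sqrt(2).
    out_plus = (x + y) / math.sqrt(2)
    out_minus = (x - y) / math.sqrt(2)
    # Photodetectors measure power, the square of the field.
    p_plus = out_plus ** 2
    p_minus = out_minus ** 2
    # Negate one electrical signal and sum: the result is 2xy.
    return p_plus - p_minus

# optical_multiply(3.0, 4.0) gives 24.0, i.e. 2 * 3 * 4.
```

Note that power conservation holds as well: the two detected powers always sum to x² + y², so the beam splitter redistributes energy without creating or destroying it.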

[Figure] Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Image: Lightmatter

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.
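In software terms, the capacitor plays the role of an accumulator: each pulse pair contributes one product, and the running charge is a dot product. Here is an idealized sketch of that process, ignoring noise and detector details.

```python
import math

def pulse_product(x, y):
    # One pulse pair through the beam splitter and photodetectors:
    # the difference of the two detected powers equals 2xy.
    return ((x + y) / math.sqrt(2)) ** 2 - ((x - y) / math.sqrt(2)) ** 2

def accumulate(xs, ys):
    # The capacitor integrates charge across a sequence of pulses,
    # producing a multiply-and-accumulate result that is read out only once.
    charge = 0.0
    for x, y in zip(xs, ys):
        charge += pulse_product(x, y)
    return charge / 2  # each pulse contributes 2xy, so halve once at readout

# accumulate([1, 2, 3], [4, 5, 6]) gives the dot product 32.0.
```

The single division at readout corresponds to the fact that only one analog-to-digital conversion is needed per sequence, which is precisely the source of the energy savings described next.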

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulseyou can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple timesconsuming energy each timeit can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysys, which hopes to revive a rather old concept. One of the first uses of optical computing, back in the 1960s, was for processing synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysys hopes to bring this approach up to date and apply it more widely.
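The operation a lens performs on coherent light corresponds, in the appropriate plane, to a two-dimensional Fourier transform, the same transform computed digitally by FFT routines. The NumPy snippet below is purely a digital illustration of that operation (a toy 8×8 array standing in for radar data), not a model of any company's hardware.

```python
import numpy as np

# A small 2-D "image": the kind of array a synthetic-aperture radar
# processor must Fourier-transform. Here, a single bright point.
data = np.zeros((8, 8))
data[3, 3] = 1.0

# Digitally, an N x N transform takes O(N^2 log N) operations;
# optically, a single lens performs it in one pass of light.
spectrum = np.fft.fft2(data)
# A point source transforms to a uniform-magnitude spectrum:
# every entry of abs(spectrum) equals 1.
```

The contrast between the FFT's arithmetic cost and the lens's single pass is the whole appeal of the optical approach for this workload.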


There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approaches (spiking and optics) is quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this time, and the future of such computations may indeed be photonic.
