Intel Unveils Second-Generation Neuromorphic Chip
Intel has unveiled its second-generation neuromorphic computing chip, Loihi 2, the first chip to be built on its Intel 4 process technology. Designed for neuromorphic computing research, Loihi 2 brings a range of improvements, including a new instruction set that makes neurons more programmable, generalized spikes that can carry integer values beyond just 1 and 0, and the ability to scale into three-dimensional meshes of chips for larger systems.
The chipmaker also unveiled Lava, an open-source software framework for developing neuro-inspired applications. Intel hopes to engage neuromorphic researchers in development of Lava, which when up and running will allow research teams to build on each other’s work.
Loihi is Intel’s version of what neuromorphic hardware, designed for brain-inspired spiking neural networks (SNNs), should look like. SNNs are used in event-based computing, in which the timing of input spikes encodes the information. In general, spikes that arrive sooner have more computational effect than those arriving later.
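To make the time-coding idea concrete, here is a minimal, illustrative sketch in Python (not Intel's implementation) of a leaky integrate-and-fire neuron: within a fixed decision window, input spikes that arrive earlier leave more time to push the membrane potential over threshold, so the timing of the output spike reflects the timing of the inputs.

```python
# Illustrative only -- a toy leaky integrate-and-fire (LIF) neuron,
# not Intel's hardware model. Earlier input spikes produce an earlier
# output spike, which is how spike *timing* can encode information.

def lif_first_spike_time(spike_times, weight=0.6, decay=0.9,
                         threshold=1.0, window=20):
    """Return the time step at which the neuron first fires, or None."""
    v = 0.0                          # membrane potential
    for t in range(window):
        v *= decay                   # leak
        if t in spike_times:         # input spike arrives
            v += weight
        if v >= threshold:           # threshold crossed -> output spike
            return t
    return None

print(lif_first_spike_time({0, 1}))    # early inputs -> fires at step 1
print(lif_first_spike_time({10, 11}))  # late inputs  -> fires at step 11
```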
Intel’s Loihi 2 second-generation neuromorphic processor. (Source: Intel)
Among the key differences between neuromorphic hardware and standard CPUs is the fine-grained distribution of memory, meaning Loihi's memory is embedded into its individual cores. And because Loihi's spikes rely on timing, the architecture is asynchronous.
“In neuromorphic computing, the computation is emerging through the interaction between these dynamical elements,” explained Mike Davies, director of Intel’s Neuromorphic Computing Lab. “In this case, it’s neurons that have this dynamical property of adapting online to the input it receives, and the programmer may not know the precise trajectory of steps that the chip will go through to arrive at an answer.
“It goes through a dynamical process of self-organizing its states and it settles into some new condition. That final fixed point as we call it, or equilibrium state, is what is encoding the answer to the problem that you want to solve,” Davies added. “So it’s very fundamentally different from how we even think about computing in other architectures.”
First-generation Loihi chips have thus far been demonstrated in a variety of research applications, including adaptive robot arm control, where the motion adapts to changes in the system, reducing friction and wear on the arm. Loihi is able to adapt its control algorithm to compensate for errors or unpredictable behavior, enabling robots to operate with the desired accuracy. Loihi has also been used in odor recognition, where it can learn and detect new odors much more efficiently than a deep learning-based equivalent. A project with Deutsche Bahn also used Loihi for train scheduling, where the system reacted quickly to changes such as track closures or stalled trains.
Second-gen features
Built on a pre-production version of the Intel 4 process, Loihi 2 aims to increase programmability and performance without compromising energy efficiency. Like its predecessor, it typically consumes around 100 mW (up to 1 W).
An increase in resource density is one of the most important changes; while the chip still incorporates 128 cores, the neuron count jumps by a factor of eight.
“Getting to a higher amount of storage, neurons and synapses in a single chip is essential for the commercial viability… and commercializing them in a way that makes sense for customer applications,” said Davies.
Loihi 2 features. (Source: Intel)
With Loihi 1, workloads would often map onto the architecture in non-optimal ways. For example, the neuron count would often max out while free memory was still available. The amount of memory in Loihi 2 is similar in total, but has been broken up into memory banks that are more flexible. Additional compression has been added to network parameters to minimize the amount of memory required for larger models. This frees up memory that can be reallocated for neurons.
The upshot is that Loihi 2 can tackle larger problems with the same amount of memory, delivering a roughly 15-fold increase in neural network capacity per square millimeter of chip area, bearing in mind that the new process technology roughly halves the overall die area.
Neuron programmability
Programmability is another important architectural change. Loihi 1's neurons were fixed-function, though configurable; in Loihi 2 they gain a full instruction set that includes common arithmetic, comparison, and program control-flow instructions. That level of programmability allows varied SNN types to run more efficiently.
“This is a kind of microcode that allows us to program almost arbitrary neuron models,” Davies said. “This covers the limits of Loihi [1], and where generally we’re finding more application value could be unlocked with even more complex and richer neuron models, which is not what we were expecting at the beginning of Loihi. But now we can actually encompass that full extent of neuron models that our partners are trying to investigate, and what the computational neuroscience domain [is] proposing and characterizing.”
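As a conceptual analogue only, and not Loihi 2's actual microcode or instruction encoding, the sketch below shows the general idea: a small per-neuron instruction set with arithmetic, comparison and control-flow operations lets a core run different neuron models simply by swapping the program.

```python
# Conceptual analogue only -- not Loihi 2's real instruction set. A tiny
# interpreter runs a per-timestep "neuron program" against a state dict,
# so changing the neuron model means changing the program, not the core.

def run_neuron_program(program, state, inp):
    """Execute one time step of a neuron program; return 1 on a spike."""
    for op, *args in program:
        if op == "mul":          # state[dst] *= k   (e.g. leak/decay)
            dst, k = args
            state[dst] *= k
        elif op == "add_in":     # state[dst] += external input
            (dst,) = args
            state[dst] += inp
        elif op == "spike_if":   # fire and reset if threshold crossed
            var, thr = args
            if state[var] >= thr:
                state[var] = 0.0
                return 1
    return 0

# A basic leaky integrate-and-fire model expressed as a "program":
lif = [("mul", "v", 0.9), ("add_in", "v"), ("spike_if", "v", 1.0)]
state = {"v": 0.0}
print([run_neuron_program(lif, state, x) for x in [0.5, 0.5, 0.5]])  # [0, 0, 1]
```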
The Loihi 2 die is the first to be fabricated on a pre-production version of Intel 4 process technology. (Source: Intel)
For Loihi 2, the idea of spikes has also been generalized. Loihi 1 employed strict binary spikes to mirror what is seen in biology, where spikes have no magnitude. All information is represented by spike timing, and earlier spikes would have greater computational effect than later spikes. In Loihi 2, spikes carry a configurable integer payload available to the programmable neuron model. While biological brains don’t do this, Davies said it was relatively easy for Intel to add to the silicon architecture without compromising performance.
“This is an instance where we’re departing from the strict biological fidelity, specifically because we understand what the importance is, the time-coding aspect of it,” he said. “But [we realized] that we can do better, and we can solve the same problems with fewer resources if we have this extra magnitude that can be sent alongside with this spike.”
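The resource argument can be illustrated with a small sketch (again, not a description of Intel's silicon): a graded spike whose integer payload is weighted at the synapse can have the same post-synaptic effect as several separate binary spikes, while costing only a single event.

```python
# Illustrative sketch: binary spikes (value 1) versus a graded spike
# carrying an integer payload. The graded event conveys the same
# magnitude with far less traffic.

def accumulate(events, weight=0.25):
    """Sum weighted spike events into a post-synaptic contribution."""
    return sum(weight * payload for payload in events)

binary_events = [1, 1, 1, 1]      # four separate unit spikes
graded_events = [4]               # one spike carrying payload 4

print(accumulate(binary_events))  # 1.0
print(accumulate(graded_events))  # 1.0 -- same effect, one event
```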
Generalized event-based messaging is key to Loihi 2’s support for a type of deep neural network called the sigma-delta neural network (SDNN), which is much faster than the timing-based approach used on Loihi 1. SDNNs compute graded-activation values in the same way that conventional DNNs do, but communicate only significant changes, as they happen, in a sparse, event-driven manner.
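A hedged sketch of the sigma-delta idea, not Intel's SDNN code: the sender emits an event only when its activation has changed by more than a threshold since the last message, and the receiver reconstructs the value by accumulating the deltas it receives.

```python
# Illustrative sigma-delta messaging: send only significant changes.

def sigma_delta_encode(activations, threshold=0.1):
    """Yield (step, delta) events when the activation changes enough."""
    last_sent = 0.0
    for t, a in enumerate(activations):
        delta = a - last_sent
        if abs(delta) > threshold:
            last_sent = a
            yield t, delta

activations = [0.0, 0.02, 0.05, 0.6, 0.62, 0.61, 0.1]
events = list(sigma_delta_encode(activations))
print(events)            # only the big jumps become messages

# Receiver side: accumulate deltas to track the value sparsely.
reconstructed = 0.0
for t, delta in events:
    reconstructed += delta
print(reconstructed)
```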
3D scaling
At the circuit level, Loihi 2 is billed as up to 10 times faster than its predecessor; combined with functional improvements, the design can deliver up to 10X overall speed gains, Davies claimed. Loihi 2 supports minimum chip-wide time steps under 200 ns and can process neuromorphic networks up to 5,000 times faster than biological neurons.
The new chip also features scalability ports that allow neural networks to scale into the third dimension. Because Loihi has no external memory on which to run larger neural networks, bigger models required multiple Loihi 1 devices (such as in Intel’s Pohoiki Springs system). Planar meshes of Loihi 1 chips become 3D meshes in Loihi 2. Meanwhile, chip-to-chip bandwidth has been improved by a factor of four, with compression and new protocols cutting redundant spike traffic between chips to one-tenth. Davies said the combined capacity boost is around 60-fold for most workloads, avoiding bottlenecks caused by inter-chip links.
Also supported is three-factor learning, which is popular in cutting-edge neuromorphic algorithm research. The same modification, which maps third factors to specific synapses, can be used to approximate back-propagation, the training method used in deep learning, opening up new ways of learning with Loihi.
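As a rough illustration (not the rule Loihi 2 implements), a three-factor update keeps a local eligibility trace built from pre- and post-synaptic activity, and only commits a weight change when a third, modulatory signal, such as a reward or error term, arrives at that synapse.

```python
# Illustrative three-factor plasticity rule, not Intel's implementation.

def three_factor_update(w, pre, post, third, lr=0.01, decay=0.9,
                        trace=0.0):
    """One plasticity step; returns (new_weight, new_trace)."""
    trace = decay * trace + pre * post   # local eligibility (factors 1 & 2)
    w += lr * third * trace              # change gated by third factor
    return w, trace

w, trace = 0.5, 0.0
for pre, post, third in [(1, 1, 0.0), (1, 1, 0.0), (0, 1, 1.0)]:
    w, trace = three_factor_update(w, pre, post, third, trace=trace)
print(round(w, 4))  # the weight only moves once the third factor arrives
```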
Loihi 2 will be available to researchers as a single-chip board for developing edge applications (Oheo Gulch). It will also be offered as an eight-chip board intended to scale for more demanding applications. (Source: Intel)
Lava
The Lava software framework rounds out the Loihi enhancements. The open-source project is available to the neuromorphic research community.
“Software continues to hold back the field,” Davies said. “There hasn’t been a lot of progress, not at the same pace as the hardware over the past several years. And there hasn’t been an emergence of a single software framework, as we’ve seen in the deep learning world where we have TensorFlow and PyTorch gathering huge momentum and a user base.”
While Intel has a portfolio of applications demonstrated for Loihi, code sharing among development teams has been limited. That makes it harder for developers to build on progress made elsewhere.
Davies said Lava, promoted as a new project rather than a product, is intended as a way to build a framework that supports Loihi researchers working on a range of algorithms. While Lava is aimed at event-based asynchronous message passing, it also supports heterogeneous execution: researchers can develop applications that initially run on CPUs and then, once they have access to Loihi hardware, map parts of the workload onto the neuromorphic chip. The hope is that this approach will lower the barrier to entry.
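The snippet below sketches what that CPU-first workflow can look like. The class and port names follow Lava's published examples and may differ from the current API; treat it as an assumption-laden sketch rather than a definitive reference. The process graph is defined once and run on a CPU simulation backend; targeting Loihi hardware is intended to be a matter of swapping the run configuration.

```python
# Assumption-laden sketch based on Lava's published examples; names and
# signatures may differ from the current API.
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif1 = LIF(shape=(3,))             # input layer of LIF neurons
dense = Dense(weights=np.eye(3))   # fixed synaptic weights
lif2 = LIF(shape=(3,))             # output layer

lif1.s_out.connect(dense.s_in)     # spikes -> synapses
dense.a_out.connect(lif2.a_in)     # weighted input -> neurons

# Run on a CPU simulation backend; a hardware run configuration would
# map the same process graph onto Loihi once access is available.
lif2.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
lif2.stop()
```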
“We see a need for convergence and a communal development here towards this greater goal which is going to be necessary for commercializing neuromorphic technology,” Davies said.
Loihi 2 will be used by researchers developing advanced neuromorphic algorithms. Oheo Gulch, a single-chip system for lab testing, will initially be available to researchers, followed by Kapoho Point, an eight-chip Loihi 2 version of Kapoho Bay. Kapoho Point includes an Ethernet interface designed to allow boards to be stacked for applications, such as robotics, that require more computing power.
Lava is available for download on GitHub.
–Editor’s note: A previous version of this story listed an incorrect link to the Lava open source project. The link has been corrected.