A team of researchers has designed and built a chip that performs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms.
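To make the idea concrete, here is a minimal NumPy sketch of how a compute-in-memory crossbar can evaluate a neural-network layer. The matrix sizes, conductance range, and sign handling below are illustrative assumptions, not details of the NeuRRAM design.

```python
import numpy as np

# Conceptual sketch of analog compute-in-memory (not the NeuRRAM
# circuit itself): a crossbar of resistive memory (RRAM) cells stores
# a weight matrix as conductances. Applying input voltages to the rows
# produces column currents that, by Ohm's and Kirchhoff's laws, equal
# the matrix-vector product -- the computation happens where the data
# is stored, with no weight movement between memory and processor.

rng = np.random.default_rng(0)

weights = rng.uniform(-1.0, 1.0, size=(4, 8))  # trained layer weights (assumed shape)
g_max = 1e-4                                   # assumed maximum cell conductance, in siemens
conductances = np.abs(weights) * g_max         # weight magnitudes mapped to conductances
signs = np.sign(weights)                       # signs; real chips often use paired columns

v_in = rng.uniform(0.0, 0.2, size=8)           # input activations encoded as row voltages (V)

# Column currents: each output sums V * G contributions "in memory".
i_out = (signs * conductances) @ v_in

# Same result a digital processor would get by fetching every weight:
assert np.allclose(i_out / g_max, weights @ v_in)
```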
The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where it can perform complex cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications range from smart watches, smart earbuds, and VR headsets to rovers for space exploration and smart sensors in factories.
In terms of efficiency, the NeuRRAM chip is twice as energy efficient as state-of-the-art compute-in-memory chips, while producing results that are just as accurate as those of conventional digital chips.
Conventional AI platforms, by contrast, are much bulkier and are typically constrained to large data servers operating in the cloud.
Additionally, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, it can be used for a wide range of applications, including image recognition, image reconstruction, and voice recognition.
Conventional wisdom holds that the greater efficiency of compute-in-memory chips comes at the cost of versatility, but the NeuRRAM chip delivers its efficiency without sacrificing versatility.
The research team, in collaboration with bioengineers at the University of California San Diego, presented its findings in the Aug. 17 issue of Nature.
Currently, AI computing is both computationally expensive and energy hungry. Most AI applications on edge devices involve moving data from the device to the cloud, where it is processed and analyzed; the results are then sent back to the device. This is because most edge devices are battery-powered and therefore have only a limited power budget for computation.