“We are entering a new era where artificial intelligence is shaping the world of the future,” said Barbara De Salvo, researcher at CEA-Leti.
Earlier this year, she described a number of emerging technologies, such as neuromorphic hardware and ultra-low-power microdevices, that could help create a radically new Internet of Things (IoT) communication architecture, one that shifts analytics processing to devices and endpoints rather than the cloud.
“With billions of easily accessible and cost-effective networked devices, the world has entered the era of hyper-connectivity, allowing humans and machines to interact in symbiosis with the physical world and the cyber-environment,” said De Salvo. “AI is at the heart of this revolution.”
At ISSCC 2018 earlier this year, De Salvo said the architecture would combine brain-inspired hardware with new computing paradigms and algorithms for distributed intelligence across the IoT network.
A growing consensus in the IoT world holds that processing data at the edge of the network, rather than in remote data centers or the cloud, offers significant potential efficiency gains.
However, achieving this long-term goal will be a challenge. Battery-powered IoT devices, for example, have neither the processing power to analyze the data they collect nor a power source that could sustain such processing.
Transformative approaches are needed to “solve the efficiency problems of traditional computing architectures,” and De Salvo called for a “holistic approach to the development of low-power architectures inspired by the human brain, where process development and integration, circuit design, system architecture, and learning algorithms are optimized at the same time.”
De Salvo said optimized neuromorphic hardware is a promising solution for future very-low-power cognitive systems that would extend well beyond the IoT.
“New technologies, such as advanced CMOS, 3D technologies, emerging resistive memory, and silicon photonics, combined with brain-inspired paradigms such as spike coding and spike-timing-dependent plasticity, offer tremendous potential to mimic the way knowledge is generated and processed in the human brain,” she said.
De Salvo said work on the brain, from the emergence of connectionism to new imaging techniques, has helped researchers better understand how neural networks “could provide models for brain-based technologies.”
She noted that the convergence of miniaturization, wireless connectivity, increased data-storage capacity, and data analytics has placed the IoT at the epicenter of profound social, business, and political change.
She noted significant improvements in the performance and applications of machine learning, driven by the vast stores of images, video, audio, and text available on the Internet. These data have in turn substantially improved learning and training approaches and algorithms, while the increased computational power of computers, including the parallel processing of neural networks, has offset the slowdown of Moore's Law.
According to De Salvo, deep learning is currently the most popular area of machine learning.
“Today, machine learning applications match, or even exceed, expert human performance on tasks such as image recognition or speech recognition,” said De Salvo. “Other tasks that were considered extremely difficult in the past, such as understanding natural language or playing complex games, have also been successfully addressed.”
Future applications will demand even more analysis, better understanding of the environment, and more intelligence, and machine learning algorithms will need more computing power to become ubiquitous.
Edge and endpoint devices
“Bringing intelligence to the edge or to the endpoints means processing data as close as possible to where it is collected, allowing systems to make certain operational decisions semi-autonomously,” said De Salvo.
Running learning algorithms locally and in real time will be essential for many applications, from landing drones to navigating driverless cars.
De Salvo said the latency incurred by sending data to the cloud could lead to devastating results.
“Privacy also requires that key data never leave the user's device, while still allowing the transmission of high-level information generated by local neural-network algorithms,” she said.
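This pattern, where raw data stays on the device and only high-level results are transmitted, can be sketched as follows. Everything here (the stand-in classifier, the threshold, the message format) is a hypothetical illustration, not a system from De Salvo's talk:

```python
# Illustrative sketch of edge processing: raw data never leaves the
# device; only a compact, high-level result is ever transmitted.
# The classifier and threshold below are hypothetical placeholders.

def classify_locally(frame):
    """Stand-in for an on-device neural network: returns (label, confidence)."""
    brightness = sum(frame) / len(frame)
    return ("bright", 0.9) if brightness > 0.5 else ("dark", 0.8)

def edge_node(frame, confidence_threshold=0.75):
    """Decide locally; only a high-level summary is eligible for upload."""
    label, confidence = classify_locally(frame)
    if confidence >= confidence_threshold:
        return {"label": label, "confidence": confidence}  # high-level info only
    return None  # uncertain: handle locally rather than upload the raw frame

print(edge_node([0.9, 0.8, 0.7]))  # the raw frame itself stays on the device
```

The key design point is that the return value contains no raw sensor data, which addresses both the privacy and the bandwidth concerns raised above.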
De Salvo warned that deployments of millions of cameras, for example, would require local data analysis, because sending everything to the cloud would likely cause bandwidth and communication problems.
“We need new concepts and new technologies that can bring artificial intelligence closer to the edge and the endpoints,” she said.
“The main design goal in distributed applications spanning multiple hierarchical levels (as in the brain) is to find an overall optimum between performance and energy consumption,” said De Salvo. “This imperative demands a holistic approach to research that reshapes the technology stack (from device to application).”
According to De Salvo, this process is already under way: companies interested in embedded applications are developing specialized platforms capable of running machine-learning algorithms on embedded hardware.
She noted that riding Moore's Law and applying hardware/software co-design have produced impressive performance improvements, with power budgets down to a few watts.
To optimize energy efficiency, several research groups have focused on hardware designs with convolutional neural network (CNN) accelerators, De Salvo noted. Off-chip memories such as DRAM significantly increase power consumption, but programmable accelerators for mobile applications consuming less than 300 μW have been demonstrated.
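A back-of-envelope calculation shows why off-chip DRAM dominates accelerator energy budgets. The per-access energies below are rough, commonly cited figures for roughly 45-nm CMOS; they are illustrative assumptions, not numbers from De Salvo's talk:

```python
# Why off-chip DRAM dominates CNN accelerator energy: compare fetching
# one layer's weights from DRAM vs. a small on-chip SRAM.
# Per-access energies are rough, commonly cited ~45 nm figures
# (illustrative assumptions, not measurements from the article).
DRAM_ACCESS_PJ = 640.0   # one 32-bit off-chip DRAM access
SRAM_ACCESS_PJ = 5.0     # one 32-bit access to a small on-chip SRAM

weights = 100_000        # weights of a modest conv layer, fetched once per frame

dram_energy_uj = weights * DRAM_ACCESS_PJ * 1e-6  # picojoules -> microjoules
sram_energy_uj = weights * SRAM_ACCESS_PJ * 1e-6
print(f"DRAM: {dram_energy_uj:.0f} uJ, SRAM: {sram_energy_uj:.1f} uJ "
      f"({DRAM_ACCESS_PJ / SRAM_ACCESS_PJ:.0f}x)")
```

Under these assumptions each DRAM fetch costs over a hundred times more energy than an on-chip access, which is why accelerator designs work so hard to keep weights and activations on-chip.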
According to De Salvo, integrating intelligence into low-power IoT endpoints that support applications such as habitat monitoring and medical surveillance will be much harder than bringing intelligence to conventional edge devices on the network.
“Most connected devices are wireless sensor nodes comprising microcontrollers, wireless transceivers, sensors, and actuators,” she said. “The power requirements of these systems are critical, less than 100 μW for normal workloads, because these devices often run for years on limited power sources or a single battery.”
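A quick sanity check shows how the quoted 100-μW budget relates to multi-year battery life. The battery figures are illustrative, datasheet-style assumptions, not numbers from the article:

```python
# How long does a single battery last at the <100 uW budget quoted above?
# Battery parameters are illustrative assumptions (an AA-class lithium
# primary cell), not figures from De Salvo's talk.
capacity_mah = 3000.0     # assumed cell capacity
voltage_v = 1.5           # assumed nominal cell voltage
avg_power_w = 100e-6      # the 100 uW average-power budget

energy_j = capacity_mah / 1000.0 * 3600.0 * voltage_v   # mAh -> joules
lifetime_years = energy_j / avg_power_w / (3600 * 24 * 365)
print(f"{lifetime_years:.1f} years")
```

At 100 μW such a cell lasts roughly five years, which is consistent with the claim that these nodes must run for years on a single battery; exceed the budget by 10x and the same cell dies in months.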
According to De Salvo, researchers inspired by the human brain, whose computing power and energy efficiency remain unmatched, are now pursuing radically different approaches in neuromorphic systems.
“They implement bio-inspired architectures in optimized neuromorphic hardware to allow a direct correspondence between the hardware and the learning algorithm,” she said. These architectures include spike coding, which encodes neuron values as pulses or spikes rather than as analog or digital values, and spike-timing-dependent plasticity (STDP), a bio-inspired algorithm that enables unsupervised learning.
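The two ideas named above can be sketched in a few lines. These are assumed textbook forms (simple rate coding and the classic exponential STDP rule), not Leti's implementation:

```python
import math

# Minimal sketch of two bio-inspired primitives:
#  - spike coding: a value becomes a train of discrete spike times
#  - STDP: a synapse strengthens when the pre-synaptic spike precedes
#    the post-synaptic one, and weakens otherwise -- no labels needed.

def rate_to_spikes(value, window_ms=100.0, max_rate_hz=100.0):
    """Encode a value in [0, 1] as evenly spaced spike times (rate coding)."""
    n_spikes = int(value * max_rate_hz * window_ms / 1000.0)
    return [i * window_ms / n_spikes for i in range(n_spikes)] if n_spikes else []

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    """Classic exponential STDP weight update for one spike pair."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post: causal pair, potentiate
        w += a_plus * math.exp(-dt / tau_ms)
    else:        # post fired first (or simultaneously): depress
        w -= a_minus * math.exp(dt / tau_ms)
    return max(0.0, min(1.0, w))  # keep the weight bounded in [0, 1]

spikes = rate_to_spikes(0.5)                    # 5 spikes in a 100 ms window
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pair: weight grows
```

Because the update depends only on relative spike timing, no labeled data is required, which is what makes STDP attractive for unsupervised on-device learning.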
The intelligence and efficiency of the human brain are closely tied to its extremely dense 3D interconnectivity: there are about 10,000 synapses per neuron and billions of neurons in the human cerebral cortex. “The hierarchical structure of the cortex follows specific patterns, with vertical arrangements, or microcolumns, in which local data is processed by specialized subcortical structures, and laminar connections, which promote communication between domains, forming the hierarchy.
“Based on these considerations, it is clear that emerging 3D technologies, such as through-silicon vias (TSVs) and monolithic 3D integration, also known as CoolCube, will play a key role in energy-efficient neuromorphic hardware.”
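The density figures above imply a staggering parameter count. A quick calculation, using the article's 10,000 synapses per neuron and an assumed, commonly cited estimate of about 16 billion cortical neurons, shows why conventional memory hierarchies struggle to match the brain's connectivity:

```python
# Rough scale of the cortical "wiring" described above. The 10,000
# synapses/neuron figure is from the article; ~16 billion cortical
# neurons is an assumed, commonly cited estimate.
neurons = 16e9
synapses_per_neuron = 1e4
synapses = neurons * synapses_per_neuron          # ~1.6e14 connections

bytes_per_weight = 4                              # assume 32-bit weights
storage_pb = synapses * bytes_per_weight / 1e15   # petabytes just for weights
print(f"{synapses:.1e} synapses -> {storage_pb:.2f} PB of 32-bit weights")
```

Storing one 32-bit weight per synapse would already take over half a petabyte, before accounting for the massively parallel access the brain achieves, which is the motivation for dense 3D integration.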
De Salvo also described the silicon technologies important for this hardware development, naming resistive memory (ReRAM), fully depleted silicon-on-insulator (FD-SOI), and silicon photonics.
“Thanks to its low-power design capability, FD-SOI technology is an excellent candidate for neuromorphic hardware,” she said. In deep-learning architectures, high-performance reconfigurable digital processors based on 28-nm FD-SOI have already demonstrated power consumption of 50 mW, thanks to an optimized data-movement strategy and the use of FD-SOI back-biasing techniques.
De Salvo also noted that a large multi-core neuromorphic processor called Dynap-SEL, based on 28-nm FD-SOI, has been demonstrated.
“New materials to interface devices with living cells and tissues, new design architectures to reduce power consumption, system-level data extraction and management, and secure communication are the next areas where intensive development will take place,” said De Salvo.
“Implantable brain-inspired microdevices, acting as intelligent neuroprosthetics and bio-hybrid systems, represent a new era in interdisciplinary brain-repair strategies that combine biological and technological solutions, probably through artificial intelligence.”