The new AI chip design is stackable and reconfigurable, allowing existing sensors and neural network processors to be swapped out or built upon. The chip consists of alternating layers of sensing and processing elements that can communicate with each other.
- KEY POINTS:
- MIT researchers have created a modular chip that can be easily reconfigured to take on new features.
- Instead of traditional wiring, the chip uses LEDs to help its different components communicate.
- Experts suggest the design will require extensive testing before it can be used in the real world.
Imagine a more sustainable future where mobile phones, smartwatches, and other wearables don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto the device’s internal chip — like LEGO bricks incorporated into an existing assembly. Such a reconfigurable chip design could keep devices up-to-date while reducing our electronic waste.
MIT engineers have now taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.
The design includes alternating layers of sensing and processing elements along with light-emitting diodes (LEDs) that allow the layers of the chip to communicate optically. Other modular chip designs use conventional wiring to transfer signals between layers. Such complex connections are difficult, if not impossible, to break and reconnect, so such stackable structures cannot be reconfigured.
The MIT design uses light instead of physical wires to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped or stacked, for example, to add new sensors or update processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call it a LEGO-like reconfigurable AI chip because it has unlimited expansion potential based on a combination of layers.”
The researchers are eager to apply the design to edge computing devices—self-contained sensors and other electronics that operate independently of any central or distributed resources, such as supercomputers or cloud-based computing.
“As we enter the era of the Internet of Things based on sensor networks, demand for multifunctioning edge-computing devices will increase dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide the high versatility of edge computing in the future.”
To light the way
The team’s design is currently configured to perform basic image-recognition tasks. It does so through a layering of image sensors, LEDs, and processors made of artificial synapses: arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
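The on-chip classification described above can be pictured as a crossbar dot product: voltages applied to the rows of a memristor array produce column currents proportional to the weighted sum of the inputs. The sketch below is a minimal software analogy, not the team’s hardware; the array size, conductance range, and input encoding are all invented for illustration.

```python
import numpy as np

def crossbar_currents(voltages, conductances):
    """Column currents of a memristor crossbar: I_j = sum_i V_i * G_ij
    (Ohm's law per device, Kirchhoff's current law per column)."""
    return voltages @ conductances

# Hypothetical 64-input, 3-class array with random placeholder values.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(64, 3))  # conductances (siemens)
V = rng.uniform(0.0, 1.0, size=64)         # encoded input signal (volts)

I = crossbar_currents(V, G)        # one current per output class
predicted_class = int(np.argmax(I))  # the largest current wins
```

In a real array, the conductances `G` would be set by training the memristors; here they are random placeholders, so the prediction is meaningless except as a demonstration of the readout.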
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each trained to recognize a certain letter — in this case, M, I, or T. Whereas a conventional approach would relay a sensor’s signals to a processor through physical wiring, the team instead fabricated an optical system between each sensor and its artificial synapse array to enable communication between the layers without requiring a physical connection.
“Other chips are physically wired through metal, which makes them difficult to rewire and redesign, so you’d need to build a new chip if you wanted to add a new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack the chips and connect them the way we want.”
The team’s optical communications system consists of coupled photodetectors and LEDs, each patterned with tiny pixels.
The photodetectors constitute an image sensor for receiving data, and the LEDs transmit that data to the next layer. As a signal (for example, an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array that classifies the signal based on the pattern and strength of the incoming LED light.
The team built a single chip with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is outfitted with three image-recognition “blocks,” each containing an image sensor, an optical communication layer, and an artificial synapse array for classifying one of three letters: M, I, or T. They then flashed a pixelated image of random letters at the chip and measured the electrical current that each neural network array generated in response. (The larger the current, the greater the likelihood that the image is the letter that the particular array is trained to recognize.)
The team found that the chip correctly classified clear images of each letter but was less able to distinguish between blurry images, for example between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found that the chip then correctly identified the images.
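As a rough software analogy of the experiment above (this is not the MIT hardware or data; the 5×5 letter templates and all names are invented), each “block” scores how strongly an input image matches its letter, the largest score stands in for the largest electrical current, and a swappable `denoise` stage mimics replacing the chip’s processing layer:

```python
import numpy as np

LETTERS = "MIT"
# Invented 5x5 binary templates; the real arrays are trained memristors.
TEMPLATES = {
    "M": ["10001", "11011", "10101", "10001", "10001"],
    "I": ["01110", "00100", "00100", "00100", "01110"],
    "T": ["11111", "00100", "00100", "00100", "00100"],
}

def to_array(rows):
    return np.array([[int(c) for c in row] for row in rows], dtype=float)

def block_current(image, template):
    # Match strength stands in for the electrical current each
    # neural-network array generates in response to the image.
    return float((image * template).sum())

def classify(image, denoise=None):
    if denoise is not None:
        image = denoise(image)  # optional, swappable processing layer
    currents = {l: block_current(image, to_array(TEMPLATES[l])) for l in LETTERS}
    return max(currents, key=currents.get)  # largest "current" wins

# A crude stand-in denoiser: threshold away low-level background noise.
threshold_denoiser = lambda img: (img > 0.5).astype(float)

for letter in LETTERS:
    assert classify(to_array(TEMPLATES[letter])) == letter
```

Swapping `denoise` for a different function changes the chip’s behavior without touching the recognition blocks, which is the modularity the layer-swap demonstrated.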
“We showed stackability, replaceability, and the ability to put a new function in the chip,” notes MIT postdoc Min-kyu Song.
The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications being limitless.
“We can add layers to a cellphone’s camera so that it can recognize more complex images, or build these into health care monitors that can be embedded in wearable electronic skin,” offers Choi, who previously worked with Kim to develop a “smart” skin for monitoring vital signs.
Another idea, he says, is for modular chips to be built into electronics so that consumers can choose to build with the latest sensor and processor “bricks.”
“We can build a common chip platform, and each layer can be sold separately, like in a video game,” says Jeehwan Kim. “We can build different types of neural networks, such as for image or voice recognition, and let the customer choose what they want, and add it to an existing chip like LEGO.”
This research was partially supported by the Ministry of Trade, Industry, and Energy (MOTIE) of South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.