Research

UCLA 3D prints an artificial “brain” that could breed new autonomous vehicles

A team from the UCLA Samueli School of Engineering in Los Angeles, California, has applied 3D printing to create a “seeing” device modeled on the human brain.

Appearing as a series of neatly stacked plastic plates, this device is capable of analyzing image data to identify objects such as items of clothing and handwritten characters.

By developing technologies based on the device, the scientists may have discovered a simpler way of teaching artificially intelligent (AI) products, such as autonomous vehicles and “smart assistants”, to perceive the world around them.

UCLA’s 3D printed artificial neural network. Photo via UCLA Samueli / Ozcan Research Group

Like a “maze of glass and mirrors”

Sounds spooky? AI is already a bigger part of our lives than we think. Depositing checks at the bank has relied on computer vision for some time: when the machine takes the pay-in slip, it reads the amount written on it, usually by means of cameras programmed to identify the numbers written on the paper.

In self-driving cars, the ability to “see” road signs is also handled by cameras, working in sync with complex LiDAR systems that scan the road and surrounding obstacles.

Like LiDAR, UCLA’s 3D printed device relies on the diffraction of light to see. However, the device is much less complex and doesn’t need power to run.


Each plate in the UCLA device is patterned with artificial neurons in the form of tiny pixels, each of which diffracts light in a different way.

When looking at an object, the device determines what it can see by the way light travels through the plates and what comes out on the other side. UCLA principal investigator Aydogan Ozcan explains, “This is intuitively like a very complex maze of glass and mirrors,”

“The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting.”
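To make the “maze” picture concrete, here is a minimal sketch, in Python with NumPy, of how such a stack of diffractive plates can be simulated. It is not code from the UCLA study: the plate count, pixel grid, terahertz wavelength, spacings and random phase patterns are all illustrative assumptions (the real patterns are learned during design), but it shows the idea of light being reshaped by each plate and the class being read off from wherever most of the light exits.

import numpy as np

def propagate(field, wavelength, dx, distance):
    # Free-space diffraction between plates via the angular spectrum method.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

rng = np.random.default_rng(0)
N = 64                 # "neurons" (pixels) per side on each plate -- illustrative
wavelength = 0.75e-3   # metres; terahertz illumination assumed here
dx = 0.4e-3            # pixel pitch in metres -- illustrative
gap = 30e-3            # spacing between plates in metres -- illustrative

# Each plate imprints a phase delay per pixel; random phases stand in for
# the patterns that would be obtained by training before 3D printing.
plates = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

def see(image):
    # Send an input intensity pattern through the "maze" of plates and
    # record where the light lands on the detector plane.
    field = image.astype(complex)
    for plate in plates:
        field = propagate(field, wavelength, dx, gap)
        field = field * plate
    field = propagate(field, wavelength, dx, gap)
    return np.abs(field) ** 2

def classify(detector_intensity, n_classes=10):
    # Toy read-out: split the detector into strips, one per class, and pick
    # the strip that collected the most light.
    strips = np.array_split(detector_intensity, n_classes, axis=1)
    return int(np.argmax([s.sum() for s in strips]))

digit = np.zeros((N, N)); digit[20:44, 28:36] = 1.0   # a crude "1" as input
print(classify(see(digit)))

In the physical device, of course, nothing is computed electronically at inference time: the printed plates themselves perform the equivalent of the loop above as light passes through them.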

What can it see?

In experiments, the device has proven able to correctly identify handwritten characters and a ladies’ shoe.

How the UCLA device “sees” a sandal. Image via UCLA Samueli / Ozcan Research Group

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” adds Ozcan. “This optical artificial neural network device is intuitively modeled on how the brain processes information,”

“It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”

“All-optical machine learning using diffractive deep neural networks” is published online in the journal Science. It is co-authored by Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Yi Luo, Mona Jarrahi and Aydogan Ozcan.

Featured image shows UCLA’s 3D printed artificial neural network. Photo via UCLA Samueli / Ozcan Research Group