We are one AI and brainwave experiment away from X-ray vision

The future is here, and it’s just as cool and creepy as you’d hoped.

X-ray vision has always been pretty far down my list of desired superpowers, well behind time travel and mind reading. But it may be closer to reality than the other options, and I’ll take what I can get. Researchers at the University of Glasgow are working to combine artificial intelligence and human brain waves to identify objects hidden around a corner, objects a person can’t see directly. The system, dubbed “ghost imaging,” will be presented at this month’s Optica Imaging and Applied Optics Congress.

“We believe this work provides ideas that could one day be used to bring human and artificial intelligence together,” Daniele Faccio, a professor of quantum technologies at the School of Physics and Astronomy at the University of Glasgow, told Optica. “The next steps in this work range from expanding the ability to provide 3D in-depth information to finding ways to combine multiple information from multiple viewers simultaneously.”

The research is part of non-line-of-sight imaging, according to New Atlas, a branch of technology that lets people see objects hidden from direct view. Some approaches beam laser light onto a surface and reconstruct the hidden scene from the light that scatters back, which is about as close to a Superman power as real physics gets.

But Faccio’s experiment worked like this: light patterns were projected onto a cardboard cutout of an object. A person wearing an electroencephalography (EEG) headset to track their brain waves could see only the diffuse light scattered onto a wall, not the patterns themselves. The headset read signals from the person’s visual cortex, which were fed into a computer that used AI to reconstruct the object from those brain waves. And it was successful: in about a minute, the researchers were able to reconstruct 16 × 16-pixel images of simple objects that the viewer couldn’t see because of the obstacle.
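The basic idea behind ghost imaging can be sketched in a few lines. The following is a minimal illustration, not the Glasgow team’s actual code: it uses random projected patterns and a simulated brightness signal standing in for the EEG response, then reconstructs the hidden object by correlating each signal with its pattern. All names and the simulated signal are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden 16 x 16 object (a simple square), matching the reported resolution.
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0

# Random binary illumination patterns. In the experiment these are projected
# onto the cutout; the viewer sees only the diffuse light on the wall.
n_patterns = 4000
patterns = rng.integers(0, 2, size=(n_patterns, 16, 16)).astype(float)

# Stand-in for the measured brain response: the total light reaching the
# viewer, i.e. the overlap between each pattern and the hidden object.
signals = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Classic ghost-imaging reconstruction: correlate the (mean-subtracted)
# signals with the (mean-subtracted) patterns, pixel by pixel.
recon = np.tensordot(
    signals - signals.mean(),
    patterns - patterns.mean(axis=0),
    axes=([0], [0]),
) / n_patterns
```

Pixels covered by the hidden object correlate strongly with the brightness signal, so they come out bright in `recon` while background pixels stay near zero, which is why no detector ever needs a direct view of the object.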

“This is one of the first times computational imaging has been performed by using the human visual system in a neurofeedback loop that modifies the imaging process in real time,” Faccio said. “While we could have used a standard detector instead of the human brain to detect the wall’s diffuse signals, we wanted to explore methods that could one day be used to augment human capabilities.”