Sometimes it can be tough to remember a person’s face—don’t worry though, a computer’s got your back.
Researchers from the University of Toronto Scarborough have found a way to digitally reconstruct images seen by test subjects, based entirely on electroencephalography (EEG) data. The technique was developed by Dan Nemrodov, a member of Adrian Nestor’s visual recognition lab.
For the study, subjects were fitted with EEG equipment, which rests on a person’s head, and shown pictures of faces. The brain activity recorded while they viewed the faces was then used to digitally recreate the image in each subject’s mind, using a method based on machine learning algorithms.
“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing,” explained Nemrodov. “We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.”
“What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail,” said Nestor.
The research was published in the journal eNeuro and demonstrates that EEG is capable of this kind of image reconstruction. The image below shows how closely a reconstructed image matched its target, compared to any other face or stimulus with the same expression. The faces on the left have neutral expressions; those on the right are happy.
This kind of reconstruction builds on neuroimaging work pioneered by the lab’s leader, Nestor. He had recreated facial images before using functional magnetic resonance imaging (fMRI), but this is the first time it has been done using EEG. EEG also offers a much more precise view of how an image develops over time.
“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale,” said Nemrodov. “So we can see with very fine detail how the percept of a face develops in our brain using EEG.”
fMRI data typically provides finer spatial detail, but EEG is considerably more common, portable and inexpensive. In fact, InteraXon, a well-known and successful Toronto tech company, uses EEG brain-sensing technology in its wellness wearable, Muse.
The software that recognizes and draws the faces relies on neural networks trained to recognize patterns in data, in this case, a massive database of face pictures. Once the software could recognize the characteristics of a human face, it was trained to associate those traits with specific patterns of EEG brain activity. When the sensors picked up those patterns, the software rendered them back out as an image.
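To make the idea concrete, here is a minimal sketch of that kind of pipeline. It is not the study’s actual code: the array sizes, the simulated data, and the use of a simple ridge regression in place of the researchers’ trained networks are all illustrative assumptions. The structure is the same, though, learn a mapping from EEG activity patterns to a compact set of face features, then project a new recording into that face space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each face is summarized by a few "face space"
# features (e.g. principal components of face images), and each viewing
# trial yields a vector of EEG measurements across sensors and time.
n_trials, n_eeg_features, n_face_features = 200, 64, 8

# Simulated training data standing in for the real dataset: EEG
# recordings paired with the face features of the image shown.
true_mapping = rng.normal(size=(n_eeg_features, n_face_features))
eeg = rng.normal(size=(n_trials, n_eeg_features))
faces = eeg @ true_mapping + 0.1 * rng.normal(size=(n_trials, n_face_features))

# Learn a linear EEG -> face-feature mapping via ridge regression
# (a deliberately simple stand-in for the study's trained models).
ridge = 1.0
W = np.linalg.solve(eeg.T @ eeg + ridge * np.eye(n_eeg_features),
                    eeg.T @ faces)

# "Reconstruction": project a new EEG recording into face space. In the
# real system, this feature vector would then be rendered as an image.
new_eeg = rng.normal(size=n_eeg_features)
reconstructed = new_eeg @ W
print(reconstructed.shape)  # one value per face feature
```

The key design point the article describes survives even in this toy version: the heavy lifting happens in training, where brain activity is tied to face features; reconstruction itself is just applying that learned mapping to new EEG data.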
The research also found that it takes our brains about 170 milliseconds, or 0.17 seconds, to form a good representation of a face we have seen before.
There are many ways this kind of research could be further applied, especially given how inexpensive EEG is. It could help those who cannot communicate verbally to convey what a person’s face looks like, or, as the research expands its capabilities, any kind of object. It could even help visually represent what someone is imagining or remembering.
It could also help law enforcement obtain better images of people who commit crimes. The findings essentially unveil the subjective content of the mind, offering a completely new way to explore how mental images are formed.
The University of Toronto has been involved in some interesting studies as of late. Earlier this month, three researchers found a six per cent chance that the recently-launched-into-space Tesla Roadster will collide with Earth over the next million years.