
Friday, September 30, 2011

The Human Mind Can Be Read

Who says the human mind cannot be read? In recent experiments, scientists from the University of California, Berkeley, were able to reconstruct images recorded in the brain.

The visualization effort was performed using functional magnetic resonance imaging (fMRI) machines, which scan human brain activity. Through computer modeling, the scan data were collected and translated so that movement, shape, and color could be read. The researchers then matched the results of this translation against a database of video clips circulating on the Internet.

The computer modeling shows how the brain records the images taken in by the eye. A reconstructed still showed a form very similar to the original YouTube video clip, although still cryptic. Blurry as they are, the reconstructed images can be sequenced to produce a video.
Reconstructed images recorded in the brain. (Picture from: http://gizmodo.com)

"This particular modeling framework can reconstruct dynamic events that are created in the brain," said neuroscientists from the University of California, Jack Gallant. "Opening a window into the movie in our minds."

According to him, this success in reading the human brain opens up new knowledge for neuroscience. Until now, researchers have had no access to what is recorded in the brain. Imagine tapping into the mind of a patient who has fallen into a coma, or watching someone's dream on YouTube.

An 18-million-second picture palette
After recording the subjects' brain activity, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of the brain activity each clip would be expected to evoke. From all these videos, the software picked the one hundred clips whose predicted brain activity most closely matched the activity recorded while the subject watched, and combined them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.

Think about those 18 million seconds of random videos as a painter's color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he's seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares that reaction to the brain reactions predicted for the 18-million-second palette, and picks what most closely matches. Then it combines those clips into a new one that approximates what the subject was seeing. Notice that the 18 million seconds of video are not what the subject is seeing; they are random clips used only to compose the reconstructed image.
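
To make the palette analogy concrete, here is a minimal sketch of the matching-and-averaging step in Python with NumPy. This is not the researchers' actual code: the function name reconstruct, the array shapes, the correlation scoring, and the weighted averaging are all simplifying assumptions, and it presumes an encoding model has already predicted the brain response each candidate clip would evoke.

```python
import numpy as np

# Hypothetical sketch of the "palette matching" idea described above.
# Assumed inputs (not the real study's data structures):
#   observed_response   : fMRI activity measured while the subject watched something
#   predicted_responses : model-predicted activity for each candidate YouTube clip
#   clip_frames         : one representative frame per candidate clip

def reconstruct(observed_response, predicted_responses, clip_frames, top_k=100):
    """Pick the top_k clips whose predicted brain activity best matches the
    observed activity, then blend their frames into one reconstruction."""
    # Correlate the observed response with each clip's predicted response.
    obs = observed_response - observed_response.mean()
    preds = predicted_responses - predicted_responses.mean(axis=1, keepdims=True)
    scores = preds @ obs / (np.linalg.norm(preds, axis=1) * np.linalg.norm(obs) + 1e-12)

    # Keep the best-matching clips (the "painter's palette") ...
    best = np.argsort(scores)[-top_k:]

    # ... and average their frames, weighted by match quality, into one blurry image.
    weights = scores[best].clip(min=0)
    weights = weights / (weights.sum() + 1e-12)
    return np.tensordot(weights, clip_frames[best], axes=1)


# Tiny synthetic example: 1,000 candidate clips, 500 voxels, 32x32 frames.
rng = np.random.default_rng(0)
predicted = rng.standard_normal((1000, 500))
frames = rng.random((1000, 32, 32))
observed = predicted[42] + 0.1 * rng.standard_normal(500)  # noisy view of clip 42
image = reconstruct(observed, predicted, frames)
print(image.shape)  # (32, 32)
```

In this toy setup the reconstruction is simply a weighted average of the best-matching clips, which is why the result comes out blurry: it is a blend of many roughly similar images rather than the original frame itself.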

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.

In this other video you can see how the process worked for the three experimental subjects. In the top left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below it you can see the movie "extracted" from their brain activity. It shows that this technique gives consistent results regardless of what is being watched, or who is watching. The three rows of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.

In further experiments, the researchers are attempting to access semantic information from more sophisticated visual areas of the human brain. If this can be done, they will be able to produce a clearer picture. *** [SCIENCEDAILY | ANTON WILLIAM | KORAN TEMPO 3658]