
Thursday, December 27, 2012

The Emotion Intelligence Technology Revealed

Recent research at the University of Rochester, USA, reveals a computer program that can measure human feelings through sound. The technology, called emotion intelligence technology, is capable of reading a person's emotional state with greater accuracy than earlier systems. It would be great if smartphones could sense moods – especially when they've dropped a call three times in five minutes.

As reported by Eurekalert, the research was described by the team on December 5 at the IEEE Workshop on Spoken Language Technology in Miami, Florida. The app analyzes twelve features of speech, such as pitch and volume, and uses them to identify one of six emotional states.

Though there are other emotion-evaluating systems, the University of Rochester approach is surprisingly simple. Instead of studying a speaker’s gestures and expression or trying to figure out the meaning of words or the structure of conversation, the Rochester system deals strictly with the inherent voice cues that people use to pick up on emotion. It’s a bit like how you can figure out the mood of someone on the phone from Kenya even though you don’t speak a word of Swahili.

The team has built a prototype app from the system that displays a happy or sad face depending on the speaker’s mood. "The research is still in its early days," said Heinzelman, "but it is easy to envision a more complex app that could use this technology for everything from adjusting the colors displayed on your mobile to playing music fitting to how you're feeling after recording your voice." 
The MATLAB GUI prototype. (Picture from: http://www.gizmag.com/)
This technology can recognize a user's feelings without needing to analyze the meaning of their words. "We actually used recordings of actors reading out the date of the month – it really doesn't matter what they say, it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering.

Heinzelman further explained that the program analyzes 12 features of the human voice, such as pitch and volume, and uses this analysis to identify one of six emotions in a sound recording. The study achieved an accuracy of 81 percent, compared with previous studies that reached only about 55 percent.
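The article does not publish the team's actual algorithm, but the general idea – extract acoustic features from a voice recording, then assign the nearest emotion label – can be illustrated with a minimal Python sketch. The sketch below (using NumPy) computes two of the features named above, pitch (via autocorrelation) and volume (RMS energy), from a synthetic tone standing in for a real voice frame, and classifies it against two made-up emotion centroids; the centroid values and the nearest-centroid rule are assumptions for illustration, not the Rochester team's method.

```python
import numpy as np

def frame_features(signal, sr):
    """Extract two of the voice features mentioned in the article:
    volume (RMS energy) and pitch (autocorrelation peak)."""
    rms = np.sqrt(np.mean(signal ** 2))
    # full autocorrelation; corr[k] is the correlation at lag k
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # skip lags shorter than 2 ms (above ~500 Hz) to avoid the zero-lag peak
    min_lag = int(sr * 0.002)
    peak_lag = min_lag + np.argmax(corr[min_lag:])
    pitch = sr / peak_lag  # Hz
    return np.array([pitch, rms])

def classify(features, centroids):
    """Toy nearest-centroid classifier over (pitch, volume) vectors."""
    dists = {name: np.linalg.norm(features - c)
             for name, c in centroids.items()}
    return min(dists, key=dists.get)

# a 128 ms, 220 Hz tone standing in for a recorded voice frame
sr = 16000
t = np.arange(2048) / sr
voice = 0.5 * np.sin(2 * np.pi * 220 * t)

# hypothetical emotion centroids: (pitch in Hz, RMS volume)
centroids = {"sad": np.array([150.0, 0.2]), "happy": np.array([250.0, 0.5])}
print(classify(frame_features(voice, sr), centroids))  # → happy
```

A real system would use many more features (the study uses 12) and a classifier trained on labeled recordings, but the pipeline – featurize, then compare against learned emotion profiles – is the same shape.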

This research was used to develop a prototype application, which displays a happy or sad face after recording and analyzing the user's voice. *** [GIZMAG | EKA | FROM VARIOUS SOURCES | RIF | PIKIRAN RAKYAT 20122012]