
Reading feelings is difficult. We humans perceive our subtle emotions in facial expressions and gestures, and we hear the many nuances in between. Computers cannot yet read people as well as a sensitive fellow human can. In return, they cope with huge amounts of data. "This opens up completely new possibilities for analysis," says Olga Perepelkina, research director of the young company Neurodata Lab.

The company, headquartered in Florida with a research department in Moscow, recently opened a branch in Root in the Canton of Lucerne that specializes in emotion analysis of film and video material. For Tages-Anzeiger.ch/Newsnet, Neurodata Lab analyzed 805 speeches that the members of the Federal Council delivered this year before the National Council and the Council of States, between 72 and 274 appearances per person.

Example image, the main emotion: disgust. Image: PD

Almost in real time, the software analyzes emotions using self-learning algorithms and neural networks. "We rely on a multi-channel approach," says Perepelkina: it analyzes the voice as well as the facial expressions, gestures, and body movements of the people filmed. "As a result, the hit rate is much higher than with the simple analyses common today." The system also learns quickly: "The more training data we have, the better the results." With a new algorithm that compares the shading of individual pixels, Neurodata Lab even wants to measure the pulse of the people filmed, without any additional measuring devices.
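Neurodata Lab has not disclosed how its pulse algorithm works; the technique it describes, tracking tiny frame-to-frame shading changes in skin pixels, is generally known as remote photoplethysmography. A minimal sketch of that general idea in Python, assuming an already-cropped face region and the NumPy library (everything below is illustrative, not the company's code):

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """frames: (n_frames, height, width, 3) RGB video of a skin/face patch."""
    # Blood volume pulses modulate skin color slightly; the green channel
    # carries the strongest photoplethysmographic signal.
    green = frames[..., 1].reshape(len(frames), -1).mean(axis=1)
    green = green - green.mean()                       # remove the DC offset
    spectrum = np.abs(np.fft.rfft(green))              # magnitude spectrum
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)   # bin frequencies in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)             # 42-240 bpm heart-rate band
    peak_hz = freqs[band][np.argmax(spectrum[band])]   # dominant pulsation
    return peak_hz * 60.0                              # beats per minute

# Toy check: 10 s of synthetic 30-fps "video" pulsing at 1.2 Hz (72 bpm).
t = np.arange(300) / 30.0
frames = np.full((300, 8, 8, 3), 128.0)
frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(estimate_pulse_bpm(frames, fps=30.0)))     # -> 72
```

Real footage would additionally require face detection, motion compensation, and far cleaner video, which is why, as Perepelkina notes below, the material has to be very good.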

The system only understands English so far

Perepelkina does not conceal that such analyses have numerous sources of error. For pulse analyses, for example, the video material has to be very good. Emotion analysis, in turn, has socio-cultural pitfalls: so far, the system has been trained only on English. Subtle, culturally based communication differences could also lead to inaccuracies, as could individual behavior patterns. In addition, facial expressions and gestures cannot be interpreted the same way for all people.

Example image, the main emotion: grief. Image: PD

Computer-based emotion analysis is far more than just a gimmick, stresses Olga Perepelkina. "There are already many possibilities today, and many more will be added in the future." Robots, for instance, should learn what emotions mean and how to communicate naturally and "humanly". Emotion translators could emerge for people with deficits in this area. Cars could one day analyze whether everything is all right with the driver and, if necessary, pull the emergency brake. It is conceivable that employers will use the analysis technique in job interviews. The advertising industry is already experimenting diligently: thanks to the new possibilities, it is easy to document how people react to a product.

Example image, the main emotion: joy. Image: PD (Tamedia editorial staff)

Created: 05.12.2018, 18:32