Emotional contact with digital characters in interactive storytelling.

What factors influence the level of emotional contact with story characters? What is the nature of this contact? And how can we use that nature to create engaging stories? This article addresses these questions in the context of storytelling with digital characters.

What kind of emotional contact do we experience when we watch an actor on stage? In that moment, besides being pulled into a great dramatic situation, we are captured by the actor's emotional play and ability to improvise. The most attractive thing in human dialogue is the level of empathy, the emotional feedback of the interlocutor. Emotions conveyed through live body performance and expressive speech are an essential component of the viewer's engagement.

The dream of improvisational theater, as we imagined it, was never fully realized. But it may come true for interactive storytelling with a new generation of digital characters. These characters can engage by building strong emotional contact with the viewer. To do so, they should possess the features prescribed by the dramatic context and be believable interlocutors.

Creating realistic, emotionally compelling characters is one of the central problems interactive storytelling has to solve. Attempts to solve it lead us to the main trends in #ComputerGeneratedImagery and #ArtificialIntelligence.

Creating Photorealistic Characters

#DigitalHumans

The #UncannyValley is a psychological effect that has become a symbol of viewers' disbelief in interactive storytelling with digital characters. Even considering the great achievements of 3D computer graphics in video games, creating truly lifelike character animation remains a challenging task.

The Digital Humans series of projects reveals revolutionary possibilities for real-time animation of photorealistic characters. The development of Siren and of a virtual replica of Mike Seymour (MeetMike) is the result of a collaboration between 3Lateral, Epic Games, Tencent, Vicon and Cubic Motion. The projects are based on a workflow consisting of the following steps (a rough sketch of such a pipeline follows the list):

  • 4D volumetric facial scanning with 50 FACS (Facial Action Coding System) poses, which allows each facial muscle group to be isolated;

  • Markerless facial motion capture performed with a facial mocap rig by Vicon;

  • Emotion recognition of the actor using the computer vision technology CM Live by Cubic Motion;

  • Facial animation, where all the data received from CM Live is applied to the virtual character in real time by Rig Logic, the facial rig system created by 3Lateral.
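The internals of these tools are proprietary, so the sketch below only illustrates the overall data flow of such a pipeline in plain Python. Everything in it (the action-unit subset, the `solve_frame` stand-in for the CM Live solver, and the rig-control mapping standing in for Rig Logic) is a hypothetical illustration, not the vendors' actual APIs.

```python
# Hypothetical per-frame facial pipeline: solver output in FACS-style action
# units is mapped onto rig controls and applied to the character in real time.
from dataclasses import dataclass
from typing import Dict

# Illustrative subset of FACS action units isolated during the 4D scan session.
AU_NAMES = {1: "inner_brow_raiser", 12: "lip_corner_puller", 26: "jaw_drop"}

@dataclass
class RigControl:
    """One blendshape/joint control on the character's facial rig."""
    name: str
    weight: float = 0.0  # normalized 0..1

def solve_frame(video_frame) -> Dict[int, float]:
    """Stand-in for the computer-vision solver: returns AU -> activation (0..1)."""
    # A real solver would track facial features in the frame; here we fake it.
    return {1: 0.2, 12: 0.8, 26: 0.35}

def retarget_to_rig(au_activations: Dict[int, float]) -> Dict[str, RigControl]:
    """Map solved action units onto rig controls (the role a facial rig system plays)."""
    rig = {}
    for au, value in au_activations.items():
        name = AU_NAMES.get(au, f"au_{au}")
        rig[name] = RigControl(name=name, weight=max(0.0, min(1.0, value)))
    return rig

def apply_to_character(rig: Dict[str, RigControl]) -> None:
    """Stand-in for pushing control weights to the real-time renderer."""
    for control in rig.values():
        print(f"{control.name}: {control.weight:.2f}")

if __name__ == "__main__":
    frame = object()  # placeholder for a captured video frame
    apply_to_character(retarget_to_rig(solve_frame(frame)))
```

The point of the sketch is only the shape of the loop: every frame, the solver's FACS-like output is normalized and pushed onto the rig controls of the rendered character.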

#AndySerkis and the #OsirisBlackProject are the next step in the evolution of Digital Humans, using the volumetric capture, reconstruction and compression technology of 3Lateral's Meta Human Framework©. Besides the already familiar Rig Logic, this project applied a new 4D scanning technology that allows real-time animation to be created without specialist intervention (normally, captured data needs additional clean-up). The new 4D scanning technique noticeably changed the facial motion capture workflow: instead of a raw version, a final mocap previsualization is available right on the set.

The Epic Unreal team not only showed real-time rendering of Andy Serkis's performance; the captured data was also retargeted onto the Osiris Black model. This became possible thanks to digital DNA technology (the digital DNA architecture is part of Rig Logic), which allows face morphing on the fly.
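How the digital DNA correspondence works internally is not public. As a rough illustration only, the retargeting step can be imagined as a per-control correspondence table with calibration gains; the `DNA_MAP` table and all control names below are invented for the sketch.

```python
# Hypothetical retargeting: per-frame control weights solved on one performer's
# rig are re-expressed on a different character's rig via a correspondence table.
from typing import Dict, List

# Source-rig control -> (target-rig control, gain compensating facial proportions).
DNA_MAP = {
    "lip_corner_puller": ("smile_L_R", 0.9),
    "jaw_drop":          ("mouth_open", 1.1),
    "inner_brow_raiser": ("brow_up",    0.8),
}

def retarget_frame(source_weights: Dict[str, float]) -> Dict[str, float]:
    """Transfer one frame of animation from the source rig to the target rig."""
    target = {}
    for src_name, value in source_weights.items():
        if src_name in DNA_MAP:
            dst_name, gain = DNA_MAP[src_name]
            target[dst_name] = max(0.0, min(1.0, value * gain))
    return target

def retarget_clip(clip: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Retarget a whole captured performance, frame by frame."""
    return [retarget_frame(frame) for frame in clip]

if __name__ == "__main__":
    captured_clip = [{"jaw_drop": 0.4, "lip_corner_puller": 0.7}]
    print(retarget_clip(captured_clip))  # weights now expressed on the target rig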

Creating an emotional AI

#Mitsuku

For four years in a row, the ingenious chatbot #Mitsuku has won the Loebner Prize. Mitsuku, the most human-like of all existing bots, earned her title for qualities extraordinary in an artificial intelligence: a lack of built-in specialized knowledge and a subjectivity of her own.

Mitsuku was written in AIML (Artificial Intelligence Markup Language) without applying neural network technology. Steve Worswick, the creator of Mitsuku, has been working on the chatbot's development for 15 years. One of Mitsuku's key features is the variability of her answers, the ability to reply in the most “freestyle” kind of way.

For instance, take the weirdest question: “Can you eat a tree?” (a question with no prewritten answer). Mitsuku is able to handle it by drawing on data about trees that already exists in her dataset: a tree is a plant, and plants are edible.
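Mitsuku's actual AIML categories are not reproduced here; the toy Python below only sketches the kind of fallback reasoning described above, where a question with no prewritten answer is resolved by chaining stored facts about the subject. The fact base and function names are assumptions made for illustration.

```python
# Toy fallback reasoning: answer "Can you eat X?" by walking "is-a" links
# in a tiny fact base until an edibility fact is found.
FACTS = {
    "tree":  {"isa": "plant"},
    "plant": {"edible": True},
}

def can_you_eat(subject: str) -> str:
    """Follow isa links from the subject until something is known to be edible."""
    current = subject.lower()
    visited = set()
    while current in FACTS and current not in visited:
        visited.add(current)
        entry = FACTS[current]
        if entry.get("edible"):
            return f"A {subject} is a kind of {current}, and {current}s are edible."
        current = entry.get("isa", "")
    return f"I don't know whether a {subject} can be eaten."

print(can_you_eat("tree"))  # -> "A tree is a kind of plant, and plants are edible."
```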

Steve didn't create Mitsuku for a practical purpose such as assisting, which in a way makes her all the more artistic. An assistant chatbot programmed to sell pizza will know everything about pizza, but it won't be able to answer any question unrelated to pizza making. Unlike other bots, even the smartest ones, Mitsuku doesn't have deep initial knowledge of any particular area, but she is able to interpret a question in a very individual manner.

Empathy. Sympathy. Compassion.

#SoulMachines
#IBMWatson
#BabyX

#SoulMachines unites specialists from different areas: AI researchers, biologists, psychologists, and artists. Bringing together such varied points of view has opened new perspectives on creating the most organic way of interaction between human and machine.

Inspired by the biological model of the human brain and its key sensory networks, the creators of Soul Machines developed a central virtual nervous system, the Human Computing Engine™. The system incorporates neural networks and machine learning.

In this project, the Human Computing Engine™ acts as an interface that opens up new facets of the interaction. Most valuable for empathetic understanding is the possibility of following the processes happening in the brain and nervous system of the digital human.

#BabyX is the most suitable illustration of the Human Computing Engine™ workflow: a fusion of the IBM® Watson™ Assistant conversational system with an interface where neural networks and machine learning are applied.

BabyX is an interactive, animated prototype of a virtual infant with animation generated in real time. The infant is able to learn new concepts, and the cognitive process itself can be observed as an algorithm on the Human Computing Engine™ interface. For instance, when the baby sees an object and hears its name, the user can watch how the connection between the signified and the signifier is formed. With the help of the interface, one can see not only the processes happening in the virtual brain model and central nervous system but also the motion of the facial muscles in real time.
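Soul Machines has not published the Human Computing Engine™ internals, so the following is only a toy associative model of the signifier and signified step described above: each co-occurrence of a sight and a spoken word strengthens the link between them until the name can be recalled from the sight alone. All class and method names are invented.

```python
# Toy Hebbian-style association between what is seen and what is heard.
from collections import defaultdict

class Associator:
    def __init__(self, learning_rate: float = 0.3):
        self.lr = learning_rate
        # association strength: seen object -> heard word -> weight in 0..1
        self.weights = defaultdict(lambda: defaultdict(float))

    def observe(self, seen: str, heard: str) -> None:
        """Co-occurrence of sight and sound strengthens their link."""
        w = self.weights[seen][heard]
        self.weights[seen][heard] = w + self.lr * (1.0 - w)

    def name_of(self, seen: str) -> str:
        """Recall the most strongly associated word for an object, if any."""
        candidates = self.weights.get(seen)
        if not candidates:
            return "(no association yet)"
        return max(candidates, key=candidates.get)

baby = Associator()
for _ in range(3):                  # the caregiver shows a ball and says "ball"
    baby.observe(seen="ball", heard="ball")
print(baby.name_of("ball"))         # -> "ball"
```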

The virtual infant is also able to learn the consequences of its own actions, using a set of outcomes that can be separated into positive (pleasant) and negative (unpleasant). For example, when pushing the green or the red button, the infant first tests the result of its own action. The knowledge of having experienced a positive or negative reaction is then processed and transformed into an ability to predict the result of that action.
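Again purely as an illustrative sketch, the trial-and-error loop described above can be modelled as simple value learning: each action's predicted value is nudged toward the reward actually experienced, and the infant then prefers the action with the higher prediction. The button names, learning rate and update rule are assumptions, not Soul Machines' implementation.

```python
# Toy value learning: try actions, record pleasant/unpleasant outcomes,
# then predict which action leads to the pleasant one.
def outcome(action: str) -> float:
    """The environment: the green button feels pleasant, the red one unpleasant."""
    return +1.0 if action == "press_green" else -1.0

values = {"press_green": 0.0, "press_red": 0.0}  # predicted value of each action
alpha = 0.5                                       # learning rate

# Exploration phase: test both actions a few times and update the predictions.
for _ in range(3):
    for action in values:
        reward = outcome(action)
        values[action] += alpha * (reward - values[action])

# Prediction phase: choose the action expected to lead to the pleasant outcome.
best = max(values, key=values.get)
print(values)  # e.g. {'press_green': 0.875, 'press_red': -0.875}
print(best)    # -> 'press_green'
```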
