Combination of facial movements on a 3D talking head
Proceedings Computer Graphics International

Abstract

Facial movements play an important role in interpreting spoken conversations and emotions. There are several types of movements, such as conversational signals, emotion displays, etc. We call these channels of facial movements. Realistic animation of these movements would improve the realism and liveliness of the interaction between humans and computers using embodied conversational agents. To date, no appropriate methods have been proposed for integrating all these facial movements. We propose in this paper a scheme for combining facial movements on a 3D talking head. First, we concatenate the movements in the same channel to generate smooth transitions between adjacent movements. This combination only applies to individual muscles. The movements from different channels are then blended, taking into account the resolution of conflicting muscles.

1 Introduction

Facial movements play an important role in interpreting spoken conversations and emotions. They occur continuously during social interactions and conversations. They include lip movements when talking, conversational signals, emotion displays, and manipulators to satisfy biological needs. Unfortunately, when and how a movement appears and disappears, and how co-occurrent movements are combined, has received little study. In addition, the problem of overlaying and blending facial movements in time, and the way felt emotions are expressed in facial activity during speech, has not received much attention []. We concentrate on the dynamic aspects of facial movements and the combination of facial expressions in different channels that are responsible for different tasks.

In the field of embodied agents, facial animation has received quite a lot of attention. Realistic animation of faces would improve the realism and liveliness of the interaction between human and machine. To create realistic facial animation, many 3D face models have been proposed; see [] for a summary. Many talking faces have been developed; examples include [], [], [] and []. These systems combine facial movements by simply adding them together, without taking into account the resolution of conflicting muscles. Significant attention has been devoted to visual speech [5, 19]. Some systems are also able to generate facial expressions as conversational signals during speech [2, 24]. However, no appropriate methods have been proposed for integrating all these facial movements.

The activity of human facial muscles is far from simply additive. A typical example is smiling while speaking. The Zygomatic Major and Minor muscles contract to pull the corners of the lips outward, resulting in a smile. However, activating the Zygomatic Major and Minor muscles together with the lip funneler Orbicularis Oris would create an unnatural movement. The activation of a muscle may require the deactivation of other muscles []. Depending on the priority of the tasks to be performed on the face, appropriate muscles are selected for activation. In most cases, the visual speech has higher priority than the smile. The smile may also have higher priority than the visual speech when the subject is too happy to utter the speech naturally.
In our system, we distinguish six channels. Atomic movements within a channel occur sequentially, although they may overlap each other at their beginning and ending. Movements from different channels can happen simultaneously. This classification is based on the function of the movements; it is similar to that of Pelachaud et al. [].

Conversational signals accompany punctuation marks such as a comma or an exclamation mark. They are used to help the interaction between the speaker and the listener, or to provide feedback. The generation of conversational signals can be done by analyzing the text [] or the speech []. We have proposed a fuzzy rule based system to generate emotion displays from emotions []. Gaze and head movements serve communicative functions during conversation; head movements are also used to replace verbal content. Manipulators are movements that satisfy biological requirements of the face; in our system, we consider eye blinking to wet the eyes as a manipulator.

Our talking face takes as input the text to be pronounced. A simple example of marked up text looks like this: I like it very much. From the text input, the text-to-phoneme module [] generates phoneme sequences, which are used to synthesize speech []. They are also used to generate lip movements when talking. Facial movements are then combined in two stages: the first concatenates the movements within each channel, and the latter combines the movements from all channels, taking into account the muscle conflict resolution.

The 3D face model for the talking head is discussed in Section 3, which also presents a summary of conflicting muscles on the face. The generation of facial movements is described in Section 4. Section 5 explains how facial movements inside a channel are combined, while Section 6 describes the combination of movements from different channels.
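The bookkeeping behind this channel scheme can be sketched in code. This is an illustrative sketch only: the class, the channel names, and the unit-activity envelope are our assumptions, not the paper's implementation. It shows stage one, merging the (possibly overlapping) movements inside a single channel per muscle.

```python
# Illustrative sketch (names are assumptions, not from the paper) of the
# channel/movement bookkeeping. Stage 1 merges movements inside one
# channel; stage 2 (cross-channel blending with conflict resolution)
# is handled separately.
from dataclasses import dataclass

CHANNELS = [  # six channels; the exact labels are assumptions
    "visual_speech", "conversational_signal", "emotion_display",
    "gaze", "head_movement", "manipulator",
]

@dataclass
class Movement:
    channel: str
    muscle: str      # deformation parameter the movement drives
    t_start: float   # T_sm, seconds
    t_end: float     # T_em, seconds

    def active(self, t: float) -> bool:
        return self.t_start <= t <= self.t_end

def channel_activity(movements, channel, muscle, t):
    """Stage 1: movements in one channel occur in sequence but may
    overlap at their ends; merge them per muscle by taking the
    strongest contribution at time t (unit activity assumed here)."""
    acts = [1.0 for m in movements
            if m.channel == channel and m.muscle == muscle and m.active(t)]
    return max(acts, default=0.0)
```

In this sketch two adjacent movements that overlap simply hand over smoothly because the merged activity never exceeds either envelope's peak.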
These movements are random rather than repeated at a fixed rate as in []. The result is displayed on a 3D face model to create the final animation in synchronization with the synthesized speech.

[Figure: Overview of the system]

Lip movements are generated from the text that is to be spoken by the talking head. The text is converted to phoneme segments (phonemes with temporal information: starting and ending time). The phonemes are converted to corresponding visemes. Each viseme is equipped with a set of dominance functions for the parameters participating in the articulation of the speech segment. We use the dominance functions from [] for each viseme segment.

3 Our 3D face model

Ours is a simple muscle-based 3D face model that realizes both of the following objectives: realistic expressions and real-time animation on a regular personal computer. The face model, which is not too complicated so as to keep the animation real-time, allows high-quality and realistic facial expressions. The face is equipped with a muscle system that produces realistic deformation of the facial surface, handles multiple muscle interactions correctly, and produces bulges and wrinkles in real time. The muscles are responsible for visual speech (lip movements), facial expressions, and movements to accentuate or emphasize speech. The muscles are shown in Table 1. Most of them involve the mouth region; others include the opening and closing of the eyelids, the Frontalis Medialis and the nose wrinkling muscles. Some of them are known from electromyography (EMG) studies to occur in distinct phases [].
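The dominance-function scheme for visemes can be illustrated with a minimal sketch in the Cohen–Massaro style of coarticulation modeling. The exponential dominance shape and all constants below are assumptions for illustration; the paper takes its actual dominance functions from the cited work.

```python
# Minimal Cohen–Massaro-style sketch of viseme blending with dominance
# functions. The exponential form, the rate, and the segment format are
# assumptions; the paper uses dominance functions from its cited source.
import math

def dominance(t, center, magnitude=1.0, rate=4.0):
    """Dominance of a viseme over a parameter peaks at the segment
    center and decays exponentially with temporal distance."""
    return magnitude * math.exp(-rate * abs(t - center))

def blend(t, segments):
    """Each segment is (center_time, target_value) for one parameter.
    The realized value is the dominance-weighted average of targets,
    so adjacent visemes influence each other where they overlap."""
    weights = [dominance(t, c) for c, _ in segments]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * v for w, (_, v) in zip(weights, segments)) / total
```

Because every segment's dominance is nonzero everywhere, the blended trajectory transitions smoothly between viseme targets instead of jumping at segment boundaries.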
Basically, each facial movement in our system is defined as a triple (M, T_sm, T_em), where M describes the deformation parameters of the face []; T_sm and T_em are the starting and ending times of the movement, respectively. A single facial movement is described as a function of time: its activity passes through an attack (onset), sustain (apex) and release (offset) phase. Based on an apex duration D_am, the onset duration D_om determines how long the facial movement takes to appear, and the offset duration D_rm determines how long the facial movement takes to disappear; this will be discussed in Section 5. We also have parameters for eye and head movements.

However, in the face, not every muscle can contract at the same time. For example, the simple additive combination of the contractions of the lip funneler and the smiling muscles would result in an unnatural movement. Thus, some muscle actions require the deactivation of other muscles, as noted by Ekman and Friesen [].

Lip movements when talking: in this model, a lip movement corresponding to a speech segment is represented as a viseme segment. Its dominance over the vocal articulators increases and decreases over time. Adjacent segments will have overlapping dominance functions, with different dominance in the onset and offset phases of the parameter activity.
Essa [] used exponential curves to fit the onset and offset portions of each deformation parameter. Following this approach, we derived functions to describe the onset and the offset portions of the parameter activity.

[Figure 4. The activity of a facial movement]

5 Combination of movements in the same channel

This section describes the combination of facial movements other than lip movements. Movements in the same channel occur in sequence; however, they can be specified to overlap each other. When there are two overlapping movements, we create the transition from one movement to the next: for every two subsequent movements, the activity of a parameter p in the combined movement is described as a function of the activities of the two movements.

[Figure 5. Combination of the Jaw Rotation of two movements in the same channel]

6 Combination of movements in different channels

To combine the movements from different channels, the activities of the parameters are combined by taking the maximum values. At a certain time, when there is a conflict between parameters in different animation channels, the parameters involved in the movement with higher priority will dominate those of the lower priority movement at that time. The activity of the dominated muscle around that time is also adjusted so that the parameter cannot activate or release too fast. As an example of conflicting muscles, the Orbicularis Oris muscle involved in speech conflicts with the Zygomatic major muscle and has higher priority. When the Orbicularis Oris is activated at time 3, the Zygomatic major is inhibited. The activity of the Zygomatic major before that time is adjusted so that it does not release too fast, which would create an unnatural movement.

[Figure: The activity of the Zygomatic major]
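The cross-channel rule can be sketched as follows. Note one simplification: the paper adjusts the dominated muscle's activity *before* the conflict so it releases gradually (a look-ahead), while this sketch applies a causal rate limit at each frame instead. The conflict table entry and the release rate are assumptions.

```python
# Sketch of the cross-channel combination rule: per muscle, take the
# maximum over channels; on a conflict, the higher-priority muscle
# dominates and the dominated muscle is released gradually
# (rate-limited) rather than cut off instantly. The conflict pair and
# max_release_rate are assumptions for illustration.
CONFLICTS = {("orbicularis_oris", "zygomatic_major")}  # (winner, loser)

def resolve(frames, dt, max_release_rate=2.0):
    """frames: list of dicts mapping muscle -> activity, one per time
    step of length dt. Returns frames with conflicts resolved and
    releases smoothed so no muscle drops faster than the rate limit."""
    prev = {}
    out = []
    for acts in frames:
        acts = dict(acts)
        for winner, loser in CONFLICTS:
            if acts.get(winner, 0.0) > 0.0:
                acts[loser] = 0.0            # dominated muscle inhibited
        for muscle, a in acts.items():
            p = prev.get(muscle, 0.0)
            # do not let activity fall faster than the release rate
            acts[muscle] = max(a, p - max_release_rate * dt)
        prev = acts
        out.append(acts)
    return out
```

With a smile (Zygomatic major) running when speech (Orbicularis Oris) starts, the smile's activity ramps down over several frames instead of snapping to zero, which is the unnatural movement the rule is meant to avoid.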