Motion#

Motion actions include movements, or sets of movements, that have a specific meaning. They are typically recognized through computer vision and can be generated by interactive avatars. At the moment we distinguish the following modalities for both the bot and the user: face, gesture, posture and position.

Many of these modalities are governed by the Override modality policy: the action server is expected to handle multiple concurrent actions by temporarily overriding the currently running action with the new action that has been started. A concrete example: the IM starts a PostureBotAction(posture="attentive") action (the avatar maintains an attentive posture). Two seconds later the IM starts a PostureBotAction(posture="listening") action. The action server executes the listening posture action, overriding the “attentive” posture (the avatar appears to be listening). Once the listening posture action is stopped, the avatar goes back to the “attentive” posture (the overridden action is resumed).
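
A minimal sketch of this sequence in Python; the event envelope (the `type` and `action_uid` fields) is an assumption made for illustration, not a normative part of this specification:

```python
import uuid

def start_posture_event(posture: str) -> dict:
    # Illustrative event envelope; the field names are assumptions.
    return {
        "type": "StartPostureBotAction",
        "action_uid": str(uuid.uuid4()),
        "posture": posture,
    }

attentive = start_posture_event("attentive")  # t = 0 s: avatar holds "attentive"
listening = start_posture_event("listening")  # t = 2 s: overrides "attentive"

# Stopping "listening" makes the action server resume the overridden
# "attentive" posture.
stop_listening = {
    "type": "StopPostureBotAction",
    "action_uid": listening["action_uid"],
}
```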

Posture Bot Action#

Instruct the bot to assume a pose. A pose is never finished by the system; this is in contrast to Gesture actions, which have a limited lifetime and are “performed” with a clear start and end time dictated by the gesture. Poses can be implemented by the interactive system in different ways. For interactive avatar systems, poses can change the posture of the avatar. For chatbot systems, this could change a bot indication icon (e.g., the Siri assistant).

StartPostureBotAction(posture: str)#

The bot should start adopting the specified posture.

Parameters:
  • posture (str) – Natural language description (NLD) of the posture. The availability of postures depends on the interactive system. Postures should be expressed hierarchically so that interactive systems that provide less nuanced postures can fall back onto higher-level postures (see the sketch after this list). The following base postures need to be supported by all interactive systems supporting this action: “idle”, “attentive”

  • ... – Additional parameters/payload inherited from StartBotAction().
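
The hierarchical fallback described for the posture parameter could look as follows; the mapping from nuanced postures to the two guaranteed base postures is hypothetical:

```python
BASE_POSTURES = {"idle", "attentive"}  # guaranteed by every interactive system

# Hypothetical mapping from more nuanced posture NLDs to base postures.
FALLBACKS = {
    "listening": "attentive",
    "thinking": "attentive",
    "relaxed": "idle",
}

def resolve_posture(nld: str) -> str:
    """Return a posture this system can render, falling back if needed."""
    if nld in BASE_POSTURES:
        return nld
    return FALLBACKS.get(nld, "idle")  # unknown postures fall back to "idle"

print(resolve_posture("listening"))  # -> "attentive"
```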

PostureBotActionStarted()#

The bot has attained the posture.

Parameters:

... – Additional parameters/payload inherited from BotActionStarted().

StopPostureBotAction()#

Stop the posture. Postures have an infinite lifetime, so unless the IM calls the Stop action, the bot will keep the posture indefinitely.

Parameters:

... – Additional parameters/payload inherited from StopBotAction().

PostureBotActionFinished()#

The posture was stopped.

Parameters:

... – Additional parameters/payload inherited from BotActionFinished().
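
Taken together, a posture action's lifecycle is Start, Started, Stop, Finished. A sketch using the same illustrative envelope as in the section introduction:

```python
# The four posture events in order; the field names are assumptions.
events = [
    {"type": "StartPostureBotAction", "action_uid": "a1", "posture": "attentive"},
    {"type": "PostureBotActionStarted", "action_uid": "a1"},   # posture attained
    {"type": "StopPostureBotAction", "action_uid": "a1"},      # IM releases the posture
    {"type": "PostureBotActionFinished", "action_uid": "a1"},  # posture stopped
]

for event in events:
    print(f"{event['type']:<26} uid={event['action_uid']}")
```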

Gesture Bot Action#

Instruct the bot to make a gesture. In contrast to PostureBotAction, bot gestures have a limited “lifetime” and are used to provide an immediate effect. Bot gestures can be implemented by the interactive system in different ways. For interactive avatar systems, gestures should be performed by the avatar. For chatbot systems, gestures can be expressed by emojis, images, or GIFs.

StartGestureBotAction(gesture: str)#

The bot should start making a specific gesture.

Parameters:
  • gesture (str) – Natural language description (NLD) of the gesture. The availability of gestures depends on the interactive system. If a system supports this action, the following base gestures need to be supported: “affirm”, “negate”, “attract”

  • ... – Additional parameters/payload inherited from StartBotAction().

GestureBotActionStarted()#

The bot has started to perform the gesture.

Parameters:

... – Additional parameters/payload inherited from BotActionStarted().

StopGestureBotAction()#

Stop the gesture. All gestures have a limited lifetime and finish on their own; gestures are meant to accentuate a certain situation or statement. For example, in an interactive avatar system an “affirm” gesture could be implemented by a one-second animation clip in which the avatar nods twice. The IM can use this action to stop a gesture before it finishes naturally (see the sketch below).

Parameters:

... – Additional parameters/payload inherited from StopBotAction().
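
A minimal action-server-side sketch of a gesture with a natural lifetime; the clip lengths are illustrative, and the asyncio.Event stands in for an incoming StopGestureBotAction:

```python
import asyncio

CLIP_SECONDS = {"affirm": 1.0, "negate": 1.0, "attract": 2.0}  # illustrative

async def run_gesture(gesture: str, stop: asyncio.Event) -> str:
    """Play a gesture clip; it ends on its own unless stopped first."""
    clip = asyncio.create_task(asyncio.sleep(CLIP_SECONDS.get(gesture, 1.0)))
    stopper = asyncio.create_task(stop.wait())
    done, pending = await asyncio.wait({clip, stopper},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    # Either way, the action server would emit GestureBotActionFinished.
    return "stopped early" if stopper in done else "finished naturally"

async def main() -> None:
    stop = asyncio.Event()  # would be set by an incoming StopGestureBotAction
    print(await run_gesture("affirm", stop))  # -> "finished naturally"

asyncio.run(main())
```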

GestureBotActionFinished()#

The gesture was performed.

Parameters:

... – Additional parameters/payload inherited from BotActionFinished().

Gesture User Action#

The system detected a user gesture.

GestureUserActionStarted()#

The interactive system detects the start of a user gesture. Note: the time the system detects the gesture might be different from when the user started to perform the gesture.

Parameters:

... – Additional parameters/payload inherited from UserActionStarted().

GestureUserActionFinished(gesture: str)#

The user performed a gesture.

Parameters:
  • gesture (str) – Human-readable name of the gesture. The availability of gestures depends on the interactive system.

  • ... – Additional parameters/payload inherited from UserActionFinished().
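
As an illustration of how an IM might consume this event, the rule below mirrors an affirming user gesture with the bot's own affirm gesture; the envelope fields are assumptions, as in the earlier sketches:

```python
import uuid

def on_gesture_user_action_finished(event: dict) -> dict | None:
    """Return a bot action to start in response, or None (illustrative rule)."""
    if event.get("gesture") == "affirm":
        return {
            "type": "StartGestureBotAction",  # illustrative envelope
            "action_uid": str(uuid.uuid4()),
            "gesture": "affirm",
        }
    return None

response = on_gesture_user_action_finished(
    {"type": "GestureUserActionFinished", "gesture": "affirm"})
print(response["gesture"])  # -> "affirm"
```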

Position Bot Action#

Instruct the bot to hold a new position. This is a state action (like PostureBotAction): when the action is stopped or finished, the bot returns to the position it held before.

StartPositionBotAction(position: str)#

The bot needs to hold a new position.

Parameters:
  • position (str) –

    Specify the position the bot needs to move to and maintain.

    Availability of positions depends on the interactive system. Positions are typically structured hierarchically into a base position and position modifiers (e.g., “off center”); see the sketch after this list.

    Minimal NLD set:

    The following base positions are supported by all interactive systems (that support this action):

    center: Default position of the bot

    left: Bot should be positioned to the left (from the point of view of the bot)

    right: Bot should be positioned to the right (from the point of view of the bot)

  • ... – Additional parameters/payload inherited from StartBotAction().
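
A sketch of composing such a position NLD from a base position and modifiers; the composition rule (modifiers before the base) is an illustrative assumption:

```python
def position_nld(base: str, *modifiers: str) -> str:
    """Compose a position NLD from a base position and optional modifiers."""
    assert base in {"center", "left", "right"}, "unsupported base position"
    return " ".join([*modifiers, base])

start_event = {
    "type": "StartPositionBotAction",  # illustrative envelope
    "position": position_nld("left", "off center"),
}
print(start_event["position"])  # -> "off center left"
```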

PositionBotActionStarted()#

The bot has started to transition to the new position.

Parameters:

... – Additional parameters/payload inherited from BotActionStarted().

PositionBotActionUpdated(position_reached: str)#

The bot has arrived at the position and maintains it for the remainder of the action.

Parameters:
  • position_reached (str) – The position the bot has reached.

  • ... – Additional parameters/payload inherited from BotActionUpdated().

StopPositionBotAction()#

Stop holding the position. The bot will return to the position it held before the action started. Position actions have an infinite lifetime, so unless the IM calls the Stop action, the bot maintains the position indefinitely. Alternatively, PositionBotAction actions can be overridden, since the modality policy is Override.

Parameters:

... – Additional parameters/payload inherited from StopBotAction().

PositionBotActionFinished()#

The bot has shifted back to the position it held before this action. This might be a neutral position or the position of a PositionBotAction that was overridden by this action and now regains the “focus” (see the sketch below).

Parameters:

... – Additional parameters/payload inherited from BotActionFinished().
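
A minimal sketch of this resume behavior, assuming the action server keeps overridden positions on a stack:

```python
class PositionModality:
    """Action-server-side stack of overridden positions (illustrative)."""

    def __init__(self) -> None:
        self.stack = ["neutral"]  # bottom entry: the bot's neutral position

    def start(self, position: str) -> None:
        self.stack.append(position)  # a new action overrides the current one

    def finish(self) -> str:
        self.stack.pop()             # the top action stopped or finished
        return self.stack[-1]        # the position that regains the "focus"

modality = PositionModality()
modality.start("left")
modality.start("off center right")
print(modality.finish())  # -> "left": the overridden action resumes
```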

Facial Gesture Bot Action#

Instruct the bot to make a rapid, brief facial expression that lasts at most a few seconds, like a quick smile, a momentary frown, or a brief wink. In a chatbot system this could generate an emoji as part of a text message. For interactive avatars this changes the facial animation of the avatar for a short while (e.g., to wink or smile).

StartFacialGestureBotAction(facial_gesture: str)#

The bot should start making a facial gesture.

Parameters:
  • facial_gesture (str) –

    Natural language description (NLD) of the facial gesture or expression.

    Availability of facial gestures depends on the interactive system.

    Minimal NLD set:

    The following gestures should be supported by every interactive system implementing this action: “smile”, “laugh”, “frown”, “wink” (see the sketch after this list)

  • ... – Additional parameters/payload inherited from StartBotAction().
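
For the chatbot case mentioned in the section introduction, the minimal facial-gesture set could be rendered as emojis; the emoji choices below are illustrative:

```python
EMOJI = {  # illustrative mapping from base facial gestures to emojis
    "smile": "🙂",
    "laugh": "😄",
    "frown": "🙁",
    "wink": "😉",
}

def render_facial_gesture(facial_gesture: str) -> str:
    """Return an emoji for the gesture, or an empty string if unsupported."""
    return EMOJI.get(facial_gesture, "")

print(render_facial_gesture("wink"))  # -> 😉
```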

FacialGestureBotActionStarted()#

The bot has started to perform the facial gesture.

Parameters:

... – Additional parameters/payload inherited from BotActionStarted().

StopFacialGestureBotAction()#

Stop the facial gesture or expression. All facial gestures have a limited lifetime and finish on their own (e.g., in an interactive avatar system a “smile” gesture could be implemented by a one-second animation clip where some facial bones are animated). The IM can use this action to stop an expression before it finishes naturally.

Parameters:

... – Additional parameters/payload inherited from StopBotAction().

FacialGestureBotActionFinished()#

The facial gesture was performed.

Parameters:

... – Additional parameters/payload inherited from BotActionFinished().

Facial Gesture User Action#

The system detected a short facial gesture, e.g. a frown, a smile, or a wink, that differs from the user’s neutral expression. Detected gestures or small expressions are performed by the user over a short period of time.

FacialGestureUserActionStarted(expression: str)#

The interactive system detected the start of a facial gesture performed by the user.

Parameters:
  • expression (str) –

    Natural language description (NLD) of the facial expression.

    Detected facial expressions depend on the capabilities of the interactive system.

    Minimal NLD set:

    The following expressions should be supported by every interactive system implementing this action: “smile”, “laugh”, “frown”, “wink”

  • ... – Additional parameters/payload inherited from UserActionStarted().
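
As an illustration, an IM might react to this event by mirroring a detected smile with a bot smile; the envelope fields are assumptions, as in the earlier sketches:

```python
import uuid

def on_facial_gesture_user_action_started(event: dict) -> dict | None:
    """Mirror a detected user smile with a bot smile (illustrative rule)."""
    if event.get("expression") == "smile":
        return {
            "type": "StartFacialGestureBotAction",  # illustrative envelope
            "action_uid": str(uuid.uuid4()),
            "facial_gesture": "smile",
        }
    return None

print(on_facial_gesture_user_action_started(
    {"type": "FacialGestureUserActionStarted", "expression": "smile"}))
```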

FacialGestureUserActionFinished()#

The user performed a facial gesture.

Parameters:

... – Additional parameters/payload inherited from UserActionFinished().