The AsyncAPI version of UMIM is meant for use with Interaction Manager (IM) components that communicate through an asynchronous event bus.
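Every payload in this reference shares the same envelope fields (``type``, ``uid``, ``event_created_at``, ``source_uid``, ``tags``). Below is a minimal sketch of building that envelope and publishing it to a stream-specific topic; the helper names, the topic naming scheme, and the in-memory queue standing in for the event bus are assumptions, not part of the specification.

```python
# Minimal sketch, not part of the UMIM specification: build the common event
# envelope shared by every payload in this reference and publish it to a
# stream-specific topic. An in-memory asyncio.Queue stands in for the event bus;
# the helper names and the topic naming scheme are assumptions.
import asyncio
import json
import uuid
from datetime import datetime, timezone


def new_event_envelope(event_type: str, source_uid: str, **payload) -> dict:
    """Common fields (type, uid, event_created_at, source_uid, tags) plus the payload."""
    return {
        "type": event_type,
        "uid": str(uuid.uuid4()),
        "event_created_at": datetime.now(timezone.utc).isoformat(),
        "source_uid": source_uid,
        "tags": {},
        **payload,
    }


async def publish(bus: asyncio.Queue, stream_uid: str, event: dict) -> None:
    # A real system would publish to a per-stream topic on the event bus;
    # here the serialized event is simply enqueued together with the topic name.
    await bus.put((f"umim_events_{stream_uid}", json.dumps(event)))


async def main() -> None:
    bus: asyncio.Queue = asyncio.Queue()
    event = new_event_envelope(
        "PipelineAcquired", source_uid="im-1",
        stream_uid="stream-42", session_uid="sess-1", user_uid="user-1")
    await publish(bus, "stream-42", event)
    print(await bus.get())


asyncio.run(main())
```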
UMIM event stream specific to a unique device stream.
Unique identifier for the stream.
Accepts one of the following messages:
A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.
{
"type": "PipelineAcquired",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.
{
"type": "PipelineReleased",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.
{
"type": "PipelineUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
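As an illustration of how an event-based IM might consume the three pipeline events above, the following sketch keeps a registry of available pipelines keyed by ``stream_uid``; the class and method names are hypothetical.

```python
# Minimal sketch (assumption, not prescribed by the spec): an IM keeping a registry
# of available pipelines from PipelineAcquired / PipelineUpdated / PipelineReleased.
# The class and method names are hypothetical.
from dataclasses import dataclass


@dataclass
class Pipeline:
    stream_uid: str
    session_uid: str | None = None
    user_uid: str | None = None


class PipelineRegistry:
    def __init__(self) -> None:
        self._pipelines: dict[str, Pipeline] = {}

    def handle(self, event: dict) -> None:
        stream_uid = event["stream_uid"]
        if event["type"] == "PipelineAcquired":
            self._pipelines[stream_uid] = Pipeline(
                stream_uid, event.get("session_uid"), event.get("user_uid"))
        elif event["type"] == "PipelineUpdated" and stream_uid in self._pipelines:
            # A new session was started or a new user was identified on the same pipeline.
            pipeline = self._pipelines[stream_uid]
            pipeline.session_uid = event.get("session_uid") or pipeline.session_uid
            pipeline.user_uid = event.get("user_uid") or pipeline.user_uid
        elif event["type"] == "PipelineReleased":
            self._pipelines.pop(stream_uid, None)

    def active_streams(self) -> list[str]:
        return list(self._pipelines)


registry = PipelineRegistry()
registry.handle({"type": "PipelineAcquired", "stream_uid": "stream-42",
                 "session_uid": "sess-1", "user_uid": None})
print(registry.active_streams())  # ['stream-42']
```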
UMIM event stream specific to a unique device stream.
Unique identifier for the stream.
Accepts one of the following messages:
The system detects that the user has disengaged from the interactive system.
{
"type": "AttentionUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_engagement",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string"
}
The interactive system detects some level of engagement of the user.
{
"type": "AttentionUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_engagement",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"attention_level": "string"
}
The interactive system provides an update to the engagement level.
{
"type": "AttentionUserActionUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_engagement",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"attention_level": "string"
}
The custom action has finished its execution.
{
"type": "CustomBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string",
"custom_action_name": "string",
"results": {}
}
The execution of the custom bot action has started.
{
"type": "CustomBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"custom_action_name": "string",
"parameters": {}
}
Something happened during the execution of the custom action (if supported by the custom action).
{
"type": "CustomBotActionUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"custom_action_name": "string",
"updates": {}
}
An action has finished its execution. An action can finish either because it has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is indicated by the ``is_success`` attribute.
{
"type": "CustomUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string",
"custom_action_name": "string",
"results": {}
}
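All ``*ActionFinished`` payloads in this reference carry ``is_success``, ``was_stopped``, and ``failure_reason``. The sketch below shows one way, not prescribed by the specification, for an IM handler to distinguish a stop requested by the IM from natural completion or failure.

```python
# Minimal sketch (assumption, not prescribed by the spec): how an IM handler might
# branch on the fields shared by all *ActionFinished payloads in this reference.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("im")


def handle_action_finished(event: dict) -> None:
    action_uid = event["action_uid"]
    if event.get("was_stopped"):
        # Finished because the IM stopped it, not because it completed on its own.
        logger.info("action %s was stopped by the IM", action_uid)
    elif event.get("is_success"):
        logger.info("action %s completed, results=%s", action_uid, event.get("results"))
    else:
        logger.warning("action %s failed: %s", action_uid, event.get("failure_reason"))


handle_action_finished({"type": "CustomUserActionFinished", "action_uid": "a-1",
                        "is_success": True, "was_stopped": False, "results": {"value": 42}})
```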
The execution of the custom user action has started.
{
"type": "CustomUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"custom_action_name": "string",
"parameters": {}
}
A running action provides a (partial) result. Ongoing actions can provide partial updates on the current status of the action. An ActionUpdated should always update the payload of the action object and provide the type of update.
{
"type": "CustomUserActionUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"custom_action_name": "string",
"updates": {}
}
The interactive system acknowledges that the bot expectation is finished.
{
"type": "ExpectationBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The interactive system communicates to the IM that it is able to handle the expectation for the specified events. If the system can handle the expectation, it has to send out the ExpectationBotActionStarted event. Receiving the ActionStarted event does not come with any guarantees about how the expectation is handled, but it lets the IM know whether the system is capable of handling expectations at all. For expectations about events that are not supported by any Action Server in the interactive system, no ExpectationBotActionStarted event is sent out. If a system is not capable of handling certain bot expectations, the IM might stop communicating them.
{
"type": "ExpectationBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The bot has stopped actively waiting. Note that this action is only stopped on explicit request, by sending the StopExpectationSignalingBotAction event. Otherwise the action continues indefinitely.
{
"type": "ExpectationSignalingBotActionFinished ",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The bot has started actively waiting for an event on the specified modality.
{
"type": "ExpectationSignalingBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The facial gesture was performed.
{
"type": "FacialGestureBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The bot has started to perform the facial gesture.
{
"type": "FacialGestureBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
An action has finished its execution. An action can finish either because it has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is indicated by the ``is_success`` attribute.
{
"type": "FacialGestureUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string"
}
The execution of an action has started.
{
"type": "FacialGestureUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"expression": "string"
}
{
"type": "GestureBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_gesture",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The bot has started to perform the gesture.
{
"type": "GestureBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_gesture",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The user performed a gesture.
{
"type": "GestureUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_upper_body",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string",
"gesture": "string"
}
The interactive system detects the start of a user gesture. Note: the time the system detects the gesture might be different from when the user started to perform the gesture.
{
"type": "GestureUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_upper_body",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string"
}
Camera effect finished.
{
"type": "MotionEffectCameraActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_motion_effect",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
Camera effect started.
{
"type": "MotionEffectCameraActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_motion_effect",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.
{
"type": "PipelineUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
The bot shifted back to the original position before this action. This might be a neutral position or the position of any PositionBotAction overwritten by this action that now gains the “focus”.
{
"type": "PositionBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_position",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The bot has started to transition to the new position.
{
"type": "PositionBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_position",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The bot has arrived at the position and is maintaining that position for the entire action duration.
{
"type": "PositionBotActionUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_position",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"position_reached": "string"
}
{
"type": "PostureBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_posture",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The bot has attained the posture.
{
"type": "PostureBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_posture",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The interactive system detects the user's absence.
{
"type": "PresenceUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_presence",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string"
}
The interactive system detects the presence of a user in the system.
{
"type": "PresenceUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_presence",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string"
}
{
"type": "RestApiCallBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "web_request",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string",
"response": {}
}
{
"type": "RestApiCallBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "web_request",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The camera shot was stopped. The camera has returned to the shot it had before (either a neutral shot or the shot specified by any overwritten ShotCameraAction actions).
{
"type": "ShotCameraActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_shot",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
{
"type": "ShotCameraActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_shot",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
{
"type": "TimerBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "time",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
{
"type": "TimerBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "time",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
The bot utterance finished, either because the utterance has been delivered to the user or the action was stopped.
{
"type": "UtteranceBotActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string",
"final_script": "string"
}
Provides updated transcripts during a UtteranceBotAction. These events correspond to the time that a certain part of the utterance is delivered to the user. In an interactive system that supports voice output, these events should align with when the user hears the partial transcript.
{
"type": "UtteranceBotActionScriptUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"interim_script": "string"
}
The bot started to produce the utterance. This event should align as closely as possible with the moment in time the user receives the utterance. For example, in an Interactive Avatar system, the event is sent out by the Action Server once the text-to-speech (TTS) stream is sent to the user.
{
"type": "UtteranceBotActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
Whenever the interactive system detects a change in the utterance activity of the user, this event may be sent out by the interactive system. Utterance activity can relate to different events in an interactive system: for a chatbot the activity can relate to the typing speed of the user, whereas in a voice-enabled system activity reflects the user's voice activity. Utterance activity can typically be detected much faster than the end of an utterance. This event allows interaction designers to react to brief periods of no activity (e.g., silence for a voice bot) during a user utterance.

**Implementation guidance**

- ``action_updated_at``: The time stamp should match the time the user changed the utterance activity (e.g., when they became silent) as closely as possible. For most systems, activity detection will introduce a small delay. However, the timestamp ``action_updated_at`` should represent the moment in time the user changed activity, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload).
{
"type": "UtteranceUserActionActivityUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"activity": 0
}
The user utterance has finished.

**Implementation guidance**

- Since this event is sent out when the final transcript has been computed, it is typically delayed compared to the actual moment in time the user utterance stopped.
- ``action_finished_at``: The timestamp ``action_finished_at`` should represent the moment in time the user finished talking/typing, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload). Example: if an interactive system can detect both voice activity (VAD) and transcribe speech (ASR), the timestamp should correspond to the detected utterance end time from VAD and not be related to any delays that ASR processing introduces.
{
"type": "UtteranceUserActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"user_id": "string",
"final_transcript": "string"
}
Provides updated speaking intensity levels if the interactive system supports it.
{
"type": "UtteranceUserActionIntensityUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"intensity": 0
}
The user started to produce an utterance. The user could have started talking or typing, for example.

**Implementation guidance**

- This event should be sent out as soon as the system is able to detect the start of a user utterance. In an interactive system that supports voice activity detection (VAD), this should be sent out as soon as voice activity is detected.
- ``action_started_at``: The time stamp should match the time the utterance started as closely as possible. For most systems, voice activity detection will introduce a small delay. However, the timestamp ``action_started_at`` should represent the moment in time the user started talking/typing, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload).
{
"type": "UtteranceUserActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_started_at": "2019-08-24T14:15:22Z",
"user_id": "string"
}
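A minimal sketch of the timestamp guidance above: ``action_started_at`` is taken from the VAD-detected start of speech, while ``event_created_at`` records when the event object was created. The helper name is hypothetical.

```python
# Minimal sketch (assumption, not prescribed by the spec): populate action_started_at
# from the VAD-detected start of speech, while event_created_at records when the
# event object itself was created. The helper name is hypothetical.
import uuid
from datetime import datetime, timezone


def make_utterance_user_action_started(action_uid: str, user_id: str,
                                        vad_speech_start: datetime) -> dict:
    return {
        "type": "UtteranceUserActionStarted",
        "uid": str(uuid.uuid4()),
        # When this event was created (includes the detection delay).
        "event_created_at": datetime.now(timezone.utc).isoformat(),
        "source_uid": "speech-action-server",
        "tags": {},
        "action_uid": action_uid,
        "action_info_modality": "user_speech",
        "action_info_modality_policy": "replace",
        "action_info_lifetime": "limited",
        # When the user actually started talking, as reported by VAD.
        "action_started_at": vad_speech_start.isoformat(),
        "user_id": user_id,
    }


event = make_utterance_user_action_started(
    "a-1", "user-1", datetime(2019, 8, 24, 14, 15, 22, tzinfo=timezone.utc))
```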
Provides updated transcripts during a UtteranceUserAction.
{
"type": "UtteranceUserActionTranscriptUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"action_updated_at": "2019-08-24T14:15:22Z",
"user_id": "string",
"interim_transcript": "string",
"stability": 0
}
Whenever the user interacts directly with the choice presented in the scene but does not confirm or cancel the choice, a ChoiceUpdated event is sent out by the interactive system.
{
"type": "VisualChoiceSceneActionChoiceUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"current_choice": [
"string"
]
}
Whenever the user confirms or tries to abort the choice when interacting with the visual representation of the choice. Examples of this include clicking a “confirm” button or clicking on “close”.
{
"type": "VisualChoiceSceneActionConfirmationUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"confirmation_status": "confirm"
}
The choice action was stopped by the IM (no user action will cause the action to be finished by the Action Server).
{
"type": "VisualChoiceSceneActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string",
"final_choice": [
"string"
]
}
The system has started presenting the choice to the user.
{
"type": "VisualChoiceSceneActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
Whenever the user confirms or tries to abort the form input when interacting with the visual representation of the form. Examples of this include clicking a “confirm” button or clicking on “close”.
{
"type": "VisualFormSceneActionConfirmationUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"confirmation_status": "confirm"
}
Form action was stopped by the IM (no user action will cause the action to be finished by the Action Server).
{
"type": "VisualFormSceneActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string",
"final_inputs": [
{
"id": "string",
"value": "string",
"description": "string"
}
]
}
Whenever the user interacts directly with the form inputs presented in the scene but has not yet confirmed the input, an Updated event is sent out by the interactive system. This allows the IM to react to partial inputs: for example, if a user is typing an e-mail address, the bot could say "And now only the domain is missing" after the user has typed "@" in the form field.
{
"type": "VisualFormSceneActionInputUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"interim_inputs": [
{
"id": "string",
"value": "string",
"description": "string"
}
]
}
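Picking up the e-mail example above, this sketch reacts to ``interim_inputs``; the ``email`` field id and the ``say()`` callback (standing in for starting an UtteranceBotAction) are assumptions for illustration.

```python
# Minimal sketch (assumption, not prescribed by the spec) following the e-mail example
# above: react to interim_inputs of a VisualFormSceneActionInputUpdated event. The
# "email" field id and the say() callback (standing in for starting an
# UtteranceBotAction) are hypothetical.
def handle_form_input_updated(event: dict, say) -> None:
    for field in event.get("interim_inputs", []):
        if field.get("id") == "email" and field.get("value", "").endswith("@"):
            say("And now only the domain is missing.")


handle_form_input_updated(
    {"type": "VisualFormSceneActionInputUpdated",
     "interim_inputs": [{"id": "email", "value": "ada@", "description": "Your e-mail"}]},
    say=print,
)
```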
The system has started presenting the form to the user.
{
"type": "VisualFormSceneActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
Whenever the user confirms or tries to abort the visual information shown on the screen. Examples of this include clicking a “confirm” button or clicking on “close”.
{
"type": "VisualInformationSceneActionConfirmationUpdated",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_updated_at": "2019-08-24T14:15:22Z",
"bot_id": "string",
"confirmation_status": "confirm"
}
Information action was stopped by the IM (no user action will cause the action to be finished by the Action Server).
{
"type": "VisualInformationSceneActionFinished",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_finished_at": "2019-08-24T14:15:22Z",
"is_success": true,
"was_stopped": true,
"failure_reason": "string",
"bot_id": "string"
}
The system has started presenting the information to the user.
{
"type": "VisualInformationSceneActionStarted",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"action_started_at": "2019-08-24T14:15:22Z",
"bot_id": "string"
}
UMIM event stream specific to a unique device stream.
Unique identifier for the stream.
Accepts one of the following messages:
The structured representation of the intent of the bot. This event should be generated by the IM if it communicates the current intent of the bot. A bot intent can lead to different multimodal expressions of that intent.
{
"type": "BotIntent",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"intent": "string"
}
Change parameters of the custom action (if supported by the custom action).
{
"type": "ChangeCustomBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"custom_action_name": "string",
"parameters": {}
}
The parameters of a running action need to be changed. Updating running actions is useful for longer-running actions (e.g. an avatar animation) which can adapt their behavior dynamically. For example, a nodding animation can change its speed depending on the voice activity level.
{
"type": "ChangeCustomUserAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"user_id": "string",
"custom_action_name": "string",
"parameters": {}
}
Change the duration of the timer. If the duration is reduced, this can cause the timer to go off immediately (a TimerBotActionFinished event will be sent out).
{
"type": "ChangeTimerBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "time",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"duration": 0
}
Adjusts the intended volume while the action has already been started.
{
"type": "ChangeUtteranceBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"bot_id": "string",
"intensity": 0
}
An update to the context. All specified keys will override the ones in the current context.
{
"type": "ContextUpdate",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"data": {}
}
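A minimal sketch of what overriding keys amounts to, read as a shallow merge where keys present in ``data`` replace existing keys and all others are preserved (this interpretation is an assumption based on the description above).

```python
# Minimal sketch: applying a ContextUpdate as a shallow merge, where keys present in
# the event's data replace existing keys and all other keys are preserved. This
# interpretation is an assumption based on the description above.
def apply_context_update(context: dict, event: dict) -> dict:
    return {**context, **event.get("data", {})}


context = {"user_name": "Ada", "language": "en-US"}
update = {"type": "ContextUpdate", "data": {"language": "de-DE"}}
print(apply_context_update(context, update))  # {'user_name': 'Ada', 'language': 'de-DE'}
```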
A custom event that can be used in addition to the standardized ones.
{
"type": "CustomEvent",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"name": "string",
"data": {}
}
A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.
{
"type": "PipelineAcquired",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.
{
"type": "PipelineReleased",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"stream_uid": "string",
"session_uid": "string",
"user_uid": "string"
}
Event to start an action. All other actions that can be started inherit from this base spec. The action_uid is used to differentiate between multiple runs of the same action.
{
"type": "StartCustomBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"custom_action_name": "string",
"parameters": {}
}
The bot expects a certain event on the UMIM event bus in the near future. This optional event allows Action Servers to optimize their functions. As an example, an Action Server (AS) responsible for processing camera frames can enable or disable certain vision algorithms depending on what the IM is expecting (e.g., BotExpectation(event=PositionChangeUserActionStarted) can allow the AS to start a computationally more expensive motion tracker for better resolution/accuracy).
{
"type": "StartExpectationBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"expected_event": {}
}
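Using the camera example from the description above, the following sketch shows an Action Server enabling a more expensive motion tracker while at least one matching expectation is active; the class and tracker methods are hypothetical.

```python
# Minimal sketch (assumption, not prescribed by the spec), using the camera example
# from the description above: an Action Server that enables a more expensive motion
# tracker while at least one matching expectation is active. The VisionActionServer
# class and the tracker methods are hypothetical.
class VisionActionServer:
    def __init__(self, tracker) -> None:
        self.tracker = tracker
        self._active_expectations: set[str] = set()

    def handle(self, event: dict) -> None:
        if event["type"] == "StartExpectationBotAction":
            expected_type = event.get("expected_event", {}).get("type")
            if expected_type == "PositionChangeUserActionStarted":
                self._active_expectations.add(event["action_uid"])
                self.tracker.enable_high_accuracy()  # hypothetical tracker API
        elif event["type"] == "StopExpectationBotAction":
            self._active_expectations.discard(event["action_uid"])
            if not self._active_expectations:
                self.tracker.disable_high_accuracy()  # hypothetical tracker API
```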
The bot is waiting for an event for a specific modality.
{
"type": "StartExpectationSignalingBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"modality": "bot_speech"
}
The bot should start making a facial gesture.
{
"type": "StartFacialGestureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"bot_id": "string",
"facial_gesture": "string"
}
The bot should start making a specific gesture.
{
"type": "StartGestureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_gesture",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"bot_id": "string",
"gesture": "string"
}
Perform the described camera motion effect.
{
"type": "StartMotionEffectCameraAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_motion_effect",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"bot_id": "string",
"effect": "string"
}
The bot needs to hold a new position.
{
"type": "StartPositionBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_position",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"position": "string"
}
The bot should start adopting the specified posture.
{
"type": "StartPostureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_posture",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"posture": "string"
}
{
"type": "StartRestApiCallBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "web_request",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"request_type": "get",
"url": "string",
"headers": {},
"payload": {}
}
{
"type": "StartShotCameraAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_shot",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"shot": "string",
"start_transition": "string"
}
{
"type": "StartTimerBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "time",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"duration": 0,
"timer_name": "string"
}
The bot should start to produce an utterance. Depending on the interactive system this could be a bot sending a text message or an avatar talking to the user.
{
"type": "StartUtteranceBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"bot_id": "string",
"script": "string",
"intensity": 0
}
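A minimal sketch of an IM starting a bot utterance and correlating the eventual ``UtteranceBotActionFinished`` via ``action_uid``; the ``publish`` coroutine and the incoming ``events`` queue are hypothetical stand-ins for a real event-bus client.

```python
# Minimal sketch (assumption, not prescribed by the spec): an IM starting a bot
# utterance and waiting for the matching UtteranceBotActionFinished, correlated by
# action_uid. The publish coroutine and the incoming events queue are hypothetical
# stand-ins for a real event-bus client.
import asyncio
import uuid
from datetime import datetime, timezone


async def say(publish, events: asyncio.Queue, bot_id: str, script: str) -> dict:
    action_uid = str(uuid.uuid4())
    await publish({
        "type": "StartUtteranceBotAction",
        "uid": str(uuid.uuid4()),
        "event_created_at": datetime.now(timezone.utc).isoformat(),
        "source_uid": "interaction-manager",
        "tags": {},
        "action_uid": action_uid,
        "action_info_modality": "bot_speech",
        "action_info_modality_policy": "replace",
        "action_info_lifetime": "limited",
        "bot_id": bot_id,
        "script": script,
    })
    while True:
        event = await events.get()
        # Ignore unrelated events; return once the matching Finished event arrives.
        if (event.get("type") == "UtteranceBotActionFinished"
                and event.get("action_uid") == action_uid):
            return event
```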
Present a choice in the scene to the user.
{
"type": "StartVisualChoiceSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"prompt": "string",
"image": "string",
"support_prompts": [
"string"
],
"options": [
{
"id": "string",
"text": "string",
"image": "string"
}
],
"choice_type": "selection",
"allow_multiple_choices": true
}
Present a visual form in the scene that requests certain inputs from the user.
{
"type": "StartVisualFormSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"prompt": "string",
"image": "string",
"support_prompts": [
"string"
],
"inputs": [
{
"id": "string",
"value": "string",
"description": "string"
}
]
}
Present information in the scene to the user.
{
"type": "StartVisualInformationSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"title": "string",
"summary": "string",
"support_prompts": [
"string"
],
"content": [
{
"text": "string",
"image": "string"
}
]
}
{
"type": "StopCustomBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "custom",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string",
"custom_action_name": "string",
"parameters": {}
}
The IM communicates that it stopped its expectations. This normally happens when the expectation has been met (e.g. the event has been received) or something else happened to change the course of the interaction.
{
"type": "StopExpectationBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop waiting for an event on the modality.
{
"type": "StopExpectationSignalingBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_expectation",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop the facial gesture or expression. All gestures have a limited lifetime and finish on “their own” (e.g., in an interactive avatar system a “smile” gesture could be implemented by a 1 second animation clip where some facial bones are animated). The IM can use this action to stop an expression before it would be naturally done.
{
"type": "StopFacialGestureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_face",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"bot_id": "string"
}
Stop the gesture. All gestures have a limited lifetime and finish on their own. Gestures are meant to accentuate a certain situation or statement. For example, in an interactive avatar system an `affirm` gesture could be implemented by a 1 second animation clip where the avatar nods twice. The IM can use this action to stop a gesture before it would be naturally done.
{
"type": "StopGestureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_gesture",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"bot_id": "string"
}
Stop the camera effect. All effects have a limited lifetime and finish “on their own” (e.g., in an interactive avatar system a “shake” effect could be implemented by a 1 second camera motion). The IM can use this action to stop a camera effect before it would be naturally done.
{
"type": "StopMotionEffectCameraAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_motion_effect",
"action_info_modality_policy": "override",
"action_info_lifetime": "limited",
"bot_id": "string"
}
Stop holding the position. The bot will return to the position it had before the call. Position holding actions have an infinite lifetime, so unless the IM calls the Stop action the bot maintains the position indefinitely. Alternatively PositionBotAction actions can be overwritten, since the modality policy is Override.
{
"type": "StopPositionBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_position",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop the posture. Postures have no lifetime, so unless the IM calls the Stop action the bot will keep the posture indefinitely.
{
"type": "StopPostureBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_posture",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop the camera shot. The camera will return to the shot it had before this action was started. ShotCameraAction actions have an infinite lifetime, so unless the IM calls the Stop action the camera maintains the shot indefinitely.
{
"type": "StopShotCameraAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "camera_shot",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string",
"stop_transition": "string"
}
{
"type": "StopTimerBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "time",
"action_info_modality_policy": "parallel",
"action_info_lifetime": "limited",
"bot_id": "string"
}
Stops the bot utterance. The action is considered stopped only once the UtteranceBotActionFinished event has been received. For interactive systems that do not support this event, the action will continue to run normally until finished. The interaction manager is expected to handle arbitrary delays between the time the utterance is stopped and the time the utterance actually finishes.
{
"type": "StopUtteranceBotAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "bot_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"bot_id": "string"
}
Indicate that the IM has received the information needed and that the Action Server should consider the utterance as finished as soon as possible. This could, for example, instruct the Action Server to decrease the hold time (the duration of silence in the user speech until the end of speech is considered to have been reached).
{
"type": "StopUtteranceUserAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "user_speech",
"action_info_modality_policy": "replace",
"action_info_lifetime": "limited",
"user_id": "string"
}
Stop presenting the choice to the user.
{
"type": "StopVisualChoiceSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop presenting the form to the user.
{
"type": "StopVisualFormSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
Stop presenting the information to the user.
{
"type": "StopVisualInformationSceneAction",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"action_uid": "string",
"action_info_modality": "information",
"action_info_modality_policy": "override",
"action_info_lifetime": "indefinite",
"bot_id": "string"
}
The structured representation of the intent of the user. This event should be generated by the IM when it has inferred a user's intent. A user intent can be inferred based on verbal or non-verbal communication of the user.
{
"type": "UserIntent",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"intent": "string"
}
{
"type": "UserMovement",
"uid": "string",
"event_created_at": "2019-08-24T14:15:22Z",
"source_uid": "string",
"tags": {},
"movement": "string",
"parameters": {}
}
The system detects the user to be disengaged with the interactive system.
The interactive system detects some level of engagement of the user.
The interactive system provides an update to the engagement level.
The structured representation of the intent of the bot. This event should be generated by the IM if it communicates the current intent of the bot. A bot intent can lead to different multimodal expressions of that intent.
Change parameters of the custom action (if supported by the custom action)
The parameters of a running action needs to be changed. Updating running actions is useful for longer running actions (e.g. an avatar animation) which can adapt their behavior dynamically. For example, a nodding animation can change its speed depending on the voice activity level.
Change the duration of the timer. If the duration is reduced this can cause the timer to go off immediately (an TimerBotActionFinished event will be sent out).
Adjusts the intended volume while the action has already been started.
An update to the context. All specified keys will override the ones in the current context.
The custom action has finished its execution.
The execution of the custom bot action has started.
Something happened during the execution of the custom action (if supported by the custom action).
A custom event that can be used in addition to the standardized ones.
An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or it can finish because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.
The execution of the custom user action has started.
A running action provides a (partial) result. Ongoing actions can provide partial updates on the current status of the action. An ActionUpdated should always update the payload of the action object and provide the type of update.
The interactive system acknowledges that the bot expectation is finished.
The interactive system communicates to the IM that it is able to handle the expectation for the specified events. In case the system is able to handle the expectation it has to send out the ExpectationBotActionStarted event. Receiving the ActionStarted event does not come with any guarantees on how the expectation is handled, but it provides the IM with a way to know if the system is even capable of handling expectations. For expectations for events that are not supported by any Action Server in the interactive system, no ExpectationBotActionStarted event will be sent out. If a system is not capable of handling certain bot expectations the IM might stop communicating them.
Bot has stopped actively waiting. Note that this action is only stopped on explicit request, by calling the sending the StopExpectationSignalingBotAction . Otherwise the action will continue indefinitely.
The bot has started actively waiting for an event on the specified modality.
The facial gesture was performed.
The bot has started to perform the facial gesture
An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or it can finish because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.
The execution of an action has started.
The bot has started to perform the gesture.
The user performed a gesture.
The interactive system detects the start of a user gesture. Note: the time the system detects the gesture might be different from when the user started to perform the gesture.
Camera effect finished.
Camera effect started.
A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.
A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.
Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.
The bot shifted back to the original position before this action. This might be a neutral position or the position of any PositionBotAction overwritten by this action that now gains the “focus”.
The bot has started to transition to the new position
The bot has arrived at the position and is maintaining that position for the entire action duration.
The bot has attained the posture.
The interactive system detects the user's absence
The interactive system detects the presence of a user in the system.
The camera shot was stopped. The camera has returned to the shot it had before (either a neutral shot) or the shot specified by any overwritten ShotCameraAction actions.
Event to start an action. All other actions that can be started inherit from this base spec. The action_uid is used to differentiate between multiple runs of the same action.
The bot expects a certain event on the UMIM event bus in the near future. This optional event can allow the Action Servers to optimize their functions. As an example a AS responsible for processing camera frames can enable / disable certain vision algorithms depending on what the IM is expecting (e.g. BotExpectation(event=PositionChangeUserActionStarted) can allow the AS to start a computationally more expensive motion tracker for better resolution/accuracy.)
The bot is waiting for an event for a specific modality.
The bot should start making a facial gesture.
The bot should start making a specific gesture.
Perform the described camera motion effect.
The bot needs to hold a new position.
The bot should start adopting the specified posture.
The bot should start to produce an utterance. Depending on the interactive system this could be a bot sending a text message or an avatar talking to the user.
Present a choice in the scene to the user.
Present a visual form that is requesting certain inputs from the user in the scene to the user.
Present information in the scene to the user.
The IM communicates that it stopped its expectations. This normally happens when the expectation has been met (e.g. the event has been received) or something else happened to change the course of the interaction.
Stop waiting for an event on the modality.
Stop the facial gesture or expression. All gestures have a limited lifetime and finish on “their own” (e.g., in an interactive avatar system a “smile” gesture could be implemented by a 1 second animation clip where some facial bones are animated). The IM can use this action to stop an expression before it would be naturally done.
Stop the gesture. All gestures have a limited lifetime and finish on 'their own'. Gesture are meant to accentuate a certain situation or statement. For example, in an interactive avatar system a `affirm` gesture could be implemented by a 1 second animation clip where the avatar nods twice. The IM can use this action to stop a gesture before it would be naturally done.
Stop the camera effect. All effects have a limited lifetime and finish “on their own” (e.g., in an interactive avatar system a “shake” effect could be implemented by a 1 second camera motion). The IM can use this action to stop a camera effect before it would naturally finish.
Stop holding the position. The bot will return to the position it had before this action. Position holding actions have an infinite lifetime, so unless the IM calls the Stop action the bot maintains the position indefinitely. Alternatively, PositionBotAction actions can be overwritten, since the modality policy is Override.
Stop the posture. Postures have an unlimited lifetime, so unless the IM calls the Stop action the bot will keep the posture indefinitely.
Stop the camera shot. The camera will return to the shot it had before this action was started. ShotCameraAction actions have an infinite lifetime, so unless the IM calls the Stop action the camera maintains the shot indefinitely.
Stops the bot utterance. The action is stopped only once the UtteranceBotActionFinished event has been received. For interactive systems that do not support this event, the action will continue to run normally until finished. The interaction manager is expected to handle arbitrary delays between the time the utterance is stopped and the time it actually finishes.
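A minimal IM-side sketch of the behavior described above, assuming a hypothetical event-bus client with `publish()`/`wait_for()`: the stop request is published and the action is treated as running until `UtteranceBotActionFinished` arrives, however long that takes. The stop event's type name is an assumption.

```python
async def stop_utterance(bus, action_uid: str) -> dict:
    """Request a stop and wait for UtteranceBotActionFinished.

    `bus` is a hypothetical event-bus client exposing publish() and wait_for();
    the stop event's type name is an assumption.
    """
    await bus.publish({"type": "StopUtteranceBotAction", "action_uid": action_uid})
    # The action only counts as stopped once Finished arrives; no tight timeout
    # is applied because the delay between stop and finish can be arbitrary.
    return await bus.wait_for(
        lambda e: e.get("type") == "UtteranceBotActionFinished"
        and e.get("action_uid") == action_uid
    )
```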
Indicate that the IM has received the information needed and that the Action Server should consider the utterance as finished as soon as possible. This could, for example, instruct the Action Server to decrease the hold time (the duration of silence in the user's speech after which the end of speech is considered reached).
Stop presenting the choice to the user.
Stop presenting the form to the user.
Stop presenting the information to the user.
The structured representation of the intent of the user. This event should be generated by the IM when it has inferred a user's intent. A user intent can be inferred based on verbal or non-verbal communication of the user.
The bot utterance finished, either because the utterance has been delivered to the user or the action was stopped.
Provides updated transcripts during a UtteranceBotAction. These events correspond to the time that a certain part of the utterance is delivered to the user. In an interactive system that supports voice output these events should align with when the user hears the partial transcript.
The bot started to produce the utterance. This event should align as closely as possible with the moment in time the user is receiving the utterance. For example, in an interactive avatar system, the event is sent out by the Action Server once the text-to-speech (TTS) stream is sent to the user.
Whenever the interactive system detects a change in the utterance activity of the user, this event may be sent out by the interactive system. Utterance activity can relate to different events in an interactive system: for a chatbot the activity can relate to the typing speed of the user, whereas in a voice-enabled system activity reflects the user's voice activity. Utterance activity can typically be detected much faster than the end of an utterance. This event can allow interaction designers to react to brief periods of no activity (e.g., silence for a voice bot) during a user utterance.

**Implementation guidance**

- ``action_updated_at``: The timestamp should match the time the user changed the utterance activity (e.g., when they became silent) as closely as possible. For most systems, activity detection will introduce a small delay. However, the timestamp ``action_updated_at`` should represent the moment in time the user changed activity, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload).
The user utterance has finished.

**Implementation guidance**

- Since this event is sent out when the final transcript has been computed, the event is typically delayed compared to the actual moment in time the user utterance stopped.
- ``action_finished_at``: The timestamp ``action_finished_at`` should represent the moment in time the user finished talking/typing, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload). Example: if an interactive system can detect both voice activity (VAD) and transcribe speech (ASR), the timestamp should correspond to the detected utterance end time from VAD and not be related to any delays that ASR processing introduces.
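To make the timestamp guidance concrete, here is a hedged payload sketch in which ``action_finished_at`` reflects the VAD end-of-speech time and ``event_created_at`` the later emission time after ASR completed; the transcript field name and omitted base fields are assumptions.

```python
# Sketch only: action_finished_at carries the VAD end-of-speech time, while
# event_created_at is stamped when the event is emitted after ASR completes.
# The "final_transcript" field name is an assumption.
utterance_user_finished = {
    "type": "UtteranceUserActionFinished",
    "uid": "string",
    "action_uid": "string",
    "action_finished_at": "2019-08-24T14:15:22.000Z",  # VAD: user stopped talking
    "event_created_at": "2019-08-24T14:15:23.400Z",    # emitted later, after ASR
    "final_transcript": "book a table for two",        # assumed field name
}
```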
Provides updated speaking intensity levels if the interactive system supports it.
The user started to produce an utterance. The user could have started talking or typing, for example.

**Implementation guidance**

- This event should be sent out as soon as the system is able to detect the start of a user utterance. In an interactive system that supports voice activity detection (VAD), this should be sent out as soon as voice activity is detected.
- ``action_started_at``: The timestamp should match the time the utterance started as closely as possible. For most systems, voice activity detection will introduce a small delay. However, the timestamp ``action_started_at`` should represent the moment in time the user started talking/typing, not the timestamp of when this event was created (for this purpose there is a separate field ``event_created_at`` in the payload).
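A small sketch of how an Action Server might stamp ``action_started_at`` from the VAD-reported speech start rather than from the emission time; the callback name and its arguments are hypothetical.

```python
from datetime import datetime, timezone
from uuid import uuid4


def on_vad_speech_start(speech_start: datetime, action_uid: str) -> dict:
    """Build an UtteranceUserActionStarted event.

    `speech_start` is the moment VAD detected voice activity (hypothetical
    callback argument); it fills action_started_at, while event_created_at
    records when this event object is actually created.
    """
    return {
        "type": "UtteranceUserActionStarted",
        "uid": str(uuid4()),
        "action_uid": action_uid,
        "action_started_at": speech_start.isoformat(),
        "event_created_at": datetime.now(timezone.utc).isoformat(),
    }
```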
Provides updated transcripts during a UtteranceUserAction.
Whenever the user interacts directly with the choice presented in the scene but does not confirm or cancel the choice, a ChoiceUpdated event is sent out by the interactive system.
Whenever the user confirms or tries to abort the choice while interacting with the visual representation of the choice, this event is sent out by the interactive system. Examples of this include clicking a “confirm” button or clicking on “close”.
The choice action was stopped by the IM (no user action caused the action to be finished by the Action Server).
The system has started presenting the choice to the user.
Whenever the user confirms or tries to abort the form input while interacting with the visual representation of the form, this event is sent out by the interactive system. Examples of this include clicking a “confirm” button or clicking on “close”.
The form action was stopped by the IM (no user action caused the action to be finished by the Action Server).
Whenever the user interacts directly with the form inputs presented in the scene but has not yet confirmed the input, an Updated event is sent out by the interactive system. This allows the IM to react to partial inputs: e.g., if a user is typing an e-mail address, the bot could say "And now only the domain is missing" after the user typed "@" in the form field.
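As a rough sketch of the partial-input case described above, assuming hypothetical type and field names that are not taken from this specification:

```python
# Sketch only: the Updated event's type name and the field carrying the interim
# inputs are assumptions; the payload illustrates the partial e-mail example
# above ("john@" typed, domain still missing).
form_input_updated = {
    "type": "VisualFormSceneActionInputUpdated",            # assumed type name
    "uid": "string",
    "event_created_at": "2019-08-24T14:15:22Z",
    "action_uid": "string",
    "interim_inputs": [{"id": "email", "value": "john@"}],  # assumed field layout
}
```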
The system has started presenting the form to the user.
Whenever the user confirms or tries to abort the visual information shown on the screen, this event is sent out by the interactive system. Examples of this include clicking a “confirm” button or clicking on “close”.
The information action was stopped by the IM (no user action caused the action to be finished by the Action Server).
The system has started presenting the information to the user.