UMIM API 1.3.0

The AsyncAPI version of UMIM (Unified Multimodal Interaction Management) is meant for use with Interaction Manager (IM) components that communicate through an asynchronous event bus.
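
Below is a minimal sketch of what exchanging UMIM events over such a bus can look like. The transport (Redis Streams via redis.asyncio), the JSON serialization, and the helper and envelope field names (new_event, type, uid, event_created_at) are illustrative assumptions; UMIM prescribes event payloads and channel semantics, not a specific broker. The per-stream channel name follows the umim_events_{stream_uid} convention documented below.

    import json
    import uuid
    from datetime import datetime, timezone

    import redis.asyncio as redis  # assumed transport; any async event bus works

    def new_event(event_type: str, **payload) -> dict:
        # Minimal event envelope; field names are illustrative, not normative.
        return {
            "type": event_type,
            "uid": str(uuid.uuid4()),
            "event_created_at": datetime.now(timezone.utc).isoformat(),
            **payload,
        }

    async def publish(r: redis.Redis, stream_uid: str, event: dict) -> None:
        # One channel per device stream: umim_events_{stream_uid}.
        await r.xadd(f"umim_events_{stream_uid}", {"event": json.dumps(event)})

    async def consume(r: redis.Redis, stream_uid: str, last_id: str = "$"):
        # Async generator yielding events as they arrive on the channel.
        while True:
            batches = await r.xread({f"umim_events_{stream_uid}": last_id}, block=0)
            for _stream, entries in batches:
                for entry_id, fields in entries:
                    last_id = entry_id
                    yield json.loads(fields[b"event"])

The later examples in this document reuse these publish/consume/new_event helpers.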

Operations

  • PUB umim_events_system

    UMIM event stream specific to a unique device stream.

    Parameters:

    • stream_uid (string, required): Unique identifier for the stream.

    Accepts one of the following messages:

    • PipelineAcquired

      A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.

      Payload: object

    • PipelineReleased

      A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.

      Payload: object

    • PipelineUpdated

      Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.

      Payload: object
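
    An IM consuming this channel might track pipeline availability like this (a sketch reusing the illustrative consume() helper from the introduction; the stream_uid payload field is an assumption):

      async def watch_pipelines(r, system_stream_uid: str) -> None:
          # Track live pipelines by stream_uid as acquire/release events arrive.
          active: dict[str, dict] = {}
          async for event in consume(r, system_stream_uid):
              if event["type"] == "PipelineAcquired":
                  active[event["stream_uid"]] = event
              elif event["type"] == "PipelineReleased":
                  active.pop(event["stream_uid"], None)
              elif event["type"] == "PipelineUpdated":
                  active.setdefault(event["stream_uid"], {}).update(event)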

  • PUB umim_events_{stream_uid}

    UMIM event stream specific to a unique device stream.

    Parameters:

    • stream_uid (string, required): Unique identifier for the stream.

    Accepts one of the following messages:

    • AttentionUserActionFinished

      The system detects the user to be disengaged from the interactive system.

      Payload: object

    • AttentionUserActionStarted

      The interactive system detects some level of engagement of the user.

      Payload: object

    • AttentionUserActionUpdated

      The interactive system provides an update to the engagement level.

      Payload: object

    • CustomBotActionFinished

      The custom action has finished its execution.

      Payload: object

    • CustomBotActionStarted

      The execution of the custom action has started.

      Payload: object

    • CustomBotActionUpdated

      Something happened during the execution of the custom action (if supported by the custom action).

      Payload: object

    • CustomUserActionFinished

      An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.

      Payload: object
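
      A sketch of how an IM might branch on status_code when consuming Finished events, reusing the illustrative consume() helper from the introduction; the "success" value and the action_uid field are assumptions:

        async def handle_finished(r, stream_uid: str) -> None:
            # React to any *ActionFinished event on the per-stream channel.
            async for event in consume(r, stream_uid):
                if not event["type"].endswith("ActionFinished"):
                    continue
                if event.get("status_code") == "success":
                    print("action", event.get("action_uid"), "completed")
                else:
                    print("action", event.get("action_uid"), "failed or was stopped")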

    • CustomUserActionStarted

      The execution of the custom user action has started.

      Payload: object

    • CustomUserActionUpdated

      A running action provides a (partial) result. Ongoing actions can provide partial updates on the current status of the action. An ActionUpdated should always update the payload of the action object and provide the type of update.

      Payload: object

    • ExpectationBotActionFinished

      The interactive system acknowledges that the bot expectation is finished.

      Payload: object

    • ExpectationBotActionStarted

      The interactive system communicates to the IM that it is able to handle the expectation for the specified events. If the system is able to handle the expectation, it has to send out the ExpectationBotActionStarted event. Receiving the ActionStarted event does not come with any guarantees on how the expectation is handled, but it provides the IM with a way to know whether the system is even capable of handling expectations. For expectations about events that are not supported by any Action Server in the interactive system, no ExpectationBotActionStarted event will be sent out. If a system is not capable of handling certain bot expectations, the IM might stop communicating them.

      Payload: object

    • ExpectationSignalingBotActionFinished

      The bot has stopped actively waiting. Note that this action is only stopped on explicit request, by sending the StopExpectationSignalingBotAction event. Otherwise the action will continue indefinitely.

      Payload: object

    • ExpectationSignalingBotActionStarted

      The bot has started actively waiting for an event on the specified modality.

      Payload: object

    • FacialGestureBotActionFinished

      The facial gesture was performed.

      Payload: object

    • FacialGestureBotActionStarted

      The bot has started to perform the facial gesture.

      Payload: object

    • FacialGestureUserActionFinished

      An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.

      Payload: object

    • FacialGestureUserActionStarted

      The execution of an action has started.

      Payload: object

    • GestureBotActionFinished

      The gesture was performed.

      Payload: object

    • GestureBotActionStarted

      The bot has started to perform the gesture.

      Payload: object

    • GestureUserActionFinished

      The user performed a gesture.

      Payload: object

    • GestureUserActionStarted

      The interactive system detects the start of a user gesture. Note: the time the system detects the gesture might be different from when the user started to perform the gesture.

      Payload: object

    • MotionEffectCameraActionFinished

      Camera effect finished.

      Payload: object

    • MotionEffectCameraActionStarted

      Camera effect started.

      Payload: object

    • PipelineUpdated

      Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.

      Payload: object

    • PositionBotActionFinished

      The bot has shifted back to the position it held before this action. This might be a neutral position or the position of any PositionBotAction overwritten by this action that now gains the “focus”.

      Payload: object

    • PositionBotActionStarted

      The bot has started to transition to the new position.

      Payload: object

    • PositionBotActionUpdated

      The bot has arrived at the position and is maintaining that position for the entire action duration.

      Payload: object

    • PostureBotActionFinished

      The posture was stopped.

      Payload: object

    • PostureBotActionStarted

      The bot has attained the posture.

      Payload: object

    • PresenceUserActionFinished

      The interactive system detects the user's absence.

      Payload: object

    • PresenceUserActionStarted

      The interactive system detects the presence of a user in the system.

      Payload: object

    • RestApiCallBotActionFinished

      The API call finished.

      Payload: object

    • RestApiCallBotActionStarted

      The API call started.

      Payload: object

    • ShotCameraActionFinished

      The camera shot was stopped. The camera has returned to the shot it had before: either a neutral shot or the shot specified by any overwritten ShotCameraAction actions.

      Payload: object

    • ShotCameraActionStarted

      The camera shot started.

      Payload: object

    • TimerBotActionFinished

      Timer finished.

      Payload: object

    • TimerBotActionStarted

      Timer started.

      Payload: object

    • UtteranceBotActionFinished

      The bot utterance finished, either because the utterance has been delivered to the user or because the action was stopped.

      Payload: object

    • UtteranceBotActionScriptUpdated

      Provides updated transcripts during an UtteranceBotAction. These events correspond to the time that a certain part of the utterance is delivered to the user. In an interactive system that supports voice output, these events should align with when the user hears the partial transcript.

      Payload: object

    • UtteranceBotActionStarted

      The bot started to produce the utterance. This event should align as closely as possible with the moment in time the user receives the utterance. For example, in an Interactive Avatar system, the event is sent out by the Action Server once the text-to-speech (TTS) stream is sent to the user.

      Payload: object

    • UtteranceUserActionFinished

      The user utterance has finished.

      Payload: object

    • UtteranceUserActionIntensityUpdated

      Provides updated speaking intensity levels if the interactive system supports it.

      Payload: object

    • UtteranceUserActionStarted

      The user started to produce an utterance. The user could have started talking or typing, for example.

      Payload: object

    • UtteranceUserActionTranscriptUpdated

      Provides updated transcripts during an UtteranceUserAction.

      Payload: object
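
      A sketch of following the user utterance lifecycle, e.g. to react to partial transcripts as they arrive (reuses the illustrative consume() helper; the interim_transcript, final_transcript, and action_uid fields are assumptions):

        async def follow_user_utterances(r, stream_uid: str) -> None:
            partial: dict[str, str] = {}  # action_uid -> latest partial transcript
            async for event in consume(r, stream_uid):
                t = event["type"]
                if t == "UtteranceUserActionStarted":
                    partial[event["action_uid"]] = ""
                elif t == "UtteranceUserActionTranscriptUpdated":
                    partial[event["action_uid"]] = event.get("interim_transcript", "")
                elif t == "UtteranceUserActionFinished":
                    final = event.get("final_transcript") or partial.pop(event["action_uid"], "")
                    print("user said:", final)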

    • VisualChoiceSceneActionChoiceUpdated

      Whenever the user interacts directly with the choice presented in the scene but does not confirm or cancel the choice, a ChoiceUpdated event is sent out by the interactive system.

      Payload: object

    • VisualChoiceSceneActionConfirmationUpdated

      Sent whenever the user confirms or tries to abort the choice while interacting with the visual representation of the choice, for example by clicking a “confirm” button or clicking “close”.

      Payload: object

    • VisualChoiceSceneActionFinished

      The choice action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

      Payload: object

    • VisualChoiceSceneActionStarted

      The system has started presenting the choice to the user.

      Payload: object

    • VisualFormSceneActionConfirmationUpdated

      Sent whenever the user confirms or tries to abort the form input while interacting with the visual representation of the form, for example by clicking a “confirm” button or clicking “close”.

      Payload: object

    • VisualFormSceneActionFinished

      The form action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

      Payload: object

    • VisualFormSceneActionInputUpdated

      Whenever the user interacts directly with the form inputs presented in the scene but has not yet confirmed the input, an Updated event is sent out by the interactive system. This allows the IM to react to partial inputs, e.g. if a user is typing an e-mail address, the bot can react to partial inputs (the bot could say “And now only the domain is missing” after the user typed “@” in the form field).

      Payload: object
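
      For instance, an IM could watch these updates to coach a user through an e-mail field (a sketch reusing the illustrative helpers from the introduction; the interim_inputs payload shape and field ids are assumptions):

        async def coach_email_field(r, stream_uid: str) -> None:
            async for event in consume(r, stream_uid):
                if event["type"] != "VisualFormSceneActionInputUpdated":
                    continue
                # Assumed payload shape: a list of {"id": ..., "value": ...} inputs.
                for field in event.get("interim_inputs", []):
                    if field.get("id") == "email" and field.get("value", "").endswith("@"):
                        await publish(r, stream_uid, new_event(
                            "StartUtteranceBotAction",
                            action_uid=str(uuid.uuid4()),
                            script="And now only the domain is missing.",
                        ))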

    • VisualFormSceneActionStarted

      The system has started presenting the form to the user.

      Payload: object

    • VisualInformationSceneActionConfirmationUpdated

      Sent whenever the user confirms or tries to abort the visual information shown on the screen, for example by clicking a “confirm” button or clicking “close”.

      Payload: object

    • VisualInformationSceneActionFinished

      The information action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

      Payload: object

    • VisualInformationSceneActionStarted

      The system has started presenting the information to the user.

      Payload: object

  • SUB umim_events_{stream_uid}

    UMIM event stream specific to a unique device stream.

    Parameters:

    • stream_uid (string, required): Unique identifier for the stream.

    Accepts one of the following messages:

    • BotIntent

      The structured representation of the intent of the bot. This event should be generated by the IM if it communicates the current intent of the bot. A bot intent can lead to different multimodal expressions of that intent.

      Payload: object

    • ChangeCustomBotAction

      Change parameters of the custom action (if supported by the custom action).

      Payload: object

    • ChangeCustomUserAction

      The parameters of a running action need to be changed. Updating running actions is useful for longer running actions (e.g. an avatar animation) which can adapt their behavior dynamically. For example, a nodding animation can change its speed depending on the voice activity level.

      Payload: object

    • ChangeTimerBotAction

      Change the duration of the timer. If the duration is reduced, this can cause the timer to go off immediately (a TimerBotActionFinished event will be sent out).

      Payload: object

    • ChangeUtteranceBotAction

      Adjusts the intended volume while the action is already running.

      Payload: object

    • ContextUpdate

      An update to the context. All specified keys will override the ones in the current context.

      Payload: object
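
      Publishing a context update might look like this (reusing the illustrative helpers from the introduction; the data key is an assumption about the payload shape):

        async def update_user_name(r, stream_uid: str) -> None:
            # Override the "user_name" key in the current interaction context.
            await publish(r, stream_uid, new_event(
                "ContextUpdate", data={"user_name": "Ada"}))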

    • CustomEvent

      A custom event that can be used in addition to the standardized ones.

      Payload: object

    • PipelineAcquired

      A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.

      Payload: object

    • PipelineReleased

      A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.

      Payload: object

    • StartCustomBotAction

      Event to start an action. All other actions that can be started inherit from this base spec. The action_uid is used to differentiate between multiple runs of the same action.

      Payload: object

    • StartExpectationBotAction

      The bot expects a certain event on the UMIM event bus in the near future. This optional event can allow Action Servers to optimize their functions. As an example, an Action Server (AS) responsible for processing camera frames can enable or disable certain vision algorithms depending on what the IM is expecting (e.g. BotExpectation(event=PositionChangeUserActionStarted) can allow the AS to start a computationally more expensive motion tracker for better resolution/accuracy).

      Payload: object
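
      A sketch of the IM announcing such an expectation (illustrative helpers from the introduction; the event_name field is an assumption about the payload shape):

        async def expect_position_change(r, stream_uid: str) -> None:
            # Hint to Action Servers that a position change event is likely soon.
            await publish(r, stream_uid, new_event(
                "StartExpectationBotAction",
                action_uid=str(uuid.uuid4()),
                event_name="PositionChangeUserActionStarted",
            ))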

    • StartExpectationSignalingBotAction

      The bot is waiting for an event for a specific modality.

      Payload: object

    • StartFacialGestureBotAction

      The bot should start making a facial gesture.

      Payload: object

    • StartGestureBotAction

      The bot should start making a specific gesture.

      Payload: object

    • StartMotionEffectCameraAction

      Perform the described camera motion effect.

      Payload: object

    • StartPositionBotAction

      The bot needs to hold a new position.

      Payload: object

    • StartPostureBotAction

      The bot should start adopting the specified posture.

      Payload: object

    • StartRestApiCallBotAction

      Start an API call.

      Payload: object

    • StartShotCameraAction

      Start a new shot.

      Payload: object

    • StartTimerBotAction

      Start a timer.

      Payload: object
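
      A sketch of a timer lifecycle driven by the IM (illustrative helpers from the introduction; the duration field and its unit are assumptions):

        async def run_timer(r, stream_uid: str) -> None:
            timer_uid = str(uuid.uuid4())
            # Start a 10-second timer; the shared action_uid ties Change/Stop to it.
            await publish(r, stream_uid, new_event(
                "StartTimerBotAction", action_uid=timer_uid, duration=10.0))
            # Shorten it, which may make it fire immediately...
            await publish(r, stream_uid, new_event(
                "ChangeTimerBotAction", action_uid=timer_uid, duration=2.0))
            # ...or cancel it outright.
            await publish(r, stream_uid, new_event(
                "StopTimerBotAction", action_uid=timer_uid))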

    • StartUtteranceBotAction

      The bot should start to produce an utterance. Depending on the interactive system this could be a bot sending a text message or an avatar talking to the user.

      Payload: object
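
      Starting a bot utterance from the IM might look like this (illustrative helpers from the introduction; the script field name mirrors the UtteranceBotActionScriptUpdated event above but is still an assumption about the payload):

        async def say_hello(r, stream_uid: str) -> None:
            await publish(r, stream_uid, new_event(
                "StartUtteranceBotAction",
                action_uid=str(uuid.uuid4()),
                script="Hello! How can I help you today?",
            ))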

    • StartVisualChoiceSceneAction

      Present a choice in the scene to the user.

      Payload: object

    • StartVisualFormSceneAction

      Present a visual form in the scene that requests certain inputs from the user.

      Payload: object

    • StartVisualInformationSceneAction

      Present information in the scene to the user.

      Payload: object

    • StopCustomBotAction

      An action needs to be stopped.

      Payload: object

    • StopExpectationBotAction

      The IM communicates that it stopped its expectations. This normally happens when the expectation has been met (e.g. the event has been received) or something else happened to change the course of the interaction.

      Payload: object

    • StopExpectationSignalingBotAction

      Stop waiting for an event on the modality.

      Payload: object

    • StopFacialGestureBotAction

      Stop the facial gesture or expression. All gestures have a limited lifetime and finish on “their own” (e.g., in an interactive avatar system a “smile” gesture could be implemented by a 1 second animation clip where some facial bones are animated). The IM can use this action to stop an expression before it finishes naturally.

      Payload: object

    • StopGestureBotAction

      Stop the gesture. All gestures have a limited lifetime and finish on “their own”. Gestures are meant to accentuate a certain situation or statement. For example, in an interactive avatar system an `affirm` gesture could be implemented by a 1 second animation clip where the avatar nods twice. The IM can use this action to stop a gesture before it finishes naturally.

      Payload: object

    • StopMotionEffectCameraAction

      Stop the camera effect. All effects have a limited lifetime and finish “on their own” (e.g., in an interactive avatar system a “shake” effect could be implemented by a 1 second camera motion). The IM can use this action to stop a camera effect before it finishes naturally.

      Payload: object

    • StopPositionBotAction

      Stop holding the position. The bot will return to the position it had before the call. Position holding actions have an infinite lifetime, so unless the IM calls the Stop action the bot maintains the position indefinitely. Alternatively, PositionBotAction actions can be overwritten, since the modality policy is Override.

      Payload: object

    • StopPostureBotAction

      Stop the posture. Postures have an infinite lifetime, so unless the IM calls the Stop action the bot will keep the posture indefinitely.

      Payload: object

    • StopShotCameraAction

      Stop the camera shot. The camera will return to the shot it had before this action was started. ShotCameraAction actions have an infinite lifetime, so unless the IM calls the Stop action the camera maintains the shot indefinitely.

      Payload: object

    • StopTimerBotAction

      Stop the timer.

      Payload: object

    • StopUtteranceBotAction

      Stops the bot utterance. The action is stopped only once the UtteranceBotActionFinished event has been received. For interactive systems that do not support this event, the action will continue to run normally until finished. The interaction manager is expected to handle arbitrary delays between stopping the utterance and the time the utterance actually finishes.

      Payload: object
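
      A sketch of the stop-and-await pattern this implies: the IM requests the stop, then keeps treating the action as running until the corresponding Finished event arrives (illustrative helpers from the introduction; the action_uid field is an assumption):

        async def stop_utterance(r, stream_uid: str, action_uid: str) -> None:
            await publish(r, stream_uid, new_event(
                "StopUtteranceBotAction", action_uid=action_uid))
            # The action only counts as stopped once Finished is observed; systems
            # without stop support will simply let it finish naturally later.
            async for event in consume(r, stream_uid):
                if (event["type"] == "UtteranceBotActionFinished"
                        and event.get("action_uid") == action_uid):
                    break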

    • StopUtteranceUserAction

      Indicates that the IM has received the information needed and that the Action Server should consider the utterance as finished as soon as possible. This could for example instruct the Action Server to decrease the hold time (the duration of silence in the user speech until the end of speech is considered to have been reached).

      Payload: object

    • StopVisualChoiceSceneAction

      Stop presenting the choice to the user.

      Payload: object

    • StopVisualFormSceneAction

      Stop presenting the form to the user.

      Payload: object

    • StopVisualInformationSceneAction

      Stop presenting the information to the user.

      Payload: object

    • UserIntent

      The structured representation of the intent of the user. This event should be generated by the IM when it has inferred a user's intent. A user intent can be inferred based on verbal or non-verbal communication of the user.

      Payload: object

    • UserMovement

      A specific user movement was detected.

      Payload: object

Messages

  • AttentionUserActionFinished

    The system detects the user to be disengaged from the interactive system.

    Payload: object
  • AttentionUserActionStarted

    The interactive system detects some level of engagement of the user.

    Payload: object
  • AttentionUserActionUpdated

    The interactive system provides an update to the engagement level.

    Payload: object
  • BotIntent

    The structured representation of the intent of the bot. This event should be generated by the IM if it communicates the current intent of the bot. A bot intent can lead to different multimodal expressions of that intent.

    Payload: object
  • ChangeCustomBotAction

    Change parameters of the custom action (if supported by the custom action).

    Payload: object
  • ChangeCustomUserAction

    The parameters of a running action need to be changed. Updating running actions is useful for longer running actions (e.g. an avatar animation) which can adapt their behavior dynamically. For example, a nodding animation can change its speed depending on the voice activity level.

    Payload: object
  • ChangeTimerBotAction

    Change the duration of the timer. If the duration is reduced, this can cause the timer to go off immediately (a TimerBotActionFinished event will be sent out).

    Payload: object
  • ChangeUtteranceBotAction

    Adjusts the intended volume while the action is already running.

    Payload: object
  • ContextUpdate

    An update to the context. All specified keys will override the ones in the current context.

    Payload: object
  • CustomBotActionFinished

    The custom action has finished its execution.

    Payload: object
  • CustomBotActionStarted

    The execution of the custom action has started.

    Payload: object
  • CustomBotActionUpdated

    Something happened during the execution of the custom action (if supported by the custom action).

    Payload: object
  • CustomEvent

    A custom event that can be used in addition to the standardized ones.

    Payload: object
  • CustomUserActionFinished

    An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.

    Payload: object
  • CustomUserActionStarted

    The execution of the custom user action has started.

    Payload: object
  • CustomUserActionUpdated

    A running action provides a (partial) result. Ongoing actions can provide partial updates on the current status of the action. An ActionUpdated should always update the payload of the action object and provide the type of update.

    Payload: object
  • ExpectationBotActionFinished

    The interactive system acknowledges that the bot expectation is finished.

    Payload: object
  • ExpectationBotActionStarted

    The interactive system communicates to the IM that it is able to handle the expectation for the specified events. If the system is able to handle the expectation, it has to send out the ExpectationBotActionStarted event. Receiving the ActionStarted event does not come with any guarantees on how the expectation is handled, but it provides the IM with a way to know whether the system is even capable of handling expectations. For expectations about events that are not supported by any Action Server in the interactive system, no ExpectationBotActionStarted event will be sent out. If a system is not capable of handling certain bot expectations, the IM might stop communicating them.

    Payload: object
  • ExpectationSignalingBotActionFinished

    The bot has stopped actively waiting. Note that this action is only stopped on explicit request, by sending the StopExpectationSignalingBotAction event. Otherwise the action will continue indefinitely.

    Payload: object
  • ExpectationSignalingBotActionStarted

    The bot has started actively waiting for an event on the specified modality.

    Payload: object
  • FacialGestureBotActionFinished

    The facial gesture was performed.

    Payload: object
  • FacialGestureBotActionStarted

    The bot has started to perform the facial gesture.

    Payload: object
  • FacialGestureUserActionFinished

    An action has finished its execution. An action can finish either because the action has completed or failed (natural completion) or because it was stopped by the IM. The success (or failure) of the execution is marked using the status_code attribute.

    Payload: object
  • FacialGestureUserActionStarted

    The execution of an action has started.

    Payload: object
  • GestureBotActionFinished

    The gesture was performed.

    Payload: object
  • GestureBotActionStarted

    The bot has started to perform the gesture.

    Payload: object
  • GestureUserActionFinished

    The user performed a gesture.

    Payload: object
  • GestureUserActionStarted

    The interactive system detects the start of a user gesture. Note: the time the system detects the gesture might be different from when the user started to perform the gesture.

    Payload: object
  • MotionEffectCameraActionFinished

    Camera effect finished.

    Payload: object
  • MotionEffectCameraActionStarted

    Camera effect started.

    Payload: object
  • PipelineAcquired

    A new pipeline has been acquired. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about the availability of new pipelines.

    Payload: object
  • PipelineReleased

    A pipeline has been released and is no longer available. A pipeline connects the IM to end user devices (stream_uid). This event informs the IM in an event-based implementation about pipelines that have been released.

    Payload: object
  • PipelineUpdated

    Information about an existing pipeline has been updated. This means that a new session was started or a new user has been identified as part of the same pipeline.

    Payload: object
  • PositionBotActionFinished

    The bot has shifted back to the position it held before this action. This might be a neutral position or the position of any PositionBotAction overwritten by this action that now gains the “focus”.

    Payload: object
  • PositionBotActionStarted

    The bot has started to transition to the new position.

    Payload: object
  • PositionBotActionUpdated

    The bot has arrived at the position and is maintaining that position for the entire action duration.

    Payload: object
  • PostureBotActionFinished

    The posture was stopped.

    Payload: object
  • PostureBotActionStarted

    The bot has attained the posture.

    Payload: object
  • PresenceUserActionFinished

    The interactive system detects the user's absence.

    Payload: object
  • PresenceUserActionStarted

    The interactive system detects the presence of a user in the system.

    Payload: object
  • RestApiCallBotActionFinished

    The API call finished.

    Payload: object
  • RestApiCallBotActionStarted

    The API call started.

    Payload: object
  • ShotCameraActionFinished

    The camera shot was stopped. The camera has returned to the shot it had before: either a neutral shot or the shot specified by any overwritten ShotCameraAction actions.

    Payload: object
  • ShotCameraActionStarted

    The camera shot started.

    Payload: object
  • StartCustomBotAction

    Event to start an action. All other actions that can be started inherit from this base spec. The action_uid is used to differentiate between multiple runs of the same action.

    Payload: object
  • StartExpectationBotAction

    The bot expects a certain event on the UMIM event bus in the near future. This optional event can allow Action Servers to optimize their functions. As an example, an Action Server (AS) responsible for processing camera frames can enable or disable certain vision algorithms depending on what the IM is expecting (e.g. BotExpectation(event=PositionChangeUserActionStarted) can allow the AS to start a computationally more expensive motion tracker for better resolution/accuracy).

    Payload: object
  • StartExpectationSignalingBotAction

    The bot is waiting for an event for a specific modality.

    Payload: object
  • StartFacialGestureBotAction

    The bot should start making a facial gesture.

    Payload: object
  • StartGestureBotAction

    The bot should start making a specific gesture.

    Payload: object
  • StartMotionEffectCameraAction

    Perform the described camera motion effect.

    Payload: object
  • StartPositionBotAction

    The bot needs to hold a new position.

    Payload: object
  • StartPostureBotAction

    The bot should start adopting the specified posture.

    Payload: object
  • StartRestApiCallBotAction

    Start an API call.

    Payload: object
  • StartShotCameraAction

    Start a new shot.

    Payload: object
  • StartTimerBotAction

    Start a timer.

    Payload: object
  • StartUtteranceBotAction

    The bot should start to produce an utterance. Depending on the interactive system this could be a bot sending a text message or an avatar talking to the user.

    Payload: object
  • StartVisualChoiceSceneAction

    Present a choice in the scene to the user.

    Payload: object
  • StartVisualFormSceneAction

    Present a visual form in the scene that requests certain inputs from the user.

    Payload: object
  • StartVisualInformationSceneAction

    Present information in the scene to the user.

    Payload: object
  • StopCustomBotAction

    An action needs to be stopped.

    Payload: object
  • StopExpectationBotAction

    The IM communicates that it stopped its expectations. This normally happens when the expectation has been met (e.g. the event has been received) or something else happened to change the course of the interaction.

    Payload: object
  • StopExpectationSignalingBotAction

    Stop waiting for an event on the modality.

    Payload: object
  • StopFacialGestureBotAction

    Stop the facial gesture or expression. All gestures have a limited lifetime and finish on “their own” (e.g., in an interactive avatar system a “smile” gesture could be implemented by a 1 second animation clip where some facial bones are animated). The IM can use this action to stop an expression before it finishes naturally.

    Payload: object
  • StopGestureBotAction

    Stop the gesture. All gestures have a limited lifetime and finish on “their own”. Gestures are meant to accentuate a certain situation or statement. For example, in an interactive avatar system an `affirm` gesture could be implemented by a 1 second animation clip where the avatar nods twice. The IM can use this action to stop a gesture before it finishes naturally.

    Payload: object
  • StopMotionEffectCameraAction

    Stop the camera effect. All effects have a limited lifetime and finish “on their own” (e.g., in an interactive avatar system a “shake” effect could be implemented by a 1 second camera motion). The IM can use this action to stop a camera effect before it finishes naturally.

    Payload: object
  • StopPositionBotAction

    Stop holding the position. The bot will return to the position it had before the call. Position holding actions have an infinite lifetime, so unless the IM calls the Stop action the bot maintains the position indefinitely. Alternatively, PositionBotAction actions can be overwritten, since the modality policy is Override.

    Payload: object
  • StopPostureBotAction

    Stop the posture. Postures have an infinite lifetime, so unless the IM calls the Stop action the bot will keep the posture indefinitely.

    Payload: object
  • StopShotCameraAction

    Stop the camera shot. The camera will return to the shot it had before this action was started. ShotCameraAction actions have an infinite lifetime, so unless the IM calls the Stop action the camera maintains the shot indefinitely.

    Payload: object
  • StopTimerBotAction

    Stop the timer.

    Payload: object
  • StopUtteranceBotAction

    Stops the bot utterance. The action is stopped only once the UtteranceBotActionFinished event has been received. For interactive systems that do not support this event, the action will continue to run normally until finished. The interaction manager is expected to handle arbitrary delays between stopping the utterance and the time the utterance actually finishes.

    Payload: object
  • StopUtteranceUserAction

    Indicates that the IM has received the information needed and that the Action Server should consider the utterance as finished as soon as possible. This could for example instruct the Action Server to decrease the hold time (the duration of silence in the user speech until the end of speech is considered to have been reached).

    Payload: object
  • StopVisualChoiceSceneAction

    Stop presenting the choice to the user.

    Payload: object
  • StopVisualFormSceneAction

    Stop presenting the form to the user.

    Payload: object
  • StopVisualInformationSceneAction

    Stop presenting the information to the user.

    Payload: object
  • TimerBotActionFinished

    Timer finished.

    Payload: object
  • TimerBotActionStarted

    Timer started.

    Payload: object
  • UserIntent

    The structured representation of the intent of the user. This event should be generated by the IM when it has inferred a user's intent. A user intent can be inferred based on verbal or non-verbal communication of the user.

    Payload: object
  • UserMovement

    A specific user movement was detected.

    Payload: object
  • UtteranceBotActionFinished

    The bot utterance finished, either because the utterance has been delivered to the user or because the action was stopped.

    Payload: object
  • UtteranceBotActionScriptUpdated

    Provides updated transcripts during an UtteranceBotAction. These events correspond to the time that a certain part of the utterance is delivered to the user. In an interactive system that supports voice output, these events should align with when the user hears the partial transcript.

    Payload: object
  • UtteranceBotActionStarted

    The bot started to produce the utterance. This event should align as closely as possible with the moment in time the user receives the utterance. For example, in an Interactive Avatar system, the event is sent out by the Action Server once the text-to-speech (TTS) stream is sent to the user.

    Payload: object
  • UtteranceUserActionFinished

    The user utterance has finished.

    Payload: object
  • UtteranceUserActionIntensityUpdated

    Provides updated speaking intensity levels if the interactive system supports it.

    Payload: object
  • UtteranceUserActionStarted

    The user started to produce an utterance. The user could have started talking or typing, for example.

    Payload: object
  • UtteranceUserActionTranscriptUpdated

    Provides updated transcripts during an UtteranceUserAction.

    Payload: object
  • VisualChoiceSceneActionChoiceUpdated

    Whenever the user interacts directly with the choice presented in the scene but does not confirm or cancel the choice, a ChoiceUpdated event is sent out by the interactive system.

    Payload: object
  • VisualChoiceSceneActionConfirmationUpdated

    Sent whenever the user confirms or tries to abort the choice while interacting with the visual representation of the choice, for example by clicking a “confirm” button or clicking “close”.

    Payload: object
  • VisualChoiceSceneActionFinished

    The choice action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

    Payload: object
  • VisualChoiceSceneActionStarted

    The system has started presenting the choice to the user.

    Payload: object
  • VisualFormSceneActionConfirmationUpdated

    Sent whenever the user confirms or tries to abort the form input while interacting with the visual representation of the form, for example by clicking a “confirm” button or clicking “close”.

    Payload: object
  • VisualFormSceneActionFinished

    The form action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

    Payload: object
  • VisualFormSceneActionInputUpdated

    Whenever the user interacts directly with the form inputs presented in the scene but has not yet confirmed the input, an Updated event is sent out by the interactive system. This allows the IM to react to partial inputs, e.g. if a user is typing an e-mail address, the bot can react to partial inputs (the bot could say “And now only the domain is missing” after the user typed “@” in the form field).

    Payload: object
  • VisualFormSceneActionStarted

    The system has started presenting the form to the user.

    Payload: object
  • VisualInformationSceneActionConfirmationUpdated

    Sent whenever the user confirms or tries to abort the visual information shown on the screen, for example by clicking a “confirm” button or clicking “close”.

    Payload: object
  • VisualInformationSceneActionFinished

    The information action was stopped by the IM (no user action will cause the action to be finished by the Action Server).

    Payload: object
  • VisualInformationSceneActionStarted

    The system has started presenting the information to the user.

    Payload: object

Schemas

  • VisualChoiceOption (object)

  • ChoiceType (string)

    An enumeration.

    Allowed values:
    • "selection"
    • "search"
  • RequestType (string)

    Request Type.

    Allowed values:
    • "get"
    • "post"
    • "put"

  • VisualFormInputs (object)

  • VisualInformationContent (object)
  • ConfirmationStatus (string)

    An enumeration.

    Allowed values:
    • "confirm"
    • "cancel"
    • "unknown"

  • ActionModality (string)

    An enumeration.

    Allowed values:
    • "bot_speech"
    • "bot_posture"
    • "bot_gesture"
    • "user_speech"
    • "bot_face"
    • "bot_upper_body"
    • "bot_position"
    • "user_face"
    • "user_upper_body"
    • "user_lower_body"
    • "user_engagement"
    • "sound"
    • "environment"
    • "camera_shot"
    • "camera_motion_effect"
    • "information"
    • "visual_effect"
    • "user_presence"
    • "bot_active_waiting"
    • "bot_expectation"
    • "custom"
    • "time"
    • "web_request"