Tags: nao-robot, pepper, choregraphe

What is the difference between starting Choregraphe application with a trigger sentence and starting in a different way?


I have a variety of apps that can be started in one of three ways:
1. from the robot's tablet - when the user taps the icon, I call runBehavior or startBehavior
2. from a dialog - by raising an event or starting a behavior
3. with trigger sentences

When the app runs, I want the robot to stay focused until it is over. That's why I stop modules such as ALBasicAwareness and ALSpeechRecognition. However, if someone touches the robot on the head, the dialog_touch topic from the basic channel is triggered and Pepper begins to listen and answer questions, even though it has not left the application yet. This happens when the app is started via (1) or (2), but when it is started via (3) (with trigger sentences) the robot remains focused at all times.
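For reference, pausing those modules could look like the sketch below. This assumes the NAOqi Python SDK's ALProxy and the ALBasicAwareness.pauseAwareness/resumeAwareness and ALSpeechRecognition.pause calls from NAOqi 2.x; the proxy factory indirection is just to keep the sketch runnable off-robot, and the robot address is a placeholder.

```python
def pause_distractions(make_proxy):
    """Pause basic awareness and speech recognition for the app's duration.

    `make_proxy(name)` should return a proxy to the named NAOqi module,
    e.g. lambda name: ALProxy(name, "pepper.local", 9559).
    """
    awareness = make_proxy("ALBasicAwareness")
    awareness.pauseAwareness()      # stop tracking people and sounds
    asr = make_proxy("ALSpeechRecognition")
    asr.pause(True)                 # suspend the speech recognition engine
    return awareness, asr

def resume_distractions(awareness, asr):
    """Undo pause_distractions when the app finishes."""
    asr.pause(False)
    awareness.resumeAwareness()
```

As the question notes, pausing these modules alone does not stop the basic channel's dialog_touch from reacting; the actual fix is in the answer below.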

I want to know what the difference is between starting with a trigger sentence and the other ways, and how to run the application so that the robot does not lose focus.


Solution

  • Instead of runBehavior and startBehavior, you should take advantage of the robot's life cycle (see the documentation). Call ALAutonomousLife.switchFocus instead, so the robot is 100% focused on your app (all the other activities will be stopped).

    When the robot starts, Autonomous Life runs in the "solitary" state (no behavior is focused). It registers and listens to the launch trigger conditions of all the apps installed on the robot. When a behavior wants to start (i.e. its trigger condition becomes true), Autonomous Life automatically calls switchFocus.

    Then depending on the behavior type:

    • if the behavior is "interactive" then autonomous life will also unregister the launch trigger conditions (i.e. an interactive behavior cannot be stopped by another behavior).
    • if the behavior is "solitary", then only interactive behaviors' trigger conditions will still be active (i.e. a solitary behavior can be stopped if an interactive behavior needs to start).
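Launching your app manually through this mechanism could look like the hypothetical sketch below. switchFocus is a real ALAutonomousLife method; the app id "my-app/behavior_1", the robot address, and the proxy factory indirection (used to keep the sketch runnable off-robot) are placeholders.

```python
def focus_app(make_proxy, app_id):
    """Ask Autonomous Life to focus `app_id`.

    Autonomous Life stops the currently focused activity before starting
    ours, following the interactive/solitary rules described above.
    """
    life = make_proxy("ALAutonomousLife")
    life.switchFocus(app_id)   # use this in place of runBehavior/startBehavior

# On a real robot (hypothetical address):
# from naoqi import ALProxy
# focus_app(lambda name: ALProxy(name, "pepper.local", 9559), "my-app/behavior_1")
```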

    At the end of your application, the robot goes back to the "solitary" state and resumes watching the trigger conditions.
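If your launcher needs to know when that has happened (for example before relaunching something), it can poll the life state. getState is a real ALAutonomousLife call; the proxy factory indirection is an assumption made so the sketch runs off-robot.

```python
import time

def wait_for_solitary(make_proxy, timeout=10.0, poll=0.5):
    """Return True once ALAutonomousLife reports "solitary", else False."""
    life = make_proxy("ALAutonomousLife")
    deadline = time.time() + timeout
    while time.time() < deadline:
        if life.getState() == "solitary":
            return True          # trigger conditions are being watched again
        time.sleep(poll)
    return False
```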

    If you have "The Dialog" on your robot, then you have an interactive behavior that starts automatically (its trigger condition is "user is in zone 1") and runs all your "collaborative dialogs". In a dialog, you can use ^switchFocus to ask Autonomous Life to start an app and stop anything else it is doing.
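A qichat topic fragment for that last point might look like the sketch below. ^switchFocus is a real qichat concept; the topic name, trigger phrase, and application id "my-app" are placeholders, and the exact id depends on how your package is installed.

```
topic: ~launch_my_app()
language: enu

u:(start my app) Okay, starting it now. ^switchFocus(my-app)
```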