# Navigation Stack Evaluation

## Running the evaluation tests

1. Running NavSim

If you are using NavSim as the simulator for evaluation, start NavSim before running the tests. If you are using flatsim as the simulator, skip this step.

First, create the NavSim binary containing all scenes for evaluation. The build script creates a NavSim binary, navigation_evaluation.x86_64, in ~/isaac_sim_unity3d/projects/navigation_evaluation/Builds. Then run the following:


```
~/isaac: cd ~/isaac_sim_unity3d/projects/navigation_evaluation/Builds
~/isaac_sim_unity3d/projects/navigation_evaluation/Builds: ./navigation_evaluation.x86_64
```


A black Unity window should appear. This means NavSim is running and waiting for an Isaac application to connect. Continue to the next step.

If you are working on a new scene, you may also want to run the tests in Unity Editor directly.

However, you can only run tests that use the currently active scene in the Unity Editor. To run all of the tests from the Unity Editor, you need to add all the scenes used by the tests to Unity's build settings. To do so, use the Editor toolbar (Isaac > Open Scenes on List) and choose packages/nvidia_qa/navigation_evaluation/navsim_scenes.json, assuming all the test scenes are already listed in that JSON file.

2. Running ROS Navigation Stack

If you want to evaluate the Isaac navigation stack, skip this step. To evaluate the ROS navigation stack, follow the instructions in ROS Bridge to install ROS and verify that the bridge is functional. If you installed a ROS distribution other than Melodic, modify the "source /opt/ros/melodic/setup.bash" commands in packages/nvidia_qa/navigation_evaluation/*.json accordingly.
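For example, if you are on ROS Noetic (the distribution name here is only an illustration), a one-line substitution over the evaluation JSON files could look like the following:

```
~/isaac: sed -i 's#/opt/ros/melodic/setup.bash#/opt/ros/noetic/setup.bash#g' packages/nvidia_qa/navigation_evaluation/*.json
```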

Note

ROS navigation cannot be run over SSH because rviz requires a display.

3. Running the evaluation tests

Deploy navigation_evaluation-pkg (from the sdk/ subdirectory), then run the following from the deployed package to evaluate the Isaac navigation stack:


```
~/deploy/bob/navigation_evaluation-pkg: ./run ./packages/nvidia_qa/navigation_evaluation/batch_execution.py
```

Or run the following (from the deployed navigation_evaluation_ros-pkg) for ROS evaluation:



```
~/deploy/bob/navigation_evaluation_ros-pkg: ./run ./packages/nvidia_qa/navigation_evaluation/batch_execution.py --app packages/nvidia_qa/navigation_evaluation/ros_navigation_with_flatsim.app.json --test packages/nvidia_qa/navigation_evaluation/tests_ros.json
```

Note

If you do not have ROS installed, this command may result in errors. If you are not evaluating ROS, you can delete the associated dependencies from the BUILD file.

By default, this script runs isaac_navigation_with_flatsim.app.json over the list of tests specified in tests_manual.json in packages/nvidia_qa/navigation_evaluation, and writes the log files to the /tmp/navigation_evaluation folder. For each test, an application JSON, a monitor log, and a performance report are created; these filenames all start with the UUID of the application. After all tests finish, a summary file, evaluation_result.json, is created containing the test results. The results include the UUID of each test, which can be used to identify the log and performance files for that test.

| Command Line Option | Description |
| --- | --- |
| `-a, --app` | Application JSON filename to run the tests with. Defaults to packages/nvidia_qa/navigation_evaluation/isaac_navigation_with_flatsim.app.json. |
| `-n, --node` | Full name of the node in the application graph to monitor for success and failure. Defaults to behavior_main. This node should report success or failure when the navigation stack under test reaches such a state. |
| `-o, --output` | Output folder for the log files. Defaults to /tmp/navigation_evaluation. |
| `-t, --test` | A JSON file specifying all the tests to run. Defaults to packages/nvidia_qa/navigation_evaluation/tests_manual.json. |
| `-s, --selector` | A comma-separated list of indices of tests to run. Defaults to an empty string, in which case all tests in the file are run. |
| `-e, --expiration` | A hard limit on the time in seconds each test may run. Defaults to 300. |
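For instance, to run only a subset of tests with a custom output folder and a longer per-test time limit, the options above can be combined as follows (the test indices and folder name are placeholders):

```
~/deploy/bob/navigation_evaluation-pkg: ./run ./packages/nvidia_qa/navigation_evaluation/batch_execution.py --selector 0,2,5 --output /tmp/navigation_evaluation_subset --expiration 600
```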

Each test is performed by creating a new Isaac application with the test parameters specified in tests_manual.json. The application runs in a separate process until a termination condition is reached. A test is terminated when one of the following conditions is met: the monitored node reports success or failure, the application crashes, or the application runtime exceeds the expiration limit.

Four sample test JSON files are included: tests_manual has a small number of manually created scenarios for a sanity check, and tests_random has six scenes with twenty randomly generated scenarios per scene. Both sets of tests can be run with either flatsim or NavSim. tests_flatsim_only contains maps in apps/assets/maps, some of them created by gmapping; this set should only be run with flatsim, because the corresponding scenes do not exist in NavSim. For now, the ROS navigation stack can only be evaluated with tests_ros.
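As an example, assuming the random set is stored as tests_random.json (the exact filename may differ in your checkout), running it against NavSim could look like the following. Remember to start NavSim (step 1) before launching this command.

```
~/deploy/bob/navigation_evaluation-pkg: ./run ./packages/nvidia_qa/navigation_evaluation/batch_execution.py --app packages/nvidia_qa/navigation_evaluation/isaac_navigation_with_navsim.app.json --test packages/nvidia_qa/navigation_evaluation/tests_random.json
```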

To speed up evaluation, enable time machine or time scaling in the Isaac scheduler. The isaac_navigation_with_flatsim and ros_navigation_with_flatsim applications already enable time machine. The isaac_navigation_with_navsim application cannot use time machine, since Unity does not support it. However, Unity does support time scaling, so you can run the tests at a higher time scale by setting time_scale=2 in isaac_navigation_with_navsim.app.json and running NavSim with the corresponding time scale:



```
~/isaac_sim_unity3d/projects/navigation_evaluation/Builds: ./navigation_evaluation.x86_64 --timeScale 2
```

A valid time-scale setting for NavSim depends on the number of cameras enabled in the scene. The recommendation is to disable the main camera in all scenes used for evaluation.


4. Understanding the outputs

After running the batch_execution.py script, the following log files are written to the output folder (/tmp/navigation_evaluation by default, unless you specify another one with -o or --output):
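As an illustration, after a run the output folder contains entries like the following (the UUID shown is a placeholder; each test produces its own set of files):

```
~/deploy/bob/navigation_evaluation-pkg: ls /tmp/navigation_evaluation
1578e0cc-c3e3-11e9-a6db-0ba0cf826441_app.json
1578e0cc-c3e3-11e9-a6db-0ba0cf826441_monitor.log
1578e0cc-c3e3-11e9-a6db-0ba0cf826441_perf.json
...
evaluation_result.json
```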

**monitor log**: Written by the ScenarioMonitor to record the execution state and ground-truth poses for a test. It is named [uuid]_monitor.log, where uuid is the UUID of the Isaac application in the test. The log consists of a list of JSON objects, one written per tick of the codelet (the tick frequency defaults to 10 Hz). Each JSON object has the following fields:


"execution_state": a value of type enum State defined in ScenarioMonitor to summarize
the current state of the navigation state. 0 means normal execution, 1 means
successfully reached goal, negative values are different failure modes.

"localization_error": robot_gt_T_robot Pose2d type representing the localization error.

"poses": a list of names of poses (Pose2d) relative to a reference frame. Names if poses and
the reference frame are set in the PoseMonitor config.

"robot_gt_T_goal": ground-truth robot distance to goal (Pose2d).

"time_since_start": time (in second) since the ScenarioMonitor codelet began execution.

"collision": a list of collision events (if any) of CollisionProto type.


Of these fields, execution_state and time_since_start are always present; the other fields are optional and depend on their availability at tick time.
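To spot-check a monitor log, you can pretty-print its last entry. The example below assumes each tick is written as one JSON object per line (adjust accordingly if the log is stored as a single JSON array); the UUID is a placeholder:

```
~/deploy/bob/navigation_evaluation-pkg: tail -n 1 /tmp/navigation_evaluation/1578e0cc-c3e3-11e9-a6db-0ba0cf826441_monitor.log | python3 -m json.tool
```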

**performance report**: Generated by the engine scheduler when the Isaac application shuts down at the end of each test. It is named [uuid]_perf.json, where uuid is the UUID of the Isaac application, so it matches the monitor log of the corresponding test run.

**evaluation_result.json**: Written by the batch_execution script after all tests are finished. It includes a list of all the tests run and the result of each test. For each test, the following fields are populated:


"app": The name of the application json file

"test": Test parameters as specified in tests_manual.json

"result": A summary of the test exit status including:

"uuid": The uuid of the application. Use this to find the corresponding monitor log, perf
report, and minidump.

"state": The state of the monitored node when test stops. See definition alice::Status in
engine/alice/status.hpp

"exitcode": The exit code of the process in which the test runs. A non-zero exit code signals
a crash.
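
A quick way to inspect the summary after a run is to pretty-print it:

```
~/deploy/bob/navigation_evaluation-pkg: python3 -m json.tool /tmp/navigation_evaluation/evaluation_result.json
```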


5. Analyzing the results

Run the following:


```
~/deploy/bob/navigation_evaluation-pkg: python3 packages/nvidia_qa/navigation_evaluation/eval_report.py
```


This generates a JSON report, /tmp/navigation_evaluation/evaluation_report.json, and prints the statistics to the console. For test cases that fail, it also prints out the uuid of the test.

To re-run a failed test, run the application JSON saved in the log directory as [uuid]_app.json, as in the following example:


```
~/deploy/bob/navigation_evaluation-pkg: ./external/com_nvidia_isaac_engine/engine/alice/tools/main --app /tmp/navigation_evaluation/1578e0cc-c3e3-11e9-a6db-0ba0cf826441_app.json
```


To visualize the map and paths taken by the robot in all tests, run the following:


```
~/deploy/bob/navigation_evaluation-pkg: python3 packages/nvidia_qa/navigation_evaluation/plot_robot_paths.py
```


The image below is a sample output. Successful tests are plotted in green. Tests failing due to timeout are plotted in blue, and tests failing due to lost localization are plotted in red.
