# Conclusion
This session provides time for remaining questions, continued experimentation, and a conclusion for this learning path.
## Learning Path Summary

### What You Accomplished
- Learned why simulation matters and what the sim-to-real gap is
- Built and standardized the physical lightbox workspace to match the sim task
- Got hands-on time with the SO-101 robot and LeRobot tools
- Applied Strategy 1: Domain randomization with teleoperation
- Explored NVIDIA GR00T, vision-language-action models
- Evaluated policies in simulation and on the real robot (sim-to-real gap)
- Applied Strategy 2: Co-training with real data, deployed to robot
- Applied Strategy 3: Cosmos synthetic data augmentation
- Explored Strategy 4: SAGE + GapONet (actuator gap estimation)
### The Four Strategies We Covered
| Strategy | Approach | Key Benefit |
|---|---|---|
| 1. Domain Randomization | Vary simulation parameters | Robust to physics variations |
| 2. Co-training | Mix sim and real data | Better real-world distribution |
| 3. Cosmos Augmentation | Synthetic visual diversity | Robust to visual variations |
| 4. SAGE + GapONet | Measure and model the gap | Targeted actuation fixes |
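To make Strategy 1 concrete, domain randomization amounts to drawing a fresh set of simulation parameters for each training episode so the policy never overfits to one physics or lighting configuration. The sketch below is illustrative only; the parameter names and ranges are hypothetical, not the course's actual randomization config.

```python
import random

# Hypothetical randomization ranges (illustrative, not the course's config):
# each entry maps a simulation parameter to a (low, high) uniform range.
RANDOMIZATION_RANGES = {
    "friction_scale": (0.5, 1.5),     # multiplier on contact friction
    "object_mass_kg": (0.05, 0.25),   # mass of the manipulated object
    "light_intensity": (0.6, 1.4),    # multiplier on lightbox brightness
    "camera_jitter_m": (0.0, 0.01),   # random camera-pose offset
}

def sample_randomization(rng: random.Random) -> dict:
    """Draw one set of simulation parameters for a single episode."""
    return {
        name: rng.uniform(low, high)
        for name, (low, high) in RANDOMIZATION_RANGES.items()
    }

if __name__ == "__main__":
    rng = random.Random(0)  # seeded for reproducibility
    for episode in range(3):
        params = sample_randomization(rng)
        print(f"episode {episode}: {params}")
```

In a real pipeline, the sampled dictionary would be applied to the simulator (e.g. via Isaac Lab's event/randomization hooks) before each episode rollout, so the collected demonstrations span the whole parameter range.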
### Key Lessons
- The gap is real — simulation success doesn’t guarantee real-world success
- Multiple strategies combine — no single approach solves everything
- Measurement enables improvement — SAGE shows you where to focus
- Iteration is essential — systematic improvement beats one-shot attempts
- Documentation matters — recorded observations guide decisions
## Resources

### Courses

### Documentation

### Community

### Papers
## Conclusion
Congratulations on finishing the course “Train an SO-101 Robot From Sim-to-Real With NVIDIA Isaac.”
We hope it enables and inspires you to keep learning and practicing your skills in Physical AI!
## Feedback
Taking a few minutes to fill out our survey gives us valuable feedback for improving the course for future participants.
If you have any feedback or suggestions, or if you ran into issues, please visit this survey.