Conclusion#

In this module, we got our first taste of reinforcement learning in action by watching a cartpole learn to balance under a policy we trained. Watching robots learn new tasks can be incredibly satisfying. If you’re eager to go deeper into this world of RL and robotics, please continue on to our next module!

In the next module, we’ll configure a robot based on a real one, and train a policy ourselves step-by-step. Let’s keep going!

Go Further#

Here’s a list of resources to help your learning journey.

  • Keep going with the next module, Training Your Second Robot With Isaac Lab

    • In this module, we’ll start from a bare external project template and build up a reach task for a robotic arm.

  • Try the Walkthrough in the Isaac Lab repo

    • The module you just completed used the Manager-based workflow. To see an example of the Direct workflow, follow this guide!

  • Experiment with provided environments in the Isaac Lab repo

    • Run the provided training scripts and play back the trained policies to get familiar with what’s possible (see the command sketch after this list).

  • Learn more about OpenUSD

    • Understanding USD gives you strong fundamentals for working with Omniverse applications such as Isaac Sim.
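As a rough sketch of what training and playing back a provided environment looks like from the Isaac Lab repo root. Note that the exact script paths and task names below are assumptions that vary between Isaac Lab releases, so check your checkout for the train.py and play.py scripts before running:

# Train a provided environment headlessly using the RSL-RL workflow,
# then replay the trained policy with rendering enabled.
# Script locations differ between Isaac Lab versions; adjust the path to match your repo.
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/play.py --task Isaac-Cartpole-v0

Swapping the task name lets you explore other registered environments, and each workflow folder (rsl_rl, skrl, etc.) follows the same train/play pattern.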

Learn More About RL#

This module is meant to get you up to speed on Reinforcement Learning for robotics with Isaac Sim and Isaac Lab. To go deeper into Reinforcement Learning more broadly, here are additional resources:

Reinforcement Learning Courses#

Hugging Face - Deep RL Course
https://huggingface.co/learn/deep-rl-course

OpenAI - Spinning Up in Deep RL
https://spinningup.openai.com

LycheeAI Community#

Visit the LycheeAI hub, a community-driven platform for learning and developing with Isaac Sim and Isaac Lab.

Books#

Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
http://incompleteideas.net/book/the-book-2nd.html

YouTube#

Reinforcement Learning: Machine Learning Meets Control Theory - Playlist by Steve Brunton
The Physical Turing Test: Jim Fan on Nvidia’s Roadmap for Embodied AI