About NVIDIA NIM on WSL2#

NVIDIA NIM provides pre-packaged, state-of-the-art AI models that are optimized for deployment across NVIDIA GPUs. NIM is packaged as a container that provides self-hosted, accelerated inferencing microservices for pre-trained and customized AI models. NIM is built with pre-optimized inference engines from NVIDIA and the community, including NVIDIA TensorRT and TensorRT-LLM.

NIM exposes industry-standard APIs to simplify development. NIM offers developers a unified development and deployment experience across NVIDIA platforms, including cloud, data center, and RTX AI PCs and workstations.
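
For example, many LLM NIM microservices expose an OpenAI-compatible endpoint, so a locally running NIM can be queried with standard client libraries. The sketch below assumes a chat-style NIM listening on local port 8000; the base URL, port, and model name are illustrative assumptions, not values from this page.

```python
# Minimal sketch: query a locally running NIM through its OpenAI-compatible API.
# Assumes the NIM container is already running and serving on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically ignore the key
)

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # placeholder model name; use your NIM's model ID
    messages=[{"role": "user", "content": "Hello from WSL2!"}],
)
print(completion.choices[0].message.content)
```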

NIM microservices are available on RTX AI PCs using Windows Subsystem for Linux 2 (WSL2).