Jetson Software Architecture#

NVIDIA Jetson software is the most advanced AI software stack yet, purpose-built for the next era of edge computing, where physical AI, generative models, and real-time intelligence converge. At the highest level, Jetson software is optimized for humanoid robotics and other machines that interact dynamically with the physical world. It is fully ready for generative AI, enabling developers to deploy large language models (LLMs), diffusion models, and vision-language models (VLMs) directly at the edge. Jetson software supports the entire developer journey, from rapid prototyping to robust production deployment, ensuring a seamless path from innovation to market.

At the platform level, NVIDIA JetPack provides support for any generative AI model. JetPack is tuned to meet the latency and determinism requirements of real-time applications such as robotics, medical imaging, and industrial automation. It leverages the full power of the NVIDIA AI stack—from the cloud to the edge—with microservices and support for agentic workflows that simplify integration of perception, planning, and control in intelligent systems.

At its core, Jetson software includes the latest JetPack SDK. JetPack is built on a modern foundation with a redesigned compute stack featuring Linux kernel 6.8 and Jetson Linux, which is derived from Ubuntu 24.04 LTS. It introduces several critical technical advancements: the Holoscan Sensor Bridge enables flexible, high-throughput sensor data integration; Multi-Instance GPU (MIG) support brings resource partitioning to Jetson, allowing concurrent, isolated workloads; and a PREEMPT_RT kernel unlocks true deterministic performance for mission-critical, real-time use cases. JetPack also includes support for Jetson Platform Services (JPS) and is architected for the next-generation platform, Jetson Thor.
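Whether a given target is running the PREEMPT_RT kernel matters for real-time workloads, since only the RT kernel provides the deterministic scheduling described above. As a minimal sketch (not an official JetPack utility), RT kernels conventionally carry an `-rt<N>` tag in their release string, which can be checked from the value returned by `uname -r` (or `platform.release()` in Python); the sample release strings below are illustrative, not actual JetPack kernel names:

```python
import re

def is_preempt_rt(kernel_release: str) -> bool:
    """Heuristic check: PREEMPT_RT kernels conventionally carry an
    "-rt<N>" tag in their release string (e.g. "6.8.0-rt8-tegra")."""
    return re.search(r"-rt\d*\b", kernel_release) is not None

# On a live target you would pass platform.release() instead of these
# illustrative strings.
print(is_preempt_rt("6.8.0-rt8-tegra"))  # True
print(is_preempt_rt("6.8.0-tegra"))      # False
```

On a running PREEMPT_RT system you can also look for the `/sys/kernel/realtime` node, which the RT patch set exposes.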

JetPack sets a new standard for edge AI, combining support for generative AI, real-time performance, and a rich ecosystem to accelerate development. It’s the fastest path to building intelligent, responsive systems across humanoids, robotics, healthcare, and industrial automation.

The following diagram illustrates the full Jetson software stack. Built on the Jetson GPU, the stack includes the JetPack layer and reference frameworks, with developer applications sitting at the top. This layered architecture enables developers to easily build applications using the components provided. The Jetson software stack also offers seamless integration with AI training systems and Omniverse simulation platforms.

Block diagram of the Jetson software stack

The following diagram illustrates a more detailed structure of the Jetson Linux stack. The stack provides a board support package (BSP) and hosts multiple modules from NVIDIA, community, and third-party libraries. NVIDIA also provides an expansive set of host-side tools.

Block diagram of the Jetson Linux stack

At the core of the stack lies the BSP, which includes the kernel, bootloader, sample root filesystem, toolchain, and sources, enabling full customization and low-level development. Atop the BSP sit critical software blocks across domains such as graphics (like Vulkan, OpenGL, and Wayland), multimedia (like NVCodec and FFmpeg), display (supporting HDMI, DP, and direct rendering), camera frameworks (like V4L2 and Argus), security features (like secure boot, TPM, and encryption), and power management (like SC7, rail gating, and dynamic scaling). These modules ensure robust support for high-performance, power-efficient embedded AI applications.
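The camera and multimedia blocks above speak V4L2, which identifies pixel formats by fourcc codes: four ASCII characters packed into a little-endian 32-bit value. The sketch below mirrors the kernel's `v4l2_fourcc()` macro in Python purely for illustration, using the common YUYV format as an example:

```python
def v4l2_fourcc(a: str, b: str, c: str, d: str) -> int:
    """Pack four characters into a little-endian 32-bit pixel-format
    code, mirroring the kernel's v4l2_fourcc() macro."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

def fourcc_to_str(code: int) -> str:
    """Unpack a fourcc code back into its four-character name."""
    return "".join(chr((code >> shift) & 0xFF) for shift in (0, 8, 16, 24))

yuyv = v4l2_fourcc("Y", "U", "Y", "V")
print(hex(yuyv))            # 0x56595559
print(fourcc_to_str(yuyv))  # YUYV
```

These codes are what V4L2 applications pass when negotiating capture formats with a camera driver.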

The entire stack is complemented by a suite of host tools for cross-compilation, diagnostics, and device flashing, providing a complete, end-to-end development environment for edge AI solutions.
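To make the cross-compilation workflow concrete, the sketch below assembles (but does not run) a command line for building an AArch64 binary on an x86 Ubuntu host. The `aarch64-linux-gnu-` toolchain prefix and the sysroot path are assumptions for illustration; the exact prefix and paths shipped with the Jetson Linux toolchain may differ, so check the BSP toolchain documentation:

```python
# Hypothetical toolchain prefix and sysroot path; the toolchain bundled
# with the Jetson Linux BSP may use different names.
CROSS_PREFIX = "aarch64-linux-gnu-"

def cross_compile_cmd(source: str, output: str, sysroot: str) -> list:
    """Build (but do not execute) a cross-compilation command line for
    an AArch64 Jetson target from an x86 Ubuntu host."""
    return [
        CROSS_PREFIX + "gcc",
        "--sysroot=" + sysroot,  # target headers and libraries
        "-O2",
        "-o", output,
        source,
    ]

print(cross_compile_cmd("hello.c", "hello", "/opt/jetson/sysroot"))
```

The resulting list can be handed to `subprocess.run` once the toolchain is installed on the host.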

Above the Jetson Linux stack lies the NVIDIA AI Compute Stack. The stack begins with the CUDA layer, which comprises the latest CUDA-X libraries, including CUDA, cuDNN, and TensorRT. The Frameworks layer above the CUDA layer supports all popular AI frameworks, spanning both NVIDIA-supported and community-supported projects. Frameworks officially supported by NVIDIA, like NVIDIA TensorRT, PyTorch, vLLM, and SGLang, are provided with regularly updated wheels and containers through the NVIDIA GPU Cloud (NGC). Additionally, the stack offers community-driven support for popular projects such as llama.cpp, MLC, JAX, and Hugging Face Transformers, with NVIDIA releasing the latest containers for these frameworks on Jetson through Jetson AI Lab.
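Because the framework wheels and containers above are delivered separately, a quick way to see which framework Python packages a given environment actually provides is to probe for their import specs. This is a generic stdlib sketch, not a JetPack tool; the package names listed are the usual import names and are assumptions about what a particular container ships:

```python
import importlib.util

def available_frameworks(candidates):
    """Report which framework packages are importable in the current
    environment (e.g. inside an NGC or Jetson AI Lab container)."""
    return {name: importlib.util.find_spec(name) is not None
            for name in candidates}

# Run inside the container you pulled; results depend on what it ships.
print(available_frameworks(["torch", "tensorrt", "vllm", "transformers"]))
```

A result of `False` simply means the package is not importable in the current environment, which on Jetson usually means pulling the matching container or wheel from NGC or Jetson AI Lab.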

Jetson Thor can seamlessly run state-of-the-art architectures across LLMs, VLMs, and vision-language-action models (VLAs). All popular models, including DeepSeek, Llama, Qwen, Gemma, Mistral, Phi, and Physical Intelligence, are accelerated on Jetson Thor.

Block diagram of the NVIDIA AI Compute Stack

Documentation#

This Developer Guide is the primary technical reference for Jetson Linux. It provides a structured overview of the Jetson Linux architecture and detailed documentation on its features and components. The guide includes instructions for kernel customization, driver integration, peripheral interfacing, and system-level configuration. It also covers topics such as boot flow, power management, security, multimedia, and camera subsystems. Designed to support development across a range of Jetson platforms, this guide is intended for developers who are building and deploying AI-powered embedded systems at the edge.

In addition to this guide, the NVIDIA Jetson ecosystem offers a wide range of documentation and resources to support development, deployment, and scaling of applications. You can find additional reference materials and tools at the following locations:

AI Components#

  • CUDA Toolkit: The NVIDIA® CUDA® Toolkit provides a powerful development environment for creating GPU-accelerated applications, including a compiler, math libraries, and debugging tools.

  • cuDNN: The CUDA Deep Neural Network library offers high-performance primitives for deep learning, with optimized implementations for convolution, pooling, normalization, and activation layers.

  • TensorRT: NVIDIA TensorRT is a high-performance inference runtime that optimizes and accelerates deep learning models, delivering low latency and high throughput across major frameworks.

AI Frameworks#

  • PyTorch: PyTorch is a fast, flexible deep learning framework with NGC containers for easy deployment across AI tasks like NLP, computer vision, and recommendation systems.

  • TensorFlow: TensorFlow is an open-source machine learning platform offering comprehensive tools and libraries for flexible deployment across diverse platforms and AI applications.

  • JAX: JAX is a framework for high-performance numerical computing and machine learning research. It combines NumPy-like APIs, automatic differentiation, XLA acceleration, and simple primitives for scaling across GPUs.

  • Triton Inference Server: NVIDIA Triton Inference Server™ enables seamless AI deployment across cloud and edge environments, ensuring consistency and performance optimization.

Jetson Linux Components and Libraries#

  • Flashing: Jetson devices can be flashed with Jetson Linux through multiple methods, from command-line tools to automated scripts, with NVIDIA SDK Manager offering the most user-friendly option.

  • Security: Jetson Linux delivers a comprehensive suite of security features spanning edge to cloud, including secure boot, disk encryption, runtime integrity, fTPM, and secure OTA updates.

  • OTA: Over-the-Air (OTA) updates on Jetson enable seamless, remote delivery of software and security upgrades, keeping devices up-to-date without manual intervention.

  • Graphics Libraries: Jetson supports various graphics APIs, including OpenGL, Vulkan, and EGL, enabling GPU-accelerated rendering and compute for advanced 3D graphics and UI rendering.

  • Multimedia APIs: Jetson Linux Multimedia APIs provide low-level access to camera and video processing hardware. This lets you create high-performance applications with fine-grained control over multimedia pipelines.

  • Computer Vision Libraries: JetPack includes optimized computer vision libraries like OpenCV and VisionWorks that accelerate image processing and vision tasks on Jetson platforms using GPUs and dedicated hardware.

Other JetPack Components#

  • Nsight Developer Tools: NVIDIA Nsight™ developer tools provide powerful profiling, debugging, and performance analysis for optimizing GPU-accelerated applications across AI, graphics, and compute workloads.

  • Cross-compiler and diagnostic utilities: Jetson Linux BSP includes a toolchain for compiling applications on an Ubuntu host system, and a suite of development tools for debugging and optimizing them.

Supported SDKs#

  • NVIDIA DeepStream SDK: The NVIDIA DeepStream SDK gives you a powerful toolkit for building AI-powered vision applications, enabling real-time video analytics with accelerated inference and object tracking.

  • NVIDIA Isaac ROS: NVIDIA Isaac ROS is a collection of hardware-accelerated ROS 2 packages for NVIDIA Jetson. It’s ideal for high-performance perception, localization, and AI in robotics applications.

  • NVIDIA Holoscan SDK: NVIDIA Holoscan SDK is a streaming AI framework for building real-time sensor-processing applications at the edge. It enables high-performance pipelines for healthcare, robotics, and industrial use cases.

Community Support#

  • Jetson AI Lab: Jetson AI Lab is an interactive platform for learning and experimenting with AI on NVIDIA Jetson, offering hands-on projects, tutorials, and tools tailored for edge AI development.

  • Developer Forums: NVIDIA Developer Forums are a community hub for developers to ask questions, share knowledge, and get support on NVIDIA technologies, platforms, and SDKs.