Overview of NeMo Microservices#

NVIDIA NeMo microservices are a modular set of tools that you can use to customize, evaluate, and secure large language models (LLMs) while optimizing AI applications across on-premises or cloud-based Kubernetes clusters.

Core Microservices#

  • NVIDIA NeMo Customizer: Facilitates fine-tuning of large language models (LLMs) using supervised and parameter-efficient fine-tuning techniques; an API sketch follows this list.

  • NVIDIA NeMo Evaluator: Provides comprehensive evaluation capabilities for LLMs, supporting academic benchmarks, custom automated evaluations, and LLM-as-a-Judge approaches.

  • NVIDIA NeMo Guardrails: Adds safety checks and content moderation to LLM endpoints, protecting against hallucinations, harmful content, and security vulnerabilities.
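
The Customizer and Evaluator items above are driven through HTTP APIs. The following is a minimal sketch of submitting a fine-tuning job and an evaluation job from Python; the base URL, endpoint paths, and payload fields are illustrative assumptions, so check each microservice's API reference for the exact schema.

```python
import requests

# Hypothetical base URL of a NeMo microservices deployment; adjust for your cluster.
NEMO_BASE_URL = "http://nemo.example.com"

# Sketch: submit a parameter-efficient fine-tuning job to NeMo Customizer.
# The path and payload fields are assumptions for illustration only.
customization_job = requests.post(
    f"{NEMO_BASE_URL}/v1/customization/jobs",
    json={
        "config": "meta/llama-3.1-8b-instruct",                     # base model config (assumed name)
        "dataset": {"name": "my-dataset", "namespace": "default"},  # dataset registered with the platform
        "hyperparameters": {"finetuning_type": "lora", "epochs": 3},
    },
    timeout=30,
).json()
print("Customization job:", customization_job.get("id"))

# Sketch: submit an evaluation job to NeMo Evaluator for the resulting model.
evaluation_job = requests.post(
    f"{NEMO_BASE_URL}/v1/evaluation/jobs",
    json={
        "target": {"type": "model", "model": "default/my-custom-model"},  # assumed identifier
        "config": "default/my-eval-config",                               # assumed evaluation config
    },
    timeout=30,
).json()
print("Evaluation job:", evaluation_job.get("id"))
```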

Platform Component Microservices#

  • NVIDIA NeMo Data Store: Serves as the default file storage solution for the NeMo microservices platform, exposing APIs compatible with the Hugging Face Hub client (HfApi); a usage sketch follows this list.

  • NVIDIA NeMo Entity Store: Provides tools to manage and organize general entities such as namespaces, projects, datasets, and models.

  • NVIDIA NeMo Deployment Management: Provides an API to deploy NIM for LLMs on a Kubernetes cluster and manage them through the NIM Operator microservice.

  • NVIDIA NeMo NIM Proxy: Provides a unified endpoint that you can use to access all deployed NIM for LLMs for inference tasks.

  • NVIDIA NeMo Operator: Manages custom resource definitions (CRDs) for NeMo Customizer fine-tuning jobs.

  • NVIDIA DGX Cloud Admission Controller: Enables multi-node training for NeMo Customizer jobs through a mutating admission webhook.
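
Because NeMo Data Store mirrors the Hugging Face Hub API and NIM Proxy presents a single inference endpoint, both can be exercised with familiar clients. The sketch below uses the `huggingface_hub` and `openai` Python packages; the service URLs, the `/v1/hf` and `/v1` paths, the token, and the model and repository names are assumptions to replace with values from your own deployment.

```python
from huggingface_hub import HfApi
from openai import OpenAI

# Hypothetical service URLs; replace with the endpoints of your own deployment.
DATA_STORE_URL = "http://data-store.example.com/v1/hf"  # assumed HfApi-compatible path
NIM_PROXY_URL = "http://nim-proxy.example.com/v1"       # assumed OpenAI-compatible path

# NeMo Data Store exposes Hugging Face Hub-compatible APIs, so the standard
# HfApi client can create repositories and upload files against it.
hf_api = HfApi(endpoint=DATA_STORE_URL, token="placeholder-token")
hf_api.create_repo(repo_id="default/my-dataset", repo_type="dataset", exist_ok=True)
hf_api.upload_file(
    path_or_fileobj="training.jsonl",        # local file to upload
    path_in_repo="training/training.jsonl",
    repo_id="default/my-dataset",
    repo_type="dataset",
)

# NIM Proxy routes inference requests to whichever NIM for LLMs is deployed in
# the cluster; treating it as an OpenAI-compatible endpoint is an assumption here.
client = OpenAI(base_url=NIM_PROXY_URL, api_key="not-used")
completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # name of a deployed NIM (assumed)
    messages=[{"role": "user", "content": "Summarize what a data flywheel is."}],
)
print(completion.choices[0].message.content)
```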


Target Users#

This documentation serves two types of users:

Tip

For a map of microservice features paired to AI model development workflow stages, refer to Key Features.


High-level Data Flywheel Architecture Diagram with NeMo Microservices#

A data flywheel represents the lifecycle of models and data in a machine learning workflow. The process cycles through data ingestion, model training, evaluation, and deployment, with deployed models generating new data that feeds the next cycle.

The following diagram illustrates how the NeMo microservices fit together to form a complete data flywheel.

Architecture diagram of NeMo microservices deployment forming a complete data flywheel.
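
To make the loop in the diagram concrete, the sketch below expresses one flywheel iteration as a plain Python function. The callables it accepts are hypothetical placeholders for calls to the Data Store, Customizer, Evaluator, and Deployment Management APIs, and the accuracy-based promotion rule is only an illustrative policy.

```python
from typing import Callable, Dict

def run_flywheel_iteration(
    ingest: Callable[[], str],                    # Data Store: collect and curate new data
    fine_tune: Callable[[str, str], str],         # Customizer: train a candidate model
    evaluate: Callable[[str], Dict[str, float]],  # Evaluator: score the candidate
    deploy: Callable[[str], None],                # Deployment Management: roll out via NIM
    base_model: str,
) -> str:
    """One conceptual turn of the data flywheel; the callables are hypothetical
    placeholders for the corresponding microservice API calls."""
    dataset = ingest()
    candidate = fine_tune(base_model, dataset)
    scores = evaluate(candidate)
    # Illustrative promotion rule: only deploy the candidate if it beats the baseline.
    if scores.get("accuracy", 0.0) > scores.get("baseline_accuracy", 0.0):
        deploy(candidate)
        return candidate
    return base_model  # keep serving the current model; its traffic feeds the next cycle
```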


Concepts#

Explore the foundational concepts and terminology used across the NeMo microservices platform.

  • Platform: Start here to learn about the concepts that make up the NeMo microservices platform. Refer to Platform Concepts.

  • Entities: Learn about the core entities you can use in your AI workflows. Refer to Entity Concepts.

  • Customization: Learn about the fine-tuning concepts you’ll need to be familiar with to customize base models. Refer to Customization Concepts.

  • Evaluation: Learn about the concepts you’ll need to be familiar with to evaluate your AI workflows. Refer to Evaluation Concepts.

  • Inference: Learn about the concepts you’ll need to be familiar with to use the Inference service for testing and serving your custom models. Refer to Inference Concepts.

  • Guardrails: Learn about the concepts you’ll need to be familiar with to control the interaction with your AI workflows. Refer to Guardrail Concepts.