About Getting Started with NeMo Microservices

To get started with the NeMo microservices, use the following sections to find the tutorials relevant to your role and needs.

Platform End-to-End Tutorials

If you want to set up and explore the NeMo microservices as a platform, use the following materials.

Demo Cluster Setup on Minikube

Start by deploying the NeMo microservices as a platform on a minikube cluster.

Install NeMo Microservices Python SDK

Install the NeMo Microservices Python SDK if you want to build AI applications in Python instead of using the REST API.

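
If you choose the REST API rather than the SDK, requests are plain JSON over HTTP. The sketch below uses only the Python standard library to prepare (without sending) such a request; the base URL and the `/v1/namespaces` path are illustrative assumptions for this sketch, not the documented API surface.

```python
# Sketch: preparing a JSON POST to a NeMo platform endpoint without the SDK.
# The base URL and the /v1/namespaces path are illustrative assumptions;
# consult the platform API reference for real endpoints.
import json
import urllib.request


def build_request(base_url: str, path: str, payload: dict) -> urllib.request.Request:
    """Prepare (but do not send) a JSON POST request to a platform endpoint."""
    url = f"{base_url.rstrip('/')}/{path.lstrip('/')}"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_request("http://nemo.test", "/v1/namespaces", {"id": "demo-namespace"})
print(req.full_url)      # http://nemo.test/v1/namespaces
print(req.get_method())  # POST
```

The SDK wraps this kind of plumbing (URL construction, serialization, error handling) behind typed Python methods, which is the main reason to prefer it when building applications in Python.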
Beginner Tutorials

Learn how to use the capabilities of the NeMo microservices as an end-to-end platform to customize large language models (LLMs), add safety checks to them, and evaluate them.

Beginner Platform Tutorials

Jupyter Notebooks

Get started with the Jupyter notebooks that demonstrate the end-to-end capabilities of the NeMo microservices.

Advanced Installation on Kubernetes

After you have completed the beginner tutorials on minikube, learn how to install the NeMo microservices on a Kubernetes cluster using Helm.

Install NeMo Microservices as a Platform

Microservice-Level Tutorials

If you want to explore the capabilities of individual NeMo microservices, use the following microservice-level tutorials.

Manage Entities

Manage entities and data for your AI applications in the NeMo microservices platform.

Entity Management Tutorials

Generate Synthetic Data (Beta)

Generate synthetic data to train large language models.

NeMo Data Designer Tutorials

Fine-tune Models

Customize and fine-tune large language models to meet your specific use cases.

Fine-Tuning Tutorials

Evaluate Models

Evaluate and benchmark your AI models to ensure they meet quality and performance standards.

Evaluation Tutorials
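
To give a flavor of what evaluation involves, the sketch below computes one common metric, exact-match accuracy, over a batch of model outputs. This is a generic illustration only, not the NeMo Evaluator API; the tutorials cover the real workflow.

```python
# Generic illustration of a common evaluation metric: exact-match accuracy.
# Not the NeMo Evaluator API; the evaluation tutorials show the real workflow.
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match the reference (case-insensitive)."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references) if references else 0.0


score = exact_match_accuracy(["Paris", "42", "blue"], ["paris", "41", "blue"])
print(score)  # 0.6666666666666666
```
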

Audit Model Safety (Beta)

Audit the safety of your models.

NeMo Auditor Tutorials

Guardrails

Add safety checks to your LLM applications for responsible AI.

Guardrail Tutorials

Run Inference

Deploy LLM NIM microservices and run inference on them.

About Deploying and Running Inference on NIM

Deploying the NeMo Microservices Platform to a Production-Grade Kubernetes Cluster

If you are a cluster administrator, have completed the Demo Cluster Setup on Minikube guide, and want to deploy the NeMo microservices for production, proceed to the Admin Setup section.