About Getting Started with NeMo Microservices#
To get started with the NeMo microservices, use the following sections to find the tutorials relevant to your role and needs.
Platform End-to-End Tutorials#
If you want to set up and explore the NeMo microservices as a platform, use the following materials.
Start by deploying the NeMo microservices as a platform on a minikube cluster.
Install the NeMo Microservices Python SDK if you want to build AI applications in Python instead of using the REST API.
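For example, the SDK can typically be installed with `pip`; the package name shown here is an assumption, so check the official install instructions if it differs:

```shell
# Install the NeMo Microservices Python SDK from PyPI.
# The package name "nemo-microservices" is an assumption --
# verify it against the official SDK installation guide.
pip install nemo-microservices
```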
Learn how to use the capabilities of the NeMo microservices as an end-to-end platform to customize large language models (LLMs), add safety checks to them, and evaluate them.
Get started with the Jupyter notebooks that demonstrate the end-to-end capabilities of the NeMo microservices.
After you have completed the beginner tutorials on minikube, learn how to install the NeMo microservices on a Kubernetes cluster using Helm.
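A Helm-based install usually follows the familiar add-repo, update, install pattern. The repository URL, chart name, and values file below are placeholders, not the official commands; follow the installation guide for the exact chart coordinates and prerequisites (such as an NGC API key):

```shell
# Sketch of a Helm-based platform install (names are assumptions).
helm repo add nemo-example https://helm.ngc.nvidia.com/example   # placeholder repo URL
helm repo update

# Install into a dedicated namespace, overriding defaults with
# a values file tailored to your cluster.
helm install nemo nemo-example/nemo-microservices \
  --namespace nemo --create-namespace \
  -f custom-values.yaml
```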
Microservice-Level Tutorials#
If you want to explore the capabilities of individual NeMo microservices, use the following microservice-level tutorials.
Manage entities and data for your AI applications in the NeMo microservices platform.
Generate synthetic data to train large language models.
Customize and fine-tune large language models to meet your specific use cases.
Evaluate and benchmark your AI models to ensure they meet quality and performance standards.
Audit the safety of your models.
Add safety checks for responsible AI.
Deploy LLM NIM microservices and run inference on them.
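Once a NIM microservice is deployed, it exposes an OpenAI-compatible HTTP API, so inference can be exercised with a plain `curl` call. The host, port, and model name below are placeholders for your deployment:

```shell
# Query a deployed LLM NIM through its OpenAI-compatible
# chat completions endpoint. Replace host, port, and model
# with the values from your deployment.
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64
      }'
```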
Deploying the NeMo Microservices Platform to a Production-Grade Kubernetes Cluster#
If you are a cluster administrator, have completed the Demo Cluster Setup on Minikube guide, and want to deploy the NeMo microservices for production, proceed to the Admin Setup section.