---
layout: overview
slug: nemo-curator/nemo_curator/models/vllm_model
title: nemo_curator.models.vllm_model
---
## Module Contents
### Classes
| Name | Description |
| ------------------------------------------------------------------ | -------------------------------------------------------- |
| [`LLM`](#nemo_curator-models-vllm_model-LLM) | - |
| [`SamplingParams`](#nemo_curator-models-vllm_model-SamplingParams) | - |
| [`VLLMModel`](#nemo_curator-models-vllm_model-VLLMModel) | Generic vLLM language model wrapper for text generation. |
### Data
[`VLLM_AVAILABLE`](#nemo_curator-models-vllm_model-VLLM_AVAILABLE)
### API
```python
class nemo_curator.models.vllm_model.LLM()
```
```python
class nemo_curator.models.vllm_model.SamplingParams()
```
```python
class nemo_curator.models.vllm_model.VLLMModel(
model: str,
max_model_len: int | None = None,
tensor_parallel_size: int | None = None,
max_num_batched_tokens: int = 4096,
temperature: float = 0.7,
top_p: float = 0.8,
top_k: int = 20,
min_p: float = 0.0,
max_tokens: int | None = None,
cache_dir: str | None = None
)
```
**Bases:** [ModelInterface](/nemo-curator/nemo_curator/models/base#nemo_curator-models-base-ModelInterface)
Generic vLLM language model wrapper for text generation.
Return the model identifier.
```python
nemo_curator.models.vllm_model.VLLMModel.generate(
prompts: list[str]
) -> list[str]
```
Generate text from prompts.
**Parameters:**
* `prompts`: List of prompt strings or list of message dicts (for chat template).

**Returns:** `list[str]`
List of generated text strings.
**Raises:**
* `RuntimeError`: If the model is not set up or generation fails.
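A minimal usage sketch, assuming a local install of `nemo_curator` with vLLM and a GPU available; the checkpoint name is only an illustrative Hugging Face model id, not one mandated by this API. Note that `setup()` must be called before `generate()`, which otherwise raises `RuntimeError`:

```python
texts: list[str] = []
try:
    from nemo_curator.models.vllm_model import VLLMModel

    # Hypothetical checkpoint; any HF model id supported by vLLM should work.
    model = VLLMModel(model="Qwen/Qwen2.5-0.5B-Instruct", max_tokens=64)
    model.setup()  # load weights and build sampling parameters first
    texts = model.generate(["Write a haiku about GPUs."])
except Exception:
    # nemo_curator / vllm may be missing, or no GPU is present in this
    # environment; in that case we simply leave `texts` empty.
    pass
```

Each returned string corresponds positionally to the prompt at the same index.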
```python
nemo_curator.models.vllm_model.VLLMModel.get_tokenizer() -> typing.Any
```
Get the tokenizer from the LLM instance.
```python
nemo_curator.models.vllm_model.VLLMModel.setup() -> None
```
Set up the vLLM model and sampling parameters.
```python
nemo_curator.models.vllm_model.VLLM_AVAILABLE = True
```
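The flag above follows the common optional-dependency pattern, which can be sketched as below; the use of `importlib.util.find_spec` here is an illustration of the pattern, not necessarily how the module implements its check:

```python
import importlib.util

# True only when the `vllm` package is importable; callers can branch on
# this flag instead of wrapping every vLLM import in try/except.
VLLM_AVAILABLE = importlib.util.find_spec("vllm") is not None
```

Checking this flag before constructing `VLLMModel` lets pipelines degrade gracefully on machines where vLLM is not installed.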