nemo_deploy.multimodal.query_multimodal#

Module Contents#

Classes#

NemoQueryMultimodal

Sends a query to Triton for Multimodal inference.

API#

class nemo_deploy.multimodal.query_multimodal.NemoQueryMultimodal(url, model_name, model_type)[source]#

Sends a query to Triton for Multimodal inference.

Example

from nemo_deploy.multimodal import NemoQueryMultimodal

nq = NemoQueryMultimodal(url="localhost", model_name="neva", model_type="neva")

input_text = "Hi! What is in this image?"
output = nq.query(
    input_text=input_text,
    input_media="/path/to/image.jpg",
    max_output_len=30,
    top_k=1,
    top_p=0.0,
    temperature=1.0,
)
print("output: ", output)

Initialization

setup_media(input_media)[source]#

Set up the input media (image or video) for the query.

frame_len(frames)[source]#

Get the frame length of the input media.

get_subsampled_frames(frames, subsample_len)[source]#

Get subsampled frames.
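The reference does not show the body of get_subsampled_frames. A minimal sketch of one common approach, uniform subsampling, is given below; the helper name uniform_subsample and the use of NumPy are assumptions for illustration, and the actual NemoQueryMultimodal implementation may differ.

import numpy as np

def uniform_subsample(frames, subsample_len):
    # Illustrative only: pick `subsample_len` evenly spaced frames
    # from a list-like sequence of frames.
    if subsample_len >= len(frames):
        return frames
    indices = np.round(np.linspace(0, len(frames) - 1, subsample_len)).astype(int)
    return [frames[i] for i in indices]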

query(
input_text,
input_media,
batch_size=1,
max_output_len=30,
top_k=1,
top_p=0.0,
temperature=1.0,
repetition_penalty=1.0,
num_beams=1,
init_timeout=60.0,
lora_uids=None,
)[source]#

Run the query against the Triton server and return the inference output.
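Building on the class-level example above, the sketch below shows a query call that passes every parameter from the signature explicitly. The media path is a placeholder, and the argument values simply repeat the defaults shown in the signature.

from nemo_deploy.multimodal import NemoQueryMultimodal

nq = NemoQueryMultimodal(url="localhost", model_name="neva", model_type="neva")

# All keyword arguments mirror the query() signature above;
# the media path is an illustrative placeholder.
output = nq.query(
    input_text="What is in this image?",
    input_media="/path/to/image.jpg",
    batch_size=1,
    max_output_len=30,
    top_k=1,
    top_p=0.0,
    temperature=1.0,
    repetition_penalty=1.0,
    num_beams=1,
    init_timeout=60.0,
    lora_uids=None,
)
print("output: ", output)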