Natural Language Processing¶
Abstract: This NVIDIA Jarvis Natural Language Processing (NLP) 0.2 Early Access (EA) User Guide provides step-by-step instructions for training and deploying your model, as well as for using the NLP service with Jarvis. Jarvis NLP is a flexible sequence classification and sequence labeling application: it takes text as input and runs a number of analysis algorithms on it. NLP is built on common text processing models that can be adapted to a variety of common NLP tasks.
Introduction¶
Significant advances have been made in the NLP field over the past year, and most of them share one common thread: dramatically larger models trained on more data. Before BERT and the models that have since been released, NLP models commonly consisted of word embeddings followed by a simple recurrent neural network. BERT-large, for example, has 340 million parameters, and GPT-2 has 1.5 billion. Models of this size make inference on a CPU impractical today, necessitating a scalable inference framework for NLP tasks on a GPU.
Jarvis Natural Language Processing (NLP) is a flexible sequence classification and sequence labeling toolkit. It takes text as input and performs a number of analysis algorithms, such as named entity recognition, intent classification, punctuation, and translation. Jarvis NLP is built on common text processing models that can be adapted for multiple common NLP tasks.
Ultimately, NLP enables the fast deployment of new task-specific NLP models without requiring additional development time for deployment.
Benefits Of Jarvis NLP¶
The Triton Inference Server implementation of the natural language processing pipeline based on the BERT model provides the following benefits:
Ease of use
Triton Inference Server provides a simple tensor-in, tensor-out API. Using this API is as simple as submitting the text to be analyzed and receiving output formatted appropriately for the downstream task. Jarvis NLP currently supports sequence classification and sequence labeling use cases.
Fast
Because Jarvis is an NVIDIA product that leverages GPUs, the Triton Inference Server sequence classification and sequence labeling pipeline achieves state-of-the-art performance. This implementation is a reference for users who want to efficiently implement sequence classification and sequence labeling pipelines on GPUs.
Modular
Even though this implementation of natural language processing uses the BERT neural network, it is modular, so you can easily replace one or more components of the pipeline. Specifically, task-specific neural networks can be deployed with few changes to the pre- and post-processing backends; updating labels and tokenization is often not required.
API¶
Jarvis NLP Services expose two different APIs: a high-level API (JarvisNLP) and a low-level API (JarvisCoreNLP).
The high-level API exposes task-specific functions for popular NLP tasks, including intent recognition (as well as slot filling) and entity extraction.
The low-level API, on the other hand, provides generic NLP services for custom model use cases. The intent of this service is to allow users to design models for arbitrary use cases that simply conform to the input and output types specified by the service. For example, the ClassifyText function could be used for sentiment classification, domain recognition, language identification, and so on.
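As an illustration of how a low-level call might look from Python, the sketch below invokes ClassifyText against a deployed sequence classifier. The request message name (TextClassRequest) and its fields are assumptions made by analogy with the TextTransformRequest used later in this guide; check the shipped protobuf files for the exact types before relying on them.

import grpc
import jarvis_api.jarvis_nlp_core_pb2 as jcnlp
import jarvis_api.jarvis_nlp_core_pb2_grpc as jcnlp_srv

# Connect to the Jarvis API server (default port used elsewhere in this guide).
channel = grpc.insecure_channel("localhost:50051")
jarvis_cnlp = jcnlp_srv.JarvisCoreNLPStub(channel)

# Assumption: the request message is named TextClassRequest and exposes
# model.model_name and a repeated text field, like TextTransformRequest.
req = jcnlp.TextClassRequest()
req.model.model_name = "jarvis_seqclass_domain"   # any deployed sequence classifier
req.text.append("will it rain in santa clara tomorrow")

resp = jarvis_cnlp.ClassifyText(req)
print(resp)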
BERT¶
By pretraining a model like BERT in an unsupervised fashion, NLP practitioners are able to create application-specific models by simply adding a different “head” (or output layer) to the model and fine-tune the augmented model with in-domain data for the desired task. The Jarvis NLP project aims to enable the deployment of models trained in this fashion.
While not the only model architecture Jarvis NLP will support, it is expected that many of the NLP models will be BERT-based. Google’s BERT (Bidirectional Encoder Representations from Transformers) is, as the name implies, a Transformer-based language model. Once pre-trained, adding a single layer as needed for the downstream task allows the model to be fine-tuned to achieve state-of-the-art results (at the time) across a wide variety of disparate NLP tasks. While new models have built on BERT’s success, its relative simplicity, parameter count, and good task-specific performance make it a compelling choice for a latency-sensitive NLP deployment. Most fine-tuning tasks can run in a few hours on a single GPU. For more information about BERT, refer to the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper.
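To make the "head" idea concrete, the sketch below adds a single linear classification layer on top of a generic BERT-style encoder. This is a conceptual PyTorch illustration only, not the NeMo training code described later; the encoder interface (token embeddings of shape [N, S, hidden_size]) is an assumption.

import torch
import torch.nn as nn

class SequenceClassificationHead(nn.Module):
    """A task-specific 'head': one linear layer on top of a pre-trained encoder."""

    def __init__(self, encoder: nn.Module, hidden_size: int = 768, num_classes: int = 4):
        super().__init__()
        self.encoder = encoder                      # pre-trained BERT-style encoder (assumed interface)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # The encoder is assumed to return token embeddings of shape [N, S, hidden_size].
        hidden_states = self.encoder(input_ids, attention_mask)
        cls_embedding = hidden_states[:, 0]         # use the first ([CLS]) token
        return self.classifier(cls_embedding)       # logits of shape [N, num_classes]

Fine-tuning then trains the new head (and optionally the encoder) on in-domain data for the desired task.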
Modules¶
The NLP framework is based on the BERT model and can be separated into the following major components:
**Tokenizer**

The tokenizer custom backend has two primary responsibilities:

**Tokenization** - Deterministically split a text sequence into lexical units (typically characters, subwords, or words).

**Encoding** - After the sequence of tokens is generated, each token is converted to its index in a vocabulary. The vocabulary is stored in a plain text file with one token per line, whose path is specified as a parameter, and is held in an efficient data structure to optimize index retrieval. The encoder behavior can be configured with additional backend parameters.

Lastly, the tokenizer outputs a second tensor containing the sequence length, which can optionally be consumed by the downstream neural network.

**The NLP neural network**

The neural network is a flexible pipeline in which a trained and deployed stateless model can implement any classification or sequence labeling task.

The neural network expects an input tensor of size NxS, where N is the batch size and S is the sequence length. In the case where N>1 and the underlying model does not support variable-length tensors, the sequence length output from the tokenizer may be used so that the model can mask the inputs appropriately.

**Classification tasks** - The output tensor is expected to be of size NxD, where N is the batch size and D is a known number of output classes.

**Sequence labeling tasks** - The output tensor is expected to be of size NxSxD, where S is the maximum sequence length in the batch.

**Classification**

The classification custom backend is a simple backend that performs an argmax/TopK over any tensor passed in. The input tensor is expected to be of size NxD, where N is the batch size and D is the number of output classes. An optional k configuration parameter changes the default behavior from argmax to TopK; you can override this on a per-request basis via a secondary scalar input in the request.

A text file containing class names, one per line, must be specified via the configuration parameter. The backend takes the indices from the argmax/TopK operation and looks up the appropriate class label based on its line number in the text file. In all cases, the return is a vector of class labels in rank order and an additional vector of the logits corresponding to those classes.

**Sequence labeling**

The sequence labeling custom backend operates in a similar manner to the classification backend. It, however, operates along an additional dimension (expected input NxSxD) to provide a top-1 label per timestep.

A text file containing class names, one per line, must be specified via the configuration parameter. The backend takes the indices from the argmax and looks up the appropriate class label based on its line number in the text file. The return is a vector of class labels, one per timestep, and an additional vector of the logits corresponding to those classes. The custom backend may also return the original sequence, annotated.
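The sketch below mimics the post-processing described above in plain Python/NumPy: an argmax (or top-k) over the logits, followed by a label lookup in a class-names file with one label per line. It is an illustration of the backend behavior, not the custom backend code itself.

import numpy as np

def load_labels(path):
    """One class label per line; the line number is the class index."""
    with open(path) as f:
        return [line.strip() for line in f]

def classify(logits, labels, k=1):
    """Classification behavior: logits of shape [N, D] -> top-k labels and logits."""
    topk_idx = np.argsort(logits, axis=-1)[:, ::-1][:, :k]        # [N, k], descending
    topk_logits = np.take_along_axis(logits, topk_idx, axis=-1)   # [N, k]
    topk_labels = [[labels[i] for i in row] for row in topk_idx]
    return topk_labels, topk_logits

def label_sequence(logits, labels):
    """Sequence-labeling behavior: logits of shape [N, S, D] -> one label per timestep."""
    idx = logits.argmax(axis=-1)                                   # [N, S]
    return [[labels[i] for i in row] for row in idx]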
Training A Model With Your Data¶
The following models can be trained using Neural Modules (NeMo).
BERT¶
The BERT model can be trained using NeMo. You can train your BERT model with either online or offline data preprocessing, depending on the size of your dataset.
Before training on your own data, ensure you follow the step-by-step NeMo documentation about how to pre-train BERT.
Train your model. You can choose to train your model using either online or offline data preprocessing.
Online data preprocessing (recommended for small datasets)
a. Create a dataset directory <data_set> at <data_dir> and store the custom, cleaned text data in the form of train.txt, test.txt, and valid.txt. For example:

mkdir dataset; echo "this is an example for a training file" > dataset/train.txt

b. Choose between the sentence-piece and nemo-bert tokenizers using the --tokenizer command-line argument. For example:

--tokenizer nemo-bert
c. Specify the parameters that are applicable to your model with the command-line arguments:

masking probability (--mask_probability)
short sequence probability (--short_seq_prob)
vocabulary size (--vocab_size)
sample size (--sample_size)
work directory (--work_dir)

For example:

--mask_probability 0.15
--short_seq_prob 0.1
--vocab_size 3200
--sample_size 1000000
--work_dir outputs/bert_lm
d. Decide on the BERT architecture layout, either through the command-line arguments (for example, for BERT base uncased):

--vocab_size 30522
--hidden_act gelu
--num_hidden_layers 12
--hidden_size 768
--num_attention_heads 12
--intermediate_size 3072
--max_position_embeddings 512

or by passing a JSON file with the above parameters and their values using the --config_file command-line argument (a minimal example of writing such a file is sketched after this list).

e. Pre-train BERT. For example:

python bert_pretraining.py --work_dir=<work_dir> --data_dir=<data_dir> --dataset_name=<data_set> --sample_size=<sample_size> --vocab_size=<vocab_size> --tokenizer=sentence-piece --short_seq_prob=<short_seq_prob> --mask_probability=<mask_probability> --config_file=<config_file> [more options]
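Step d. above mentions passing a JSON file via --config_file instead of individual architecture flags. Below is a minimal Python sketch that writes such a file; it assumes the JSON keys match the command-line argument names listed above, so verify them against the NeMo documentation for your version.

import json

# BERT base uncased layout, mirroring the command-line arguments above.
# Assumption: the JSON keys match the argument names expected by bert_pretraining.py.
bert_base_uncased = {
    "vocab_size": 30522,
    "hidden_act": "gelu",
    "num_hidden_layers": 12,
    "hidden_size": 768,
    "num_attention_heads": 12,
    "intermediate_size": 3072,
    "max_position_embeddings": 512,
}

with open("bert_base_uncased_config.json", "w") as f:
    json.dump(bert_base_uncased, f, indent=2)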
Offline data preprocessing (recommended for large datasets)
a. On the cleaned custom text data, run the offline preprocessing script create_datasets_from_start.sh (see the NeMo BERT pre-training documentation referenced above) and extract the data into <data_dir>.
b. Decide on the BERT architecture layout, either through the command-line arguments (for example, for BERT base uncased):

--vocab_size 30522
--hidden_act gelu
--num_hidden_layers 12
--hidden_size 768
--num_attention_heads 12
--intermediate_size 3072
--max_position_embeddings 512

or by passing a JSON file with the above parameters and their values using the --config_file command-line argument.

c. Pre-train BERT using the --preprocessed_data flag. For example:

python bert_pretraining.py --work_dir=<work_dir> --data_dir=<data_dir> --config_file=<config_file> [more options]
The checkpoints are stored at the path specified by the --work_dir argument; by default, this is outputs/bert_lm/. The model weights and meta-information are stored in separate files named after the number of training steps, <step>. The optimizer state is stored in trainer-STEP-<step>.pt, and each model component's weights are stored in their own file. For BERT pre-training, these are:
BERT-STEP-<step>.pt for the main network up to the logits
BertTokenClassifier-STEP-<step>.pt for the masked language model head
SequenceClassifier-STEP-<step>.pt for the next sentence prediction head
Restore the checkpoints.
a. Continue training, including the optimizer state, by passing the checkpoint directory with the --load_dir argument in the pre-training command. For example:

python bert_pretraining.py --load_dir=<checkpoints_dir>

b. Restore the weights. To merely restore (partial) weights, for example for fine-tuning, pass the BERT checkpoint with the --bert_checkpoint argument in the pre-training command. For example:

python bert_pretraining.py --bert_checkpoint=<checkpoint_file>
Named Entity Recognition¶
The named entity recognition model can be trained using NeMo. To train a named entity recognition model with your own data, follow the Named Entity Recognition Tutorial.
Intent Detection and Slot Tagging¶
The intent detection and slot tagger models can be trained using NeMo. This model is based on a pre-trained BERT model introduced in BERT for Joint Intent Classification and Slot Filling.
To train this model with your own data, ensure you follow the step-by-step NeMo documentation about how to train an intent detection and slot filling model on ATIS and SNIPS datasets.
Store the training and evaluation data in tab-separated .tsv files named train.tsv and eval.tsv, respectively. Each line in a file corresponds to one sample. Both files should have the following format:
<intent><TAB><start1:end1:slotlabel1,start2:end2:slotlabel2,…><TAB><BOS prev_intent sentence EOS>
For example:
weather_humidity<TAB>53:59:weatherplace,63:67:weathertime<TAB>BOS no_intent how will be chances of humidness in moscow at 4 pm EOS
<TAB> specifies the tab character which is used for separating columns.
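To make the layout concrete, the illustrative Python snippet below writes the example sample above as one line of train.tsv; the intent, slot offsets, and sentence come from the example and are for illustration only.

from pathlib import Path

# Illustrative only: one sample in the
# <intent><TAB><start:end:slotlabel,...><TAB><BOS prev_intent sentence EOS> layout.
intent = "weather_humidity"
slots = [(53, 59, "weatherplace"), (63, 67, "weathertime")]
prev_intent = "no_intent"
sentence = "how will be chances of humidness in moscow at 4 pm"

slot_field = ",".join(f"{start}:{end}:{label}" for start, end, label in slots)
line = f"{intent}\t{slot_field}\tBOS {prev_intent} {sentence} EOS\n"

out_dir = Path("./mydata")
out_dir.mkdir(exist_ok=True)
with open(out_dir / "train.tsv", "a") as f:
    f.write(line)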
Put the train.tsv and eval.tsv files in a folder, for example ./mydata.
Run the joint_intent_slot_with_bert.py script to process and convert your data into NeMo’s data format (if not done already) and to train your model for 100 epochs. For example:
python joint_intent_slot_with_bert.py --num_epochs=100 --dataset_name=jarvis-mydata --data_dir="./mydata" --eval_file_prefix=eval
Where ./mydata is where you stored the train.tsv and eval.tsv files.
The trained checkpoints are stored in a folder called ./outputs. To train on more than one GPU, refer to the NeMo documentation.
Domain Classification¶
The domain classification model is based on a pre-trained BERT model and can be trained using NeMo.
Store the training and evaluation data in tab-separated .tsv files named train.tsv and eval.tsv, respectively. Each line in a file corresponds to one sample. Both files should have the following format:
<domain><TAB><BOS sentence EOS>
For example:
weather<TAB>BOS please tell me the chances of humidness in moscow on this sunday 4 pm EOS
<TAB> specifies the tab character which is used for separating columns.
Put the train.tsv and eval.tsv files in a folder, for example ./mydata.
Run the sentence_classification_with_bert.py script to process and convert your data into NeMo’s data format (if not done already) and to train your model for 100 epochs. For example:
python sentence_classification_with_bert.py --num_epochs=100 --dataset_name=jarvis-mydata --data_dir="./mydata" --eval_file_prefix=eval
Where ./mydata is where you stored the train.tsv and eval.tsv files.
The trained checkpoints are stored in a folder called ./outputs. To train on more than one GPU, refer to the NeMo documentation.
Punctuation¶
An ASR system typically generates text with no punctuation and no capitalization. The steps below show how to train a punctuation model in NeMo that predicts punctuation and capitalization for each word in a sentence, making ASR output more readable and boosting the performance of downstream tasks such as named entity recognition or machine translation. This model is based on a pre-trained BERT model.
For every word in our training dataset, we’re going to predict the following:
The punctuation mark that should follow the word.
Whether the word should be capitalized.
In this model, we’re jointly training 2 token-level classifiers on top of the pre-trained BERT model: one classifier to predict punctuation and the other one for word capitalization.
Prepare the dataset. This model can work with any dataset as long as it follows the format specified below. Here we’re going to use the Tatoeba collection of sentences. Download and preprocess the dataset by running the get_tatoeba_data.py script. For example:
python scripts/get_tatoeba_data.py
The training and evaluation data is divided into 2 files: text.txt and labels.txt. Each line of the text.txt file contains text sequences, where words are separated by spaces: [WORD] [SPACE] [WORD] [SPACE] [WORD], for example:
when is the next flight to new york
the next flight is …
…
The labels.txt file contains the corresponding labels for each word in text.txt, with the labels separated by spaces. Each label in the labels.txt file consists of 2 symbols:
the first symbol of the label indicates what punctuation mark should follow the word (where O means no punctuation is needed);
the second symbol determines whether the word needs to be capitalized (where U indicates that the associated word should be uppercased, and O that no capitalization is required).
We’re considering only commas, periods, and question marks for this task; the remaining punctuation marks were removed. Each line of labels.txt should follow the format: [LABEL] [SPACE] [LABEL] [SPACE] [LABEL]. For example, the labels for the text.txt file above should be:
OU OO OO OO OO OO OU ?U
OU OO OO OO …
…
The complete list of all possible labels for this joint task is:
OO, ,O, .O, ?O, OU, ,U, .U, ?U
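As a sanity check on the format, the sketch below converts a punctuated sentence into a (text, labels) pair using the scheme above; it handles only the commas, periods, and question marks considered for this task.

def make_example(punctuated):
    """Convert a punctuated sentence into a (text, labels) pair using the scheme above."""
    words, labels = [], []
    for token in punctuated.split():
        punct = token[-1] if token[-1] in ",.?" else "O"    # first label symbol: punctuation
        word = token.rstrip(",.?")
        capital = "U" if word[:1].isupper() else "O"        # second label symbol: capitalization
        words.append(word.lower())
        labels.append(punct + capital)
    return " ".join(words), " ".join(labels)

text, labels = make_example("When is the next flight to New York?")
print(text)    # when is the next flight to new york
print(labels)  # OU OO OO OO OO OO OU ?U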
Put text_train.txt, text_dev.txt, labels_train.txt, and labels_dev.txt files in a folder, for example, ./mydata.
Run the punctuation_capitalization.py script to train the model. For example:
python examples/nlp/punctuation_capitalization.py \
    --data_dir ./mydata \
    --pretrained_bert_model=bert-base-uncased \
    --work_dir outputs/punctuation
The trained checkpoints are stored in a folder called ./outputs/punctuation/checkpoints.
To train on more than one GPU, refer to the NeMo documentation.
To run a simple inference on the trained model, run:
python examples/nlp/punctuation_capitalization_infer.py \
    --punct_labels_dict ./mydata/punct_label_ids.csv \
    --capit_labels_dict ./mydata/capit_label_ids.csv \
    --work_dir outputs/punctuation/checkpoints/
Note: The punct_label_ids.csv and capit_label_ids.csv files are generated during training and are stored in the ./mydata folder.
Generating The Triton Inference Server Model Repository¶
There are two ways to generate a Triton Inference Server model repository depending on the source of your model:
You can use one of our pre-trained models from NGC, or
You can fine-tune a custom model with Neural Modules (NeMo).
Pretrained NLP Models On NGC¶
Jarvis 0.2 EA ships with a variety of example NLP models intended to serve as demonstrators and baselines from which additional fine-tuning can be performed.
Sequence and Token Classification
ea-2-jarvis::jarvis_intent_context:config.yaml:ea2
An example of intent classification and a slot filling model for queries that seem contextual.
ea-2-jarvis::jarvis_intent_poi:config.yaml:ea2
An example of intent classification and a slot filling model for queries related to places of interest/navigation.
ea-2-jarvis::jarvis_intent_retail:config.yaml:ea2
An example of intent classification and a slot filling model for queries related to retail scenarios.
ea-2-jarvis::jarvis_intent_weather:config.yaml:ea2
An example of intent classification and slot filling model for queries related to weather.
Token Classification
ea-2-jarvis::jarvis_ner:config.yaml:ea2
Classify Named Entities such as Persons, Places, Organizations, etc.
ea-2-jarvis::jarvis_punctuation:config.yaml:ea2
Classify tokens that should be capitalized or followed by punctuation.
Sequence Classification
ea-2-jarvis::jarvis_seqclass_domain:config.yaml:ea2
An example of a domain model to classify sequences into one of the four supported intent domains, or other.
Any subset of these models can be deployed locally by following Local Deployment Using Quick Start Scripts, or in a server deployment with Helm (see Using Helm To Deploy Jarvis AI Services on Kubernetes). If using the quick-start tools, modify the config.sh file to include only the models of interest. If deploying to a server using Helm, modify the Helm chart's values.yaml file to include only the models of interest.
Creating A Model Repository Using A Fine-Tuned Model From NeMo¶
To generate a model repository using models trained in NeMo, move the checkpoint files generated by NeMo to a subdirectory called nemo at the path where you want to generate the model repository.
Generate the model repository. For example, if you want to generate the model repository at /tmp/jarvis and use fine-tuned models for the domain classification task, run the following commands:
NEMO_MODEL_DIR=/tmp/jarvis/nemo/jarvis_seqclass_domain/1/
mkdir -p $NEMO_MODEL_DIR
Copy the checkpoint files from your NeMo output folder to this directory, following the naming convention in the table below:
| NGC path | Model checkpoint names | Class label files |
|---|---|---|
| ea-2-jarvis/jarvis_intent_nemo | Encoder: nn_encoder.pt, Classifier: nn_classifier.pt | Intents: dict.intents.csv, Slots: dict.slots.csv |
| ea-2-jarvis/jarvis_tokens_nemo | Encoder: nn_encoder.pt, Classifier: nn_classifier.pt | dict.ner.csv |
| ea-2-jarvis/jarvis_seqclass_nemo | Encoder: nn_encoder.pt, Classifier: nn_classifier.pt | dict.sequence_labels.csv |
If training a joint intent/slot model, use the NGC configuration jarvis_intent_nemo; for token classification (such as named entity recognition), use jarvis_tokens_nemo; and for a simple text classifier, use jarvis_seqclass_nemo.
Download the template configuration file from NGC and copy it to NEMO_MODEL_DIR. For example, for the sequence classification task, run:
ngc registry model download-version ea-2-jarvis/jarvis_seqclass_nemo:ea2
cp jarvis_seqclass_nemo_v1/config.yaml /tmp/jarvis/nemo
a. Create the class label files in NEMO_MODEL_DIR following the naming convention in the table above, where each file has one class label per line.
b. Edit $NEMO_MODEL_DIR/config.yaml so that the number of token/sequence classes corresponds to the number of classes your fine-tuned models were trained to predict. This variable is called num_token_classes for the intent, punctuation, and named entity recognition models, and num_sequence_classes for the domain classification model.
c. Copy the CSV files generated by NeMo that contain the class labels for the model to NEMO_MODEL_DIR, using the file names listed in the table above.
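As a minimal sketch of step a., the snippet below writes a class label file with one label per line, using the file name from the table above; the label values are placeholders for the classes your model actually predicts.

import os

NEMO_MODEL_DIR = "/tmp/jarvis/nemo/jarvis_seqclass_domain/1/"
os.makedirs(NEMO_MODEL_DIR, exist_ok=True)

# Placeholder labels: replace with the classes your fine-tuned model predicts.
sequence_labels = ["weather", "poi", "retail", "context", "other"]

# One class label per line, matching the class label file name in the table above.
with open(os.path.join(NEMO_MODEL_DIR, "dict.sequence_labels.csv"), "w") as f:
    f.write("\n".join(sequence_labels) + "\n")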
Generate the model repository by using the respective config files as listed in the table above. For example:
docker run --rm -e "NGC_API_KEY=<ngc_api_key>" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/jarvis:/data \
    nvcr.io/ea-2-jarvis/jarvis-model-tool:ea2 \
    gen-model-repo \
    --config-file=/data/nemo/config.yaml
Deploying Your Model¶
Regardless of whether you generated a Triton Inference Server model repository using a model that was pre-trained from NGC or whether you used a fine-tuned model that was trained in NeMo, the deployment process is the same.
To deploy your model, you can choose from the following:
You can launch a Docker container manually to deploy.
You can use a Helm chart to launch on Kubernetes to deploy.
Deploying A Model Using A Docker Container¶
This approach uses the provided jarvis_start.sh script to launch Triton and the Jarvis Speech server; to launch on Kubernetes instead, see Deploying A Model Using A Helm Chart below.
After your local model repository is properly configured (see the previous section), deployment of the model involves starting the Triton Inference Server and Jarvis API Server. Using the Quick Start scripts provided on NGC, this can be done by running:
quickstart/jarvis_start.sh
To verify that the Triton server has been launched properly, run:
docker logs jarvis-triton
The log will look similar to the following:
Starting endpoints, 'inference:0' listening on
I0428 00:48:19.701836 1 grpc_server.cc:1973] Started GRPCService at 0.0.0.0:8001
I0428 00:48:19.701868 1 http_server.cc:1443] Starting HTTPService at 0.0.0.0:8000
I0428 00:48:19.744082 1 http_server.cc:1458] Starting Metrics Service at 0.0.0.0:8002
Similarly, to verify that the Jarvis Speech server is running properly,
run:
docker logs jarvis-speech
The log will look similar to the following:
I0428 00:48:25.747217 1 model_registry.cc:89] Registered 'jasper-nlp-trt-ensemble-streaming' model
I0428 00:48:25.747478 1 model_registry.cc:89] Registered 'jasper-nlp-trt-ensemble-vad-streaming-offline' model
I0428 00:48:25.747623 1 model_registry.cc:89] Registered 'jasper-nlp-trt-ensemble-vad-streaming' model
I0428 00:48:25.747802 1 model_registry.cc:89] Registered 'jasper-nlp-trt-ensemble-streaming-offline' model
I0428 00:48:25.747862 1 model_registry.cc:94] Total models available for category 0 on server is 4
I0428 00:48:25.748091 1 grpc_jarvis_nlp.cc:195] Seeding RNG used for correlation id with time: 1588034905
I0428 00:48:25.748339 1 jarvis_server.cc:68] NLP Server connected to Triton Inference Server at jarvis-triton:8001
I0428 00:48:25.748348 1 jarvis_server.cc:71] Jarvis Conversational AI Server listening on 0.0.0.0:50051
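Beyond checking the container logs, you can also verify from Python that the Jarvis API server is accepting gRPC connections; grpc.channel_ready_future blocks until the channel is usable or the timeout expires.

import grpc

channel = grpc.insecure_channel("localhost:50051")
try:
    # Wait up to 10 seconds for the Jarvis API server to accept connections.
    grpc.channel_ready_future(channel).result(timeout=10)
    print("Jarvis API server is reachable")
except grpc.FutureTimeoutError:
    print("Timed out waiting for the Jarvis API server")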
Deploying A Model Using A Helm Chart¶
The Helm chart provided for Jarvis is responsible for downloading model artifacts (if necessary), setting up a model repository, and launching the required services. The Using Helm To Deploy Jarvis AI Services on Kubernetes section in the Jarvis Services Quick Start Guide describes in detail how to retrieve the Helm chart from NGC and install it.
When deploying to Kubernetes via Helm, it is possible to disable components that are not required. If Jarvis services other than NLP are not required, modify the values.yaml file before installing the Helm chart.
If ASR and/or TTS is not required, set jarvis.speechServices.[asr|tts] = false in values.yaml and remove ASR-related and/or TTS-related models from modelRepoGenerator.ngcModelConfigs.
If deploying fine-tuned models, configure modelTemplateVolume to map to a persistent storage device. This volume will be made available to the trtis-model-repo container at /templates.
When building your custom model deployments, use absolute paths including /templates to link to model artifacts stored in this persistent volume. Concretely, the yaml file used for the model generator should be stored in /templates/<name of your model>/config.yaml, along with any other model artifacts. These config paths are then specified in values.yaml in the localModelConfigs array.
Using The Jarvis NLP Service¶
There are two ways users can interact with the Jarvis NLP service: through the gRPC API directly, or through the Python API.
Interacting With The Jarvis NLP Service Using The gRPC API¶
Client applications interact with the Jarvis NLP Service using the gRPC protocol which supports multiple programming languages. For more information on the API, refer to the NLP API document.
We provide protobuf files so you can generate bindings for your language of choice; these files are located in the Jarvis Quick Start scripts on NGC. In addition, a pip wheel is included for easy installation of the client bindings in Python, and pre-generated Python bindings are provided in the same folder. For more information, refer to the gRPC documentation for the respective programming language.
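If you prefer to regenerate the Python bindings yourself from the shipped protobuf files (rather than using the pip wheel or the pre-generated bindings), grpcio-tools can compile them. The .proto file names below are assumptions for illustration; substitute the names of the files found in the Quick Start package.

# Requires: pip install grpcio-tools
from grpc_tools import protoc

# Assumption: the proto file names; use the files shipped in the Quick Start package.
for proto in ("jarvis_nlp.proto", "jarvis_nlp_core.proto"):
    protoc.main([
        "grpc_tools.protoc",
        "-I.",
        "--python_out=.",
        "--grpc_python_out=.",
        proto,
    ])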
Interacting With The Jarvis NLP Service Using The Python API¶
To interact with the Jarvis NLP Service using Python, install the Jarvis API pip wheel, or use the jarvis-api-client container.
The following sample code shows how to interact with the Jarvis NLP Service using its gRPC interface.
import grpc
import jarvis_api.jarvis_nlp_core_pb2 as jcnlp
import jarvis_api.jarvis_nlp_core_pb2_grpc as jcnlp_srv
import jarvis_api.jarvis_nlp_pb2 as jnlp
import jarvis_api.jarvis_nlp_pb2_grpc as jnlp_srv
# Establish connection to Jarvis API server and NLP service
jarvis_api_uri = 'localhost:50051'
channel = grpc.insecure_channel(jarvis_api_uri)
jarvis_nlp = jnlp_srv.JarvisNLPStub(channel)
jarvis_cnlp = jcnlp_srv.JarvisCoreNLPStub(channel)
# Use the TextTransform API to run the punctuation model
req = jcnlp.TextTransformRequest()
req.model.model_name = "jarvis_punctuation"
req.text.append("add punctuation to this sentence")
req.text.append("do you have any red nvidia shirts")
req.text.append("i need one cpu four gpus and lots of memory "
                "for my new computer it's going to be very cool")
nlp_resp = jarvis_cnlp.TransformText(req)
print("TransformText Output:")
print("\n".join([f"  {x}" for x in nlp_resp.text]))
# Submit an AnalyzeIntentRequest. We do not provide a domain with the query, so a domain
# classifier is run first, and based on the inferred value from the domain classifier,
# the query is run through the appropriate intent/slot classifier
# Note: the detected domain is also returned in the response.
req = jnlp.AnalyzeIntentRequest()
req.query = "Is it going to snow in Burlington, Vermont tomorrow night?"
resp = jarvis_nlp.AnalyzeIntent(req)
print(resp)
For more information, refer to the API documentation.
Integrating NLP With Jarvis¶
Within the jarvis-api container, refer to the Jupyter Notebook located at /notebooks/Jarvis_AI_services_demo.ipynb for an example of how to integrate the NLP service with Jarvis. To run it, launch the jarvis-api container with the following command:
docker run --rm --net host --name jarvis-api-server nvcr.io/ea-2-jarvis/jarvis-api-client:ea2 /bin/bash -c "cd /notebooks; jupyter notebook --allow-root"
Then, follow the link shown on the screen to access the notebook in your browser.
Troubleshooting And Support¶