AI Workbench Security and Operations FAQ#
This section contains common security and operations questions and answers for NVIDIA AI Workbench.
Note
This section is not a comprehensive list of every security and operations question about NVIDIA AI Workbench.
Security and Operations Questions#
What is the system name?#
Answer: NVIDIA AI Workbench.
Provide a description of the system and what it does.#
Answer: NVIDIA AI Workbench is a software tool that allows users to work with GPU-enabled environments on local or remote systems. It provides a full-stack user experience for developers, including features for Git, containers, and GPUs. For more information, see What is NVIDIA AI Workbench?.
Is this a new software implementation or a software upgrade?#
Answer: This is a new software implementation that we are constantly developing and improving based on user feedback.
Are servers being installed or replaced as part of this project?#
Answer: No.
Has all licensing been reviewed by IT Service Quality (ITAM)?#
Answer: Yes.
What are the software requirements and dependencies for the system? List all software (including versions) that is not part of our base system builds.#
Answer: The system has the following software requirements and dependencies:
NVIDIA AI Workbench
NVIDIA GPU Driver
NVIDIA Container Runtime
NVIDIA CUDA Toolkit
NVIDIA cuDNN
NVIDIA TensorRT
For a list of OS, software, and hardware dependencies, see Install, Update, and Uninstall AI Workbench.
Is this an internal solution, cloud based, or Hybrid with cloud and on-premise components? If this is a cloud solution, where is it hosted and what whitelisting will be configured?#
Answer: AI Workbench is an internal solution that can run on an end user's machine, internally hosted servers, or CSP-hosted servers.
Are there any vendors providing hardware or virtual images associated with this system?#
Answer: No.
Does this system have an externally facing interface used by customers to authenticate into? If yes, how will the third-party code review be completed?#
Answer: No.
Provide a detailed technical network diagram of the solution.#
Answer: The following diagram shows a high-level overview of the system.

[Diagram: high-level technical network overview of the AI Workbench solution.]
Provide a detailed listing of Firewall rule changes required with this design.#
Expanded Question: This list needs to include Source IP or Range, Source DNS Name, Source System Region (Dev/UA/Prod), Destination IP or Range, Destination DNS Name, Destination System Region (Dev/UA/Prod), Port Number, Transmission Type, and Protocol. An Excel spreadsheet template is available on request if multiple firewall rules are required.
Answer:#
No firewall rules are required. The situations where Workbench uses inbound network connections are listed below. All other connections are outbound to well-known services such as NGC, GitHub, and Docker Hub.
For connecting to a remote context, an SSH (TCP) connection is used; it does not need to be on port 22. If the remote context is on the same local network as the user's machine, a firewall change is typically not needed. If the remote context is hosted on a CSP, a firewall rule allowing SSH connections to the CSP-hosted machine may be required, depending on corporate network configuration.
When external access is enabled while activating a context, port 10000 serves an HTTPS endpoint protected with a self-signed SSL certificate. Depending on network configuration and context location, a firewall rule allowing access to TCP port 10000 may be required.
How Docker Compose-based services are exposed has not yet been confirmed; they may or may not run through the Workbench proxy and expose services to the network.
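Because the port 10000 endpoint uses a self-signed certificate, ordinary TLS verification against public CAs will fail. The sketch below, using only the Python standard library, shows two common ways a client can cope: pinning the exported certificate, or disabling verification for lab testing. The helper name `make_pinned_context` is hypothetical, not part of AI Workbench.

```python
import ssl

def make_pinned_context(cert_path=None):
    """Build a TLS client context for an endpoint serving a
    self-signed certificate (such as the Workbench proxy on port 10000).

    If cert_path is given, only that exact certificate is trusted
    (certificate pinning). Otherwise verification is disabled
    entirely, which is acceptable only for testing.
    """
    ctx = ssl.create_default_context()
    if cert_path:
        # Trust only the exported self-signed certificate.
        ctx.load_verify_locations(cafile=cert_path)
    else:
        # A self-signed cert has no CA chain, so turn verification
        # off. check_hostname must be disabled first.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

Pinning the exported certificate is the safer option, since it still rejects any other certificate presented on the connection.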
List all network ports, protocols, and security ciphers the application uses to function or integrate with other systems.#
Answer: See the previous answer for context; some of these ports are optional.
Inbound TCP port 22, SSH; no specific ciphers defined
Inbound TCP port 10000, HTTPS; no specific ciphers defined
Outbound TCP port 443, HTTPS; no specific ciphers defined
Inbound localhost TCP port 10000, HTTP (Workbench Proxy; starts at 10000 and increments by two for each new context)
Inbound localhost TCP port 10001, HTTP (Workbench Service; starts at 10001 and increments by two for each new context)
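The per-context port numbering described above can be sketched as a small helper. The function name is hypothetical; the arithmetic simply follows the "starts at 10000/10001 and increments by two per context" rule stated in the list.

```python
def workbench_ports(context_index):
    """Return (proxy_port, service_port) for the Nth local context.

    The Workbench Proxy starts at 10000 and the Workbench Service
    at 10001; each additional context shifts both ports up by two.
    """
    proxy = 10000 + 2 * context_index
    service = 10001 + 2 * context_index
    return proxy, service
```

For example, the first context uses 10000/10001 and the second uses 10002/10003, so proxy ports are always even and service ports always odd.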
Are data file transmissions required for this system? If so, how are those performed? List any MoveIt or other file transmission types.#
Answer: No. If users want support, they are instructed to create a support bundle, which includes log files and sanitized configuration files. Transmission of the support bundle is left to the user; typically it is either posted to the support forums or emailed to NVIDIA.
Will data migration occur as part of this effort? If so, what migration process will be used?#
Answer: No.
What data types reside within this system?#
Expanded Question: Identify and evaluate data types stored or passed through this system, and list any sensitive data types (PII, PCI, GLBA, HIPAA, etc.) involved. Also detail what protection methods are used to protect sensitive data, which could include strong encryption, masking, truncating, or tokenization.
Answer:#
AI Workbench handles data types and protects sensitive data types as follows:
No sensitive data is directly stored or manipulated by AI Workbench. The user may use AI Workbench to work with sensitive data.
AI Workbench stores configuration information related to remote contexts. This includes the IP/Hostname, SSH port, SSH User, and path to an SSH key. Password protected SSH keys are supported, using the SSH Agent.
AI Workbench stores 3rd party credentials / tokens for services like NGC, Github, and Gitlab. These credentials are stored encrypted on disk using the system’s keyring and are provided to AI Workbench as needed. On remote contexts the credentials are kept in memory and only written to disk for applications that cannot have credentials passed directly to them. Any credentials written to disk are deleted when the AI Workbench Service is shut down.
AI Workbench stores Project secrets unencrypted in a runtime directory on the disk of the context machine containing the Project. These secrets are deleted when the Project is removed from the context. While no encryption at rest is used, this design is equivalent to how Docker and Podman manage secrets for the user.
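The secret lifecycle described above (plain files in a runtime directory, removed along with the Project) can be illustrated with a minimal sketch. This mirrors the pattern, not AI Workbench's actual implementation; the function names and the `0o600` permission choice are illustrative assumptions.

```python
import os
import tempfile
from pathlib import Path

def write_secret(runtime_dir, name, value):
    """Write one Project secret as a plain, user-only file --
    unencrypted at rest, like Docker/Podman-style secrets."""
    path = Path(runtime_dir) / name
    path.write_text(value)
    os.chmod(path, 0o600)  # readable/writable by the owning user only
    return path

def remove_secrets(runtime_dir):
    """Delete every secret file, as happens when the Project is
    removed from the context."""
    for entry in Path(runtime_dir).iterdir():
        entry.unlink()

# Hypothetical usage: a throwaway runtime directory with one secret.
demo_dir = tempfile.mkdtemp()
secret_path = write_secret(demo_dir, "API_KEY", "not-a-real-key")
```

The protection here comes from filesystem permissions and prompt deletion rather than encryption, which matches the trade-off described above.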
Is there development occurring on this system? If so, who is performing the development, and what secure coding and SDLC practices are being used? Who owns the code, and will a code proxy be required?#
Answer: Yes, the end user can use AI Workbench to facilitate their own development using supported tools like JupyterLab. Secure coding and SDLC practices are up to the end user to follow. Any work created or developed in or using AI Workbench remains the property of the end user. No code proxy is required.
Explain how the system is kept up to date and maintained. Detail if the vendor provides patches or updates, and how/when those are applied.#
Answer: NVIDIA regularly publishes updates to the AI Workbench software and to AI Workbench-maintained Base Environments. Users are notified of available updates, including a changelog of what each update contains. The user chooses whether and when to download and apply updates, and is responsible for ensuring the system is kept up to date.
Are there any configuration changes required to our existing security protection applications or controls? List any Antivirus exclusions, whitelists, etc.#
Answer: Yes.
What logging capabilities does the system have? Is it possible to SysLog?#
Answer: AI Workbench produces its own logs; each binary writes its own log files under the following directories. There is currently no integration with any log aggregation system.
~/.nvwb/logs/
~/.nvwb/proc/ (daemon logs from stdout/stderr)
%AppData%\Local\NVIDIA Corporation\AI Workbench\logs\
Is there any fraud potential with this system? If so, what additional fraud detection capabilities will be used?#
Answer: No, AI Workbench doesn’t handle financial data.
Is this a SOX application?#
Answer: No.
Does this system integrate with Active Directory? If so, what method is used for integration? (LDAPS, OAUTH, SAML2.0, ETC.)?#
Answer: No. AI Workbench uses OAuth for authenticating to 3rd party services, like Github, so that the software can perform actions on that platform on behalf of the user.
Does the system support Multi-Factor Authentication? If so, how will it be implemented? MFA is required for VPN and accessing systems remotely.#
Answer: No.
Does the system require a database?#
Expanded Question: If yes, please work with the DBA team and provide details on the DB structure and how it aligns with their standards. Detail if sensitive data exists in the DB, and what encryption type is used to protect it. Database encryption (SQL TDE) is required for databases that include sensitive information. Also detail how all connections are established and secured to the database (user and server side)? List the DB authentication types used.
Answer:#
No, AI Workbench does not require a database.
Does this system require additional architecture for failover, outside of what is already provided as part of the standard server infrastructure?#
Answer: No, AI Workbench does not require additional architecture for failover.
Does this system require SMTP for email relay? If so, will it be relayed internally, externally, or both? What data types will be in the emails, and what archiving requirements are needed for compliance?#
Expanded Question: Is an external email delivery service being utilized? What account is being used to send email messages? If being sent externally, how is the sending subdomain configured in order to protect the standard fnb-corp.com email domain from being blacklisted in the case of overflow of messages being sent.
Answer:#
No, AI Workbench does not require SMTP for email relay.
What service accounts are required, what is their purpose, and what are the minimum permissions they need to function?#
Answer: No service accounts are required.
Are there any default/built-in accounts or back-door access capabilities in this system (examples: DB2ADMIN, root)? If yes, explain how those will be locked down or removed. This can include renaming, disabling, and having a strong password applied.#
Answer: No, there are no default or built-in accounts.
What account is used to perform installations and upgrades?#
Answer: The user’s account on the machine they are using.
Does an outside 3rd party access the system for support purposes? If so, how do they connect to it?#
Answer: No, AI Workbench does not require 3rd party access for support purposes. The user explicitly sends data to NVIDIA to receive help from support.
Are there any areas in the system where passwords are stored in cleartext?#
Answer: Yes. Base64-encoded passwords are written to disk to allow Docker or Podman to authenticate to the specified container registries when pulling private container images. Base64 is an encoding, not encryption, so these credentials are effectively cleartext at rest.
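To see why these stored credentials count as cleartext, consider the shape of the per-registry entry in Docker's `config.json` `auths` map, sketched below. The helper name `registry_auth_entry` and the example registry/credentials are illustrative, not part of AI Workbench.

```python
import base64

def registry_auth_entry(registry, username, password):
    """Build the per-registry entry found in Docker's config.json
    "auths" map. The stored value is just base64("user:password"),
    an encoding that anyone can reverse -- not encryption."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {registry: {"auth": token}}
```

Anyone with read access to the file can base64-decode the `auth` value and recover the original username and password, which is why this question is answered "Yes."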
Who are the Application admins, DB ADMINS, and other Admin based functions for the system?#
Answer: There is no separate “admin” for the system. Everything runs as the user, with the exception of the Docker service.