# Ecosystem Partner Software
The NVIDIA AI Stack includes partners across the broader AI ecosystem that form integral components of the AI Factory solution, delivering capabilities that meet the security requirements for government and regulated deployments:
| Category | Partner | Role in AI Workflow |
|---|---|---|
| AI Platform | DataRobot | Data ingest, data connectors, AI deployment, and insights |
| AI Platform | H2O.ai | AutoML, model deployment, model monitoring, and model management |
| AI Platform | Dataiku | Data preparation, machine learning, analytics, MLOps, and AI governance |
| AI Platform | Domino Data Lab | MLOps platform for collaborative model development, deployment, and governance |
| AI Platform | Weights & Biases (Models + Weave) | Experiment tracking, model evaluation, tracing, optimization |
| AI Platform | Elastic | Vector Database (VectorDB) management and scalable data storage |
| AI Platform | EnterpriseDB | Enterprise PostgreSQL with built-in Oracle compatibility for easier migration and reliable data management |
| Infrastructure | Canonical | Enterprise Kubernetes orchestration, container platform, automated operations |
| Infrastructure | Red Hat OpenShift Container Platform | Enterprise Kubernetes orchestration, container platform, automated operations |
| Infrastructure | Mirantis | Enterprise Kubernetes orchestration, container platform, automated operations |
| Infrastructure | Nutanix | Hyperconverged infrastructure, AI inference platform, unified VM and container management |
| Infrastructure | Spectro Cloud | Multi-cloud Kubernetes management, cluster provisioning, edge orchestration |
| Infrastructure | Broadcom | Unified private cloud platform, compute/storage virtualization, AI workload support |
| Observability | Dynatrace | Observability, centralized logging, detailed app tracing |
| Observability | Fiddler | AI model monitoring, explainability, performance analytics, guardrails |
| Observability | Splunk | Full-stack monitoring; unified metrics, logs, and traces; AI-powered troubleshooting |
| Security | CrowdStrike | Endpoint, workload, and host protection with real-time, AI-driven threat detection and response |
| Security | Trend Micro | AI-driven endpoint, cloud, and threat protection offering real-time cybersecurity defense |
| Security | Fortanix | Confidential computing, runtime encryption, secure enclave management |
| Security | Protopia AI | Privacy-preserving data transformation for secure AI inference and deployment |
| Artifact Repository | JFrog Artifactory | Universal artifact repository manager designed to store, manage, and secure software artifacts and dependencies across various technologies |
## AI Platform
DataRobot: DataRobot significantly accelerates the AI development lifecycle by automating many of the complex and time-consuming tasks involved in building, training, deploying, and managing agents, including embedded support for NVIDIA AI Enterprise. For an AI Factory focused on agents, DataRobot can help rapidly prototype and deploy the resources that might power the intelligence of these agents, allowing developers to focus more on the agentic logic and integration rather than model tuning from scratch. Its platform also ensures models are monitored and managed effectively in production.
For federal customers, DataRobot holds an Agency ATO, delivers appliance-based deployments for SCIF environments, and is available in CSP marketplaces such as AWS GovCloud and Azure Government. The platform is engineered to meet stringent federal compliance standards, offers IL5-ready security and end-to-end encryption, and supports operations in classified, air-gapped, and hybrid environments, ensuring mission-critical AI can be delivered securely at scale.
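As a minimal sketch of how a pipeline or agent might connect to a customer-managed DataRobot instance, the example below assumes the public DataRobot Python client (`datarobot` package); the endpoint and token are placeholders, not a prescribed configuration.

```python
# Minimal sketch, assuming the public DataRobot Python client ('datarobot').
# The endpoint and token below are placeholders for a customer-managed or
# GovCloud-hosted instance.
import datarobot as dr

dr.Client(
    endpoint="https://datarobot.example.gov/api/v2",  # placeholder endpoint
    token="<api-token>",                              # placeholder credential
)

# List deployed models so downstream agents can route scoring requests.
for deployment in dr.Deployment.list():
    print(deployment.id, deployment.label)
```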
H2O.ai: H2O.ai provides an on-premises AI platform (featuring Enterprise h2oGPTe for Agentic AI and H2O Driverless AI for AutoML) specifically designed for building and deploying both predictive and generative AI models, including advanced agentic AI applications. Their software is optimized for NVIDIA accelerated compute, leveraging NVIDIA RAPIDS and accelerated libraries for maximum performance, and supports Kubernetes for effortless scalability. This enables enterprises to develop and operationalize secure, production-grade AI agents that can utilize NVIDIA NIM microservices for inference, all within their own data centers and on NVIDIA hardware.
H2O.ai has integrated the NVIDIA Nemotron-Super-49B-v1.5 reasoning model into its Enterprise h2oGPTe platform, enabling multi-step, reasoning-driven analysis across enterprise and domain data. This empowers organizations to operationalize secure, production-grade AI agents that combine Deep Research, retrieval-augmented generation (RAG), code interpretation, and multi-agent orchestration to automate complex workflows.
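For illustration, the sketch below shows the kind of underlying inference call such an integration relies on: a chat completion against a locally hosted Nemotron NIM endpoint through its OpenAI-compatible API. The base URL, model identifier, and prompt are placeholders, not part of the h2oGPTe SDK.

```python
# Hypothetical example: querying a locally hosted NVIDIA NIM endpoint that
# serves a Nemotron reasoning model through its OpenAI-compatible API.
# The base_url, model name, and API key below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-for-local-nim",     # placeholder credential
)

response = client.chat.completions.create(
    model="nvidia/llama-3.3-nemotron-super-49b-v1.5",  # assumed model id
    messages=[
        {"role": "system", "content": "You are a careful research assistant."},
        {"role": "user", "content": "Summarize the key risks in this incident report: ..."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```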
As a partner focused on regulated and security-first industries, H2O.ai offers a fully air-gapped, on-premises end-to-end Generative, Agentic, and Predictive AI Platform. This solution empowers users to deploy Agentic AI, AutoML, deep learning, computer vision, NLP, and document AI capabilities—all with robust model fine-tuning and evaluation support for over 50 open-source LLMs and all major closed-weight models.
Dataiku: Dataiku is The Universal AI Platform™, accelerating the delivery of enterprise AI solutions directly tied to business efficiency and revenue growth. Running on top of any data platform and any LLM, it unites all teams, from business users to data scientists, and all AI techniques, from predictive analytics to agentic, in one secure, governed environment.
The Dataiku platform enables government and federal agencies to develop and operate trusted, mission-critical systems. The platform supports diverse deployment models, including on-premises, hybrid, air-gapped, and dedicated government cloud infrastructures such as AWS GovCloud and Azure Government. Engineered to meet stringent federal requirements, Dataiku’s Universal AI platform is compliant with security standards like FIPS and STIG. It delivers a comprehensive framework of controls, featuring end-to-end auditability, granular role-based governance, and full encryption support. These capabilities empower agencies to achieve compliance and maintain robust security within their own managed environments.
Domino Data Lab: Domino Data Lab offers a comprehensive data science and MLOps platform designed to accelerate the development and operationalization of AI models across the full lifecycle. It enables teams to quickly build, deploy, and manage analytical and AI workloads at scale—from research and experimentation to reproducible production pipelines. Domino’s flexible architecture runs on Kubernetes, supporting installation in customer VPCs, on-premises, or as a managed SaaS solution on AWS, including AWS GovCloud and DoD IL5 environments. Notably, Domino has partnered with NVIDIA to integrate GPU acceleration, allowing data scientists to leverage NVIDIA hardware for high-performance model training and inferencing natively within the Domino platform.
From a security perspective, Domino Data Lab is engineered with the controls required for highly regulated industries and government. The platform achieves and maintains certifications such as ISO 27001:2022, SOC 2, and ISO 9001:2015, and supports customer compliance with FISMA, NIST 800-53, and FedRAMP through robust security features and deployment options. Domino is also deployed in environments with DISA STIG and DoD IL5 requirements, is contracted for IL6 deployments, provides FIPS 140-2 validated cryptography, and is available as an approved solution in Iron Bank, facilitating rapid Authority to Operate (ATO) for federal agencies. Its broad compliance and security framework ensures agencies can confidently develop, deploy, and scale sensitive AI workloads in even the most stringent environments.
Weights & Biases (W&B): Weights & Biases provides a comprehensive AI developer platform for agent and model training, iteration, evaluation, monitoring, and inference. Their tools enable AI researchers and engineers to analyze agent traces, compare versions, and understand complex multi-turn workflows in depth, all while supporting comprehensive experiment tracking for metrics, hyperparameters, and artifacts. Integration with NVIDIA NIM and the NVIDIA AI Enterprise stack empowers teams to efficiently develop, evaluate, and deploy AI agents and applications, with robust support for both on-premises and cloud infrastructure. Weights & Biases supports customer-managed deployments, allowing federal agencies to use the cloud provider of their choice, such as AWS GovCloud, helping them maintain data sovereignty and meet compliance needs. Enterprise deployments include flexible security controls, centralized data and model lineage tracking, and governance to ensure users retain visibility over AI assets, which is essential for regulated or sensitive mission environments.
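A minimal experiment-tracking sketch is shown below using the standard W&B `wandb.init()`/`wandb.log()` calls; the project name, configuration, and logged metric are illustrative rather than taken from a specific deployment.

```python
# Minimal sketch of experiment tracking with Weights & Biases.
# Project, config, and metric names are illustrative.
import wandb

run = wandb.init(
    project="agent-evaluation",  # hypothetical project name
    config={"model": "nemotron-super-49b", "temperature": 0.2},
)

for step in range(3):
    # In a real workflow these values would come from an evaluation harness.
    wandb.log({"step": step, "task_success_rate": 0.80 + 0.05 * step})

run.finish()
```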
Elastic: The Elastic Search AI Platform provides on-premises, cloud, and hybrid solutions for search, observability, and security. For AI Factories, Elasticsearch acts as the context engine that powers Retrieval Augmented Generation (RAG) and Agentic AI workloads. With NVIDIA GPU-accelerated models, you can generate embeddings that Elasticsearch stores and uses to deliver the most relevant context to your NIM-powered agents. Elasticsearch aggregates logs, monitors metrics (including NVIDIA GPU metrics via DCGM exporters), and visualizes AI platform operations with Kibana, supporting workloads built on NVIDIA accelerated libraries and NIM.
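As a rough sketch of this pattern, the example below indexes embeddings into Elasticsearch and retrieves context with a kNN query; the index name, vector dimensions, host, and the placeholder `embed()` helper are assumptions, and in practice the embeddings would come from an NVIDIA embedding NIM.

```python
# Sketch: storing embeddings in Elasticsearch and retrieving RAG context
# with a kNN query. Host, index name, and dimensions are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")


def embed(text: str) -> list[float]:
    # Placeholder: in practice this would call an NVIDIA embedding NIM
    # (e.g. a NeMo Retriever embedding microservice); zeros keep it runnable.
    return [0.0] * 1024


es.indices.create(
    index="rag-chunks",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "dense_vector", "dims": 1024,
                "index": True, "similarity": "cosine",
            },
        }
    },
)

es.index(
    index="rag-chunks",
    document={"text": "Mission policy excerpt...", "embedding": embed("Mission policy excerpt...")},
)

hits = es.search(
    index="rag-chunks",
    knn={"field": "embedding", "query_vector": embed("What does the policy require?"),
         "k": 3, "num_candidates": 50},
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["text"])
```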
US federal agencies have trusted Elastic with their sensitive data for more than a decade. Elastic’s open source platform enables public sector teams to make faster, more informed decisions by connecting all their data – no matter the format or location. By combining the precision of search with the intelligence of AI, Elastic’s distributed data mesh approach powers real time insights, analysis, and automated actions that help agencies comply with regulations like M-21-31, implement Zero Trust architectures, strengthen operational resilience, build efficient AI experiences, and more.
EnterpriseDB: EDB, in partnership with NVIDIA, delivers a powerful data and AI platform designed for the federal and enterprise market. The platform, EDB Postgres® AI, integrates NVIDIA NIM microservices to provide a suite of pre-built, optimized AI models that allow organizations to quickly build and deploy generative AI applications, such as chatbots and agentic solutions, using RAG techniques. Leveraging NVIDIA NeMo Retriever for extraction, embedding, and re-ranking, EDB Postgres AI makes it simple to ingest and process enterprise data for real-time AI applications. Tight integration with the NVIDIA AI Enterprise stack, including hardware acceleration with GPUs, ensures high-performance, scalable AI workloads. EDB Postgres AI features a low-code/no-code environment, native Kubernetes support, and a hybrid data observability layer, empowering teams to operationalize AI, maintain data sovereignty, and accelerate AI development securely across their infrastructure of choice.
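The sketch below illustrates the retrieval side of such a RAG workflow against a Postgres instance with the pgvector extension (assumed to be available in the EDB Postgres deployment); the connection string, table layout, and vector dimensions are placeholders, and the zero query vector only keeps the example self-contained where an NVIDIA embedding NIM would normally supply it.

```python
# Sketch: similarity search over embeddings stored in Postgres with pgvector.
# Connection details, table, and dimensions are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=rag user=app password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(1024)
    );
""")
conn.commit()

# query_vec would normally come from an NVIDIA embedding NIM; a zero vector
# keeps this sketch runnable without one.
query_vec = [0.0] * 1024
vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"

cur.execute(
    "SELECT body FROM chunks ORDER BY embedding <=> %s::vector LIMIT 5;",
    (vec_literal,),
)
for (body,) in cur.fetchall():
    print(body)
```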
## Infrastructure Software
Canonical: Canonical’s hardened Kubernetes (Canonical Kubernetes) distribution delivers a high-performance, CNCF-conformant implementation of upstream Kubernetes, specifically optimized for Federal and regulated environments. It integrates essential cluster components—like container runtimes, networking, DNS, ingress, and observability—into an opinionated, fully managed stack with up to 12 years of security maintenance. Built on Ubuntu and with Ubuntu Pro enabled, the platform incorporates FIPS 140-3 certified cryptographic modules and DISA-STIG hardening out of the box, enabling end-to-end compliance from host OS to container workloads. Canonical’s collaboration with NVIDIA extends these capabilities to AI-driven infrastructures, marrying Canonical’s proven FedRAMP-ready patching and FIPS validation expertise with NVIDIA’s AI Factory initiatives, thus ensuring a consistent, securely designed, and performant substrate for AI workloads across the enterprise and Federal cloud continuum.
Canonical’s security posture is designed around the stringent needs of Federal and regulated deployments. The combination of FIPS 140-3 certified crypto, DISA-STIG hardened baselines, and continuous vulnerability management—including CISA KEV tracking—ensures compliance and rapid remediation. Each software layer, from the Ubuntu host to K8s and STIG-hardened OCI containers, is backed by Canonical’s CVE patching service, maintaining cryptographic integrity and traceability required for FedRAMP and DoD environments. Canonical’s STIG-hardened containers embed FIPS-validated libraries and inherit the same hardened controls as their Ubuntu base, ensuring full-stack compliance when deployed together. With tooling like the Ubuntu Security Guide and automated rebuild pipelines, Canonical enables secure-by-default infrastructure that scales confidently under Federal mandates while providing a robust foundation for NVIDIA’s AI-optimized, compliant solutions.
Red Hat OpenShift Container Platform: Red Hat OpenShift is the industry’s leading hybrid cloud application platform powered by Kubernetes and relied on by U.S. government agencies to manage container workloads at scale. It delivers automated upgrades, workload autoscaling, and robust observability tools, supporting cloud, on-prem, and air-gapped deployments with both managed and self-managed options. Collaborations with AWS and Microsoft help ensure Red Hat OpenShift is available on FedRAMP High and IL4-certified government clouds. Its deep integration with NVIDIA enables streamlined, secure AI workloads on approved infrastructures.
Red Hat OpenShift’s security posture is built on a special-purpose and container-optimized Red Hat Enterprise Linux (RHEL) CoreOS. Red Hat’s official hardening guidance aligns with federal mandates such as DISA STIG, FISMA, NIST 800-53, and FedRAMP. Agencies can accelerate ATO and meet continuous audit requirements by inheriting RHEL’s FIPS validations and using Red Hat OpenShift’s automated SCAP-based compliance scanning, role-based access controls, and integrated vulnerability management. Customers can deploy Red Hat OpenShift in highly regulated environments, maintaining strict control over data and operations while leveraging a secure, compliant foundation for mission workloads.
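The Kubernetes platforms in this section (Canonical Kubernetes, Red Hat OpenShift, and the offerings that follow) all expose GPU capacity the same way: workloads request the `nvidia.com/gpu` resource surfaced by the NVIDIA GPU Operator or device plugin. The sketch below, using the Kubernetes Python client, is a generic illustration only; the namespace, image, and pod definition are placeholders, not a validated design.

```python
# Generic sketch: requesting one NVIDIA GPU for an inference pod via the
# Kubernetes Python client. Namespace, image, and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nim-inference", labels={"app": "nim"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="nim",
                image="nvcr.io/nim/example:latest",  # placeholder image tag
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}   # one GPU for this pod
                ),
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-factory", body=pod)
```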
Mirantis: Mirantis k0rdent AI automates provisioning, lifecycle management, and multi-tenant orchestration across compute, network, storage, and GPU resources. Automated deployment workflows and validated templates accelerate application delivery while also ensuring consistent, compliant deployments. Designed to meet FED and SLED requirements, k0rdent AI streamlines operations through built-in authentication, observability, and auditing – with required core services such as DNS, IDM, and VPN. Via agency tenants, k0rdent AI deploys training, fine-tuning, and inference workloads alongside data solutions such as PostgreSQL+PGVector, Weaviate, or ChromaDB – ensuring secure access to advanced AI capabilities across mission-critical environments.
Mirantis has a long track record of providing accredited, STIG- and FedRAMP-aligned solutions for federal and defense customers. Its technologies are certified across Linux and Windows environments, supporting full-stack lifecycle management — even in air-gapped, classified, or disconnected infrastructures. The NVIDIA AI Factory for Government, meanwhile, delivers an end-to-end, validated AI platform design that integrates NVIDIA’s ecosystem of frameworks, libraries, and data center software — including PyTorch, TensorRT, Triton, TAO, and GPU Operator — as part of an architecture built for assurance and performance.
Nutanix: Nutanix Enterprise AI is a full-stack AI inference platform that provides streamlined deployment and management for large language models (LLMs) and supports rapid, secure deployment and ongoing governance of AI workloads. Built on the Nutanix Kubernetes Platform (NKP), it includes an enterprise-grade Kubernetes layer with built-in resiliency, security, and simplified day 2 operations with standardized management for fleets of clusters across public clouds, data centers, and the edge. This combined solution empowers enterprises to efficiently build and scale AI agents and deploy generative AI applications, with unified workflows across on-prem, edge, and major cloud platforms. Both the Nutanix Kubernetes Platform and Nutanix Enterprise AI (NAI) are fully compatible with NVIDIA AI Enterprise on NVIDIA-Certified Systems.
NAI and NKP are part of the Nutanix Cloud Platform (NCP) that is tailored for federal and mission-critical environments. It delivers robust, software-defined infrastructure with built-in STIG-hardening, native encryption, micro-segmentation, Zero Trust alignment, and certifications such as CISA’s CDM and DoDIN APL. Nutanix is trusted for air-gapped, classified, and unclassified network deployments, enabling rapid, compliant rollout with centralized management and flexible scaling. For organizations operating in government-regulated environments, Nutanix solutions can be governed by the existing ATO for cloud enclaves, making it possible to uphold stringent operational and compliance requirements regardless of deployment.
Spectro Cloud: Spectro Cloud provides PaletteAI Secure backed by the VerteX platform, a versatile solution for deploying and managing Kubernetes-based environments in government and highly regulated sectors. With PaletteAI Secure, organizations can centrally orchestrate both container and virtual machine workloads across on-premises infrastructures such as VMware, OpenStack, Nutanix, and bare metal, as well as multi-cloud deployments with AWS, Azure, Google Cloud, and their respective secret and top secret regions. The platform is available as a self-hosted deployment, enabling unified management for air-gapped, classified, and edge environments. Spectro Cloud also partners with NVIDIA, enabling efficient lifecycle management of NVIDIA BlueField DPUs, integration with NVIDIA AI Enterprise and DOCA, and streamlined deployment of GPU-accelerated AI infrastructure and edge applications across diverse environments.
Security and compliance are foundational to PaletteAI Secure, which is fully FIPS 140-3 validated, supporting certified cryptographic modules, strict multi-environment separation, and a choice of operating systems and Kubernetes distributions that comply with federal cryptographic standards. Its architecture is specifically designed to support air-gapped and classified deployments, with granular governance and audit trails, vulnerability scanning, and automated remediation, ensuring that regulated agencies can confidently deploy and scale mission critical Kubernetes clusters while meeting the highest benchmarks for confidentiality, integrity, and availability.
Broadcom: Broadcom (through its VMware products) and NVIDIA have a decade-long partnership that continues to advance capabilities for modern private cloud and artificial intelligence deployments. The companies’ joint innovations began with GPU virtualization and have evolved to include co-engineered platforms such as VMware Private AI Foundation with NVIDIA. This joint AI platform, built and run on VMware Cloud Foundation (VCF), simplifies AI deployments and is widely adopted in government, public sector, and enterprises addressing privacy and security, choice, cost, performance, and compliance concerns. VCF combines workload security, software-defined networking, role-based access controls, automated patching, and policy-driven segmentation to help agencies reduce risk while supporting regulatory compliance (via STIG and FIPS) and continuous operations. VCF Private AI services, which will become part of the VCF subscription, help enterprises build and deploy private and secure AI models with advanced safety features such as Model Store and air-gapped support.
## Observability
Dynatrace: Dynatrace delivers industry-leading observability, application security protection, contextual analytics, and workflow automation for mission-critical agentic AI environments. Dynatrace provides automated discovery, topology mapping, and AI-powered analytics for complex, dynamic AI ecosystems with full-stack, end-to-end tracing. Customers benefit from real-time insights, automated anomaly detection, A/B model testing, data governance and audit trails, and runtime application security vulnerability detection and remediation (e.g., OSS/CLV, DISA STIG, NIST SP 800-53) for AI services built with NVIDIA AI Enterprise, NIM, and related libraries.
Fiddler: Fiddler is a pioneer in AI Observability and Security for responsible AI. With evaluation, monitoring, analytics, explainability, and guardrails capabilities, Fiddler delivers end-to-end observability, providing visibility, context, and control from pre-production to production. Teams gain actionable insights to build accurate, safe, and trustworthy AI agents, generative, and predictive applications. For organizations building AI factories with agents and complex models, Fiddler delivers aggregate and granular insights across the agentic hierarchy to continuously improve performance and reliability. Fiddler integrates with major ecosystems, including NVIDIA accelerated platforms and NIM, and supports human-in-the-loop decision making for mission-critical deployments.
Fiddler’s security-centric posture upholds the requirements of federal and public-sector environments, with features such as end-to-end visibility, incident alerts, cleared key personnel, and support for Impact Level 5 and 6 on-premises and AWS GovCloud installations that protect sensitive operations. Fiddler maintains an active Authority to Operate (ATO) with federal agencies, helping AI programs protect citizen trust and national security.
Splunk: Splunk Observability delivers comprehensive monitoring and troubleshooting capabilities across hybrid, three-tier, and cloud-native environments for infrastructure, applications, and digital experiences, unifying metrics, logs, and traces for real-time visibility into business health and performance. The platform provides AI-powered insights, automated root cause analysis, customizable dashboards, and intelligent alerting to detect and resolve issues faster. It integrates with certain NVIDIA technologies, monitoring key components including NVIDIA AI Enterprise, the NVIDIA NIM Operator, and NIM microservices for LLM inferencing.
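As a simple illustration of shipping AI-infrastructure telemetry into Splunk, the sketch below posts a GPU utilization datapoint to the Splunk HTTP Event Collector (HEC); the host, token, index, and field names are placeholders, and in a production deployment an exporter or collector would handle this automatically.

```python
# Sketch: forwarding a GPU utilization datapoint to Splunk via the HTTP Event
# Collector (HEC). Host, token, index, and field names are assumptions.
import requests

HEC_URL = "https://splunk.example.gov:8088/services/collector/event"  # placeholder
HEC_TOKEN = "<hec-token>"                                             # placeholder

event = {
    "event": {"gpu_id": 0, "utilization_pct": 87.5, "source": "dcgm-exporter"},
    "sourcetype": "nvidia:gpu:metrics",
    "index": "ai_factory",
}

resp = requests.post(
    HEC_URL,
    json=event,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    timeout=10,
    verify=True,  # keep TLS verification on in regulated environments
)
resp.raise_for_status()
```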
Splunk maintains strong federal credentials, with Splunk Cloud Platform authorized at FedRAMP High and DoD Impact Level 5, and listed on the StateRAMP Authorized Products List. These solutions support secure, resilient AI operations for government agencies while meeting rigorous federal compliance requirements. Splunk Observability Cloud has received FedRAMP “In Process” designation and is pursuing Moderate Impact Level authorization from the FedRAMP Program Management Office.
## Security
CrowdStrike: CrowdStrike provides a robust cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity, data, and AI systems. In an AI Factory, CrowdStrike secures the hardware, containers, workloads, and AI applications and agents. The CrowdStrike Falcon® platform provides real-time visibility and advanced threat detection and response capabilities in part by leveraging NVIDIA AI Enterprise, which is essential for securing valuable IP — such as models and data — as well as the operational integrity of the AI platform. Integration with the NVIDIA Enterprise AI Factory validated design delivers full lifecycle defense, from data ingestion to model deployment and runtime, while also addressing business risks such as data poisoning, model tampering, and exposure of sensitive information.
CrowdStrike has achieved FedRAMP High authorization, delivering strong protection for regulated industries. CrowdStrike® Charlotte AI™ AgentWorks enables analysts to use NVIDIA Nemotron models to build, deploy, and monitor custom security agents without coding. These AI agents can automate a wide range of security use cases, such as threat detection and response, audit trails, governance support, enhanced visibility into security processes, and more, supporting transparent and compliant operations for any organization leveraging NVIDIA AI Enterprise and the Enterprise AI Factory.
Trend Micro: Trend Micro provides an enterprise cybersecurity platform, Trend Vision One™, that can be deployed on-premises, offering protection for servers, containers, and networks within the AI Factory. Their platform leverages NVIDIA AI and accelerated computing to help secure the underlying infrastructure, including NVIDIA-Certified systems, workloads running NVIDIA AI Enterprise, NIM, and accelerated libraries from malware, vulnerabilities, and other threats. This integration contributes directly to the overall security posture of hardware-accelerated environments, making it possible to protect complex enterprise and government use cases with proven, scalable proactive security.
For government and federal needs, Trend maintains FedRAMP Authorization and active Authority to Operate (ATO) with the Trend Vision One platform, which includes Trend Companion AI (AI assistant), advanced AI security solutions, and proven threat protection, detection, and response that ensures 100% data jurisdiction to support agency missions. Trend’s platform is also available in the major cloud service provider (CSP) marketplaces, including AWS, Microsoft Azure, and Google Cloud, ensuring flexible procurement and secure deployment for federal clients.
Fortanix: Fortanix delivers confidential computing and data security solutions for federal agencies, providing tools to secure data at rest, in transit, and in use. The Data Security Manager (DSM) platform supports key management, tokenization, and advanced encryption on FIPS 140-2 Level 3 appliances backed by Intel SGX and, for AI workloads, integrates confidential computing-enabled GPU technology from NVIDIA. Fortanix Armet AI enables secure LLM training and inference and allows federal customers to use NVIDIA AI Enterprise software seamlessly within a confidential computing environment.
Fortanix delivers DSM as an on-premises solution for federal clients, with appliance-based offerings accessible through distributors, resellers, systems integrators, and cloud service provider marketplaces. The platform utilizes fully attestable confidential computing, supports hybrid deployments across trusted cloud and on-premises hardware, and meets strict regulatory and compliance requirements for federal data security.
Protopia AI: Protopia AI’s Stained Glass Transform (SGT) unlocks private inference on multi-tenant AI Factories through NVIDIA NIM support. SGT converts plaintext prompts into irreversible stochastic representations that preserve model accuracy while rendering data indecipherable to operators, co-tenants, or any unauthorized parties, delivering privacy-by-default for federal, sovereign, and regulated deployments. By unlocking previously restricted data, SGT lifts token throughput, increases GPU utilization, and improves ROI. This makes AI Factories for Government a strong fit for the White House AI Action Plan objectives, allowing agencies to pilot sensitive workloads and deploy open-weight and proprietary models cost-efficiently while keeping plaintext ownership within organizational trust boundaries.
## Artifact Repository
JFrog Artifactory: The JFrog Software Supply Chain Platform serves as a universal artifact repository manager, essential in an AI Factory for managing the complete lifecycle of all binaries, including NGC container images for AI applications and agents, Python packages, model files, and other dependencies. It functions as a single source of truth for all build artifacts, offering robust versioning and seamless integration with CI/CD pipelines to ensure reproducible and reliable builds and deployments. Its advanced security capabilities deliver deep artifact analysis, curation, vulnerability scanning, and license compliance, essential for maintaining a secure and trustworthy software supply chain.
JFrog integrates with NVIDIA NIM to store NIM microservices and models within its unified artifact management platform, providing centralized governance, secure distribution, and streamlined DevSecOps workflows.
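As a small illustration of interacting with Artifactory programmatically, the sketch below lists an instance's Docker-format repositories through the Artifactory REST API before an image promotion step; the base URL, token, and repository filter are placeholders.

```python
# Sketch: querying an Artifactory instance for its Docker repositories before
# promoting a NIM container image. Base URL and token are placeholders.
import requests

BASE_URL = "https://artifactory.example.gov/artifactory"  # placeholder instance
TOKEN = "<access-token>"                                   # placeholder credential

resp = requests.get(
    f"{BASE_URL}/api/repositories",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"packageType": "docker"},  # filter to container registries
    timeout=30,
)
resp.raise_for_status()

for repo in resp.json():
    print(repo["key"], repo["type"])
```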
JFrog provides federal agencies with an end-to-end DevSecOps solution that manages, secures, and monitors every artifact across the software supply chain. JFrog’s advanced security scanning continuously analyzes every binary and container for vulnerabilities and compliance risks, ensuring that only trusted, verified components progress from development to production. This integration helps organizations meet mandates such as the NIST SP 800-218 Secure Software Development Framework, providing rapid, automated, and fully auditable protection for government software pipelines.