Azure Stack HCI (ASHCI) Deployment Guide (Latest)

Before You Begin

This section describes building a proof of concept (POC), sizing your VDI environment, and the general prerequisites and preparatory steps that must be addressed before deployment.

You should test your unique workloads to determine the best NVIDIA virtual GPU solution for your organization's needs and goals. The most successful customer deployments start with a POC and are tuned throughout the deployment lifecycle. Beginning with a POC enables customers to understand the expectations and behavior of their users and to optimize the deployment for the best user density while maintaining the required performance levels. Continued monitoring is essential because user behavior can change over the course of a project or as an individual's role changes within the organization. A user who was once a light graphics user (vApps, vPC) might become a heavy graphics user of professional visualization applications (RTX vWS) after changing teams or projects.

Consider the following during your POC:

  • Conduct a comprehensive review of all user groups, their workloads, the applications they use, and their current and future projections

  • Develop a vision for balancing user density against end-user experience, backed by measurement and analysis

  • Gather feedback from IT and end users regarding infrastructure and productivity needs

Based on your POC, we recommend sizing an appropriate environment for each user group you intend to serve. NVIDIA provides in-depth sizing guides to help you optimally scale your organization's workloads.

Refer to the appropriate NVIDIA sizing guides to build your NVIDIA vGPU environment.

As an overview:

  • Scope your environment for the needs of each end-user

  • Run a proof of concept for each deployment type

  • Implement the NVIDIA recommended sizing methodology (a sizing sketch follows this list)

  • Utilize benchmark testing to help validate your deployment

  • Utilize NVIDIA-specific and industry-wide performance tools for monitoring

  • Ensure performance and experience metrics are within acceptable thresholds
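
To make the sizing arithmetic concrete, here is a minimal Python sketch, assuming framebuffer-based vGPU profile sizing; the host shape and the 2 GB profile are hypothetical examples, not NVIDIA-validated figures:

```python
# Minimal sizing sketch: vGPU profiles partition a GPU's framebuffer, so
# per-GPU user density is framebuffer capacity divided by profile size.
# The host shape and profile size below are illustrative assumptions;
# validate real densities with your own POC.

def users_per_host(gpus_per_host: int, fb_per_gpu_gb: int, profile_fb_gb: int) -> int:
    return gpus_per_host * (fb_per_gpu_gb // profile_fb_gb)

# Example: a host with two 24 GB GPUs and a hypothetical 2 GB per-user
# profile supports up to 24 concurrent vGPU users.
print(users_per_host(gpus_per_host=2, fb_per_gpu_gb=24, profile_fb_gb=2))  # 24
```

Treat this as an upper bound: real density also depends on workload intensity, vCPU and memory headroom, and scheduler behavior, which is why the benchmarking and monitoring steps above matter.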

Before deploying Azure Stack HCI, version 23H2, ensure that your hardware is up to date by:

  • Determining the current version of your Solution Builder Extension (SBE) package (see the sketch after this list).

  • Finding the best method to download, install, and update your SBE package.
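
The sketch below shows one way to read the current SBE version from Python by shelling out to PowerShell. It assumes the Get-SolutionUpdateEnvironment cmdlet from the Azure Stack HCI, version 23H2 update tooling; confirm the cmdlet and property names against Microsoft and OEM documentation before relying on them:

```python
# Hedged sketch: query the current Solution Builder Extension (SBE)
# version via PowerShell. Assumes the Get-SolutionUpdateEnvironment
# cmdlet (Azure Stack HCI, version 23H2 update tooling) is available.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-SolutionUpdateEnvironment).CurrentSbeVersion"],
    capture_output=True, text=True, check=True,
)
print("Current SBE version:", result.stdout.strip())
```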

Azure Stack HCI Hardware Requirements

  • Number of servers: 1 to 16 servers are supported. Each server must be the same model and manufacturer, have the same network adapters, and have the same number and type of storage drives. (A pre-flight check sketch follows this list.)

  • CPU: A 64-bit Intel Nehalem grade or AMD EPYC or later compatible processor with second-level address translation (SLAT).

  • Memory: A minimum of 32 GB RAM per node.

  • Host network adapters: At least two network adapters listed in the Windows Server Catalog, or dedicated network adapters per intent (the storage intent requires two dedicated adapters). For more information, see the Windows Server Catalog.

  • Boot drive: A minimum size of 200 GB.

  • Data drives: At least two disks with a minimum capacity of 500 GB each (SSD or HDD). Single servers must use only a single drive type: Nonvolatile Memory Express (NVMe) or Solid-State Drive (SSD).

  • Trusted Platform Module (TPM): TPM version 2.0 hardware must be present and turned on.

  • Secure boot: Secure Boot must be present and turned on.
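
Before deployment, it can help to pre-check your node inventory against the list above. The following is a minimal, illustrative Python sketch; the Node fields are hypothetical, so populate them from your own inventory tooling:

```python
# Illustrative pre-flight check against the hardware requirements above.
# The Node fields are hypothetical; this is a sketch, not an official
# Microsoft or NVIDIA validation tool.
from dataclasses import dataclass

@dataclass
class Node:
    model: str
    manufacturer: str
    memory_gb: int
    boot_drive_gb: int
    data_drive_gbs: list[int]  # capacities of the data drives, in GB
    tpm_2_0: bool
    secure_boot: bool

def validate_cluster(nodes: list[Node]) -> list[str]:
    errors = []
    if not 1 <= len(nodes) <= 16:
        errors.append("1 to 16 servers are supported")
    if len({(n.model, n.manufacturer) for n in nodes}) > 1:
        errors.append("all servers must be the same model and manufacturer")
    for i, n in enumerate(nodes):
        if n.memory_gb < 32:
            errors.append(f"node {i}: at least 32 GB RAM required")
        if n.boot_drive_gb < 200:
            errors.append(f"node {i}: boot drive must be at least 200 GB")
        if sum(1 for gb in n.data_drive_gbs if gb >= 500) < 2:
            errors.append(f"node {i}: at least two 500 GB data drives required")
        if not (n.tpm_2_0 and n.secure_boot):
            errors.append(f"node {i}: TPM 2.0 and Secure Boot must be on")
    return errors
```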

For demonstration purposes, this guide uses Azure Stack HCI version 23H2.

Review the deployment prerequisites and system requirements for Azure Stack HCI. Also, review the prerequisites for the GPU partitioning feature.

Ensure that you use the appropriate NVIDIA GPU for your use case. Refer to the NVIDIA Virtual GPU Positioning Guide to better understand which GPU suits your deployment requirements. For additional guidance, contact your NVIDIA and Microsoft sales representatives.

Azure requirements:

  • Azure subscription: You can use an existing subscription of any type.

  • Azure account: If you don’t already have an Azure account, first create an account.

GPU partitioning on Azure Stack HCI supports these guest operating systems:

  • Windows 10 or later

  • Windows 10 Enterprise multi-session or later

  • Windows Server 2019 or later

  • Linux Ubuntu 20.04 LTS, Linux Ubuntu 22.04 LTS

  • Red Hat Enterprise Linux 9.4, 8.9.0, and 7.9.0

The following GPUs support GPU partitioning:

  • NVIDIA A2

  • NVIDIA A10

  • NVIDIA A16

  • NVIDIA A40

  • NVIDIA L4

  • NVIDIA L40

  • NVIDIA L40S

Note

GPU partitioning is unsupported if your configuration isn't homogeneous; all GPUs in the system must be the same type. You can't assign a physical GPU both for Discrete Device Assignment (DDA) and as a partitionable GPU.
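
As a quick illustration of this rule, the Python sketch below flags heterogeneous or unsupported configurations. The supported-model set mirrors the list above; the inventory input is a hypothetical placeholder:

```python
# Sketch of the homogeneity rule: GPU partitioning requires every GPU in
# the system to be the same supported model. The inventory list is a
# hypothetical input; gather it from your own tooling.
SUPPORTED = {"NVIDIA A2", "NVIDIA A10", "NVIDIA A16", "NVIDIA A40",
             "NVIDIA L4", "NVIDIA L40", "NVIDIA L40S"}

def check_partitioning(gpu_inventory: list[str]) -> None:
    models = set(gpu_inventory)
    if len(models) > 1:
        raise ValueError(f"heterogeneous GPUs are not supported: {models}")
    if models and not models <= SUPPORTED:
        raise ValueError(f"unsupported GPU model for partitioning: {models}")

check_partitioning(["NVIDIA A16", "NVIDIA A16"])  # OK: homogeneous and supported
```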

Check with your OEM regarding the necessary BIOS settings for Azure Stack HCI, version 23H2. These settings may include hardware virtualization, TPM enabled, and Secured-core.

Configure the BIOS to enable Intel VT or AMD-V.
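
To spot-check the result on a host, a minimal Python sketch might look like the following. It is Windows-only and parses the locale-dependent output of the built-in systeminfo command, so treat the string match as an assumption:

```python
# Quick spot check that hardware virtualization (Intel VT / AMD-V) is
# enabled in firmware, by parsing `systeminfo` output on Windows.
# Note: once the Hyper-V role is installed, systeminfo reports
# "A hypervisor has been detected." instead of the per-item lines.
import subprocess

out = subprocess.run(["systeminfo"], capture_output=True, text=True).stdout
enabled = "Virtualization Enabled In Firmware: Yes" in out
print("Virtualization enabled in firmware:", enabled)
```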

© Copyright 2024, NVIDIA Corporation. Last updated on Aug 2, 2024.