NVIDIA Documentation Hub

Get started by exploring the latest technical information and product documentation

  • Documentation Center
    The integration of NVIDIA RAPIDS into the Cloudera Data Platform (CDP) provides transparent GPU acceleration of data analytics workloads using Apache Spark. This documentation describes the integration and suggested reference architectures for deployment.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    This documentation should be of interest to cluster admins and support personnel of enterprise GPU deployments. It includes monitoring and management tools and application programming interfaces (APIs), in-field diagnostics and health monitoring, and cluster setup and deployment.
  • Documentation Center
    Developer documentation for Megatron Core covers API documentation, a quickstart guide, and deep dives into the advanced GPU techniques needed to optimize LLM performance at scale.
  • Documentation Center
    NeMo Curator on DGX Cloud provides a cloud-based, GPU-accelerated solution for curating video datasets for post-training. This user guide walks you through the UI and API process for uploading and managing datasets for curation.
  • Product
    NeMo Retriever Extraction (NV-Ingest) is a scalable, performance-oriented document content and metadata extraction microservice. NV-Ingest uses specialized NVIDIA NIM microservices to find, contextualize, and extract text, tables, charts and images for use in downstream generative applications.
  • Documentation Center
    nvCOMP is a high-performance, GPU-enabled data compression library that includes both open-source and non-open-source components. The nvCOMP library provides fast lossless data compression and decompression using a GPU, and its generic compression interfaces enable developers to use high-performance GPU compressors in their applications.
  • Product
    NVIDIA AgentIQ is an open-source library for connecting, evaluating, and accelerating teams of AI agents.
  • Product
    NVIDIA AI Aerial™ is a suite of accelerated computing platforms, software, and services for designing, simulating, and operating wireless networks. Aerial contains hardened RAN software libraries for telcos, cloud service providers (CSPs), and enterprises building commercial 5G networks. Academic and industry researchers can access Aerial on cloud or on-premises setups for advanced wireless and AI/machine learning (ML) research for 6G.
    • Edge Computing
    • Telecommunications
  • Product
    NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade co-pilots and other generative AI applications. Easy-to-use microservices provide optimized model performance with enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their businesses on AI.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    A simulation platform that allows users to model data center deployments with full software functionality, creating a digital twin. Transform and streamline network operations by simulating, validating, and automating changes and updates.
  • Documentation Center
    NVIDIA Ansel is a revolutionary way to capture in-game shots and share the moment. Compose your screenshots from any position, adjust them with post-process filters, capture HDR images in high-fidelity formats, and share them in 360 degrees using your mobile phone, PC, or VR headset.
  • Documentation Center
    Your guide to NVIDIA APIs including NIM and CUDA-X microservices.
  • Product
    The NVIDIA Attestation Suite enhances Confidential Computing by providing robust mechanisms to ensure the integrity and security of devices and platforms. The suite includes NVIDIA Remote Attestation Service (NRAS), the Reference Integrity Manifest (RIM) Service, and the NDIS OCSP Responder.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Product
    NVIDIA Base Command Manager streamlines cluster provisioning, workload management, and infrastructure monitoring. It provides all the tools you need to deploy and manage an AI data center. NVIDIA Base Command Manager Essentials comprises the features of NVIDIA Base Command Manager that are certified for use with NVIDIA AI Enterprise.
    • Data Center / Cloud
  • Technical Overview
    NVIDIA Base Command Platform is a world-class infrastructure solution for businesses and their data scientists who need a premium AI development experience.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    NVIDIA Base OS implements the stable and fully qualified operating systems for running AI, machine learning, and analytics applications on the DGX platform. It includes system-specific configurations, drivers, and diagnostic and monitoring tools and is available for Ubuntu, Red Hat Enterprise Linux, and Rocky Linux.
    • Data Center / Cloud
  • Documentation Center
    NVIDIA Bright Cluster Manager offers fast deployment and end-to-end management for heterogeneous HPC and AI server clusters at the edge, in the data center, and in multi/hybrid-cloud environments. It automates provisioning and administration for clusters ranging in size from a single node to hundreds of thousands of nodes, supports CPU-based and NVIDIA GPU-accelerated systems, and provides orchestration with Kubernetes.
    • HPC / Scientific Computing
    • Edge Computing
    • Data Center / Cloud
  • Documentation Center
    NVIDIA Capture SDK (formerly GRID SDK) enables developers to easily and efficiently capture, and optionally encode, the display content.
  • Documentation Center
    NVIDIA’s program that enables enterprises to confidently deploy hardware solutions that optimally run accelerated workloads—from desktop to data center to edge.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    NVIDIA® Clara™ is an open, scalable computing platform that enables developers to build and deploy medical imaging applications into hybrid (embedded, on-premises, or cloud) computing environments to create intelligent instruments and automate healthcare workflows.
    • Healthcare & Life Sciences
    • Computer Vision / Video Analytics
  • Product
    Serverless API to deploy and manage AI workloads on GPUs at planetary scale.
  • Product
    NVIDIA cloud-native technologies enable developers to build and run GPU-accelerated containers using Docker and Kubernetes.
    • Cloud Services
    • Data Center / Cloud
  • Documentation Center
    CloudXR is NVIDIA's solution for streaming virtual reality (VR), augmented reality (AR), and mixed reality (MR) content from any OpenVR XR application on a remote server: desktop, cloud, data center, or edge.
  • Documentation Center
    Compute Sanitizer is a functional correctness checking suite included in the CUDA toolkit. This suite contains multiple tools that can perform different types of checks. The memcheck tool is capable of precisely detecting and attributing out of bounds and misaligned memory access errors in CUDA applications. The tool can also report hardware exceptions encountered by the GPU. The racecheck tool can report shared memory data access hazards that can cause data races. The initcheck tool can report cases where the GPU performs uninitialized accesses to global memory. The synccheck tool can report cases where the application is attempting invalid usages of synchronization primitives. This document describes the usage of these tools.
  • Product
    A developer-first world foundation model (WFM) platform designed to help Physical AI developers build their Physical AI systems better and faster.
  • Product
    The NVIDIA® CUDA® Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize, and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
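    The Toolkit itself targets C and C++; as a rough illustration of the same GPU-offload model from Python, here is a minimal sketch using Numba's CUDA JIT (Numba, and the SAXPY kernel shown, are illustrative assumptions rather than part of the CUDA Toolkit documentation):

        import numpy as np
        from numba import cuda

        @cuda.jit
        def saxpy(a, x, y, out):
            # One GPU thread per element: out[i] = a * x[i] + y[i]
            i = cuda.grid(1)
            if i < x.size:
                out[i] = a * x[i] + y[i]

        n = 1 << 20
        x = np.random.rand(n).astype(np.float32)
        y = np.random.rand(n).astype(np.float32)
        out = np.zeros_like(x)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        # Host arrays are copied to the GPU, the kernel runs, and results are copied back.
        saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)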
  • Documentation Center
    The NVIDIA CUDA® Deep Neural Network (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
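    Most developers reach cuDNN through a deep learning framework rather than its C API; a minimal sketch assuming PyTorch, which dispatches convolutions to cuDNN on NVIDIA GPUs:

        import torch

        # Let cuDNN benchmark and pick the fastest convolution algorithm for these shapes.
        torch.backends.cudnn.benchmark = True

        conv = torch.nn.Conv2d(64, 128, kernel_size=3, padding=1).cuda()
        x = torch.randn(32, 64, 56, 56, device="cuda")
        y = conv(x)              # forward convolution runs through cuDNN
        y.sum().backward()       # backward data/weight convolutions also use cuDNN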
  • Documentation Center
    NVIDIA cuOpt is a high-performance, on-demand routing optimization service fully managed by NVIDIA.
    • Data Science
    • Robotics
  • Product
    NVIDIA cuVS is an open-source library for GPU-accelerated vector search and data clustering that enables higher throughput search, lower latency, and faster index build times.
  • Product
    The NVIDIA Data Loading Library (DALI) is a collection of highly optimized building blocks, and an execution engine, for accelerating the pre-processing of input data for deep learning applications. DALI provides both the performance and the flexibility for accelerating different data pipelines as a single library. This single library can then be easily integrated into different deep learning training and inference applications.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
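    A minimal sketch of a DALI pipeline in Python (the data directory and image size are placeholder assumptions):

        from nvidia.dali import pipeline_def, fn

        @pipeline_def(batch_size=32, num_threads=4, device_id=0)
        def image_pipeline():
            # Read JPEG files, decode them on the GPU ("mixed"), and resize for training.
            jpegs, labels = fn.readers.file(file_root="/data/images", random_shuffle=True)
            images = fn.decoders.image(jpegs, device="mixed")
            images = fn.resize(images, resize_x=224, resize_y=224)
            return images, labels

        pipe = image_pipeline()
        pipe.build()
        images, labels = pipe.run()   # one pre-processed batch, ready to feed a framework iterator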
  • Documentation Center
    NVIDIA Data Center GPU drivers are used in Data Center GPU enterprise deployments for AI, HPC, and accelerated computing workloads. Documentation includes release notes, supported platforms, and cluster setup and deployment.
  • Documentation Center
    NVIDIA Data Center GPU Manager (DCGM) is a suite of tools for managing and monitoring NVIDIA Data Center GPUs in cluster environments.
  • Documentation Center
    Deep Graph Library (DGL) is a framework-neutral, easy-to-use, and scalable Python library for implementing and training Graph Neural Networks (GNNs). Being framework-neutral, DGL integrates easily into an existing PyTorch, TensorFlow, or Apache MXNet workflow.
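    A minimal sketch with the PyTorch backend (the toy graph and feature sizes are made up for illustration):

        import torch
        import dgl
        from dgl.nn import GraphConv

        # A toy 4-node graph with edges 0->1, 1->2, 2->3, plus random node features.
        g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])), num_nodes=4)
        g = dgl.add_self_loop(g)          # avoid zero-in-degree nodes for GraphConv
        feat = torch.randn(4, 16)

        conv = GraphConv(16, 8)
        h = conv(g, feat)                 # one round of GNN message passing
        print(h.shape)                    # torch.Size([4, 8])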
  • Documentation Center
    GPUs accelerate machine learning operations by performing calculations in parallel. Many operations, especially those representable as matrix multiplications, see good acceleration right out of the box. Even better performance can be achieved by tuning operation parameters to use GPU resources efficiently. The performance documents present the tips that we think are most widely useful.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
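    For instance, two widely applicable tips from these guides are enabling TF32 math on Ampere-and-later GPUs and sizing layers in multiples of 8 so matrix multiplications map well onto Tensor Cores; a minimal sketch assuming PyTorch:

        import torch

        # Allow TF32 so FP32 matmuls and convolutions can use Tensor Cores.
        torch.backends.cuda.matmul.allow_tf32 = True
        torch.backends.cudnn.allow_tf32 = True

        # Dimensions chosen as multiples of 8 to keep the GEMM Tensor Core friendly.
        layer = torch.nn.Linear(1024, 4096).cuda()
        x = torch.randn(256, 1024, device="cuda")
        y = layer(x)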
  • Product
    NVIDIA DGX Cloud is an AI platform for enterprise developers, optimized for the demands of generative AI.
  • Product
    Built from the ground up for enterprise AI, the NVIDIA DGX platform incorporates the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development and training solution. Every aspect of the DGX platform is infused with NVIDIA AI expertise, featuring world-class software, record-breaking NVIDIA-accelerated infrastructure in the cloud or on-premises, and direct access to NVIDIA DGXPerts to speed the ROI of AI for every enterprise.
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
    • HPC / Scientific Computing
  • Product
    Deployment and management guides for NVIDIA DGX SuperPOD, an AI data center infrastructure platform that enables IT to deliver performance—without compromise—for every user and workload. DGX SuperPOD offers leadership-class accelerated infrastructure and agile, scalable performance for the most challenging AI and high-performance computing (HPC) workloads, with industry-proven results.
    • Data Center / Cloud
  • Product
    System documentation for the DGX AI supercomputers that deliver world-class performance for large generative AI and mainstream AI workloads.
    • Data Center / Cloud
  • Documentation Center
    The NVIDIA Deep Learning GPU Training System (DIGITS) can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object-detection tasks. DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best-performing model from the results browser for deployment.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    The NVIDIA EGX platform delivers the power of accelerated AI computing to the edge with a cloud-native software stack (EGX stack), a range of validated servers and devices, Helm charts, and partners who offer EGX through their products and services.
  • Product
    NVIDIA’s accelerated computing, visualization, and networking solutions are expediting the speed of business outcomes. NVIDIA’s experts are here for you at every step in this fast-paced journey. With our expansive support tiers, fast implementations, robust professional services, market-leading education, and high caliber technical certifications, we are here to help you achieve success with all parts of NVIDIA’s accelerated computing, visualization, and networking platform.
  • Documentation Center
    FLARE (Federated Learning Application Runtime Environment) is NVIDIA's open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows to a privacy-preserving federated paradigm. FLARE makes it possible to build robust, generalizable AI models without sharing data.
  • Product
    Documentation for GameWorks-related products and technologies, including libraries (NVAPI, OpenAutomate), code samples (DirectX, OpenGL), and developer tools (Nsight, NVIDIA System Profiler).
    • Gaming
    • Content Creation / Rendering
  • Product
    NVIDIA LaunchPad is a free program that provides users short-term access to a large catalog of hands-on labs. Now enterprises and organizations can immediately tap into the necessary hardware and software stacks to experience end-to-end solution workflows in the areas of AI, data science, 3D design collaboration and simulation, and more.
    • Edge Computing
    • Data Center / Cloud
  • Product
    Deep learning (DL) frameworks offer building blocks for designing, training, and validating deep neural networks through a high-level programming interface. Widely-used DL frameworks, such as PyTorch, TensorFlow, PyTorch Geometric, DGL, and others, rely on GPU-accelerated libraries, such as cuDNN, NCCL, and DALI to deliver high-performance, multi-GPU-accelerated training.
  • Documentation Center
    NVIDIA GVDB Voxels is a new framework for simulation, compute and rendering of sparse voxels on the GPU.
  • Documentation Center
    NVIDIA MAGNUM IO™ software development kit (SDK) enables developers to remove input/output (IO) bottlenecks in AI, high performance computing (HPC), data science, and visualization applications, reducing the end-to-end time of their workflows. Magnum IO covers all aspects of data movement between CPUs, GPUs, DPUs, and storage subsystems in virtualized, containerized, and bare-metal environments.
  • Documentation Center
    Metropolis Microservices for Jetson is a platform that simplifies development, deployment, and management of edge AI applications on NVIDIA Jetson. It provides a modular and extensible architecture that lets developers distill large, complex applications into smaller modular microservices with APIs to integrate into other apps and services.
  • Product
    NVIDIA TAO eliminates the time-consuming process of building and fine-tuning DNNs from scratch for IVA applications.
    • Public Sector
    • Edge Computing
    • Computer Vision / Video Analytics
  • Product
    NVIDIA® License System is used to serve a pool of floating licenses to NVIDIA licensed products. The NVIDIA License System is configured with licenses obtained from the NVIDIA Licensing Portal.
    • Data Center / Cloud
  • Documentation Center
    Unique IP-based solution that boosts video and data streaming performance. Rivermax together with NVIDIA GPU accelerated computing technologies unlocks innovation for a wide range of applications in Media and Entertainment (M&E), Broadcast, Healthcare, Smart Cities and more.
  • Documentation Center
    The NVIDIA Virtual Reality Capture and Replay (VCR) SDK enables developers and users to accurately capture and replay VR sessions for performance testing, scene troubleshooting, and more.
  • Documentation Center
    The Triton Inference Server provides an optimized cloud and edge inferencing solution.
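    A minimal client-side sketch assuming the Python HTTP client (tritonclient) and a hypothetical deployed model named "my_model" with one FP32 input and one FP32 output:

        import numpy as np
        import tritonclient.http as httpclient

        client = httpclient.InferenceServerClient(url="localhost:8000")

        data = np.random.rand(1, 16).astype(np.float32)
        inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
        inp.set_data_from_numpy(data)

        # Send the request to the running Triton server and read back the output tensor.
        result = client.infer(model_name="my_model", inputs=[inp])
        print(result.as_numpy("OUTPUT0"))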
  • Product
    NVIDIA NVSHMEM is a “shared memory” library that provides an easy-to-use CPU-side interface to allocate pinned memory that is symmetrically distributed across a cluster of NVIDIA GPUs.
    • Data Center / Cloud
  • Product
    Accelerated Networks for Modern Workloads: One-third of the 30 million data center servers shipped each year are consumed running the software-defined data center stack.
    • Networking
  • Documentation Center
    The NVIDIA Material Definition Language (MDL) is a programming language for defining physically based materials for rendering. The MDL SDK is a set of tools to integrate MDL support into rendering applications. It contains components for loading, inspecting, and editing material definitions, as well as for compiling MDL functions to GLSL, HLSL, native x86, PTX, and LLVM-IR. With the NVIDIA MDL SDK, any physically based renderer can easily add support for MDL and join the MDL ecosystem.
  • Documentation Center
    GPU-accelerated enhancements to the gradient boosting library XGBoost provide fast and accurate ways to solve large-scale AI and data science problems.
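    A minimal training sketch (parameter names follow recent XGBoost releases, where GPU execution is selected with device="cuda"; older releases used tree_method="gpu_hist"):

        import numpy as np
        import xgboost as xgb

        X = np.random.rand(10_000, 32).astype(np.float32)
        y = (X[:, 0] > 0.5).astype(np.int32)

        dtrain = xgb.DMatrix(X, label=y)
        params = {
            "objective": "binary:logistic",
            "tree_method": "hist",
            "device": "cuda",     # build histograms and grow trees on the GPU
        }
        booster = xgb.train(params, dtrain, num_boost_round=100)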
  • Product
    NVIDIA Run:ai is a GPU orchestration and optimization platform that helps organizations maximize compute utilization for AI workloads. By optimizing the use of expensive compute resources, NVIDIA Run:ai accelerates AI development cycles, and drives faster time-to-market for AI-powered innovations.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Documentation Center
    Warp and Blend are interfaces exposed in NVAPI for warping (image geometry corrections) and blending (intensity and black level adjustment) a single display output or multiple display outputs.
  • Documentation Center
    NVIDIA PhysX is a scalable multi-platform physics simulation solution supporting a wide range of devices, from smartphones to high-end multicore CPUs and GPUs. The powerful SDK brings high-performance and precision accuracy to industrial simulation use cases from traditional VFX and game development workflows, to high-fidelity robotics, medical simulation, and scientific visualization applications.
  • Product
    NVIDIA NeMo™ Framework is a development platform for building custom generative AI models. The framework supports custom models for language (LLMs), multimodal, computer vision (CV), automatic speech recognition (ASR), natural language processing (NLP), and text to speech (TTS).
    • Generative AI / LLMs
  • Product
    NVIDIA IGX Orin™ is an industrial-grade platform that combines enterprise-level hardware, software, and support. As a single, holistic platform, IGX allows companies to focus on application development and realize the benefits of AI faster.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    NVIDIA GPUDirect Storage (GDS) enables the fastest data path between GPU memory and storage by avoiding copies to and from system memory, thereby increasing storage input/output (IO) bandwidth and decreasing latency and CPU utilization.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Documentation Center
    The NVIDIA Collective Communications Library (NCCL) is a library of multi-GPU collective communication primitives that are topology-aware and can be easily integrated into applications. Collective communication algorithms employ many processors working in concert to aggregate data. NCCL is not a full-blown parallel programming framework; rather, it’s a library focused on accelerating collective communication primitives.
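    NCCL exposes a C API, and most users reach it through a framework; a minimal sketch assuming PyTorch's distributed package with the NCCL backend, launched with torchrun so that each process drives one GPU:

        import os
        import torch
        import torch.distributed as dist

        # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        t = torch.ones(4, device="cuda") * dist.get_rank()
        dist.all_reduce(t, op=dist.ReduceOp.SUM)   # NCCL all-reduce across all GPUs
        print(dist.get_rank(), t)

        dist.destroy_process_group()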
  • Documentation Center
    NVIDIA Morpheus is an open AI application framework that provides cybersecurity developers with a highly optimized AI pipeline and pre-trained AI capabilities and allows them to instantaneously inspect all IP traffic across their data center fabric.
  • Product
    NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
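    A minimal usage sketch in Python (the ./config directory containing the rails definition and model settings is an assumed placeholder):

        from nemoguardrails import LLMRails, RailsConfig

        # Load rails (flows, prompts, model settings) from a config directory.
        config = RailsConfig.from_path("./config")
        rails = LLMRails(config)

        response = rails.generate(messages=[
            {"role": "user", "content": "Can you help me reset my password?"}
        ])
        print(response["content"])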
  • Product
    UCF is a fully accelerated framework for developing real-time edge AI applications.
    • Manufacturing
    • Retail / Consumer Packaged Goods
    • Automotive / Transportation
  • Documentation Center
    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, providing better performance with lower memory utilization in both training and inference. It includes an FP8 automatic-mixed-precision-style API that can be used seamlessly with your model code.
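    A minimal sketch of the FP8 autocast-style API assuming the PyTorch integration (layer sizes are arbitrary, and FP8 execution requires a GPU generation that supports it, such as Hopper):

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common import recipe

        fp8_recipe = recipe.DelayedScaling()       # default FP8 scaling recipe

        layer = te.Linear(1024, 1024, bias=True).cuda()
        x = torch.randn(32, 1024, device="cuda")

        # Inside the context, supported layers run their GEMMs in FP8.
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
            y = layer(x)
        y.sum().backward()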
  • Product
    Create block-compressed textures and write custom asset pipelines using NVTT 3, an SDK for CUDA-accelerated texture compression and image processing.
    • Gaming
    • Content Creation / Rendering
  • Documentation Center
    OpenACC is a directive-based programming model designed to provide a simple yet powerful approach to accelerators without significant programming effort. With OpenACC, a single version of the source code delivers performance portability across platforms. OpenACC offers scientists and researchers a quick path to accelerated computing with less programming effort. By inserting compiler “hints” or directives into your C11, C++17, or Fortran 2003 code, the NVIDIA OpenACC compiler lets you offload and run your code on the GPU and CPU.
  • Documentation Center
    Extract valuable insights from large quantities of video and sensor data with NVIDIA Metropolis for smart cities. Build with a powerful set of software tools, including the DeepStream SDK, NVIDIA TAO Toolkit, pretrained models from the NVIDIA NGC™ catalog, and NVIDIA® TensorRT™. Take advantage of containers to package these applications in a cloud-native format for flexible deployment that can be easily scaled out with the NVIDIA EGX™ platform.
  • Documentation Center
    NVIDIA IndeX is a 3D volumetric interactive visualization SDK that allows scientists and researchers to visualize and interact with massive data sets, make real-time modifications, and navigate to the most pertinent parts of the data, all in real-time, to gather better insights faster. IndeX leverages GPU clusters for scalable, real-time, visualization and computing of multi-valued volumetric data together with embedded geometry data.
  • Product
    NVIDIA® Riva is an SDK for building multimodal conversational systems. Riva is used for building and deploying AI applications that fuse vision, speech, sensors, and services together to achieve conversational AI use cases that are specific to a domain of expertise. It offers a complete workflow to build, train, and deploy AI systems that can use visual cues such as gestures and gaze along with speech in context.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction