NVIDIA Tegra
NVIDIA Jetson Linux Developer Guide
32.4.2 Release

 

ASoC Driver for Jetson Products

 
ALSA
DAPM
Device Tree
ASoC Driver
Advanced Linux Sound Architecture (ALSA) is a framework which defines an API for sound device drivers. The NVIDIA ALSA System-on-Chip (ASoC) driver enables ALSA to work seamlessly with different NVIDIA SoCs. Platform-independent and generic components are maintained by the upstream Linux community.
The ASoC driver may not directly support all audio hardware interfaces or functionality present on Jetson products. Review the following sections to determine whether the driver supports the audio features you need.
For the Jetson Xavier NX Developer Kit and the Jetson TX2 Developer Kit, audio playback is supported by the carrier board’s HDMI™ and DP interfaces. Hardware for audio capture is not included in these developer kits; however, audio capture can be supported using interfaces such as I2S or DMIC.
For NVIDIA® Jetson Nano™ Developer Kit, audio playback is supported by the carrier board’s HDMI™ interface. Hardware for audio capture is not included in the developer kit; however, audio capture can be supported using interfaces such as USB, I2S, or DMIC.
The carrier board included with NVIDIA® Jetson AGX Xavier™ Developer Kit supports audio playback over the front panel audio header via an on-board Realtek ALC5658 codec, the HDMI™ interface, and the USB‑C interfaces (via a USB‑C to Display Port Adapter/USB). Audio capture is supported by the on-board Realtek ALC5658 codec. It is also possible to capture audio via other interfaces such as USB (using a USB microphone), I2S, and DMIC.

ALSA

The ALSA framework is a part of the Linux kernel that is supported and maintained by the larger Linux community. This makes it feasible to adapt the framework to the Jetson platform by designing a driver that leverages NVIDIA audio routing support. ALSA includes a collection of sound card drivers, including actual codec drivers, and can support adding new codec drivers.
ALSA includes libraries and utilities that enable more refined audio control in Linux user space. These libraries allow applications to control audio devices without interacting with kernel space drivers directly. The ALSA utilities include:
amixer
aplay
arecord
The following diagram illustrates the ALSA software hierarchy.
[Figure: ALSA software hierarchy]
User space ALSA applications interact with ALSA core (kernel space) through APIs provided by user-space libraries that initialize the actual hardware codecs at the backend of the audio pipeline.
More information about the ALSA framework is available at:
https://www.kernel.org
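As an informal illustration of these utilities (card and device numbers vary by platform and release; the card index used below assumes the APE sound card enumerates as card 1, as in the ADMAIF mapping described later):
aplay -l                     # list playback PCM devices
arecord -l                   # list capture PCM devices
amixer -c 1 controls         # list the kcontrols exposed by card 1
aplay -D hw:1,0 test.wav     # play a WAV file on card 1, PCM device 0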

DAPM

ALSA is designed to support various functionalities including, but not limited to, dynamic audio routing to the available PCM devices. The component of ALSA core that provides this support is called Dynamic Audio Power Management (DAPM). DAPM controls the power flow into and out of various codec blocks in the audio subsystem, minimizing power consumption. DAPM introduces switches, or kernel controls, in the form of widgets that turn a module’s power on and off and that manipulate the required register bits dynamically from user space applications such as aplay, arecord, and amixer. The widgets are classified into various groups.
In terms of software hierarchy, DAPM is part of the ALSA core, which manages the codec module’s power efficiency. See the ALSA software hierarchy diagram under ALSA for details.
For more information see Clocking and Power Management.

Device Tree

The device tree is a data structure that describes devices on the platform. It is passed to the operating system at boot time to avoid hard coding component details in the operating system. This makes it easier to change hardware configurations without rebuilding the kernel.
The device tree is composed of nodes and properties. Each node can have properties or child nodes. Each property consists of a name and one or more values. Device tree structures must be written in the correct format so that the data structure can be parsed by the operating system.
A simple device tree example is available at Codec Driver Instantiation Using Device Tree.

ASoC Driver

The ASoC driver provides better ALSA support for embedded system-on-chip processors (e.g. DSP, AHUB) and portable audio codecs. It comprises the following components:
Platform driver: Responsible for PCM registration and interfacing with the PCM driver. ADMAIF is the platform driver.
Codec drivers: Any driver that registers the snd_soc_codec_driver structure with the ASoC core can be viewed as a codec driver.
A codec driver must have at least one input or one output.
The structure provides a way to define your own DAPM widgets for power management and kcontrols for register setting from user space.
Machine driver: Connects one or more codec drivers and a PCM driver together for a given platform.
For details on writing a machine driver and identifying a sound card, consult ASoC Machine Driver.

Audio Hub Hardware Architecture

Audio Hub is a hardware module which is intended for audio acceleration. This topic provides an overview of:
The audio hub hardware architecture inside the SoC.
The software architecture of the ASoC driver.
This diagram summarizes the hardware architecture of the ASoC.
[Figure: Audio hub (AHUB) hardware architecture]
The audio hub contains the modules I2S, DSPK, and the Digital MIC Controller (DMIC) that interface with the external world. The audio hub also contains:
Mixer
Sampling Frequency Converter (SFC)
Master Volume Control (MVC)
Audio Multiplexers (AMX)
Audio Demultiplexers (ADX)
An Audio Direct Memory Access (ADMA) component is included (ADMAIF-DMA) to communicate with memory.
The Crossbar (XBAR) facilitates the routing of audio samples through these modules using a proprietary protocol called Audio Client Interface (ACIF).
Module | Jetson Xavier NX and Jetson AGX Xavier Instances | Jetson TX2 Instances | Jetson Nano and TX1 Instances
I2S | 4x | 6x | 2x
DMIC | 2x | 3x | 2x
Mixer | 10 inputs, 5 outputs | 10 inputs, 5 outputs | 10 inputs, 5 outputs
AMX | 4x | 4x | 2x
ADX | 4x | 4x | 2x
SFC | 4x | 4x | 4x
MVC | 2x | 2x | 2x
ADMA | 32 channels | 32 channels | 22 channels
ADMAIF | 20 TX and RX channels | 20 TX and RX channels | 10 TX and RX channels
DSPK | 1x | 1x | None
 
Note:
Not all instances of an I/O module (I2S, DMIC or DSPK) are exposed on each platform. See Board Interfaces for more information about supported I/O instances.
The modules in the audio hub support various kinds of audio devices that are expected to interface with the application processor, such as:
Cellular baseband devices
Different types of audio CODECs
Bluetooth modules
Digital microphones
Digital speakers
The audio hub supports the different interfaces and signal quality requirements of these devices.
Each of the AHUB modules has at least one RX port or one TX port or both. Some AHUB modules have more than one RX/TX port.
The RX ports receive data from XBAR, and the TX ports send data to XBAR. Hence, XBAR is a switch where an audio input can be fed to multiple outputs depending on the use case.
Each ADMAIF has TX and RX FIFOs that support simultaneous recording and playback. ADMA transfers the data to the ADMAIF FIFO for all audio routing scenarios.
For dynamic audio routing examples, see Usage and Examples. For details on hardware configuration for each module, see Xavier Series (SoC) Technical Reference Manual (for Jetson Xavier NX and Jetson AGX Xavier series), Tegra X2 (Parker Series SoC) Technical Reference Manual (for Jetson TX2 series), and Tegra X1 (SoC) Technical Reference Manual (for Jetson Nano and Jetson TX1).

ASoC Driver Software Architecture

 
Platform Driver
ADMAIF
Codec Driver
Codec Driver Instantiation Using Device Tree
XBAR
AMX
ADX
I2S
Mixer
SFC
DMIC
MVC
DSPK
AHUB Client TX Port Names
ASoC Machine Driver
Clocking and Power Management
The software architecture of the ASoC driver for Jetson leverages the features supported by the hardware and conforms to the ALSA framework.
As mentioned earlier, the ASoC driver comprises the platform, codec and machine drivers. The roles of these drivers are described briefly below, and in more detail in subsequent sections.
The ASoC driver provides NVIDIA Audio Hub (AHUB) hardware acceleration to the platform and codec drivers. AHUB Direct Memory Access Interface (ADMAIF) is implemented as a platform driver with PCM interfaces for playback/record. The rest of the AHUB modules, such as the Audio Cross Bar (XBAR), Audio Multiplexer (AMX), Audio Demultiplexer (ADX), and Inter-IC sound (I2S), are implemented as codec drivers. Each of the drivers is connected to XBAR through a Digital Audio Interface (a DAI), inside a machine driver, forming an audio hub.
The machine driver probe instantiates the sound card device and registers all of the PCM interfaces as exposed by ADMAIF. After booting, but before using these interfaces to play back or record audio, you must set up the audio paths inside XBAR. By default, XBAR has no routing connections at boot, and no complete DAPM paths to power on the corresponding widgets. The XBAR driver introduces MUX widgets for all of the audio components and enables custom routing through kcontrols from user space using the ALSA amixer utility. If the audio path is not complete, the DAPM path is not closed, the hardware settings are not applied, and audio output cannot be heard.
For more details on how to set up the route and how to play or record on the PCM interfaces, see Usage and Examples.

Platform Driver

The platform driver initializes and instantiates the ports for playback and capture inside the AHUB.
Users must connect some or all of these ports to form a full audio routing path. For examples of full audio paths, see the examples in Usage and Examples. Note that there are other elements in a full audio path setup that are discussed in subsequent sections and that the playback/capture ports set up by platform driver are only a subset.

ADMAIF

ADMAIF is the platform driver in the ASoC driver. It interfaces with the PCM driver. The PCM driver helps perform DMA operations by overriding the function pointers exposed by the snd_pcm_ops structure. The PCM driver is platform-agnostic, and interacts with the SoC DMA engine’s upstream APIs. The DMA engine then interacts with the platform-specific DMA driver to get the correct DMA settings. The ADMAIF platform driver defines DAIs and registers them with ASoC core.
A Jetson platform has either 10 or 20 ADMAIFs depending on its generation, facilitating either 10 or 20 streams of playback and capture.
Hardware Devices in the ASoC Driver Registered under Primary Audio Codec Board
The ADMAIF channels are mapped to:
/dev/snd/pcmC1Dnp for playback
/dev/snd/pcmC1Dnc for capture
Where n is 1 less than the channel number. For example:
ADMAIF1 is mapped to pcmC1D0p for playback, and pcmC1D0c for capture
ADMAIF2 is mapped to pcmC1D1p for playback, and pcmC1D1c for capture
…and so on.
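For example (a sketch; the card index 1 follows the mapping above, and audio is only heard or captured once a complete XBAR route has been set up as described in the following sections):
aplay -D hw:1,0 test.wav                                       # playback through ADMAIF1 (pcmC1D0p)
arecord -D hw:1,0 -r 48000 -c 2 -f S16_LE -d 5 capture.wav     # capture through ADMAIF1 (pcmC1D0c)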

Codec Driver

An overview of codec drivers is presented in the section ASoC Driver. In the ASoC driver implementation, the rest of the AHUB modules, except for ADMAIF, are implemented as codec drivers. Their responsibilities include:
Interfacing to other modules by defining DAIs
Defining DAPM widgets and establishing DAPM routes for dynamic power switching
Exposing additional kcontrols as needed for user space utilities to dynamically control module behavior

Codec Driver Instantiation Using Device Tree

Based on architecture, the Makefile in the following directory conditionally compiles the required device tree structure files into DTB files:
$KERNEL_TOP/arch/arm64/boot/dts/
When the kernel is flashed, the flash script chooses the appropriate board-specific DTB file for parsing during boot, and the ASoC codecs listed in device tree are instantiated. To add new devices to the device tree, edit the DTS file identified in the dmesg log (as in the following example) and reflash the target.
[ 0.977503] DTS File Name: $KERNEL_TOP/kernel/kernel-4.9/arch/arm64/boot/dts/../../../../../../hardware/nvidia/platform/t19x/galen/kernel-dts/tegra194-p2888-0001-p2822-0000.dts
[ 0.977582] DTB Build time: Oct 9 2018 10:22:39
To add a new device, add the device name with the base address and status as "okay":
ahub {
    status = "okay";
    i2s@2901000 {
        status = "okay";
    };
};

XBAR

The XBAR codec driver defines RX, TX and MUX widgets for all of the interfacing modules: ADMAIF, AMX, ADX, I2S, DMIC, Mixer, SFC and MVC. MUX widgets are permanently routed to the corresponding TX widgets inside the snd_soc_dapm_route structure.
XBAR interconnections are made by connecting any RX widget block to any MUX widget block as needed using the ALSA amixer utility. The get/put handlers for these widgets are implemented so that audio connections are stored by setting the appropriate bit in the hardware MUX register.
Mixer Controls
Sound card instantiation after boot indicates that the machine driver interconnects all codec drivers and the platform driver. The remaining step before obtaining the audio output on the physical codecs involves the use of MUX widgets to establish the DAPM path in order to route data from a specific input module to a specific output module. Input and output modules are dependent on the applicable use case. This provides flexibility for complex use cases.
For example, on Jetson AGX Xavier, the command amixer -c tegrasndt19xmob cset name="I2S1 Mux" ADMAIF1 realizes the internal AHUB path ADMAIF1 RX → XBAR → I2S1 TX. Usage and examples of various AHUB modules can be found in Usage and Examples.
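A minimal playback sketch built on that command (Jetson AGX Xavier card name from the example above; names differ on other platforms, and the value None used to clear a route is an assumption about the Mux enumeration):
amixer -c tegrasndt19xmob cset name="I2S1 Mux" ADMAIF1    # route ADMAIF1 to I2S1 through XBAR
aplay -D hw:tegrasndt19xmob,0 test.wav                    # play through ADMAIF1 (PCM device 0 of the APE card)
amixer -c tegrasndt19xmob cset name="I2S1 Mux" None       # tear the route down again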

AMX

The Audio Multiplexer (AMX) module can multiplex up to four streams of up to 16 channels, with a maximum of 32 bits per channel, into a time division multiplexed (TDM) stream of up to 16 channels with a maximum of 32 bits per channel. The AMX has four RX ports for receiving data from XBAR and one TX port for transmitting the multiplexed output to XBAR. Each port is exposed as a DAI and this is indicated with a solid line. Routes are established using DAPM widgets as indicated with the dotted lines.
[Figure: AMX DAI ports and DAPM routes]
The AMX codec driver supports these features:
Capable of multiplexing up to four input streams of up to 16 channels each and generating one output stream of up to 16 channels
A “byte ram” which can assemble an output frame from any combination of bytes from the four input frames
Two modes for data synchronization for the first output frame:
Wait for All mode: Wait for all enabled input streams to have data before forming the first output frame.
Wait for Any mode: Start forming the first output frame as soon as data is available in any enabled input stream.
 
Note:
AMX handles data synchronization for subsequent output frames differently on different Jetson platforms. On Jetson Nano, TX2, and TX1, it does not generate subsequent output frames until all input frames have been received. On Jetson AGX Xavier, it generates them as soon as one or more input frames have been received.
Byte Map Configuration
Each byte in the output stream is uniquely mapped from a byte in one of the four input streams. The mapping of bytes from input streams to output stream is software configurable via a byte map in the AMX module.
Each byte in the byte map is encoded with these fields:
Field | Bits | Description
Input stream | 7:6 | Identifies which of the four input streams the byte is mapped from, where 0 is RxCIF0, etc.
Input stream channel | 5:2 | Identifies which of the 16 possible channels in the input stream the byte is mapped from, where 0 is channel 0, etc.
Input stream byte | 1:0 | Identifies which byte in the input stream channel the byte is mapped from, where 0 is byte 0, etc.
Given that the maximum output frame size supported is 16 samples (from 16 channels) with 32 bits per sample, the byte map is organized as 16 words of 32 bits each: 64 bytes in total. Each 32-bit word in the byte map corresponds to one input channel. Therefore, if the output frame has samples from only two channels, then only the bytes in word 0 and word 1 need be programmed. If the output frame has samples from all 16 channels, then bytes in all 16 words must be programmed. Which bytes must be programmed in each word depends on the output frame sample size. If the sample size of each channel in the output frame is 16 bits, then it is only necessary to program byte 0 and byte 1 of each word in the byte map. If the sample size of each channel in the output frame is 32 bits, then it is necessary to program all four bytes of each word in the byte map.
Bear these points in mind:
Input bytes must be mapped to output bytes in order. For example, if input frame bytes 0 and 1 are both mapped to the output frame, byte 1 must be mapped to a position in the output frame after byte 0.
Not all bytes from an input frame need be mapped to the output frame.
Each byte in the output frame has a software-configurable enable flag. If a particular byte’s enable flag is cleared, the corresponding mapping in the byte map is ignored, and that byte is populated with zeros.
Mixer Controls
Mixer controls are registered for each instance of AMX by the respective codec driver, and are used to configure the path, characteristics, and processing method of audio data. The table below lists the instance-specific mixer controls.
Mixer Control * | Description | Possible Values
AMX<i>-<j> Mux | Selects the AHUB client device from which the AMX input receives data. |
AMX<i> Input<j> Channels | Configures the channel count of input streams. | 0-16
AMX<i> Output Channels | Configures the channel count of the output stream. | 0-16
AMX<i> Byte Map <byte_num> | Configures the byte map (see Byte Map Configuration). | 0-255
* <i> refers to the instance ID of the AMX client, and <j> refers to the input port ID.
 
Usage and examples of the AMX module can be found in Usage and Examples.
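As a hedged sketch based on the controls above, the following multiplexes two stereo ADMAIF streams into a four-channel stream on AMX1 of Jetson AGX Xavier (card name from the earlier example; byte map programming is omitted and the values are illustrative):
amixer -c tegrasndt19xmob cset name="AMX1-1 Mux" ADMAIF1
amixer -c tegrasndt19xmob cset name="AMX1-2 Mux" ADMAIF2
amixer -c tegrasndt19xmob cset name="AMX1 Input1 Channels" 2
amixer -c tegrasndt19xmob cset name="AMX1 Input2 Channels" 2
amixer -c tegrasndt19xmob cset name="AMX1 Output Channels" 4
amixer -c tegrasndt19xmob cset name="I2S1 Mux" AMX1        # AMX1 TX port name from AHUB Client TX Port Names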

ADX

The Audio Demultiplexer (ADX) module can demultiplex a single TDM stream of up to 16 channels and a maximum of 32 bits per channel into four streams of up to 16 channels and 32 bits per channel. The RX port of ADX receives input data from XBAR, and four TX ports transmit demultiplexed output to XBAR. Each port is exposed as a DAI, indicated by a solid line and routes are established using DAPM widgets as indicated by the dotted lines in the following diagram.
[Figure: ADX DAI ports and DAPM routes]
ADX has one input RxCIF, which supplies the input stream. The core logic selects bytes from this input stream based on a byte RAM map and forms output streams which are directed to a TxCIF FIFO to be transmitted to a downstream module in AHUB.
The ADX demultiplexer supports these features:
Demultiplexing one input stream of up to 16 channels to four output streams of up to 16 channels each
A “byte RAM” which can assemble output frames that contain any combination of bytes from the input frame. The byte RAM design is exactly the same as the byte RAM in AMX, except that the direction of data flow is reversed.
Byte Map Configuration
Each byte in each output stream is mapped from a byte in the input stream. The mapping of bytes from the input stream to the output streams is software configurable via a byte map in the ADX module. Each byte in the byte map is encoded with these fields:
Field | Bits | Description
Output Stream | 7:6 | Identifies the output stream that the byte is mapped to, where 0 = TxCIF0, etc.
Output Stream Channel | 5:2 | Identifies the output stream channel that the byte is mapped to, where 0 = channel 0, etc.
Output Stream Byte | 1:0 | Identifies the byte in the output stream channel that the byte is mapped to, where 0 = byte 0, etc.
Given that the maximum output frame size supported per stream is 16 channels with 32-bits per sample, the Byte Map is organized as 16 32-bit words (64 bytes in total). Each 32-bit word in the Byte Map corresponds to one channel in the input frame. Therefore, if the input frame only has two channels then only the bytes in word 0 and word 1 need to be programmed, while if the input frame has 16 channels then bytes in all 16 words need to be programmed. The bytes that need to be programmed in each word are dependent on the input frame sample size. If the sample size of each channel in the input frame is 16 bits, then it is only necessary to program byte 0 and byte 1 for each word in Byte Map. If the sample size of each channel in the input frame is 32 bits, then it is necessary to program all four bytes for each word in the Byte Map.
Bear these points in mind:
Input bytes must be mapped to output bytes in order. For example, if input frame bytes 0 and 1 are both mapped to the output frame, byte 1 must be mapped to a position in the output frame after byte 0.
Not all bytes in an input frame need be mapped to the output frame.
Each byte in the output frame has a software-configurable enable flag. If a particular byte’s enable flag is cleared, the corresponding mapping in the byte map is ignored, and that byte is populated with zeros.
Mixer Controls
Mixer controls are registered for each instance of ADX by the respective codec driver, and are used to configure the path, characteristics, and processing method of audio data. The table below lists the instance-specific mixer controls for each instance of the ADX module.
Mixer Control * | Description | Possible Values
ADX<i> Mux | Selects the AHUB client device from which the ADX input receives data. |
ADX<i> Input Channels | Configures the channel count of the input stream. | 0-16
ADX<i> output<j> Channels | Configures the channel count of the output streams. | 0-16
ADX<i> Byte Map <byte_num> | Configures the byte map (see Byte Map Configuration). | 0-255
* <i> refers to the instance ID of the ADX client, and <j> refers to the output port ID.
Usage and examples of ADX module can be found in the section ADX: Demultiplexing a Single Stereo Stream into Two Mono Streams.
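Conversely, a minimal sketch that feeds a stream into ADX1 and routes two of its demultiplexed outputs to ADMAIF channels for capture (Jetson AGX Xavier card name as above; channel counts and byte map configuration are omitted for brevity):
amixer -c tegrasndt19xmob cset name="ADX1 Mux" I2S1
amixer -c tegrasndt19xmob cset name="ADMAIF1 Mux" ADX1-1
amixer -c tegrasndt19xmob cset name="ADMAIF2 Mux" ADX1-2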

I2S

An I2S codec driver supports bidirectional data flow and thereby defines CIF and DAP RX/TX widgets as follows. The CIF side of I2S interfaces with XBAR, and the DAP side interfaces with the physical codec on the given platform.
The DAPM routes established using these widgets are shown in the diagram below as dotted lines. I2S modules also expose kernel control to enable internal I2S loopback.
[Figure: I2S DAPM widgets and routes]
The I2S controller implements full-duplex and half-duplex point-to-point serial interfaces. It can interface with I2S-compatible products, such as digital audio tape devices, digital sound processors, modems, Bluetooth chips, etc.
The I2S codec driver supports the following features:
Can operate both as master and slave
Supports the following modes of data transfer:
LRCK modes: I2S mode, Left Justified Mode (LJM), or Right Justified Mode (RJM)
FSYNC modes: DSP A or B mode
Can transmit and receive data:
Sample size: 8 bits (S8), 16 (S16_LE), or 24/32 bits (S32_LE)
Sample rate: 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
Channels: LRCK modes support stereo data; DSP A and B modes support 1 to 16 channels
Device Tree Entry
This I2S node entry enables a given I2S instance on a given chip.
aconnect@2a41000 {
    compatible = "nvidia,tegra210-aconnect";
    status = "okay";
    ...
    tegra_axbar: ahub {
        compatible = "nvidia,tegra186-axbar";
        status = "okay";
        ...
        tegra_i2s1: i2s@2901000 {
            compatible = "nvidia,tegra186-i2s";
            reg = <0x0 0x2901000 0x0 0x100>;
            nvidia,ahub-i2s-id = <0>;
            clocks = <&tegra_car TEGRA186_CLK_I2S1>,
                     <&tegra_car TEGRA186_CLK_PLL_A_OUT0>,
                     <&tegra_car TEGRA186_CLK_I2S1_SYNC_INPUT>,
                     <&tegra_car TEGRA186_CLK_SYNC_I2S1>,
                     <&tegra_car TEGRA186_CLK_I2S1_SYNC_INPUT>;
            clock-names = "i2s1", "pll_a_out0", "ext_audio_sync", "audio_sync", "clk_sync_input";
            fsync-width = <31>;
            status = "okay";
        };
        ...
    };
};
The snippet above is from the device tree structure for Jetson AGX Xavier. Note that the address and a few other properties are platform-specific, and may be referenced by platform-specific device tree files. In the case of I2S, the device entry above specifies the names of the clocks needed by the device, the source of each clock, and the register base address and address range belonging to the device. Other properties such as fsync-width may be adjusted to fit the use case’s requirements.
Mixer Controls
Mixer controls are registered for each instance of I2S by the respective codec driver, and are used to configure the path, characteristics, and processing method of audio data. The table below lists instance-specific mixer controls.
Mixer Control * | Description | Possible Values
I2S<i> Loopback | Enables internal I2S loopback. | On or Off
I2S<i> Input Bit Format | Configures the length of the input sample in bits. | 16 or 32
I2S<i> Codec Bit Format | Configures the length of the output sample in bits. | 16 or 32
I2S<i> Fsync Width | Configures the frame sync signal’s width in terms of bit clocks. | 0-31
I2S<i> Sample Rate | Configures the sample rate of the audio stream. | 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
I2S<i> Channels | Configures the channel count of the audio stream. | 0-16
I2S<i> Capture Stereo to Mono Conv | Configures the stereo to mono conversion method applied to the input stream. | None, CH0, CH1, or AVG
I2S<i> Capture Mono to Stereo Conv | Configures the mono to stereo conversion applied to the input stream. | None, ZERO, or COPY
I2S<i> Playback Stereo to Mono Conv | Configures the stereo to mono conversion method applied to the output stream. | None, CH0, CH1, or AVG
I2S<i> Playback Mono to Stereo Conv | Configures the mono to stereo conversion applied to the output stream. | None, ZERO, or COPY
I2S<i> Playback FIFO Threshold | Configures the CIF’s FIFO threshold required for playback to start. | 0-63
I2S<i> Mux | Selects the AHUB client device from which the I2S input receives data. |
* <i> refers to the instance ID of the I2S client.
For usage and an example for the I2S module, see Usage and Examples: I2S.
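For instance, the internal loopback control allows a quick sanity test of I2S1 on Jetson AGX Xavier (a sketch, using the card name and device numbering introduced earlier):
amixer -c tegrasndt19xmob cset name="I2S1 Mux" ADMAIF1        # playback path into I2S1
amixer -c tegrasndt19xmob cset name="ADMAIF2 Mux" I2S1        # capture path from I2S1
amixer -c tegrasndt19xmob cset name="I2S1 Loopback" on
aplay -D hw:tegrasndt19xmob,0 test.wav &                      # play on ADMAIF1
arecord -D hw:tegrasndt19xmob,1 -r 48000 -c 2 -f S16_LE -d 5 loopback.wav   # record the looped-back data on ADMAIF2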

Mixer

The Mixer mixes audio streams from any of the 10 input ports that receive data from XBAR to any of the 5 output ports that transmit data onto XBAR. The DAPM widgets and routes for Mixer are as shown in the figure below. The Mixer driver also exposes RX Gain and Mixer Enable as additional kcontrols to set the volume of each input stream and to globally enable or disable the Mixer respectively.
[Figure: Mixer DAPM widgets and routes]
Features Supported
Supports mixing up to 10 input streams
Supports five outputs, each of which can be a mix of any combination of 10 input streams
Can transmit and receive:
Sample size: 8, 16, 24, or 32
Sample rate: 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, or 192000 Hz
Channel: 1-8
Fixed gain for each stream is also available
Mixer Controls
Mixer controls are registered for each instance of Mixer by the corresponding codec driver. They are used to configure the path, characteristics, and processing method of audio data. The table below lists instance-specific mixer controls.
Mixer Control * | Description | Possible Values
Mixer1-<i> Mux | Selects the AHUB client device from which the Mixer input receives data. |
Mixer Enable | Enables the Mixer. | On or Off
Adder<j> RX<i> | Enables input stream <i> on Adder <j>. | On or Off
RX<i> Channels | Configures the channel count of the input stream. | 0-8
TX<j> Channels | Configures the channel count of the output stream. | 0-8
RX<i> Gain | Configures the gain applied to a given input stream before mixing in the adder. | 0-131072
RX<i> Gain Instant | Configures the gain applied to a given input stream before mixing in the adder; the new value takes effect immediately, without ramping. | 0-131072
* <i> refers to the input port of the mixer, and <j> refers to the output port of the mixer.
For usage and examples for the Mixer module, see Usage and Examples: Mixer: Mixing Two Input Streams.
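A hedged sketch mixing two ADMAIF streams onto one output on Jetson AGX Xavier follows; the Mixer output port name MIXER1-1 is an assumption (Mixer TX port names are not listed in AHUB Client TX Port Names), and the card name is as in the earlier examples:
amixer -c tegrasndt19xmob cset name="Mixer1-1 Mux" ADMAIF1
amixer -c tegrasndt19xmob cset name="Mixer1-2 Mux" ADMAIF2
amixer -c tegrasndt19xmob cset name="Adder1 RX1" on
amixer -c tegrasndt19xmob cset name="Adder1 RX2" on
amixer -c tegrasndt19xmob cset name="Mixer Enable" on
amixer -c tegrasndt19xmob cset name="I2S1 Mux" MIXER1-1      # assumed Mixer output port name
aplay -D hw:tegrasndt19xmob,0 a.wav &
aplay -D hw:tegrasndt19xmob,1 b.wav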

SFC

The Sampling Frequency Converter (SFC) converts the input sampling frequency to the required sampling rate. SFC has one input port and one output port, which are connected to XBAR.
[Figure: SFC DAPM widgets and routes]
Features Supported
Sampling frequency conversion of streams of up to two channels (stereo)
Very low latency (maximum latency less than 125 microseconds)
Supports the following sampling frequency conversions. Conversion is bypassed when the input and output rates are the same.
Input rate (kHz) → supported output rates (kHz):
8 → 16, 24, 44.1, 48
11.025 → 44.1, 48
16 → 8, 24, 44.1, 48
22.05 → 44.1, 48
24 → 44.1, 48
32 → 44.1, 48
44.1 → 8, 16, 48
48 → 8, 16, 24, 44.1, 96, 192
88.2 → 44.1, 48
96 → 44.1, 48
176.4 → 44.1, 48
192 → 44.1, 48
Mixer Controls
Mixer controls are registered for each instance of SFC by the corresponding codec driver. They are used to configure the path, characteristics, and processing method of audio data. The table below lists instance-specific mixer controls.
Mixer Control * | Description | Possible Values
SFC<i> Mux | Selects the AHUB client device from which the SFC input receives data. |
SFC<i> Init | Enables the instance of SFC. | On or Off
SFC<i> Input Rate | Configures the sampling rate of the input stream. | 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
SFC<i> Output Rate | Configures the sampling rate of the output stream. | 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
SFC<i> Channels | Configures the channel count of the input stream. | 1 or 2
SFC<i> Input Bit Format | Configures the sample size of the input stream. | 16 or 32
SFC<i> Output Bit Format | Configures the sample size of the output stream. | 16 or 32
SFC<i> Input Stereo Conv | Configures the mono to stereo conversion method. | None, ZERO, or AVG
SFC<i> Output mono Format | Configures the stereo to mono conversion method. | None, CH1, or CH2
* <i> refers to the instance ID of SFC.
For usage and examples for the SFC module, see Usage and Examples: SFC: Rate Conversion from 48000 to 44100 Hz.
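As a hedged sketch of a 48000 to 44100 Hz conversion on Jetson AGX Xavier (card name as above; the SFC1 TX port name used as a Mux value is an assumption, since SFC is not listed in AHUB Client TX Port Names):
amixer -c tegrasndt19xmob cset name="SFC1 Mux" ADMAIF1
amixer -c tegrasndt19xmob cset name="SFC1 Input Rate" 48000
amixer -c tegrasndt19xmob cset name="SFC1 Output Rate" 44100
amixer -c tegrasndt19xmob cset name="I2S1 Mux" SFC1          # assumed SFC output port name
aplay -D hw:tegrasndt19xmob,0 test_48k.wav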

DMIC

The DMIC controller converts PDM signals to PCM (pulse code modulation) signals.
The DMIC controller can directly interface to PDM input devices to avoid the need for an external PDM-capable codec.
The following diagram shows the DAPM widgets and routes.
[Figure: DMIC DAPM widgets and routes]
Features Supported
Conversion from PDM (pulse density modulation) signals to PCM (Pulse code modulation) signals
Sample rate: 8000, 16000, 44100, or 48000 Hz
Sample size: 16 bits (S16_LE) or 24 bits (S32_LE)
OSR (oversampling ratio): 64, 128, or 256
Device Tree Entry
The following device tree node definition illustrates generic device tree entries. This node enables one instance of DMIC on Jetson AGX Xavier.
aconnect@2a41000 {
    compatible = "nvidia,tegra210-aconnect";
    status = "okay";
    ...
    tegra_axbar: ahub {
        compatible = "nvidia,tegra186-axbar";
        status = "okay";
        ...
        tegra_dmic1: dmic@2904000 {
            compatible = "nvidia,tegra210-dmic";
            reg = <0x0 0x2904000 0x0 0x100>;
            nvidia,ahub-dmic-id = <0>;
            clocks = <&bpmp_clks TEGRA186_CLK_DMIC1>,
                     <&bpmp_clks TEGRA186_CLK_PLLA_OUT0>,
                     <&bpmp_clks TEGRA186_CLK_SYNC_DMIC1>;
            clock-names = "dmic1", "pll_a_out0", "sync_dmic1";
            status = "okay";
        };
        ...
    };
};
In the case of DMIC, a device entry specifies the instance ID of DMIC through ahub-dmic-id. It also specifies the register base address and address range belonging to the device, apart from clock-names and their sources.
Mixer Controls
Mixer controls are registered for each instance of DMIC by the corresponding codec driver. They are used to configure the path, characteristics, and processing method of audio data. The table below lists instance specific mixer controls.
Mixer Control * | Description | Possible Values
DMIC<i> Boost Gain | Configures the volume. | 0 to 25599, representing 0 to 256 in linear scale (with a 100x factor)
DMIC<i> Mono Channel Select | Selects the channel for mono recording. | Left or Right
DMIC<i> TX Mono to Stereo Conv | Configures the mono to stereo conversion method for DMIC output. | None, ZERO, or COPY
DMIC<i> Output Bit Format | Configures the output sample size in bits. | 16 or 32
DMIC<i> Sample Rate | Configures the sample rate of DMIC output. | 8000, 11025, 16000, 22050, 24000, 32000, 44100, or 48000 Hz
DMIC<i> OSR Value | Configures the OSR (oversampling ratio). OSR_<n> indicates that one output sample is selected from <n> samples received on the input lines of the DMIC processing block. | OSR_64, OSR_128, or OSR_256
DMIC<i> LR Select | | Left or Right
* <i> refers to the instance ID of the DMIC client.
For usage and examples for DMIC, see Usage and Examples: DMIC.
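A hedged capture sketch on Jetson AGX Xavier, assuming the DMIC3 pins on the 40-pin header have already been pinmuxed as described in Board Interfaces (card name and gain value are illustrative):
amixer -c tegrasndt19xmob cset name="ADMAIF1 Mux" DMIC3
amixer -c tegrasndt19xmob cset name="DMIC3 Boost Gain" 400       # 4.0 linear gain on the 100x scale described above
arecord -D hw:tegrasndt19xmob,0 -r 48000 -c 2 -f S16_LE -d 5 dmic_cap.wav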

MVC

MVC (volume control) applies gain or attenuation to a digital signal path. The MVC block is a generic block. It can be used to apply volume control:
To the input or output digital signal path
Per-stream and to all streams (master volume control)
The following diagram shows MVC’s DAPM widgets and routes.
[Figure: MVC DAPM widgets and routes]
Features Supported
Programmable volume gain for data formats:
Sample size: 8, 16, 24, or 32 bits
Sample rate: 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
Channel: 1-8
Programmable curve ramp for volume control
Separate mute and unmute controls
Mixer Controls
Mixer controls are registered for each instance of MVC by the corresponding codec driver. They are used to configure the path, characteristics, and processing method of audio data. The table below lists instance-specific mixer controls.
Mixer Control * | Description | Possible Values
MVC<i> Vol | Configures the volume. | 0 to 16000 (representing -120 to +40 dB with a 100x scale factor)
MVC<i> Mute | Mutes or unmutes the input stream. | On or Off
MVC<i> Curve Type | Configures the volume ramp curve type. | Poly or Linear
MVC<i> Channels | Configures the channel count of audio data passing through MVC. | 0-8
MVC<i> input bit format | Configures the sample size of input audio data through MVC. | 16 or 32
MVC<i> Bits | Configures the sample size of output audio data through MVC. | 16 or 32
MVC<i> Mux | Selects the AHUB client device from which the MVC input receives data. |
* <i> refers to the instance ID of the MVC client.
For usage and examples of the MVC module, see Usage and Examples: MVC: Applying Gain on Stream with MVC.
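A hedged sketch applying attenuation to a playback stream on Jetson AGX Xavier (card name as above; the MVC1 TX port name used as a Mux value is an assumption, since MVC is not listed in AHUB Client TX Port Names):
amixer -c tegrasndt19xmob cset name="MVC1 Mux" ADMAIF1
amixer -c tegrasndt19xmob cset name="MVC1 Vol" 11000         # 11000 corresponds to -10 dB on the 0-16000 scale described above
amixer -c tegrasndt19xmob cset name="I2S1 Mux" MVC1          # assumed MVC output port name
aplay -D hw:tegrasndt19xmob,0 test.wav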

DSPK

The Digital Speaker (DSPK) is a PDM transmit block that converts multibit PCM audio input to oversampled one-bit PDM output. The DSPK controller consists of an interpolator that oversamples the incoming PCM and a delta-sigma modulator that converts the PCM signal to PDM.
[Figure: DSPK DAPM widgets and routes]
Features Supported
Sample rate: 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
Input PCM bit width: 16 bits (S16_LE) or 24 bits (S32_LE)
Oversampling ratio: 32, 64, 128, or 256
Passband frequency response: ≤0.5 dB peak-to-peak in 10 Hz – 20 kHz range
Dynamic range: ≥105 dB
Device Tree Entry
This DSPK node entry enables a given DSPK instance on a given chip.
aconnect@2a41000 {
    compatible = "nvidia,tegra210-aconnect";
    status = "okay";
    ...
    tegra_axbar: ahub {
        compatible = "nvidia,tegra186-axbar";
        status = "okay";
        ...
        tegra_dspk1: dspk@2905000 {
            compatible = "nvidia,tegra186-dspk";
            reg = <0x0 0x2905000 0x0 0x100>;
            nvidia,ahub-dspk-id = <0>;
            clocks = <&bpmp_clks TEGRA194_CLK_DSPK1>,
                     <&bpmp_clks TEGRA194_CLK_PLLA_OUT0>,
                     <&bpmp_clks TEGRA194_CLK_SYNC_DSPK1>;
            clock-names = "dspk", "pll_a_out0", "sync_dspk";
            status = "okay";
        };
        ...
    };
};
This example is from the device tree structure file of Jetson AGX Xavier. In case of DSPK, the device entry specifies the instance ID of DSPK through ahub-dspk-id. It also specifies the register base address and address range belonging to the device, the clocks required, and their sources.
Mixer Controls
Mixer controls are registered for each instance of DSPK by the corresponding codec driver. They are used to configure the path, characteristics, and processing method of audio data. The table below lists instance-specific mixer controls.
Mixer Control * | Description | Possible Values
DSPK<i> Mux | Selects the AHUB client device from which the DSPK input receives data. |
DSPK<i> OSR Value | Configures the oversampling ratio used when converting the PCM input to the 1-bit PDM output. | OSR_32, OSR_64, OSR_128, or OSR_256
DSPK<i> Rx FIFO threshold | Specifies the number of words that must be present in the FIFO before the CIF starts the transfer. | 0-3
* <i> refers to the instance ID of the DSPK client.
For usage and examples for the DSPK module, see Usage and Examples: DSPK: Playback.
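A hedged playback sketch on Jetson AGX Xavier, assuming the DSPK1 pins (camera header, per Board Interfaces) have been pinmuxed and a PDM speaker is attached (card name as in the earlier examples):
amixer -c tegrasndt19xmob cset name="DSPK1 Mux" ADMAIF1
aplay -D hw:tegrasndt19xmob,0 test.wav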

AHUB Client TX Port Names

Below is a list of names of AHUB clients’ TX ports.
Note:
The suffix <i> refers to the instance ID of a given AHUB client.
 
AHUB Client | TX Port Names *
ADMAIF | ADMAIF<i>
I2S | I2S<i>
DMIC | DMIC<i>
DSPK | DSPK<i>
AMX | AMX<i>
ADX | ADX<i>-1, ADX<i>-2, ADX<i>-3, ADX<i>-4
* <i> represents the instance ID of a given AHUB client.

ASoC Machine Driver

The ASoC machine driver connects the codec drivers to a PCM driver by linking the DAIs exposed by each module. It instantiates the sound card (a software component in the ASoC architecture). The DAI links are defined in the machine driver’s header file using the snd_soc_dai_link structure.
In brief, the machine driver’s functions are to:
Populate the SoC DAI links (i.e. those between XBAR and interfaces such as I2S). These DAI links are internal to the SoC and connect the various hardware modules and interfaces to XBAR.
Define DAPM widgets for the platform’s interfaces, such as headphone jacks, speakers, and microphones.
Find platform-specific DAI links in the device tree (i.e. links between the audio codec and the SoC) and route DAPM widgets between the machine driver and the audio codec headphone/speaker outputs and microphone inputs.
Configure the APE subsystem and codec clocks.
Propagate the runtime PCM parameters (sample-rate, sample-size, etc.).
The Jetson ASoC machine driver is available in the kernel sources archive in this location:
kernel/nvidia/sound/soc/tegra-alt/machine_drivers/tegra_machine_driver_mobile.c
To support a custom codec board with the Jetson ASoC machine driver
1. Update the DAI link and DAPM widgets defined in the device tree.
You must update these items because the machine driver parses the device tree in order to instantiate the sound card. Update the device tree properties of the sound node for the platform when you customize the platform to support a third-party audio codec.
2. Make any necessary changes for configuring the codec clock. For more information about this topic, see Device Tree Configuration for a custom audio card.
The following example gives an overview of the DAI links and DAPM widgets for the audio codec found in the device tree’s sound node for Jetson AGX Xavier.
sound {
    compatible = "nvidia,tegra-audio-t186ref-mobile-rt565x";
    nvidia,model = "tegra-snd-t186ref-mobile-rt565x";
    nvidia,num-codec-link = <12>;
    nvidia,num-clk = <8>;
    nvidia,clk-rates = < 270950400   /* PLLA_x11025_RATE */
                         11289600    /* AUD_MCLK_x11025_RATE */
                         45158400    /* PLLA_OUT0_x11025_RATE */
                         45158400    /* AHUB_x11025_RATE */
                         245760000   /* PLLA_x8000_RATE */
                         12288000    /* AUD_MCLK_x8000_RATE */
                         49152000    /* PLLA_OUT0_x8000_RATE */
                         49152000 >; /* AHUB_x8000_RATE */
    clocks = <&tegra_car TEGRA186_CLK_PLLP_OUT0>,
             <&tegra_car TEGRA186_CLK_PLLA>,
             <&tegra_car TEGRA186_CLK_PLL_A_OUT0>,
             <&tegra_car TEGRA186_CLK_AHUB>,
             <&tegra_car TEGRA186_CLK_CLK_M>,
             <&tegra_car TEGRA186_CLK_AUD_MCLK>;
    clock-names = "pll_p_out1", "pll_a", "pll_a_out0", "ahub",
                  "clk_m", "extern1";
    resets = <&tegra_car TEGRA186_RESET_AUD_MCLK>;
    reset-names = "extern1_rst";

    status = "okay";
    nvidia,audio-routing =
        "x Headphone Jack", "x HPO L Playback",
        "x Headphone Jack", "x HPO R Playback",
        "x IN1P", "x Mic Jack",
        "x Int Spk", "x SPO Playback",
        "x DMIC L1", "x Int Mic",
        "x DMIC L2", "x Int Mic",
        "x DMIC R1", "x Int Mic",
        "x DMIC R2", "x Int Mic",
        "y Headphone", "y OUT",
        "y IN", "y Mic",
        "z IN", "z OUT",
        "m Headphone", "m OUT",
        "m IN", "m Mic",
        "n Headphone", "n OUT",
        "n IN", "n Mic",
        "o Headphone", "o OUT",
        "o IN", "o Mic",
        "a IN", "a Mic",
        "b IN", "b Mic",
        "c IN", "c Mic",
        "d IN", "d Mic",
        "d1 Headphone", "d1 OUT",
        "d2 Headphone", "d2 OUT";
    nvidia,xbar = <&tegra_axbar>;
    rt565x_dai_link: nvidia,dai-link-1 {
        link-name = "rt565x-playback";
        cpu-dai = <&tegra_i2s1>;
        codec-dai = <&rt5658>;
        cpu-dai-name = "I2S1";
        codec-dai-name = "rt5659-aif1";
        format = "i2s";
        bit-format = "s16_le";
        bclk_ratio = <0>;
        srate = <48000>;
        num-channel = <2>;
        ignore_suspend;
        name-prefix = "x";
        status = "okay";
    };
    ...
}
The sound node is added to the device tree file for sound card registration and passing platform-related data to the machine driver. Some of the sound node’s properties are described below. All of the described properties are required except as noted.
compatible: Specifies the machine driver with which the sound node is compatible. Its value must be:
nvidia,tegra-audio-t210ref-mobile-rt565x for Jetson Nano or Jetson TX1
nvidia,tegra-audio-t186ref-mobile-rt565x for Jetson Xavier NX, Jetson AGX Xavier series, or Jetson TX2 series
nvidia,model: Specifies the sound card’s name, and can be customized as necessary.
nvidia,audio-routing: Describes the route between the Jetson ASoC machine driver widgets and the codec widgets. The machine driver defines DAPM widgets for the platform’s physical microphone, headphone, and speakers. These must be connected to the corresponding DAPM widgets on the codec, which represent the codec’s microphones, headphones, speakers, etc.
nvidia,num-codec-link: Defines the number of platform-specific DAI links that connect interfaces, such as I2S, DSPK or DMIC, to other external devices, such as codecs. This must be the same as the number of nvidia,dai-link-<number> child nodes.
nvidia,dai-link-<number>: Represents the interface between the SoC and an external audio device. The properties for the node describe the configuration of the interface.
You must define one of these properties (with a unique <number>) for each DAI link.
By default, a few interfaces may be connected to dummy codecs (spdif-dit0, spdif-dit1, etc.) for testing when no physical codec is present. This allows the SoC to drive the interface pins even when no external device is present. These dummy codecs can be replaced by actual codecs (like rt5658 in example above) when a codec is connected to a given interface.
The Jetson ASoC machine driver uses a DAI link’s link-name property to identify the DAI link and perform any necessary configuration such as configuring the codec clock. For other generic properties of a DAI link, see the Simple-Card specification.
The properties in a DAI node are described in the section Definition of a DAI Node, below.
Reference definitions of the device tree’s sound node for the various Jetson products are available in the kernel source archive in these locations:
Jetson Xavier NX:
hardware/nvidia/platform/t19x/jakku/kernel-dts/common/tegra194-audio-p3668.dtsi
Jetson Nano:
hardware/nvidia/platform/t210/porg/kernel-dts/tegra210-porg-p3448-common.dtsi
Jetson AGX Xavier:
hardware/nvidia/platform/t19x/galen/kernel-dts/common/tegra194-audio-p2822-0000.dtsi
Jetson TX2:
hardware/nvidia/platform/t18x/common/kernel-dts/t18x-common-platforms/tegra186-quill-common.dtsi
Jetson TX1:
hardware/nvidia/platform/t210/jetson/kernel-dts/tegra210-jetson-cv-base-p2957-2180-a00.dts
For a complete example of how to customize the device tree for a different audio codec, see 40-pin GPIO Expansion Header (which describes interfacing to a codec on the 40-pin GPIO expansion header).
Definition of a DAI Node
Each DAI link for the I2S interface must be defined by a DAI node, which is a subnode of the sound node. The overall format of a DAI node is described in the preceding section.
For each I2S interface DAI link, the following properties must be configured:
bitclock-master and frame-master: Optional Booleans; specify whether the codec is a slave or a master. The codec is the I2S bit-clock and frame master if these properties are present, or the I2S slave if they are absent.
format: Configures CPU/CODEC common audio format. Value may be i2s, right_j, left_j, dsp_a, or dsp_b.
bclk-ratio: An integer used to configure the I2S bit clock rate. The I2S bit clock rate is the product of this value and the PCM sample-rate. A value of 0 yields the same clock rate as 1.
tx-mask (applicable when codec I2S is transmitting) and rx-mask (applicable when codec I2S is receiving): Optional; configure the TDM TX and RX slot enable mask, respectively. These bit masks indicate which TDM slots contain data. They default to values determined by the number of channels, and need be set only when the configuration is being customized.
For example, if there are eight channels, a mask of 0xff enables all eight channels in the TDM frame; a mask of 0x0f enables only the first four slots in the frame. If slots are not enabled, the total TDM frame size is still the same size as the total number of channels, but the data in the slots is ignored.
Other DAI link properties are common for I2S, DMIC, and DSPK interface-based DAI links:
srate: PCM Data stream sample rate
bit-format: Data stream sample size
num-channel: Data stream number of channels
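As a hedged illustration of these properties, the following sketch describes a hypothetical 8-channel TDM codec on I2S2; the codec phandle, DAI names, and values are placeholders and are not taken from a shipped device tree:
nvidia,dai-link-2 {
    link-name = "my-tdm-codec";         /* hypothetical link name */
    cpu-dai = <&tegra_i2s2>;
    codec-dai = <&my_codec>;            /* hypothetical codec phandle */
    cpu-dai-name = "I2S2";
    codec-dai-name = "my-codec-dai";    /* hypothetical codec DAI name */
    format = "dsp_a";
    bitclock-master;                    /* present: the codec drives the bit clock */
    frame-master;                       /* present: the codec drives the frame sync */
    bclk_ratio = <1>;
    srate = <48000>;
    bit-format = "s32_le";
    num-channel = <8>;
    tx-mask = <0xff>;                   /* all eight TDM slots carry data */
    rx-mask = <0xff>;
    name-prefix = "y";
    status = "okay";
};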

Clocking and Power Management

The following debugfs node listing, obtained from /sys/kernel/debug/clk/clk_summary, shows the clock tree of the ASoC driver for Jetson AGX Xavier in the idle state, when no audio record or playback operations are in progress. The clock trees for the other Jetson devices are similar.
clock enable_cnt prepare_cnt rate req_rate accuracy phase
i2s6_sync_input 0 0 0 0 0 0
i2s5_sync_input 0 0 0 0 0 0
i2s4_sync_input 0 0 0 0 0 0
i2s3_sync_input 0 0 0 0 0 0
i2s2_sync_input 0 0 0 0 0 0
i2s1_sync_input 0 0 0 0 0 0
dmic4_sync_clk 0 0 0 0 0 0
dmic3_sync_clk 0 0 0 0 0 0
dmic2_sync_clk 0 0 0 0 0 0
dmic1_sync_clk 0 0 0 0 0 0
i2s6_sync_clk 0 0 0 0 0 0
i2s5_sync_clk 0 0 0 0 0 0
i2s4_sync_clk 0 0 0 0 0 0
i2s3_sync_clk 0 0 0 0 0 0
i2s2_sync_clk 0 0 0 0 0 0
i2s1_sync_clk 0 0 0 0 0 0
pll_a1 0 0 600000000 600000000 0 0
ape 0 0 150000000 150000000 0 0
apb2ape 0 0 150000000 150000000 0 0
pll_a 0 0 258000000 258000000 0 0
dmic4 0 0 12285714 12285714 0 0
dmic3 0 0 12285714 12285714 0 0
dmic2 0 0 12285714 12285714 0 0
dmic1 0 0 12285714 12285714 0 0
i2s6 0 0 23454545 23454545 0 0
i2s5 0 0 23454545 23454545 0 0
i2s4 0 0 23454545 23454545 0 0
i2s3 0 0 23454545 23454545 0 0
i2s2 0 0 23454545 23454545 0 0
i2s1 0 0 23454545 23454545 0 0
ahub 0 0 86000000 86000000 0 0
The clocks of the individual modules, AMX, ADX, AFC, SFC, MIXER, and others, are internally driven by the APE clock.
The clock for all codec drivers (I2S, DMIC, DSPK, XBAR, etc.) are switched off in the idle state. They are turned on when audio playback or recording begins.
The idle_bias_off option for the ASoC core (snd_soc_codec_driver) is set to 1 in the individual codec drivers for dynamic audio power management. This option causes the ASoC core to call the suspend() and resume() callbacks that the codec driver registered using SET_RUNTIME_PM_OPS during the platform probe. It calls suspend() and resume() when playback or recording stops or starts. suspend() saves the device state and disables the clock; resume() enables the clock and restores the device state.
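You can observe this behavior directly from debugfs (a sketch, run as root, assuming a playback route through I2S1 has been set up as in the earlier examples):
grep i2s1 /sys/kernel/debug/clk/clk_summary     # enable_cnt is 0 while the path is idle
aplay -D hw:tegrasndt19xmob,0 test.wav &        # start playback on the I2S1 route
grep i2s1 /sys/kernel/debug/clk/clk_summary     # the i2s1 clock is now enabled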

High Definition Audio

 
Features Supported
Software Driver Details
Jetson platforms support one or more High Definition Audio (HDA) interfaces, through on-board HDMI, DP, and USB‑C ports. These interfaces can be used to perform high-quality audio rendering on devices like TVs, A/V receivers, etc. These HDA interfaces are available on various Jetson platforms:
Jetson Xavier NX, Jetson TX2: one HDMI, one DP
Jetson Nano, Jetson TX1: one HDMI
Jetson AGX Xavier: one HDMI, two DP over USB-C
HDMI and DP interfaces can be connected using the respective connectors. DP over USB-C needs a USB-C to DP converter to connect to a DP sink.

Features Supported

Jetson High Definition Audio supports the following features:
Compliant with High Definition Audio Specification Revision 1.0
Supports HDMI 1.3a and DP
Audio Format Support
Channels: 2 to 8
Sample size: 16 bits (S16_LE) or 24 bits (S32_LE)
Sample rate:
32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz (HDMI)
32000, 44100, 48000, 88200, or 96000 Hz (DP)
You may experience issues when playing high resolution audio formats (using multichannel output or a high sampling rate), even with an audio format that is supported by your monitor. This is because the available audio bandwidth depends on the HDMI configuration, increasing with higher display resolutions.
If you encounter issues when playing a high resolution audio format, NVIDIA recommends setting your display resolution at least to the level that corresponds to your audio format in the following table. This table is taken from the HDMI 1.3a specification document.
Display resolution | Format timing | Pixel repetition | Vertical frequency (Hz) | Maximum fs, 8 channels (Hz) | Maximum frame rate, 2 channels, comp ** | SuperAudio CD channel count
VGA | 640x480p | none | 59.94/60 | 48000 | 192 | 2
480i | 1440x480i | 2 | 59.94/60 | 88200 | 192 | 2
480i | 2880x480i | 4 | 59.94/60 | 192000 | 768 | 8
240p | 1440x240p | 2 | 59.94/60 | 88200 | 192 | 2
240p | 2880x240p | 4 | 59.94/60 | 192000 | 768 | 8
480p | 720x480p | none | 59.94/60 | 48000 | 192 | 2
480p | 1440x480p | 2 | 59.94/60 | 176400 | 384 | 8
480p | 2880x480p | 4 | 59.94/60 | 192000 | 768 | 8
720p | 1280x720p | none | 59.94/60 | 192000 | 768 | 8
1080i | 1920x1080i | none | 59.94/60 | 192000 | 768 | 8
1080p | 1920x1080p | none | 59.94/60 | 192000 | 768 | 8

Software Driver Details

HDA interfaces are accessible through standard ALSA interfaces. You can use the aplay utility for rendering audio:
aplay -Dhw:<cardname>,<deviceID> <in.wav>
Where:
<cardname> is the sound card name; see the table under Board Interfaces.
<deviceID> is the sound interface’s device ID.
<in.wav> is the name of the sound file to be played. It should be a .wav file.
Here are some further details about driver usage:
All HDA interfaces are available under one card.
You can read card details from /proc/asound/cards.
You can see available PCM devices (i.e. HDA interfaces) under /proc/asound/card<n>/.
16-bit audio is supported in S16_LE format; 20 or 24-bit audio is supported in S32_LE format.
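For example, on a platform whose HDA card is named tegrahda (see Board Interfaces), a sketch of a playback session might look like this; the device ID 3 is only illustrative and should be checked under /proc/asound/card<n>/:
cat /proc/asound/cards          # identify the HDA card and its card number
aplay -D hw:tegrahda,3 in.wav   # play through the HDMI/DP PCM device (device ID is platform-dependent)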

USB Audio

 
Features Supported
Software Driver Details
All Jetson platforms provide a USB host interface for connecting various USB devices, including USB audio devices such as speakers, microphones and headsets.

Features Supported

Jetson USB audio supports the following features:
Channel count: 8 maximum
Sample size: 16 bits (S16_LE) or 24 bits (S24_3LE)
Sample rate: 32000, 44100, 48000, 88200, 96000, 176400, and 192000 Hz
Supported audio formats are determined by the USB audio equipment connected.

Software Driver Details

USB audio is accessible through standard ALSA interfaces. You can use the aplay and arecord utilities for rendering and capturing audio, respectively:
aplay -Dhw:<cid>,<did> <file.wav>
arecord -Dhw:<cid>,<did> -r <rate> -c <chan> -f <sample_format> <file.wav>
Where:
<cid> is the card ID
<did> is the device ID
<rate> is the sampling rate
<chan> is the number of audio channels
<sample_format> is the sample format
<file.wav> is the name of the input file (for aplay) or output file (for arecord), which must be a WAV file
Here are some further details about driver usage:
The USB audio card is enumerated upon connecting the USB device (e.g. a USB headphone).
You can read card details from /proc/asound/cards.
You can see available PCM devices under /proc/asound/card<n>/.
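For example (a sketch; the card index 2 is illustrative and depends on enumeration order):
cat /proc/asound/cards                                          # the USB device appears as a new card when plugged in
arecord -D hw:2,0 -r 48000 -c 2 -f S16_LE -d 5 usb_cap.wav      # capture from the USB microphone
aplay -D hw:2,0 usb_cap.wav                                     # play back through the USB device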

Enabling Bluetooth Audio

Applies to: Original Jetson TX2 and Jetson TX2i
To ensure the Bluetooth® software stack is conformant for the configuration, Bluetooth audio is disabled by default. If additional Bluetooth audio profiles are enabled, product conformance may be impacted.
To enable Bluetooth audio
1. Navigate to this file:
/lib/systemd/system/bluetooth.service.d/nv-bluetooth-service.conf
2. Use a text editor to change this line…
ExecStart=/usr/lib/bluetooth/bluetoothd -d --noplugin=audio,a2dp,avrcp
…to this:
ExecStart=/usr/lib/bluetooth/bluetoothd -d
(That is, delete the --noplugin switch and all of its values.)
3. Enter these commands to update the apt-get package list and install the PulseAudio Bluetooth package:
$ sudo apt-get update
$ sudo apt-get install pulseaudio-module-bluetooth
4. Enter this command to reboot the Jetson device:
$ sudo reboot
5. When the reboot is complete, pair and connect any Bluetooth headset.

Board Interfaces

The tables below list all of the audio interfaces exposed by Jetson developer kits. Note that some interfaces may not be directly available for use in the BSP provided. The pins may need to be configured for the desired function.
The need for pinmux configuration is indicated in the tables by the “Pinmux Setting Required” field.
For information about pinmux configuration, see the part of the Platform Adaptation and Bring-Up topic that applies to your platform.
Jetson Xavier NX Developer Kit
Supported interfaces/Instance | Pinmux setting required | Carrier board interface/connector | Sound card name
I2S3 (via DAP3 pins) | No | M2.E key slot | jetsonxaviernxa
DMIC1 (via DAP3 pins) | Yes | |
DMIC2 (via DAP3 pins) | Yes | |
I2S5/DSPK1/DSPK2/DMIC4 (via DAP5 pins) | Yes | 40-pin GPIO expansion header: FS (pin 35), SCLK (pin 12), DIN (pin 38), DOUT (pin 40) |
HDA (HDMI/DP) | No | | tegragalent1
HDA (HDMI/DP) | No | |
Jetson Nano Developer Kit
Supported interfaces/Instance | Pinmux setting required | Carrier board interface/connector | Sound card name
I2S3 (via DAP3 pins) | No | M2.E Key Slot | tegrasndt210ref
DMIC1 (via DAP3 pins) * | Yes | |
DMIC2 (via DAP3 pins) * | Yes | |
I2S4 (via DAP4 pins) | Yes | 40-pin GPIO expansion header: FS (pin 35), SCLK (pin 12), DIN (pin 38), DOUT (pin 40) |
AUD_MCLK | Yes | Pin 7 |
USB | No | 4 USB ports | Peripheral-specific
HDA (HDMI/DP) | No | HDMI | tegrahda
* DMIC1 and DMIC2 are pinmuxed on DAP3.
 
Jetson AGX Xavier Developer Kit
Supported interfaces/Instance | Pinmux setting required | Carrier board interface/connector | Sound card name
I2S1 (via DAP1 pins) | No | HD audio header | tegrasndt19xmob
AUD_MCLK | No | |
I2S2 (via DAP2 pins) | Yes | 40-pin GPIO expansion header: FS (pin 35), SCLK (pin 12), DIN (pin 38), DOUT (pin 40) |
DMIC3 | Yes | CLK (pin 32), DAT (pin 16) |
DMIC2 (via DAP3 pins) | Yes | M2.E key slot |
I2S4 (via DAP4 pins) | No | |
DSPK1 (via DAP5 pins) | Yes | Camera header |
I2S6 (via DAP6 pins) | Yes | BT SCO |
HDA (HDMI/DP 0) | No | Via USB-C J512 (DP) | tegrahdagalent1
HDA (HDMI/DP 1) | No | Via USB-C J513 (DP) |
HDA (HDMI/DP 2) | No | HDMI |
USB | No | eSATA connector | Peripheral-specific
 
Jetson TX2 Developer Kit
Supported interfaces/Instance | Pinmux setting required | Carrier board interface/connector | Sound card name
I2S1 (via DAP1 pins) | Yes | 40-pin GPIO expansion header: FS (pin 35), SCLK (pin 12), DIN (pin 38), DOUT (pin 40) | tegrasndt186ref
DMIC3 | Yes | CLK (pin 32), DAT (pin 16) |
AUD_MCLK | Yes | Pin 7 |
I2S2 (via DAP2 pins) | No | 40-pin expansion header |
I2S5 (via DAP5 pins) | Yes | |
DSPK2 | No | |
I2S3 (via DAP3 pins) | No | M2.E key slot |
DMIC1 (via DAP3 pins) | Yes | |
DMIC2 (via DAP3 pins) | Yes | |
I2S4 (via DAP4 pins) | No | Camera header |
DMIC3 | Yes | Camera header |
I2S6 (via DAP6 pins) | No | SCO |
HDA (HDMI/DP 0) | No | DP | tegrahda
HDA (HDMI/DP 1) | No | HDMI |
 
Jetson TX1
Supported interfaces/Instance | Pinmux setting required | Carrier board interface/connector | Sound card name
I2S2 (via DAP2 pins) | No | | tegrasndt210ref
I2S3 (via DAP3 pins) | No | M2.E key slot |
DMIC1 (via DAP3 pins) | Yes | |
DMIC2 (via DAP3 pins) | Yes | |
I2S4 (via DAP4 pins) | No | Camera header |
I2S5 (via DAP5 pins) | No | 40-pin expansion header |
I2S1 (via DAP1 pins) | Yes | 40-pin GPIO expansion header: FS (pin 35), SCLK (pin 12), DIN (pin 38), DOUT (pin 40) |
DMIC3 | Yes | CLK (pin 32), DATA (pin 16) |
AUD_MCLK | Yes | Pin 7 |

40-pin GPIO Expansion Header

 
Pinmux Configuration
Device Tree Configuration for a Custom Audio Card
Populate Codec Node
Jetson I2S Node
Codec and CPU DAI Link Setup
Enable Codec Driver
Update the Machine Driver to Support a Custom Audio Card
Add an Initialization Function for the Codec
Register the Initialization Function for the Codec
Add Support for Runtime Configuration of Codec Parameters
All of the carrier boards used in Jetson developer kits have a 40-pin GPIO header which exposes audio I/O connections, as shown in table above. You can use this header to connect various audio cards to your Jetson platform.
When you choose an audio codec to use with a Jetson device, be sure that:
It is hardware-compatible in terms of functional pins (I2S, DMIC, etc.), GPIO, power, and clocks required to support the codec.
It is compatible with the Jetson I2S interface (sample rates, sample sizes, frame formats, etc.).
A Linux kernel driver is available for the codec.
ALSA examples are available for it that show how to configure its audio routing and general setup. Configuring the audio routing can be the most complex part of integrating an I2S codec.
The 40-pin expansion header’s pinout can be inferred from the schematics for the Jetson platform. Subsequent sections give guidance for the software changes required to interface audio cards with Jetson boards.

Pinmux Configuration

The SoC I/O pins can operate either as GPIOs or as special-function I/O (SFIO) such as I2S or DMIC. Therefore, you must make sure that any audio I/O pins you use are configured as SFIO.
If a pin is not configured as desired by default, you must perform pinmux configuration on it. See the part of the Configuring the 40-Pin Expansion Header topic that applies to your platform.
See the table in that section for information about pins that have alternate functions, e.g. a pin that can be configured as DSPK, DMIC, I2S, or GPIO.
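For a quick check of the current pin assignments, assuming debugfs is mounted, you can dump the kernel's generic pinctrl state; the controller directory name, pin names, and the grep pattern below are illustrative and vary by platform:
# List pin-to-function assignments; pins still owned by the GPIO driver are not muxed as SFIO.
sudo cat /sys/kernel/debug/pinctrl/*/pinmux-pins | grep -i dap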

Device Tree Configuration for a Custom Audio Card

To support a custom audio card or other external audio device, you may need to add or update various device tree nodes such as clocks and power supplies.

Populate Codec Node

To enable the codec, you must add the codec under the device tree node of the device that is used to access the codec. Most codecs use either I2C or SPI for access. In the example below, the codec uses I2C for its control interface, and so is added to the appropriate I2C node.
i2c@<addr> {
    sgtl5000: sgtl5000@0a {
        compatible = "fsl,sgtl5000";
        reg = <0x0a>;
        clocks = <&sgtl5000_mclk>;
        micbias-resistor-k-ohms = <2>;
        micbias-voltage-m-volts = <3000>;
        VDDA-supply = <&vdd_3v3>;
        VDDIO-supply = <&vdd_3v3>;
        status = "okay";
    };
};
See the device tree binding documentation to determine what properties must be populated for the codec and how to configure them.
Make sure that the relevant control interface (I2C, SPI, etc.) is enabled in the platform’s device tree. The 40-pin GPIO expansion header exposes an I2C controller; the table below shows the address of the I2C controller exposed on each Jetson platform.
Platform | 40-pin expansion header I2C address
Jetson Xavier NX | 0x031e0000
Jetson Nano | 0x7000c400
Jetson AGX Xavier | 0x031e0000
Jetson TX2 | 0x0c240000
Jetson TX1 | 0x7000c000
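Once the controller is enabled and the codec board is connected, you can optionally confirm that the codec responds on the bus with the i2cdetect utility from the i2c-tools package. This is only a sketch: the bus number placeholder must be replaced with the Linux I2C bus index that corresponds to the controller address listed above.
# Probe the I2C bus routed to the 40-pin header; replace <bus> with the bus number
# for your platform (see /sys/bus/i2c/devices for the controller-to-bus mapping).
sudo i2cdetect -y -r <bus>
# A responding SGTL5000 shows up at address 0x0a in the probe map.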
Certain codecs may use an on-board oscillator as a clock source for the codec master clock (MCLK). If the codec's device tree documentation requires that MCLK be defined, you may need to add a device tree node to represent the on-board oscillator. For example, an SGTL5000 codec may be clocked by a 12.288 MHz fixed-rate clock present on the codec board, so a dummy clock is added to the device tree:
clocks {
    sgtl5000_mclk: sgtl5000_mclk {
        compatible = "fixed-clock";
        #clock-cells = <0>;
        clock-frequency = <12288000>;
        clock-output-names = "sgtl5000-mclk";
        status = "okay";
    };
};

Jetson I2S Node

The 40-pin GPIO expansion header exposes an I2S interface. The following table shows the address of the I2S interface exposed on the different Jetson platforms.
Platform | 40-pin expansion header I2S address
Jetson Xavier NX | 0x2901400
Jetson Nano | 0x702d1300
Jetson AGX Xavier | 0x02901000
Jetson TX2 | 0x02901100
Jetson TX1 | 0x702d1000
Make sure that the appropriate I2S interface is enabled by ensuring that the status property is set to okay:
i2s@<addr> {
    status = "okay";
};
 

Codec and CPU DAI Link Setup

Under the sound node, update the DAI link that represents the link between the appropriate I2S interface and the codec. The ASoC machine driver parses the sound node, its DAI links, and its properties to instantiate the sound card, configure codec clocking, establish appropriate DAPM routes, and propagate runtime PCM parameters.
Configure I2S and Codec DAI Link
A DAI link must have a unique link-name which the Jetson ASoC machine driver can use to identify the link and perform any necessary codec configuration. A DAI link must also have a unique cpu-dai and codec-dai which respectively point to the SoC audio interface’s device node (e.g., I2S1 in the example below) and the codec board’s device node (SGTL5000 in the example below). cpu-dai-name and codec-dai-name are custom names which represent CPU DAI and CODEC DAI respectively. Details of the other properties are given in the following sections.
nvidia,dai-link-x {
    link-name = "sgtl5000-codec";
    cpu-dai = <&tegra_i2s1>;
    codec-dai = <&sgtl5000>;
    cpu-dai-name = "I2S1";
    codec-dai-name = "sgtl5000";
    format = "i2s";
    bitclock-master;
    frame-master;
    bit-format = "s16_le";
    name-prefix = "z";
};
Note that the DAI link instance associated with the 40-pin GPIO expansion header is platform-specific. Instance names are in the following table.
Platform | 40-pin expansion header DAI link ID
Jetson Xavier NX | nvidia,dai-link-5
Jetson Nano | nvidia,dai-link-1
Jetson AGX Xavier | nvidia,dai-link-2
Jetson TX2 | nvidia,dai-link-1
Jetson TX1 | nvidia,dai-link-1
Codec as I2S Master/Slave
The codec board may support both master and slave modes. If it does, check whether its Linux driver also supports both modes. If the codec and its driver both support both modes, review the driver and decide which mode to use.
Note that the device tree’s DAI link for the I2S codec interface is always configured from the perspective of the codec, so the absence of bitclock-master and frame-master implies that the codec is the slave.
If the codec operates in slave mode, there are two ways to set its clock.
If AUD_MCLK is available on the header, you can use it to drive the codec’s MCLK.
You can set AUD_MCLK to use a fixed rate by setting the following property under the device tree’s sound node:
sound {
    nvidia,mclk-rate = <mclk_fixed_rate>;
    mclk_parent = <mclk parent>; // optional
};
Alternatively, you can set AUD_MCLK as a function of the sampling rate by setting the following property under the sound node:
sound {
    mclk-fs = <scaling factor for sampling rate>;
    mclk_parent = <mclk parent>; // optional
};
The mclk_parent property refers to the parent of AUD_MCLK. You can obtain information about the possible parents and rates of AUD_MCLK from the debugfs nodes at:
/sys/kernel/debug/clk/aud_mclk for Jetson Xavier NX, Jetson AGX Xavier, and Jetson TX2
/sys/kernel/debug/clk/extern1 for Jetson Nano and Jetson TX1
The parent clock’s rate must be an integer multiple of the AUD_MCLK rate. Choose the parent clock rate based on the MCLK rate that the codec requires. By default, without any of the entries above, AUD_MCLK uses the SoC audio PLL (PLLA_OUT0) as clock parent, with a frequency equal to 256 times the sampling rate.
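For example, assuming debugfs is mounted, you can read the current AUD_MCLK rate and inspect the clock tree, which shows each clock's parent and rate; the exact clock names vary by platform as noted above:
# Current AUD_MCLK rate (use extern1 instead of aud_mclk on Jetson Nano and TX1)
sudo cat /sys/kernel/debug/clk/aud_mclk/clk_rate
# Full clock tree, including each clock's parent, enable count, and rate
sudo cat /sys/kernel/debug/clk/clk_summary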
If AUD_MCLK is not available, the codec can still act as slave if the codec driver supports using the SoC I2S bit clock to drive the codec's internal PLL, which in turn can drive the codec's MCLK. Note that the SoC I2S interface derives its bit clock from PLLA_OUT0 (a PLL internal to the SoC).
When the codec operates in master mode, the codec I2S bit clock typically is driven by the codec's internal PLL, which is driven in turn by a fixed rate external clock source. The properties below must be set in the appropriate DAI link to indicate that the codec should operate in master mode.
nvidia,dai-link-x {
    bitclock-master;
    frame-master;
};
I2S Mode Setting
To operate the I2S interface in left-justified mode (LJM), right-justified mode (RJM), or I2S mode, set the DAI link node's format property to left_j, right_j, or i2s, respectively. For I2S to operate in FSYNC mode (dsp-a, dsp-b), set the format property to dsp_a or dsp_b, and configure the I2S frame-sync width as required for the appropriate I2S interface. Typically, for dsp-a/b mode the frame-sync width is a single bit clock.
nvidia,dai-link-x {
    format = "dsp_a";
    fsync-width = <0>;
};

Enable Codec Driver

The ASoC machine driver can be enabled or disabled in the Linux kernel by enabling or disabling kernel configuration symbols. On Jetson TX2, for example, the corresponding ASoC machine driver is enabled in the Linux kernel by selecting the kernel configuration symbol SND_SOC_TEGRA_T186REF_MOBILE_ALT.
To enable the SGTL5000 codec driver, update the kernel configuration entry for the SND_SOC_TEGRA_T186REF_MOBILE_ALT symbol to select this driver, so that whenever the machine driver is enabled, the SGTL5000 codec driver is also enabled. The following diff patch shows one way to accomplish this.
diff --git a/sound/soc/tegra-alt/Kconfig b/sound/soc/tegra-alt/Kconfig
index 2d559708ce2e..0bd8c1248672 100644
--- a/sound/soc/tegra-alt/Kconfig
+++ b/sound/soc/tegra-alt/Kconfig
@@ -247,6 +248,7 @@ config SND_SOC_TEGRA_T186REF_MOBILE_ALT
 	tristate "SoC Audio support for T186Ref Mobile"
 	depends on SND_SOC_TEGRA_T186REF_ALT
 	select SND_SOC_RT5659
+	select SND_SOC_SGTL5000
 	help
 	  Say Y or M here.
A similar patch to the Jetson ASoC machine driver kernel configuration is required for enabling the codec driver on other platforms. The following table shows the kernel configuration symbol that enables the Jetson ASoC machine driver for each Jetson platform.
Platform | Jetson ASoC machine driver kernel config
Jetson Xavier NX | SND_SOC_TEGRA_T186REF_MOBILE_ALT
Jetson Nano | SND_SOC_TEGRA_T210REF_MOBILE_ALT
Jetson AGX Xavier | SND_SOC_TEGRA_T186REF_MOBILE_ALT
Jetson TX2 | SND_SOC_TEGRA_T186REF_MOBILE_ALT
Jetson TX1 | SND_SOC_TEGRA_T210REF_MOBILE_ALT
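After rebuilding and installing the kernel, you can confirm that both the machine driver and the codec driver are enabled in the running kernel configuration. For example, for the SGTL5000 on a T186-based platform:
zcat /proc/config.gz | grep -E "SND_SOC_TEGRA_T186REF_MOBILE_ALT|SND_SOC_SGTL5000"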

Update the Machine Driver to Support a Custom Audio Card

The machine driver must be updated to support a custom audio card, in order to configure the codec clock and DAI parameters. The example below shows the machine driver update for an SGTL5000 codec.

Add an Initialization Function for the Codec

You may need to update the machine driver when you integrate a new codec, in order to perform any required codec initialization.
The codec SYSCLK or MCLK signal (the clock required for internal codec operation) may be sourced from the SoC I2S bit clock, from AUD_MCLK (available on the 40-pin GPIO expansion header), or from an external oscillator on the codec board. Consequently, the SYSCLK source must be configured in the initialization function. The codec driver usually provides a set_sysclk() callback, which is triggered by calling snd_soc_dai_set_sysclk(). This facilitates configuration, since snd_soc_dai_set_sysclk() expects the SYSCLK source as one of its parameters.
When you use the SGTL5000 with a fixed codec MCLK, you must add an initialization function to set the MCLK frequency, as shown below.
static int tegra_machine_sgtl5000_init(struct snd_soc_pcm_runtime *rtd)
{
    struct device *dev = rtd->card->dev;
    int err;

    err = snd_soc_dai_set_sysclk(rtd->codec_dai, SGTL5000_SYSCLK, 12288000,
                                 SND_SOC_CLOCK_IN);
    if (err) {
        dev_err(dev, "failed to set sgtl5000 sysclk!\n");
        return err;
    }

    return 0;
}
This would set the codec MCLK to receive the clock signal from an external oscillator on the codec board.

Register the Initialization Function for the Codec

The dai_link_setup function must register the initialization function as shown below for it to be executed. The link-name property of the codec’s DAI link identifies the codec, enabling the ASoC machine driver to populate the corresponding init function. (For SGTL5000 the value of link-name is sgtl5000-codec, as the section Configure I2S and Codec DAI Link shows.)
static void dai_link_setup(struct platform_device *pdev)
{
    .
    .
    .
    /* set codec init */
    for (i = 0; i < machine->num_codec_links; i++) {
        if (tegra_machine_codec_links[i].name) {
            .
            .
            .
            } else if (strstr(tegra_machine_codec_links[i].name,
                       "sgtl5000-codec")) {
                tegra_machine_codec_links[i].init =
                        tegra_machine_sgtl5000_init;
            }
        }
    }
}

Add Support for Runtime Configuration of Codec Parameters

The codec's PCM parameters are populated with the help of the patch shown below. This patch updates the DAI parameters passed to the codec whenever playback or capture starts, so that the codec is configured with the stream's current properties (sample rate, channels, etc.).
As mentioned earlier, the codec's SYSCLK or MCLK might be sourced from the SoC I2S bit clock. In that case, the codec PLL may be needed to upscale the BCLK rate to the desired SYSCLK rate (usually 256*fs or 512*fs). The codec driver provides a set_pll() callback to facilitate PLL configuration, which is triggered by calling snd_soc_dai_set_pll() from tegra_machine_dai_init(). You can infer the PLL setup details from the codec data sheet for a given BCLK rate (equal to sample_rate * channels * word_size). The expected SYSCLK rate (scale * sample_rate) and the parameters for snd_soc_dai_set_pll() can be defined as required; a sketch of such a call follows the example below.
static int tegra_machine_dai_init(struct snd_soc_pcm_runtime *runtime,
                                  int rate, int channels, u64 formats,
                                  bool is_playback)
{
    .
    .
    .
    rtd = snd_soc_get_pcm_runtime(card, "sgtl5000-codec");
    if (rtd) {
        dai_params =
            (struct snd_soc_pcm_stream *)rtd->dai_link->params;
        dai_params->rate_min = clk_rate;
        dai_params->channels_min = channels;
        dai_params->formats = formats;
    }
    return 0;
}
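If the codec SYSCLK is derived from the I2S bit clock rather than a fixed MCLK, the machine driver would also call snd_soc_dai_set_pll() as described above. The following is only a sketch: the PLL ID, the source macros, and the scaling factor are hypothetical placeholders, and the real values must come from the codec driver and its data sheet.

/* Hypothetical placeholder IDs; a real codec driver defines its own. */
#define EXAMPLE_CODEC_PLL_SRC_BCLK	1
#define EXAMPLE_CODEC_SYSCLK_SRC_PLL	2

static int tegra_machine_example_pll_init(struct snd_soc_pcm_runtime *rtd,
                                          unsigned int srate)
{
    unsigned int bclk = srate * 2 * 16;   /* sample_rate * channels * word_size */
    unsigned int sysclk = srate * 256;    /* expected SYSCLK rate */
    int err;

    /* Upscale the I2S bit clock to the SYSCLK rate with the codec PLL. */
    err = snd_soc_dai_set_pll(rtd->codec_dai, 0, EXAMPLE_CODEC_PLL_SRC_BCLK,
                              bclk, sysclk);
    if (err < 0)
        return err;

    /* Select the PLL output as the codec SYSCLK source. */
    return snd_soc_dai_set_sysclk(rtd->codec_dai, EXAMPLE_CODEC_SYSCLK_SRC_PLL,
                                  sysclk, SND_SOC_CLOCK_IN);
}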

HD Audio Header for Jetson AGX Xavier

 
Audio Formats Supported
Usage Guide
Jetson AGX Xavier has an “Audio Panel Header” (J511) on the bottom of the developer kit carrier board.
Header J511 supports Intel’s HD front panel audio connector. Details of Intel’s front panel audio header pinout configuration can be found in the Intel document at:
https://www.intel.com/content/www/us/en/support/articles/000005512/boards-and-kits/desktop-boards.html
The header is internally connected to the onboard ALC5658 codec.

Audio Formats Supported

The Jetson AGX Xavier ASoC driver supports these formats:
Sample size: 8 bits (S8), 16 bits (S16_LE), or 24/32 bits (S32_LE)
Sample rate: 8000, 11025, 16000, 22050, 24000, 32000, 44100, 48000, 88200, 96000, 176400, or 192000 Hz
Channels: 1 or 2

Usage Guide

To set up and configure the audio path to play or capture audio via the header, you must configure various ALSA mixer controls for both the Jetson device and the RT5658 codec. The following examples detail the ALSA mixer controls that you must configure.
Codec Mixer Controls
Codec mixer controls are registered by the codec driver and are prefixed with the substring defined by the name-prefix property of the corresponding DAI link in the sound device tree node.
To view the codec-specific mixer controls, enter this command line with the appropriate name prefix:
amixer -c tegrasndt19xmob controls | grep <name-prefix>
Alternatively, look for the codec-specific controls in the codec driver.
Playback
You can connect headphones or speakers to either or both of the playback ports, PORT 2R and PORT 2L, to play back mono or stereo recordings. Use the mixer control settings shown below.
Note:
See the manufacturer’s documentation for the Front Panel Audio Connector for port numbering details.
For mono playback to pin PORT 2R:
#AHUB Mixer Controls
amixer -c tegrasndt19xmob cset name="I2S1 Mux" "ADMAIF1"
 
#Codec Mixer Controls
amixer -c tegrasndt19xmob cset name="x Headphone Playback Volume" 30
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXR DAC R1 Switch" "off"
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXL DAC L1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x HPO R Playback Switch" "off"
amixer -c tegrasndt19xmob cset name="x HPO L Playback Switch" "on"
 
#Start playback
aplay -D hw:tegrasndt19xmob,1 <in.wav>
For mono playback to port PORT 2L:
#AHUB Mixer Controls
amixer -c tegrasndt19xmob cset name="I2S1 Mux" "ADMAIF1"
 
#Codec Mixer Controls
amixer -c tegrasndt19xmob cset name="x Headphone Playback Volume" "0x1e"
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXR DAC R1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXL DAC L1 Switch" "off"
amixer -c tegrasndt19xmob cset name="x HPO R Playback Switch" "on"
amixer -c tegrasndt19xmob cset name="x HPO L Playback Switch" "off"
 
#Start playback
aplay -D hw:tegrasndt19xmob,1 <in.wav>
For stereo playback to both playback ports:
#AHUB Mixer Controls
amixer -c tegrasndt19xmob cset name="I2S1 Mux" "ADMAIF1"
 
#Codec Mixer Controls
amixer -c tegrasndt19xmob cset name="x Headphone Playback Volume" "0x1e"
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXR DAC R1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x Stereo DAC MIXL DAC L1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x HPO R Playback Switch" "on"
amixer -c tegrasndt19xmob cset name="x HPO L Playback Switch" "on"
 
#Start playback
aplay -D hw:tegrasndt19xmob,1 <in.wav>
Mic Capture
You can connect microphones to either or both of the recording ports, PORT 1R and PORT 1L, to capture mono or stereo sound. Use these mixer control settings:
For mono mic capture from PORT 1R:
amixer -c tegrasndt19xmob cset name="ADMAIF1 Mux" "I2S1"
 
# To disable capture from PORT 1L
amixer -c tegrasndt19xmob cset name="x RECMIX1L BST1 Switch" "off"
amixer -c tegrasndt19xmob cset name="x RECMIX1R BST1 Switch" "off"
 
 
#To enable capture from PORT 1R
amixer -c tegrasndt19xmob cset name="x RECMIX1L BST2 Switch" "on"
amixer -c tegrasndt19xmob cset name="x RECMIX1R BST2 Switch" "off"
 
#Volume control for PORT 1R
amixer -c tegrasndt19xmob cset name="x IN2 Boost Volume" 40
 
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC Source" "ADC1"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC1 Source" "ADC"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXL ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXR ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x TDM Data Mux" "AD1:AD2:DAC:NUL"
 
#Start record
arecord -Dhw:tegrasndt19xmob,0 -c 1 -r 48000 -f S16_LE -d 15 <out.wav>
For mono mic capture from PORT 1L:
amixer -c tegrasndt19xmob cset name="ADMAIF1 Mux" "I2S1"
 
 
# To enable capture from PORT 1L
amixer -c tegrasndt19xmob cset name="x RECMIX1L BST1 Switch" "on"'
amixer -c tegrasndt19xmob cset name="x RECMIX1R BST1 Switch" "off"'
 
#Volume control for PORT 1L
amixer -c tegrasndt19xmob cset name="x IN1 Boost Volume" 40
 
#To enable capture from PORT 1R
amixer -c tegrasndt19xmob cset name="x RECMIX1L BST2 Switch" "off"
amixer -c tegrasndt19xmob cset name="x RECMIX1R BST2 Switch" "off"
 
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC Source" "ADC1"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC1 Source" "ADC"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXL ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXR ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x TDM Data Mux" "AD1:AD2:DAC:NUL"
 
#Start record
arecord -Dhw:tegrasndt19xmob,0 -c 1 -r 48000 -f S16_LE -d 15 <out.wav>
For stereo mic capture from both recording ports:
amixer -c tegrasndt19xmob cset name="ADMAIF1 Mux" "I2S1"
 
#To enable capture from PORT 1L
amixer -c tegrasndt19xmob cset name="x RECMIX1L BST1 Switch" "on"
 
#Volume control for PORT 1L
amixer -c tegrasndt19xmob cset name="x IN1 Boost Volume" 40
 
#To enable capture from PORT 1R
amixer -c tegrasndt19xmob cset name="x RECMIX1R BST2 Switch" "on"
 
#Volume control for PORT 1R
amixer -c tegrasndt19xmob cset name="x IN2 Boost Volume" 40
 
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC Source" "ADC1"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC1 Source" "ADC"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXL ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x Stereo1 ADC MIXR ADC1 Switch" "on"
amixer -c tegrasndt19xmob cset name="x TDM Data Mux" "AD1:AD2:DAC:NUL"
 
#Start record
arecord -Dhw:tegrasndt19xmob,0 -c 2 -r 48000 -f S16_LE -d 15 <out.wav>

Usage and Examples

 
I2S
Playback
Capture
Loopback
DMIC
Stereo Capture
Mono Capture (L)
Mono Capture (R)
DSPK: Playback
MVC: Applying Gain on Stream with MVC
AMX
Multiplexing Two Streams
Multiplexing Three Streams
ADX: Demultiplexing a Single Stereo Stream into Two Mono Streams
SFC: Rate Conversion from 48000 to 44100 Hz
Mixer: Mixing Two Input Streams
HDMI/DP Playback
USB
Playback
Capture
This section provides usage examples for the available I/O interfaces and AHUB modules.
Note:
All examples are written to use a specific ADMAIF, but you may choose any ADMAIF you want.
In these examples:
<cardname> represents the soundcard name. Use the appropriate value from the following table.
Board name | AHUB card name | HDA card name
Jetson Xavier NX | jetsonxaviernxa | tegrahdagalent1
Jetson Nano | tegrasndt210ref | tegrahda
Jetson AGX Xavier | tegrasndt19xmob | tegrahdagalent1
Jetson TX2 | tegrasndt186ref | tegrahda
Jetson TX1 | tegrasndt210ref | tegrahda
For the USB card name, see /proc/asound/cards after plugging in the USB audio device.
<cardID> is the index of the sound card’s name in the /proc/asound/cards file, e.g. 0 for the first sound card name in the file, 1 for the second, etc.
<devID> is the device ID of the sound card, as shown in the following table. For an AHUB card, it refers to the ADMAIF channel index being used.
For example, if there are 10 ADMAIF channels on a given platform, <devID> may have a value (an ADMAIF index) from 0 to 9.
In general, refer to the available PCM devices at /dev/snd/pcmC<cardID>D<devID> to find the possible <devID> values for a given <cardID>; see the example after this list.
The table below lists port-to-<devID> mappings for HDA devices, since different HDA ports are mapped to specific <devID> values.
 
Port to device ID map
Device | Port Name | PCM Device ID
Jetson Xavier NX | HDMI/DP (HDMI) | 7
Jetson Xavier NX | HDMI/DP (DP) | 3
Jetson Nano | HDMI/DP | 3
Jetson AGX Xavier | HDMI_DP0 (USB-C J512) | 3
Jetson AGX Xavier | HDMI_DP1 (USB-C J513) | 7
Jetson AGX Xavier | HDMI_DP2 (HDMI J504) | 8
Jetson TX2 series | HDMI/DP 0 (DP) | 3
Jetson TX2 series | HDMI/DP 1 (HDMI) | 7
Jetson TX1 | HDMI/DP (HDMI) | 3
<i> is the number of the ADMAIF channel to be used.
<i−1> is the value one less than <i>.
<in.wav> and <out.wav> represent input and output sound files respectively. These files must be WAV files.
<rate> is the sampling rate to be used.
<bits> is the number of bits per sample.
<channels> is the number of channels to be used.
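For example, to find the <cardID>, <cardname>, and available <devID> values on a given system:
# List registered sound cards; the leading number is <cardID> and the bracketed
# name is <cardname>.
cat /proc/asound/cards
# List PCM nodes; pcmC<cardID>D<devID>p entries are playback devices and
# pcmC<cardID>D<devID>c entries are capture devices.
ls /dev/snd/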

I2S

The following sections provide examples of use for the various modules.

Playback

To use I2S2 with ADMAIF<i> for playback:
amixer -c <cardname> cset name="I2S2 Mux" ADMAIF<i>
aplay -D hw:<cardname>,<i-1> <in.wav>

Capture

To use I2S2 with ADMAIF<i> for capture:
amixer -c <cardname> cset name="ADMAIF<i>Mux" I2S2
arecord -D hw:<cardname>,<i-1> -r <rate> -c <channels> -f <sample_format> <out.wav>

Loopback

To perform a loopback on I2S2 via ADMAIF<i>:
amixer -c <cardname> cset name="I2S2 Mux" "ADMAIF<i>"
amixer -c <cardname> cset name="ADMAIF<i> Mux" "I2S2"
amixer -c <cardname> cset name="I2S2 Loopback" "on"
aplay -D hw:<cardname>,<i-1> <in.wav>
arecord -D hw:<cardname>,<i-1> -r <rate> -c <channels> -f <sample_format> <out.wav>

DMIC

Stereo Capture

This example shows how to capture stereo data from DMIC3 via ADMAIF<i>.
For Jetson AGX Xavier:
amixer -c <cardname> cset name="ADMAIF<i> Mux" DMIC3
#Gain must be tuned as per sensitivity of the external mic
amixer -c <cardname> cset name="DMIC3 Boost Gain" 400
arecord -D hw:<cardname>,<i-1> -r 48000 -c 2 -f S16_LE <out.wav>
For other Jetson devices:
amixer -c <cardname> cset name="MVC1 Mux" DMIC3
amixer -c <cardname> cset name="ADMAIF<i> Mux" MVC1
amixer -c <cardname> cset name="DMIC3 Boost Gain" 50
amixer -c <cardname> cset name="MVC1 Vol" 12602
#32-bit stage is maintained between DMIC and MVC in order to avoid loss due to attenuation.
amixer -c <cardname> cset name="MVC1 input bit format" 32
amixer -c <cardname> cset name="DMIC3 output bit format" 32
arecord -D hw:<cardname>,<i-1> -r 48000 -c 2 -f S16_LE <out.wav>
For Jetson devices other than Jetson AGX Xavier, you must set the boost gain of DMIC to 50 or less and add volume gain through MVC. In essence, MVC is used to apply compensatory gain while the signal is attenuated in DMIC to avoid saturation.

Mono Capture (L)

This example shows how to perform mono capture from DMIC3 via ADMAIF<i> (Left MIC).
For Jetson AGX Xavier:
amixer -c <cardname> cset name="ADMAIF<i> Mux" DMIC3
amixer -c <cardname> cset name="DMIC3 Boost Gain" 400
amixer -c <cardname> cset name="DMIC3 Mono Channel Select" L
arecord -D hw:<cardname>,<i-1> -r 48000 -c 1 -f S16_LE <out.wav>
For other Jetson devices:
amixer -c <cardname> cset name="MVC1 Mux" DMIC3
amixer -c <cardname> cset name="ADMAIF<i> Mux" MVC1
amixer -c <cardname> cset name="DMIC3 Boost Gain" 50
amixer -c <cardname> cset name="MVC1 Vol" 12602
amixer -c <cardname> cset name="MVC1 input bit format" 32
amixer -c <cardname> cset name="DMIC3 output bit format" 32
amixer -c <cardname> cset name="DMIC3 Mono Channel Select" L
arecord -D hw:<cardname>,<i-1> -r 48000 -c 1 -f S16_LE <out.wav>

Mono Capture (R)

This example shows how to perform mono capture from DMIC3 via ADMAIF<i> (Right MIC).
For Jetson AGX Xavier:
amixer -c <cardname> cset name="ADMAIF<i> Mux" DMIC3
amixer -c <cardname> cset name="DMIC3 Boost Gain" 400
amixer -c <cardname> cset name="DMIC3 Mono Channel Select" R
arecord -D hw:<cardname>,<i-1> -r 48000 -c 1 -f S16_LE <out.wav>
For other Jetson devices:
amixer -c <cardname> cset name="MVC1 Mux" DMIC3
amixer -c <cardname> cset name="ADMAIF<i> Mux" MVC1
amixer -c <cardname> cset name="DMIC3 Boost Gain" 50
amixer -c <cardname> cset name="MVC1 Vol" 12602
amixer -c <cardname> cset name="MVC1 input bit format" 32
amixer -c <cardname> cset name="DMIC3 output bit format" 32
amixer -c <cardname> cset name="DMIC3 Mono Channel Select" R
arecord -D hw:<cardname>,<i-1> -r 48000 -c 1 -f S16_LE <out.wav>

DSPK: Playback

This example shows how to perform stereo playback on DSPK1 via ADMAIF<i>.
amixer -c <cardname> cset name="DSPK1 Mux" ADMAIF<i>
aplay -D hw:<cardname>,<i-1> <in.wav>

MVC: Applying Gain on Stream with MVC

This example shows how to use the MVC module to control volume during playback on I2S.
amixer -c <cardname> cset name="MVC1 Mux" ADMAIF<i>
amixer -c <cardname> cset name="I2S1 Mux" MVC1
amixer -c <cardname> cset name="MVC1 Vol" <Q8.24_Val>
aplay -D hw:<cardname>,<i-1> <in.wav>

AMX

These sections provide usage examples for multiplexing two and three streams and for demultiplexing one stereo stream into two mono streams.

Multiplexing Two Streams

This example shows how to use the AMX module to multiplex two stereo streams, DMIC2 (connected to RxCIF0) and DMIC3 (connected to RxCIF1).
amixer -c <cardname> cset name="AMX2-1 Mux" "DMIC2"
amixer -c <cardname> cset name="AMX2-2 Mux" "DMIC3"
amixer -c <cardname> cset name="AMX2 Output Channels" 4
 
amixer -c <cardname> cset name="ADMAIF<i> Mux" AMX2
amixer -c <cardname> cset name="ADMAIF<i> Channels" 4
 
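#Byte map values select input bytes: values 0-7 come from the first input
#stream (RxCIF0), and values offset by 64 (64-71) come from the second (RxCIF1).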
amixer -c <cardname> cset name="AMX2 Byte Map 0" 0
amixer -c <cardname> cset name="AMX2 Byte Map 1" 1
amixer -c <cardname> cset name="AMX2 Byte Map 2" 2
amixer -c <cardname> cset name="AMX2 Byte Map 3" 3
amixer -c <cardname> cset name="AMX2 Byte Map 4" 4
amixer -c <cardname> cset name="AMX2 Byte Map 5" 5
amixer -c <cardname> cset name="AMX2 Byte Map 6" 6
amixer -c <cardname> cset name="AMX2 Byte Map 7" 7
 
amixer -c <cardname> cset name="AMX2 Byte Map 8" 64
amixer -c <cardname> cset name="AMX2 Byte Map 9" 65
amixer -c <cardname> cset name="AMX2 Byte Map 10" 66
amixer -c <cardname> cset name="AMX2 Byte Map 11" 67
amixer -c <cardname> cset name="AMX2 Byte Map 12" 68
amixer -c <cardname> cset name="AMX2 Byte Map 13" 69
amixer -c <cardname> cset name="AMX2 Byte Map 14" 70
amixer -c <cardname> cset name="AMX2 Byte Map 15" 71
arecord -D hw:<cardname>,<i-1> -r 48000 -c 4 -f S16_LE <out.wav>

Multiplexing Three Streams

This example shows how to use the AMX module to multiplex three stereo streams, DMIC2 (connected to RxCIF0), DMIC3 (connected to RxCIF1), and I2S (connected to RxCIF2).
amixer -c <cardname> cset name="AMX2-1 Mux" "DMIC2"
amixer -c <cardname> cset name="AMX2-2 Mux" "DMIC3"
amixer -c <cardname> cset name="AMX2-3 Mux" "I2S2"
amixer -c <cardname> cset name="I2S2 Channels" 2
 
amixer -c <cardname> cset name="AMX2 Output Channels" 6
 
amixer -c <cardname> cset name="ADMAIF<i> Mux" AMX2
amixer -c <cardname> cset name="ADMAIF<i> Channels" 6
 
amixer -c <cardname> cset name="AMX2 Byte Map 0" 0
amixer -c <cardname> cset name="AMX2 Byte Map 1" 1
amixer -c <cardname> cset name="AMX2 Byte Map 2" 2
amixer -c <cardname> cset name="AMX2 Byte Map 3" 3
amixer -c <cardname> cset name="AMX2 Byte Map 4" 4
amixer -c <cardname> cset name="AMX2 Byte Map 5" 5
amixer -c <cardname> cset name="AMX2 Byte Map 6" 6
amixer -c <cardname> cset name="AMX2 Byte Map 7" 7
 
amixer -c <cardname> cset name="AMX2 Byte Map 8" 64
amixer -c <cardname> cset name="AMX2 Byte Map 9" 65
amixer -c <cardname> cset name="AMX2 Byte Map 10" 66
amixer -c <cardname> cset name="AMX2 Byte Map 11" 67
amixer -c <cardname> cset name="AMX2 Byte Map 12" 68
amixer -c <cardname> cset name="AMX2 Byte Map 13" 69
amixer -c <cardname> cset name="AMX2 Byte Map 14" 70
amixer -c <cardname> cset name="AMX2 Byte Map 15" 71
 
amixer -c <cardname> cset name="AMX2 Byte Map 16" 128
amixer -c <cardname> cset name="AMX2 Byte Map 17" 129
amixer -c <cardname> cset name="AMX2 Byte Map 18" 130
amixer -c <cardname> cset name="AMX2 Byte Map 19" 131
amixer -c <cardname> cset name="AMX2 Byte Map 20" 132
amixer -c <cardname> cset name="AMX2 Byte Map 21" 133
amixer -c <cardname> cset name="AMX2 Byte Map 22" 134
amixer -c <cardname> cset name="AMX2 Byte Map 23" 135
 
arecord -D hw:<cardname>,<i-1> -r 48000 -c 6 -f S16_LE <out.wav>

ADX: Demultiplexing a Single Stereo Stream into Two Mono Streams

This example shows how to use the ADX module to demultiplex a 16-bit stereo stream into two mono streams routed to DSPK1 and DSPK2.
amixer -c <cardname> cset name="ADX1 Mux" ADMAIF<i>
amixer -c <cardname> cset name="ADX1 Input Channels" 2
 
amixer -c <cardname> cset name="DSPK1 Mux" "ADX1-1"
amixer -c <cardname> cset name="DSPK2 Mux" "ADX1-1"
amixer -c <cardname> cset name="ADX1 Output1 Channels” 1
amixer -c <cardname> cset name="ADX1 Output2 Channels” 1
 
amixer -c <cardname> cset name="ADX1 Byte Map 0" 0
amixer -c <cardname> cset name="ADX1 Byte Map 1" 1
amixer -c <cardname> cset name="ADX1 Byte Map 2" 2
amixer -c <cardname> cset name="ADX1 Byte Map 3" 3
amixer -c <cardname> cset name="ADX1 Byte Map 4" 64
amixer -c <cardname> cset name="ADX1 Byte Map 5" 65
amixer -c <cardname> cset name="ADX1 Byte Map 6" 66
amixer -c <cardname> cset name="ADX1 Byte Map 7" 67
aplay -D hw:<cardname>,<i-1> <in.wav>

SFC: Rate Conversion from 48000 to 44100 Hz

This example plays back through ADMAIF3, which feeds SFC1; SFC1 converts the sample rate, and the converted output is captured through ADMAIF2.
amixer -c <cardname> cset name="SFC1 Mux" ADMAIF3
amixer -c <cardname> cset name="ADMAIF2 Mux" SFC1
amixer -c <cardname> cset name="SFC1 input rate" 48000
amixer -c <cardname> cset name="SFC1 output rate" 44100
aplay -D hw:<cardname>,2 <in.wav>
arecord -D hw:<cardname>,1 -r 44100 -c <channels> -f <sample_format> <out.wav>

Mixer: Mixing Two Input Streams

This example shows how to mix two input streams to generate a single output stream via Adder1 of the Mixer module.
amixer -c <cardname> cset name="MIXER1-1 Mux" ADMAIF1
amixer -c <cardname> cset name="MIXER1-2 Mux" ADMAIF2
amixer -c <cardname> cset name="Adder1 RX1" 1
amixer -c <cardname> cset name="Adder1 RX2" 1
amixer -c <cardname> cset name="Mixer Enable" 1
amixer -c <cardname> cset name="ADMAIF3 Mux" MIXER1-1
aplay -D hw:<cardname>,0 <inputfile1.wav>
aplay -D hw:<cardname>,1 <inputfile2.wav>
arecord -D hw:<cardname>,2 -r <rate> -c <channels> -f <sample_format> <out.wav>

HDMI/DP Playback

This example shows how to perform playback on an HDMI device (e.g. a monitor with speakers).
aplay -Dhw:<cardname>,<devID> <in.wav>

USB

The following sections provide usage examples of playback and capture on USB.

Playback

This example shows how to perform playback on a USB device.
aplay -Dhw:<cardname>,<devID> <in.wav>

Capture

This example shows how to perform capture on a USB device.
arecord -Dhw:<cardname>,<devID> -r <rate> -c <channels> -f <sample_format> <out.wav>

Troubleshooting

 
No Sound Cards Found
Sound Not Audible or Not Recorded
I2S Software Reset Failed
XRUN Observed During Playback/Capture
This section describes some issues that are liable to occur when you are working with ASoC drivers, and their probable causes and solutions.

No Sound Cards Found

This has several possible causes. Some typical ones are described below. In most cases the dmesg output can provide clues.
Source Widget not Found
The dmesg output shows that no ‘source widget’ was found:
$ dmesg | grep "ASoC"
[4.874720] tegra-audio-t210ref tegra-audio-t210ref.0: ASoC: no source widget found for x OUT
[4.874724] tegra-audio-t210ref tegra-audio-t210ref.0: ASoC: Failed to add route x OUT-> direct-> Headphone-x
[4.874736] tegra-audio-t210ref tegra-audio-t210ref.0: ASoC: no sink widget found for x IN
[4.874739] tegra-audio-t210ref tegra-audio-t210ref.0: ASoC: Failed to add route LineIn-x-> direct-> x IN
x OUT and x IN are the widgets of the spdif-dit dummy codec. The ASoC may not have instantiated this codec. Enter this command to determine whether spdif-dit is instantiated:
cat /sys/kernel/debug/asoc/codecs
If spdif-dit is not instantiated, it is most likely because the SPDIF codec is not enabled in the Linux kernel configuration. Enter these commands to determine whether the SPDIF codec is enabled:
$ zcat /proc/config.gz | grep CONFIG_SND_SOC_SPDIF
The SPDIF codec is enabled if the output is:
CONFIG_SND_SOC_SPDIF=y
CPU DAI Not Registered
The dmesg output shows that no ‘CPU DAI’ was found:
$ dmesg | grep "ASoC"
[4.874720] tegra-audio-t210ref tegra-audio-t210ref.0: ASoC: CPU DAI DAP not registered
In this case, “DAP” is the CPU DAI for the I2S-to-codec DAI link.
The ASoC may not have instantiated the I2S codec. To determine whether the codec is instantiated, enter the command:
$ cat /sys/kernel/debug/asoc/codecs
If the I2S codec is instantiated, it has a name like tegra210-i2s, where the “tegra” number indicates the type of processor in the Jetson device.
Identifying the DAI link at the point of failure can give a clue to the I2S instance number that failed to instantiate. Accordingly, you can instantiate the I2S codec driver by providing a suitable entry point in the device tree structure (DTS) file as described in Codec Driver Instantiation Using Device Tree.

Sound Not Audible or Not Recorded

Follow this procedure to diagnose the issue:
1. Determine whether the DAPM path is complete. To trace the DAPM path, you must enable DAPM tracing events before running the playback/record use case, using this command:
for i in `find /sys/kernel/debug/tracing/events -name "enable" | grep snd_soc_`; do echo 1 > $i; done
If the DAPM path is not complete, the use case cannot proceed. The DAPM path is written to the trace output below as it is set up.
cat /sys/kernel/debug/tracing/trace_pipe | grep "\*"
Below is a sample of a complete DAPM path for the use case of recording through the microphone jack on Jetson AGX Xavier via the rt5659 codec, I2S1, and ADMAIF1.
snd_soc_dapm_path: *ADMAIF1 TX -> (direct) -> ADMAIF1 Transmit
snd_soc_dapm_path: *ADMAIF1 Mux -> (direct) -> ADMAIF1 TX
snd_soc_dapm_path: *I2S1 RX -> I2S1 -> ADMAIF1 Mux
snd_soc_dapm_path: *I2S1 Receive -> (direct) -> I2S1 RX
snd_soc_dapm_path: *I2S1 CIF Transmit-I2S1 Receive -> (direct) -> I2S1
snd_soc_dapm_path: *I2S1 CIF Transmit -> (direct) -> I2S1 CIF
snd_soc_dapm_path: *I2S1 CIF TX -> (direct) -> I2S1 CIF
snd_soc_dapm_path: *I2S1 DAP RX -> (direct) -> I2S1 CIF
snd_soc_dapm_path: *I2S1 DAP Receive -> (direct) -> I2S1 DAP
snd_soc_dapm_path: *x AIF1 Capture-I2S1 DAP Receive -> (direct) ->
snd_soc_dapm_path: *x AIF1 Capture -> (direct) -> x AIF1
snd_soc_dapm_path: *x AIF1TX -> (direct) -> x AIF1 Capture
snd_soc_dapm_path: *x IF1 ADC -> (direct) -> x AIF1TX
snd_soc_dapm_path: *x IF1 DAC1 -> IF1_DAC1 -> x SPDIF
snd_soc_dapm_path: *x IF1 DAC1 -> (direct) -> x IF1
snd_soc_dapm_path: *x IF1 DAC1 R -> IF1 DAC1 ->
snd_soc_dapm_path: *x DAC R1 Mux -> DAC1 Switch ->
snd_soc_dapm_path: *x DAC1 MIXR -> DAC R1 Switch ->
snd_soc_dapm_path: *x DAC1 MIXR -> (direct) -> x DAC_REF
snd_soc_dapm_path: *x DAC_REF -> DAC_REF -> x IF3 ADC
snd_soc_dapm_path: *x DAC_REF -> (direct) -> x TDM AD2:DAC
snd_soc_dapm_path: *x DAC_REF -> (direct) -> x TDM AD1:AD2:DAC
snd_soc_dapm_path: *x TDM AD1:AD2:DAC -> AD1:AD2:DAC:NUL -> x TDM
snd_soc_dapm_path: *x TDM Data Mux -> L/R -> x
snd_soc_dapm_path: *x IF1 67 ADC Swap Mux -> (direct)
snd_soc_dapm_path: *x TDM Data Mux -> L/R -> x
snd_soc_dapm_path: *x IF1 45 ADC Swap Mux -> (direct)
snd_soc_dapm_path: *x TDM Data Mux -> L/R -> x
snd_soc_dapm_path: *x IF1 23 ADC Swap Mux -> (direct)
snd_soc_dapm_path: *x TDM Data Mux -> L/R -> x
snd_soc_dapm_path: *x IF1 01 ADC Swap Mux -> (direct)
snd_soc_dapm_path: *x IF1 DAC1 -> (direct) -> x IF1
snd_soc_dapm_path: *x IF1 DAC1 L -> IF1 DAC1 ->
snd_soc_dapm_path: *x DAC L1 Mux -> DAC1 Switch ->
snd_soc_dapm_path: *x DAC1 MIXL -> DAC L1 Switch ->
snd_soc_dapm_path: *x DAC1 MIXL -> (direct) -> x DAC_REF
snd_soc_dapm_path: *x IF_ADC2 -> IF_ADC2 -> x IF2 ADC
snd_soc_dapm_path: *x IF_ADC2 -> (direct) -> x TDM AD2:DAC
snd_soc_dapm_path: *x IF_ADC2 -> (direct) -> x TDM AD1:AD2:DAC
snd_soc_dapm_path: *x IF_ADC1 -> (direct) -> x TDM AD1:AD2:DAC
snd_soc_dapm_path: *x Stereo1 ADC Volume R -> (direct) ->
snd_soc_dapm_path: *x Stereo1 ADC Volume L -> (direct) ->
snd_soc_dapm_path: *x Stereo1 ADC MIXR -> (direct) -> x
snd_soc_dapm_path: *x Stereo1 ADC MIXL -> (direct) -> x
snd_soc_dapm_path: *x Stereo1 ADC R1 Mux -> ADC1 Switch
snd_soc_dapm_path: *x Stereo1 ADC L1 Mux -> ADC1 Switch
snd_soc_dapm_path: *x Stereo1 ADC R Mux -> ADC ->
snd_soc_dapm_path: *x Stereo1 ADC L Mux -> ADC ->
snd_soc_dapm_path: *x ADC1 R -> ADC1 -> x Stereo1
snd_soc_dapm_path: *x ADC1 L -> ADC1 L -> x
snd_soc_dapm_path: *x ADC1 L -> ADC1 L -> x
snd_soc_dapm_path: *x ADC1 L -> ADC1 -> x Stereo1
snd_soc_dapm_path: *x RECMIX1R -> (direct) -> x ADC1 R
snd_soc_dapm_path: *x RECMIX1L -> (direct) -> x ADC1 L
snd_soc_dapm_path: *x BST1 -> BST1 Switch -> x RECMIX1R
snd_soc_dapm_path: *x BST1 -> BST1 Switch -> x RECMIX1L
snd_soc_dapm_path: *x IN1N -> (direct) -> x BST1
snd_soc_dapm_path: *x IN1P -> (direct) -> x BST1
snd_soc_dapm_path: *x Mic Jack -> (direct) -> x IN2P
snd_soc_dapm_path: *x Mic Jack -> (direct) -> x IN1P
You must ensure that the physical codec pins (input/output; IN1P and IN2P in this case) are connected directly or indirectly to the codec DAI’s AIF_OUT/AIF_IN. AIF_OUT/AIF_IN (AIF1 Capture in this case) interfaces with the CPU DAI (I2S in this case), which in turn must be connected directly or indirectly to the platform DAI (ADMAIF in this case).
2. Confirm the settings for the audio interface pins. The pins for the audio interface must be configured as special function IOs (SFIOs) and not GPIOs. Then the pinmux settings for the SFIO must select the desired audio function. See Board Interfaces to determine whether the pinmux settings are required. If so, see the appropriate section of the topic Platform Adaptation and Bring-Up for pinmux change instructions.
To verify the default SFIO pinmux configuration, check the pinmux node in the appropriate device tree source file; if the SFIO is configured through an override, check the node after the override has been applied.
3. Confirm that the audio interface’s status property is set to okay in the appropriate device tree source file.
For example, for Jetson TX1 and Jetson Nano, the device tree file is at:
hardware/nvidia/soc/t210/kernel-dts/tegra210-soc/tegra210-audio.dtsi
4. Probe the audio signals with an oscilloscope.
For example, if using I2S, probe the frame sync (FS) and bit clock (BCLK) to verify that the timings are correct.

I2S Software Reset Failed

A common problem is that the I2S software reset fails when playback or capture is started on an I2S interface. Error messages like these appear in the dmesg log:
tegra210-i2s tegra210-i2s.0: Failed at I2S0_TX sw reset
tegra210-i2s tegra210-i2s.0: ASoC: PRE_PMU: I2S1 DAP TX event failed: -22
This problem occurs when the clock for the I2S interface is not active and hence the software reset fails. It typically occurs when the I2S interface is the bit clock slave and hence the bit clock is provided by an external device such as a codec. If this problem occurs, check that the bit clock is being enabled when the playback or capture is initiated.
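As a quick check, assuming debugfs is mounted, you can verify whether the relevant I2S clock is prepared, enabled, and running when the stream starts; the clock names shown by the filter below vary by platform:
sudo grep -i i2s /sys/kernel/debug/clk/clk_summary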

XRUN Observed During Playback/Capture

An XRUN is either an underrun (on playback) or overrun (on capture) of the audio circular buffer. In the case of playback, the CPU writes to the audio circular buffer. The DMA reads it and sends the data to the appropriate audio interface (such as I2S, etc.) via the AHUB. In the case of capture, the DMA writes to the audio circular buffer (with data received from the AHUB) and the CPU reads it.
An XRUN event typically indicates that the CPU is unable to keep up with the DMA. In the case of playback, the DMA reads stale data. In the case of capture, data is lost. Hence, an XRUN event can signify a system performance/latency issue, which can have many different causes.
If an XRUN occurs, try these measures to determine whether there is a performance issue:
Enable maximum performance by running the jetson_clocks.sh script. This script is in the user HOME directory on the Jetson platform’s root file system.
Use a RAM file system for reading and writing the audio data. The default root file system format for Jetson platforms is EXT4 with journaling enabled. Latencies have been observed with journaling file systems such as EXT4, and can lead to XRUN events. Enter these commands to create a simple 100 MB RAM file system:
$ sudo mkdir /mnt/ramfs
$ sudo mount -t tmpfs -o size=100m tmpfs /mnt/ramfs
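# You can then point aplay/arecord at the RAM file system, for example:
$ arecord -D hw:<cardname>,<devID> -r 48000 -c 2 -f S16_LE /mnt/ramfs/out.wav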
You can increase the size of the audio circular buffer to reduce the impact of system latencies. The default size of the audio circular buffer is 32 KB. It is defined by the buffer_bytes_max member of the tegra_alt_pcm_hardware structure in the Linux kernel source file:
kernel/nvidia/sound/soc/tegra-alt/utils/tegra_pcm_alt.c
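The following sketch illustrates the kind of structure involved; it is not the actual definition from tegra_pcm_alt.c, whose fields and defaults may differ. Increasing buffer_bytes_max (for example, to 128 KB) gives the DMA more headroom before an XRUN occurs, at the cost of additional memory and latency.

#include <sound/pcm.h>

/* Illustrative only: the real tegra_alt_pcm_hardware sets additional fields. */
static const struct snd_pcm_hardware example_pcm_hardware = {
    .info             = SNDRV_PCM_INFO_MMAP |
                        SNDRV_PCM_INFO_INTERLEAVED,
    .period_bytes_min = 1024,
    .period_bytes_max = 32 * 1024,
    .periods_min      = 2,
    .periods_max      = 8,
    .buffer_bytes_max = 128 * 1024,   /* raised from the 32 KB default */
};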