# Audio spectrogram

## Background

In this example we will go through the steps to build a DALI audio processing pipeline, including the calculation of a spectrogram. A spectrogram is a representation of a signal (e.g. an audio signal) that shows the evolution of the frequency spectrum in time.

Typically, a spectrogram is calculated by computing the fast Fourier transform (FFT) over a series of overlapping windows extracted from the original signal. The process of dividing the signal into short, fixed-size segments and applying the FFT to each of them independently is called the short-time Fourier transform (STFT). The spectrogram is then calculated as the (typically squared) complex magnitude of the STFT.
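
This process can be sketched in plain NumPy (the window length, hop size, and test tone below are illustrative choices, not DALI's defaults):

```python
import numpy as np

def stft_power(signal, n_fft=512, hop=128):
    # Extract overlapping windows of length n_fft, spaced hop samples apart
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    # Apply the window function and the FFT to each frame independently (STFT)
    stft = np.fft.rfft(frames * window, axis=1)
    # The power spectrogram is the squared complex magnitude, frequency x time
    return (np.abs(stft) ** 2).T

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)  # 1 s of a 440 Hz tone
spec = stft_power(sig)
print(spec.shape)  # (n_fft // 2 + 1, number of frames)
```

For a pure tone, the resulting spectrogram has its energy concentrated in the frequency bin closest to the tone in every time frame.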

Extracting short-term windows from the original signal distorts the calculated spectrum, producing artifacts. This is often called spectral leakage. To control/reduce the spectral leakage effect, we multiply each extracted window by a window function. Some examples of window functions are the Hann and Hamming windows.
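
The effect can be demonstrated with a short NumPy sketch (not part of the DALI pipeline; the tone frequency and window length are arbitrary illustrative choices):

```python
import numpy as np

n = 256
# A tone at 10.5 cycles per window: its frequency falls between FFT bins,
# which makes spectral leakage clearly visible
t = np.arange(n) / n
tone = np.sin(2 * np.pi * 10.5 * t)

rect = np.abs(np.fft.rfft(tone)) ** 2                   # rectangular (no) window
hann = np.abs(np.fft.rfft(tone * np.hanning(n))) ** 2   # Hann window

# Fraction of the total energy that leaked far away from the tone
# (bins 5..16 bracket the true frequency)
far = np.r_[0:5, 17:len(rect)]
leak_rect = rect[far].sum() / rect.sum()
leak_hann = hann[far].sum() / hann.sum()
print(leak_hann < leak_rect)  # the Hann window leaks far less into distant bins
```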

It is beyond the scope of this example to go deeper into the details of the signal processing concepts mentioned above. More information can be found here:

- STFT: https://en.wikipedia.org/wiki/Short-time_Fourier_transform
- Window functions: https://en.wikipedia.org/wiki/Window_function

## Reference implementation

To verify the correctness of DALI’s implementation, we will compare it against librosa (https://librosa.github.io/librosa/).

```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import librosa
import librosa.display

# Load an example clip; y holds the audio samples and sr the sampling rate.
# (librosa.example is used here for illustration; any audio file works.)
y, sr = librosa.load(librosa.example('trumpet'))
```


Librosa provides an API to calculate the STFT, producing a complex-valued output. It is then trivial to calculate the power spectrum from the complex STFT as follows:

```python
# Size of the FFT, which will also be used as the window length
n_fft = 2048

# Step or stride between windows. If the step is smaller than the window length,
# the windows will overlap.
hop_length = 512

# Calculate the spectrogram as the squared complex magnitude of the STFT
spectrogram_librosa = np.abs(
    librosa.stft(y, n_fft=n_fft, hop_length=hop_length,
                 win_length=n_fft, window='hann')) ** 2
```


We can now convert the spectrogram to a logarithmic scale by transforming the amplitude to decibels. While doing so, we also normalize the spectrogram so that its maximum represents the 0 dB point.

```python
spectrogram_librosa_db = librosa.power_to_db(spectrogram_librosa, ref=np.max)
```


The last step is to display the spectrogram:

```python
librosa.display.specshow(spectrogram_librosa_db, sr=sr, y_axis='log',
                         x_axis='time', hop_length=hop_length)
plt.title('Reference power spectrogram')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.show()
```

## Calculating the spectrogram using DALI

To demonstrate DALI’s Spectrogram operator, we will define a DALI pipeline whose input is provided externally with the help of the ExternalSource operator. For demonstration purposes, we can feed the same input in every iteration, as we will only be calculating one spectrogram.

```python
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
import nvidia.dali as dali

class SpectrogramPipeline(Pipeline):
    def __init__(self, device, batch_size, nfft, window_length, window_step,
                 num_threads=1, device_id=0):
        super(SpectrogramPipeline, self).__init__(batch_size, num_threads, device_id)
        self.device = device

        self.batch_data = []
        for _ in range(batch_size):
            self.batch_data.append(np.array(y, dtype=np.float32))

        self.external_source = ops.ExternalSource()
        self.spectrogram = ops.Spectrogram(device=self.device,
                                           nfft=nfft,
                                           window_length=window_length,
                                           window_step=window_step)

    def define_graph(self):
        self.data = self.external_source()
        out = self.data.gpu() if self.device == 'gpu' else self.data
        out = self.spectrogram(out)
        return out

    def iter_setup(self):
        self.feed_input(self.data, self.batch_data)
```


With the pipeline defined, we can now build and run it:

```python
pipe = SpectrogramPipeline(device='gpu', batch_size=1, nfft=n_fft,
                           window_length=n_fft, window_step=hop_length)
pipe.build()
outputs = pipe.run()
spectrogram_dali = outputs[0].as_cpu().at(0)
```


and display it as we did with the reference implementation:

```python
spectrogram_dali_db = librosa.power_to_db(spectrogram_dali, ref=np.max)
librosa.display.specshow(spectrogram_dali_db, sr=sr, y_axis='log',
                         x_axis='time', hop_length=hop_length)
plt.title('DALI power spectrogram')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.show()
```

As a last sanity check, we can verify that the numerical difference between the reference implementation and DALI’s is insignificant:

```python
print("Average error: {0:.5f} dB".format(
    np.mean(np.abs(spectrogram_dali_db - spectrogram_librosa_db))))
assert np.allclose(spectrogram_dali_db, spectrogram_librosa_db, atol=2)
```

```
Average error: 0.00359 dB
```


## Mel spectrogram

The mel scale is a non-linear transformation of the frequency scale based on the human perception of pitch. It is constructed so that pairs of frequencies separated by the same delta on the mel scale are perceived by humans as equally distant. More information can be found here: https://en.wikipedia.org/wiki/Mel_scale.
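
As an illustration, one common variant of the hertz-to-mel mapping is the HTK convention shown below (librosa's default is the slightly different Slaney variant, so exact values differ between libraries):

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale: m = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=np.float64) / 700.0)

def mel_to_hz(m):
    # Inverse of the mapping above
    return 700.0 * (10.0 ** (np.asarray(m, dtype=np.float64) / 2595.0) - 1.0)

# The mapping is roughly linear below ~1 kHz and logarithmic above;
# by construction, 1000 Hz maps to approximately 1000 mel.
mels = hz_to_mel([100.0, 1000.0, 8000.0])
```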

In machine learning applications involving speech and audio, we typically want to represent the power spectrogram in the mel scale domain. We do that by applying a bank of overlapping triangular filters that compute the energy of the spectrum in each band.
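
Such a triangular filter bank can be sketched as follows (a simplified illustration using the HTK-style mel formula; it assumes all band edges map to distinct FFT bins, and it omits the normalization options a library implementation would offer):

```python
import numpy as np

def mel_filter_bank(sr=16000, n_fft=2048, n_filter=8, f_low=0.0, f_high=8000.0):
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Filter edges are spaced uniformly on the mel scale...
    mel_pts = np.linspace(hz_to_mel(f_low), hz_to_mel(f_high), n_filter + 2)
    # ...then converted back to Hz and to FFT bin indices
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filter, n_fft // 2 + 1))
    for i in range(n_filter):
        lo, center, hi = bins[i], bins[i + 1], bins[i + 2]
        # Rising and falling edges of the i-th triangular filter
        fbank[i, lo:center] = (np.arange(lo, center) - lo) / (center - lo)
        fbank[i, center:hi] = (hi - np.arange(center, hi)) / (hi - center)
    return fbank

fbank = mel_filter_bank()
# Applying the bank to a power spectrogram: mel_spec = fbank @ spectrogram
```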

Typically, we want the mel spectrogram represented in decibels. We can calculate a mel spectrogram in decibels by using the following DALI pipeline.

```python
class MelSpectrogramPipeline(Pipeline):
    def __init__(self, device, batch_size, nfft, window_length, window_step,
                 num_threads=1, device_id=0):
        super(MelSpectrogramPipeline, self).__init__(batch_size, num_threads, device_id)
        self.device = device

        self.batch_data = []
        for _ in range(batch_size):
            self.batch_data.append(np.array(y, dtype=np.float32))

        self.external_source = ops.ExternalSource()
        self.spectrogram = ops.Spectrogram(device=self.device,
                                           nfft=nfft,
                                           window_length=window_length,
                                           window_step=window_step)

        self.mel_fbank = ops.MelFilterBank(device=self.device,
                                           sample_rate=sr,
                                           nfilter=128,
                                           freq_high=8000.0)

        self.dB = ops.ToDecibels(device=self.device,
                                 multiplier=10.0,
                                 cutoff_db=-80.0)

    def define_graph(self):
        self.data = self.external_source()
        out = self.data.gpu() if self.device == 'gpu' else self.data
        out = self.spectrogram(out)
        out = self.mel_fbank(out)
        out = self.dB(out)
        return out

    def iter_setup(self):
        self.feed_input(self.data, self.batch_data)
```

```python
pipe = MelSpectrogramPipeline(device='cpu', batch_size=1, nfft=n_fft,
                              window_length=n_fft, window_step=hop_length)
pipe.build()
outputs = pipe.run()
mel_spectrogram_dali_db = outputs[0].at(0)
```


We can now display the result and verify that it matches Librosa’s:

```python
librosa.display.specshow(mel_spectrogram_dali_db, sr=sr, y_axis='mel',
                         x_axis='time', hop_length=hop_length)
plt.title('DALI Mel-frequency spectrogram')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.show()
```

```python
mel_spectrogram_librosa = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)
mel_spectrogram_librosa_db = librosa.power_to_db(mel_spectrogram_librosa, ref=np.max)
assert np.allclose(mel_spectrogram_dali_db, mel_spectrogram_librosa_db, atol=1)
```


## Mel-frequency cepstral coefficients (MFCCs)

MFCCs are an alternative representation of the Mel-frequency spectrogram often used in audio applications. The MFCCs are calculated by applying the discrete cosine transform (DCT) to a mel-frequency spectrogram.
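
That final step can be sketched with a plain NumPy implementation of the ortho-normalized DCT-II, matching the `normalize=True` setting used further below (in practice you would use a library routine; the random array stands in for a mel spectrogram in dB):

```python
import numpy as np

def dct2_ortho(x, n_out):
    # DCT-II along axis 0 with orthogonal normalization,
    # keeping only the first n_out coefficients
    n = x.shape[0]
    k = np.arange(n_out)[:, None]            # output coefficient index
    i = np.arange(n)[None, :]                # input sample index
    basis = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)                 # the k = 0 row gets a smaller scale
    return basis @ x

mel_db = np.random.default_rng(0).standard_normal((128, 10))  # stand-in mel spectrogram
mfccs = dct2_ortho(mel_db, n_out=40)
print(mfccs.shape)  # (40, 10)
```

With this normalization the DCT basis rows are orthonormal, which is what makes DALI's and librosa's outputs directly comparable at the end of this example.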

DALI’s implementation of the DCT uses the formulas described in https://en.wikipedia.org/wiki/Discrete_cosine_transform.

In addition to the DCT, a cepstral filter (also known as liftering) can be applied to emphasize higher order coefficients.

A liftered cepstral coefficient is calculated according to the formula

$$\widehat{\text{MFCC}}_i = w_i \cdot \text{MFCC}_i$$

where

$$w_i = 1 + \frac{L}{2}\sin\Big(\frac{\pi i}{L}\Big)$$

and $L$ is the *liftering* coefficient.
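
The lifter weights from this formula can be computed directly; a sketch, using an illustrative choice of $L = 22$ and 13 coefficients (not values mandated by DALI):

```python
import numpy as np

def lifter_weights(n_mfcc, L):
    # w_i = 1 + (L / 2) * sin(pi * i / L); L = 0 conventionally disables liftering
    if L == 0:
        return np.ones(n_mfcc)
    i = np.arange(n_mfcc)
    return 1.0 + (L / 2.0) * np.sin(np.pi * i / L)

w = lifter_weights(13, L=22)
# Liftered MFCCs would then be mfccs * w[:, np.newaxis]
```

Note that the weight grows from 1 at $i = 0$ to a maximum of $1 + L/2$ at $i = L/2$, which is what emphasizes the higher-order coefficients.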

We can use DALI’s MFCC operator to transform the mel-spectrogram into a set of MFCCs

```python
class MFCCPipeline(Pipeline):
    def __init__(self, device, batch_size, nfft, window_length, window_step,
                 dct_type, n_mfcc, normalize, lifter, num_threads=1, device_id=0):
        super(MFCCPipeline, self).__init__(batch_size, num_threads, device_id)
        self.device = device

        self.batch_data = []
        for _ in range(batch_size):
            self.batch_data.append(np.array(y, dtype=np.float32))

        self.external_source = ops.ExternalSource()
        self.spectrogram = ops.Spectrogram(device=self.device,
                                           nfft=nfft,
                                           window_length=window_length,
                                           window_step=window_step)

        self.mel_fbank = ops.MelFilterBank(device=self.device,
                                           sample_rate=sr,
                                           nfilter=128,
                                           freq_high=8000.0)

        self.dB = ops.ToDecibels(device=self.device,
                                 multiplier=10.0,
                                 cutoff_db=-80.0)

        self.mfcc = ops.MFCC(device=self.device,
                             axis=0,
                             dct_type=dct_type,
                             n_mfcc=n_mfcc,
                             normalize=normalize,
                             lifter=lifter)

    def define_graph(self):
        self.data = self.external_source()
        out = self.data.gpu() if self.device == 'gpu' else self.data
        out = self.spectrogram(out)
        out = self.mel_fbank(out)
        out = self.dB(out)
        out = self.mfcc(out)
        return out

    def iter_setup(self):
        self.feed_input(self.data, self.batch_data)
```


Let’s now run the pipeline and display the output as we did previously

```python
pipe = MFCCPipeline(device='cpu', batch_size=1, nfft=n_fft, window_length=n_fft,
                    window_step=hop_length, dct_type=2, n_mfcc=40,
                    normalize=True, lifter=0)
pipe.build()
outputs = pipe.run()
mfccs_dali = outputs[0].at(0)
```

```python
plt.figure(figsize=(10, 4))
librosa.display.specshow(mfccs_dali, x_axis='time')
plt.colorbar()
plt.title('MFCC (DALI)')
plt.tight_layout()
plt.show()
```

As a last step, let’s verify that this implementation produces the same result as Librosa. Note that we are comparing ortho-normalized MFCCs, as Librosa’s DCT implementation uses a different formula, whose output is scaled by a factor of 2 compared with the Wikipedia formulas.

```python
mfccs_librosa = librosa.feature.mfcc(S=mel_spectrogram_librosa_db,
                                     dct_type=2, n_mfcc=40, norm='ortho', lifter=0)
assert np.allclose(mfccs_librosa, mfccs_dali, atol=1)
```