# BAST: Binaural Audio Spectrogram Transformer for Binaural Sound Localization
**Authors**: Sheng Kuang, Jie Shi, Kiki van der Heijden, Siamak Mehrkanoon
> sheng.kuang@maastrichtuniversity.nl
> j.shi1@uu.nl
> kiki.vanderheijden@donders.ru.nl
> siamak.mehrkanoon@maastrichtuniversity.nl; s.mehrkanoon@uu.nl

Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands. Department of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands. Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
Abstract
Accurate sound localization in reverberant environments is essential for human auditory perception. Recently, Convolutional Neural Networks (CNNs) have been utilized to model the binaural human auditory pathway. However, CNNs have difficulty capturing global acoustic features. To address this issue, we propose a novel end-to-end Binaural Audio Spectrogram Transformer (BAST) model to predict the sound azimuth in both anechoic and reverberant environments. Two modes of implementation, i.e., BAST-SP and BAST-NSP, corresponding to the BAST model with shared and non-shared parameters respectively, are explored. Our model with subtraction interaural integration and hybrid loss achieves an angular distance of 1.29 degrees and a mean squared error of 1e-3 across all azimuths, significantly surpassing CNN-based models. An exploratory analysis of BAST's performance on the left and right hemifields and in anechoic and reverberant environments shows its generalization ability as well as the feasibility of binaural Transformers for sound localization. Furthermore, an analysis of the attention maps is provided to give additional insight into the localization process in a natural reverberant environment.
Keywords: Transformer, Sound localization, Binaural integration
1 Introduction
Sound source localization is a fundamental ability in everyday life. Accurately and precisely localizing incoming auditory streams is required for auditory perception and social communication. In the past decades, the biological basis and neural mechanisms of sound localization have been extensively explored [1, 2, 3, 4]. Normal-hearing listeners extract horizontal acoustic cues mainly by relying on the interaural level differences (ILD) and interaural time differences (ITD) of the auditory input. These cues are encoded along the human subcortical auditory pathway, in which the auditory structures in the brainstem integrate and convey the binaural signals from the cochleas to the auditory cortex [2, 4]. However, sound localization is frequently affected by noise and reverberation in complex real-world environments, which distort the spatial cues of the sound source of interest [5]. Yet, it is still not clear how the human brain extracts the spatial position of acoustic signals in complex listening environments.
Recently, Deep Learning (DL) [6] has been proposed to model auditory processing and has achieved great success. These approaches enable the optimization of auditory models for real-life auditory environments [7, 8, 9, 10]. In early attempts, DL methods were combined with conventional feature engineering to deal with noise and reverberation [7, 11]. For instance, in [7], binaural spectral and spatial features were extracted separately, providing complementary information to a two-layer Deep Neural Network (DNN). Similarly, in [8, 12, 13], deep neural networks were used to de-noise and de-reverberate complex sound stimuli. In a CNN-based azimuth estimation approach, researchers utilized a Cascade of Asymmetric Resonators with Fast-Acting Compression to analyze sound signals and used onsite-generated correlograms to eliminate echo interference [14]. However, most of these approaches depend heavily on feature selection. To reduce this constraint, end-to-end Deep Residual Networks (DRNs) were proposed [15, 16]. Instead of selecting features from the acoustic signal, raw sound spectrograms were fed to a Deep Residual Network for azimuth prediction [16]. The DRN was shown to be robust even in the presence of unknown noise interference at low signal-to-noise ratios. Subsequently, [9] proposed a pure attention-based Audio Spectrogram Transformer (AST) that achieved state-of-the-art results for audio classification on multiple datasets. Although these DL-based methods have yielded promising results, they lack architectures similar to the human binaural auditory pathway and therefore may not resemble the neural processing of sound localization.
To model the neural mechanisms underlying sound localization, the performance of deep learning methods is commonly compared to human sound localization behavior [17, 18, 19]. For instance, [19] systematically explored the performance of a CNN at localizing binaural sound clips in a real-life listening environment; however, its architecture does not resemble the structure of the human auditory pathway. This issue has been addressed by utilizing a hierarchical neurobiologically-inspired CNN (NI-CNN) to model the binaural characteristics of human spatial hearing [17]. This unique hierarchical design models the binaural signal integration process and is shown to have brain-like latent feature representations. However, NI-CNN [17] is not an end-to-end model, as it leverages a cochlear model to generate auditory nerve representations as model input. Furthermore, considering the wide frequency range of the sound input, the convolution operations in NI-CNN mainly extract local-scale features and may therefore be limited in extracting global features from the acoustic time-frequency spectrogram.
In this study, we build on the successes and limitations of previously proposed deep neural networks for sound source localization to develop an end-to-end Transformer-based model of human sound localization that captures global acoustic features from auditory spectrograms. We aim at (i) investigating the performance of a pure Transformer-based hierarchical binaural neural network for human real-life sound localization; (ii) exploring the effect of various loss functions and binaural integration methods on the localization acuity at different azimuths; (iii) visualizing the attention flow of the proposed model to illustrate the localization process.
<details>
<summary>x1.png Details</summary>

### Visual Description
## Diagram: Binaural Audio Processing with Transformer Encoders
### Overview
The image presents a block diagram illustrating a binaural audio processing system using transformer encoders. It is divided into three main sections: (a) feature extraction and interaural integration, (b) the structure of a transformer encoder, and (c) three methods for interaural integration.
### Components/Axes
**Section (a): Feature Extraction and Interaural Integration**
* **Input:** Spectrograms labeled "Left" and "Right".
* **Overlapped Patch Generation:** Indicates the division of the spectrograms into overlapping patches. The patches are labeled with "N<sub>H</sub>N<sub>T</sub>".
* **Linear Projection:** A linear transformation applied to the patches.
* **Patch Embeddings:** The output of the linear projection, representing the embedded patches.
* **Position Embeddings:** Positional information added to the patch embeddings.
* **Transformer Encoder-L:** Transformer encoder for the left channel.
* **Transformer Encoder-R:** Transformer encoder for the right channel.
* **Interaural Integration:** Combines information from the left and right channels.
* **Transformer Encoder-C:** Transformer encoder applied after interaural integration.
* **Average over Patches:** Averages the output over all patches.
* **Linear Layer:** A final linear layer that maps the output to (x, y) ∈ R<sup>2</sup>.
**Section (b): Transformer Encoder**
* **Patch Embeddings:** Input to the transformer encoder.
* **3x:** Indicates that the following block is repeated three times.
* **Layer Norm:** Layer normalization.
* **Multi-Head Attention:** Multi-head attention mechanism.
* **MLP:** Multilayer Perceptron.
* **Output Sequence:** The output of the transformer encoder.
**Section (c): Interaural Integration Methods**
* **Method 1: Concatenation:** Concatenates the left and right channel features.
* Left: D x N<sub>H</sub>N<sub>T</sub>
* Right: D x N<sub>H</sub>N<sub>T</sub>
* Output: 2D x N<sub>H</sub>N<sub>T</sub>
* **Method 2: Addition:** Adds the left and right channel features.
* Left: D x N<sub>H</sub>N<sub>T</sub>
* Right: D x N<sub>H</sub>N<sub>T</sub>
* Output: D x N<sub>H</sub>N<sub>T</sub>
* **Method 3: Subtraction:** Subtracts the right channel features from the left channel features.
* Left: D x N<sub>H</sub>N<sub>T</sub>
* Right: D x N<sub>H</sub>N<sub>T</sub>
* Output: D x N<sub>H</sub>N<sub>T</sub>
### Detailed Analysis
**Section (a):**
1. The process begins with left and right channel spectrograms.
2. These spectrograms are divided into overlapping patches, each of size N<sub>H</sub>N<sub>T</sub>.
3. Each patch undergoes linear projection, resulting in patch embeddings.
4. Positional embeddings are added to the patch embeddings to incorporate sequential information.
5. The left and right channel embeddings are processed by separate transformer encoders (Transformer Encoder-L and Transformer Encoder-R).
6. The outputs of these encoders are then integrated using one of the methods described in section (c).
7. The integrated features are passed through another transformer encoder (Transformer Encoder-C).
8. The output is averaged over all patches.
9. Finally, a linear layer maps the averaged output to a 2D coordinate (x, y) ∈ R<sup>2</sup>.
**Section (b):**
1. The transformer encoder consists of a series of repeated blocks (3x).
2. Each block includes layer normalization, multi-head attention, and an MLP.
3. Residual connections are used to improve training stability.
4. The output of the transformer encoder is a sequence of embeddings.
**Section (c):**
1. Three methods for interaural integration are presented: concatenation, addition, and subtraction.
2. Concatenation doubles the feature dimension, while addition and subtraction maintain the original dimension.
3. The input to each method consists of left and right channel features, each with dimensions D x N<sub>H</sub>N<sub>T</sub>.
### Key Observations
* The system processes binaural audio by extracting features from left and right channel spectrograms.
* Transformer encoders are used to model the temporal dependencies in the audio signals.
* Interaural integration is performed to combine information from the left and right channels.
* Three different methods for interaural integration are explored.
### Interpretation
The diagram illustrates a system for binaural audio processing that leverages transformer encoders. The system aims to extract relevant features from the left and right audio channels, integrate them effectively, and map the resulting representation to a 2D space. The use of transformer encoders allows the system to capture long-range dependencies in the audio signals, while the different interaural integration methods provide flexibility in how the left and right channel information is combined. The final linear layer suggests that the system might be used for tasks such as sound localization or spatial audio analysis, where the output (x, y) represents a spatial coordinate.
</details>
Figure 1: Architecture of the proposed Binaural Audio Spectrogram Transformer (BAST). (a) The architecture of the proposed model. (Here there are $N_{H}N_{T}$ number of patches). (b) The architecture of a single Transformer encoder. (c) Three examined interaural integration methods: concatenation, addition and subtraction.
2 Related Works
Binaural auditory models utilize head-related transfer functions to apply the characteristics of human binaural hearing to monaural sound clips in order to simulate human spatial hearing. Conventional methods for sound source localization (SSL) have been based on microphone arrays and can be categorized into controllable beamforming, high-resolution spectrogram estimation, and time-difference-of-arrival techniques [20]. These conventional signal processing techniques are often used as baselines or as input feature extractors for DL-based SSL methods. The Short-Time Fourier Transform (STFT) [21] is used to convert the time-domain signals from each microphone into the time-frequency domain. The STFT provides a representation of the signal in both time and frequency, allowing for the analysis of how the frequency content of the signal evolves. The Gaussian Mixture Model (GMM), commonly used in machine learning-based studies, calculates the probability distribution of the source location in reverberant environments [22]. Gaussian mixture regression (GMR) was later extended to localize multi-source sounds [23]. Subsequently, model-based methods have been proposed to extract ILD and ITD cues for DNN training [7]. Compressive sensing and sparse recovery techniques are extensively applied in acoustics. Sparse Bayesian learning (SBL) integrates the Bayesian framework with the concepts of sparse representations and compressive sensing, and has been used for SSL [24, 25, 26, 27]. However, the performance of these hybrid techniques remains unstable since the feature extraction routine varies across different datasets.
Advancements in deep learning have led to the development of convolutional neural network (CNN) based methods for sound source localization. The CNN designed by [28] uses multichannel STFT phase spectrograms to predict multiple speakers' azimuths in reverberant environments. The model consists of three convolutional layers with 64 filters of size ${2}×{1}$ to consider neighboring frequency bands and microphones. Deeper CNN architectures [29, 30, 31, 32] have been applied to estimate both azimuth and elevation. Several three-dimensional convolutional networks [33, 34] report that convolving jointly over time, frequency, and channel can achieve better accuracy than 2D convolutions. Focusing on binaural audio-visual localization, the Binaural Audio-Visual Network (BAVNet) [35] performs pixel-level sound source localization using binaural recordings and videos, significantly improving performance over traditional monaural audio methods, especially when the quality of visual information is limited. As a data-driven DL method, NI-CNN can learn latent features for azimuth prediction from human auditory nerve representations [17]. These studies highlight the importance of advanced neural network architectures and feature extraction methods for enhancing the accuracy and resolution of sound source localization systems.
The Transformer was initially proposed in natural language processing to handle long-range dependencies [36, 37]. Recently, the Transformer has been successfully applied in computer vision by casting images into patch embedding sequences [38, 39]. Many hybrid models combine the Transformer with a CNN or Recurrent Neural Network (RNN) in audio processing, and some studies even directly embed attention blocks into CNNs or RNNs to capture global features in a parameter-efficient way [40, 41, 42, 43]. Transformer-based models for sound source localization have gained significant attention in recent years. [44] uses a transformer encoder with residual connections and evaluates various configurations to manage multiple sound events. PILOT [45] is a transformer-based framework for sound event localization that captures temporal dependencies through self-attention mechanisms and represents estimated positions as multivariate Gaussian variables to include uncertainty. The Audio Pyramid Transformer (APT) [46] applies an attention mechanism to weakly supervised sound event detection and audio classification, highlighting the applicability of transformer-based models to audio tasks. Multi-head self-attention, the parallel use of several attention layers in transformers, has also been used in SSL, for instance with first-order Ambisonic signals. Subsequently, the authors in [9] introduced AST, which uses a Transformer model and variable-length monaural spectrograms to perform sound classification tasks. As a convolution-free, pure attention-based model, AST uses an overlapped-patch embedding generation policy to convert intra-patch local features into inter-patch attention weights. AST has achieved state-of-the-art results [47, 48] on multiple datasets for audio classification tasks.
The Vision Transformer (ViT) [38] represents a significant shift in the architecture of deep learning models for computer vision tasks. ViT divides an image into a sequence of fixed-size patches, linearly embeds them, and then processes them as tokens in a standard Transformer model. This method leverages the self-attention mechanism to capture long-range dependencies and contextual information across the image. The Audio Vision Transformer (AViT) is a model architecture that extends the concepts of Vision Transformers into the domain of audio processing. The audio-spectrogram vision transformer (AS-ViT) [49] uses vision transformer models to analyze audio-spectrogram images to identify abnormal respiratory sounds. The potential of ViT in audio-visual tasks such as sound source localization has been recognized [50]. Additionally, HTS-AT [51], a hierarchical token-semantic audio transformer, was designed to reduce model size and training time, addressing the limitations of existing audio transformers. Binaural sound localization in noisy environments has been investigated using the Frequency-Based Audio Vision Transformer (FAViT) [52]. FAViT uses selective attention mechanisms inspired by the Duplex Theory, outperforming recent CNNs and standard audio ViT models at localizing noisy speech. ViT-based localization has also been explored for through-ice and underwater acoustic tracking [53].
3 Method
3.1 Model architecture
The proposed Binaural Audio Spectrogram Transformer (BAST) is illustrated in Fig. 1. Similar to NI-CNN [17], a dual-input hierarchical architecture is utilized to simulate the human subcortical auditory pathway. As opposed to NI-CNN, which uses convolution layers, here three Transformer encoders (left, right and center), hereafter called TE-L, TE-R and TE-C, are utilized to construct a pure attention-based model. In particular, the pre-processed left and right sound waves are converted to left and right spectrograms denoted by $x^{L}∈\mathbb{R}^{H× T}$ and $x^{R}∈\mathbb{R}^{H× T}$. Here, $H$ denotes the number of frequency bands and $T$ the number of Tukey windows (with shape parameter 0.25).
In what follows, the TE-L path for processing the input data is explained. The other path, i.e., TE-R, follows the same process. At the beginning of the patch embedding layer, the left spectrogram $x^{L}∈\mathbb{R}^{H× T}$ is first decomposed into an overlapped-patch sequence $x_{patch}^{L}∈\mathbb{R}^{P^{2}×(N_{H}N_{T})}$, where $P$ is the patch size, and $N_{H}$ and $N_{T}$ are the number of patches along height and width respectively, obtained as follows,
$$
N_{H}=\left\lceil\frac{H-P+S}{S}\right\rceil,\quad N_{T}=\left\lceil\frac{T-P+S}{S}\right\rceil. \tag{1}
$$
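For concreteness, Eq. (1) can be checked numerically. With the spectrogram size used later in this paper (H = 129, T = 61), the stride S = 6 from Section 4.4, and an assumed AST-style patch size of P = 16 (the patch size is not stated explicitly in this excerpt), the formula reproduces the 180 patches per spectrogram reported in the training settings; a sketch under that patch-size assumption:

```python
import math

def num_patches(H, T, P, S):
    """Number of overlapping patches along height and width, per Eq. (1)."""
    n_h = math.ceil((H - P + S) / S)
    n_t = math.ceil((T - P + S) / S)
    return n_h, n_t

# Spectrogram of size 129 x 61, stride 6, assumed 16 x 16 patches.
n_h, n_t = num_patches(H=129, T=61, P=16, S=6)
print(n_h, n_t, n_h * n_t)  # 20 9 180
```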
If $H-P$ and $T-P$ are not divisible by the stride $S$ between patches, the spectrogram is zero-padded on the top and right respectively. A trainable linear projection flattens each patch to a $D$-dimensional latent representation, hereafter called a patch embedding [38]. Since our model outputs the sound location coordinates, the classification token in the Transformer encoder is removed. A fixed absolute position embedding [38] is added to the patch embeddings to capture the position information of the spectrogram in the Transformer. A learnable position embedding is not used, as it did not significantly change model performance compared to the absolute position embedding [9]. The output of the left position embedding layer, $z_{in}^{L}∈\mathbb{R}^{D×(N_{H}N_{T})}$, is then fed to the Transformer encoder TE-L.
We use the Transformer encoder design of [38, 9], consisting of $K$ stacked Multi-head Self-Attention (MSA) and Multi-Layer Perceptron (MLP) blocks. The BAST model performance is compared when using shared and non-shared parameters across the left and right Transformer encoders. Hereafter, BAST-SP refers to the BAST model whose left and right Transformer encoders share parameters, whereas in BAST-NSP the parameters of the left and right Transformer encoders are not shared. The outputs of the left and right Transformer encoders, i.e., $z_{out}^{L}$ and $z_{out}^{R}$, represent neural signals underlying the initial auditory processing stage along the left and right auditory pathways respectively. Subsequently, these binaural feature maps are integrated to simulate the function of the human olivary nucleus. Similar to NI-CNN, three integration methods, i.e. addition, subtraction and concatenation, are investigated. Specifically, addition is the summation of the feature maps of both sides; subtraction is the left feature map subtracted from the right feature map; concatenation concatenates $z_{out}^{L}$ and $z_{out}^{R}$ along the first dimension to produce $z_{in}^{C}∈\mathbb{R}^{2D×(N_{H}N_{T})}$. TE-C receives the integrated feature map $z_{in}^{C}$ and outputs the sequence $z_{out}^{C}$. Next, an average over the patch dimension and a final linear layer are applied to produce the sound location coordinates $(x,y)$. The last linear layer has no activation function, so the estimated coordinates can be any point on the 2D plane.
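The three integration operators differ only in how the two $D×(N_{H}N_{T})$ feature maps are combined. A minimal NumPy sketch, with random arrays standing in for $z_{out}^{L}$ and $z_{out}^{R}$, shows that concatenation doubles the feature dimension while addition and subtraction preserve it:

```python
import numpy as np

D, N = 1024, 180                    # embedding dim, number of patches
z_left = np.random.randn(D, N)      # stand-in for the TE-L output
z_right = np.random.randn(D, N)     # stand-in for the TE-R output

z_add = z_left + z_right            # addition: D x N
z_sub = z_right - z_left            # left subtracted from the right: D x N
z_cat = np.concatenate([z_left, z_right], axis=0)  # concatenation: 2D x N

print(z_add.shape, z_sub.shape, z_cat.shape)
# (1024, 180) (1024, 180) (2048, 180)
```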
3.2 Loss Function
Three loss functions, i.e., the Angular Distance (AD) loss [54], the Mean Square Error (MSE) loss, and a hybrid loss given by a convex combination of AD and MSE, are explored for training the proposed model. Let $C_{i}=(x_{i},y_{i})$ and $\hat{C}_{i}=(\hat{x}_{i},\hat{y}_{i})$ denote the ground truth and predicted coordinates for the $i$-th sample. The MSE loss measures the mean squared Euclidean distance between the prediction and the ground truth as follows:
$$
\textrm{MSE}=\frac{1}{N}\sum_{i}^{N}\|C_{i}-\hat{C}_{i}\|_{2}^{2}, \tag{2}
$$
where $(\hat{x}_{i},\hat{y}_{i})$ are the predicted coordinates, $(x_{i},y_{i})$ is the true sound location and $N$ is the batch size. Note that the MSE loss penalizes large Euclidean distance errors but is insensitive to the angular distance: predictions with the same MSE may correspond to different azimuths. In contrast to MSE, the AD loss measures only the angular distance while ignoring the Euclidean distance:
$$
\textrm{AD}=\frac{1}{\pi N}\sum^{N}_{i}\arccos{\left(\frac{C_{i}\hat{C}_{i}^{T}}{\|C_{i}\|_{2}\|\hat{C}_{i}\|_{2}}\right)}, \tag{3}
$$
where $C_{i},\hat{C}_{i}≠ 0$ .
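A NumPy sketch of Eqs. (2) and (3) and the hybrid combination. The weighting coefficient `alpha` is an assumption for illustration; the excerpt states only that the hybrid loss is a convex combination of AD and MSE:

```python
import numpy as np

def mse_loss(C, C_hat):
    """Mean squared Euclidean distance, Eq. (2). C, C_hat: (N, 2) arrays."""
    return np.mean(np.sum((C - C_hat) ** 2, axis=1))

def ad_loss(C, C_hat):
    """Mean angular distance normalized to [0, 1], Eq. (3)."""
    cos = np.sum(C * C_hat, axis=1) / (
        np.linalg.norm(C, axis=1) * np.linalg.norm(C_hat, axis=1))
    # Clip guards against rounding slightly outside arccos's domain.
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0))) / np.pi

def hybrid_loss(C, C_hat, alpha=0.5):
    """Convex combination of AD and MSE; alpha = 0.5 is an assumed weight."""
    return alpha * ad_loss(C, C_hat) + (1 - alpha) * mse_loss(C, C_hat)

# A prediction rotated 90 degrees from the ground truth:
C = np.array([[1.0, 0.0]])
C_hat = np.array([[0.0, 1.0]])
print(ad_loss(C, C_hat))   # 0.5 (90 deg / 180 deg)
print(mse_loss(C, C_hat))  # 2.0 (squared Euclidean distance)
```

Note that the AD loss of 0.5 is the same for any prediction 90 degrees away from the truth, regardless of its distance from the origin, which is exactly why the MSE metric is uninformative for AD-trained models.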
Table 1: The performance of BAST-NSP and BAST-SP compared to NI-CNN and NI-CNN∗ when different loss and binaural integration methods are used. The best-performing results in AD and MSE are in bold. The ↓ indicates that a lower value of the metric means better model performance. SS denotes the single-stream baselines, which have no interaural integration.
| Model | Loss | AD: SS ↓ | AD: Concat. ↓ | AD: Add. ↓ | AD: Sub. ↓ | MSE: SS ↓ | MSE: Concat. ↓ | MSE: Add. ↓ | MSE: Sub. ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CNN [28]∗ | MSE | 3.90° | — | — | — | 0.010 | — | — | — |
| | AD | 42.69° | — | — | — | — | — | — | — |
| | Hybrid | 3.09° | — | — | — | 0.011 | — | — | — |
| FAViT [52]∗ | MSE | 6.26° | — | — | — | 0.022 | — | — | — |
| | AD | 17.37° | — | — | — | — | — | — | — |
| | Hybrid | 3.73° | — | — | — | 0.015 | — | — | — |
| NI-CNN [17] | MSE | — | 4.80° | 4.80° | 5.30° | — | 0.011 | 0.013 | 0.014 |
| | AD | — | 3.70° | 3.90° | 5.20° | — | — | — | — |
| NI-CNN [17]∗ | MSE | — | 8.92° | 3.51° | 3.67° | — | 0.077 | 0.032 | 0.038 |
| | AD | — | 7.85° | 1.97° | 1.85° | — | — | — | — |
| | Hybrid | — | 8.35° | 3.53° | 3.19° | — | 0.074 | 0.033 | 0.031 |
| BAST-NSP | MSE | — | 2.78° | 2.48° | 2.42° | — | 0.003 | 0.002 | 0.002 |
| | AD | — | 2.39° | 1.30° | 1.63° | — | — | — | — |
| | Hybrid | — | 2.76° | 1.83° | **1.29°** | — | 0.004 | 0.002 | **0.001** |
| BAST-SP | MSE | — | 2.02° | 4.97° | 1.94° | — | 0.002 | 0.018 | 0.002 |
| | AD | — | 2.66° | 13.87° | 1.43° | — | — | — | — |
| | Hybrid | — | 1.98° | 5.72° | 2.03° | — | 0.003 | 0.026 | 0.002 |
Table 2: The number of layers in each Transformer encoder as well as the total number of trainable parameters of the proposed models. The tuple ( $·$ , $·$ , $·$ ) indicates the number of layers in the left, right and center Transformer encoders respectively.
| Model | Interaural Integration | Transformer Layers | Trainable Parameters |
| --- | --- | --- | --- |
| BAST-NSP | Concatenation | (3, 3, 3) | ~76M |
| BAST-NSP | Addition / Subtraction | (3, 3, 3) | ~57M |
| BAST-SP | Concatenation | (3, 3, 3) | ~57M |
| BAST-SP | Addition / Subtraction | (3, 3, 3) | ~38M |
4 Experiments
4.1 Dataset
We use the binaural audio data of [17], which consists of a training dataset and an independent testing dataset. In the training dataset, 4600 real-life sound waves (duration: 500 ms, sampling rate: 16000 Hz) are placed at each of 36 azimuth positions with $10\degree$ azimuth resolution, $0\degree$ elevation, and a 1-meter distance from the center point. In addition, the sound waves are spatialized in two acoustic environments, i.e. an anechoic environment (AE) without reverberation and a 10m $×$ 14m lecture hall with reverberation (RV); both the training and test sets contain data from both environments. In total, the training dataset has 331200 binaural learning samples. Similarly, the independent testing dataset contains 400 new sound waves processed in the same way, producing 28800 testing samples.
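The sample counts follow directly from the combinatorics of the spatialization; a quick arithmetic check:

```python
# Each sound wave is spatialized at every azimuth in both environments.
n_azimuths = 36        # 10-degree resolution over 360 degrees
n_environments = 2     # anechoic (AE) and reverberant (RV)

train_samples = 4600 * n_azimuths * n_environments
test_samples = 400 * n_azimuths * n_environments
print(train_samples, test_samples)  # 331200 28800
```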
4.2 Baseline methods
In this study, we evaluate the performance of our proposed model by comparing it against four baseline models widely utilized in the field: the two-stream CNN-based models NI-CNN∗ [17] and NI-CNN [17], the one-stream CNN model of [28], and the ViT-based FAViT [52]. NI-CNN and NI-CNN∗ use the cochleogram and the spectrogram as model inputs, respectively. The CNN and FAViT models take spectrograms as input. The hyper-parameters of all baseline models were tuned empirically. Benchmarking our proposed model against these established baselines provides a comprehensive evaluation framework for assessing its efficacy.
4.3 Model Evaluation
Models were evaluated by means of the MSE and AD errors defined in Eqs. (2) and (3); lower AD and MSE errors indicate better localization performance. Note that the MSE metric is not meaningful when BAST is trained with the AD loss, because this loss does not optimize the Euclidean distance between the ground truth and the prediction, and BAST places no constraints on the numerical range of the predicted coordinates. The one-stream models CNN and FAViT have no concatenation, addition, or subtraction modes, so the MSE and AD of these two models are measured only once.
4.4 Training Settings
As mentioned in Section 3.1, each sound wave is transformed to a binaural spectrogram (size: 2 $×$ 129 $×$ 61, frequency range: 0-8000 Hz, window length: 128 ms, overlap: 64 ms) before training. It is important to note that while the STFT transformation may reduce the fine-grained temporal differences between channels, our focus predominantly lies on interaural level differences (ILDs) rather than interaural time differences (ITDs). In order to have balanced training samples, we randomly select 75% of the binaural spectrograms in each azimuth position and listening environment of the training dataset; the remaining data is used as the validation set. As stated in the Dataset section, a separate test dataset is available for this study. This setting results in $248400$ training samples and $82800$ validation samples. The Adam optimizer [55] is used to train the model for 50 epochs with a batch size of 48 and a fixed learning rate of 1e-4. In the patch embedding layers, the stride of patches is set to 6, yielding 180 patches per spectrogram. Each Transformer encoder has three layers, with 1024 hidden dimensions (2048 dimensions when using concatenation as the integration method in the last Transformer encoder TE-C), 16 attention heads in the MSA blocks, 1024 dimensions in the MLP blocks, and a 0.2 dropout rate in the patch embeddings and MLP blocks. Our implementation, available at https://github.com/ShengKuangCN/BAST, is based on Python 3.8 and PyTorch 1.9.0; models are trained from scratch on 2 $×$ NVIDIA GeForce GTX 1080Ti GPUs with 11GB of RAM each. The empirically chosen number of layers in each Transformer encoder as well as the total number of trainable parameters are presented in Table 2.
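Two of the quantities above can be checked arithmetically: the 75%/25% train/validation split, and the 129 $×$ 61 spectrogram shape, which is consistent with a one-sided FFT over a 256-sample window with 50% overlap on a 500 ms, 16 kHz clip. The sample-level window and hop below are inferred from the stated output shape rather than given explicitly, so treat them as assumptions:

```python
# 75% / 25% train/validation split of the 331200-sample training dataset.
train = int(331200 * 0.75)
val = 331200 - train
print(train, val)  # 248400 82800

# Spectrogram shape check: a 500 ms clip at 16 kHz with an assumed
# 256-sample window and 128-sample hop gives a 129 x 61 spectrogram.
n_samples = 8000           # 0.5 s * 16000 Hz
win, hop = 256, 128        # assumed window and hop in samples
freq_bins = win // 2 + 1   # one-sided FFT bins
frames = (n_samples - win) // hop + 1
print(freq_bins, frames)   # 129 61
```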
<details>
<summary>x2.png Details</summary>

### Visual Description
## Radar Charts: Loss Comparison for BAST-NSP and BAST-SP
### Overview
The image presents six radar charts comparing the performance of two models, BAST-NSP and BAST-SP, across three different loss functions: MSE loss, AD loss, and Hybrid loss. Each chart displays the angular error distribution using three different methods: Concatenation (blue), Addition (orange), and Subtraction (green). The radial axis represents the magnitude of the error in degrees.
### Components/Axes
* **Rows:**
* Top Row: BAST-NSP
* Bottom Row: BAST-SP
* **Columns:**
* Left Column: MSE loss
* Middle Column: AD loss
* Right Column: Hybrid loss
* **Radial Axis:** Error magnitude in degrees, ranging from 0° to 10°, with markers at 2°, 4°, 6°, 8°, and 10°.
* **Angular Axis:** Represents the angular distribution, marked with a compass rose in the top-left corner indicating 0°, 90°, and 180°.
* **Legend (bottom):**
* Blue: Concatenation
* Orange: Addition
* Green: Subtraction
### Detailed Analysis
**Top Row: BAST-NSP**
* **MSE loss:**
* Concatenation (blue): The error ranges approximately from 1° to 4°.
* Addition (orange): The error ranges approximately from 1° to 3°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
* **AD loss:**
* Concatenation (blue): The error ranges approximately from 1° to 5°.
* Addition (orange): The error ranges approximately from 1° to 3°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
* **Hybrid loss:**
* Concatenation (blue): The error ranges approximately from 1° to 6°.
* Addition (orange): The error ranges approximately from 1° to 3°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
**Bottom Row: BAST-SP**
* **MSE loss:**
* Concatenation (blue): The error ranges approximately from 1° to 3°.
* Addition (orange): The error ranges approximately from 1° to 10°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
* **AD loss:**
* Concatenation (blue): The error ranges approximately from 1° to 3°.
* Addition (orange): The error ranges approximately from 1° to 2°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
* **Hybrid loss:**
* Concatenation (blue): The error ranges approximately from 1° to 3°.
* Addition (orange): The error ranges approximately from 1° to 8°.
* Subtraction (green): The error ranges approximately from 0.5° to 2.5°.
### Key Observations
* Subtraction consistently shows the lowest error across all loss functions and both models.
* Addition exhibits the highest error in the MSE loss and Hybrid loss for BAST-SP.
* Concatenation generally shows intermediate error levels.
* BAST-SP with MSE loss and Hybrid loss using Addition shows significantly higher error compared to other configurations.
### Interpretation
The radar charts provide a visual comparison of the angular error distributions for different model configurations. The consistent low error of the Subtraction method suggests it is a more robust approach for these models and loss functions. The high error observed with the Addition method for BAST-SP, particularly with MSE and Hybrid loss, indicates a potential instability or incompatibility between this model and loss function combination. The choice of loss function and the method (Concatenation, Addition, Subtraction) significantly impacts the model's performance, highlighting the importance of careful selection and tuning.
</details>
Figure 2: The angular distance (AD) error of the proposed BAST in each azimuth with different loss functions and interaural integration methods.
<details>
<summary>x3.png Details</summary>

### Visual Description
Box plots of angular distance (°) versus integration method (Concat., Add., Sub.) for the left (blue) and right (orange) hemifields, under the MSE, AD, and hybrid losses; top row BAST-NSP, bottom row BAST-SP. Asterisks mark statistically significant left-right differences. For BAST-NSP, medians stay in roughly the 1°-3° range across all methods, with the AD loss giving the lowest values. For BAST-SP, the Add. method produces clearly higher angular distances, most strikingly under the AD loss in the left hemifield (median near 15°, y-axis extended to 40°), indicating that addition integration introduces more error and asymmetry than Concat. or Sub. in that configuration.
</details>
Figure 3: The AD error of the proposed BAST-NSP and BAST-SP in the left and right hemifield. The boxplot indicates quartiles of the metric distribution with respect to azimuths. The asterisk between two boxes indicates the statistical significance (p $<$ 0.05, paired t-test with FDR correction) between the left and right hemifield.
<details>
<summary>x4.png Details</summary>

### Visual Description
Box plots of mean square error versus integration method (Concat., Add., Sub.) for the left (blue) and right (orange) hemifields, under the MSE loss (left column) and hybrid loss (right column); top row BAST-NSP (y-axis 0 to 0.010), bottom row BAST-SP (y-axis 0 to 0.07). Asterisks mark statistically significant left-right differences. BAST-NSP errors remain below roughly 0.006 for all methods; BAST-SP errors are comparably low for Concat. and Sub. but rise sharply for Add. (medians near 0.015-0.03, with tails up to about 0.07), so BAST-NSP generally attains lower error than BAST-SP, especially for the Add. operation.
</details>
Figure 4: The MSE of the proposed BAST-NSP and BAST-SP in the left and right hemifield. The boxplot indicates quartiles of the metric distribution with respect to azimuths. The asterisk between two boxes indicates the statistical significance (p $<$ 0.05, paired t-test with FDR correction) between the left and right hemifield.
<details>
<summary>x5.png Details</summary>

### Visual Description
A 3 × 3 grid of 180 × 180 patch-to-patch attention heatmaps for the 1st, 2nd, and 3rd layers of TE-L (top), TE-R (middle), and TE-C (bottom); dark purple indicates low attention, light blue/green high attention, and arrows mark the progression across layers. In every stream the first layer shows a strong diagonal (self-attention) plus scattered off-diagonal points; by the third layer the diagonal fades and distinct vertical stripes dominate, indicating that all patches come to attend to the same few patches as information propagates through the layers.
</details>
Figure 5: An example of the attention matrices in the proposed model (i.e., BAST-NSP, hybrid loss and subtraction). The corresponding sound clip was randomly selected in the category of human speech with reverberation. For each layer, we present the patch-to-patch attention matrix (size: 180 $×$ 180) calculated by the rollout method in [56]. Note that we initialize the attention matrix at the first layer of TE-C by summing the attention matrices at the last layer of TE-L and TE-R.
5 Results
5.1 Overall Performance
The performance of the proposed BAST-NSP and BAST-SP models is compared with that of the NI-CNN, CNN, and FAVit models. In particular, for the NI-CNN model, two implementations corresponding to correlogram and spectrogram inputs have been considered, denoted NI-CNN and NI-CNN ∗, respectively. The results of the compared models with different combinations of binaural integration methods and loss functions are tabulated in Table 1. BAST-NSP achieves the best AD error of 1.29° and the best MSE of 0.001 when using subtraction binaural integration and the hybrid loss function. Compared to NI-CNN, BAST-NSP reduces the AD error by 65.4%, from 3.70° to 1.29°, and the MSE by 90.9%, from 0.011 to 0.001. In addition, BAST-NSP outperforms NI-CNN ∗ (AD=1.85°, MSE=0.031), although both models receive the same input. BAST-SP achieves AD=1.43° and MSE=0.002, surpassing the NI-CNN and NI-CNN ∗ models while remaining inferior to BAST-NSP. Overall, BAST-NSP outperforms all other tested models in binaural sound localization.
Two-stream models (NI-CNN, NI-CNN ∗, BAST-NSP, BAST-SP) outperform one-stream models (CNN, FAVit) in both Angular Distance (AD) and Mean Squared Error (MSE). BAST-NSP shows the best overall performance among the two-stream models, followed closely by BAST-SP. Both show large improvements in AD, especially with the hybrid loss function: BAST-NSP's best AD is 1.29° and BAST-SP's is 1.43°, compared to the best one-stream AD of 3.09° (CNN). Two-stream models also generally have lower MSE; BAST-NSP attains the lowest MSE at 0.001 (hybrid loss with subtraction), compared to the lowest one-stream MSE of 0.010 (CNN). One-stream models likewise perform worse under the AD loss than the best-performing two-stream models.
We further analyze the influence of the binaural integration method on BAST-NSP and BAST-SP performance, compared in terms of AD error. Specifically, when BAST-NSP is trained with the AD loss or the hybrid loss, integration by addition or subtraction improves performance over concatenation (AD loss: Add.=1.30°, Sub.=1.63°, Concat.=2.39°; hybrid loss: Add.=1.83°, Sub.=1.29°, Concat.=2.76°; see Table 1). With the MSE loss, BAST-NSP performs similarly across the three integration methods. In BAST-SP, addition integration causes a large increase in AD error over BAST-NSP (BAST-SP: 4.97°, BAST-NSP: 1.30°), indicating that adding identical left and right features makes azimuth prediction considerably harder for the model.
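As an illustration, the three interaural integration methods amount to simple operations on the per-stream patch embeddings. The following is a minimal sketch, not the exact BAST implementation; the function name and tensor shapes are assumptions:

```python
import numpy as np

def integrate(left, right, method="sub"):
    """Combine left/right stream embeddings of shape [n_patches, d_model].

    A minimal sketch of the three interaural integration methods
    compared in the paper; the real implementation may differ in detail.
    """
    if method == "concat":
        # Doubles the feature dimension along the last axis.
        return np.concatenate([left, right], axis=-1)
    if method == "add":
        # Identical left/right features are reinforced; differences are not isolated.
        return left + right
    if method == "sub":
        # Emphasizes interaural differences, analogous to ILD/ITD cues.
        return left - right
    raise ValueError(f"unknown integration method: {method}")
```

Subtraction directly exposes the interaural difference that carries the azimuth cue, which is consistent with it being the strongest method in Table 1.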
The three loss functions do not affect BAST-NSP and BAST-SP in the same way. In BAST-NSP, the AD loss achieves the lowest AD when using concatenation or addition, but the hybrid loss yields the lowest AD with subtraction. In BAST-SP, one can observe an interaction between loss function and binaural integration method, i.e., the best loss function depends on the integration method applied.
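The three losses can be sketched as follows. This is a hedged illustration, not the paper's training code: it assumes the azimuth is encoded as a 2D Cartesian point (cos θ, sin θ) on the unit circle, and the weighting `alpha` in the hybrid loss is an assumption:

```python
import numpy as np

def mse_loss(pred, target):
    # pred/target: [batch, 2] Cartesian encodings of azimuth
    # (assumed here to be (cos θ, sin θ) points on the unit circle).
    return np.mean((pred - target) ** 2)

def ad_loss(pred, target):
    # Mean angular distance (radians) between predicted and target directions.
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    target = target / np.linalg.norm(target, axis=-1, keepdims=True)
    cos_sim = np.clip(np.sum(pred * target, axis=-1), -1.0, 1.0)
    return np.mean(np.arccos(cos_sim))

def hybrid_loss(pred, target, alpha=0.5):
    # alpha is an assumed weighting; the paper's exact combination may differ.
    return alpha * mse_loss(pred, target) + (1.0 - alpha) * ad_loss(pred, target)
```

MSE penalizes the raw coordinate error while AD penalizes only the direction; the hybrid loss trades off the two, which is one plausible reading of why it pairs well with subtraction integration.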
5.2 Performance at different azimuths
To better understand the localization performance of the models at different azimuths, the test AD error for each azimuth is shown in Fig. 2. The test AD error of BAST-NSP is much smaller when the sound source is located closer to the interaural midline. This error pattern of BAST-NSP is similar to that of humans, highlighting the relevance of independent processing in the left and right streams. However, this pattern is not observed in BAST-SP.
<details>
<summary>x6.png Details</summary>

### Visual Description
(a) Left and right input spectrograms (frequency 0-6 kHz versus time) with similar time-frequency structure and energy concentrated in the low to mid frequencies. (b) Attention rollout heatmaps for TE-L and TE-R over the same axes: attention (warm colors high, cool colors low) is sparse and concentrated on a few matching time-frequency regions in both streams. (c) Attention rollout for TE-C, covering roughly 0-4 kHz and concentrated in the lower-frequency (bottom-left) region, showing a different pattern from TE-L and TE-R. The sparsity of the rollouts relative to the spectrograms indicates that the model selectively attends to specific frequency bands and time intervals.
</details>
Figure 6: Attention rollout corresponding to the spectrogram shown in Fig. 5. (a): Left and right spectrograms. (b): The left and right attention rollouts obtained from the 3rd layer of TE-L and TE-R. (c): The attention rollout obtained from the 3rd layer of TE-C.
5.3 Performance in left and right hemifield
To explore the symmetry of the model predictions, we further compare the evaluation metrics between the left and right hemifields. Figs. 3 and 4 show comparable model performance in the two hemifields, a result confirmed by paired t-tests with False Discovery Rate (FDR) correction for multiple comparisons. Specifically, the difference in AD error between the left and right hemifields is not significant in most conditions (corrected p $>$ 0.05, Fig. 3), supporting the consistent symmetry of the model predictions. However, a minor but significant difference (corrected p $<$ 0.05) is observed for BAST-NSP trained with the MSE loss and addition integration, and for BAST-SP trained with the AD loss and subtraction.
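The FDR step used above can be sketched in plain numpy. This assumes the raw p-values come from paired t-tests (e.g., scipy's `ttest_rel`, not shown) on the per-azimuth AD errors of the two hemifields; it is a sketch of the standard Benjamini-Hochberg procedure, not the paper's analysis code:

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg FDR correction.

    Takes raw p-values (e.g., from paired t-tests comparing left- vs.
    right-hemifield AD errors across conditions) and returns
    FDR-corrected p-values in the original order.
    """
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    # Scale each sorted p-value by n / rank (ranks are 1-based).
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest rank downward.
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    corrected = np.empty(n)
    corrected[order] = np.clip(scaled, 0.0, 1.0)
    return corrected
```

A corrected p-value below 0.05 then marks a condition as significantly asymmetric, matching the asterisks in Figs. 3 and 4.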
5.4 Performance in different environments
We conduct two additional experiments to illustrate the generalization of the proposed model by training in one listening environment and testing in each environment separately, i.e., AE and RV. This analysis is conducted on the best-performing model, i.e., BAST-NSP with hybrid loss and subtraction integration. As shown in Table 3, the model trained on data from both the AE and RV environments achieves the best test results compared to the models trained on data from only one environment.
Table 3: The performance of the proposed BAST-NSP model in different listening environments. AE and RV indicate the anechoic and reverberation environments respectively.
| Training Environment | Testing Environment | AD | MSE |
| --- | --- | --- | --- |
| AE | AE | 1.14° | 0.001 |
| AE | RV | 8.66° | 0.027 |
| RV | AE | 16.70° | 0.078 |
| RV | RV | 1.65° | 0.002 |
| AE+RV | AE | 1.10° | 0.001 |
| AE+RV | RV | 1.48° | 0.001 |
5.5 Attention Analysis
To interpret the localization process, we utilize Attention Rollout [57] to visualize the attention maps of the proposed model (BAST-NSP with the subtraction method and hybrid loss). Rollout computes an aggregate attention matrix by recursively multiplying the attention matrices along the forward propagation path. [56] enhanced this method by adding an identity matrix before the multiplication to account for the residual connection around the multi-head self-attention (MSA). Because of the interaural integration layer in BAST-NSP, we initialize the attention matrix of TE-C by summing the attention weights from both sides, regardless of the integration method.
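The rollout computation described above can be sketched as follows; head-averaging and the exact 0.5/0.5 residual weighting are assumptions based on the usual formulation of [56]:

```python
import numpy as np

def attention_rollout(attn_layers):
    """Attention rollout (sketch of the method in [57] as refined by [56]).

    attn_layers: list of head-averaged, row-stochastic attention matrices of
    shape [n_patches, n_patches], ordered from the first to the last layer.
    Each matrix is mixed with the identity to model the residual connection
    around the MSA, re-normalized, and multiplied along the forward path.
    """
    n = attn_layers[0].shape[0]
    rollout = np.eye(n)
    for a in attn_layers:
        a = 0.5 * a + 0.5 * np.eye(n)          # residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # keep rows row-stochastic
        rollout = a @ rollout                  # accumulate along depth
    return rollout

# For TE-C in BAST-NSP, the rollout would instead be initialized with the
# sum of the final TE-L and TE-R rollout matrices, per the text above.
```

The product of row-stochastic matrices stays row-stochastic, so each row of the result still distributes one unit of attention over the input patches.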
Fig. 5 shows the patch-to-patch attention matrices (size: 180 $×$ 180) for a randomly selected spectrogram. In the first layer of each Transformer, most patches are self-focused and attend to a few scattered patches. In the last layer, however, all patches assign nearly identical attention weights to a few specific patches. The attention rollout heatmaps with respect to the left and right spectrograms are depicted in Fig. 6. Although BAST-NSP does not share parameters between the two streams, the model still focuses most of its attention on similar regions on both sides, see Fig. 6 (b). The final attention map, Fig. 6 (c), shows that the model further processes the attention after the integration layer and boosts the attention weights in the bottom-left region.
6 Conclusion
In this paper, a novel Binaural Audio Spectrogram Transformer (BAST) for sound source localization is proposed. The obtained results show that this pure attention-based model significantly improves azimuth acuity compared to the CNN, FAVit, and NI-CNN models. In particular, subtraction interaural integration combined with the hybrid loss is the best training configuration for BAST. Additionally, we found that the performance and the statistical significance of left-right hemifield differences vary with the combination of training settings. In conclusion, this work contributes a convolution-free model of real-life sound localization. The data and implementation of our BAST model are available at https://github.com/ShengKuangCN/BAST.
References
- [1] D. W. Batteau, The role of the pinna in human localization, Proceedings of the Royal Society of London. Series B. Biological Sciences 168 (1011) (1967) 158–180.
- [2] J. O. Pickles, Auditory pathways: anatomy and physiology, Handbook of clinical neurology 129 (2015) 3–25.
- [3] K. van der Heijden, J. P. Rauschecker, B. de Gelder, E. Formisano, Cortical mechanisms of spatial hearing, Nature Reviews Neuroscience 20 (10) (2019) 609–623.
- [4] B. Grothe, M. Pecka, D. McAlpine, Mechanisms of sound localization in mammals, Physiological reviews 90 (3) (2010) 983–1012.
- [5] J. Blauert, S. Hearing, The psychophysics of human sound localization, in: Spatial Hearing, MIT Press, 1997.
- [6] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, nature 521 (7553) (2015) 436–444.
- [7] X. Zhang, D. Wang, Deep learning based binaural speech separation in reverberant environments, IEEE/ACM transactions on audio, speech, and language processing 25 (5) (2017) 1075–1084.
- [8] S. Y. Lee, J. Chang, S. Lee, Deep learning-based method for multiple sound source localization with high resolution and accuracy, Mechanical Systems and Signal Processing 161 (2021) 107959.
- [9] Y. Gong, Y.-A. Chung, J. Glass, Ast: Audio spectrogram transformer, arXiv preprint arXiv:2104.01778 (2021).
- [10] T.-D. Truong, C. N. Duong, H. A. Pham, B. Raj, N. Le, K. Luu, et al., The right to talk: An audio-visual transformer approach, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1105–1114.
- [11] L. Perotin, R. Serizel, E. Vincent, A. Guérin, Crnn-based joint azimuth and elevation localization with the ambisonics intensity vector, in: 2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC), IEEE, 2018, pp. 241–245.
- [12] T. Yoshioka, S. Karita, T. Nakatani, Far-field speech recognition using cnn-dnn-hmm with convolution in time, in: 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, 2015, pp. 4360–4364.
- [13] S. Park, Y. Jeong, H. S. Kim, Multiresolution cnn for reverberant speech recognition, in: 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), IEEE, 2017, pp. 1–4.
- [14] Y. Xu, S. Afshar, R. K. Singh, R. Wang, A. van Schaik, T. J. Hamilton, A binaural sound localization system using deep convolutional neural networks, in: 2019 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2019, pp. 1–5.
- [15] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [16] N. Yalta, K. Nakadai, T. Ogata, Sound source localization using deep learning models, Journal of Robotics and Mechatronics 29 (1) (2017) 37–48.
- [17] K. van der Heijden, S. Mehrkanoon, Goal-driven, neurobiological-inspired convolutional neural network models of human spatial hearing, Neurocomputing 470 (2022) 432–442.
- [18] K. van der Heijden, S. Mehrkanoon, Modelling human sound localization with deep neural networks., in: ESANN, 2020, pp. 521–526.
- [19] A. Francl, J. H. McDermott, Deep neural network models of sound localization reveal how perception is adapted to real-world environments, Nature Human Behaviour 6 (1) (2022) 111–133.
- [20] J. Mathews, J. Braasch, Multiple sound-source localization and identification with a spherical microphone array and lavalier microphone data, The Journal of the Acoustical Society of America 143 (3_Supplement) (2018) 1825–1825.
- [21] L. Durak, O. Arikan, Short-time fourier transform: two fundamental properties and an optimal implementation, IEEE Transactions on Signal Processing 51 (5) (2003) 1231–1242.
- [22] N. Ma, J. A. Gonzalez, G. J. Brown, Robust binaural localization of a target sound source by combining spectral source models and deep neural networks, IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (11) (2018) 2122–2131.
- [23] P.-A. Grumiaux, S. Kitić, L. Girin, A. Guérin, A survey of sound source localization with deep learning methods, The Journal of the Acoustical Society of America 152 (1) (2022) 107–151.
- [24] P. Gerstoft, C. F. Mecklenbräuker, A. Xenaki, S. Nannuru, Multisnapshot sparse bayesian learning for doa, IEEE Signal Processing Letters 23 (10) (2016) 1469–1473.
- [25] S. Nannuru, A. Koochakzadeh, K. L. Gemba, P. Pal, P. Gerstoft, Sparse bayesian learning for beamforming using sparse linear arrays, The Journal of the Acoustical Society of America 144 (5) (2018) 2719–2729.
- [26] G. Ping, E. Fernandez-Grande, P. Gerstoft, Z. Chu, Three-dimensional source localization using sparse bayesian learning on a spherical microphone array, The Journal of the Acoustical Society of America 147 (6) (2020) 3895–3904.
- [27] A. Xenaki, J. Bünsow Boldt, M. Græsbøll Christensen, Sound source localization and speech enhancement with sparse bayesian learning beamforming, The Journal of the Acoustical Society of America 143 (6) (2018) 3912–3921.
- [28] S. Chakrabarty, E. A. Habets, Multi-speaker doa estimation using deep convolutional networks trained with noise signals, IEEE Journal of Selected Topics in Signal Processing 13 (1) (2019) 8–21.
- [29] C. Pang, H. Liu, X. Li, Multitask learning of time-frequency cnn for sound source localization, IEEE Access 7 (2019) 40725–40737. doi:10.1109/ACCESS.2019.2905617.
- [30] R. Varzandeh, K. Adiloğlu, S. Doclo, V. Hohmann, Exploiting periodicity features for joint detection and doa estimation of speech sources using convolutional neural networks, in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 566–570. doi:10.1109/ICASSP40776.2020.9054754.
- [31] P. Vecchiotti, N. Ma, S. Squartini, G. J. Brown, End-to-end binaural sound localisation from the raw waveform, in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 451–455. doi:10.1109/ICASSP.2019.8683732.
- [32] A. Fahim, P. N. Samarasinghe, T. D. Abhayapala, Multi-source DOA estimation through pattern recognition of the modal coherence of a reverberant soundfield, IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2020) 605–618. doi:10.1109/TASLP.2019.2960734.
- [33] D. Krause, A. Politis, K. Kowalczyk, Comparison of convolution types in CNN-based feature extraction for sound source localization, in: 2020 28th European Signal Processing Conference (EUSIPCO), 2021, pp. 820–824. doi:10.23919/Eusipco47968.2020.9287344.
- [34] D. Diaz-Guerra, A. Miguel, J. R. Beltran, Robust sound source tracking using SRP-PHAT and 3D convolutional neural networks, IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021) 300–311. doi:10.1109/TASLP.2020.3040031.
- [35] X. Wu, Z. Wu, L. Ju, S. Wang, Binaural audio-visual localization, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, 2021, pp. 2961–2968.
- [36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, Advances in neural information processing systems 30 (2017).
- [37] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).
- [38] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929 (2020).
- [39] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu, et al., A survey on vision transformer, IEEE Transactions on Pattern Analysis and Machine Intelligence (2022).
- [40] Z. Zhang, S. Xu, S. Zhang, T. Qiao, S. Cao, Attention based convolutional recurrent neural network for environmental sound classification, Neurocomputing 453 (2021) 896–903.
- [41] Y.-B. Lin, Y.-C. F. Wang, Audiovisual transformer with instance attention for audio-visual event localization, in: Proceedings of the Asian Conference on Computer Vision, 2020.
- [42] Q. Kong, Y. Xu, W. Wang, M. D. Plumbley, Sound event detection of weakly labelled data with cnn-transformer and automatic threshold optimization, IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2020) 2450–2460.
- [43] C. Schymura, T. Ochiai, M. Delcroix, K. Kinoshita, T. Nakatani, S. Araki, D. Kolossa, Exploiting attention-based sequence-to-sequence architectures for sound event localization, in: 2020 28th European Signal Processing Conference (EUSIPCO), 2021, pp. 231–235. doi:10.23919/Eusipco47968.2020.9287224.
- [44] N. Yalta, Y. Sumiyoshi, Y. Kawaguchi, The hitachi dcase 2021 task 3 system: Handling directive interference with self attention layers, Tech. rep., Technical Report, DCASE 2021 Challenge (2021).
- [45] C. Schymura, B. Bönninghoff, T. Ochiai, M. Delcroix, K. Kinoshita, T. Nakatani, S. Araki, D. Kolossa, Pilot: Introducing transformers for probabilistic sound event localization, arXiv preprint arXiv:2106.03903 (2021).
- [46] Y. Xin, D. Yang, Y. Zou, Audio pyramid transformer with domain adaption for weakly supervised sound event detection and audio classification., in: INTERSPEECH, 2022, pp. 1546–1550.
- [47] K. J. Piczak, ESC: Dataset for environmental sound classification, in: Proceedings of the 23rd ACM International Conference on Multimedia, 2015, pp. 1015–1018.
- [48] P. Warden, Speech commands: A dataset for limited-vocabulary speech recognition, arXiv preprint arXiv:1804.03209 (2018).
- [49] W. Ariyanti, K.-C. Liu, K.-Y. Chen, et al., Abnormal respiratory sound identification using audio-spectrogram vision transformer, in: 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), IEEE, 2023, pp. 1–4.
- [50] Y.-B. Lin, Y.-L. Sung, J. Lei, M. Bansal, G. Bertasius, Vision transformers are parameter-efficient audio-visual learners, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 2299–2309.
- [51] K. Chen, X. Du, B. Zhu, Z. Ma, T. Berg-Kirkpatrick, S. Dubnov, HTS-AT: A hierarchical token-semantic audio transformer for sound classification and detection, in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 646–650. doi:10.1109/ICASSP43922.2022.9746312.
- [52] W. Phokhinanan, N. Obin, S. Argentieri, Binaural sound localization in noisy environments using frequency-based audio vision transformer (FAVIT), in: INTERSPEECH, ISCA, 2023, pp. 3704–3708.
- [53] S. Whitaker, A. Barnard, G. D. Anderson, T. C. Havens, Through-ice acoustic source tracking using vision transformers with ordinal classification, Sensors 22 (13) (2022) 4703.
- [54] X. Xiao, S. Zhao, X. Zhong, D. L. Jones, E. S. Chng, H. Li, A learning-based approach to direction of arrival estimation in noisy and reverberant environments, in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2015, pp. 2814–2818.
- [55] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
- [56] H. Chefer, S. Gur, L. Wolf, Transformer interpretability beyond attention visualization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 782–791.
- [57] S. Abnar, W. Zuidema, Quantifying attention flow in transformers, arXiv preprint arXiv:2005.00928 (2020).