Learning the world from its words: Anchor-agnostic Transformers for Fingerprint-based Indoor Localization

Abstract
In this paper, we propose Anchor-agnostic Transformers (AaTs) that exploit the attention mechanism for Received Signal Strength (RSS) based fingerprinting localization. In real-world applications, the RSS modality is notoriously sensitive to dynamic environments. Since most machine learning algorithms applied to the RSS modality lack any attention mechanism, they capture only superficial representations rather than the subtle but distinct ones that characterize specific locations, leading to significant degradation in the testing phase. In contrast, AaTs attend exclusively to the relevant anchors in every RSS sequence to capture these subtle but distinct representations. This also allows the model to discard redundant clues introduced by noisy ambient conditions, thereby achieving better fingerprinting localization accuracy. Moreover, explicitly resolving collapse problems at the feature level (i.e., non-informative or homogeneous features) further invigorates the self-attention process, so that subtle but distinct representations of specific locations are captured with ease. To this end, we augment our model with two sub-constraints, namely covariance and variance losses, which are jointly optimized with the main task during representation learning in a novel multi-task manner. To evaluate AaTs, we compare our models with state-of-the-art (SoTA) methods on three benchmark indoor localization datasets. The experimental results confirm our hypothesis and show that our proposed models achieve substantially higher accuracy.
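The abstract does not spell out the exact formulation of the covariance and variance sub-constraints; the following is a minimal numpy sketch of feature-level anti-collapse regularizers in the spirit described. The function names, the target standard deviation `gamma`, and the `eps` stabilizer are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def variance_loss(z, gamma=1.0, eps=1e-4):
    """Hinge on the per-dimension batch std: penalizes embedding
    dimensions whose standard deviation falls below gamma, which
    discourages homogeneous (collapsed) features."""
    std = np.sqrt(z.var(axis=0) + eps)
    return float(np.mean(np.maximum(0.0, gamma - std)))

def covariance_loss(z):
    """Sum of squared off-diagonal covariance entries (scaled by the
    feature dimension), pushing embedding dimensions to be
    decorrelated so each carries non-redundant information."""
    n, d = z.shape
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2) / d)
```

In a multi-task setup such as the one the abstract outlines, these two terms would be added, with suitable weights, to the main localization loss during representation learning; the weighting scheme here is likewise an assumption.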
Original language | English |
---|---|
Title of host publication | 2023 IEEE International Conference on Pervasive Computing and Communications (PerCom) |
Place of Publication | Piscataway, NJ |
Publisher | IEEE |
Pages | 150-159 |
Number of pages | 10 |
ISBN (Electronic) | 978-1-6654-5378-3 |
ISBN (Print) | 978-1-6654-5379-0 |
DOIs | |
Publication status | Published - 18 Apr 2023 |
Event | 21st International Conference on Pervasive Computing and Communications, PerCom 2023 - Georgia State University, Atlanta, United States. Duration: 13 Mar 2023 → 17 Mar 2023. Conference number: 21. https://www.percom.org/ |
Publication series
Name | IEEE International Conference on Pervasive Computing and Communications (PerCom) |
---|---|
Publisher | IEEE |
Volume | 2023 |
ISSN (Print) | 2474-2503 |
ISSN (Electronic) | 2474-249X |
Conference
Conference | 21st International Conference on Pervasive Computing and Communications, PerCom 2023 |
---|---|
Abbreviated title | PerCom 2023 |
Country/Territory | United States |
City | Atlanta |
Period | 13/03/23 → 17/03/23 |
Internet address | https://www.percom.org/ |
Keywords
- Transformer
- Self-attention
- CNNs
- indoor localization
- Indoor positioning
- Deep Learning (DL)
Prizes
- Best Paper Nominee (Prize): Nguyen, S. (Recipient), Le, D. V. (Recipient) & Havinga, P. J. M. (Recipient), 13 Mar 2023