Abstract
Quick and easy access to performance data during matches and training sessions is important for both players and coaches. While many video tagging systems are available, they require manual effort. In this project, we use Inertial Measurement Unit (IMU) sensors strapped to the wrists of volleyball players to capture motion data, and we use machine learning techniques to model their action and non-action events during matches and training sessions.
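As an illustration of the pipeline described above, the sketch below segments wrist-worn IMU streams into fixed-length windows, extracts simple statistical features, and fits a binary action / non-action classifier. The window length, hop size, feature set and choice of classifier are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative sketch (not the configuration reported in the paper): segment
# wrist-worn IMU streams into fixed-length windows, extract simple statistical
# features per window, and fit a binary action / non-action classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=100, hop=50):
    """signal: (n_samples, n_channels) array, e.g. tri-axial accelerometer data."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        # Basic per-channel statistics as window-level features.
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     w.min(axis=0), w.max(axis=0)]))
    return np.asarray(feats)

def train_action_detector(X, y):
    """Fit a classifier on windowed features; y holds action / non-action labels."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```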
Analysis of the results suggests that all sensors in the IMU (i.e. magnetometer, accelerometer, barometer and gyroscope) contribute unique information to the classification of volleyball-specific actions. We demonstrate that while the accelerometer feature set provides the best Unweighted Average Recall (UAR) overall, decision fusion of the accelerometer with the magnetometer slightly improves the UAR from 85.86% to 86.9%. Interestingly, the non-dominant hand is also shown to yield a higher UAR than the dominant hand; this effect is even more marked with decision fusion.
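For reference, UAR is the unweighted mean of the per-class recalls; the sketch below computes it with scikit-learn and shows one plausible decision-fusion rule, averaging the class probabilities of the accelerometer and magnetometer classifiers. The abstract does not specify the fusion rule actually used, so probability averaging is an assumption.

```python
# UAR is the unweighted mean of per-class recalls, i.e. macro-averaged recall.
# The probability-averaging fusion below is an assumption; the abstract does
# not state which decision-fusion rule was actually used.
import numpy as np
from sklearn.metrics import recall_score

def unweighted_average_recall(y_true, y_pred):
    # average="macro" computes recall per class and takes the unweighted mean.
    return recall_score(y_true, y_pred, average="macro")

def fuse_decisions(proba_acc, proba_mag):
    """Average class probabilities from the accelerometer and magnetometer
    classifiers and pick the most likely class per window."""
    return np.argmax((proba_acc + proba_mag) / 2.0, axis=1)

# Example usage (clf_acc, clf_mag, X_acc, X_mag, y_test are hypothetical):
# y_fused = fuse_decisions(clf_acc.predict_proba(X_acc), clf_mag.predict_proba(X_mag))
# print(unweighted_average_recall(y_test, y_fused))
```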
Apart from the machine learning models, the project proposes a modular architecture for a system that automatically supplements video recordings by detecting events of interest in volleyball matches and training sessions and provides tailored, interactive multi-modal feedback through an HTML5/JavaScript application. A proof-of-concept prototype based on this architecture is also developed.
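A minimal sketch of the hand-off that such an architecture implies between the event-detection module and the HTML5/JavaScript feedback application: detected events are exported as time-stamped JSON so the web application can align them with the video recording. All field names and values below are hypothetical and not taken from the prototype.

```python
# Hypothetical sketch of the detection-to-feedback interface: detected events
# are exported as time-stamped JSON that the web app can align with the video.
# All field names and values are invented for illustration.
import json

detected_events = [
    {"player": "P01", "action": "spike", "start_s": 12.4, "end_s": 13.1},
    {"player": "P01", "action": "block", "start_s": 45.0, "end_s": 45.6},
]

with open("session_events.json", "w") as f:
    json.dump({"session": "training_session_01", "events": detected_events}, f, indent=2)
```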
Original language | English |
---|---|
Title of host publication | eNTERFACE’19, 15th International Summer Workshop on Multimodal Interfaces
Place of Publication | Ankara |
Publisher | Bilkent University |
Number of pages | 9 |
Publication status | Published - 1 Jan 2020 |
Event | eNTERFACE'19: 15th International Summer Workshop on Multimodal Interfaces - Bilkent University, Ankara, Turkey
Duration | 8 Jul 2019 → 2 Aug 2019
Conference number | 15
Conference
Conference | eNTERFACE'19 |
---|---|
Abbreviated title | eNTERFACE'19 |
Country/Territory | Turkey |
City | Ankara |
Period | 8/07/19 → 2/08/19 |