Abstract
Following the exposition of quantitative, identifiable idiosyncrasy in violin performance via neural network classification, we demonstrate that smartwatch-based synchronous audio-gesture logging facilitates interpretable practice feedback in violin performance. The novelty of our approach is twofold: we exploit convenient multimodal data capture using a consumer smartwatch, recording wrist-movement and audio data in parallel, and we prioritise delivering performance insights in their most interpretable form, quantifying tonal and temporal performance trends. Using such accessible hardware to surface meaningful, approachable performance insights maximises the feasibility of our approach for real-world teaching and learning environments. The presented analyses draw upon a primary dataset compiled from nine violinists executing defined performance exercises; recordings are segmented via note onset detection prior to analysis. Identified trends include a cross-participant tendency to 'rush' up-bows relative to down-bows, along with lower temporal and tonal consistency when bowing spiccato versus legato.
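The abstract mentions segmenting recordings via note onset detection. As a minimal illustrative sketch only (the paper's actual onset-detection method is not specified in this record), a short-time-energy approach flags frames whose energy jumps sharply relative to the preceding frame; the function name, frame size, and threshold below are all hypothetical:

```python
def detect_onsets(samples, frame_size=256, threshold=2.0):
    """Return indices of frames whose short-time energy exceeds
    `threshold` times the previous frame's energy (a crude onset cue).

    Hypothetical sketch; not the method used in the paper.
    """
    # Short-time energy per non-overlapping frame.
    energies = [
        sum(x * x for x in samples[i:i + frame_size])
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]
    onsets = []
    for k in range(1, len(energies)):
        prev = max(energies[k - 1], 1e-9)  # avoid division-by-zero logic
        if energies[k] > threshold * prev:
            onsets.append(k)
    return onsets

# Toy signal: silence, a note, silence, a second note.
signal = [0.0] * 512 + [0.5] * 512 + [0.0] * 512 + [0.5] * 512
print(detect_onsets(signal))  # frame indices where energy jumps
```

A real system would operate on overlapping windowed frames of sampled audio and typically use spectral flux rather than raw energy, but the segmentation principle (cut at detected onsets, then analyse each note) is the same.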
| Original language | English |
|---|---|
| Pages (from-to) | 283-299 |
| Number of pages | 16 |
| Journal | Transactions of the International Society for Music Information Retrieval |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published (VoR) - 4 Sept 2025 |
Funding
Internal PhD funding