Abstract
Musicians, audio engineers and producers often use common timbral adjectives to describe musical signals and transformations. However, the subjective nature of these terms, and their variability with respect to musical context, often leads to inconsistencies in their definition. In this study, a model is proposed for controlling an equaliser by navigating clusters of datapoints, which represent grouped parameter settings with the same timbral description. The associated interface allows users to identify the nearest cluster to their current parameter setting and recommends changes based on its relationship to a cluster centroid. To do this, we apply dimensionality reduction to a dataset of equaliser curves described as warm and bright using a stacked autoencoder, then group the entries using an agglomerative clustering algorithm with a coherence-based distance criterion. To test the efficacy of the system, we implement listening tests and show that subjects are able to match datapoints to their respective sub-representations with 93.75% mean accuracy.
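The recommendation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 2-D points stand in for autoencoder-reduced equaliser curves, the cluster labels and coordinates are invented for illustration, and the `step` fraction is a hypothetical parameter. The idea shown is only the final stage of the pipeline: find the centroid of the nearest timbral cluster to the user's current setting and recommend a move toward it.

```python
import math

# Hypothetical clusters of reduced equaliser-curve datapoints,
# grouped by timbral description (values are illustrative only).
clusters = {
    "warm":   [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25)],
    "bright": [(0.90, 0.80), (0.80, 0.90), (0.85, 0.75)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_cluster(setting, clusters):
    """Return (label, centroid) of the cluster closest to `setting`."""
    cents = {lab: centroid(pts) for lab, pts in clusters.items()}
    lab = min(cents, key=lambda l: math.dist(setting, cents[l]))
    return lab, cents[lab]

def recommend(setting, clusters, step=0.5):
    """Move the current setting a fraction `step` of the way
    toward the nearest cluster centroid."""
    lab, cent = nearest_cluster(setting, clusters)
    move = tuple(s + step * (c - s) for s, c in zip(setting, cent))
    return lab, move

label, new_setting = recommend((0.3, 0.3), clusters)
```

In the paper's system the clusters themselves come from agglomerative clustering of the stacked-autoencoder embeddings; here they are given directly to keep the sketch self-contained.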
Original language | English |
---|---|
Pages | 56-61 |
Number of pages | 6 |
Publication status | Published (VoR) - 2017 |