Discriminador Voz/Música Baseado na Estimação de Múltiplas Freqüências Fundamentais
(Speech/Music Discriminator Based on Multiple Fundamental Frequencies Estimation)
Jayme Garcia Arnal Barbedo (firstname.lastname@example.org), Amauri Lopes (email@example.com)
Universidade Estadual de Campinas
This paper appears in: Revista IEEE América Latina
Publication Date: Sept. 2007
Volume: 5, Issue: 5
This paper introduces a new technique to discriminate between music and speech. The strategy is based on the concept of multiple fundamental frequency estimation, which provides the elements for extracting three features from the signal. The discrimination between speech and music is obtained by properly combining these features. The small number of features, together with the fact that no training phase is necessary, makes this strategy very robust across a wide range of practical conditions. The performance of the technique is analyzed with respect to the precision of the speech/music separation, robustness in the face of extreme conditions, and computational effort. A comparison with previous works reveals excellent performance in all respects.
Keywords: speech/music discrimination, multiple fundamental frequencies, MIDI scale
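The abstract does not spell out the three features, but the "MIDI scale" keyword suggests one common idea: map estimated fundamental frequencies onto MIDI note numbers and measure how closely they snap to the equal-tempered scale, since tuned music tends to sit near integer MIDI notes while speech pitch drifts freely. The sketch below is a hypothetical illustration of that idea only, not the paper's method; the autocorrelation pitch tracker and the deviation feature are assumptions for demonstration.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Crude single-F0 estimate from the autocorrelation peak (an illustrative
    stand-in; the paper uses a multiple-F0 estimation front end)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def midi_deviation(f0s):
    """Hypothetical feature: mean distance (in semitones) of each F0 from the
    nearest MIDI note. Small for tuned music, larger for freely drifting pitch."""
    midi = 69.0 + 12.0 * np.log2(np.asarray(f0s) / 440.0)
    return float(np.mean(np.abs(midi - np.round(midi))))

def synth(freqs, sr=44100, n=2048):
    """One pure-sine frame per frequency (toy signals for the demo)."""
    t = np.arange(n) / sr
    return [np.sin(2 * np.pi * f * t) for f in freqs]

sr = 44100
# "Music": frames at exact equal-tempered note frequencies (C4, D4, E4, F4, G4).
notes = 440.0 * 2.0 ** ((np.array([60, 62, 64, 65, 67]) - 69) / 12.0)
music_dev = midi_deviation([estimate_f0(fr, sr) for fr in synth(notes)])
# "Speech": a pitch glide that ignores the musical scale.
glide = np.linspace(150.0, 290.0, 20)
speech_dev = midi_deviation([estimate_f0(fr, sr) for fr in synth(glide)])
print(music_dev, speech_dev)  # the glide strays much further from the scale
```

On these toy signals the note frames stay within a small fraction of a semitone of integer MIDI numbers, while the glide averages roughly a quarter semitone of deviation, so a simple threshold separates the two. A real discriminator, as the abstract indicates, would combine several such features computed from multiple simultaneous F0 estimates.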