PASTIC DSpace Repository

Efficient Music Fingerprinting For Music Speech Segregation


dc.contributor.author Qazi, Khurram Ashfaq
dc.date.accessioned 2019-07-04T06:51:48Z
dc.date.accessioned 2020-04-11T15:36:02Z
dc.date.available 2020-04-11T15:36:02Z
dc.date.issued 2018
dc.identifier.govdoc 17378
dc.identifier.uri http://142.54.178.187:9060/xmlui/handle/123456789/5080
dc.description.abstract Over the past decade, many music fingerprinting and speech segregation algorithms have emerged. Music-speech segregation involves music identification followed by speech segregation, a task that becomes challenging in the presence of background noise and noisy samples. Rapid development in multimedia content analysis and music information retrieval applications has increased the emphasis on music fingerprinting algorithms, yet noise degrades the efficiency and accuracy of audio information retrieval. This thesis presents an in-depth analysis of music fingerprinting and speech segregation algorithms and proposes a novel music fingerprinting algorithm that enables efficient speech segregation, with fingerprinting performed directly on noisy audio samples. In the proposed system, noise is first removed from the audio signal using a layered separation model based on a recurrent neural network. Music fingerprinting is then performed on pitch-based acoustic features classified with a distributed dictionary-based feature learning model. After noise removal, the classified music is processed for speech segregation: speech is segregated using vocal-based acoustic features, which are classified with an improved dictionary-based Fisher algorithm over structure-based classes. Systematic evaluation on three standard datasets (TIMIT, MIR-1K, and MusicBrainz) produces competitive results, and qualitative and quantitative analysis shows that the proposed system outperforms previous systems. en_US
dc.description.sponsorship Higher Education Commission, Pakistan en_US
dc.language.iso en_US en_US
dc.publisher University of Engineering & Technology, Taxila. en_US
dc.subject Software Engineering en_US
dc.title Efficient Music Fingerprinting For Music Speech Segregation en_US
dc.type Thesis en_US
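The abstract outlines a three-stage pipeline: denoising, music fingerprinting on pitch-based features, and speech segregation. The thesis's actual models (the RNN layered separation model and the dictionary-based classifiers) are not reproduced here; the Python sketch below only illustrates the overall flow under loudly labeled stand-in assumptions: spectral subtraction in place of the RNN separation model, autocorrelation pitch estimates in place of the pitch-based acoustic features, and nearest-neighbour lookup in place of the distributed dictionary-based learning model. Every function name and parameter is illustrative, not the author's implementation.

import numpy as np

def denoise(signal, frame=1024, hop=512):
    # Stand-in for the thesis's RNN layered separation model:
    # simple spectral subtraction, using the median magnitude
    # spectrum as a rough noise estimate (illustrative only).
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    spectra = np.array([np.fft.rfft(f) for f in frames])
    noise_mag = np.median(np.abs(spectra), axis=0)
    cleaned = np.maximum(np.abs(spectra) - noise_mag, 0.0)
    phases = np.angle(spectra)
    out = np.zeros(len(signal))
    for k, spec in enumerate(cleaned * np.exp(1j * phases)):
        out[k * hop:k * hop + frame] += np.fft.irfft(spec, frame)
    return out

def pitch_features(signal, sr=16000, frame=1024, hop=512):
    # Crude per-frame pitch estimate via autocorrelation; stands in
    # for the pitch-based acoustic features used for fingerprinting.
    pitches = []
    for i in range(0, len(signal) - frame, hop):
        f = signal[i:i + frame]
        ac = np.correlate(f, f, mode="full")[frame - 1:]
        lag_min = sr // 500                 # search 50-500 Hz only
        lag = lag_min + np.argmax(ac[lag_min:sr // 50])
        pitches.append(sr / lag)
    return np.array(pitches)

def match_fingerprint(features, database):
    # Nearest-neighbour lookup over stored fingerprints; stands in
    # for the distributed dictionary-based feature learning model.
    def dist(a, b):
        n = min(len(a), len(b))
        return np.linalg.norm(a[:n] - b[:n]) / n
    return min(database, key=lambda name: dist(features, database[name]))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr * 2) / sr
    music = np.sin(2 * np.pi * 220 * t)            # toy "music" tone
    noisy = music + 0.3 * np.random.randn(len(t))  # add background noise
    clean = denoise(noisy)                         # stage 1: denoising
    fp = pitch_features(clean, sr)                 # stage 2: fingerprinting
    db = {"track_A": pitch_features(music, sr)}    # toy fingerprint store
    print("best match:", match_fingerprint(fp, db))

The speech segregation stage (vocal-based features classified with the improved dictionary-based Fisher algorithm) would follow the fingerprint match in the same fashion; it is omitted here because the abstract gives no further detail to ground a sketch.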

