Please use this identifier to cite or link to this item: https://hdl.handle.net/11147/2018
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tüfekçi, Zekeriya | -
dc.contributor.author | Gürbüz, Sabri | -
dc.date.accessioned | 2016-07-29T13:48:40Z |
dc.date.available | 2016-07-29T13:48:40Z |
dc.date.issued | 2005 |
dc.identifier.citation | Tüfekçi, Z., and Gürbüz, S. (2005, March 18-23). Noise robust speaker verification using mel-frequency discrete wavelet coefficients and parallel model compensation. Paper presented at 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '05. doi:10.1109/ICASSP.2005.1415199 | en_US
dc.identifier.issn | 1520-6149 | -
dc.identifier.uri | http://doi.org/10.1109/ICASSP.2005.1415199 |
dc.identifier.uri | http://hdl.handle.net/11147/2018 |
dc.description.abstract | Interfering noise severely degrades the performance of a speaker verification system. The Parallel Model Combination (PMC) technique is one of the most efficient techniques for dealing with such noise. Another method is to use features local in the frequency domain. Recently, Mel-Frequency Discrete Wavelet Coefficients (MFDWCs) [1, 2] were proposed as speech features local in the frequency domain. In this paper, we discuss using PMC along with MFDWC features to take advantage of both noise compensation and local features (MFDWCs) to decrease the effect of noise on speaker verification performance. We evaluate the performance of MFDWCs using the NIST 1998 speaker recognition and NOISEX-92 databases for various noise types and noise levels. We also compare their performance against MFCCs, with both using PMC to deal with additive noise. The experimental results show significant performance improvements for MFDWCs versus MFCCs after compensating the Gaussian Mixture Models (GMMs) using the PMC technique. The MFDWCs gave 5.24 and 3.23 points performance improvement on average over MFCCs for -6 dB and 0 dB SNR values, respectively. These correspond to 26.44% and 23.73% relative reductions in equal error rate (EER), respectively. | en_US
dc.language.iso | eng | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.relation.ispartof | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.subject | Speech recognition | en_US
dc.subject | Acoustic noise | en_US
dc.subject | Database systems | en_US
dc.subject | Error analysis | en_US
dc.subject | Natural frequencies | en_US
dc.subject | Wavelet transforms | en_US
dc.title | Noise robust speaker verification using mel-frequency discrete wavelet coefficients and parallel model compensation | en_US
dc.type | Conference Object | en_US
dc.authorid | TR132910 | en_US
dc.institutionauthor | Tüfekci, Zekeriya | -
dc.department | Izmir Institute of Technology. Electronics and Communication Engineering | en_US
dc.identifier.volume | 1 | en_US
dc.identifier.startpage | 657 | en_US
dc.identifier.endpage | 660 | en_US
dc.identifier.wos | WOS:000229404200165 | en_US
dc.identifier.scopus | 2-s2.0-33646765445 | en_US
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) | en_US
dc.identifier.doi | 10.1109/ICASSP.2005.1415199 | -
dc.relation.doi | 10.1109/ICASSP.2005.1415199 | en_US
dc.coverage.doi | 10.1109/ICASSP.2005.1415199 | en_US
dc.identifier.scopusquality | - | -
item.openairetype | Conference Object | -
item.languageiso639-1 | en | -
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
Appears in Collections:Electrical - Electronic Engineering / Elektrik - Elektronik Mühendisliği
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File | Description | Size | Format
2018.pdf | Conference Paper | 224.93 kB | Adobe PDF

Scopus citations: 10 (checked on Jan 21, 2023)
Page view(s): 72 (checked on Jan 23, 2023)
Download(s): 98 (checked on Jan 23, 2023)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.