dc.contributor.author | Dilpazir, Hammad |
dc.date.accessioned | 2019-10-09T06:20:24Z |
dc.date.accessioned | 2020-04-15T03:21:36Z |
dc.date.available | 2020-04-15T03:21:36Z |
dc.date.issued | 2018 |
dc.identifier.govdoc | 17392 |
dc.identifier.uri | http://142.54.178.187:9060/xmlui/handle/123456789/11521 |
dc.description.abstract | Information fusion is an essential part of distributed wireless sensor networks as well as perceptual user interfaces. Irrelevant and redundant data severely degrade the performance of an information fusion framework. The work presented in this thesis addresses the problem of the acceptability of data from multiple sources. A uni-modal information-theoretic framework for face recognition is first developed, and its performance is compared with existing face recognition algorithms based on principal component analysis. The uni-modal framework is then extended to a multi-modal algorithm that uses multivariate mutual information to validate the acceptability of data from two sources. Unlike earlier algorithms, the framework requires no preprocessing, such as automatic face recognition. Moreover, it does not rely on statistical modeling or on feature extraction and learning algorithms to locate the maximum-information regions. Cases involving a single speaker as well as a group of speakers are analyzed. The applicability of the algorithm is tested by combining it with the Haar-like features of Viola and Jones for lip activity detection. | en_US
dc.description.sponsorship | Higher Education Commission Pakistan | en_US
dc.language.iso | en_US | en_US
dc.publisher | Quaid-i-Azam University, Islamabad. | en_US
dc.subject | physics | en_US
dc.title | Multivariate Mutual Information for Audio Video Fusion | en_US
dc.type | Thesis | en_US
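
The abstract describes validating cross-modal data with multivariate mutual information. As a rough illustration of the bivariate building block behind that idea, the sketch below estimates mutual information between two paired feature streams (for example, per-frame audio energy and the mean pixel intensity of a candidate mouth region) using a plain histogram estimator. The function name, the binning estimator, and the toy signals are illustrative assumptions, not the thesis's actual method.

    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of mutual information I(X;Y) in bits.

        x, y: 1-D arrays of paired observations, e.g. per-frame audio
        energy and mean pixel intensity of a candidate mouth region.
        (Illustrative sketch; not the estimator used in the thesis.)
        """
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()              # joint distribution p(x, y)
        px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
        py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
        nz = pxy > 0                           # skip empty bins to avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # Toy check: a correlated pair of streams yields clearly positive MI,
    # while an independent pair stays near zero.
    rng = np.random.default_rng(0)
    audio = rng.normal(size=5000)
    video = audio + 0.5 * rng.normal(size=5000)   # correlated "modality"
    noise = rng.normal(size=5000)                 # independent "modality"
    print(mutual_information(audio, video))       # substantially > 0
    print(mutual_information(audio, noise))       # close to 0

On correlated streams the estimate is clearly positive; on independent streams it stays near zero. Thresholding such an estimate is one simple way to decide whether data from two sources are mutually acceptable for fusion, which is the role the abstract assigns to the multivariate measure.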