Conference paper

HRADIŠ Michal, ŘEZNÍČEK Ivo and BEHÚŇ Kamil. Semantic Class Detectors in Video Genre Recognition. In: Proceedings of VISAPP 2012. Rome: SciTePress - Science and Technology Publications, 2012, pp. 640-646. ISBN 978-989-8565-03-7.
Publication language: English
Original title: Semantic Class Detectors in Video Genre Recognition
Title (cs): Detektory sémantických tříd pro rozpoznávání žánru (Semantic class detectors for genre recognition)
Proceedings: Proceedings of VISAPP 2012
Conference: International Conference on Computer Vision Theory and Applications 2012
Place: Rome, IT
Publisher: SciTePress - Science and Technology Publications
File: 2011-Hradis-VISAPP.pdf (937 KB)
Keywords: genre recognition, SIFT, SVM, classifier fusion, bag of words
Abstract: This paper presents our approach to video genre recognition, which we developed for the MediaEval 2011 evaluation. We treat genre recognition as a classification problem. Visual information is encoded in a standard way using local features and a Bag of Words representation. The audio channel is parameterized similarly, starting from its spectrogram. Further, we exploit the available automatic speech transcripts and user-generated meta-data, for which we compute Bag of Words representations as well. It is reasonable to expect that the semantic content of a video is strongly related to its genre, and if this semantic information were available it would make genre recognition simpler and more reliable. To this end, we used annotations for 345 semantic classes from the TRECVID 2011 semantic indexing task to train semantic class detectors. The responses of these detectors were then used as features for genre recognition. The paper explains the approach in detail, shows the relative performance of the individual features and their combinations measured on the MediaEval 2011 genre recognition dataset, and sketches possible future research. The results show that, although the meta-data is more informative than the content-based features, results improve when content-based information is added to the meta-data. Despite the fact that the semantic detectors were trained on a completely different dataset, using them as feature extractors on the target dataset yields better results than the original low-level audio and video features.
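The Bag of Words encoding mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a real system would use SIFT descriptors, a vocabulary of thousands of visual words, and a proper k-means library, whereas here a toy k-means and random 2-D "descriptors" stand in for both.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Toy k-means: pick k visual words (cluster centers) from local descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Encode one video's local descriptors as an L1-normalized word histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

if __name__ == "__main__":
    # synthetic "descriptors" drawn from two well-separated clusters
    rng = np.random.default_rng(1)
    desc = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                      rng.normal(5.0, 0.1, (50, 2))])
    vocab = build_vocabulary(desc, k=2)
    h = bow_histogram(desc, vocab)
    print(h)  # fixed-length histogram, usable as an SVM feature vector
```

The resulting fixed-length histogram is what makes variable numbers of local features usable by a classifier such as an SVM, and the same recipe applies to the audio spectrogram features described above.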
BibTeX (the entry key is a placeholder; the original fragment was truncated and has been completed from the citation data above):
@INPROCEEDINGS{hradis2012semantic,
   author = {Michal Hradi{\v{s}} and Ivo {\v{R}}ezn{\'{i}}{\v{c}}ek and Kamil Beh{\'{u}}{\v{n}}},
   title = {Semantic Class Detectors in Video Genre Recognition},
   pages = {640--646},
   booktitle = {Proceedings of VISAPP 2012},
   year = 2012,
   location = {Rome, IT},
   publisher = {SciTePress - Science and Technology Publications},
   ISBN = {978-989-8565-03-7},
   language = {english},
   url = {}
}
