TA2: Together Anywhere, Together Anytime

Research leader: Smrž Pavel
Team leader: Zemčík Pavel
Agency: EU-7FP-ICT
Code: 214793
Start: 2010
End: 2012
Keywords: social interaction, multimedia processing
Annotation:
TA2 (Together Anywhere, Together Anytime), pronounced "tattoo", aims at defining end-to-end systems for the development and delivery of new, creative forms of interactive, immersive, high quality media experiences for groups of users such as households and families. The overall vision of TA2 can be summarised as "making communications and engagement easier among groups of people separated in space and time".

One of the key components of TA2 is a set of generic, reliable tools for audio and video processing and for multimodal integration and recognition. This includes the automatic extraction of cues from raw data streams. The running TA2 project stresses low-level "instantaneous" cues; it does not deal with semantics-aware integration of contextual information, which could significantly improve the quality of the cues.
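
To make the distinction concrete, the sketch below shows what a low-level "instantaneous" cue extractor might look like: an energy-based voice-activity cue computed independently for each audio frame. This is a minimal illustration under assumed conventions; the function name, threshold, and cue format are not taken from the TA2 codebase.

```python
import numpy as np

def voice_activity_cue(frame: np.ndarray, energy_threshold: float = 0.01) -> dict:
    """Emit an instantaneous voice-activity cue for one audio frame.

    `frame` is a 1-D array of PCM samples normalised to [-1, 1].
    The threshold is an illustrative constant, not a tuned project value.
    """
    energy = float(np.mean(frame ** 2))       # short-term energy of the frame
    return {
        "cue": "voice_activity",
        "active": energy > energy_threshold,  # hard per-frame decision
        "energy": energy,                     # raw score, usable for later fusion
    }

# Each frame is judged in isolation: no temporal smoothing and no context,
# which is precisely the limitation the proposed extension addresses.
```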

The proposed TA2 project extension focuses on medium-level (context-aware) cues that take into account not only the outputs of low-level analysis but also contextual information, e.g. about the activated scenario. The created semantic cues will be used by the TA2 system to orchestrate (i.e. frame, crop, and represent) the audio-visual elements of the interaction between people.
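
As an illustration of how such medium-level fusion might work, the following hypothetical sketch combines low-level cues with an activated-scenario label to pick a framing action. The Cue structure, scenario names, and decision rule are assumptions made for this example only, not the project's actual orchestration logic.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    kind: str     # e.g. "voice_activity" or "face_position" (illustrative)
    person: str   # participant the cue refers to
    score: float  # confidence reported by the low-level analyser

def orchestrate(cues: list[Cue], scenario: str) -> str:
    """Choose a framing action from low-level cues plus scenario context.

    Toy rule: in a "game" scenario keep a wide shot so all players stay
    visible; otherwise crop to the most confident detected speaker.
    """
    if scenario == "game":
        return "wide_shot"
    speakers = [c for c in cues if c.kind == "voice_activity" and c.score > 0.5]
    if speakers:
        loudest = max(speakers, key=lambda c: c.score)
        return f"crop_to:{loudest.person}"
    return "wide_shot"

# The same cue leads to different framing once the scenario context changes:
print(orchestrate([Cue("voice_activity", "alice", 0.9)], scenario="chat"))  # crop_to:alice
print(orchestrate([Cue("voice_activity", "alice", 0.9)], scenario="game"))  # wide_shot
```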

The addition of BUT to the consortium will allow the semantic relevance of the metadata extracted from the analysis to be interpreted within the particular contexts described in the project. This will make the subsequent orchestration of the video more effective and more efficient, and hence improve the end-user experience. The extension will enable better applications that help families interact easily and openly: through games, through improved semi-automatic production and publication of user-generated content, and through enhanced ambient connectedness between families.

Products

Classifier creation framework for diverse classification tasks, software, 2010
Authors: Bařina David, Hradiš Michal, Řezníček Ivo, Zemčík Pavel

Online human action recognition framework, software, 2010
Authors: Řezníček Ivo, Hradiš Michal, Zemčík Pavel

Shared Image Preprocessing, software, 2010
Authors: Žák Pavel, Hradiš Michal, Smrž Pavel, Zemčík Pavel

Publications

2012

BEDNAŘÍK, R., VRZÁKOVÁ, H. and HRADIŠ, M. What you want to do next: A novel approach for intent prediction in gaze-based interaction. In: ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 83-90. ISBN 978-1-4503-1221-9.

HRADIŠ, M., EIVAZI, S. and BEDNAŘÍK, R. Voice activity detection in video mediated communication from gaze. In: ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 329-332. ISBN 978-1-4503-1221-9.

HRADIŠ, M., ŘEZNÍČEK, I. and BEHÚŇ, K. Semantic Class Detectors in Video Genre Recognition. In: Proceedings of VISAPP 2012. Rome: SciTePress - Science and Technology Publications, 2012, pp. 640-646. ISBN 978-989-8565-03-7.

KRÁL, J. and HRADIŠ, M. Restricted Boltzmann Machines for Image Tag Suggestion. In: Proceedings of the 19th Conference STUDENT EEICT 2012. Brno: Brno University of Technology, 2012, p. 5.

MOTLÍČEK, P., VALENTE, F. and SZŐKE, I. Improving Acoustic Based Keyword Spotting Using LVCSR Lattices. In: Proc. International Conference on Acoustics, Speech, and Signal Processing 2012. Kyoto: IEEE Signal Processing Society, 2012, pp. 4413-4416. ISBN 978-1-4673-0044-5.

POLÁČEK, O., KLÍMA, M., SPORKA, A. J., ŽÁK, P., HRADIŠ, M., ZEMČÍK, P. and PROCHÁZKA, V. A comparative study on distant free-hand pointing. In: EuroITV '12: Proceedings of the 10th European Conference on Interactive TV and Video. Berlin, Germany: Association for Computing Machinery, 2012, pp. 139-142. ISBN 978-1-4503-1107-6.

2011

HRADIŠ, M., ŘEZNÍČEK, I. and BEHÚŇ, K. Brno University of Technology at MediaEval 2011 Genre Tagging Task. In: Working Notes Proceedings of the MediaEval 2011 Workshop. Pisa, Italy: CEUR-WS.org, 2011, pp. 1-2. ISSN 1613-0073.

ŘEZNÍČEK, I. and ZEMČÍK, P. On-line human action detection using space-time interest points. In: Proceedings of the ITAT Conference, September 2011. Praha: Faculty of Mathematics and Physics, 2011, pp. 39-45. ISBN 978-80-89557-01-1.

2010

HRADIŠ, M., BERAN, V., ŘEZNÍČEK, I., HEROUT, A., BAŘINA, D., VLČEK, A. and ZEMČÍK, P. Brno University of Technology at TRECVid 2010. In: TRECVID 2010: Participant Notebook Papers and Slides. Gaithersburg, MD: National Institute of Standards and Technology, 2010, p. 11.

ŽÁK, P., BARTOŇ, R. and ZEMČÍK, P. Vision based user interface framework. In: Proceedings of the DT workshop. Žilina, 2010, p. 3. ISBN 978-80-554-0304-5.

ŘEZNÍČEK, I. and BAŘINA, D. Classifier creation framework for diverse classification tasks. In: Proceedings of the DT workshop. Žilina: Brno University of Technology, 2010, p. 3. ISBN 978-80-554-0304-5.
