Prof. Dr. Ing. Pavel Zemčík

AL-HAMES Marc, HAIN Thomas, ČERNOCKÝ Jan, SCHREIBER Sascha, POEL Mannes, MÜLLER Ronald, MARCEL Sebastien, VAN LEEUWEN David, ODOBEZ Jean-Marc, BA Sileye, BOURLARD Herve, CARDINAUX Fabien, GATICA-PEREZ Daniel, JANIN Adam, MOTLÍČEK Petr, REITER Stephan, RENALS Steve, VAN REST Jeroen, RIENKS Rutger, RIGOLL Gerhard, SMITH Kevin, THEAN Andrew and ZEMČÍK Pavel. Audio-Visual Processing in Meetings: Seven Questions and Current AMI Answers. In: Proc. 3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI 2006). Washington D.C., 2006, p. 12.
Publication language: English
Publication title: Audio-Visual Processing in Meetings: Seven Questions and Current AMI Answers
Title (cs): Audiovizuální zpracování meetingů - sedm otázek a odpovědí projektu AMI
Pages: 12
Proceedings: Proc. 3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI 2006)
Conference: 3rd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms
Place of publication: Washington D.C., US
Year: 2006
URL: http://www.fit.vutbr.cz/~cernocky/publi/2006/wp4_mlmi_final.pdf [PDF]
Keywords
speech processing, video processing, multi-modal interaction
Annotation
The article deals with audio-visual processing of meetings; it poses seven questions and presents seven answers from the AMI project.
Abstract
The project Augmented Multi-party Interaction (AMI) is concerned with the development of meeting browsers and remote meeting assistants for instrumented meeting rooms, and with the required component technologies. Its R&D themes are group dynamics; audio, visual, and multimodal processing; content abstraction; and human-computer interaction. The audio-visual processing workpackage within AMI addresses automatic recognition from audio, video, and combined audio-video streams that have been recorded during meetings. In this article we describe the progress that has been made in the first two years of the project. We show how the large problem of audio-visual processing in meetings can be split into seven questions, such as "Who is acting during the meeting?". We then show which algorithms and methods have been developed and evaluated for the automatic answering of these questions.
BibTeX:
@INPROCEEDINGS{
   author = {Marc Al-Hames and Thomas Hain and Jan
	{\v{C}}ernock{\'{y}} and Sascha Schreiber and
	Mannes Poel and Ronald M{\"{u}}ller and Sebastien
	Marcel and David van Leeuwen and Jean-Marc Odobez
	and Sileye Ba and Herve Bourlard and Fabien
	Cardinaux and Daniel Gatica-Perez and Adam Janin
	and Petr Motl{\'{i}}{\v{c}}ek and Stephan Reiter
	and Steve Renals and Jeroen van Rest and Rutger
	Rienks and Gerhard Rigoll and Kevin Smith and
	Andrew Thean and Pavel Zem{\v{c}}{\'{i}}k},
   title = {Audio-Visual Processing in Meetings: Seven
	Questions and Current AMI Answers},
   pages = 12,
   booktitle = {Proc. 3rd Joint Workshop on Multimodal Interaction and
	Related Machine Learning Algorithms (MLMI 2006)},
   year = 2006,
   location = {Washington D.C., US},
   language = {english},
   url = {http://www.fit.vutbr.cz/research/view_pub.php.cs?id=8237}
}
