CBMI 2014

12th International Workshop on Content-Based Multimedia Indexing

Keynote Speakers


We are honored to announce the following inspiring invited speakers:

Keynote 1: Can you trust your content based retrieval system?
June 18, 2014

Speaker: Univ.-Prof. Dr. Andreas Uhl, Computer Sciences Department, Paris Lodron University of Salzburg, Austria


Abstract: Content-based media processing is not a domain well known for high security awareness. We will discuss questions related to security and privacy in such systems. On the one hand, content-based retrieval systems can be deceived by fraudulently manipulated queries; on the other hand, data privacy is threatened, especially in distributed search and retrieval setups. We will discuss possible solutions, as proposed in the context of biometric systems, and examine their generalisability to other content-based processing application areas.

Short Bio: Andreas Uhl is currently head of the Computer Sciences Department at the Paris Lodron University of Salzburg, Austria, where he leads the MultiMedia Signal Processing and Security (WaveLab) group, specialising in media security and watermarking, biometrics, medical image analysis, and image and video processing and compression in general. Andreas Uhl holds Master's degrees in Pure Mathematics and in Secondary/High School teacher education (with qualifications in mathematics, computer science, philosophy, and psychology), holds a PhD in applied mathematics, and is a tenured professor of computer science. He is an associate editor of ACM TOMCCAP, Signal Processing: Image Communication, the EURASIP Journal on Image and Video Processing, and the ETRI Journal. In June 2014, he acts as general chair of the 2nd ACM Workshop on Information Hiding and Multimedia Security, to be held in Salzburg, Austria.


Keynote 2: Video Browsing
June 18, 2014

Speaker: Ass.-Prof. DI Dr. Klaus Schöffmann,
Klagenfurt University

Abstract: Video browsing tools combine content-based multimedia indexing with human-computer interaction methods to provide interactive search in video content. Over the years, many sophisticated tools have been proposed in the literature. These tools complement content-based video retrieval applications and are particularly useful in situations where no query can be formulated for a specific search need. Instead of relying only on content-based indexing methods, they provide flexible interaction features and use abstract content visualization methods to support the user's interactive search process. In this talk, I will first motivate why video browsing is a powerful and convenient way of performing content-based search in video. I will discuss state-of-the-art tools designed for interactive search in video and outline how they integrate users into the search process and allow them to use their potential knowledge of the underlying content. I will also focus on ways to evaluate such tools, discuss the Video Browser Showdown competition, and draw conclusions for further work on video browsing. The last part of the talk will focus on how to use modern features of mobile multimedia devices to develop novel video search tools.

Short Bio: Klaus Schöffmann is a senior assistant professor at the Institute of Information Technology at Klagenfurt University, Austria. His current research focuses on visual content analysis and interactive video search, including user interfaces, image/video content analysis, content visualization, and interaction models for video search and retrieval. He is the author of numerous peer-reviewed publications on video browsing, video exploration, and video content analysis. He received his Ph.D. degree (Dr. techn.) from Klagenfurt University in June 2009; in his Ph.D. thesis he investigated ways to combine video browsing, video retrieval, and video summarization in order to allow for immediate video exploration. Klaus Schöffmann teaches various courses in computer science (including Media Technology, Multimedia Systems, Operating Systems, and Distributed Systems) and has (co-)organized international conferences, special sessions, and workshops. He is a member of the IEEE and the ACM and a regular reviewer for international conferences and journals in the field of multimedia.


Keynote 3: Activity Recognition and Psychophysics: Towards Real-Time Processing of 3D Tele-immersive Video
June 20, 2014

Speaker: Prof. Klara Nahrstedt
Department of Computer Science
University of Illinois at Urbana-Champaign

Abstract: With the decreasing cost and growing ubiquity of 3D cameras, real-time 3D tele-immersive video is becoming a real possibility in the next generation of stored and interactive tele-immersive systems. However, real-time processing of 3D tele-immersive video is still very challenging when executing queries, transmission, and/or user interactions with 3D tele-immersive video. In this talk, we will make the case that in order to achieve real-time processing of 3D tele-immersive video, we need to consider two important issues: understanding user perception and detecting content semantics. Based on different case studies, we discuss two psychophysical, activity-driven approaches for real-time 3D tele-immersive video. Both approaches achieve real-time processing through adaptive compression of 3D tele-immersive video, based on an understanding of user perception and the detection of activity semantics in the content. Both approaches reduce underlying resource usage while preserving the overall perceived visual Quality of Experience.

In the first case study, we show the importance of the semantic factor "CZLoD: Color-plus-Depth Level-of-Details". Through a psychophysical study, we show the existence of two important thresholds on the CZLoD factor, the Just Noticeable Degradation and Just Unacceptable Degradation thresholds, which are activity-dependent. This approach then utilizes the activity-dependent CZLoD thresholds in a real-time, perception-based quality adaptor, reducing underlying resource usage while enhancing perceived visual quality. In the second study, we combine activity recognition with real-time morphing-based compression of 3D tele-immersive video, where the morphing rate is controlled by user experience and activities within a resource adaptor, reducing bandwidth while preserving perceived visual quality. In both studies, the results are very encouraging, promising fast retrieval and transmission times and high-quality user interactions with 3D tele-immersive video.

Short Bio: Klara Nahrstedt is the Ralph and Catherine Fisher Professor in the Computer Science Department and Acting Director of the Coordinated Science Laboratory in the College of Engineering at the University of Illinois at Urbana-Champaign. Her research interests are directed toward 3D tele-immersive systems, mobile systems, Quality of Service (QoS) and resource management, Quality of Experience in multimedia systems, and real-time security in mission-critical systems. She is the co-author of the widely used multimedia books 'Multimedia: Computing, Communications and Applications' (Prentice Hall) and 'Multimedia Systems' (Springer Verlag). She is the recipient of the IEEE Communication Society Leonard Abraham Award for Research Achievements, a University Scholar award, the Humboldt Award, and the IEEE Computer Society Technical Achievement Award, and is a former chair of the ACM Special Interest Group in Multimedia. She was the general chair of ACM Multimedia 2006, ACM NOSSDAV 2007, and IEEE PerCom 2009.

Klara Nahrstedt received her Diploma in Mathematics (numerical analysis) from Humboldt University, Berlin, Germany, in 1985. In 1995, she received her PhD from the Department of Computer and Information Science at the University of Pennsylvania. She is an ACM Fellow, an IEEE Fellow, and a Member of the Leopoldina German National Academy of Sciences.