Lai-Tee Cheok
Samsung
Publications
Featured research published by Lai-Tee Cheok.
ACM Multimedia | 2000
Michelle Y. Kim; Steve Wood; Lai-Tee Cheok
This paper describes the Extensible MPEG-4 Textual format (XMT), a framework for representing MPEG-4 scene descriptions using a textual syntax. XMT allows content authors to exchange their content with other authors, tools, or service providers, and facilitates interoperability with both X3D, developed by the Web3D Consortium, and the Synchronized Multimedia Integration Language (SMIL) from the W3C.
International Conference on Multimedia and Expo | 2010
Lai-Tee Cheok; Nikhil Gagvani
Video surveillance systems increasingly use H.264 coding to achieve 24×7 recording and streaming. However, with the proliferation of security cameras and the need to store several months of video, bandwidth and storage costs can be significant. We propose a new compression technique that significantly improves the coding efficiency of H.264 for surveillance video. Video content is analyzed and video semantics are extracted using video analytics algorithms such as segmentation, classification, and tracking. In contrast to existing approaches, our Analytics-Modulated Compression (AMC) scheme does not require coding of object shape information and produces bitstreams that are standards-compliant and not limited to specific H.264 profiles. Extensive experiments were conducted on real surveillance scenes; results show that our technique achieves compression gains of 67% over the JM reference encoder. We also introduce AMC Rate Control (AMC RC), which allocates bits in response to scene dynamics and is shown to significantly reduce artifacts in constant-bitrate video at low bitrates.
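The abstract does not give implementation details, but the core idea — letting analytics output drive encoder-side quantization while the bitstream stays standard — can be illustrated. The following is a minimal sketch under assumed parameters (the QP deltas and block size are illustrative, not the paper's values): a binary foreground mask from segmentation/tracking modulates the per-macroblock quantization parameter, spending more bits on detected objects and fewer on background.

```python
import numpy as np

def modulate_qp(fg_mask, base_qp=30, fg_delta=-6, bg_delta=+8, mb=16):
    """Derive a per-macroblock QP map from a binary foreground mask.

    fg_mask: HxW array, nonzero where analytics detected a moving object.
    A macroblock containing any foreground pixel is coded at a lower QP
    (finer quantization); pure-background blocks get a higher QP.
    """
    h, w = fg_mask.shape
    qp_map = np.empty((h // mb, w // mb), dtype=int)
    for by in range(h // mb):
        for bx in range(w // mb):
            block = fg_mask[by*mb:(by+1)*mb, bx*mb:(bx+1)*mb]
            delta = fg_delta if block.any() else bg_delta
            qp_map[by, bx] = int(np.clip(base_qp + delta, 0, 51))  # H.264 QP range
    return qp_map

# Toy 32x32 frame: an object inside the top-left macroblock only.
mask = np.zeros((32, 32), dtype=np.uint8)
mask[2:10, 2:10] = 1
print(modulate_qp(mask))  # [[24 38]
                          #  [38 38]]
```

Because only the encoder's QP decisions change, any compliant H.264 decoder can play the result, which matches the abstract's claim of standards compliance.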
IEEE Transactions on Circuits and Systems for Video Technology | 1999
Hari Kalva; Li Tang; Jean-François Huard; George S. Tselikis; Javier Zamora; Lai-Tee Cheok; Alexandros Eleftheriadis
We describe the implementation of a streaming client-server system for object-based audio-visual presentations in general and MPEG-4 content in particular. The system augments the MPEG-4 demonstration software (IM1) for PCs by adding network-based operation with full support for the Delivery Multimedia Integration Framework (DMIF) specification, a streaming PC-based server with DMIF support (via Xbind Inc.'s XDMIF suite), and multiplexing software. We describe XDMIF, the first reference implementation of the DMIF specification. The MPEG-4 server is designed for delivering object-based audio-visual presentations, and we discuss the issues in the design and implementation of MPEG-4 servers. The system also implements a novel architecture for client-server interaction in object-based audio-visual presentations, using command routes and command descriptors; this mechanism is useful for developing sophisticated interactive applications.
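The command-descriptor mechanism itself is specific to MPEG-4 Systems, but the general pattern — a descriptor naming a command and its target, routed through a registration table to a handler — can be sketched. The names below (`CommandDescriptor`, `routes`, `start_stream`) are hypothetical illustrations, not the MPEG-4 data types:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CommandDescriptor:
    command_id: int      # which command to execute
    target_node: str     # scene node the command applies to
    payload: bytes = b""  # opaque parameters

# Route table: command id -> handler (the "command routes" idea).
routes: Dict[int, Callable[[CommandDescriptor], str]] = {}

def route(command_id: int):
    """Decorator registering a handler for one command id."""
    def register(fn):
        routes[command_id] = fn
        return fn
    return register

@route(1)
def start_stream(desc: CommandDescriptor) -> str:
    return f"starting stream for node {desc.target_node}"

def dispatch(desc: CommandDescriptor) -> str:
    handler = routes.get(desc.command_id)
    if handler is None:
        raise KeyError(f"no route for command {desc.command_id}")
    return handler(desc)

print(dispatch(CommandDescriptor(1, "video0")))  # starting stream for node video0
```

The point of the indirection is that new interactive behaviors can be added by registering new routes, without changing the dispatch path between client and server.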
Proceedings of SPIE | 2012
Velibor Adzic; Hari Kalva; Lai-Tee Cheok
Cues from the human visual system (HVS) can be used to further optimize compression in modern hybrid video coding platforms. We present work that explores and exploits motion-related attentional limitations. Algorithms exploiting motion-triggered attention were developed and compared with the MPEG AVC/H.264 encoder under various settings and bitrate levels. For sequences with high motion activity, our algorithm provides up to 8% bitrate savings.
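One way such a scheme can work — sketched here under assumed, illustrative thresholds rather than the paper's actual algorithm — is to quantize fast-moving blocks more coarsely, on the premise that the HVS cannot track them accurately enough to notice the extra distortion:

```python
import numpy as np

def attention_qp_offset(mv_magnitude, threshold=8.0, max_offset=6):
    """Map per-block motion-vector magnitude to a QP offset.

    Blocks moving faster than `threshold` (pixels/frame) are assumed to be
    perceptually masked by limited HVS tracking, so they can be quantized
    more coarsely; the offset grows with speed and is capped at max_offset.
    All constants here are illustrative assumptions.
    """
    mv = np.asarray(mv_magnitude, dtype=float)
    over = np.clip(mv - threshold, 0, None)          # speed above threshold
    return np.minimum(np.round(over / 4), max_offset).astype(int)

print(attention_qp_offset([2.0, 10.0, 40.0]))  # [0 0 6]
```

Slow blocks keep their base QP (offset 0), so static or gaze-attracting regions are untouched; the bitrate savings come entirely from the high-motion regions.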
International Symposium on Multimedia | 2012
Lai-Tee Cheok; Sol Yee Heo; Donato Mitrani; Anshuman Tewari
Face recognition is one of the most promising and successful applications of image analysis and understanding, with uses including biometric identification, gaze estimation, emotion recognition, and human-computer interfaces. A closed system trained to recognize only a predetermined set of faces quickly becomes obsolete. In this paper, we describe a demo we have developed that uses face detection and recognition algorithms to recognize actors and actresses in movies; the demo runs on a Samsung tablet. We also present a proposed method that lets the user interact with the system during training while watching video: new faces are tracked and trained into new face classifiers as the video continues to play, and the face database is updated dynamically.
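The key property described — a gallery that is never a closed, fixed set of classes — can be illustrated with a toy dynamically-updatable face database. This sketch assumes faces have already been reduced to fixed-length embedding vectors by some detector/recognizer (the class name, threshold, and matching rule are illustrative, not the paper's method):

```python
import numpy as np

class FaceDB:
    """Toy dynamically-updatable face gallery (illustrative only)."""

    def __init__(self, threshold=0.5):
        self.names, self.embs = [], []
        self.threshold = threshold  # max embedding distance to accept a match

    def enroll(self, name, emb):
        """Add a newly labeled identity while playback continues."""
        self.names.append(name)
        self.embs.append(np.asarray(emb, dtype=float))

    def identify(self, emb):
        """Nearest-neighbor match against the current gallery."""
        if not self.embs:
            return "unknown"
        d = np.linalg.norm(np.stack(self.embs) - np.asarray(emb, float), axis=1)
        i = int(np.argmin(d))
        return self.names[i] if d[i] <= self.threshold else "unknown"

db = FaceDB()
print(db.identify([1.0, 0.0]))    # unknown (empty gallery)
db.enroll("actor_a", [1.0, 0.0])  # user labels a new face mid-playback
print(db.identify([0.9, 0.1]))    # actor_a
```

Because enrollment only appends to the gallery, recognition keeps working on previously known faces while new ones become recognizable immediately, which is the behavior the demo's dynamic database update describes.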
International Conference on Multimedia Information Networking and Security | 2012
Lai-Tee Cheok; Sungryuel Rhyu; Jae-Yeon Song
Mashups integrate content from disparate sources to create new data that supports new functionality. By hybridizing information and by reusing and remixing available content through association and recombination, new services can be provided to users. Although data mashups are not new, mashups of multimedia content are still in their infancy. We propose a new language, the Multimedia Mashup Markup Language (M3L), that enables flexible access to and easy combination of multimedia content. We first describe the state of the art in mashups, then present our proposed language, emphasizing its distinct features relative to that state of the art. We also propose extending the Synchronized Multimedia Integration Language (SMIL) to support the data-manipulation logic needed for result-set processing.
Archive | 2011
Lai-Tee Cheok; Nhut Nguyen; Jae-Yeon Song; Sung-ryeul Rhyu; Seo-Young Hwang; Kyung-Mo Park
Archive | 2012
Nhut Nguyen; Lai-Tee Cheok; Hojin Ha
Archive | 2012
Nhut Nguyen; Hojin Ha; Lai-Tee Cheok
Archive | 2012
Lai-Tee Cheok; Jaeyong Song; Kyungmo Park