Publication


Featured research published by Jeho Nam.


ACM Multimedia | 1999

Dynamic video summarization and visualization

Jeho Nam; Ahmed H. Tewfik

In this paper, we introduce a new video summarization procedure that produces a dynamic (video) abstract of the original video sequence. Our technique compactly summarizes video data by preserving its original temporal characteristics (visual activity) and semantically essential information. It relies on an adaptive nonlinear sampling: the local sampling rate is directly proportional to the amount of visual activity in localized sub-shot units of the video. The resulting video abstract is highly compact. To obtain very short, yet semantically meaningful summaries, we propose an event-oriented abstraction scheme in which two semantic events, emotional dialogue and violent action, are characterized and abstracted into the video summary before all other events. If the length of the summary permits, other non-key events are then added.
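The adaptive nonlinear sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the activity measure (mean absolute frame difference per sub-shot) is a simplified stand-in for the paper's visual-activity measure, and the function and variable names (`summarize`, `subshots`, `budget`) are hypothetical.

```python
import numpy as np

def summarize(frames, subshots, budget):
    """Pick about `budget` frame indices, allocated across (start, end)
    sub-shots in proportion to each sub-shot's visual activity."""
    frames = np.asarray(frames, dtype=float)
    # Per-sub-shot activity: mean absolute difference between adjacent frames.
    activity = []
    for start, end in subshots:
        seg = frames[start:end]
        activity.append(np.abs(np.diff(seg, axis=0)).mean() if len(seg) > 1 else 0.0)
    activity = np.asarray(activity)
    total = activity.sum()
    weights = activity / total if total > 0 else np.full(len(subshots), 1.0 / len(subshots))
    picked = []
    for (start, end), w in zip(subshots, weights):
        n = max(1, int(round(float(w) * budget)))  # at least one frame per sub-shot
        # Uniform sampling *within* a sub-shot; the rate varies *across*
        # sub-shots, which is what makes the overall sampling nonlinear.
        picked.extend(np.linspace(start, end - 1, n).astype(int).tolist())
    return sorted(set(picked))
```

High-activity sub-shots thus receive proportionally more frames, so the abstract preserves the relative amount of activity in each unit.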


International Conference on Acoustics, Speech, and Signal Processing | 1997

Combined audio and visual streams analysis for video sequence segmentation

Jeho Nam; Ahmed H. Tewfik

We present a new approach to segmenting video sequences into individual shots. Unlike previous approaches, our technique segments the video sequence by combining two streams of information extracted from the visual track with audio-track segmentation information. The visual streams of information are computed from the coarse data in a 3-D wavelet decomposition of the video track. They consist of (i) information derived from temporal edges detected along the time evolution of the intensity of each pixel in temporally sub-sampled, spatially filtered coarse frames, and (ii) information derived from the coarse spatio-temporal evolution of intra-frame edges in the spatially filtered coarse frames. Our approach is particularly matched to progressively transmitted video.
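The temporal-edge cue can be illustrated with a simplified stand-in: instead of the paper's full 3-D wavelet decomposition, the sketch below spatially smooths (2x2 block averaging, a crude proxy for the coarse wavelet band) and temporally sub-samples the frames, then flags a cut wherever the mean absolute temporal gradient spikes. The names, the sub-sampling step, and the fixed threshold are illustrative assumptions only.

```python
import numpy as np

def detect_cuts(frames, step=2, thresh=3.0):
    """Return indices (in the sub-sampled sequence) where a cut is likely."""
    frames = np.asarray(frames, dtype=float)[::step]   # temporal sub-sampling
    # Crude spatial low-pass: average over 2x2 blocks (stand-in for the
    # coarse band of a spatial wavelet transform).
    h, w = frames.shape[1] // 2 * 2, frames.shape[2] // 2 * 2
    coarse = frames[:, :h, :w].reshape(len(frames), h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    # Temporal edge strength for each transition between coarse frames.
    strength = np.abs(np.diff(coarse, axis=0)).mean(axis=(1, 2))
    # A cut is a transition whose strength dwarfs the sequence median.
    med = np.median(strength) + 1e-9
    return [i + 1 for i, s in enumerate(strength) if s > thresh * med]
```

Because everything is computed on coarse, sub-sampled data, this kind of cue is naturally suited to progressively transmitted video, where the coarse band arrives first.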


Multimedia Signal Processing | 1999

Video abstract of video

Jeho Nam; Ahmed H. Tewfik

We present a new video summarization procedure that produces a dynamic (video) abstract of the original video sequence. Our approach relies on an adaptive nonlinear sampling of the video. The local sampling rate is directly proportional to the amount of visual activity in localized sub-shot units of the video. The resulting video abstract is highly compact. At playtime, linear interpolation is used to provide the viewer with a summary of the video that accurately preserves the relative length and amount of activity in each sub-shot unit.


International Conference on Image Processing | 1997

Speaker identification and video analysis for hierarchical video shot classification

Jeho Nam; A. Enis Cetin; Ahmed H. Tewfik

We present a new video shot classification and clustering technique to support content-based indexing, browsing and retrieval in video databases. The proposed method is based on the analysis of both the audio and visual data tracks. The visual stream is analyzed using a 3-D wavelet transform and segmented into shot units which are matched and clustered by visual content. Simultaneously, speaker changes are detected by tracking voiced phonemes in the audio signal. The clues obtained from the video and speech data are combined to classify and group the isolated video shots. This integrated approach also allows effective indexing of the audio-visual objects in multimedia databases.


Multimedia Tools and Applications | 2002

Event-Driven Video Abstraction and Visualization

Jeho Nam; Ahmed H. Tewfik

In this paper, we propose a new video summarization procedure that produces a dynamic (video) abstract of the original video sequence. Our technique compactly summarizes video data by preserving its original temporal characteristics (visual activity) and semantically essential information. It relies on an adaptive nonlinear sampling: the local sampling rate is directly proportional to the amount of visual activity in localized sub-shot units of the video. To obtain very short, yet semantically meaningful summaries, we also present an event-oriented abstraction scheme in which two semantic events, emotional dialogue and violent action, are characterized and abstracted into the video summary before all other events. If the length of the summary permits, other non-key events are then added. The resulting video abstract is highly compact.


International Conference on Acoustics, Speech, and Signal Processing | 1998

Progressive resolution motion indexing of video object

Jeho Nam; Ahmed H. Tewfik

We present a novel motion-based video indexing scheme for fast content-based browsing and retrieval in a video database. The proposed technique constructs a dictionary of prototype objects to support query by motion. The first step in our approach extracts moving objects by analyzing layered images constructed from the coarse data in a 3-D wavelet decomposition of the video sequence. These images capture motion information only. Moving objects are modeled as collections of interconnected rigid polygonal shapes in the motion sequences that we derive from the wavelet representation. The motion signatures of the object are computed from the rotational and translational motions associated with the elemental polygons that form the objects. These signatures are finally stored as potential query terms.
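For a rigid polygon tracked across frames, the rotational and translational components mentioned above can be recovered as in this hypothetical sketch: translation as frame-to-frame centroid displacement, and rotation as the least-squares angle between centred vertex sets (the 2-D Procrustes solution). The input layout (a `(T, V, 2)` array of vertex coordinates) and all names are illustrative assumptions, not the paper's representation.

```python
import numpy as np

def motion_signature(vertices):
    """From a (T, V, 2) array of polygon vertices over T frames, return
    per-frame translations (T-1, 2) and rotation angles (T-1,) in radians."""
    vertices = np.asarray(vertices, dtype=float)
    centroids = vertices.mean(axis=1)              # (T, 2) polygon centres
    translations = np.diff(centroids, axis=0)      # frame-to-frame shifts
    rotations = []
    for a, b in zip(vertices[:-1], vertices[1:]):
        pa, pb = a - a.mean(axis=0), b - b.mean(axis=0)  # remove translation
        # Least-squares rotation angle between the two centred point sets.
        num = (pa[:, 0] * pb[:, 1] - pa[:, 1] * pb[:, 0]).sum()
        den = (pa * pb).sum()
        rotations.append(np.arctan2(num, den))
    return translations, np.asarray(rotations)
```

Concatenating such per-polygon translation/rotation sequences gives a compact signature that can be stored and matched as a query term.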


Visual Communications and Image Processing | 1998

Motion-based video object indexing using multiresolution analysis

Jeho Nam; Ahmed H. Tewfik

In this paper, we describe an efficient video indexing scheme, based on the motion behavior of video objects, for fast content-based browsing and retrieval in a video database. The proposed method constructs a dictionary of prototype objects. The first step in our approach extracts moving objects by analyzing layered images constructed from the coarse data in a 3-D wavelet decomposition of the video sequence. These images capture motion information only. Moving objects are modeled as collections of interconnected rigid polygonal shapes in the motion sequences that we derive from the wavelet representation. The motion signatures of the object are computed from the rotational and translational motions associated with the elemental polygons that form the objects. These signatures are finally stored as potential query terms.


International Conference on Image Processing | 1998

Audio-visual content-based violent scene characterization

Jeho Nam; Masoud Alghoniemy; Ahmed H. Tewfik


IEEE Transactions on Multimedia | 2005

Detection of gradual transitions in video sequences using B-spline interpolation

Jeho Nam; Ahmed H. Tewfik


International Conference on Multimedia and Expo | 2000

Dissolve transition detection using B-splines interpolation

Jeho Nam; Ahmed H. Tewfik
