
Publications


Featured research published by Tanveer Fathima Syeda-Mahmood.


ACM Multimedia | 2001

Learning video browsing behavior and its application in the generation of video previews

Tanveer Fathima Syeda-Mahmood; Dulce B. Ponceleon

With more and more streaming media servers becoming commonplace, streaming video has now become a popular medium of instruction, advertisement, and entertainment. With such prevalence comes a new challenge to the servers: can they track the browsing behavior of users to determine what interests users? Learning this information is potentially valuable not only for improved customer tracking and context-sensitive e-commerce, but also for the generation of fast previews of videos for easy pre-downloads. In this paper, we present a formal learning mechanism to track the video browsing behavior of users. This information is then used to generate fast video previews. Specifically, we model the states a user transitions through while browsing videos as the hidden states of a Hidden Markov Model. We estimate the parameters of the HMM using maximum likelihood estimation for each sample observation sequence of user interaction with videos. Video previews are then formed from interesting segments of the video, automatically inferred from an analysis of the browsing states of viewers. Audio coherence in the previews is maintained by selecting clips spanning complete clauses containing topically significant spoken phrases. The utility of learning video browsing behavior is demonstrated through user studies and experiments.
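
The browsing model above is a standard HMM. As a minimal sketch of the forward-likelihood computation such a model supports, the snippet below scores an observation sequence of user actions; the state names, action set, and all probabilities are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical browsing states and user actions (not from the paper).
states = ["skimming", "engaged", "searching"]
actions = ["play", "pause", "fast_forward", "rewind"]

# Illustrative HMM parameters; in the paper these are fit by
# maximum likelihood from observed interaction sequences.
pi = np.array([0.5, 0.3, 0.2])              # initial state distribution
A = np.array([[0.6, 0.3, 0.1],              # state transition matrix
              [0.2, 0.7, 0.1],
              [0.3, 0.2, 0.5]])
B = np.array([[0.2, 0.1, 0.6, 0.1],         # emission matrix: P(action | state)
              [0.7, 0.2, 0.05, 0.05],
              [0.1, 0.1, 0.4, 0.4]])

def forward_likelihood(obs):
    """Forward algorithm: P(observation sequence | HMM)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Observation sequence given as indices into `actions`.
seq = [actions.index(a) for a in ["play", "fast_forward", "fast_forward", "pause"]]
print(forward_likelihood(seq))
```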


Proceedings IEEE Workshop on Detection and Recognition of Events in Video | 2001

Recognizing action events from multiple viewpoints

Tanveer Fathima Syeda-Mahmood; M. Alex O. Vasilescu; Saratendu Sethi

A first step towards an understanding of the semantic content in a video is the reliable detection and recognition of actions performed by objects. This is a difficult problem due to the enormous variability in an action's appearance when seen from different viewpoints and/or at different times. In this paper, we address the recognition of actions by taking a novel approach that models actions as special types of 3D objects. Specifically, we observe that any action can be represented as a generalized cylinder, called the action cylinder. Reliable recognition is achieved by recovering the viewpoint transformation between the reference (model) and given action cylinders. A set of 8 corresponding points from time-wise corresponding cross-sections is shown to be sufficient to align the two cylinders under perspective projection. A surprising conclusion from visualizing actions as objects is that rigid, articulated, and nonrigid actions can all be modeled in a uniform framework.
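
The paper's key geometric step, aligning two action cylinders from eight corresponding points under perspective projection, is reminiscent of the classic eight-point algorithm for two-view geometry. The sketch below illustrates only that eight-correspondence idea on synthetic data with OpenCV; it is not the paper's cylinder-alignment method, and the camera parameters are assumptions.

```python
import numpy as np
import cv2

# Synthetic 3D points seen from two viewpoints (illustrative only).
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, size=(8, 3)) + np.array([0, 0, 5.0])

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics
rvec1, tvec1 = np.zeros(3), np.zeros(3)                      # view 1: identity pose
rvec2 = np.array([0.0, 0.3, 0.0])                            # view 2: small rotation
tvec2 = np.array([0.5, 0.0, 0.0])                            # ... plus a translation

pts1, _ = cv2.projectPoints(pts3d, rvec1, tvec1, K, None)
pts2, _ = cv2.projectPoints(pts3d, rvec2, tvec2, K, None)

# Classic eight-point algorithm: eight correspondences linearly determine
# the fundamental matrix relating two perspective views.
F, mask = cv2.findFundamentalMat(pts1.reshape(-1, 2), pts2.reshape(-1, 2),
                                 cv2.FM_8POINT)
print(F)
```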


International Conference on Web Services | 2005

Searching service repositories by combining semantic and ontological matching

Tanveer Fathima Syeda-Mahmood; Gauri Shah; Rama Akkiraju; Anca-Andreea Ivan; Richard Goodwin

In this paper, we explore the use of domain-independent and domain-specific ontologies to find matching service descriptions. The domain-independent relationships are derived using an English thesaurus after tokenization and part-of-speech tagging. The domain-specific ontological similarity is derived by an inference on the semantic annotations associated with Web service descriptions. Matches due to the two cues are combined to determine an overall semantic similarity score. By combining multiple cues, we show that better relevance results can be obtained for service matches from a large repository than could be obtained using any one cue alone.
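
As a rough sketch of combining the two cues, the snippet below mixes a thesaurus-based similarity (WordNet via NLTK, standing in for the paper's English thesaurus) with a toy domain-ontology overlap score via a weighted sum; the ontology, the terms, and the weights are all illustrative assumptions.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Hypothetical domain ontology: term -> set of ancestor concepts. In the
# paper, domain-specific similarity comes from inference over semantic
# annotations attached to the service descriptions.
DOMAIN_ONTOLOGY = {
    "invoice": {"BusinessDocument", "FinancialArtifact"},
    "bill":    {"BusinessDocument", "FinancialArtifact"},
    "receipt": {"BusinessDocument"},
}

def thesaurus_similarity(t1, t2):
    """Domain-independent cue: best WordNet path similarity across senses."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(t1) for s2 in wn.synsets(t2)]
    return max(scores, default=0.0)

def ontology_similarity(t1, t2):
    """Domain-specific cue: Jaccard overlap of ancestor concepts."""
    a, b = DOMAIN_ONTOLOGY.get(t1, set()), DOMAIN_ONTOLOGY.get(t2, set())
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_similarity(t1, t2, w_thesaurus=0.5, w_ontology=0.5):
    # Weighted combination of the two cues (weights are illustrative).
    return (w_thesaurus * thesaurus_similarity(t1, t2)
            + w_ontology * ontology_similarity(t1, t2))

print(combined_similarity("invoice", "bill"))
```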


International Conference on Web Services | 2006

SEMAPLAN: Combining Planning with Semantic Matching to Achieve Web Service Composition

Rama Akkiraju; Biplav Srivastava; Anca-Andreea Ivan; Richard Goodwin; Tanveer Fathima Syeda-Mahmood

In this paper, we present a novel algorithm to compose Web services in the presence of semantic ambiguity by combining semantic matching and AI planning algorithms. Specifically, we use cues from domain-independent and domain-specific ontologies to compute an overall semantic similarity score between ambiguous terms. This semantic similarity score is used by AI planning algorithms to guide the search process when composing services. Experimental results indicate that planning with semantic matching produces better results than planning or semantic matching alone. The solution is suitable for semi-automated composition tools or directory browsers.
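
A minimal sketch of the idea, assuming a toy service registry: a best-first planner expands services whose inputs are matched by the facts produced so far, with weaker matches adding cost to steer the search. The registry, the exact-overlap stand-in for semantic matching, and the cost scheme are all illustrative, not the paper's SEMAPLAN algorithm.

```python
import heapq

# Hypothetical service registry: name -> (inputs, outputs).
SERVICES = {
    "GeocodeAddress": ({"address"}, {"latitude", "longitude"}),
    "NearbyStores":   ({"latitude", "longitude"}, {"store_list"}),
    "StoreHours":     ({"store_list"}, {"opening_hours"}),
}

def semantic_match(required, provided):
    """Stand-in for semantic similarity: plain set overlap. A real system
    would score ambiguous term pairs via thesaurus and ontology cues."""
    return len(required & provided) / len(required) if required else 1.0

def compose(available, goal):
    """Best-first search over service chains, cheapest plan first."""
    heap = [(0.0, [], frozenset(available))]  # (cost, plan, known facts)
    seen = set()
    while heap:
        cost, plan, facts = heapq.heappop(heap)
        if goal <= facts:
            return plan
        if facts in seen:
            continue
        seen.add(facts)
        for name, (inputs, outputs) in SERVICES.items():
            score = semantic_match(inputs, facts)
            if score > 0.99 and name not in plan:  # inputs satisfied
                # Lower similarity would add cost, steering the search.
                heapq.heappush(heap, (cost + (1.0 - score),
                                      plan + [name],
                                      facts | outputs))
    return None

print(compose({"address"}, {"opening_hours"}))
# -> ['GeocodeAddress', 'NearbyStores', 'StoreHours']
```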


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Shape-based Matching of ECG Recordings

Tanveer Fathima Syeda-Mahmood; David Beymer; Fei Wang

An electrocardiogram (ECG) is an important and commonly used diagnostic aid in cardiovascular disease diagnosis. Physicians routinely perform diagnosis by a simple visual examination of ECG waveform shapes. In this paper, we address the problem of shape-based retrieval of ECG recordings, both digital and scanned from paper, to infer similarity in diagnosed diseases. Specifically, we use the knowledge of ECG recording structure to segment and extract curves representing various recording channels from ECG images. We then present a method of capturing the perceptual shape similarity of ECG waveforms by combining shape matching with dynamic time warping. The shape similarity of each recording channel is combined to develop an overall shape similarity measure between ECG recordings. Results are presented that demonstrate the method on shape-based matching of various cardiovascular diseases.
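
A minimal sketch of the dynamic time warping step on a single channel, using a plain pointwise distance (the paper couples DTW with a shape-similarity measure, which is not reproduced here); the toy waveforms are illustrative.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping between two 1D signals (e.g., one ECG channel)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two toy "waveforms": the second is a time-stretched copy of the first,
# so DTW scores them as close despite their different lengths.
beat1 = np.sin(np.linspace(0, 2 * np.pi, 60))
beat2 = np.sin(np.linspace(0, 2 * np.pi, 90))
print(dtw_distance(beat1, beat2))

# An overall recording similarity could then combine channel-wise DTW
# distances, echoing the paper's combination across recording channels.
```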


ACM Multimedia | 2000

Detecting topical events in digital video

Tanveer Fathima Syeda-Mahmood; Savitha Srinivasan

The detection of events is essential to high-level semantic querying of video databases. It is also a very challenging problem requiring the detection and integration of evidence for an event available in multiple information modalities, such as audio, video and language. This paper focuses on the detection of specific types of events, namely, topic of discussion events that occur in classroom/lecture environments. Specifically, we present a query-driven approach to the detection of topic of discussion events with foils used in a lecture as a way to convey a topic. In particular, we use the image content of foils to detect visual events in which the foil is displayed and captured in the video stream. The recognition of a foil in video frames exploits the color and spatial layout of regions on foils using a technique called region hashing. Next, we use the textual phrases listed on a foil as an indication of a topic, and detect topical audio events as places in the audio track where the best evidence for the topical phrases was heard. Finally, we use a probabilistic model of event likelihood to combine the results of visual and audio event detection that exploits their time co-occurrence. The resulting identification of topical events is evaluated in the domain of classroom lectures and talks.
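
A minimal sketch of fusing the two modalities by time co-occurrence: detections from each stream that fall within a shared window are merged, with a simple product of confidences standing in for the paper's probabilistic likelihood model. The times, scores, and window size are invented for illustration.

```python
# (time_in_seconds, confidence) for each modality's detections.
visual_events = [(120.0, 0.9), (305.0, 0.7)]   # foil seen in video frames
audio_events  = [(118.5, 0.8), (410.0, 0.6)]   # topical phrase heard

WINDOW = 5.0  # seconds within which detections count as co-occurring

def fuse(visual, audio, window=WINDOW):
    events = []
    for tv, sv in visual:
        for ta, sa in audio:
            if abs(tv - ta) <= window:
                # Product of confidences: a simple stand-in for the
                # paper's probabilistic model of event likelihood.
                events.append(((tv + ta) / 2, sv * sa))
    return events

print(fuse(visual_events, audio_events))  # -> [(119.25, ~0.72)]
```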


Computer Vision and Pattern Recognition | 2009

Echocardiogram view classification using edge filtered scale-invariant motion features

Ritwik Kumar; Fei Wang; David Beymer; Tanveer Fathima Syeda-Mahmood

In a 2D echocardiogram exam, an ultrasound probe samples the heart with 2D slices. Changing the orientation and position of the probe changes the slice viewpoint, altering the cardiac anatomy being imaged. The determination of the probe viewpoint forms an essential step in automatic cardiac echo image analysis. In this paper, we present a system for automatic view classification that exploits cues from both cardiac structure and motion in echocardiogram videos. In our framework, each image from the echocardiogram video is represented by a set of novel salient features. We locate these features at scale-invariant points in the edge-filtered motion magnitude images and encode them using local spatial, textural, and kinetic information. Training in our system involves learning a hierarchical feature dictionary and the parameters of a pyramid matching kernel based support vector machine. At test time, each image is classified independently and casts a vote toward the parent video's classification; the viewpoint with the most votes wins. Through experiments on a large database of echocardiograms obtained from both diseased and control subjects, we show that our technique consistently outperforms state-of-the-art methods in the popular four-view classification test. We also present results for eight-view classification to demonstrate the scalability of our framework.
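
A minimal sketch of the voting step alone, assuming per-frame feature vectors are already computed: each frame is classified independently and the majority label wins. The random features, an RBF kernel in place of the paper's pyramid match kernel, and the four-class setup are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

# Random stand-ins for the paper's edge-filtered, scale-invariant
# motion descriptors (64-dimensional here, purely for illustration).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 4, size=200)          # four cardiac views

clf = SVC(kernel="rbf").fit(X_train, y_train)   # paper: pyramid match kernel

def classify_video(frame_features):
    votes = clf.predict(frame_features)          # one label per frame
    label, _ = Counter(votes).most_common(1)[0]  # majority vote wins
    return label

video_frames = rng.normal(size=(30, 64))         # 30 frames of one echo video
print(classify_video(video_frames))
```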


ACM Multimedia | 1999

CueVideo: automated multimedia indexing and retrieval

Dulce B. Ponceleon; Arnon Amir; Savitha Srinivasan; Tanveer Fathima Syeda-Mahmood; Dragutin Petkovic

We demonstrate CueVideo: a system for automated indexing and retrieval of multimedia. The system consists of the following components: video analysis and segmentation, visualization and summarization techniques, spoken document retrieval, and cross-modal indexing of audio/video, related slides, and text.


Computer Vision and Pattern Recognition | 2000

Indexing for topics in videos using foils

Tanveer Fathima Syeda-Mahmood

A long-standing goal of distance learning has been to provide a quality of learning comparable to the face-to-face environment of a traditional classroom for teaching or training. One of the fundamental problems in achieving this goal is providing effective ways of high-level semantic querying, such as the retrieval of relevant learning material related to a topic of discussion. In this paper, we present a method of identifying video segments relating to a topic of discussion by indexing videos using the image and text content of foils. Specifically, we present a novel method of locating and recognizing foil images in video using the color and spatial layout geometry of their regions. We then search the audio associated with video based on the text content of the foil to identify related video segments in which concepts represented on a foil are heard. Finally, we combine the results of foil image and text search of video, exploiting their time co-occurrence. The resulting identification of topics is evaluated in the domain of classroom lectures and talks.
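
The paper does not spell out region hashing here, so the snippet below is only a loose guess at the flavor of the idea: hash each region by its quantized mean color and its coarse grid position, index the foil's regions, and probe the index with regions found in a video frame. Every constant and region below is a made-up illustration.

```python
import numpy as np

GRID = 4          # 4x4 spatial grid over the image
COLOR_BINS = 8    # quantization levels per RGB channel

def region_hash(mean_rgb, centroid, image_size):
    """Hash a region by its quantized mean color and its grid cell."""
    r, g, b = (np.array(mean_rgb) * COLOR_BINS / 256).astype(int)
    cx = int(centroid[0] * GRID / image_size[0])
    cy = int(centroid[1] * GRID / image_size[1])
    return (r, g, b, cx, cy)

# Index the foil's regions, then probe with a region from a video frame.
foil_index = {}
foil_regions = [((250, 250, 245), (320, 60)),   # white title band
                ((30, 60, 180), (320, 240))]    # blue body block
for i, (rgb, xy) in enumerate(foil_regions):
    foil_index.setdefault(region_hash(rgb, xy, (640, 480)), []).append(i)

frame_region = ((248, 252, 240), (330, 70))      # region seen in a frame
hits = foil_index.get(region_hash(*frame_region, (640, 480)), [])
print(hits)  # -> [0] : the frame region matches the foil's title band
```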


ACM Multimedia | 1999

Multimedia access and retrieval (panel session): the state of the art and future directions

Shih-Fu Chang; Gwendal Auffret; Jonathan Foote; Chung-Shen Li; Behzad Shahraray; Tanveer Fathima Syeda-Mahmood; Hong-Jiang Zhang

Several years have passed since the research topic of content-based multimedia retrieval emerged. We have witnessed burgeoning research activity on a plenitude of new indexing, retrieval, and filtering tools for images, video, audio, music, graphics, and their combinations with text-based information. Exciting research opportunities arise when integrating knowledge from multiple disciplines, such as media content processing, databases, information retrieval, and user interfaces. In the commercial domain, we have also witnessed several impressive efforts moving technologies into practical arenas.
