Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yuan-Fang Wang is active.

Publication


Featured research published by Yuan-Fang Wang.


international symposium on computer vision | 1995

An eigenspace update algorithm for image analysis

B. S. Manjunath; Shiv Chandrasekaran; Yuan-Fang Wang

During the past few years several interesting applications of eigenspace representation of images have been proposed. These include face recognition, video coding, and pose estimation. However, the vision research community has largely overlooked parallel developments in signal processing and numerical linear algebra concerning efficient eigenspace updating algorithms. These new developments are significant for two reasons: first, adopting them makes some current vision algorithms more robust and efficient; second, and more important, incremental updating of eigenspace representations opens up new and interesting research applications in vision, such as active recognition and learning. The main objective of the paper is to put these in perspective and discuss a recently introduced updating scheme that has been shown to be numerically stable and optimal. We provide an example of one particular application to 3D object representation from projections and give an error analysis of the algorithm. Preliminary experimental results are shown.
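
The updating scheme itself is not reproduced here. As a rough illustration of the general idea behind incremental eigenspace updates, the following sketch (assuming NumPy; the function name update_eigenspace and its parameters are ours, not the paper's) folds one new image into an existing rank-k basis via a small (k+1) x (k+1) re-diagonalization instead of recomputing the SVD from scratch:

    import numpy as np

    def update_eigenspace(U, s, x, k):
        """Fold a new image vector x (d,) into an eigenspace spanned by
        U (d, k) with singular values s (k,). Generic rank-one SVD update;
        illustrative only, not the paper's exact scheme."""
        coeffs = U.T @ x                     # coordinates of x in the basis
        residual = x - U @ coeffs            # part of x outside the basis
        r_norm = np.linalg.norm(residual)
        # Small (k+1) x (k+1) matrix whose SVD gives the updated spectrum.
        M = np.zeros((k + 1, k + 1))
        M[:k, :k] = np.diag(s)
        M[:k, k] = coeffs
        M[k, k] = r_norm
        Up, sp, _ = np.linalg.svd(M)
        # Extend the basis with the normalized residual, rotate, truncate.
        new_dir = residual / r_norm if r_norm > 1e-10 else np.zeros_like(x)
        U_ext = np.hstack([U, new_dir[:, None]])
        return (U_ext @ Up)[:, :k], sp[:k]

    # Usage: seed the space with an initial batch, then stream in new images.
    d, k = 1024, 10
    U, s, _ = np.linalg.svd(np.random.rand(d, 50), full_matrices=False)
    U, s = U[:, :k], s[:k]
    U, s = update_eigenspace(U, s, np.random.rand(d), k)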


Information Processing and Management | 2002

The use of bigrams to enhance text categorization

Chade-Meng Tan; Yuan-Fang Wang; Chan-Do Lee

In this paper, we present an efficient text categorization algorithm that generates bigrams selectively by looking for ones that have an especially good chance of being useful. The algorithm uses the information gain metric, combined with various frequency thresholds. The bigrams, along with unigrams, are then given as features to two different classifiers: naive Bayes and maximum entropy. The experimental results suggest that the bigrams can substantially raise the quality of feature sets, showing increases in the break-even points and F1 measures. The McNemar test shows that in most categories the increases are statistically significant. Upon close examination of the algorithm, we concluded that it is most successful in correctly classifying more positive documents, but may cause more negative documents to be classified incorrectly.
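
A minimal sketch of the selective bigram-generation step, assuming binary categories; the information-gain formula is the standard one, but the function names and the threshold values are illustrative, not taken from the paper:

    import math
    from collections import Counter

    def info_gain(n11, n10, n01, n00):
        """Information gain of a binary feature w.r.t. a binary class;
        n11 = docs containing the feature and in the class, and so on."""
        n = n11 + n10 + n01 + n00
        def H(p):
            return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        p_class = (n11 + n01) / n
        p_feat = (n11 + n10) / n
        h_cond = 0.0
        if p_feat > 0:
            h_cond += p_feat * H(n11 / (n11 + n10))
        if p_feat < 1:
            h_cond += (1 - p_feat) * H(n01 / (n01 + n00))
        return H(p_class) - h_cond

    def select_bigrams(docs, labels, min_df=3, min_gain=0.01):
        """docs: lists of tokens; labels: 0/1. Keep bigrams that pass a
        document-frequency threshold and an information-gain threshold."""
        df, df_pos = Counter(), Counter()
        for toks, y in zip(docs, labels):
            for b in set(zip(toks, toks[1:])):
                df[b] += 1
                df_pos[b] += y
        n_pos, n_docs = sum(labels), len(docs)
        selected = []
        for b, f in df.items():
            if f < min_df:
                continue                      # frequency threshold
            n11, n10 = df_pos[b], f - df_pos[b]
            n01 = n_pos - df_pos[b]
            n00 = n_docs - f - n01
            if info_gain(n11, n10, n01, n00) >= min_gain:
                selected.append(b)
        return selected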


international conference on multimedia and expo | 2004

Human activity detection and recognition for video surveillance

Wei Niu; Jiao Long; Dan Han; Yuan-Fang Wang

We present a framework for detecting and recognizing human activities for outdoor video surveillance applications. Our research makes the following contributions: for activity detection and tracking, we improve robustness by providing intelligent control and fail-over mechanisms, built on top of low-level motion detection algorithms such as frame differencing and feature correlation. For activity recognition, we propose an efficient representation of human activities that enables recognition of different interaction patterns among a group of people based on simple statistics computed on the tracked trajectories, without building complicated Markov chains, hidden Markov models (HMMs), or coupled hidden Markov models (CHMMs). We demonstrate our techniques using real-world video data to automatically distinguish normal behaviors from suspicious ones in a parking-lot setting, which can aid security surveillance.
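
The paper's exact features and decision logic are not given here, but the following sketch (assuming NumPy; the thresholds and rule names are made up for illustration) shows the flavor of recognizing interaction patterns from simple statistics on a pair of tracked trajectories:

    import numpy as np

    def interaction_features(traj_a, traj_b):
        """traj_a, traj_b: (T, 2) arrays of tracked positions over T frames."""
        d = np.linalg.norm(traj_a - traj_b, axis=1)     # frame-wise separation
        va, vb = np.diff(traj_a, axis=0), np.diff(traj_b, axis=0)
        cos_heading = np.sum(va * vb, axis=1) / (
            np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-9)
        return {"mean_dist": d.mean(),
                "std_dist": d.std(),
                "heading_agreement": cos_heading.mean()}

    def classify_pair(f, near=30.0):
        # Toy decision rules on the statistics; thresholds are illustrative.
        if f["mean_dist"] < near and f["heading_agreement"] > 0.8:
            return "moving together"
        if f["std_dist"] < 5.0 and f["heading_agreement"] > 0.8:
            return "following at a fixed distance"
        return "independent"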


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Geometric and illumination invariants for object recognition

Ronald-Bryan O. Alferez; Yuan-Fang Wang

We propose invariant formulations that can potentially be combined into a single system. In particular, we describe a framework for computing invariant features which are insensitive to rigid motion, affine transforms, changes of parameterization and scene illumination, perspective transforms, and viewpoint changes. This is unlike most current research on image invariants, which concentrates on either geometric or illumination invariants exclusively. The formulations are widely applicable to many popular basis representations, such as wavelets, short-time Fourier analysis, and splines. Exploiting formulations that examine information about shape and color at different resolution levels, the new approach is neither strictly global nor strictly local. It enables a quasi-localized, hierarchical shape analysis which is rarely found in other known invariant techniques, such as global invariants. Furthermore, it does not require estimating high-order derivatives in computing invariants (unlike local invariants), and hence is more robust. We provide results of numerous experiments on both synthetic and real data to demonstrate the validity and flexibility of the proposed framework.
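
The paper's formulations are built on basis expansions and are not reproduced here; the sketch below only illustrates the elementary idea behind a geometric (affine) invariant: under x -> Ax + b every signed triangle area is scaled by det(A), so a ratio of two such areas is unchanged (assumes NumPy; the choice of points and map is arbitrary):

    import numpy as np

    def area(p, q, r):
        """Signed area of triangle (p, q, r); scales by det(A) under x -> Ax + b."""
        u, v = q - p, r - p
        return 0.5 * (u[0] * v[1] - u[1] * v[0])

    def affine_invariant(pts):
        """Ratio of two triangle areas over four points: unchanged by any
        nonsingular affine map. Illustrative, not the paper's formulation."""
        p0, p1, p2, p3 = pts
        return area(p0, p1, p2) / area(p0, p1, p3)

    pts = np.random.rand(4, 2)
    A, b = np.array([[2.0, 0.5], [0.3, 1.5]]), np.array([4.0, -1.0])
    mapped = pts @ A.T + b
    assert np.isclose(affine_invariant(pts), affine_invariant(mapped))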


acm multimedia | 2003

Multi-camera spatio-temporal fusion and biased sequence-data learning for security surveillance

Gang Wu; Yi-Leh Wu; Long Jiao; Yuan-Fang Wang; Edward Y. Chang

We present a framework for multi-camera video surveillance. The framework consists of three phases: detection, representation, and recognition. The detection phase handles multi-source spatio-temporal data fusion for efficiently and reliably extracting motion trajectories from video. The representation phase summarizes raw trajectory data to construct hierarchical, invariant, and content-rich descriptions of the motion events. Finally, the recognition phase deals with event classification and identification on the data descriptors. Because of space limits, we describe only briefly how we detect and represent events, but we provide in-depth treatment on the third phase: event recognition. For effective recognition, we devise a sequence-alignment kernel function to perform sequence data learning for identifying suspicious events. We show that when the positive training instances (i.e., suspicious events) are significantly outnumbered by the negative training instances (benign events), then SVMs (or any other learning methods) can suffer a high incidence of errors. To remedy this problem, we propose the kernel boundary alignment (KBA) algorithm to work with the sequence-alignment kernel. Through empirical study in a parking-lot surveillance setting, we show that our spatio-temporal fusion scheme and biased sequence-data learning method are highly effective in identifying suspicious events.
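
Neither the paper's sequence-alignment kernel nor KBA is reproduced here. As a rough sketch of the pipeline (assuming NumPy and scikit-learn), one can turn alignment scores into a precomputed kernel and use class weighting as a simple stand-in for the paper's bias correction:

    import numpy as np
    from sklearn.svm import SVC

    def align_score(a, b, gap=-1.0):
        """Needleman-Wunsch global alignment score between two sequences of
        feature vectors, with negative squared distance as the match score."""
        n, m = len(a), len(b)
        D = np.zeros((n + 1, m + 1))
        D[1:, 0] = gap * np.arange(1, n + 1)
        D[0, 1:] = gap * np.arange(1, m + 1)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = -np.sum((np.asarray(a[i - 1]) - np.asarray(b[j - 1])) ** 2)
                D[i, j] = max(D[i - 1, j - 1] + match,
                              D[i - 1, j] + gap,
                              D[i, j - 1] + gap)
        return D[n, m]

    def kernel_matrix(seqs_a, seqs_b):
        # Exponentiated alignment scores; such similarities are not guaranteed
        # positive semi-definite, one of the issues the paper's kernel addresses.
        K = np.array([[align_score(a, b) for b in seqs_b] for a in seqs_a])
        return np.exp(K / 10.0)

    # class_weight='balanced' is a crude stand-in for KBA under class imbalance.
    train = [[[0, 0], [1, 1], [2, 2]], [[0, 0], [0, 1], [0, 2]], [[5, 5], [6, 6]]]
    y = [0, 0, 1]
    clf = SVC(kernel="precomputed", class_weight="balanced")
    clf.fit(kernel_matrix(train, train), y)
    print(clf.predict(kernel_matrix([[[5, 5], [6, 7]]], train)))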


computational systems bioinformatics | 2003

CTSS: a robust and efficient method for protein structure alignment based on local geometrical and biological features

Tolga Can; Yuan-Fang Wang

We present a new method for conducting protein structure similarity searches, which improves on the accuracy, robustness, and efficiency of some existing techniques. Our method is grounded in the theory of differential geometry on 3D space curve matching. We generate shape signatures for proteins that are invariant, localized, robust, compact, and biologically meaningful. To improve matching accuracy, we smooth the noisy raw atomic coordinate data with spline fitting. To improve matching efficiency, we adopt a hierarchical coarse-to-fine strategy. We use an efficient hashing-based technique to screen out unlikely candidates and perform detailed pairwise alignments only for the small number of candidates that survive the screening process. Unlike other hashing-based techniques, ours employs domain-specific information (not just geometric information) in constructing the hash key, and hence is more tuned to the domain of biology. Furthermore, the invariance, localization, and compactness of the shape signatures allow us to utilize a well-known local sequence alignment algorithm for aligning two protein structures. One measure of the efficacy of the proposed technique is that we were able to discover new, meaningful motifs that were not reported by other structure alignment methods.
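
The full CTSS signature includes biological terms not shown here. A minimal sketch of the geometric part (assuming NumPy and SciPy; the parameter values are illustrative) computes curvature and torsion along a spline-smoothed C-alpha trace:

    import numpy as np
    from scipy.interpolate import splprep, splev

    def curvature_torsion_signature(ca_coords, smooth=3.0, n=200):
        """ca_coords: (N, 3) C-alpha coordinates. Returns an (n, 2) array of
        curvature/torsion samples along the smoothed backbone curve."""
        tck, _ = splprep(np.asarray(ca_coords).T, s=smooth)  # spline smoothing
        t = np.linspace(0, 1, n)
        d1 = np.asarray(splev(t, tck, der=1)).T   # r'(t)
        d2 = np.asarray(splev(t, tck, der=2)).T   # r''(t)
        d3 = np.asarray(splev(t, tck, der=3)).T   # r'''(t)
        cross = np.cross(d1, d2)
        cross_norm = np.linalg.norm(cross, axis=1)
        speed = np.linalg.norm(d1, axis=1)
        kappa = cross_norm / speed ** 3                            # curvature
        tau = np.einsum('ij,ij->i', cross, d3) / cross_norm ** 2   # torsion
        return np.stack([kappa, tau], axis=1)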


Journal of Image Guided Surgery | 1995

Automated instrument tracking in robotically assisted laparoscopic surgery

Darrin R. Uecker; Cheolwhan Lee; Yuan-Fang Wang; Yulun Wang

This paper describes a practical and reliable image analysis and tracking algorithm to achieve automated instrument localization and scope maneuvering in robotically assisted laparoscopic surgery. Laparoscopy is a minimally invasive surgical procedure that utilizes multiple small incisions on the patient's body through which the surgeon inserts tools and a videoscope in order to conduct an operation. The scope relays images of internal organs to a camera, and the images are displayed on a video screen. The surgeon performs the operation by viewing the scope images rather than performing the traditional “open” procedure, where a large incision is made on the patient's body for direct viewing. The current mode of laparoscopy employs an assistant to hold the scope and position it in response to the surgeon's verbal commands. However, this results in suboptimal visual feedback, because the scope is often aimed incorrectly and vibrates due to hand trembling. We have developed a robotic laparoscope positioner to replace the assistant. The surgeon commands the robotic positioner through a hand/foot controller interface. To further simplify the human-machine interface that controls the robotic scope positioner, we report here a novel scope-positioning scheme using automated image analysis and robotic visual servoing. The scheme enables the surgeon to control visual feedback and to perform surgery more efficiently without requiring additional use of the hands. J Image Guid Surg 1:308–325 (1995).
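
The tracking and servoing details are in the paper; the fragment below (gains, deadband, and the function name are ours) only illustrates closing the loop: mapping a tracked instrument-tip position to pan/tilt rates that recenter it in the scope image.

    def servo_command(tip_px, frame_size, gain=0.002, deadband=20):
        """Proportional visual-servoing sketch: drive the scope so the tracked
        instrument tip (pixel coordinates) moves toward the image center."""
        cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
        ex, ey = tip_px[0] - cx, tip_px[1] - cy    # pixel error from center
        if abs(ex) < deadband and abs(ey) < deadband:
            return 0.0, 0.0                        # close enough: hold still
        return -gain * ex, -gain * ey              # pan rate, tilt rate

    pan_rate, tilt_rate = servo_command((420, 310), (640, 480))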


Computerized Medical Imaging and Graphics | 1998

A new framework for vision-enabled and robotically assisted minimally invasive surgery

Yuan-Fang Wang; Darrin R. Uecker; Yulun Wang

This paper presents our ongoing research on bringing the state of the art in vision and robotics technologies to the emerging field of minimally invasive surgery, in particular the laparoscopic surgical procedure. A framework that utilizes intelligent visual modeling, recognition, and servoing capabilities for assisting the surgeon in maneuvering the scope (camera) in laparoscopy is proposed. The proposed framework integrates top-down model guidance, bottom-up image analysis, and surgeon-in-the-loop monitoring for added patient safety. For the top-down directives, high-level models are used to represent the abdominal anatomy and to encode choreographed scope movement sequences based on the surgeon's knowledge. For the bottom-up analysis, vision algorithms are designed for image analysis, modeling, and matching in a flexible, deformable environment (the abdominal cavity). For reconciling the top-down and bottom-up activities, robot servoing mechanisms are realized for executing choreographed scope movements with active vision guidance. The proposed choreographed scope maneuvering concept facilitates the surgeon's hands-free control of his/her visual feedback, reduces the risk to the patient from inappropriate scope movements by an assistant, and allows the operation to be performed faster and with greater ease. In this paper, we describe the new framework and present some preliminary results on laparoscopic image analysis for segmentation and instrument localization, and on instrument tracking.
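
As a structural sketch only (the class and method names are ours, not the paper's), the framework's control loop might be organized as follows, with the top-down choreography, bottom-up vision, and surgeon-in-the-loop check each in its own slot:

    class ChoreographedScopeController:
        """Skeleton of the proposed top-down/bottom-up/servoing integration."""
        def __init__(self, anatomy_model, choreography, vision, robot):
            self.model = anatomy_model   # top-down: abdominal anatomy model
            self.moves = choreography    # top-down: encoded scope movements
            self.vision = vision         # bottom-up: image analysis/matching
            self.robot = robot           # servoing: robotic scope positioner

        def execute(self, move_name, surgeon_ok):
            for waypoint in self.moves[move_name]:
                if not surgeon_ok():     # surgeon-in-the-loop safety check
                    self.robot.stop()
                    return
                target = self.vision.locate(self.model, waypoint)
                self.robot.servo_to(target)   # active vision guidance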


computer science and information engineering | 2009

Smoke Detection in Video

Dong-Keun Kim; Yuan-Fang Wang

In this paper, we propose a method for smoke detection in outdoor video sequences. We assume that the camera is mounted on a pan/tilt device. The proposed method is composed of three steps. The first step is to decide whether the camera is moving or not; while the camera is moving, we skip the ensuing steps. Otherwise, the second step is to detect the areas of change in the current input frame against the background image and to locate regions of interest (ROIs) by connected component analysis. The block-based approach is applied in both the first and second steps. In the final step, we decide whether a detected ROI is smoke by using the k-temporal information of its color and shape extracted from the ROI. We show experimental results on forest surveillance videos.
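
A sketch of the second step, assuming OpenCV and NumPy; the block size and thresholds are illustrative, and the camera-motion test (step one) and the k-temporal color/shape decision (step three) are omitted:

    import cv2
    import numpy as np

    def candidate_smoke_rois(frame, background, block=16, diff_thresh=25):
        """Block-based change detection against the background image, then
        connected component analysis to extract candidate ROIs."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, bg)
        h, w = diff.shape
        mask = np.zeros((h, w), np.uint8)
        for y in range(0, h - block + 1, block):      # block-based decision
            for x in range(0, w - block + 1, block):
                if diff[y:y + block, x:x + block].mean() > diff_thresh:
                    mask[y:y + block, x:x + block] = 255
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        # Bounding boxes (x, y, w, h) of changed regions; label 0 is background.
        return [tuple(stats[i, :4]) for i in range(1, n)]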


computer vision and pattern recognition | 2006

Using Stationary-Dynamic Camera Assemblies for Wide-area Video Surveillance and Selective Attention

Ankur Jain; Dan Kopell; Kyle Kakligian; Yuan-Fang Wang

In this paper, we present a prototype video surveillance system that uses stationary-dynamic (or master-slave) camera assemblies to achieve wide-area surveillance and selective focus-of-attention. We address two critical issues in deploying such camera assemblies in real-world applications: off-line camera calibration and on-line selective focus-of-attention. Our contributions over existing techniques are twofold: (1) in terms of camera calibration, our technique calibrates all degrees-of-freedom (DOFs) of both stationary and dynamic cameras, using a closed-form solution that is both efficient and accurate, and (2) in terms of selective focus-of-attention, our technique correctly handles dynamic changes in the scene and varying object depths. This is a significant improvement over existing techniques that use an expensive and non-adaptable table-lookup process.
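
The closed-form calibration itself is not shown. Under the common simplifying assumption that the two cameras are co-located (the paper's method does more, handling all DOFs and varying object depth), aiming the dynamic camera at a pixel seen by the stationary camera reduces to back-projecting the pixel and converting the ray to pan/tilt angles (assumes NumPy; K and R below are an illustrative intrinsic matrix and inter-camera rotation):

    import numpy as np

    def pan_tilt_for_pixel(u, v, K, R):
        """Pan/tilt angles that aim the dynamic camera along the ray through
        pixel (u, v) of the stationary camera. Co-located-camera sketch."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project pixel
        d = R @ (ray / np.linalg.norm(ray))              # ray in dynamic frame
        pan = np.arctan2(d[0], d[2])                     # azimuth about y-axis
        tilt = np.arctan2(-d[1], np.hypot(d[0], d[2]))   # elevation angle
        return pan, tilt

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    pan, tilt = pan_tilt_for_pixel(500, 100, K, np.eye(3))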

Collaboration


Dive into Yuan-Fang Wang's collaborations.

Top Co-Authors

Dan Koppel (University of California)
Tolga Can (Middle East Technical University)
Chao-I Chen (University of California)
Xin Wang (University of California)
Dusty Sargent (University of California)
Hua Lee (University of California)
Che-Tsung Lin (Industrial Technology Research Institute)
Long-Tai Chen (Industrial Technology Research Institute)