Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mau-Tsuen Yang is active.

Publication


Featured research published by Mau-Tsuen Yang.


Real-Time Imaging | 2003

A novel method for detecting lips, eyes and faces in real time

Cheng-Chin Chiang; Wen-Kai Tai; Mau-Tsuen Yang; Yi-Ting Huang; Chi-Jaung Huang

This paper presents a real-time face detection algorithm for locating faces in images and videos. This algorithm finds not only the face regions, but also the precise locations of facial components such as the eyes and lips. The algorithm starts from the extraction of skin pixels based upon rules derived from a simple quadratic polynomial model. Interestingly, with a minor modification, this polynomial model is also applicable to the extraction of lips. The benefits of applying these two similar polynomial models are twofold. First, much computation time is saved. Second, both extraction processes can be performed simultaneously in one scan of the image or video frame. The eye components are then extracted after the extraction of skin pixels and lips. Afterwards, the algorithm removes falsely extracted components by verifying them against rules derived from the spatial and geometrical relationships of facial components. Finally, the precise face regions are determined accordingly. According to the experimental results, the proposed algorithm exhibits satisfactory performance in terms of both accuracy and speed for detecting faces with wide variations in size, scale, orientation, color, and expression.
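The skin-pixel stage can be illustrated with a minimal sketch. The quadratic boundaries below are commonly used illustrative values for skin chromaticity in normalized rg space, not the coefficients from this paper, and the lip variant and verification rules are omitted.

```python
import numpy as np

def skin_mask(rgb):
    """Label pixels as skin when their normalized red/green chromaticities
    fall between two quadratic boundaries in r (illustrative model only)."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1) + 1e-6          # avoid division by zero
    r = rgb[..., 0] / s                  # normalized red chromaticity
    g = rgb[..., 1] / s                  # normalized green chromaticity
    upper = -1.376 * r**2 + 1.0743 * r + 0.2    # quadratic upper bound on g
    lower = -0.776 * r**2 + 0.5601 * r + 0.18   # quadratic lower bound on g
    return (g < upper) & (g > lower)
```

Because both the skin and lip models are quadratic polynomials of the same form, they can be evaluated together in a single pass over the frame, which is where the abstract's speed benefit comes from.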


IEEE Transactions on Aerospace and Electronic Systems | 2003

Detection of obstacles in the flight path of an aircraft

Tarak Gandhi; Mau-Tsuen Yang; Rangachar Kasturi; Octavia I. Camps; Lee D. Coraor; Jeffrey W. McCandless

The National Aeronautics and Space Administration (NASA), along with members of the aircraft industry, recently developed technologies for a new supersonic aircraft. One of the technological areas considered for this aircraft is the use of video cameras and image-processing equipment to aid the pilot in detecting other aircraft in the sky. The detection techniques should provide high detection probability for obstacles that can vary from subpixel to a few pixels in size, while maintaining a low false alarm probability in the presence of noise and severe background clutter. Furthermore, the detection algorithms must be able to report such obstacles in a timely fashion, imposing severe constraints on their execution time. Approaches are described here to detect airborne obstacles on collision-course and crossing trajectories in video images captured from an airborne aircraft. In both cases the approaches consist of an image-processing stage to identify possible obstacles followed by a tracking stage to distinguish between true obstacles and image clutter, based on their behavior. For collision-course object detection, the image-processing stage uses a morphological filter to remove large-sized clutter. To remove the remaining small-sized clutter, differences in the behavior of image translation and expansion of the corresponding features are used in the tracking stage. For crossing-object detection, the image-processing stage uses a low-stop filter and image differencing to separate stationary background clutter. The remaining clutter is removed in the tracking stage by assuming that the genuine object has a large signal strength, as well as a significant and consistent motion over a number of frames. The crossing-object detection algorithm was implemented on a pipelined architecture from DataCube and runs in real time. Both algorithms have been successfully tested on flight tests conducted by NASA.
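The morphological clutter-removal step can be sketched as a grey-scale top-hat filter: subtracting the morphological opening of the frame suppresses structures larger than the structuring element while keeping small bright targets. The 5x5 kernel and threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _filt(a, k, fn):
    """Apply a k x k sliding min/max filter with edge padding."""
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    win = sliding_window_view(ap, (k, k))
    return fn(win, axis=(-2, -1))

def tophat(frame, k=5):
    """Grey-scale top-hat: frame minus its morphological opening
    (erosion followed by dilation), leaving only small bright residue."""
    eroded = _filt(frame, k, np.min)
    opened = _filt(eroded, k, np.max)
    return frame - opened
```

Thresholding the top-hat residue yields candidate small targets; anything wide enough to survive the opening (large clutter, uniform background) cancels out in the subtraction.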


IEEE Transactions on Parallel and Distributed Systems | 2003

A pipeline-based approach for scheduling video processing algorithms on NOW

Mau-Tsuen Yang; Rangachar Kasturi; Anand Sivasubramaniam

Network Of Workstations (NOW) platforms put together with off-the-shelf workstations and networking hardware have become a cost-effective, scalable, and flexible platform for video processing applications. Still, one has to manually schedule an algorithm to the available processors of the NOW to make efficient use of the resources. However, this approach is time-consuming and impractical for a video processing system that must perform a variety of different algorithms, with new algorithms being constantly developed. Improved support for program development is absolutely necessary before the full benefits of parallel architectures can be realized for video processing applications. Toward this goal, an automatic compile-time scheduler has been developed to schedule input tasks of video processing applications with precedence constraints onto available processors. The scheduler exploits both spatial (parallelism) and temporal (pipelining) concurrency to make the best use of machine resources. Two important scheduling problems are addressed. First, given a task graph and a desired throughput, a schedule is constructed to achieve the desired throughput with the minimum number of processors. Second, given a task graph and a finite set of available resources, a schedule is constructed such that the throughput is maximized while meeting the resource constraints. Results from simulations show that the scheduler and proposed optimization techniques effectively tackle these problems by maximizing processor utilization. A code generator has been developed to generate parallel programs automatically. The tools developed in this paper make it much easier for a programmer to develop video processing applications on these parallel architectures.
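The first scheduling problem can be sketched with a simple heuristic: in a pipelined schedule the achievable period equals the largest per-processor load, so assigning tasks in topological order with first-fit bin packing of capacity equal to the desired period meets the throughput with few processors. This is an illustrative heuristic, not the paper's actual optimizer.

```python
from graphlib import TopologicalSorter

def schedule(tasks, deps, period):
    """tasks: {name: execution time}; deps: {name: set of predecessors}.
    Returns {processor index: [task names]}, each with load <= period."""
    order = TopologicalSorter(deps).static_order()
    procs, loads = [], []
    for t in order:
        cost = tasks[t]
        if cost > period:
            raise ValueError(f"task {t} alone exceeds the period")
        for i, load in enumerate(loads):           # first-fit placement
            if load + cost <= period:
                procs[i].append(t)
                loads[i] += cost
                break
        else:                                      # no processor fits: open one
            procs.append([t])
            loads.append(cost)
    return {i: p for i, p in enumerate(procs)}
```

With pipelining, precedence only fixes the stage ordering across frames; each processor streams its assigned tasks once per period, so throughput is limited by the busiest processor rather than the critical-path length.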


IEEE Transactions on Image Processing | 2006

Performance characterization of the dynamic programming obstacle detection algorithm

Tarak Gandhi; Mau-Tsuen Yang; Rangachar Kasturi; Octavia I. Camps; Lee D. Coraor; Jeffrey W. McCandless

A computer vision-based system using images from an airborne aircraft can increase flight safety by aiding the pilot to detect obstacles in the flight path so as to avoid mid-air collisions. Such a system fits naturally with the development of an external vision system proposed by NASA for use in high-speed civil transport aircraft with limited cockpit visibility. The detection techniques should provide high detection probability for obstacles that can vary from subpixels to a few pixels in size, while maintaining a low false alarm probability in the presence of noise and severe background clutter. Furthermore, the detection algorithms must be able to report such obstacles in a timely fashion, imposing severe constraints on their execution time. For this purpose, we have implemented a number of algorithms to detect airborne obstacles using image sequences obtained from a camera mounted on an aircraft. This paper describes the methodology used for characterizing the performance of the dynamic programming obstacle detection algorithm and its special cases. The experimental results were obtained using several types of image sequences, with simulated and real backgrounds. The approximate performance of the algorithm is also theoretically derived using principles of statistical analysis in terms of the signal-to-noise ratio (SNR) required for the probabilities of false alarms and misdetections to be lower than prespecified values. The theoretical and experimental performances are compared in terms of the required SNR.
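The flavor of such an SNR analysis can be shown for the simplest special case: a single-pixel target in white Gaussian noise detected by thresholding (the paper's dynamic-programming analysis is more involved). Setting the threshold from the false-alarm rate and then requiring the miss rate gives the needed amplitude SNR via the inverse Gaussian tail function.

```python
from statistics import NormalDist

def required_snr(p_fa, p_miss):
    """Amplitude SNR s/sigma needed so a threshold test on one Gaussian
    sample achieves false-alarm rate p_fa and miss rate p_miss.
    Derivation: p_fa = Q(tau/sigma) fixes tau; p_miss = 1 - Q((tau-s)/sigma)
    then gives s/sigma = Q^-1(p_fa) + Q^-1(p_miss)."""
    q_inv = lambda p: NormalDist().inv_cdf(1.0 - p)  # inverse Q-function
    return q_inv(p_fa) + q_inv(p_miss)
```

Tightening either error probability raises the required SNR, which matches the qualitative trade-off described in the abstract.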


National Aerospace and Electronics Conference | 2000

Real-time obstacle detection system for high speed civil transport supersonic aircraft

Mau-Tsuen Yang; Tarak Gandhi; Rangachar Kasturi; Lee D. Coraor; Octavia I. Camps; Jeffrey W. McCandless

The High Speed Civil Transport (HSCT) supersonic commercial aircraft under development by the National Aeronautics and Space Administration (NASA) and its partners is expected to include an eXternal Visibility System (XVS) to aid the pilots' limited view through their cockpit windows. XVS obtains video images using high-resolution digital cameras mounted on the aircraft and directed outside the aircraft. The images captured by the XVS provide an opportunity for automatic computer analysis in real time to alert pilots of potential hazards in the flight path. The system helps pilots make decisions and avoid mid-air collisions. In this paper, we describe the design, implementation, and evaluation of such a computer vision system. Using this system, real-time image data was recently obtained successfully from flight tests conducted at NASA Langley Research Center. The system successfully detected and tracked translating objects in real time during the flight tests. The system is described in detail so that other researchers can easily replicate the work.


International Parallel and Distributed Processing Symposium | 2001

An automatic scheduler for real-time vision applications

Mau-Tsuen Yang; Rangachar Kasturi; Anand Sivasubramaniam

Many computer vision applications are computationally challenging, especially when they need to meet real-time constraints. A major problem with special-purpose systems is that they require the developers of image-processing applications to be aware of the low-level hardware design, making the task cumbersome. To avoid inflexible and expensive hardware designs, another possible alternative is a network of workstations (NOW) platform put together with off-the-shelf workstations and networking hardware. Still, one has to manually schedule an algorithm to the available processors of the NOW to make efficient use of the resources. However, this approach is time-consuming and impractical for a vision system that must perform a variety of different algorithms, with new algorithms being constantly developed. Improved support for program development is absolutely necessary before the full benefits of parallel architectures can be realized for vision applications. Towards this goal, an automatic compile-time scheduler has been developed to schedule input tasks of vision applications with precedence constraints onto available processors. The scheduler exploits both spatial (parallelism) and temporal (pipelining) concurrency to make the best use of machine resources. Two important scheduling problems are addressed. First, given a task graph and a desired throughput, a schedule is constructed to achieve the desired throughput with the minimum number of processors. Second, given a task graph and a finite set of available resources, a schedule is constructed such that the throughput is maximized while meeting the resource constraints. Results from simulations show that the scheduler and proposed optimization techniques effectively tackle these problems by maximizing the processor utilization. A code generator has been developed to generate parallel programs automatically. The execution profiles of the resulting parallel programs demonstrate the feasibility of the scheduler. The tools developed in this paper make it much easier for a programmer to develop vision applications on these high-performance platforms.


IET Computer Vision | 2013

Traffic flow estimation and vehicle-type classification using vision-based spatial-temporal profile analysis

Mau-Tsuen Yang; Rang-Kai Jhang; Jia-Sheng Hou

Vision-based traffic surveillance plays an important role in traffic management. However, outdoor illumination, cast shadows, and vehicle variations often create problems for video analysis and processing. Thus, the authors propose a real-time cost-effective traffic monitoring system that can reliably perform traffic flow estimation and vehicle classification at the same time. First, the foreground is extracted using a pixel-wise weighting list that models the dynamic background. Shadows are discriminated utilising colour and edge invariants. Second, the foreground on a specified check-line is then collected over time to form a spatial-temporal profile image. Third, the traffic flow is estimated by counting the number of connected components in the profile image. Finally, the vehicle type is classified according to the size of the foreground mask region. In addition, several traffic measures, including traffic velocity, flow, occupancy and density, are estimated based on the analysis of the segmentation. The availability and reliability of these traffic measures provide critical information for public transportation monitoring and intelligent traffic control. Since the proposed method only processes a small area close to the check-line to collect the spatial-temporal profile for analysis, the complete system is much more efficient than existing visual traffic flow estimation methods.
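The second and third steps can be sketched directly: stack the foreground mask on one check-line row over time into a spatial-temporal profile, then count vehicles as 4-connected components in that profile. Shadow discrimination and the vehicle-type rules from the paper are omitted here.

```python
import numpy as np
from collections import deque

def count_vehicles(fg_masks, check_row):
    """fg_masks: iterable of binary HxW foreground masks, one per frame.
    Builds the (time x width) profile on check_row and counts blobs."""
    profile = np.array([m[check_row] for m in fg_masks], dtype=bool)
    seen = np.zeros_like(profile)
    blobs = 0
    for t, x in zip(*np.nonzero(profile)):
        if seen[t, x]:
            continue
        blobs += 1                       # new connected component found
        queue = deque([(t, x)])
        seen[t, x] = True
        while queue:                     # BFS flood fill, 4-connectivity
            i, j = queue.popleft()
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if (0 <= ni < profile.shape[0] and 0 <= nj < profile.shape[1]
                        and profile[ni, nj] and not seen[ni, nj]):
                    seen[ni, nj] = True
                    queue.append((ni, nj))
    return blobs
```

Each vehicle crossing the check-line leaves one contiguous blob in the profile, so the component count is the flow estimate; only one image row per frame is touched, which is the source of the efficiency claim.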


International Conference on Pattern Recognition | 2006

Shadow Detection by Integrating Multiple Features

Kuo-Hua Lo; Mau-Tsuen Yang

Cast shadows of moving foreground objects in a scene often cause problems for many applications such as surveillance, object tracking/recognition, video content analysis, and intelligent transportation systems. In this paper we present an algorithm that exploits color, shading, texture, neighborhood, and temporal-consistency information to detect shadows in a scene efficiently and reliably. The experimental results show that the proposed method can detect umbra as well as penumbra in different kinds of scenarios under various illumination conditions.
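One of the color cues such methods combine can be sketched as follows: a foreground pixel is a shadow candidate when it is darker than the background model but keeps a similar chromaticity. The attenuation and chromaticity thresholds are illustrative assumptions, and the paper fuses this cue with shading, texture, neighborhood, and temporal features.

```python
import numpy as np

def shadow_candidates(frame, background, lo=0.4, hi=0.95, chroma_tol=0.02):
    """Return a mask of pixels that look like cast shadow: attenuated
    brightness (lo < ratio < hi) with near-unchanged chromaticity."""
    f = frame.astype(np.float64)
    b = background.astype(np.float64)
    fs = f.sum(axis=-1) + 1e-6
    bs = b.sum(axis=-1) + 1e-6
    ratio = fs / bs                                   # brightness attenuation
    chroma_diff = np.abs(f / fs[..., None] - b / bs[..., None]).sum(axis=-1)
    return (ratio > lo) & (ratio < hi) & (chroma_diff < chroma_tol)
```

A shadow dims a surface roughly uniformly across color channels, so the chromaticity stays stable while the brightness ratio drops; an occluding object usually changes both.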


Sensors | 2014

Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

Mau-Tsuen Yang; Shen-Yen Huang

There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems in realizing the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as the face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors, and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and the human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
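The track-based majority vote can be sketched simply: each per-frame classification from a modality casts a weighted vote, and a track's identity is the label with the most accumulated weight. The modality weights below are illustrative assumptions, not values from the paper.

```python
from collections import Counter

class Track:
    """Accumulates per-frame identity votes for one tracked person."""

    # Assumed modality weights: faces are more discriminative than
    # body appearance, which beats silhouette shape alone.
    WEIGHTS = {"face": 3.0, "body": 1.5, "silhouette": 1.0}

    def __init__(self):
        self.votes = Counter()

    def observe(self, label, modality):
        """Record one per-frame classification from a given modality."""
        self.votes[label] += self.WEIGHTS.get(modality, 1.0)

    def identity(self):
        """Current majority-vote identity, or None before any observation."""
        return self.votes.most_common(1)[0][0] if self.votes else None
```

Voting over the whole track smooths out single-frame misclassifications, which is why identification and tracking benefit from each other: a stable track gives the classifier many chances to be outvoted when wrong.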


Sensors | 2013

Fall risk assessment and early-warning for toddler behaviors at home

Mau-Tsuen Yang; Min-Wen Chuang

Accidental falls are the major cause of serious injuries in toddlers, with most of these falls happening at home. Instead of providing immediate fall detection based on short-term observations, this paper proposes an early-warning childcare system to monitor fall-prone behaviors of toddlers at home. Using 3D human skeleton tracking and floor plane detection based on depth images captured by a Kinect system, eight fall-prone behavioral modules of toddlers are developed and organized according to four essential criteria: posture, motion, balance, and altitude. The final fall risk assessment is generated by a multi-modal fusion using either weighted-mean thresholding or a support vector machine (SVM) classification. Optimizations are performed to determine the local parameters in each module and the global parameters of the multi-modal fusion. Experimental results show that the proposed system can assess fall risks and trigger alarms with an accuracy rate of 92% at a speed of 20 frames per second.
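The weighted-mean fusion variant can be sketched directly: each behavioral module emits a risk score in [0, 1], and an alarm fires when the weighted mean crosses a threshold. The weights and threshold here are illustrative placeholders, not the optimized values from the paper.

```python
def fall_risk(scores, weights, threshold=0.5):
    """scores, weights: equal-length sequences, one entry per behavioral
    module. Returns (fused risk in [0, 1], alarm flag)."""
    total = sum(w * s for w, s in zip(weights, scores))
    risk = total / sum(weights)          # weighted mean of module scores
    return risk, risk >= threshold
```

The per-module weights and the threshold are exactly the global parameters the abstract says are optimized; the SVM variant replaces this linear rule with a learned decision boundary over the same eight scores.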

Collaboration


Dive into Mau-Tsuen Yang's collaborations.

Top Co-Authors

Wen-Kai Tai (National Dong Hwa University)
Ya-Chun Shih (National Dong Hwa University)
Cheng-Chin Chiang (National Dong Hwa University)
Rangachar Kasturi (University of South Florida)
Lee D. Coraor (Pennsylvania State University)
Tarak Gandhi (University of California)
Kuo-Hua Lo (National Dong Hwa University)
Yong-Yuan Lin (National Dong Hwa University)
Octavia I. Camps (Pennsylvania State University)