Network


Latest external collaborations at the country level.

Hotspot


Research topics where Wei-Chiu Ma is active.

Publication


Featured research published by Wei-Chiu Ma.


Computer Vision and Pattern Recognition | 2015

How do we use our hands? Discovering a diverse set of common grasps

De-An Huang; Minghuang Ma; Wei-Chiu Ma; Kris M. Kitani

Our aim is to show how state-of-the-art computer vision techniques can be used to advance prehensile analysis (i.e., understanding the functionality of human hands). Prehensile analysis is a broad field of multi-disciplinary interest, where researchers painstakingly manually analyze hours of hand-object interaction videos to understand the mechanics of hand manipulation. In this work, we present promising empirical results indicating that wearable cameras and unsupervised clustering techniques can be used to automatically discover common modes of human hand use. In particular, we use a first-person point-of-view camera to record common manipulation tasks and leverage its strengths for reliably observing human hand use. To learn a diverse set of hand-object interactions, we propose a fast online clustering algorithm based on the Determinantal Point Process (DPP). Furthermore, we develop a hierarchical extension to the DPP clustering algorithm and show that it can be used to discover appearance-based grasp taxonomies. Using a purely data-driven approach, our proposed algorithm is able to obtain hand grasp taxonomies that roughly correspond to the classic Cutkosky grasp taxonomy. We validate our approach on over 10 hours of first-person point-of-view videos in both choreographed and real-life scenarios.
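
To make the DPP-based selection idea concrete, here is a rough sketch (not the authors' code): greedy MAP inference under a Determinantal Point Process, applied to placeholder hand-patch descriptors under an assumed RBF similarity kernel, so that the selected items are mutually diverse.

```python
# Illustrative sketch (assumptions mine, not the paper's implementation):
# greedy MAP selection under a Determinantal Point Process to pick a diverse
# subset of hand-patch feature vectors.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Similarity kernel L over feature vectors (rows of X)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def greedy_dpp(L, k):
    """Greedily pick k indices maximizing det(L[S, S]) -- a standard
    approximation to DPP MAP inference that favors diverse items."""
    selected = []
    candidates = list(range(L.shape[0]))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for c in candidates:
            S = selected + [c]
            gain = np.linalg.slogdet(L[np.ix_(S, S)])[1]  # log det for stability
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        candidates.remove(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 64))         # stand-in for hand-patch descriptors
    reps = greedy_dpp(rbf_kernel(feats, gamma=0.05), k=10)
    print("indices of diverse exemplar frames:", reps)
```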


International Conference on Image Processing | 2015

Recognizing hand-object interactions in wearable camera videos

Tatsuya Ishihara; Kris M. Kitani; Wei-Chiu Ma; Hironobu Takagi; Chieko Asakawa

Wearable computing technologies are advancing rapidly and enabling users to easily record daily activities for applications such as life-logging or health monitoring. Recognizing hand and object interactions in these videos will help broaden application domains, but recognizing such interactions automatically remains a difficult task. Activity recognition from the first-person point-of-view is difficult because the video includes constant motion, cluttered backgrounds, and sudden changes of scenery. Recognizing hand-related activities is particularly challenging due to the many temporal and spatial variations induced by hand interactions. We present a novel approach to recognize hand-object interactions by extracting both local motion features representing the subtle movements of the hands and global hand shape features to capture grasp types. We validate our approach on multiple egocentric action datasets and show that state-of-the-art performance can be achieved by considering both local motion and global appearance information.
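
As a loose illustration of the feature-fusion idea (assumptions mine: random placeholder descriptors and a linear SVM rather than the paper's exact pipeline), the sketch below concatenates local motion descriptors with global hand-shape descriptors and trains one classifier over the fused representation.

```python
# Minimal sketch, not the paper's pipeline: fuse local motion descriptors with
# global hand-shape descriptors and train a linear classifier per activity.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fuse(motion_feats, shape_feats):
    """Concatenate per-clip motion and shape descriptors into one vector."""
    return np.concatenate([motion_feats, shape_feats], axis=1)

# Placeholder features: rows = video clips. In the paper these would come from
# hand-region motion (optical flow) and hand-shape appearance, respectively.
rng = np.random.default_rng(0)
X = fuse(rng.normal(size=(300, 128)), rng.normal(size=(300, 64)))
y = rng.integers(0, 5, size=300)            # 5 hypothetical interaction classes

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```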


International Conference on Robotics and Automation | 2017

Find your way by observing the sun and other semantic cues

Wei-Chiu Ma; Shenlong Wang; Marcus A. Brubaker; Sanja Fidler; Raquel Urtasun

In this paper we present a robust, efficient and affordable approach to self-localization which requires neither GPS nor knowledge about the appearance of the world. Towards this goal, we utilize freely available cartographic maps and derive a probabilistic model that exploits semantic cues in the form of sun direction, presence of an intersection, road type, speed limit and ego-car trajectory to produce very reliable localization results. Our experimental evaluation shows that our approach can localize much faster (in terms of driving time) with less computation and more robustly than competing approaches, which ignore semantic information.
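
A minimal sketch of how such semantic cues could be fused probabilistically, assuming a toy map of candidate road segments and independent cue likelihoods; the actual model in the paper is richer than this illustration.

```python
# Illustrative sketch (assumed, not the authors' model): a histogram filter over
# candidate road segments that fuses independent semantic-cue likelihoods.
import numpy as np

# Hypothetical map: each candidate segment has a heading (rad), a road type,
# and a flag for whether it ends at an intersection.
segments = [
    {"heading": 0.1, "type": "residential", "intersection": True},
    {"heading": 1.6, "type": "highway",     "intersection": False},
    {"heading": 0.2, "type": "residential", "intersection": True},
]

def sun_likelihood(seg_heading, observed_sun_angle, sigma=0.3):
    """How consistent the observed sun direction is with driving along this
    segment, under a simple Gaussian noise model."""
    diff = np.angle(np.exp(1j * (observed_sun_angle - seg_heading)))  # wrap to [-pi, pi]
    return np.exp(-0.5 * (diff / sigma) ** 2)

def cue_likelihood(seg, obs):
    l = sun_likelihood(seg["heading"], obs["sun_angle"])
    l *= 0.9 if seg["type"] == obs["road_type"] else 0.1
    l *= 0.9 if seg["intersection"] == obs["saw_intersection"] else 0.1
    return l

belief = np.full(len(segments), 1.0 / len(segments))   # uniform prior
observation = {"sun_angle": 0.15, "road_type": "residential", "saw_intersection": True}

belief = belief * np.array([cue_likelihood(s, observation) for s in segments])
belief /= belief.sum()                                  # posterior over segments
print("posterior:", belief)
```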


Computer Vision and Pattern Recognition | 2017

Forecasting Interactive Dynamics of Pedestrians with Fictitious Play

Wei-Chiu Ma; De-An Huang; Namhoon Lee; Kris M. Kitani

We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction using game theory and deep learning-based visual analysis to estimate person-specific behavior parameters. We focus on predictive models since they are important for developing interactive autonomous systems (e.g., autonomous cars, home robots, smart homes) that can understand different human behavior and pre-emptively respond to future human actions. Building predictive models for multi-pedestrian interactions, however, is very challenging for two reasons: (1) the dynamics of interaction are complex interdependent processes, where the decision of one person can affect others, and (2) dynamics are variable, where each person may behave differently (e.g., an older person may walk slowly while a younger person may walk faster). To address these challenges, we utilize concepts from game theory to model the intertwined decision making process of multiple pedestrians and use visual classifiers to learn a mapping from pedestrian appearance to behavior parameters. We evaluate our proposed model on several public multiple pedestrian interaction video datasets. Results show that our strategic planning model predicts and explains human interactions 25% better when compared to a state-of-the-art activity forecasting method.
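
The fictitious-play idea behind the coupling can be sketched with a toy two-pedestrian matrix game (payoffs below are invented purely for illustration): each agent repeatedly best-responds to the empirical frequency of the other's past choices.

```python
# Hedged sketch of fictitious play, not the paper's forecasting model:
# two hypothetical pedestrians pick between two paths; colliding (same path)
# is penalized, and each best-responds to the other's empirical history.
import numpy as np

# Rows = own path choice, cols = other's path choice; entries = payoff.
payoff_a = np.array([[-2.0,  1.0],
                     [ 1.0, -2.0]])
payoff_b = payoff_a.T

counts_a = np.ones(2)   # pseudo-counts of past actions (uniform start)
counts_b = np.ones(2)

for _ in range(200):
    emp_a = counts_a / counts_a.sum()       # empirical strategy of pedestrian A
    emp_b = counts_b / counts_b.sum()
    a = int(np.argmax(payoff_a @ emp_b))    # A best-responds to B's history
    b = int(np.argmax(payoff_b @ emp_a))    # B best-responds to A's history
    counts_a[a] += 1
    counts_b[b] += 1

print("A's empirical strategy:", counts_a / counts_a.sum())
print("B's empirical strategy:", counts_b / counts_b.sum())
```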


Congress on Evolutionary Computation | 2014

Novel traffic signal timing adjustment strategy based on Genetic Algorithm

Hsiao-Yu Tung; Wei-Chiu Ma; Tian-Li Yu

The traffic signal timing optimization problem aims at alleviating traffic congestion and shortening the average traffic time. However, most existing research considers only the information of one or a few intersections at a time. Such local optimization methods may experience a decrease in performance when facing large-scale traffic networks. In this paper, we propose a cellular automaton traffic simulation system and conduct tests on two different optimization schemes. We use a Genetic Algorithm (GA) for global optimization, and Expectation Maximization (EM) as well as car flow for local optimization. Empirical results show that the GA method outperforms the EM method. We then use linear regression to learn from the global optimal solution obtained by GA and propose a new adjustment strategy that outperforms recent optimization methods.
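
A minimal GA sketch under assumptions of my own (a stand-in fitness function replaces the cellular-automaton simulator): chromosomes encode one green-time split per intersection and evolve by truncation selection, one-point crossover, and Gaussian mutation.

```python
# Minimal GA sketch, not the paper's implementation: evolve green-time splits
# for a row of intersections against a placeholder travel-time objective.
import random

N_INTERSECTIONS = 8
POP_SIZE, GENERATIONS, MUT_RATE = 30, 50, 0.1

def simulate_average_travel_time(timings):
    """Placeholder for the cellular-automaton simulator described in the paper:
    here we simply penalize deviation from a fictitious ideal split."""
    ideal = 0.6
    return sum((t - ideal) ** 2 for t in timings)

def fitness(ind):
    return -simulate_average_travel_time(ind)

def crossover(a, b):
    cut = random.randrange(1, N_INTERSECTIONS)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    return [min(0.9, max(0.1, t + random.gauss(0, 0.05))) if random.random() < MUT_RATE else t
            for t in ind]

population = [[random.uniform(0.1, 0.9) for _ in range(N_INTERSECTIONS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]             # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best green-time splits:", [round(t, 2) for t in best])
```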


Group and Crowd Behavior for Computer Vision | 2017

Activity Forecasting: An Invitation to Predictive Perception

Kris M. Kitani; De-An Huang; Wei-Chiu Ma

We make a case for a decision-theoretic approach to human activity forecasting, which provides a principled framework for modeling the consequences of taking certain actions and the impact they can have on the future. We give an introductory exposition of the concept of Maximum Entropy Inverse Optimal Control in the context of visual future prediction. The presented examples show that such methods are able to generate more informed predictions about future human activity.
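
To illustrate the Maximum Entropy Inverse Optimal Control machinery on a toy grid world (rewards below are hand-set, whereas in activity forecasting they would be learned from visual features), this sketch runs soft value iteration and reads off the stochastic MaxEnt policy used to forecast future paths.

```python
# Sketch of the soft value iteration at the core of MaxEnt Inverse Optimal
# Control, on a toy grid world with made-up rewards (illustration only).
import numpy as np

H, W = 6, 6
goal = (5, 5)
reward = np.full((H, W), -1.0)      # uniform step cost
reward[goal] = 0.0
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]

V = np.full((H, W), -1e3)           # soft value function, initialized very low
for _ in range(100):
    Q = np.full((len(actions), H, W), -1e3)
    for a, (dy, dx) in enumerate(actions):
        for y in range(H):
            for x in range(W):
                ny = min(max(y + dy, 0), H - 1)      # clamp to the grid
                nx = min(max(x + dx, 0), W - 1)
                Q[a, y, x] = reward[y, x] + V[ny, nx]
    V_new = np.logaddexp.reduce(Q, axis=0)           # soft-max over actions
    V_new[goal] = 0.0                                # absorbing goal state
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

policy = np.exp(Q - V)              # stochastic MaxEnt policy pi(a | state)
print("action probabilities at start cell (0, 0):", policy[:, 0, 0])
```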


arXiv: Computer Vision and Pattern Recognition | 2016

A Game-Theoretic Approach to Multi-Pedestrian Activity Forecasting

Wei-Chiu Ma; De-An Huang; Namhoon Lee; Kris M. Kitani


European Conference on Computer Vision | 2018

Single Image Intrinsic Decomposition without a Single Intrinsic Image

Wei-Chiu Ma; Hang Chu; Bolei Zhou; Raquel Urtasun; Antonio Torralba


Computer Vision and Pattern Recognition | 2018

Deep Parametric Continuous Convolutional Neural Networks

Shenlong Wang; Simon Suo; Wei-Chiu Ma; Andrei Pokrovsky; Raquel Urtasun


Computer Vision and Pattern Recognition | 2018

Hierarchical Recurrent Attention Networks for Structured Online Maps

Namdar Homayounfar; Wei-Chiu Ma; Shrinidhi Kowshika Lakshmikanth; Raquel Urtasun

Collaboration


Dive into Wei-Chiu Ma's collaborations.

Top Co-Authors

Kris M. Kitani (Carnegie Mellon University)

Hang Chu (University of Toronto)

Chi-Hsien Yen (National Taiwan University)

Hsiao-Yu Tung (National Taiwan University)

Tian-Li Yu (National Taiwan University)