
Publication


Featured research published by Miti Ruchanurucks.


IEEE/SICE International Symposium on System Integration | 2011

Kinect-based obstacle detection for manipulator

Panjawee Rakprayoon; Miti Ruchanurucks; Ada Coundoul

This paper presents a method to distinguish between obstacles and the manipulator when they share the same workspace. A Microsoft Kinect is used as the capture device. A Kinect calibration method is explained, and calibration between the Kinect and the manipulator is addressed with an iterative least-squares method. A 3D model of the manipulator is generated using the OpenGL library. Finally, the manipulator surface is removed from the scene by intersecting the manipulator model with its corresponding point cloud.
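The abstract does not spell out the iterative least-squares calibration, so as a hedged illustration the sketch below fits a rigid transform between two 2D point sets with known correspondences in closed form; the 2D simplification and all names are mine, not the paper's formulation.

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rigid transform (rotation theta, translation t)
    mapping src points onto dst points, assuming known correspondences.
    A simplified stand-in for the paper's Kinect-to-manipulator step."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross terms of the centered point sets.
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s
        xd, yd = xd - cx_d, yd - cy_d
        sxx += xs * xd + ys * yd
        sxy += xs * yd - ys * xd
    theta = math.atan2(sxy, sxx)          # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)     # translation after rotation
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```

In practice this closed-form fit would be run iteratively with outlier rejection, which is presumably where the "iterative" in ILS comes from.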


Robotics and Autonomous Systems | 2009

Painting robot with multi-fingered hands and stereo vision

Shunsuke Kudoh; Koichi Ogawara; Miti Ruchanurucks; Katsushi Ikeuchi

In this paper, we describe a painting robot with multi-fingered hands and stereo vision. The goal of this study is for the robot to reproduce the whole procedure involved in human painting. The painting task is divided into three phases: obtaining a 3D model, composing a picture model, and painting by the robot. The system uses various feedback techniques, including computer vision and force sensing. In experiments, an apple and a human silhouette are painted on a canvas using this system.


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2006

Neural Network Based Foreground Segmentation with an Application to Multi-Sensor 3D Modeling

Miti Ruchanurucks; Koichi Ogawara; Katsushi Ikeuchi

This paper presents a technique for foreground/background segmentation using either color images or a combination of color and range images. In the case of images captured from a single 2D camera, a hybrid experience-based foreground segmentation technique is developed using a neural network and the graph-cut paradigm. This gives an advantage over methods based on color distribution or gradient information when the foreground and background color distributions are not well separated or the boundary is not clear. The system segments images more effectively than state-of-the-art graph-cut methods, even when the foreground is very similar to the background. We also show how to use the method for multi-sensor 3D modeling by segmenting the foreground of each viewpoint in order to generate 3D models.
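The graph-cut half of this hybrid can be sketched with a small max-flow/min-cut routine: pixels become nodes, a source and sink stand for foreground and background, and the source side of the minimum cut is the foreground label set. The edge capacities (which the paper learns with a neural network) are assumed given here; this toy graph is illustrative, not the paper's construction.

```python
from collections import defaultdict, deque

def min_cut_partition(edges, source, sink):
    """Edmonds-Karp max-flow on (u, v, capacity) edges; returns the
    source side of the minimum cut, i.e. the 'foreground' node set."""
    cap = defaultdict(lambda: defaultdict(int))
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[u][v] += c
        adj[u].add(v)
        adj[v].add(u)          # residual edge v -> u also needed
    def bfs():
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent
    while True:
        parent = bfs()
        if sink not in parent:
            return set(parent)  # nodes still reachable from the source
        # Reconstruct the augmenting path, push its bottleneck flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
```

A real segmenter would build one node per pixel with unary (source/sink) and pairwise (neighbor) capacities; the min-cut then yields the binary mask.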


Asian Conference on Defence Technology | 2015

Color marker detection with various imaging conditions and occlusion for UAV automatic landing control

Montika Sereewattana; Miti Ruchanurucks; Somying Thainimit; Sakol Kongkaew; Supakorn Siddhichai; Shoichi Hasegawa

Detection of markers for fixed-wing unmanned aerial vehicles plays a crucial role in finding a runway for automatic landing. Unlike rotor-wing UAVs, fixed-wing vehicles cannot land in a confined area: they need a runway that is long and carries many symbols marking the landing or touchdown point. Markers, however, are difficult to detect because of uncontrollable variables: illumination conditions, diverse environments, and object occlusion. The number of symbols on the runway is another challenge: an aircraft under autopilot at a height of, e.g., 100 meters may not capture the markers properly before landing, and thus cannot land suitably. To reduce the complexity of the runway, four circular color markers are used as a simple marker set; the number can be increased to 6, 8, etc. to extend the runway length. The proposed procedure is as follows: after normalizing the RGB colors of the runway images to alleviate illumination error, markers are detected by the circular Hough transform, which can find them even under occlusion. Experimental results show around 72 to 87 percent accuracy across scenarios captured with several exposures, tone gradations, lens flares, motion blur, and uniform noise, as well as object occlusion.
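The first step of the pipeline, RGB normalization, is the standard chromaticity transform: dividing each channel by the pixel's intensity sum cancels a uniform illumination scale. A minimal per-pixel sketch (function name is mine):

```python
def normalize_rgb(r, g, b):
    """Chromaticity normalization: divide each channel by the pixel's
    intensity sum so uniform illumination changes cancel out."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0, 0.0)  # avoid division by zero on black pixels
    return (r / s, g / s, b / s)
```

After this step a marker's chromaticity is stable across exposures, which is what makes the subsequent circular Hough detection tolerant of the lighting variations listed above.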


IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems | 2012

Planar surface area calculation using camera and orientation sensor

Surangrak Sutiworwan; Miti Ruchanurucks; Pongthorn Apiroop; Supakorn Siddhichai; Thitiporn Chanwimaluang; Makoto Sato

This research proposes an area calculation system focused on warping images to a top view. A sensor attached to the camera is used to compensate for the camera's orientation in real time. One constraint is that the calculated area must be a planar surface. In practice, the alignment between sensor and camera is imperfect, and this error makes the calculated area inaccurate; therefore, calibration between camera and sensor is required, using an iterative least-squares method. The extrinsic parameters derived from the calibrated sensor and the pre-computed intrinsic parameters are then used to generate a homography matrix. In the top-view image, the target pixels can be counted directly. Finally, we relate the rotation angle to the real-world size of each pixel and compute the percentage accuracy of the method. The experimental results show the top-view target area generated in real time from a tilted camera using information from the orientation sensor.
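The homography that removes a pure camera rotation is the standard H = K R K⁻¹, with K the intrinsic matrix and R the rotation reported by the orientation sensor. The paper's exact formulation is not given in the abstract, so this is a hedged sketch of that standard construction; the intrinsic values used in testing are placeholders.

```python
def matmul3(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def topview_homography(f, cx, cy, R):
    """Homography K @ R @ K^-1 that compensates a pure camera rotation R,
    warping a tilted view of a plane toward the top view."""
    K = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]
    # Closed-form inverse of the simple pinhole intrinsic matrix.
    K_inv = [[1/f, 0.0, -cx/f], [0.0, 1/f, -cy/f], [0.0, 0.0, 1.0]]
    return matmul3(matmul3(K, R), K_inv)
```

Once the image is warped with this H, counting target pixels and multiplying by the per-pixel ground area gives the planar surface area, as the abstract describes.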


Intelligent Vehicles Symposium | 2014

A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera

Pakapoj Tulsuk; Panu Srestasathiern; Miti Ruchanurucks; Teera Phatrapornnant; Hiroshi Nagahashi

This paper presents a novel method for extrinsic parameter estimation between a single-line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. The proposed calibration method is based on resolving the geometry of the checkerboard visible to both the camera and the LiDAR. The calibration geometry is described by planes, lines, and points. Our novelty is a new geometric hypothesis: the orthogonal distances between the LiDAR points and the line formed by the intersection of the checkerboard and the LiDAR scan plane. To evaluate the proposed method, we compared it with the state-of-the-art method of Zhang and Pless [1]. The experimental results showed that the proposed method yields better results.
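The core quantity in the stated hypothesis, the orthogonal distance from a 3D point to a 3D line, is the classic cross-product formula |（p − a) × d| / |d|. A minimal sketch (variable names are mine; the paper's optimization over these distances is not shown):

```python
import math

def point_line_distance(p, a, d):
    """Orthogonal distance from 3D point p to the line through point a
    with direction d, i.e. the residual a LiDAR point leaves against the
    board/scan-plane intersection line."""
    v = [p[i] - a[i] for i in range(3)]
    # Cross product v x d; its norm is |v| * |d| * sin(angle).
    c = [v[1] * d[2] - v[2] * d[1],
         v[2] * d[0] - v[0] * d[2],
         v[0] * d[1] - v[1] * d[0]]
    return math.sqrt(sum(x * x for x in c)) / math.sqrt(sum(x * x for x in d))
```

Summing squared distances of this kind over all LiDAR points and board poses gives a cost whose minimizer is the sought extrinsic transform.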


Transactions of the Institute of Measurement and Control | 2014

Kinect quality enhancement for triangular mesh reconstruction with applications in burn care

Miti Ruchanurucks; Amornrat Khongma; Panjawee Rakprayoon; Teera Phatrapornnant; Taweetong Koanantakool

This research presents a method to enhance the accuracy of the Kinect's depth data. The enhanced accuracy is especially useful for triangular mesh reconstruction, since performing such a reconstruction directly from Kinect data is erroneous. We show that a spatial filtering technique can reduce these errors considerably. We then apply the technique to human surface reconstruction to assist doctors in a hospital's burn care unit; body surface area is estimated by Heron's formula. Kinect calibration is also documented in this work.
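Heron's formula gives a triangle's area from its three side lengths alone, which suits a mesh where only vertex positions are stored. A small sketch of the surface-area accumulation the abstract describes (function and variable names are mine):

```python
import math

def mesh_area(vertices, triangles):
    """Total surface area of a triangular mesh via Heron's formula:
    area = sqrt(s(s-a)(s-b)(s-c)) with s the semi-perimeter."""
    total = 0.0
    for i, j, k in triangles:
        a = math.dist(vertices[i], vertices[j])
        b = math.dist(vertices[j], vertices[k])
        c = math.dist(vertices[k], vertices[i])
        s = (a + b + c) / 2.0                  # semi-perimeter
        # max(..., 0) guards against tiny negative values from rounding
        # on near-degenerate triangles.
        total += math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return total
```

Summing this over every triangle of the reconstructed human surface yields the body surface area estimate used by the burn care unit.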


Robotics and Biomimetics | 2012

Obstacle modeling for manipulator using iterative least square (ILS) and iterative closest point (ICP) based on Kinect

Wantana Sukmanee; Miti Ruchanurucks; Panjawee Rakprayoon

This paper presents a method to distinguish between a manipulator and its surroundings using a depth sensor, the Kinect. First, Kinect calibration is addressed. Then, the coordinate calibration between the Kinect and the manipulator is solved with an iterative least squares (ILS) algorithm. At this point, the accuracy of the homogeneous transformation obtained from ILS is inadequate for deleting the robot from the scene while keeping only the surrounding surface. We therefore match the manipulator's model against the point cloud with the iterative closest point (ICP) algorithm, which improves the accuracy a great deal. Experiments show that this comprehensive method is practical and robust, and it can be used in dynamic environments as well.
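ICP alternates two steps: associate each model point with its nearest cloud point, then solve a least-squares transform from those pairs, repeating until the transform converges. A minimal sketch of the association step only (brute force; names are mine, and a real implementation would use a k-d tree):

```python
def nearest_correspondences(model, cloud):
    """One ICP data-association step: pair each model point with its
    nearest cloud point by squared Euclidean distance."""
    pairs = []
    for p in model:
        q = min(cloud,
                key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
        pairs.append((p, q))
    return pairs
```

Feeding these pairs to a rigid least-squares fit and iterating is what lets ICP tighten the coarse ILS transform enough to carve the manipulator cleanly out of the point cloud.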


Robotics and Biomimetics | 2009

Offline and online trajectory generation with sequential physical constraints

Miti Ruchanurucks; Shin'ichiro Nakaoka

This paper presents a method to generate humanoid-robot motion from human motion subject to physical limits. The method represents constraints on angle, collision, velocity, and dynamic force as B-spline coefficients. The constraints can be applied for offline optimization and online filtering. In optimization, the objective function is responsible for mimicking the human trainer, while the constraints keep the motion within the capabilities of the humanoid robot. An iterative soft-constraint paradigm enhances the quality of the offline velocity and force constraints. To refine precision in regions of high-frequency motion not adequately modeled by the initial splining, the B-spline can be extended into a hierarchy so that optimization meeting global criteria can be performed locally. For filtering, consideration is given to online B-spline decomposition and how to adapt the constraint functions as filters. The proposed method is shown to outperform existing methods on space-time tasks with physical limits.
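Since the constraints live on B-spline coefficients, evaluating the basis functions is the building block everything else rests on. A hedged sketch of the Cox-de Boor recursion (only basis evaluation; the paper's constraint machinery is not shown):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of degree k at parameter t, over the given knot vector."""
    if k == 0:
        # Degree-0 basis: indicator of the half-open knot span.
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

Because each basis function has local support, a bound imposed on one coefficient only shapes the trajectory over a few knot spans, which is what makes both local hierarchical refinement and online filtering tractable.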


International Conference on Image Processing | 2008

Integrating region growing and classification for segmentation and matting

Miti Ruchanurucks; Koichi Ogawara; Katsushi Ikeuchi

This paper presents a supervised foreground segmentation method that uses local and global feature similarity with an edge constraint. The framework integrates and extends region growing and classification to account for both local and global fitness. It parameterizes the growing constraint using Chebyshev's inequality; the constraint is used to stop segmentation before matting, which relies on both local and global information. The proposed method outperforms many current methods in terms of correctness and minimal user interaction, and does so in reasonable computation time.
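One way a Chebyshev-style growing constraint can work, sketched under assumption (the abstract gives no formula): accept a candidate pixel only if it lies within k standard deviations of the region's running statistics, since Chebyshev's inequality bounds the fraction of genuine in-region pixels beyond that band by 1/k². The choice k = 3 below is illustrative, not taken from the paper.

```python
def within_chebyshev(value, mean, std, k=3.0):
    """Region-growing acceptance test: keep a candidate pixel value if it
    is within k standard deviations of the region mean. By Chebyshev's
    inequality, at most 1/k**2 of true region pixels are rejected,
    regardless of the intensity distribution."""
    if std == 0:
        return value == mean   # degenerate region: exact match only
    return abs(value - mean) <= k * std
```

The distribution-free nature of the bound matters here: region intensity histograms are rarely Gaussian, and Chebyshev's inequality holds for any distribution with finite variance.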

Collaboration


Dive into Miti Ruchanurucks's collaborations.

Top Co-Authors

Teera Phatrapornnant

Thailand National Science and Technology Development Agency


Shoichi Hasegawa

Tokyo Institute of Technology
