Publication


Featured research published by Djoko Purwanto.


Artificial Life and Robotics | 2009

Electric wheelchair control with gaze direction and eye blinking

Djoko Purwanto; Ronny Mardiyanto; Kohei Arai

We propose an electric wheelchair controlled by gaze direction and eye blinking. A camera is set up in front of a wheelchair user to capture image information. The sequential captured image is interpreted to obtain the gaze direction and eye blinking properties. The gaze direction is expressed by the horizontal angle of the gaze, and this is derived from the triangle formed by the centers of the eyes and the nose. The gaze direction and eye blinking are used to provide direction and timing commands, respectively. The direction command relates to the direction of movement of the electric wheelchair, and the timing command relates to the time when the wheelchair should move. The timing command with an eye blinking mechanism is designed to generate ready, backward movement, and stop commands for the electric wheelchair. Furthermore, to move at a certain velocity, the electric wheelchair also receives a velocity command as well as the direction and timing commands. The disturbance observer-based control system is used to control the direction and velocity. For safety purposes, an emergency stop is generated when the electric wheelchair user does not focus their gaze consistently in any direction for a specified time. A number of simulations and experiments were conducted with the electric wheelchair in a laboratory environment.
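
As a rough illustration of the gaze-angle step above, the following minimal Python sketch derives a horizontal angle from hypothetical eye-center and nose-tip image coordinates and maps it to a direction command. The coordinates, thresholds, and mapping are illustrative assumptions, not the paper's exact derivation or its disturbance-observer controller.

```python
import math

def gaze_horizontal_angle(left_eye, right_eye, nose, max_angle_deg=30.0):
    """Estimate a horizontal gaze angle (degrees) from the triangle formed by
    the two eye centers and the nose tip in image coordinates (x, y).
    This is an illustrative approximation, not the paper's exact formula."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = abs(right_eye[0] - left_eye[0]) or 1.0
    # Horizontal offset of the nose from the eye midpoint, normalized by eye span.
    offset = (nose[0] - mid_x) / eye_span
    return max(-max_angle_deg, min(max_angle_deg, offset * max_angle_deg))

def direction_command(angle_deg, dead_zone_deg=5.0):
    """Map the gaze angle to a wheelchair direction command."""
    if abs(angle_deg) < dead_zone_deg:
        return "forward"
    return "right" if angle_deg > 0 else "left"

# Example: nose offset to one side of the eye midpoint -> turn command.
angle = gaze_horizontal_angle(left_eye=(100, 120), right_eye=(160, 120), nose=(145, 160))
print(direction_command(angle))   # -> "right"
```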


International Journal of Advanced Robotic Systems | 2011

A Robust Obstacle Avoidance for Service Robot Using Bayesian Approach

Widodo Budiharto; Djoko Purwanto; Achmad Jazidie

The objective of this paper is to propose a robust obstacle avoidance method for a service robot in an indoor environment. The method obtains information about static obstacles on the landmark using edge detection, while the speed and direction of a walking person, treated as a moving obstacle, are obtained by a single camera using a tracking and recognition system, with distance measured by three ultrasonic sensors. A new geometrical model and maneuvering method for moving obstacle avoidance are introduced and combined with a Bayesian approach for state estimation. The obstacle avoidance problem is formulated using decision theory, with prior and posterior distributions and a loss function, to determine an optimal response based on inaccurate sensor data. Algorithms for the moving obstacle avoidance method are proposed, and experimental results on the service robot are presented. Various experiments show that the proposed method is fast, robust, and successfully implemented on the service robot Srikandi II, which is equipped with a 4-DOF robot arm developed in our laboratory.
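
The decision-theoretic step can be illustrated with a small sketch: a posterior over whether the moving obstacle will cross the robot's path is computed with Bayes' rule, and the maneuver minimizing expected loss is chosen. All probabilities, actions, and loss values below are hypothetical placeholders, not values from the paper.

```python
# Bayesian decision sketch: choose the maneuver minimizing posterior expected
# loss given a noisy estimate of whether the obstacle will cross the path.

def posterior(prior_cross, p_sensor_given_cross, p_sensor_given_clear):
    """Bayes' rule for P(cross | sensor reading)."""
    num = p_sensor_given_cross * prior_cross
    den = num + p_sensor_given_clear * (1.0 - prior_cross)
    return num / den

# Loss of each action under each true state (hypothetical values).
loss = {
    "keep_course": {"cross": 10.0, "clear": 0.0},
    "slow_down":   {"cross": 4.0,  "clear": 1.0},
    "maneuver":    {"cross": 1.0,  "clear": 2.0},
}

def best_action(p_cross):
    expected = {a: l["cross"] * p_cross + l["clear"] * (1.0 - p_cross)
                for a, l in loss.items()}
    return min(expected, key=expected.get), expected

p = posterior(prior_cross=0.3, p_sensor_given_cross=0.8, p_sensor_given_clear=0.2)
action, scores = best_action(p)
print(p, action)   # posterior ~0.63 -> "maneuver"
```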


International Conference on Computer Engineering and Applications | 2010

Indoor Navigation Using Adaptive Neuro Fuzzy Controller for Servant Robot

Widodo Budiharto; Achmad Jazidie; Djoko Purwanto

We present our ongoing work on the development of an Adaptive Neuro-Fuzzy Inference System (ANFIS) controller for a humanoid servant robot designed for vision-based navigation. In this method, a black line on the landmark is used as the track for the robot's navigation, with a webcam as the line sensor. We propose an architecture for the ANFIS controller of the servant robot based on a mapping method, with 3 inputs and 3 outputs applied to the controller. Only 45 training samples are used for navigation, and the best error is reached from epoch 62 onward. Each of the components is described in the paper and experimental results are presented. The humanoid servant robot is also equipped with a 4-DOF robot arm, face recognition, and a text-to-speech processor. In order to demonstrate and measure the usefulness of such technologies for human-robot interaction, all components have been integrated into a servant robot named Srikandi I. Based on the experiments, the ANFIS controller was successfully implemented as the controller for the robot's navigation.
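
As a loose illustration of fuzzy-inference-based line following (a hand-written Sugeno-style rule base, not the trained ANFIS controller described above), the sketch below maps the line offset seen by the webcam to a steering command. The membership functions and rule outputs are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(line_offset):
    """line_offset in [-1, 1]: position of the black line in the webcam image
    relative to the image center. Returns a steering command in [-1, 1]."""
    rules = [
        (tri(line_offset, -1.5, -1.0, 0.0), -1.0),  # line far left  -> steer left
        (tri(line_offset, -1.0,  0.0, 1.0),  0.0),  # line centered  -> go straight
        (tri(line_offset,  0.0,  1.0, 1.5),  1.0),  # line far right -> steer right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den   # weighted-average defuzzification

print(fuzzy_steering(0.4))   # -> 0.4, a moderate right turn
```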


International Symposium on Innovations in Intelligent Systems and Applications | 2012

Embedded Kalman Filter for Inertial Measurement Unit (IMU) on the ATMega8535

Hany Ferdinando; Handry Khoswanto; Djoko Purwanto

The Kalman Filter is very useful for prediction and estimation. In this paper, the Kalman Filter is implemented for an Inertial Measurement Unit (IMU) on the ATMega8535. The sensors used in this system are an MMA7260QT accelerometer and a GS-12 gyroscope. The system starts with an arbitrary sampling time, which is then evaluated to see whether a smaller value can be used. As the Kalman Filter operation requires matrix calculations, the formulas are converted into several ordinary equations. The parameter investigated in this paper is the measurement covariance matrix, which determines how the Kalman Filter responds to noise. A larger value makes the Kalman Filter less sensitive to noise, but the estimate becomes too smooth and no longer gives the real angle; a smaller value makes the Kalman Filter more sensitive to noise, so the estimated angle still suffers from noise and the filter is of little use. This paper recommends values between 0.0001 and 0.001 for the measurement noise covariance parameter. It also recommends a pipelined configuration if the control algorithm needs more room within a sampling period.
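
A minimal one-state sketch of the idea, assuming a setup in which the gyroscope rate is integrated in the predict step and an accelerometer-derived angle corrects it in the update step. The sensor values, process noise, and sampling time below are hypothetical, but the role of the measurement covariance r matches the discussion above: a larger r gives a smoother, less responsive estimate.

```python
import numpy as np

def kalman_angle(accel_angles, gyro_rates, dt=0.01, q=1e-5, r=0.001):
    """accel_angles: noisy angle measurements (rad); gyro_rates: angular rates
    (rad/s); q: process noise covariance; r: measurement noise covariance."""
    angle, p = 0.0, 1.0          # state estimate and its error covariance
    estimates = []
    for z, rate in zip(accel_angles, gyro_rates):
        # Predict: integrate the gyro rate.
        angle += rate * dt
        p += q
        # Update: correct with the accelerometer angle.
        k = p / (p + r)          # Kalman gain; larger r -> smaller k -> smoother
        angle += k * (z - angle)
        p *= (1.0 - k)
        estimates.append(angle)
    return np.array(estimates)

# Noisy measurements of a constant 0.2 rad tilt, zero rotation rate.
rng = np.random.default_rng(0)
z = 0.2 + 0.05 * rng.standard_normal(200)
est = kalman_angle(z, np.zeros(200))
print(round(float(est[-1]), 3))   # close to 0.2
```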


International Conference on Instrumentation, Communications, Information Technology and Biomedical Engineering | 2009

Relation between eye movement and fatigue: Classification of morning and afternoon measurement based on Fuzzy rule

Zainal Arief; Djoko Purwanto; Dadet Pramadihanto; Tetsuo Sato; Kotaro Minato

This paper describes a simple method for classifying morning and afternoon measurements of eye movement based on a Fuzzy rule, as a first step toward finding the relation between eye movement and fatigue. The eye movement is captured by a camera and processed by computer, using the left eye pupil center coordinates as the eye movement data. These coordinates are processed to extract features, namely saccadic latency, velocity, saccadic duration, and deviation. The extracted parameters become the input of a Fuzzy Identification System that classifies whether the measurement was conducted in the morning or in the afternoon. Twenty-six visually normal students participated as subjects; their eye movements were measured in the morning and again in the afternoon after 9 hours of class. We also investigate whether the parameters can be used to distinguish the two conditions. The classification accuracy, taken as the system performance, reaches 86.54%. However, only the velocity and duration parameters show a significant difference (p<0.05) between the two measurement times. This result reflects fatigue in the ocular muscles, by which these two parameters are directly affected.
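
A toy sketch of the fuzzy classification idea, using only saccadic velocity and duration (the two parameters reported as significant). The membership breakpoints and decision threshold are invented for illustration and are not the paper's rule base.

```python
def mf_low(x, low, high):
    """Degree to which x is 'low' on a [low, high] scale (1 -> 0 linearly)."""
    if x <= low:
        return 1.0
    if x >= high:
        return 0.0
    return (high - x) / (high - low)

def classify(velocity, duration):
    # Fatigue (afternoon) is associated here with lower saccadic velocity
    # and longer saccadic duration.
    slow   = mf_low(velocity, 300.0, 500.0)        # deg/s
    longer = 1.0 - mf_low(duration, 0.03, 0.06)    # s
    afternoon_degree = min(slow, longer)           # fuzzy AND
    return ("afternoon" if afternoon_degree >= 0.5 else "morning",
            afternoon_degree)

print(classify(velocity=350.0, duration=0.055))   # -> ('afternoon', 0.75)
```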


International Symposium on Innovations in Intelligent Systems and Applications | 2011

Design and evaluation of two-wheeled balancing robot chassis: Case study for Lego bricks

Hany Ferdinando; Handry Khoswanto; Djoko Purwanto; Stefanus Tjokro

A two-wheeled balancing robot is an interesting nonlinear plant: the goal is to control the robot so that it can move on only two wheels. This paper elaborates the design and evaluation of its chassis. The chassis is constructed with Lego Mindstorm NXT bricks and controlled by an AVR ATMega16 microcontroller, and an MX2125 accelerometer from Parallax is used to evaluate the chassis. Two chassis designs, models A and B, are developed. The experiments show that the chassis design must address mechanical stability, and chassis model A is therefore discarded. A PD controller is recommended for chassis model B. The chassis design also considers the flexibility to mount a load; mounting the load below the axis helps stabilize the chassis, which makes the controller design easier. The load position opens a further research topic. Using Lego Mindstorm proves very helpful in the design and evaluation of a mechatronics setup.
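
A minimal sketch of a PD controller stabilizing a toy inverted-pendulum model of such a chassis. The gains, dynamics, and physical constants are hypothetical and only illustrate why a PD law suits this kind of plant; this is not the paper's controller.

```python
import math

def simulate_pd(kp=80.0, kd=10.0, dt=0.01, steps=500, theta0=0.15):
    """Toy tilt dynamics: theta_ddot = (g/l) * sin(theta) - u, with a PD law."""
    g, l = 9.81, 0.2
    theta, omega = theta0, 0.0          # tilt angle (rad) and rate (rad/s)
    for _ in range(steps):
        u = kp * theta + kd * omega     # PD control effort opposing the tilt
        alpha = (g / l) * math.sin(theta) - u
        omega += alpha * dt
        theta += omega * dt
    return theta

print(abs(simulate_pd()) < 0.01)   # tilt driven close to upright -> True
```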


Archive | 2011

An Improved Face Recognition System for Service Robot Using Stereo Vision

Widodo Budiharto; Ari Santoso; Djoko Purwanto; Achmad Jazidie

A service robot is an emerging technology in robot vision, and demand from households and industry will increase significantly in the future. A general vision-based service robot should recognize people and obstacles in a dynamic environment and accomplish a specific task given by a user. The ability to recognize faces and interact naturally with a user are important factors for developing service robots. Since tracking a human face and face recognition are essential functions for a service robot, many researchers have developed face-tracking mechanisms for robots (Yang M., 2002) and face recognition systems for service robots (Budiharto, W., 2010). The objective of this chapter is to propose an improved face recognition system using PCA (Principal Component Analysis), implemented on a service robot in a dynamic environment using stereo vision. Variation in illumination is one of the main challenges in face recognition; it has been shown that differences caused by illumination variations are more significant than differences between individuals (Adini et al., 1997). Recognizing faces reliably across changes in pose and illumination with PCA has proved to be a much harder problem because the eigenfaces method compares pixel intensities. To address this, we augment the training images by generating random values that vary the intensity of the face images. We propose an architecture of the service robot and a database for the face recognition system. A navigation system for this service robot and depth estimation using stereo vision for measuring the distance of moving obstacles are introduced. The obstacle avoidance problem is formulated using decision theory, with prior and posterior distributions and a loss function, to determine an optimal response based on inaccurate sensor data. Based on experiments, using 3 images per person with 3 poses (frontal, left, and right) and training images with varying illumination improves the recognition success rate. The proposed method is fast and was successfully implemented on the service robot Srikandi III in our laboratory. This chapter is organized as follows. The improved method and a framework for the face recognition system are introduced in Section 2. In Section 3, the face detection system and depth estimation for measuring the distance of moving obstacles are introduced. Section 4 presents a detailed implementation of the improved face recognition for the service robot using stereo vision. Finally, discussion and future work are given in Section 5.
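
A compact eigenfaces sketch showing the illumination-augmentation idea: training faces are duplicated with randomly scaled intensity before PCA, and recognition is nearest-neighbor in the eigenface subspace. Random arrays stand in for real face images, and the dimensions, intensity factors, and component count are assumptions, not the chapter's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.random((2, 32 * 32))                       # one base pattern per person
faces = np.vstack([base[0] + 0.05 * rng.standard_normal((3, 32 * 32)),
                   base[1] + 0.05 * rng.standard_normal((3, 32 * 32))])
labels = np.array([0, 0, 0, 1, 1, 1])                 # 2 persons x 3 poses

# Illumination augmentation: add copies with random intensity factors.
factors = rng.uniform(0.6, 1.4, size=len(faces))
train = np.vstack([faces, faces * factors[:, None]])
train_labels = np.concatenate([labels, labels])

# Eigenfaces via SVD on the mean-centered training set.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
eigenfaces = vt[:5]                                    # keep 5 components
train_proj = (train - mean) @ eigenfaces.T

def recognize(image):
    """Project onto the eigenfaces and return the nearest training label."""
    w = (image - mean) @ eigenfaces.T
    return train_labels[np.argmin(np.linalg.norm(train_proj - w, axis=1))]

# A dimmed version of a known face should still match its identity.
print(recognize(faces[4] * 0.8))                       # expected: 1
```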


International Seminar on Intelligent Technology and Its Applications | 2017

Visual ball tracking and prediction with unique segmented area on soccer robot

Setiawardhana; Rudy Dikairono; Tri Arief Sardjono; Djoko Purwanto

Object detection and tracking systems have been developed by several researchers. This paper presents an algorithm for visual ball detection and ball position estimation for a goalie (goalkeeper) robot. The ball is captured by a camera with a fish-eye lens and processed for detection and tracking; images from the fish-eye camera are curved. The images are converted to the Hue-Saturation-Value (HSV) color space and thresholded. The system predicts the goal area and the ball position with a multilayer backpropagation neural network (BPNN). The BPNN inputs are the x and y coordinates of the ball, and the outputs are the goal area prediction and the ball area prediction. The training data are unique segmented areas. Based on the change in the previous ball distance, the system predicts the direction of the next ball position. The achieved accuracy (unique 3×3 kernel, MSE < 0.001, 30 data samples) is 76.67% for ball position prediction and 100% for goal area prediction.
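
An illustrative sketch of the detection step (HSV thresholding and centroid extraction with OpenCV) plus a constant-velocity extrapolation that only stands in for the BPNN prediction described above. The HSV range and the synthetic frames are assumptions.

```python
import cv2
import numpy as np

def detect_ball(frame_bgr, lower=(5, 100, 100), upper=(25, 255, 255)):
    """Threshold an orange ball in HSV and return its centroid (x, y), or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def predict_next(p_prev, p_curr):
    """Constant-velocity extrapolation of the next ball position."""
    return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

# Synthetic frames: an orange disk moving to the right.
frames = []
for x in (100, 120):
    img = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.circle(img, (x, 120), 10, (0, 128, 255), -1)    # BGR orange disk
    frames.append(img)

p0, p1 = detect_ball(frames[0]), detect_ball(frames[1])
print(predict_next(p0, p1))                              # close to (140, 120)
```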


International Seminar on Intelligent Technology and Its Applications | 2016

Mobile robot motion planning by point to point based on modified ant colony optimization and Voronoi diagram

Nukman Habib; Djoko Purwanto; Adi Soeprijanto

The basic purpose of mobile robot motion planning (MRMP) is to find the shortest safe path from a start to a goal position in an environment without colliding with obstacles. In this paper, we propose a method that combines a Voronoi diagram (VD) with a modified Ant Colony Optimization (M-ACO) algorithm for MRMP. The Voronoi diagram generates edges and vertices in the obstacle-filled space, and M-ACO then chooses among the nodes (the vertices generated by the VD) to form the shortest safe path using point-to-point (PTP) motion planning. The results indicate that the proposed approach can plan a safe shortest path.
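
The ant-based node selection can be sketched on a tiny hand-made graph whose nodes stand in for Voronoi vertices. The sketch uses the standard ACO transition and pheromone-update rules, not the paper's specific modification (M-ACO) or its Voronoi construction, and all edge lengths are hypothetical.

```python
import random

graph = {                    # node -> {neighbor: edge length}
    "start": {"a": 3.0, "b": 2.0},
    "a": {"goal": 4.0},
    "b": {"goal": 2.0},
    "goal": {},
}
pheromone = {(u, v): 1.0 for u, nbrs in graph.items() for v in nbrs}

def choose_next(node, alpha=1.0, beta=2.0):
    """Pick the next node with probability ~ pheromone^alpha * (1/length)^beta."""
    nbrs = graph[node]
    weights = [pheromone[(node, v)] ** alpha * (1.0 / d) ** beta
               for v, d in nbrs.items()]
    return random.choices(list(nbrs), weights=weights)[0]

def run_ant():
    path, node = ["start"], "start"
    while node != "goal":
        node = choose_next(node)
        path.append(node)
    return path

# Deposit pheromone inversely proportional to path length over a few ants.
for _ in range(50):
    path = run_ant()
    length = sum(graph[u][v] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        pheromone[(u, v)] += 1.0 / length

print(run_ant())             # usually ['start', 'b', 'goal'], the shorter route
```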


International Conference on Electrical Engineering and Informatics | 2015

Indonesian natural voice command for robotic applications

Karisma Trinanda Putra; Djoko Purwanto; Ronny Mardiyanto

Human-machine interaction has been growing with the development of artificial intelligence technology, leading to more natural interaction. In daily interaction, humans use speech more than other modalities such as gestures and eye contact. Speech is the vocalized form of human communication and is closely related to the language system. Meaning, ambiguity, and language that does not follow syntactic rules make command translation more complex; to understand the meaning of a voice command, it is necessary to know the semantic and syntactic structure of the sentence. This research develops an artificial intelligence technology that can understand Indonesian voice commands for robotic applications. The purpose is to translate a voice command into a robot action, making human-machine interaction more natural. The voice command is represented using bark-frequency cepstral coefficients, which are identified as words using neural networks. The words in a complete sentence are then processed with natural language processing so that the meaning of the given command can be determined and the appropriate action executed. Speech recognition experiments with 28 sets of speech signals obtain 82% accuracy, while the natural language processing experiments obtain 93% accuracy with 50 sets of learning data.
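
A small sketch of the final language-processing stage only: mapping an already-recognized Indonesian sentence to a robot action through a keyword grammar. The vocabulary and action names are hypothetical, and the speech front end (cepstral features and the neural network word recognizer) is omitted.

```python
VERBS = {"maju": "move_forward", "mundur": "move_backward",
         "belok": "turn", "berhenti": "stop"}
MODIFIERS = {"kiri": "left", "kanan": "right"}

def parse_command(sentence):
    """Return (action, modifier) for a recognized sentence, or None."""
    words = sentence.lower().split()
    action = next((VERBS[w] for w in words if w in VERBS), None)
    modifier = next((MODIFIERS[w] for w in words if w in MODIFIERS), None)
    if action is None:
        return None
    return (action, modifier)

print(parse_command("tolong belok ke kanan"))   # -> ('turn', 'right')
print(parse_command("robot maju"))              # -> ('move_forward', None)
```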

Collaboration


An overview of Djoko Purwanto's collaborations.

Top Co-Authors

Muhammad Rivai
Sepuluh Nopember Institute of Technology

Achmad Jazidie
Sepuluh Nopember Institute of Technology

Ronny Mardiyanto
Sepuluh Nopember Institute of Technology

Ari Santoso
Sepuluh Nopember Institute of Technology

Tri Arief Sardjono
Sepuluh Nopember Institute of Technology

Handry Khoswanto
Petra Christian University

Hany Ferdinando
Petra Christian University

Rudy Dikairono
Sepuluh Nopember Institute of Technology