Ali Boyali
National Institute of Advanced Industrial Science and Technology
Publication
Featured research published by Ali Boyali.
Systems, Man and Cybernetics | 2010
Ali Boyali; Levent Güvenç
The use of the neuro-dynamic programming method for real-time control of a parallel hybrid electric vehicle is addressed in this study. The validated model of a research prototype parallel hybrid electric light commercial vehicle, FOHEV I, is used in the numerical parts of this paper. A diesel engine and an electric motor power the front and rear axles, respectively. Due to the computational complexity and resulting burden of computing the optimum power distribution in the hybrid electric powertrain, real-time computation using dynamic programming is not feasible. Sub-optimal optimization techniques are available for pre-defined speed profiles; however, the vehicle speed profile depends on the driver input and on vehicle and road conditions, and is not known a priori. A neuro-dynamic programming method is proposed here to solve this problem. The results are compared with, and found to be quite close to, the optimal ones computed using dynamic programming.
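The dynamic-programming baseline referred to in this abstract can be illustrated with a small sketch. The snippet below is not the FOHEV I model: it assumes a toy fuel-consumption curve, a made-up power-demand profile and coarse SOC/control grids, and runs a backward value iteration over a known speed profile to find the minimum-fuel power split, i.e. the offline computation that the proposed neuro-dynamic programming method approximates for real-time use.

```python
# Minimal dynamic-programming power-split sketch (illustrative assumptions only).
import numpy as np

T = 100                                                # time steps of the drive cycle
P_dem = 20.0 + 10.0 * np.sin(np.linspace(0, 6, T))     # demanded power [kW], made up
soc_grid = np.linspace(0.4, 0.8, 41)                   # discretized battery SOC states
u_grid = np.linspace(0.0, 1.0, 21)                     # electric-motor share of demand
dt, batt_kwh = 1.0 / 3600.0, 5.0                       # step length [h], battery [kWh]

def fuel_rate(p_engine_kw):
    """Toy convex fuel-consumption model; a real engine map would be used."""
    return 0.05 * p_engine_kw + 0.002 * p_engine_kw ** 2

# Backward value iteration: J[k, i] = minimum fuel-to-go from SOC state i at time k.
J = np.zeros((T + 1, len(soc_grid)))
for k in range(T - 1, -1, -1):
    for i, soc in enumerate(soc_grid):
        best = np.inf
        for u in u_grid:
            p_mot, p_eng = u * P_dem[k], (1.0 - u) * P_dem[k]
            soc_next = soc - p_mot * dt / batt_kwh
            if soc_next < soc_grid[0] or soc_next > soc_grid[-1]:
                continue                               # infeasible: violates SOC bounds
            best = min(best, fuel_rate(p_eng) + np.interp(soc_next, soc_grid, J[k + 1]))
        J[k, i] = best

print("optimal cost-to-go from SOC=0.6:", np.interp(0.6, soc_grid, J[0]))
```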
Biomedical Signal Processing and Control | 2016
Ali Boyali; Naohisa Hashimoto
In this study, we introduce a novel variant and application of Collaborative Representation based Classification in the spectral domain for recognizing hand gestures from raw surface electromyography (sEMG) signals. The intuitive use of spectral features is explained via circulant matrices. The proposed Spectral Collaborative Representation based Classification (SCRC) recognizes gestures with high accuracy for a fairly rich gesture set. The worst recognition result across the four experiment sets for each hand gesture is 97.3%, which is nonetheless the best reported in the literature. The recognition results are reported over a substantial number of experiments, along with the labeling computation.
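As a rough illustration of the two ingredients above (shift-invariant spectral features, motivated by the fact that circulant matrices are diagonalized by the DFT, and collaborative-representation coding), the sketch below runs on synthetic data rather than real sEMG; the window length, class count and regularization weight are assumptions, and the classifier is the standard regularized least-squares CRC with class-wise residuals.

```python
# Spectral-feature + collaborative-representation classification sketch (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

def spectral_feature(window):
    """Magnitude spectrum of one signal window, normalized to unit length."""
    mag = np.abs(np.fft.rfft(window))
    return mag / (np.linalg.norm(mag) + 1e-12)

# Synthetic training dictionary: 3 gesture classes, 20 windows each, 256 samples per window.
n_cls, n_per, n_len = 3, 20, 256
labels = np.repeat(np.arange(n_cls), n_per)
signals = rng.standard_normal((n_cls * n_per, n_len)) + labels[:, None]
D = np.stack([spectral_feature(s) for s in signals], axis=1)     # columns = dictionary atoms

def crc_classify(y, D, labels, lam=0.01):
    """Collaborative representation: x = (D^T D + lam I)^-1 D^T y, then assign the
    class whose atoms reconstruct y with the smallest residual."""
    G = D.T @ D + lam * np.eye(D.shape[1])
    x = np.linalg.solve(G, D.T @ y)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)]
    return int(np.argmin(residuals))

test = spectral_feature(rng.standard_normal(n_len) + 2.0)        # resembles class 2
print("predicted class:", crc_classify(test, D, labels))
```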
International Symposium on Innovations in Intelligent Systems and Applications | 2011
Ali Boyali; Manolya Kavakli
In this study, we describe the development of a six Degree of Freedom (6 DOF) pose estimation model of a tracked object and a 3D user interface using stereo vision and Infra-Red (IR) cameras in the Matlab/Simulink and C# environments. The raw coordinates of the IR light sources located on the tracked object are detected, digitized and broadcast over Bluetooth by the IR cameras and associated circuitry within Nintendo Wiimotes. The signals are then received by a PC and processed using pose extraction and stereo vision algorithms. The extracted motion and position parameters are used to manipulate a virtual object in the virtual reality toolbox of Matlab for 6-DOF motion tracking. We set up a stereo camera system with Wiimotes to increase the vision volume and the accuracy of 3D coordinate estimation, and present a 3D user input device implementation in C# with Matlab functions. The camera calibration toolbox is used to calibrate the stereo system and to compute the extrinsic and intrinsic camera parameters. We use the epipolar geometry toolbox to compute the epipolar constraints needed to estimate the location of points that are not seen by both cameras simultaneously. Our preliminary results for the stereo vision analysis indicate that pose estimation may reach millimeter or sub-millimeter accuracy.
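For points that are visible to both cameras, the 3D coordinate estimation step reduces to triangulation with the calibrated projection matrices. The sketch below uses synthetic intrinsics and a made-up stereo baseline instead of the Wiimote calibration, and recovers a marker position by linear (DLT) triangulation.

```python
# Linear (DLT) stereo triangulation of one IR marker (synthetic camera parameters).
import numpy as np

K = np.array([[1300.0, 0.0, 512.0],        # assumed intrinsics
              [0.0, 1300.0, 384.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                               # second camera translated 0.2 m along x
t = np.array([[-0.2], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # projection matrix of camera 1
P2 = K @ np.hstack([R, t])                           # projection matrix of camera 2

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Stack the cross-product constraints of both views and take the
    null-space direction of the system as the homogeneous 3D point."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.05, -0.02, 1.5])       # a marker 1.5 m in front of camera 1
x1, x2 = project(P1, X_true), project(P2, X_true)
print("recovered 3D point:", triangulate(P1, P2, x1, x2))   # approximately X_true
```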
International Conference on Mobile and Ubiquitous Systems: Networking and Services | 2010
Ali Boyali; Manolya Kavakli; Jason Twamley
The goal of this paper is to present the development of a tracking technology for interacting with a virtual object. The paper presents the general procedure for building a simple, low-cost tracking system using a Wiimote (the remote of the Nintendo game console) and the open source Computer Vision (OpenCV) software library, as well as for interfacing the tracking system with an immersive virtual environment (Vizard). We use the iterative position and orientation estimation algorithm POSIT, available as an optimized OpenCV function, to extract the position parameters, and we filter out the noise in the coordinate values using Kalman filters. The orientation and translation of the tracked system are then used to manipulate a virtual object created in the virtual world of Vizard. Our results indicate that it is possible to implement an inexpensive and efficient application for interacting with virtual worlds using a Wiimote and appropriate digital filters.
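The Kalman filtering step can be sketched with a constant-velocity model for one tracked image point; the sampling rate, noise covariances and synthetic motion below are assumptions, not the paper's tuning.

```python
# Constant-velocity Kalman filter smoothing one noisy 2D image point (assumed parameters).
import numpy as np

dt = 1.0 / 100.0                            # assumed report rate of the IR camera
F = np.array([[1, 0, dt, 0],                # state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
H = np.array([[1.0, 0, 0, 0],               # only the position is measured
              [0, 1.0, 0, 0]])
Q = 1e-3 * np.eye(4)                        # process noise (assumed)
R = 4.0 * np.eye(2)                         # measurement noise, ~2 px standard deviation

x = np.zeros(4)                             # state estimate
P = np.eye(4)                               # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a new noisy measurement z = (u, v)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
for k in range(200):                        # point drifting right with pixel noise
    z = np.array([100.0 + 50.0 * k * dt, 200.0]) + rng.normal(0, 2.0, 2)
    x, P = kalman_step(x, P, z)
print("filtered position:", x[:2], "estimated velocity:", x[2:])
```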
Archive | 2012
Manolya Kavakli; Ali Boyali
Designing in Virtual Reality (VR) systems may bring significant advantages for the preliminary exploration of a design concept in 3D. In this chapter, our purpose is to provide a design platform in VR, integrating data gloves and a sensor jacket that consists of piezo-resistive sensor threads in a sensor network. Unlike common gesture recognition approaches, which require the assistance of expensive devices such as cameras or Precision Position Tracker (PPT) devices, our sensor network eliminates both the need for additional devices and the limitation on mobility. We developed a Gesture Recognition System (De-SIGN), which decodes design gestures, over several iterations. In this chapter, we present the system architecture of De-SIGN, its sensor analysis and synthesis method (SenSe) and the Sparse Representation-based Classification (SRC) algorithm we have developed for gesture signals, and we discuss the system's performance, providing the recognition rates. The gesture recognition algorithm presented here is highly accurate regardless of the signal acquisition method used and gives excellent results even for high-dimensional signals and large gesture dictionaries. Our findings show that gestures can be recognized with over 99% accuracy using the Sparse Representation-based Classification (SRC) algorithm for user-independent gesture dictionaries, and with 100% accuracy for user-dependent ones.
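The SRC decision rule described above, in which a test gesture is coded over a dictionary of all training gestures and assigned to the class with the smallest reconstruction residual, can be sketched as follows on synthetic data. The l1 coding step is solved here with scikit-learn's Lasso as a stand-in for whatever sparse solver De-SIGN actually uses, and the class count, dimensions and regularization weight are assumptions.

```python
# Sparse Representation-based Classification sketch (synthetic gesture features).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_cls, n_per, dim = 4, 15, 120              # 4 gesture classes, 15 samples each
labels = np.repeat(np.arange(n_cls), n_per)
prototypes = rng.standard_normal((n_cls, dim))
D = np.stack([prototypes[c] + 0.2 * rng.standard_normal(dim) for c in labels],
             axis=1)                         # dictionary: columns are training gestures
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

def src_classify(y, D, labels, alpha=1e-4):
    """Code y sparsely over the whole dictionary, then pick the class whose
    columns reconstruct y with the smallest residual."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    x = lasso.fit(D, y).coef_                # sparse code over the dictionary
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)]
    return int(np.argmin(residuals))

test = prototypes[2] + 0.2 * rng.standard_normal(dim)
test /= np.linalg.norm(test)
print("predicted gesture class:", src_classify(test, D, labels))   # expect 2
```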
Multimedia Tools and Applications | 2018
Hessam Jahani-Fariman; Manolya Kavakli; Ali Boyali
Human-computer interaction has become increasingly easy and popular with the spread of smart devices. Gestures and sketches, as trajectories of the hands in 3D space, are among the popular interaction media, so their recognition is essential. However, the diversity of human gestures, along with the lack of visual cues, makes sketch recognition challenging. This paper aims to develop an accurate sketch recognition algorithm using Block Sparse Bayesian Learning (BSBL). Sketches are acquired from three datasets using a Wiimote in a virtual-reality environment. We evaluate the performance of the proposed sketch recognition approach (MATRACK) on diverse sketch datasets. Comparisons are drawn with three high-accuracy classifiers, namely Hidden Markov Models (HMM), Principal Component Analysis (PCA) and K-Nearest Neighbours (K-NN). MATRACK, the developed BSBL-based sketch recognition system, outperforms K-NN, HMM and PCA. Specifically, for the most diverse dataset, MATRACK reaches an accuracy of 93.5%, whereas the other three classifiers reach approximately 80% accuracy.
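BSBL itself involves an iterative Bayesian update that is too long to reproduce here; as a hedged stand-in, the sketch below uses Block Orthogonal Matching Pursuit, a much simpler block-sparse solver, only to illustrate the block-structured coding the abstract relies on: a test sketch is explained by a few blocks of dictionary atoms, and the selected blocks drive the decision. Sizes and data are synthetic assumptions.

```python
# Block Orthogonal Matching Pursuit as a simple stand-in for block-sparse coding.
import numpy as np

rng = np.random.default_rng(3)
n_blocks, blk, dim = 6, 8, 100              # 6 blocks of 8 atoms, 100-dim sketch features
D = rng.standard_normal((dim, n_blocks * blk))
D /= np.linalg.norm(D, axis=0)
block_of = np.repeat(np.arange(n_blocks), blk)

def block_omp(y, D, block_of, n_select=2):
    """Greedy block-sparse coding: repeatedly pick the unselected block whose atoms
    correlate most with the residual, then refit on the selected support."""
    support, r = [], y.copy()
    for _ in range(n_select):
        corr = D.T @ r
        scores = [-np.inf if b in support else np.linalg.norm(corr[block_of == b])
                  for b in range(len(np.unique(block_of)))]
        support.append(int(np.argmax(scores)))
        cols = np.isin(block_of, support)
        x, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        r = y - D[:, cols] @ x
    return support, r

y = D[:, block_of == 4] @ rng.standard_normal(blk)   # lies in the span of block 4
y += 0.01 * rng.standard_normal(dim)
support, r = block_omp(y, D, block_of)
print("selected blocks:", support, "residual norm:", np.linalg.norm(r))
```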
SMART 2014, The Third International Conference on Smart Systems, Devices and Technologies | 2014
Ali Boyali; Naohisa Hashimoto; Osamu Matsumoto
IEEE Global Conference on Consumer Electronics | 2015
Ali Boyali; Naohisa Hashimoto; Osamu Matsumoto
arXiv: Robotics | 2015
Ali Boyali; Naohisa Hashimoto; Osamu Matsumoto
The Internet of Things | 2017
Naohisa Hashimoto; Ali Boyali; Yusuke Takinami; Osamu Matsumoto
Collaboration
National Institute of Advanced Industrial Science and Technology