Javier Adolfo Alcazar
General Motors
Publications
Featured research published by Javier Adolfo Alcazar.
Human-Robot Interaction | 2013
Brian T. Gleeson; Karon E. MacLean; Amir Haddadi; Elizabeth A. Croft; Javier Adolfo Alcazar
Human-robot collaborative work has the potential to advance quality, efficiency and safety in manufacturing. In this paper we present a gestural communication lexicon for human-robot collaboration in industrial assembly tasks and establish a methodology for producing such a lexicon. Our user experiments are grounded in a study of industry needs, providing potential real-world applicability to our results. Actions required for industrial assembly tasks are abstracted into three classes: part acquisition, part manipulation, and part operations. We analyzed the communication between human pairs performing these subtasks and derived a set of communication terms and gestures. We found that participant-provided gestures are intuitive and well suited to robotic implementation, but that their interpretation is highly dependent on task context. We then implemented these gestures on a robot arm in a human-robot interaction context and found them to be easily interpreted by observers. We found that observation of human-human interaction can be effective in determining what should be communicated in a given human-robot task, how communication gestures should be executed, and priorities for robotic system implementation based on frequency of use.
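A data-structure view of such a lexicon, keyed by the three action classes named above, might look like the sketch below; the gesture terms are hypothetical placeholders rather than the gestures actually derived in the study.

    # Minimal sketch of a gesture lexicon keyed by the paper's three action
    # classes; the gesture terms are hypothetical placeholders, not the
    # lexicon derived in the user study.
    GESTURE_LEXICON = {
        "part_acquisition":  ["point_to_part", "open_hand_request"],
        "part_manipulation": ["rotate_wrist", "flip_part_over"],
        "part_operations":   ["press_down", "thumbs_up_confirm"],
    }

    def gestures_for(action_class):
        """Return candidate communication gestures for an assembly action class."""
        return GESTURE_LEXICON.get(action_class, [])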
Robot and Human Interactive Communication | 2012
Ergun Calisgan; Amir Haddadi; H.F.M. Van der Loos; Javier Adolfo Alcazar; Elizabeth A. Croft
Nonverbal communication cues play an important role in human-human interaction and are expected to take a similar role in human-robot collaboration. In current industrial practice, human-robot turn-taking is explicitly human controlled, via a command channel such as a switch or button. However, such a master-slave approach does not permit collaborative interaction, and it requires the human to focus both on controlling the robot's behavior and on the task, thereby affecting overall performance. In this paper, implicit, nonverbal communication cues are examined as a non-explicit communication channel in a turn-taking task context. The aim of this study is to characterize the types and frequencies of nonverbal cues important to regulating turn taking during an assembly-task-type collaboration. This analysis will guide the selection of cues that the robot can interpret as implicit user inputs while human and robot complete a shared task.
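As a rough illustration of the kind of frequency tabulation described above, the sketch below counts annotated cue events per type and normalizes by the number of trials; the record format and cue labels are assumptions made for illustration, not taken from the study.

    from collections import Counter

    # Sketch of the cue-frequency tabulation described above. The record format
    # and cue labels are assumed for illustration, not taken from the study.
    annotations = [
        # (trial_id, time_s, cue_label)
        (1, 3.2, "gaze_at_partner"),
        (1, 4.1, "head_nod"),
        (1, 9.8, "hand_hovers_over_part"),
        (2, 2.5, "gaze_at_partner"),
        (2, 6.0, "lean_back"),
    ]

    cue_counts = Counter(label for _, _, label in annotations)
    n_trials = len({trial for trial, _, _ in annotations})

    for cue, count in cue_counts.most_common():
        print(f"{cue:>22}: {count} occurrences, {count / n_trials:.1f} per trial")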
International Conference on Robotics and Automation | 2013
Amir Haddadi; Elizabeth A. Croft; Brian T. Gleeson; Karon E. MacLean; Javier Adolfo Alcazar
New developments, innovations, and advancements in robotic technology are paving the way for intelligent robots to enable, support, and enhance the capabilities of human workers in manufacturing environments. We envision future industrial robot assistants that support workers in their tasks, advancing manufacturing quality and processes and increasing productivity. However, this requires new channels of fine-grained, fast and reliable communication. In this research we examined the communication required for human-robot collaboration in a vehicle door assembly scenario. We identified potential communicative gestures applicable to this scenario, implemented these gestures on a Barrett WAM™ manipulator, and evaluated them in terms of human recognition rate and response time in a real-time interaction. Response time analysis reveals insights into the communicative structure of robot motions; namely, key short gesture segments carry the bulk of the communicative information. These results will help us design more efficient and fluid task flow in human-robot interaction scenarios.
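The per-gesture evaluation described above could be tabulated along the lines of the sketch below; the gesture names and trial records are illustrative assumptions, not the study's data.

    import statistics

    # Sketch of per-gesture recognition-rate and response-time tabulation.
    # The gesture names and trial records below are illustrative, not study data.
    trials = [
        # (gesture, recognized_correctly, response_time_s)
        ("point_to_part",    True,  0.9),
        ("point_to_part",    True,  1.1),
        ("request_handover", False, 2.4),
        ("request_handover", True,  1.6),
    ]

    by_gesture = {}
    for gesture, correct, rt in trials:
        by_gesture.setdefault(gesture, []).append((correct, rt))

    for gesture, records in by_gesture.items():
        rate = sum(c for c, _ in records) / len(records)
        mean_rt = statistics.mean(rt for c, rt in records if c)  # correct responses only
        print(f"{gesture}: recognition rate {rate:.0%}, mean response time {mean_rt:.2f} s")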
IEEE Transactions on Automation Science and Engineering | 2014
Nicolae Lobontiu; Matt Cullin; Todd Petersen; Javier Adolfo Alcazar; Simona Noveanu
This study proposes a general analytical compliance model for symmetric notch flexure hinges composed of segments with constant width and analytically defined variable thicknesses. By serially combining and longitudinally/transversely mirroring base segments, the in-plane compliances of the full flexure are obtained as functions of the quarter-flexure compliances. The new right circularly corner-filleted parabolic flexure hinge is introduced as an illustration of the general analytical modeling algorithm. Experimental testing and finite element simulation confirm the analytical model predictions for an aluminum flexure prototype. The sensitivity of the planar compliances to the relevant geometric parameters is also studied.
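As a hedged sketch of the standard ingredients such analytical models build on (not the paper's specific right circularly corner-filleted parabolic formulation), the in-plane compliances of a constant-width segment of length l, width w, elastic modulus E and variable thickness t(x) follow from Euler-Bernoulli beam theory, and serially connected segments combine by summing frame-transformed compliance matrices:

    C_{x,F_x} = \int_0^{l} \frac{\mathrm{d}x}{E\, w\, t(x)}, \qquad
    C_{\theta_z,M_z} = \int_0^{l} \frac{12\,\mathrm{d}x}{E\, w\, t(x)^{3}}, \qquad
    C_{y,F_y} = \int_0^{l} \frac{12\, x^{2}\,\mathrm{d}x}{E\, w\, t(x)^{3}}

    C_{\text{full}} = \sum_i T_i\, C_i\, T_i^{\mathsf{T}}

Here T_i is the rigid-body transformation carrying segment i's local frame to the flexure reference frame; the longitudinal/transverse mirroring symmetry then lets the full-flexure compliances be written in terms of the quarter-flexure ones, as the abstract states.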
International Conference on Robotics and Automation | 2012
Javier Adolfo Alcazar; Leandro G. Barajas
Advances in design and fabrication technologies are enabling the production and commercialization of sensor-rich robotic hands with skin-like sensor arrays. Robotic skin is poised to become a crucial interface between the robot's embodied intelligence and the external world. The need to fuse and make sense of data extracted from skin-like sensors is readily apparent. This paper presents a real-time sensor fusion algorithm that can be used to accurately estimate object position, translation and rotation during grasping. When an object being grasped moves across the sensor array, it creates a sliding sensation; the spatio-temporal sensations are estimated by computing localized slip vectors using an optical flow approach. These results were benchmarked against an L∞-norm approach using a nominal known object trajectory generated by sliding and rotating an object over the sensor array with a second, high-accuracy industrial robot. Slip and rotation estimates can later be used to improve grasping quality and dexterity.
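A minimal sketch of an optical-flow-style slip estimator on a tactile pressure array is shown below, under a rigid-motion assumption and with a least-squares fit of translation and rotation; the function name, thresholds and details are assumptions, not necessarily the estimator used in the paper.

    import numpy as np

    def estimate_slip(p_prev, p_curr, mask_thresh=0.05):
        """Estimate in-plane translation (tx, ty) and rotation (omega) of a grasped
        object from two consecutive tactile pressure frames.

        Gradient-based, optical-flow-style sketch under a rigid-motion assumption."""
        p_prev = p_prev.astype(float)
        p_curr = p_curr.astype(float)

        # Spatial and temporal pressure gradients (brightness-constancy analogue).
        gy, gx = np.gradient(p_prev)            # d/d(row), d/d(col)
        gt = p_curr - p_prev

        # Use only taxels with appreciable contact pressure.
        mask = np.maximum(p_prev, p_curr) > mask_thresh * max(p_curr.max(), 1e-9)
        rows, cols = np.nonzero(mask)
        if rows.size < 3:
            return 0.0, 0.0, 0.0                # too little contact to estimate
        yc, xc = rows.mean(), cols.mean()       # rotate about the contact centroid

        # Rigid flow model: u = tx - omega*(y - yc), v = ty + omega*(x - xc),
        # constrained by gx*u + gy*v + gt = 0 at every masked taxel.
        A = np.column_stack([
            gx[mask],
            gy[mask],
            -gx[mask] * (rows - yc) + gy[mask] * (cols - xc),
        ])
        b = -gt[mask]
        (tx, ty, omega), *_ = np.linalg.lstsq(A, b, rcond=None)
        return tx, ty, omega                    # taxel pitches and radians per frame

Here tx and ty come out in taxel pitches per frame and omega in radians per frame about the contact centroid; a real pipeline would also low-pass filter the estimates and calibrate them against the array geometry.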
IEEE-RAS International Conference on Humanoid Robots | 2010
Javier Adolfo Alcazar; Leandro G. Barajas
Kitting processes are fundamental enablers in flexible manufacturing environments derived from minomi principles. A typical kitting application sorts loose, unpacked parts without dunnage into a tray or "kit" and then places the kit near the point of assembly for easy reach by assembly workers. In order to prepare, sort and sequence the kits, it is necessary to have adaptable robots and automation able to pick and place a variety of parts at line production rate. This requires assembling any of hundreds of kit types, each containing about 10 different parts, within 60 seconds. These requirements pose a fundamental challenge for kitting automation, since several parts of different shapes, presented in either random or semi-structured fashion, must be grasped and placed into the kit. Highly flexible manufacturing (HFM) requires grasping previously unknown objects for which a computer 3D model may not be available. A methodology that integrates vision and flexible robotic grasping is proposed herein to address HFM. The proposed set of hand grasping shapes is based on the capabilities and mechanical constraints of the robotic hand. Pre-grasp shapes for a Barrett Hand are studied and defined using finger spread and flexion. In addition, a simple and efficient vision algorithm is used to servo the robot and to select the pre-grasp shape in the pick-and-place task for 9 different vehicle door parts. Finally, experimental results evaluate the ability of the robotic hand to grasp both pliable and rigid parts.
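The mapping from vision estimates to Barrett Hand pre-grasp shapes might look roughly like the sketch below; the shape names, spread/flexion presets and size thresholds are illustrative assumptions, not the values used in the paper.

    # Hypothetical pre-grasp selection for a Barrett Hand: map a vision-estimated
    # part bounding box to a spread/flexion preset. The shape names, preset angles
    # and size thresholds are illustrative assumptions, not the paper's values.
    PRE_GRASPS = {
        "cylindrical": {"spread_deg": 0,   "flexion_deg": 40},   # elongated parts
        "spherical":   {"spread_deg": 60,  "flexion_deg": 35},   # compact parts
        "hook":        {"spread_deg": 180, "flexion_deg": 50},   # thin or pliable parts
    }

    def select_pre_grasp(length_mm, width_mm, height_mm):
        """Pick a pre-grasp preset from coarse, vision-estimated part dimensions."""
        aspect = length_mm / max(width_mm, 1e-6)
        if height_mm < 15:       # thin, flat part
            return "hook", PRE_GRASPS["hook"]
        if aspect > 2.5:         # elongated part
            return "cylindrical", PRE_GRASPS["cylindrical"]
        return "spherical", PRE_GRASPS["spherical"]

    print(select_pre_grasp(250, 40, 30))   # e.g. a long trim strip -> "cylindrical"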
Archive | 2009
Javier Adolfo Alcazar; Leandro G. Barajas
SAE 2012 World Congress & Exhibition | 2012
Javier Adolfo Alcazar
Archive | 2010
Javier Adolfo Alcazar; Leandro G. Barajas
Archive | 2010
Javier Adolfo Alcazar; Leandro G. Barajas