
Publications


Featured research published by Mark Micire.


International Conference on Industrial Electronics, Control and Instrumentation | 2000

Mobility and sensing demands in USAR

Robin R. Murphy; Jennifer Casper; Jeff Hyams; Mark Micire; Brian W. Minten

Since 1999, the members of the Perceptual Robotics Laboratory at the University of South Florida have worked with the Hillsborough County Fire Department on identifying opportunities for robotics in urban search and rescue. This paper provides an introduction to the USAR environment and its impact on sensors and platforms. It discusses the possible roles of mobile robots and the need for adjustable autonomy.


Interactive Tabletops and Surfaces | 2009

Analysis of natural gestures for controlling robot teams on multi-touch tabletop surfaces

Mark Micire; Munjal Desai; Amanda Courtemanche; Katherine M. Tsui; Holly A. Yanco

Multi-touch technologies hold much promise for the command and control of mobile robot teams. To improve the ease of learning and usability of these interfaces, we conducted an experiment to determine the gestures that people would naturally use, rather than the gestures they would be instructed to use in a pre-designed system. A set of 26 tasks with differing control needs was presented sequentially on a DiamondTouch to 31 participants. We found that the task of controlling robots exposed unique gesture sets and considerations not previously observed in desktop-like applications. In this paper, we present the details of these findings, a taxonomy of the gesture set, and guidelines for designing gesture sets for robot control.


International Conference on Robotics and Automation | 2001

Low-order-complexity vision-based docking

Brian W. Minten; Robin R. Murphy; Jeff Hyams; Mark Micire

This paper reports on a reactive docking behavior which uses a vision algorithm that grows linearly with the number of image pixels. The docking robot imprints (initializes) on a two-colored docking fiducial upon departing from the dock, then uses region statistics to adapt the color segmentation to changing lighting conditions. The docking behavior was implemented on a marsupial team of robots, where a daughter micro-rover had to reenter the mother robot from an approach zone with a 2 m radius and 140° angular width, with a tolerance of ±5 and ±2 cm. Testing during outdoor conditions (noon, dusk) and challenging indoor scenarios (flashing lights) showed that using adaptation and imprinting was more robust than using imprinting alone.
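The imprint-and-adapt idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the color model (`color_mean`, `color_std`), the gate width `k`, and the learning rate `alpha` are all assumptions chosen for the sketch. Segmentation is a single vectorized pass, so cost stays linear in the number of image pixels, and the imprinted color statistics are nudged toward the segmented region so segmentation tracks gradual lighting change.

```python
import numpy as np

def segment_and_adapt(image, color_mean, color_std, k=2.5, alpha=0.1):
    """Segment pixels within k standard deviations of the tracked
    fiducial color, then blend the color model toward the region's
    statistics so it follows gradual lighting changes.

    One vectorized pass over the pixels: cost is linear in the
    number of image pixels."""
    diff = np.abs(image.astype(np.float32) - color_mean)
    mask = np.all(diff < k * color_std, axis=-1)
    if mask.any():
        region = image[mask].astype(np.float32)
        # Exponentially weighted update: the imprinted model adapts
        # slowly, so a brief lighting glitch cannot capture it.
        color_mean = (1 - alpha) * color_mean + alpha * region.mean(axis=0)
        color_std = (1 - alpha) * color_std + alpha * (region.std(axis=0) + 1e-3)
    return mask, color_mean, color_std
```

Imprinting would correspond to initializing `color_mean`/`color_std` from the fiducial region seen at the moment of departure.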


Journal of Field Robotics | 2008

Evolution and field performance of a rescue robot

Mark Micire

Robots are slowly finding their way into the hands of search and rescue groups. One of the robots contributing to this effort is the Inuktun VGTV-Xtreme series by American Standard Robotics, one of the few robots engineered specifically for the search and rescue domain. This paper describes the adaptation of the VGTV platform from an industrial inspection robot into a capable and versatile search and rescue robot. These adaptations were based on growing requirements established by rescue groups, academic research, and extensive field trials. A narrative description of a successful search of a damaged building during the aftermath of Hurricane Katrina is included to support these claims. Finally, lessons learned from these deployments and guidelines for future robot development are discussed.


Robot Soccer World Cup | 2001

Potential Tasks and Research Issues for Mobile Robots in RoboCup Rescue

Robin R. Murphy; Jennifer Casper; Mark Micire

Previous work [5] has summarized our experiences working with the Hillsborough Fire Rescue Department and FEMA documents pertaining to Urban Search and Rescue. This paper discusses the lessons learned and casts them into four main categories of tasks for the physical agent portion of RoboCup-Rescue: 1) reconnaissance and site assessment, 2) rescuer safety, 3) victim detection, and 4) mapping and characterizing the structure.


Archive | 2010

Improving Human-Robot Interaction through Interface Evolution

Brenden Keyes; Mark Micire; Jill L. Drury; Holly A. Yanco

In remote robot operations, the human operator(s) and robot(s) are working in different locations that are not within line of sight of each other. In this situation, the human's knowledge of the robot's surroundings, location, activities, and status is gathered solely through the interface. Depending on the work context, having a good understanding of the robot's state can be critical. Insufficient knowledge in an urban search and rescue (USAR) situation, for example, may result in the operator driving the robot into a shaky support beam, causing a secondary collapse. While the robot's sensors and autonomy modes should help avoid collisions, in some cases the human must direct the robots' operation. If the operator does not have good awareness of the robot's state, the robot can be more of a detriment to the task than a benefit.

The human's comprehension of the robot's state and environment is known as situation awareness (SA). Endsley developed the most generally accepted definition of SA: "The perception of elements in the environment within a volume of time and space [Level 1 SA], the comprehension of their meaning [Level 2 SA] and the projection of their status in the near future [Level 3 SA]" (Endsley, 1988). Drury, Scholtz, and Yanco (2003) refined this definition to make it more specific to robot operations, breaking it into five categories: human-robot awareness (the human's understanding of the robot), human-human awareness, robot-human awareness (the robot's information about the human), robot-robot awareness, and the humans' overall mission awareness.

In this chapter, we focus on two of the five types of awareness that relate to a case in which one human operator is working with one robot: human-robot awareness and the human's overall mission awareness. Adams (2007) discusses the implications for human-unmanned vehicle SA at each of the three levels of SA (perception, comprehension, and projection).
In Drury, Keyes, and Yanco (2007), human-robot awareness is further decomposed into five types to aid in assessing the operator’s understanding of the robot: location awareness, activity awareness, surroundings awareness, status awareness and overall mission awareness (LASSO). The two types that are primarily addressed in this chapter are location awareness and surroundings awareness. Location awareness is the operator’s knowledge of where the robot is situated on a larger scale (e.g., knowing where the robot is from where it started or that it is at a certain point on a map). Surroundings awareness is the knowledge the operator has of the robot’s circumstances in a local sense, such as when there is an


IEEE International Conference on Rehabilitation Robotics | 2007

Development of Vision-Based Navigation for a Robotic Wheelchair

Matt Bailey; Andrew Chanler; Bruce Allen Maxwell; Mark Micire; Katherine M. Tsui; Holly A. Yanco

Our environment is replete with visual cues intended to guide human navigation. For example, there are building directories at entrances and room numbers next to doors. By developing a robot wheelchair system that can interpret these cues, we will create a more robust and more usable system. This paper describes the design and development of our robot wheelchair system, called Wheeley, and its vision-based navigation system. The robot wheelchair system uses stereo vision to build maps of the environment through which it travels; this map can then be annotated with information gleaned from signs. We also describe the planned integration of an assistive robot arm to help with pushing elevator buttons and opening door handles.


Unmanned Ground Vehicle Technology Conference | 2000

Issues in intelligent robots for Search and Rescue

Jennifer Casper; Mark Micire; Robin R. Murphy

Since the 1995 Oklahoma City bombing and Kobe, Japan, earthquake, robotics researchers have been considering search and rescue as a humanitarian research domain. The recent devastation in Turkey and Taiwan, compounded with the new RoboCup Rescue and AAAI Urban Search and Rescue robot competitions, may encourage more research. However, roboticists generally do not have access to domain experts: the emergency workers or first responders. This paper shares our understanding of urban search and rescue, based on our active research in this area and training sessions with rescue workers from the Hillsborough County (Florida) Fire Departments. The paper is intended to be a stepping stone for roboticists entering the field.


Distributed Autonomous Robotic Systems | 2000

A Communication-free Behavior for Docking Mobile Robots

Brian W. Minten; Robin R. Murphy; Jeff Hyams; Mark Micire

Physical cooperation between robotic agents frequently requires the ability to dock. This paper reports on a reactive docking behavior which uses a vision algorithm that grows linearly with the number of image pixels. The docking behavior was implemented on a marsupial team of robots, where a daughter micro-rover had to re-enter the mother robot from an approach zone with a 2 meter radius and 140° angular width. Experiments showed that the docking behavior achieved a success rate similar to that of 22 human teleoperators while completing the dock faster.
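A reactive docking behavior of this flavor can be sketched as a proportional steering law toward the fiducial's image centroid. This is an illustrative sketch, not the authors' implementation: the command convention (fixed forward speed, normalized turn in [-1, 1]) and the gains are assumptions.

```python
import numpy as np

def docking_command(mask, image_width, max_turn=1.0, forward_speed=0.2):
    """Map a binary fiducial mask to a (forward, turn) velocity pair.

    Purely reactive: no map or pose estimate is kept; the robot just
    steers so the fiducial's centroid drifts toward the image center.
    Returns None when the fiducial is not visible, so the caller can
    trigger a search behavior instead."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Horizontal centroid offset, normalized to [-1, 1].
    error = (xs.mean() - image_width / 2.0) / (image_width / 2.0)
    return forward_speed, -max_turn * error  # steer against the offset
```

Because the command depends only on the current frame, the behavior needs no inter-robot communication, which matches the "communication-free" framing of the paper's title.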


IEEE Conference on Technologies for Practical Robot Applications | 2011

Hand and finger registration for multi-touch joysticks on software-based operator control units

Mark Micire; Eric McCann; Munjal Desai; Katherine M. Tsui; Adam Norton; Holly A. Yanco

Robot control typically requires many physical joysticks, buttons, and switches. Taking inspiration from video game controllers, we have created a Dynamically Resizing, Ergonomic, and Multi-touch (DREAM) controller to allow for the development of a software-based operator control unit (SoftOCU). The DREAM Controller is created wherever a person places his or her hand; thus we needed to develop an algorithm for accurate hand and finger registration. Tested with a set of 405 hands from 62 users, our algorithm correctly identified 97% of the hands.
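The published registration algorithm is not reproduced here. As one hypothetical illustration of the problem, a common geometric heuristic identifies the thumb as the contact farthest from the hand's centroid and orders the remaining fingers by angle around it; every name and rule below is an assumption for the sketch, not the DREAM Controller's actual method.

```python
import math

def register_hand(points):
    """Given five (x, y) touch contacts from one hand, guess which
    contact is the thumb and order the remaining fingers by angle
    around the hand's centroid. Illustrative heuristic only."""
    assert len(points) == 5, "expects exactly five contacts"
    cx = sum(p[0] for p in points) / 5.0
    cy = sum(p[1] for p in points) / 5.0
    # Thumb heuristic: the contact farthest from the centroid,
    # since the thumb typically splays away from the fingertips.
    thumb = max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    fingers = [p for p in points if p is not thumb]
    # Order the fingers by angle measured from the thumb's direction,
    # so the sequence runs index -> pinky around the hand.
    t0 = math.atan2(thumb[1] - cy, thumb[0] - cx)
    fingers.sort(key=lambda p: (math.atan2(p[1] - cy, p[0] - cx) - t0)
                 % (2 * math.pi))
    return thumb, fingers
```

A real system would also need to cluster raw contacts into hands and handle left/right disambiguation; the abstract's 97% figure refers to the authors' algorithm, not to this sketch.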

Collaboration


Dive into Mark Micire's collaborations.

Top Co-Authors


Holly A. Yanco

University of Massachusetts Lowell


Jennifer Casper

University of South Florida


Jeff Hyams

University of South Florida


Brian W. Minten

University of South Florida


Katherine M. Tsui

University of Massachusetts Lowell


Adam Norton

University of Massachusetts Lowell


Eric McCann

University of Massachusetts Lowell


Munjal Desai

University of Massachusetts Lowell
