Piotr Jasiobedzki
University of Toronto
Publications
Featured research published by Piotr Jasiobedzki.
Robotics and Autonomous Systems | 1998
S. B. Nickerson; Piotr Jasiobedzki; David Wilkes; Michael Jenkin; Evangelos E. Milios; John K. Tsotsos; Allan D. Jepson; O. N. Bains
The ARK mobile robot project has designed and implemented a series of mobile robots capable of navigating within industrial environments without relying on artificial landmarks or beacons. The ARK robots employ a novel sensor, Laser Eye, that combines vision and laser ranging to efficiently locate the robot in a map of its environment. Laser Eye allows self-location of the robot in both walled and open areas. Navigation in walled areas is carried out by matching 2D laser range scans, while navigation in open areas is carried out by visually detecting landmarks and measuring their azimuth, elevation and range with respect to the robot. In addition to solving the core tasks of pose estimation and navigation, the ARK robots address the tasks of sensing for safety and operator interaction.
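Navigation in open areas, as described above, relies on measuring a landmark's azimuth, elevation and range relative to the robot. As a minimal sketch (not the ARK implementation), the spherical-to-Cartesian conversion below recovers the landmark's position in the robot frame; the convention that azimuth lies in the floor plane and elevation is measured upward is an assumption for illustration.

```python
import math

def landmark_position(azimuth, elevation, rng):
    """Convert a landmark measurement (azimuth and elevation in
    radians, range in metres) into Cartesian coordinates in the
    robot frame. Assumed convention: x forward, y left, z up."""
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return x, y, z

# A landmark dead ahead at 4 m, level with the sensor:
x, y, z = landmark_position(0.0, 0.0, 4.0)
```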
intelligent robots and systems | 1993
Michael Jenkin; Evangelos E. Milios; Piotr Jasiobedzki; N. Bains; K. Tran
ARK (Autonomous Robot for a Known Environment) is a visually-guided mobile robot which is being constructed as part of the Precarn project in mobile robotics. ARK operates in a previously mapped environment and navigates with respect to visual landmarks that have been previously located. While the robot moves, it utilizes an active vision sensor to register the robot with respect to these landmarks. As the landmarks may be scarce in certain regions of its environment, ARK plans paths which minimize both path length and path uncertainty. The global path planner assumes that the robot will use a Kalman filter to integrate landmark information with odometry data to correct path deviations as the robot moves, and then uses this information to choose a path which reduces the expected path deviation.
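The planner's reasoning about path uncertainty can be illustrated with a scalar Kalman filter: odometry alone inflates the variance of the estimated path deviation, while a landmark sighting shrinks it again. This is a minimal sketch of the filtering idea only, with illustrative noise values, not the ARK planner's actual state model.

```python
def kalman_step(x, P, u, Q, z=None, R=None):
    """One predict/(optional) update cycle for a scalar state.
    x: estimated lateral path deviation; P: its variance;
    u: odometry-predicted change; Q: odometry noise variance;
    z: landmark-based observation of the deviation; R: its variance."""
    # Predict: odometry moves the estimate and inflates uncertainty.
    x, P = x + u, P + Q
    if z is not None:
        # Update: the gain K weights the landmark fix by the
        # relative uncertainties of prediction and measurement.
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1.0 - K) * P
    return x, P

# Uncertainty grows while no landmarks are visible ...
x, P = kalman_step(0.0, 0.1, u=0.0, Q=0.05)
# ... and shrinks again when one is sighted.
x, P = kalman_step(x, P, u=0.0, Q=0.05, z=0.02, R=0.01)
```

A path planner that minimises the expected terminal P will naturally prefer routes that pass near landmarks, which is the trade-off the abstract describes.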
Image and Vision Computing | 1993
Piotr Jasiobedzki; Christopher J. Taylor; John N. H. Brunt
We describe a method for segmenting retinal images using positions of blood vessels supplying the retina. The image is tessellated into irregularly shaped primary regions which are bounded by vessels, chains of microaneurysms, edges, etc. Boundaries are classified into groups using a trained set of grey level models. We define a process of merging primary regions into large patches using image properties such as texture and intensity, and semantic interpretations of boundaries and their measured properties. The method, which makes extensive use of morphological processing, depends on a limited number of parameters which have natural physical interpretations.
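The merging of primary regions into larger patches can be sketched with a union-find structure over regions. The sketch below merges on mean grey level alone with an illustrative threshold; the paper's method additionally uses texture and semantic boundary interpretations, so this is a simplified stand-in, not the published algorithm.

```python
class RegionMerger:
    """Union-find over primary regions; regions whose mean grey
    levels differ by less than a threshold are merged into patches.
    Simplification: the representative region's mean is kept as-is
    rather than recomputed as an area-weighted average."""
    def __init__(self, means):
        self.parent = list(range(len(means)))
        self.means = list(means)

    def find(self, i):
        # Path-halving find.
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def merge_if_similar(self, a, b, threshold):
        ra, rb = self.find(a), self.find(b)
        if ra != rb and abs(self.means[ra] - self.means[rb]) < threshold:
            self.parent[rb] = ra
            return True
        return False

# Three adjacent primary regions: two similar, one much brighter.
m = RegionMerger([10.0, 12.0, 80.0])
merged_ab = m.merge_if_similar(0, 1, threshold=5.0)
merged_bc = m.merge_if_similar(1, 2, threshold=5.0)
```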
Optical Tools for Manufacturing and Advanced Automation | 1993
Piotr Jasiobedzki; Michael Jenkin; Evangelos E. Milios; Brian Down; John K. Tsotsos; Todd Campbell
This paper describes a new sensor that combines visual information from a CCD camera with sparse distance measurements from an infra-red laser range-finder. The camera and the range-finder are coupled together in such a way that their optical axes are parallel. A mirror with a different reflectivity for visible and for infra-red light is used to ensure collinearity of effective optical axes of the camera lens and the range-finder. The range is measured for an object in the center of the camera field of view. The Laser Eye is mounted on a robotic head and is used in an active vision system for an autonomous mobile robot (called ARK).
computer-based medical systems | 1993
Piotr Jasiobedzki
This paper presents a new method of retinal image registration. The method is based on representing a segmented reference image as an adaptive adjacency graph. The graph consists of a network of active contours, nodes where contours are connected, regions outlined by the contours and their full adjacency relationship. The contours in the graph correspond to retinal vessels or other curvilinear features. The registration is performed by placing the graph on the image to be registered and allowing it to adapt to the image data. The contours move under the combined effect of internal and external forces. The internal forces represent contour internal energy. The external forces correspond to image data and to connectivity constraints imposed on the contours. Results of registration obtained for retinal images are presented.
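The contour adaptation described above is the classic active-contour ("snake") relaxation: each point moves under an internal smoothing force plus an external force derived from the image. The sketch below shows one such relaxation step with a toy external force; the step size, smoothing weight and force definition are illustrative assumptions, not the paper's energy formulation.

```python
def snake_step(points, external_force, alpha=0.2, step=0.5):
    """One relaxation step for an open active contour ('snake').
    Each interior point moves under an internal smoothing force
    (toward the midpoint of its neighbours) plus an external image
    force supplied as a function point -> (fx, fy). Endpoints are
    held fixed, standing in for connectivity constraints at nodes."""
    new_points = list(points)
    for i in range(1, len(points) - 1):
        (x0, y0), (x, y), (x1, y1) = points[i - 1], points[i], points[i + 1]
        # Internal force: pull toward the neighbours' midpoint.
        fx_int = alpha * ((x0 + x1) / 2.0 - x)
        fy_int = alpha * ((y0 + y1) / 2.0 - y)
        fx_ext, fy_ext = external_force((x, y))
        new_points[i] = (x + step * (fx_int + fx_ext),
                         y + step * (fy_int + fy_ext))
    return new_points

# Toy external force pulling every point toward the line y = 0.
pull_down = lambda p: (0.0, -0.1 * p[1])
contour = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
for _ in range(50):
    contour = snake_step(contour, pull_down)
```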
intelligent robots and systems | 1994
Michael Jenkin; N. Bains; J. Bruce; Todd Campbell; Brian Down; Piotr Jasiobedzki; Allan D. Jepson; B. Majarais; Evangelos E. Milios; S. B. Nickerson; Demetri Terzopoulos; John K. Tsotsos; David Wilkes
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate and carry out survey/inspection tasks in a complex but known industrial environment. Rather than altering the robot's environment by adding easily identifiable beacons, the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that are used to detect unmapped obstacles, landmarks and objects. This paper describes the robot's industrial environment, its control architecture, and some results in processing the robot's range and vision sensor data for navigation.
canadian conference on computer and robot vision | 2008
Anna Topol; Michael Jenkin; Jarek Gryz; Stephanie Wilson; Marcin Kwietniewski; Piotr Jasiobedzki; Ho-Kong Ng; Michel Bondy
Recent advancements in laser and visible light sensor technology allow for the collection of photorealistic 3D scans of large scale spaces. This enables the technology to be used in real world applications such as crime scene investigation. The 3D models of the environment obtained with a 3D scanner capture visible surfaces but do not provide semantic information about salient features within the captured scene. Later processing must convert these raw scans into salient scene structure. This paper describes ongoing research into the generation of semantic data from the 3D scan of a crime scene to aid forensic specialists in crime scene investigation and analysis.
intelligent robots and systems | 1995
Piotr Jasiobedzki
Autonomous mobile robots require at least some three dimensional information to navigate in complex and partially unknown environments. Usually it is sufficient to find obstacles in the direction of the planned motion or unobstructed floor space. The distance at which the obstacle detection or floor reconstruction is carried out determines how far ahead the robot can plan its actions. Covering a large space is particularly important in industrial settings, as it helps the robot quickly find suitable paths and avoid excessive exploration. In this paper the author presents a novel method of detecting driveable floor regions. The method uses a planar floor model and a vision guided range detection sensor to segment the image into regions and verify which of them belong to the floor. The acceptance criterion uses the sensor error model. The floor detection process requires a relatively small number (on the order of 25-50) of individual range measurements to plan a safe path for the robot. The author presents experimental results and floor maps created using this method.
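The core of a planar-floor acceptance test can be sketched as follows: for a ray cast downward at a known angle, a flat floor predicts a specific range, and a measurement is accepted as floor if its residual falls within a gate derived from the sensor error model. The geometry, the Gaussian gate, and all numbers below are illustrative assumptions, not the paper's exact criterion.

```python
import math

def is_floor_point(rng, elevation, sensor_height, range_sigma, k=3.0):
    """Test whether one range measurement is consistent with a
    planar floor. rng: measured distance (m); elevation: ray angle
    below horizontal (rad); sensor_height: sensor height above the
    floor (m); range_sigma: range-finder std. dev.; k: gate width."""
    # On a perfect floor, a ray at this downward angle would travel:
    expected = sensor_height / math.sin(elevation)
    # Accept if the residual is within k standard deviations.
    return abs(rng - expected) <= k * range_sigma

# Sensor 1 m above the floor, looking 30 degrees down: a flat floor
# returns a 2 m range; a 1.5 m reading indicates an obstacle.
floor_ok = is_floor_point(2.0, math.radians(30), 1.0, range_sigma=0.02)
obstacle = not is_floor_point(1.5, math.radians(30), 1.0, range_sigma=0.02)
```

Because each test needs only one range value, a sparse set of such measurements (the 25-50 mentioned above) can already classify enough of the image regions to plan a safe path.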
SPIE's 1993 International Symposium on Optics, Imaging, and Instrumentation | 1993
Piotr Jasiobedzki
A new model of an adaptive adjacency graph (AAG) for representing a 2-D image or a 2-D view of a 3-D scene is introduced. The model makes use of an image representation similar in form to a region adjacency graph. The adaptive adjacency graph, as opposed to a region adjacency graph, is an active representation of the image. The AAG can adapt to the image or track features and maintain the topology of the graph. Adaptability of the AAG is achieved by incorporating active contours ('snakes') in the graph. Various methods for creating the AAGs are discussed. Results obtained for dynamic tracking of features in sequences of images and for registration of retinal images are presented.
Proceedings of SPIE | 1993
Piotr Jasiobedzki
This paper describes an image segmentation algorithm and the results obtained using a specially designed robotic head. The head consists of a camera and a laser range-finder mounted on a pan & tilt unit. Additional distance measuring capabilities, offered by the head, have been integrated into the segmentation process. The described method will be used for detecting visual landmarks by an autonomous mobile robot.