Juha Roening
University of Oulu
Publications
Featured research published by Juha Roening.
Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision | 2000
Tuukka Turunen; Juha Roening; Sami Ahola; Tino Pyssysalo
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Intelligent Robots and Computer Vision X: Algorithms and Techniques | 1992
Sakari Pieskä; Tapio Heikkilä; Jukka Riekki; Tapio Taipale; Klaus Kansala; Juha Roening
Robots operating in unstructured environments require continuous utilization of sensors and intelligence for adapting to changing situations. In this paper, a control method to achieve this goal is described and preliminary experiments are discussed. The control scheme is based on a hierarchically organized set of planning-executing-monitoring (PEM) cycles. Every PEM cycle is a goal-oriented module, which consists of three generic activities -- planning, executing, and monitoring -- and a separate meta-control mechanism that controls the generic activities inside the cycle. We present our design experiments, beginning from the development of a PEM-based logical model for an autonomous machine and continuing to the development of an implementation model for a loading manipulator control system. The laboratory implementations in two industrial robot environments are also described, as well as plans for a PEM-control implementation for a heavy-duty manipulator designed for loading paper rolls at harbor sites.
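As an illustration of how one PEM cycle might be organized in code, the following is a minimal Python sketch; the class name, method names, and the replanning rule are invented here for illustration and are not code from the paper.

    class PEMCycle:
        """One goal-oriented planning-executing-monitoring module
        with a meta controller sequencing the generic activities.
        A hypothetical reading of the scheme, not the paper's code."""

        def __init__(self, name, children=()):
            self.name = name
            self.children = list(children)    # lower-level PEM cycles

        def plan(self, goal):
            # Decompose the goal into one subgoal per child cycle;
            # a real planner would consult sensors and a world model.
            return [(child, f"{goal}/{child.name}") for child in self.children]

        def execute(self, subplans):
            for child, subgoal in subplans:
                child.run(subgoal)

        def monitor(self, goal):
            # Placeholder: a real monitor compares sensed state
            # against the goal and reports success or failure.
            return True

        def run(self, goal, max_replans=3):
            # Meta control: plan, execute, monitor, replan on failure.
            for _ in range(max_replans):
                self.execute(self.plan(goal))
                if self.monitor(goal):
                    return True
            return False

    loader = PEMCycle("load", [PEMCycle("approach"), PEMCycle("grasp")])
    print(loader.run("load paper roll"))   # True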
Proceedings of SPIE | 1999
Juha Roening; Kari Kangas
In the future, interaction between humans and personal robots will become increasingly important as robots will, more and more, operate as assistants in our everyday life. Because of this, there is a need for a convenient, flexible, and general-purpose technique that we can use to interact with robots. Moreover, the same technique should also be usable when we interact with embedded systems in smart environments. In this paper, we will describe a technique that allows us to use a single simple handheld control device to interact not only with personal robots, but also with ubiquitous embedded systems. When a new system, whether a mobile robot or a VCR, is encountered, the control device downloads mobile code from the system and executes it. The mobile code then uses the services provided by the control device to create a virtual user interface that we can use to interact with that particular system. Our technique is flexible, simple, adaptive, and open. The technique draws much of its flexibility and simplicity from the mobile code. Adaptivity comes from the fact that the control device needs only minimal knowledge about each particular system. In addition, the technique does not place any restrictions on the type of mobile code that can be used. We will describe the architecture of the CUES system that utilizes our technique. We will also describe the architecture of the SMAF system, our testbed mobile code execution environment used in CUES. In addition, we will present a virtual user interface for a mobile robot that we can use to control the robot and to monitor its status information. The interface also operates as a terminal that we can use to access remote information on the Internet.
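The core mechanism (download mobile code from an encountered system, then let that code drive the device's UI services) can be sketched in a few lines of Python. Everything below, including the class names, the build_ui convention, and the services, is a hypothetical illustration of the idea, not the actual CUES or SMAF interfaces.

    import types

    class ControlDevice:
        """Hypothetical handheld device: it downloads a snippet of
        mobile code from whatever system it meets and runs that code
        against its own user-interface services."""

        def __init__(self):
            self.widgets = []

        # A service the device exposes to downloaded code.
        def add_button(self, label, action):
            self.widgets.append((label, action))

        def connect(self, system):
            # The system supplies its UI as source code (mobile code).
            code = system.download_interface_code()
            module = types.ModuleType("virtual_ui")
            exec(code, module.__dict__)      # execute the mobile code
            module.build_ui(self)            # it drives our services

    class MobileRobot:
        def download_interface_code(self):
            return (
                "def build_ui(device):\n"
                "    device.add_button('Forward', lambda: print('fwd'))\n"
                "    device.add_button('Stop',    lambda: print('stop'))\n"
            )

    device = ControlDevice()
    device.connect(MobileRobot())
    print([label for label, _ in device.widgets])   # ['Forward', 'Stop']

The device never needs robot-specific knowledge: a VCR would simply ship different mobile code that builds different widgets through the same services.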
Electronic Imaging | 1997
Juha Roening; Janne Haverinen
The applicability of a light-stripe-based obstacle detection system for an outdoor vehicle was studied, and a working prototype for a laboratory environment was implemented. The prototype was built using a light-stripe projector and a smart photodiode matrix sensor. Special attention was paid to the ability of the algorithms to isolate the light stripes from the sensor image in an environment where many unwanted light sources are present. Knowledge of the light stripe's intensity distribution and of the known optics and geometry was used to differentiate between the light stripe and interference in the sensor image.
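A toy version of the stripe-isolation idea can be written as a per-column peak test: the stripe should appear as a bright but narrow intensity peak, which separates it from broad ambient light. The thresholds, the half-maximum width test, and the triangulation geometry below are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def stripe_rows(image, min_peak=0.5, max_width=5):
        """Per image column, pick the row of the projected stripe.
        Hypothetical filter in the spirit of the paper: the stripe
        is assumed to be the brightest narrow peak in each column;
        wide bright regions (ambient light) are rejected."""
        rows = np.full(image.shape[1], -1)
        for c in range(image.shape[1]):
            col = image[:, c]
            r = int(np.argmax(col))
            if col[r] < min_peak:
                continue                        # too dim: no stripe here
            width = np.sum(col > 0.5 * col[r])  # half-maximum width
            if width <= max_width:              # narrow peak: accept
                rows[c] = r
        return rows

    def depth_from_row(row, baseline=0.3, focal_px=600.0, row0=240.0):
        # Assumed triangulation geometry: the stripe's offset from its
        # at-infinity row is inversely related to range.
        return baseline * focal_px / max(row - row0, 1e-6)

    img = np.zeros((480, 2)); img[300, 0] = 1.0
    print(stripe_rows(img))    # [300  -1]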
Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling | 1997
Jouko O. Viitanen; Janne Haverinen; Pentti Mattila; Hannu Maekelae; Thomas V Numers; Zbigniev Stanek; Juha Roening
We describe an integrated system developed for use onboard a moving work machine. The machine is targeted at applications such as automatic container handling at loading terminals. The main emphasis is on the various environment-perception duties required by autonomous or semi-autonomous operation. These include obstacle detection, container position determination, localization needed for efficient navigation, and measurement of docking and grasping locations of containers. Practical experience is reported on the use of several different types of technologies for these tasks. For close-distance measurement, such as container-row following, ultrasonic measurement was used, with associated control software. For obstacle and docking-position detection, 3D active vision techniques were developed with structured lighting, also utilizing motion-estimation techniques. Depth-from-defocus methods were developed for passive 3D vision. For localization, fusion of data from several sources was carried out. These included dead-reckoning data from odometry, an inertial unit, and several alternative external localization devices, i.e., real-time kinematic GPS and inductive and optical transponders. The system was integrated to run on a real-time operating system platform, using a high-level software specification tool that created the hierarchical control structure of the software.
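The localization part can be illustrated with a deliberately simplified one-dimensional fusion rule: integrate dead reckoning every step and pull the estimate toward an absolute fix (RTK GPS or a transponder) when one arrives. The function name and the blending constant are invented; the paper's actual multi-sensor fusion is richer than this.

    def fuse_position(odom_delta, gps_fix, estimate, gps_weight=0.2):
        """Toy stand-in for the localization fusion: dead reckoning
        drifts freely between fixes, and each absolute fix corrects
        part of the accumulated error."""
        estimate = estimate + odom_delta        # integrate dead reckoning
        if gps_fix is not None:
            estimate += gps_weight * (gps_fix - estimate)
        return estimate

    x = 0.0
    for delta, fix in [(1.0, None), (1.0, 2.3), (1.0, None)]:
        x = fuse_position(delta, fix, x)
    print(round(x, 2))   # 3.06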
Photonics for Industrial Applications | 1995
Jukka Riekki; Juha Roening
In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling (PEMM) elements. A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended primarily for controlling mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.
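To show how PEMM differs from plain PEM, here is a sketch focusing on the modeler and the distributed local models; the propagation rule and all names are our illustration, not the paper's design.

    class PEMMElement:
        """Sketch of one PEMM element: the PEM activities plus a
        modeler and a local model. Only the model distribution is
        shown; the update rule is illustrative."""

        def __init__(self, name, parent=None):
            self.name = name
            self.local_model = {}      # this element's world knowledge
            self.parent = parent
            self.children = []
            if parent:
                parent.children.append(self)

        def update_model(self, key, value):
            # The modeler keeps the local model current and shares
            # entries with the connected parent model, so knowledge
            # stays distributed rather than centralized.
            self.local_model[key] = value
            if self.parent is not None:
                self.parent.local_model[self.name + "." + key] = value

    root = PEMMElement("navigate")
    arm = PEMMElement("manipulate", parent=root)
    arm.update_model("gripper", "closed")
    print(root.local_model)   # {'manipulate.gripper': 'closed'}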
Fibers '91, Boston, MA | 1991
Juha Roening; Jukka Riekki; Seppo Kemppainen
A simulator was designed for developing and testing navigation and control strategies for mobile robots; it provides an animation of a robot and its environment on a computer screen. The environment is two-dimensional, consisting of walls and objects that can be either stationary or moving. Sensors for non-contact range measurement can be attached to the robot or placed in stationary positions in its environment. The operation of the simulator has been verified in tests on a reactive distributed control system for a mobile robot and on a more common model-based navigation approach. Implementation of the simulator was made more flexible by using an object-oriented approach.
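The geometric kernel such a simulator needs for its non-contact range sensors is a ray-versus-wall intersection. Below is a standard parametric formulation in Python; the function signature and the wall representation are invented for illustration.

    import math

    def ray_wall_distance(px, py, angle, wall):
        """Distance from a simulated range sensor at (px, py), looking
        along 'angle', to a wall segment ((x1, y1), (x2, y2)), or None
        if the ray misses the wall."""
        (x1, y1), (x2, y2) = wall
        dx, dy = math.cos(angle), math.sin(angle)
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:
            return None                     # ray parallel to the wall
        t = ((x1 - px) * ey - (y1 - py) * ex) / denom   # along the ray
        u = ((x1 - px) * dy - (y1 - py) * dx) / denom   # along the wall
        return t if t >= 0 and 0 <= u <= 1 else None

    print(ray_wall_distance(0, 0, 0, ((5, -1), (5, 1))))   # 5.0

A sensor object would call this against every wall and report the minimum distance, which is what lets the same simulator serve both reactive and model-based navigation tests.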
Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision | 2001
Juha Roening; Jukka Riekki
We propose context-aware mobile systems for managing and using services on behalf of the user. Context-aware mobile systems perceive environmental signals, infer the context (i.e., the state of the system and its local environment) from these signals, and calculate appropriate actions for the detected context. In its simplest form, such a system can be a mobile telephone adjusting its profile based on the noise level and brightness of the environment. A personal robot equipped with a vision system and a manipulator is a more complex example of a context-aware mobile system. Context-aware mobile systems both manage the available services and use them on behalf of the user or guide the user in using them. A portable system offers a single interface for a variety of services. The role of a personal robot is to enhance some services so as to make them more suitable for the user. Both portable and robotic systems are controlled by a system capable of recognizing the user's context and reasoning out actions that would optimally serve the user in the situation at hand. In this paper, we will present our software architecture for context-aware service management and utilization, our approach for controlling personal robots, and our work on virtual and natural interfaces for interacting with a personal robot.
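The mobile-telephone example from the abstract can be made concrete with a small rule-based sketch: sense noise and brightness, infer a context label, and map it to a profile. The thresholds and profile names below are invented for illustration.

    def infer_context(noise_db, lux):
        """Rule-based context inference in the spirit of the paper's
        simplest example: a phone picking its profile from ambient
        noise and brightness."""
        if noise_db > 70:
            return "loud"               # street, factory floor
        if lux < 10 and noise_db < 40:
            return "quiet_dark"         # cinema, meeting, night
        return "normal"

    PROFILE = {"loud": "ring_loud", "quiet_dark": "silent", "normal": "ring"}

    def choose_action(noise_db, lux):
        # Perceive -> infer context -> act, the loop the paper describes.
        return PROFILE[infer_context(noise_db, lux)]

    print(choose_action(80, 300))   # ring_loud
    print(choose_action(35, 5))     # silent

A personal robot would replace the two scalar signals with vision and the profile table with action selection, but the perceive-infer-act structure stays the same.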
Intelligent Robots and Computer Vision X: Algorithms and Techniques | 1992
Jukka Riekki; Juha Roening; Olli Silvén; Matti Pietikaeinen; Visa Koivunen
This paper presents a vision-guided control system for an industrial robot capable of picking up an object, moving it to a goal, and placing it there. Tasks given to the control system are based on imperfect knowledge about the environment. The control system corrects the task parameters by matching them against range information gained from the environment. The control system is part of a larger system, which includes a high-level goal-oriented planner. The planner consists of hierarchically organized planning-executing-monitoring triplets, which execute given tasks by dividing them into subtasks, by sending the subtasks either to other triplets or to the control system described in this paper, and by monitoring the execution of the subtasks. The planner sees the robot and the control system as an intelligent robot capable of executing pick-and-place tasks in a dynamic, partly unknown environment. This paper presents the results of testing the control system with an industrial 6-axis robot and a structured-light-based range sensor. The principle of calibrating the robot and the sensor is also presented.
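The parameter-correction step can be illustrated with a simple sketch: the task arrives with an imperfect object position, and nearby range measurements decide the actual grasp point. Using the centroid of nearby points, as below, is our simplification; the paper's matching method may differ.

    import numpy as np

    def correct_grasp(nominal_xyz, range_points, max_shift=0.10):
        """Correct an imperfect task parameter (object position in
        meters) against measured range data, as a toy illustration."""
        pts = np.asarray(range_points, dtype=float)
        nominal = np.asarray(nominal_xyz, dtype=float)
        near = pts[np.linalg.norm(pts - nominal, axis=1) < 0.2]
        if len(near) == 0:
            return nominal                 # no evidence: keep the plan
        shift = near.mean(axis=0) - nominal
        norm = np.linalg.norm(shift)
        if norm > max_shift:               # sanity-limit the correction
            shift *= max_shift / norm
        return nominal + shift

    print(correct_grasp([0.5, 0.2, 0.1],
                        [[0.53, 0.22, 0.1], [0.55, 0.21, 0.1]]))
    # [0.54  0.215 0.1 ]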
Proceedings of SPIE | 2001
Tuukka Turunen; Tino Pyssysalo; Juha Roening
Mobile augmented reality can be utilized in a number of different services, and it provides considerable added value compared to the interfaces used in mobile multimedia today. An intelligent service connectivity architecture is needed for the emerging commercial mobile augmented reality services to guarantee mobility and interoperability on a global scale. Some of the key responsibilities of this architecture are to find suitable service providers, to manage the connection with and utilization of such providers, and to allow smooth switching between them whenever the user moves out of the service area of the provider she is currently connected to. We have studied the potential support technologies for such architectures and propose a way to create an intelligent service connectivity architecture based on current and upcoming wireless networks, an Internet backbone, and mechanisms to manage service connectivity in the upper layers of the protocol stack. In this paper, we explain the key issues of service connectivity, describe the properties of our architecture, and analyze the functionality of an example system. Based on these, we consider our proposal a good solution for achieving global interoperability in mobile augmented reality services.
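The switching behaviour the architecture must provide can be sketched as a small connectivity manager that keeps the current provider while its service area covers the user and otherwise hands over to one that does. The provider records and the circular coverage test below are invented for illustration.

    import math

    class ConnectivityManager:
        """Toy sketch of the handover logic: pick a provider whose
        service area covers the user, and switch when the current
        one no longer does."""

        def __init__(self, providers):
            self.providers = providers     # [(name, (cx, cy), radius)]
            self.current = None

        def covers(self, provider, x, y):
            name, (cx, cy), r = provider
            return math.hypot(x - cx, y - cy) <= r

        def update(self, x, y):
            if self.current and self.covers(self.current, x, y):
                return self.current[0]     # stay connected
            for p in self.providers:       # smooth switch to a new one
                if self.covers(p, x, y):
                    self.current = p
                    return p[0]
            self.current = None
            return None                    # out of every service area

    mgr = ConnectivityManager([("A", (0, 0), 5), ("B", (8, 0), 5)])
    print(mgr.update(1, 0), mgr.update(7, 0), mgr.update(20, 0))   # A B None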