Robert Bieda
Silesian University of Technology
Publications
Featured research published by Robert Bieda.
Vision Based Systems for UAV Applications | 2013
Artur Babiarz; Robert Bieda; Karol Jędrasiak; Aleksander Nawrat
In this article a group of VI libraries ("virtual instrument") for the LabVIEW simulation and programming environment is presented, enabling the construction of a program that forms the basis of an image processing and analysis system. The presented libraries are dedicated to Vivotek cameras supporting the SDK module ("VitaminCtrl"). They provide the ability to connect to a camera working over the Internet and to configure the transmission protocol, the acquisition parameters, and audio and video streaming. In addition, when operating PTZ ("pan-tilt-zoom") cameras, the proposed modules allow the camera motion to be controlled from the client application created in the National Instruments environment. Using the designed libraries, the LabVIEW environment and Vivotek PTZ cameras, a workstation was built for the synthesis of algorithms for processing and analysis of digital images. At the stage of development of the functional modules, a consistent convention for defining input and output signals was also worked out. The created modules rely on both the commands of the controlling SDK module and commands in URL syntax format. Moreover, the designed modules enable capturing video signal frames and their conversion to the LabVIEW Image variable type ("IMAQ Image.ctl"). With the achieved functionality, a simple construction of an application for the synthesis and analysis of image processing algorithms, operating on images acquired in real time directly from the camera, is possible.
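The chapter itself works with graphical LabVIEW VIs and the VitaminCtrl SDK, which have no textual form here; as a rough, hedged analogue of the same acquisition step, the sketch below opens a network-camera stream and grabs frames in Python with OpenCV. The stream URL and credentials are assumed placeholders, not Vivotek defaults.

```python
# Minimal sketch (Python/OpenCV analogue of the described acquisition modules):
# connect to an IP camera stream, grab frames and hand them to analysis code.
import cv2

STREAM_URL = "rtsp://user:password@192.168.1.10/live.sdp"  # hypothetical address

def grab_frames(url: str, max_frames: int = 100):
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError("could not open camera stream")
    frames = []
    for _ in range(max_frames):
        ok, frame = cap.read()       # one decoded video frame (BGR array)
        if not ok:
            break
        frames.append(frame)         # pass to image processing/analysis here
    cap.release()
    return frames

if __name__ == "__main__":
    frames = grab_frames(STREAM_URL, max_frames=10)
    print(f"captured {len(frames)} frames")
```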
Vision Based Systems for UAV Applications | 2013
Artur Babiarz; Robert Bieda; Krzysztof Jaskot
A vision system for a group of small mobile robots playing soccer is presented. The whole process of extracting vision information from input images is discussed in detail. A method for correcting the radial distortion introduced into the image by the camera's lens is presented, followed by a simple adaptive background subtraction algorithm. Next, the classical flood-fill algorithm is presented together with a novel optimization giving better results and shorter calculation time. A novel method for calculating an object's orientation, based on geometric moments and the special shape of the color markers on top of each robot, is then presented, followed by a color classifier based on the histogram intersection kernel. The design of the vision system as a central server providing vision information to many clients simultaneously is described. Experimental results obtained with the presented algorithms are also provided.
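As an illustration of the histogram intersection kernel mentioned above (a generic sketch, not the authors' exact classifier), two color histograms are compared by summing their element-wise minima, and the reference marker color with the largest intersection wins:

```python
import numpy as np

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """K(h1, h2) = sum_i min(h1_i, h2_i); histograms assumed normalized."""
    return float(np.minimum(h1, h2).sum())

def classify_color(patch_hist: np.ndarray, reference_hists: dict) -> str:
    # pick the marker class whose reference histogram overlaps the most
    return max(reference_hists,
               key=lambda name: histogram_intersection(patch_hist, reference_hists[name]))

# usage with made-up 8-bin hue histograms
refs = {
    "blue_marker": np.array([0, 0, 0, 0, 0, .1, .7, .2]),
    "red_marker":  np.array([.6, .3, 0, 0, 0, 0, 0, .1]),
}
patch = np.array([0, 0, 0, 0, .05, .15, .6, .2])
print(classify_color(patch, refs))  # -> "blue_marker"
```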
Vision Based Systems for UAV Applications | 2013
Artur Babiarz; Robert Bieda; Krzysztof Jaskot
Digital remote manual control systems are no less complicated than automatic control. Engineers have to overcome a number of unique problems. Mechanical parts have to respond to digital signals transferred over a wireless connection. Digital control algorithms have to translate the movement of controls at the remote station into signals in a way that gives the operator a proper feeling of control. The presented Master Control System combines the features of a drive-by-wire control system with a wireless connection. The main focus is put on the master controller and its input-output translation algorithms, which give the operator finer control. In addition, computer-controlled intervention algorithms are proposed that can increase the safety of the subject vehicle. All of this is done on the basis of a small two-wheeled robot in a closed environment: a small playground with a radio interface allowing wireless connection with the robot and a vision interface equipped with a camera serving as visual feedback. It is a convenient environment for theoretical considerations, testing and implementation. The modular design of the environment additionally allows control of a whole mobile robot group, not only one robot. Moreover, the presented results can be applied more widely in controllers for cars or other wheeled or tracked vehicles.
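One common form of input-output translation that gives finer manual control near the stick center is an "expo" curve; the sketch below is an assumed illustration of that idea, not the chapter's actual mapping:

```python
def expo_map(stick: float, expo: float = 0.5) -> float:
    """Map a normalized stick deflection in [-1, 1] to a command in [-1, 1].

    expo = 0 gives a linear response; larger expo flattens the curve around
    zero, so small stick movements produce small command changes.
    """
    stick = max(-1.0, min(1.0, stick))
    return (1.0 - expo) * stick + expo * stick ** 3

for s in (-1.0, -0.25, 0.0, 0.25, 1.0):
    print(f"stick={s:+.2f} -> command={expo_map(s):+.3f}")
```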
Archive | 2015
Marek Kulbacki; Roman Koteras; Agnieszka Szczęsna; Krzysztof Daniec; Robert Bieda; Janusz Słupik; Jakub Segen; Aleksander Nawrat; Andrzej Polanski; Konrad Wojciechowski
We present the concept and implementation of an unobtrusive wearable network of sensors and a distributed control system for integrated monitoring: acquisition, processing and analysis of human motion and other physiological modalities. The entire system, hardware and software, is scalable and compliant with the Wireless Body Area Network model. The wearable system modules can work independently and continuously, indoors and outdoors. Each tracked and controlled subject wears a Body Acquisition System (BAS), an acquisition system for monitoring human motion and multiple physiological signals. It is built into wearable, unobtrusive smart clothing and forms a wireless sensor network that uses Wi-Fi for external communication and a local hub for local data acquisition, processing and transfer. The central hub for global data processing and data exchange has been developed as the Cloud Based Human Multimodal Database (CBHMD). A software application, the Multimodal Data Environment (MMDE), has been built to visualize and control the acquisition and monitoring process. The MMDE allows domain experts such as physicians, physiotherapists and film producers to work with connected BASs and react in real time. It enables remote communication, data acquisition directly from BASs, diagnostics, management and maintenance of medical devices in BASs, as well as data processing using customized processes and algorithms.
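A hedged sketch of the data path described above: a wearable node packs a timestamped multimodal sample and the local hub forwards it over Wi-Fi to the central store. Field names, host and port are assumptions for illustration, not the BAS/CBHMD wire format.

```python
import json
import socket
import time

CENTRAL_HUB = ("cbhmd.example.org", 9000)   # hypothetical endpoint

def make_sample(subject_id: str, accel, heart_rate: float) -> bytes:
    record = {
        "subject": subject_id,
        "t": time.time(),          # acquisition timestamp
        "accel": accel,            # e.g. (ax, ay, az) from a motion sensor
        "hr": heart_rate,          # one physiological modality
    }
    return json.dumps(record).encode()

def forward(sample: bytes) -> None:
    # one line-delimited JSON record per sample, sent over TCP
    with socket.create_connection(CENTRAL_HUB, timeout=2.0) as sock:
        sock.sendall(sample + b"\n")

# forward(make_sample("subject-01", (0.01, -0.02, 9.79), 72.0))
```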
Vision Based Systems for UAV Applications | 2013
Robert Bieda; Krzysztof Jaskot; Karol Jędrasiak; Aleksander Nawrat
The aim of this methodology is the creation of a tool able to construct a system for obtaining information about objects in the environment of a vision system. This task is often defined as a sub-task of the primary problem of autonomous task completion by an independent UAV unit using visual information. The research concerned the construction of an algorithm that locates objects in the image, identifies them by classifying them into appropriate (previously declared) classes, and then sends information about the sought object or objects to the actuator control module.
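A simplified sketch of such a pipeline (the color threshold and the detection rule are assumed stand-ins for the chapter's detector and classifier): locate candidate objects in a frame, compute their positions, and hand the result to a control module.

```python
import cv2
import numpy as np

def find_objects(frame_bgr: np.ndarray, lo=(0, 120, 120), hi=(10, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))   # crude color detector
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < 50:                                  # reject tiny blobs
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid
        detections.append((cx, cy, cv2.contourArea(c)))
    return detections

def send_to_controller(detection):
    cx, cy, area = detection
    print(f"target at ({cx:.1f}, {cy:.1f}), area={area:.0f}")  # stand-in for the actuator link

frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(frame, (160, 120), 20, (0, 0, 255), -1)             # synthetic red object
for det in find_objects(frame):
    send_to_controller(det)
```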
International Conference on Methods and Models in Automation and Robotics | 2016
Rafal T. Grygiel; Robert Bieda; Marian J. Blachuta
Coupled tank systems play an important role in teaching control theory. Although the existence of two independent storages makes them a second-order system, it has been shown that one time constant is at least six times greater than the other; in normal operating conditions this ratio is about 10-20. Therefore, special attention is necessary to explain their properties.
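As a hedged illustration of this time-constant separation (a generic second-order form, not the paper's exact model), the level-to-inflow transfer function can be written as:

```latex
% Two first-order lags in series with well separated time constants T_1 >> T_2
\[
  G(s) = \frac{H_2(s)}{Q_{\mathrm{in}}(s)}
       = \frac{k}{(T_1 s + 1)(T_2 s + 1)},
  \qquad \frac{T_1}{T_2} \approx 10\text{--}20 ,
\]
% so step responses are dominated by the slow mode $e^{-t/T_1}$,
% while the fast mode $e^{-t/T_2}$ dies out almost immediately.
```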
International Conference on Methods and Models in Automation and Robotics | 2015
Robert Bieda; Rafal T. Grygiel; Adam Galuszka
In this paper an efficient and accurate method for estimating object orientation in three-dimensional (3D) space is proposed. Classical approaches based on Kalman filtering require a mathematical formulation of the plant model, which in most cases is based on the nonlinear equations of rotational kinematics of rigid bodies. It follows that linearization operations are necessary. This approach is correct but in many cases leads to difficulties in computation and implementation. To simplify the problem, using the assumption of Bayesian classification systems, the angular velocity vector is treated in the paper as three separate events. Therefore, three independent Kalman filters are used to estimate the Euler angles of the Roll-Pitch-Yaw coordinate system. This new approach is called the Naive Kalman Filter. Data fusion for a real IMU sensor integrating data from a triaxial gyroscope, accelerometer and magnetometer is presented in order to illustrate the accuracy and computational efficiency of the proposed filter.
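A hedged, per-axis sketch in the spirit of the described approach (not the paper's exact formulation): each Euler angle gets its own scalar Kalman filter that predicts with the gyro rate and corrects with an angle measurement derived from the accelerometer/magnetometer. Noise values below are assumed.

```python
import numpy as np

class ScalarKF:
    def __init__(self, q: float = 1e-4, r: float = 1e-2):
        self.x = 0.0      # angle estimate [rad]
        self.p = 1.0      # estimate variance
        self.q = q        # process noise (gyro integration)
        self.r = r        # measurement noise (accel/mag angle)

    def step(self, gyro_rate: float, meas_angle: float, dt: float) -> float:
        # predict: integrate the angular rate
        self.x += gyro_rate * dt
        self.p += self.q
        # correct with the measured angle
        k = self.p / (self.p + self.r)
        self.x += k * (meas_angle - self.x)
        self.p *= (1.0 - k)
        return self.x

# three independent filters, one per roll/pitch/yaw axis
filters = [ScalarKF(), ScalarKF(), ScalarKF()]
gyro = np.array([0.01, -0.02, 0.00])     # rad/s, made-up gyro sample
meas = np.array([0.10, -0.05, 1.57])     # rad, made-up angle measurements
angles = [f.step(w, z, dt=0.01) for f, w, z in zip(filters, gyro, meas)]
print(angles)
```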
Proceedings of SPIE | 2008
Aleksandra Ledwon; Robert Bieda; Aleksandra Kawczyk-Krupka; Andrzej Polanski; Konrad Wojciechowski; Wojciech Latos; Karolina Sieroń-Stołtny; Aleksander Sieroń
Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to light of a specific wavelength. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, as well as during endoscopy, cystoscopy and bronchoscopy, using the Xillix ONCOLIFE system. Methods: The image function f(x,y): R^2 → R^3 was transformed from the original RGB color space into a space in which a vector of 46 values is assigned to every point with given xy-coordinates, f(x,y): R^2 → R^46. By means of the Fisher discriminant, the attribute vector of each analyzed image point was reduced with respect to two defined classes: pathologic areas (foreground) and healthy areas (background). As a result, the four attributes with the highest Fisher coefficients, giving the greatest separation between pathologic (foreground) and healthy (background) points, were chosen. In this way a new function f(x,y): R^2 → R^4 was created, in which a point (x,y) corresponds to the vector (Y, H, a*, c2). In the second step, an appropriate classifier was constructed using Gaussian mixtures and Expectation-Maximization. This classifier determines the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The obtained probability map was presented by means of pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
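A hedged sketch of the two-step scheme described above (not the authors' implementation): rank per-pixel features by a Fisher criterion, keep the four best, fit one Gaussian mixture per class, and turn each pixel into a foreground probability. Feature layout and mixture sizes are assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fisher criterion per feature: (m1 - m0)^2 / (v1 + v0)."""
    X0, X1 = X[y == 0], X[y == 1]
    return (X1.mean(0) - X0.mean(0)) ** 2 / (X1.var(0) + X0.var(0) + 1e-12)

def fit_classifier(X, y, n_keep=4, n_components=3):
    keep = np.argsort(fisher_scores(X, y))[-n_keep:]        # 4 best features
    gmm_bg = GaussianMixture(n_components).fit(X[y == 0][:, keep])
    gmm_fg = GaussianMixture(n_components).fit(X[y == 1][:, keep])
    return keep, gmm_bg, gmm_fg

def foreground_probability(X, keep, gmm_bg, gmm_fg):
    # per-pixel posterior P(foreground | features), equal priors assumed
    log_bg = gmm_bg.score_samples(X[:, keep])
    log_fg = gmm_fg.score_samples(X[:, keep])
    return 1.0 / (1.0 + np.exp(log_bg - log_fg))

# made-up training data: 46 features per pixel, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 46)); y = (rng.random(2000) > 0.5).astype(int)
X[y == 1, :4] += 2.0                                         # separable toy signal
keep, gmm_bg, gmm_fg = fit_classifier(X, y)
prob_map = foreground_probability(X, keep, gmm_bg, gmm_fg)   # reshape to image for pseudocolor display
```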
Mediterranean Conference on Control and Automation | 2017
Marian J. Blachuta; Robert Bieda; Rafal T. Grygiel
A detailed analysis of PI tank level control is performed with respect to both load disturbance attenuation and set-point changes. Performance indices are given that provide guidelines for achievable accuracy and controller settings. Exact formulas for the extrema of time responses are derived, using a novel parametrization of the system poles. For transfers between equilibria under control signal limitations, set-point generators with a feed-forward from the reference signal are proposed. The effect of an anti-windup controller augmentation is examined. A method based on the solution of the nonlinear differential equation describing tank draining is proposed for tank model identification. Moreover, the invariance of the control system properties with respect to the actual equilibrium is highlighted.
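A hedged illustration of the kind of nonlinear draining model used for such identification (standard Torricelli form; the symbols and parametrization are assumptions, not the paper's exact equations):

```latex
\[
  A\,\frac{dh(t)}{dt} = q_{\mathrm{in}}(t) - \alpha\sqrt{h(t)},
\]
% where h is the level, A the tank cross-section, q_in the controlled inflow
% and \alpha the outflow coefficient; fitting \alpha and A to a recorded
% free-draining response (q_in = 0) identifies the model around an
% equilibrium level.
```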
Archive | 2016
Robert Bieda; Rafal T. Grygiel; Adam Galuszka
In this paper the Naive Kalman filter is introduced and applied to estimating orientation in 3D space. Using the assumption of Bayesian classification systems, the angular velocity vector is treated as three separate events. Therefore, three independent Kalman filters are used to estimate the Euler angles of the RPY coordinate system. Data fusion is presented for a real IMU sensor which integrates data from a triaxial gyroscope, accelerometer and magnetometer.