Mariana Luderitz Kolberg
Universidade Federal do Rio Grande do Sul
Publications
Featured research published by Mariana Luderitz Kolberg.
international conference on robotics and automation | 2014
Renan Maffei; Vitor A. M. Jorge; Edson Prestes; Mariana Luderitz Kolberg
Integrated exploration is the most complete task in mobile robotics, corresponding to the union of mapping, localization and motion planning. A powerful integrated exploration solution must take into account decisions that improve the quality of map construction, such as closing loops, while the environment is being explored. Potential fields and boundary value problems (BVPs) have been used successfully in planning, localization and exploration tasks, but not yet in integrated strategies. In this paper, we present an integrated exploration strategy based on a time-varying BVP. Our strategy consists of creating potential rails that guide the robot to regions that are either unexplored or were visited a long time ago. We also apply local distortions to the potential field to generate a loop-closure strategy. Experimental results demonstrate that our method improves the quality of map construction while maintaining the balance between revisiting and exploratory activities.
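As a loose illustration of the potential-field idea (not the paper's time-varying formulation), the sketch below relaxes Laplace's equation on an occupancy grid with unexplored frontiers as attractive boundary conditions and obstacles as repulsive ones, then steps the robot down the gradient. The grid codes, iteration count, and neighbourhood are assumptions made here for illustration.

```python
import numpy as np

def bvp_potential(grid, iters=500):
    # Occupancy codes assumed for this sketch: 0 = explored free space,
    # 1 = obstacle, 2 = unexplored frontier. The grid border is assumed
    # to consist of obstacle cells.
    pot = np.ones_like(grid, dtype=float)   # obstacles fixed at high potential
    pot[grid == 2] = 0.0                    # frontiers fixed at low potential
    free = (grid == 0)
    for _ in range(iters):                  # Jacobi relaxation of Laplace's equation
        avg = 0.25 * (np.roll(pot, 1, 0) + np.roll(pot, -1, 0) +
                      np.roll(pot, 1, 1) + np.roll(pot, -1, 1))
        pot[free] = avg[free]               # boundary cells keep their fixed values
    return pot

def next_cell(pot, r, c):
    # Gradient descent on the potential: step to the lowest-valued neighbour.
    neighbours = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return min(neighbours, key=lambda rc: pot[rc])
```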
international conference on robotics and automation | 2015
Renan Maffei; Vitor A. M. Jorge; Vitor F. Rey; Mariana Luderitz Kolberg; Edson Prestes
Estimating the robot's location is a fundamental requirement for applications in robotics. For many years, Monte Carlo Localization (MCL) has been one of the most popular approaches to global localization when using range finders such as sonars or lasers. It generally weights the estimates of the robot state by comparing raw sensor readings with simulated readings computed for each estimate. In this paper, we propose an observation model for localization that associates a kernel density estimate (KDE) with each point in space. This single-valued density measure is independent of orientation, which allows an efficient pre-caching step that substantially reduces the computation time of the process. Using the gradient of the density field, our strategy is able to estimate orientation information that helps restrict the localization search space. Additionally, we can combine densities obtained with kernels of different sizes and profiles to improve the quality of the acquired information. Experiments comparing our method with traditional approaches show that it is effective and efficient, even when working with large sets of particles.
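A rough sketch of the underlying idea: cache an orientation-independent density value per map cell once, summarise each laser scan by a single density number, and weight particles by how well the two agree. The smoothing kernel, the scan summary, and the noise parameter below are placeholders chosen for this sketch, not the paper's KDE formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cache_density(occupancy, sigma=3.0):
    # One-off pre-caching step: an orientation-independent "clutter" density
    # per map cell, approximated by Gaussian-smoothing the obstacle grid.
    return gaussian_filter(occupancy.astype(float), sigma=sigma)

def scan_density(ranges, max_range):
    # Scalar summary of a scan: fraction of beams hitting something nearby.
    return np.mean(np.asarray(ranges) < 0.5 * max_range)

def weight_particle(cached, cell, ranges, max_range, noise=0.1):
    # Particle likelihood: compare the cached map density at the particle's
    # cell with the density derived from the current scan.
    diff = cached[cell] - scan_density(ranges, max_range)
    return np.exp(-0.5 * (diff / noise) ** 2)
```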
international conference on robotics and automation | 2015
Vitor A. M. Jorge; Renan Maffei; Guilherme S. Franco; Jessica Daltrozo; Mariane Giambastiani; Mariana Luderitz Kolberg; Edson Prestes
An autonomous robot requires a map of the environment for many tasks. Yet, in many cases, this map is unavailable and the robot must build one in real time, in the so-called integrated exploration task. Several integrated exploration approaches adopt some sort of loop-closing strategy combined with an online simultaneous localization and mapping (SLAM) technique. This is important because the robot can reduce the uncertainty about its pose by revisiting known areas. One solution for environment exploration is to use the vector field computed from the numeric solution of a Boundary Value Problem (BVP). This approach, called the BVP Path Planner, generates smooth potential fields free of local minima. However, this planner cannot actively close loops and does not scale well in large scenarios. In this paper we present a technique that performs active loop closure using the BVP Path Planner. Our proposal takes advantage of the potential of unexplored regions and induces the robot to close loops by placing dynamic barriers in the visited space. The update of the potential field is accelerated using a local window augmented with a Voronoi diagram of the environment that carries global information. We show through experimental results the effectiveness of the technique, with a thorough discussion of its characteristics.
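Building on the relaxation sketch above, the loop-closing pressure can be pictured by temporarily turning recently visited free cells into barrier cells before the potential is recomputed; the grid codes and the `recent` window are assumptions of this sketch, not the planner's actual mechanism.

```python
import numpy as np

def place_barriers(grid, last_visit, now, recent=50):
    # Recently visited free cells (visited within `recent` time steps) become
    # temporary barriers, so the relaxed potential steers the robot around
    # them and toward older regions, encouraging loop closure.
    barred = grid.copy()
    recent_mask = (grid == 0) & (now - last_visit < recent)
    barred[recent_mask] = 1        # reuse the obstacle code from the sketch above
    return barred
```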
intelligent robots and systems | 2015
Renan Maffei; Vitor A. M. Jorge; Vitor F. Rey; Guilherme S. Franco; Mariane Giambastiani; Jessica Barbosa; Mariana Luderitz Kolberg; Edson Prestes
Place recognition is the front-end of Simultaneous Localization and Mapping (SLAM). Topological representations depend on a good association of vertices, which ultimately depends on the front-end. In this paper, we consider a robot lost in an unknown environment trying to construct a topological map to localize itself using a laser range finder and odometry information. The algorithm makes use of an efficient observation model based on kernel density estimates (KDEs) to detect loops. The observation model separates the map into regions called words, classified based on the density of free space, the number of observations and segment orientation. Loop closing results from matching sequences of N consecutive words (n-grams). The proposed approach is orders of magnitude faster than a sequence of Iterative Closest Point (ICP) matches. The method is evaluated with varying input parameters in real and simulated scenarios.
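The sequence-matching step can be pictured with plain n-gram lookup over the stream of place words; the word alphabet, n, and the gap threshold below are placeholders for this sketch, not the paper's parameters.

```python
from collections import defaultdict

def index_ngrams(words, n=4):
    # Index every n-gram of previously seen place words by its start position.
    table = defaultdict(list)
    for i in range(len(words) - n + 1):
        table[tuple(words[i:i + n])].append(i)
    return table

def loop_candidates(words, n=4, gap=20):
    # A loop-closure candidate is an older position whose n consecutive words
    # equal the n most recent ones, ignoring trivially recent matches.
    if len(words) < 2 * n:
        return []
    table = index_ngrams(words[:-n], n)        # only older history
    key = tuple(words[-n:])
    current = len(words) - n
    return [(current, j) for j in table.get(key, []) if current - j > gap]
```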
intelligent robots and systems | 2014
Renata Neuland; Jeremy Nicola; Renan Maffei; Luc Jaulin; Edson Prestes; Mariana Luderitz Kolberg
Probabilistic approaches are extensively used to solve high-dimensionality problems in many different fields. The particle filter is a prominent approach in robotics due to its adaptability to non-linear models with multi-modal distributions. Nonetheless, its result strongly depends on the quality and the number of samples required to cover the space of possible solutions. In contrast, interval analysis deals with high-dimensionality problems by reducing the space enclosing the actual solution. However, it cannot pinpoint where in the resulting subspace the actual solution lies. We devised a strategy that combines the best of both worlds. Our approach is illustrated by solving the global localization problem for underwater robots.
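One way to picture the hybridization: an interval step shrinks a box guaranteed to contain the robot, and the particle filter then spends its samples only inside that box. The per-axis contraction from a single beacon range below is a deliberately crude outer approximation invented for this sketch, not the paper's contractor.

```python
import numpy as np

def contract_box(box, beacon, rng, err):
    # Keep only the part of the box consistent with a range measurement
    # rng +/- err to a known beacon: each coordinate of a consistent pose
    # lies within beacon_i +/- (rng + err), so intersect axis by axis.
    lo, hi = box
    lo = np.maximum(lo, beacon - (rng + err))
    hi = np.minimum(hi, beacon + (rng + err))
    return lo, hi

def constrain_particles(particles, box):
    # Drop particles that fell outside the contracted box and refill the
    # budget with uniform samples inside it, concentrating the filter.
    lo, hi = box
    inside = np.all((particles >= lo) & (particles <= hi), axis=1)
    kept = particles[inside]
    fresh = np.random.uniform(lo, hi,
                              size=(len(particles) - len(kept), particles.shape[1]))
    return np.vstack([kept, fresh])
```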
Unmanned Systems | 2014
Renata Neuland; Renan Maffei; Luc Jaulin; Edson Prestes; Mariana Luderitz Kolberg
One of the fundamental tasks of robotics is to solve the localization problem, in which a robot must determine its true pose without any knowledge of its initial location. In underwater environments, this is especially hard due to sensor restrictions. For instance, the localization process must often rely on information from acoustic sensors, such as transponders. We propose a method to deal with this scenario that consists of a hybridization of probabilistic and interval approaches, aiming to overcome the weaknesses of each approach and improve the precision of the results. In this paper, we use the set inversion via interval analysis (SIVIA) technique to reduce the region of uncertainty about the robot's location, and a particle filter to refine the estimates. With the information provided by SIVIA, the distribution of particles can be concentrated in regions of higher interest. We compare this approach with a previous hybrid approach using contractors instead of SIVIA. Experiments with simulated data show that our hybrid method using SIVIA provides more accurate results than the method using contractors.
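A minimal SIVIA loop, assuming the caller supplies a `test(lo, hi)` predicate (e.g. built from the transponder range constraints) that returns 'in', 'out' or 'maybe' for a box; bisection of the widest axis and the eps stopping threshold follow the standard scheme, but the interface itself is an assumption of this sketch.

```python
import numpy as np

def sivia(box, test, eps=0.5):
    # box is a pair (lo, hi) of numpy arrays bounding the search region.
    feasible, stack = [], [box]
    while stack:
        lo, hi = stack.pop()
        verdict = test(lo, hi)
        if verdict == 'out':                    # box proven inconsistent: discard
            continue
        if verdict == 'in' or np.max(hi - lo) < eps:
            feasible.append((lo, hi))           # keep as part of the enclosure
            continue
        k = int(np.argmax(hi - lo))             # bisect along the widest axis
        mid = 0.5 * (lo[k] + hi[k])
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[k], lo_right[k] = mid, mid
        stack += [(lo, hi_left), (lo_right, hi)]
    return feasible                             # union of boxes enclosing the pose
```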
intelligent robots and systems | 2013
Renan Maffei; Vitor A. M. Jorge; Mariana Luderitz Kolberg; Edson Prestes
Simultaneous Localization and Mapping (SLAM) is one of the most difficult tasks in mobile robotics. While the construction of consistent and coherent local solutions is simple, SLAM remains a critical problem as the distance travelled by the robot increases. To circumvent this limitation, many strategies divide the environment into small regions and formulate the SLAM problem as a combination of multiple precise submaps. In this paper, we propose a new submap-based particle filter algorithm called Segmented DP-SLAM, which combines an optimized data structure to store the maps of the particles with a probabilistic map of segments representing hypotheses of submap topologies. We evaluate our method through experimental results obtained in simulated and real environments.
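The combination described can be pictured with a per-particle structure that keeps a list of small submaps plus a hypothesised topology linking them; the field names and the trigger for opening a new submap below are illustrative only, not the paper's data structure.

```python
from dataclasses import dataclass, field

@dataclass
class Submap:
    origin: tuple                                  # pose where this submap was started
    cells: dict = field(default_factory=dict)      # sparse local occupancy grid

@dataclass
class Particle:
    pose: tuple
    weight: float = 1.0
    submaps: list = field(default_factory=list)    # precise local maps
    topology: list = field(default_factory=list)   # hypothesised submap links

def start_new_submap(p, travelled, threshold=10.0):
    # Open a fresh submap once the robot has travelled far enough in the
    # current one, and record the link in this particle's topology hypothesis.
    if not p.submaps or travelled > threshold:
        p.submaps.append(Submap(origin=p.pose))
        if len(p.submaps) > 1:
            p.topology.append((len(p.submaps) - 2, len(p.submaps) - 1))
```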
intelligent robots and systems | 2016
Renan Maffei; Vitor A. M. Jorge; Vitor F. Rey; Mariana Luderitz Kolberg; Edson Prestes
Proper place recognition in an environment that can change over time is fundamental for long-term SLAM. In such scenarios, the observations obtained in the same region can differ drastically due to changes caused by semi-static objects such as doors, furniture, etc. In this work, we extend a strategy that represents environment regions using words based on spatial density information extracted from laser readings. This time, in order to deal with changes in the environment, our method not only builds words representing the real observations made by the robot, but also alternative multi-level words to account for possible changes in a place's observations caused by non-static objects. Place recognition is performed by searching for matches of sequences of N consecutive words (both real and alternative). Experiments performed in real and simulated scenarios demonstrate the advantages associated with the use of multi-level words.
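Extending the n-gram idea sketched earlier to changing environments can be illustrated by letting each region carry a set of words (the observed one plus alternative multi-level words) and declaring two regions matched when their sets intersect; the set representation is an assumption of this sketch.

```python
def ngram_matches(history, query, n=4):
    # history and query are lists of word sets per region; a region pair
    # matches when the sets share at least one (real or alternative) word,
    # and a loop candidate needs n consecutive matching regions.
    hits = []
    for i in range(len(history) - n + 1):
        if all(history[i + k] & query[k] for k in range(n)):
            hits.append(i)
    return hits
```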
Numerical Linear Algebra With Applications | 2015
Mariana Luderitz Kolberg; Gerd Bohlender; Luiz Gustavo Fernandes
Automatic result verification is an important tool to guarantee that completely inaccurate results produced during a numerical computation do not go unnoticed and end up being used for decisions. The mathematical rigor provided by verified computing allows the computation of an enclosure containing the exact solution of a given problem. In particular, the solution of linear systems can strongly benefit from this technique in terms of reliability of results. However, in order to compute an enclosure of the exact result of a linear system, more floating-point operations are necessary, consequently increasing the execution time. In this context, parallelism appears as a good alternative to improve solver performance. In this paper, we present an approach to solve very large dense linear systems with verified computing on clusters. This approach enabled our parallel solver to compute huge linear systems with point or interval input matrices with dimensions up to 100,000. Numerical experiments show that the new version of our parallel solver introduced in this paper provides good relative speedups and delivers a reliable enclosure of the exact results.
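The flavour of the verified step can be sketched with the classic residual-iteration enclosure (Rump-style): compute an approximate solution, then iterate an interval map whose strict fixed-point inclusion certifies an enclosure of the error. Directed (outward) rounding, which a real verified solver applies at every floating-point operation, is deliberately omitted here, so this is only an illustration of the structure, not a verified implementation and not the paper's parallel solver.

```python
import numpy as np

def enclosure_sketch(A, b, iters=20, inflate=1e-12):
    R = np.linalg.inv(A)                     # approximate inverse
    x0 = R @ b                               # approximate solution
    z = R @ (b - A @ x0)                     # residual mapped back through R
    C = np.eye(len(b)) - R @ A               # should be small in norm if R is good
    lo, hi = z.copy(), z.copy()              # interval [lo, hi] for the error term
    for _ in range(iters):
        lo_i = lo - inflate - np.abs(lo) * inflate     # epsilon inflation
        hi_i = hi + inflate + np.abs(hi) * inflate
        mid, rad = 0.5 * (lo_i + hi_i), 0.5 * (hi_i - lo_i)
        new_mid, new_rad = z + C @ mid, np.abs(C) @ rad
        new_lo, new_hi = new_mid - new_rad, new_mid + new_rad
        if np.all(new_lo > lo_i) and np.all(new_hi < hi_i):
            # strict inclusion: x0 + [new_lo, new_hi] encloses the exact solution
            return x0 + new_lo, x0 + new_hi
        lo, hi = new_lo, new_hi
    raise RuntimeError("no enclosure found within the iteration budget")
```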
Computers & Industrial Engineering | 2015
Andriele Busatto do Carmo; Mateus Raeder; Thiago Nunes; Mariana Luderitz Kolberg; Luiz Gustavo Fernandes
Highlights: We define a new scheduling architecture in the context of PSPs; the proposed architecture improves the throughput of the overall ripping procedure, allows a better load distribution among the available RIP engines, and provides an adaptive environment for experimenting with different scheduling algorithms.

The Digital Printing industry has become extremely specialized in the past few years. The use of personalized documents has emerged as a consolidated trend in this field. In order to meet this demand, languages to describe templates for personalized documents were proposed, along with procedures that allow the correct printing of such documents. One of these procedures, which demands a high computational effort, is the ripping phase, performed over a queue of documents in order to convert them into a printable format. An alternative to decrease the computational time of the ripping phase is to use high performance computing techniques to allow parallel ripping of different documents. However, such strategies present several unsolved issues, one of the most severe being the impossibility of assuring a fair load balance for any job queue. In this scenario, this work proposes a job profile oriented scheduling architecture for improving the throughput of industrial printing environments through a more efficient use of the available resources. Our results show an average performance gain of up to 10% over previously existing strategies applied to different job queue scenarios.
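As a generic illustration of profile-aware load balancing (not the architecture proposed in the paper), the sketch below greedily assigns jobs, ordered by an estimated ripping cost taken from the job profile, to the currently least-loaded RIP engine; the cost model and engine interface are assumptions of this sketch.

```python
import heapq

def schedule(jobs, n_engines):
    # jobs: iterable of (job_id, estimated_ripping_cost) pairs, where the cost
    # would come from the job profile. Longest-job-first onto the least-loaded
    # engine is a classic greedy load-balancing heuristic.
    heap = [(0.0, e) for e in range(n_engines)]        # (accumulated load, engine id)
    heapq.heapify(heap)
    plan = {e: [] for e in range(n_engines)}
    for job_id, cost in sorted(jobs, key=lambda jc: -jc[1]):
        load, e = heapq.heappop(heap)
        plan[e].append(job_id)
        heapq.heappush(heap, (load + cost, e))
    return plan
```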