Alina Trifan
University of Aveiro
Publications
Featured research published by Alina Trifan.
Robot Soccer World Cup | 2013
António J. R. Neves; Alina Trifan; Bernardo Cunha
Vision is an extremely important sense for both humans and robots, providing detailed information about the environment. In the past few years, the use of digital cameras in robotic applications has been increasing significantly. Using a digital camera as the main sensor allows the robot to capture the relevant information of the surrounding environment and make decisions. A robust vision system should be able to reliably detect objects and present an accurate representation of the world to higher-level processes, not only under ideal lighting conditions, but also under changing lighting intensity and color balance. To extract information, such as shapes or colors, from the acquired image, the configuration of the colorimetric camera parameters, such as exposure, gain, brightness or white balance, among others, is very important. In this paper, we propose an algorithm for the self-calibration of the most important parameters of digital cameras for robotic applications. The algorithm extracts statistical information from the acquired images, namely the intensity histogram, the saturation histogram and information from a black and a white area of the image, and then estimates the colorimetric parameters of the camera. We present experimental results with two robotic platforms, a wheeled robot and a humanoid soccer robot, in challenging environments, namely indoor and outdoor soccer fields, that show the effectiveness of our algorithm. The images acquired after calibration show good properties for further processing, independently of the initial configuration of the camera and of the type and amount of light in the environment, both indoor and outdoor.
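The histogram-driven calibration loop described in the abstract can be illustrated with a minimal sketch: a hypothetical proportional controller that nudges exposure toward a mid-gray mean of the intensity histogram. The function name, target and gain are assumptions for illustration, not the authors' code:

```python
import numpy as np

def adjust_exposure(image, exposure, target_mean=128.0, gain=0.05):
    """One step of a proportional controller that nudges camera
    exposure so the mean of the intensity histogram approaches a
    mid-gray target (hypothetical sketch, not the authors' code)."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    mean_intensity = np.average(np.arange(256), weights=hist)
    error = target_mean - mean_intensity
    return exposure + gain * error

# A dark frame pushes exposure up, a bright frame pushes it down.
dark = np.full((4, 4), 40, dtype=np.uint8)
bright = np.full((4, 4), 220, dtype=np.uint8)
print(adjust_exposure(dark, exposure=10.0) > 10.0)    # True
print(adjust_exposure(bright, exposure=10.0) < 10.0)  # True
```

Running such a step on every acquired frame converges the camera toward a usable operating point regardless of its initial configuration.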
Proceedings of SPIE | 2012
Alina Trifan; António J. R. Neves; Nuno Lau; Bernardo Cunha
Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Humanoid robots usually have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, so a compromise between complexity and processing time has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the vision system to run in real time even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and the modularity that allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be used efficiently in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines and goals), as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game are detected in less than 30 ms on a NAO robot with a single-core 500 MHz processor.
Our vision system also includes an algorithm for self-calibration of the camera parameters, as well as two support applications that can run on an external computer for color calibration and debugging purposes. These applications are built on a typical client-server model, in which the main vision pipeline runs as a server, allowing clients to connect and remotely monitor its performance without interfering with its efficiency. The experimental results we acquired prove the efficiency of our approach both in terms of accuracy and of processing time. Despite having been developed for the NAO robot, the modular design of the proposed vision system allows it to be easily integrated into other humanoid robots with a minimum number of changes, mostly in the acquisition module.
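The modular pipeline described above (acquisition, then calibration, then detection) can be sketched as a chain of interchangeable stages; swapping the acquisition stage is what porting to another robot would amount to. The class names, toy brightness correction and threshold below are illustrative assumptions, not UAVision code:

```python
class Module:
    """Base class for one pipeline stage (illustrative sketch only)."""
    def process(self, frame):
        raise NotImplementedError

class Acquisition(Module):
    """Grabs a frame from a source callable (camera stand-in)."""
    def __init__(self, source):
        self.source = source
    def process(self, _):
        return self.source()

class Calibration(Module):
    """Toy brightness correction standing in for self-calibration."""
    def process(self, frame):
        return [min(255, p + 10) for p in frame]

class Detection(Module):
    """Toy detector: indices of pixels above a color threshold."""
    def process(self, frame):
        return [i for i, p in enumerate(frame) if p > 200]

def run_pipeline(modules, frame=None):
    for m in modules:
        frame = m.process(frame)
    return frame

pipeline = [Acquisition(lambda: [100, 250, 30, 245]),
            Calibration(), Detection()]
print(run_pipeline(pipeline))  # indices of bright "ball-colored" pixels
```

Only the Acquisition stage touches the hardware, so integrating a different camera leaves the rest of the chain untouched, which is the modularity claim made in the abstract.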
IEEE International Conference on Autonomous Robot Systems and Competitions | 2015
António J. R. Neves; Alina Trifan; Paulo Dias; José Luís Azevedo
Detection of aerial objects is a difficult problem, given the dynamics and speed of a flying object. The problem is even more difficult in a non-controlled environment, where the predominance of a given color is not guaranteed, and/or when the vision system is located on a moving platform. Taking as an example the game of robotic soccer promoted by the RoboCup Federation, most of the teams participating in the soccer competitions detect the objects in the environment using an omnidirectional camera. Omnidirectional vision systems only detect the ball when it is on the ground, so precise information on the ball position is lost when it is in the air. In this paper we present a novel approach for 3D ball detection in which we use color information to identify ball candidates and 3D data to filter the relevant color information. The main advantage of our approach is its low processing time, making it suitable for real-time applications. We present experimental results showing the effectiveness of the proposed algorithm. Moreover, this approach was already used in the last official RoboCup Middle Size League competition: the goalkeeper was able to move to the right position to defend the goal in situations where the ball was flying towards it.
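The candidate-filtering idea, color segmentation confirmed by 3D data, can be sketched as an intersection of masks: a pixel survives only if it has the candidate color and its measured depth is plausible. The distance bound is a made-up parameter for illustration:

```python
import numpy as np

def ball_candidates(color_mask, depth, max_dist=6.0):
    """Keep color-segmented candidate pixels only where the 3D data
    places them at a valid, plausible distance (hypothetical bound)."""
    return color_mask & (depth > 0) & (depth < max_dist)

# One candidate has ball color but an implausible depth reading and
# is rejected; one has valid depth but the wrong color.
color_mask = np.array([[True, True],
                       [False, True]])
depth = np.array([[2.5, 9.0],
                  [1.0, 4.0]])
print(ball_candidates(color_mask, depth))
```

Because both masks are computed with vectorized element-wise operations, the filtering cost stays low, matching the real-time requirement stated in the abstract.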
Robot Soccer World Cup | 2014
Alina Trifan; António J. R. Neves; Bernardo Cunha; José Luís Azevedo
The game of soccer is one of the main focuses of the RoboCup competitions, providing a fun and entertaining research environment for the development of autonomous multi-agent cooperative systems. For an autonomous robot to be able to play soccer, it first has to perceive the surrounding world and extract only the information relevant in the game context. The vision system of a robotic soccer player is therefore probably its most important sensorial element, on which the robot's actions are fully based. In this paper we present a new modular, time-constrained vision library, named UAVision, that allows the use of video sensors at frame rates of up to 50 fps in full resolution and provides accurate detection of the objects of interest for a robot playing soccer.
International Symposium Computational Modeling of Objects Represented in Images | 2014
António J. R. Neves; Alina Trifan; Bernardo Cunha
The ultimate goal of Computer Vision has been, for more than half a century, to create an artificial vision system that can imitate human vision. Such a system should have all the capabilities of the human visual system without carrying over its flaws. Robotics and Automation are just two examples of research areas that use artificial vision systems as the main sensorial element. In these areas, the use of color-coded objects is very common, since it relieves the burden of information processing while being an unobtrusive constraint on the environment. We present a novel computer vision library, called UAVision, that provides support for different video sensor technologies and all the software necessary for implementing an artificial vision system for the detection of color-coded objects. The experimental results that we present, both for robotic soccer games and for traffic sign detection, show that our library can work at more than 50 fps with images of 1 megapixel.
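Detection of color-coded objects is commonly implemented with a lookup table that classifies each pixel in constant time, which is how frame rates above 50 fps become feasible. The toy single-channel table below is an assumption for illustration, not the actual UAVision implementation:

```python
import numpy as np

# Color class labels (hypothetical) and a 256-entry lookup table
# mapping pixel intensity to a class; real systems index a table
# built over the full color space during color calibration.
ORANGE, FIELD, OTHER = 1, 2, 0
lut = np.zeros(256, dtype=np.uint8)
lut[200:256] = ORANGE  # very bright pixels -> "ball orange"
lut[50:100] = FIELD    # mid-range pixels  -> "field green"

def classify(gray_frame):
    """O(1)-per-pixel classification via a single table lookup."""
    return lut[gray_frame]

frame = np.array([[220, 70],
                  [10, 255]], dtype=np.uint8)
print(classify(frame))
```

Since the per-pixel work is one memory read, the cost of segmentation scales only with image size, independent of how many color classes are defined.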
Practical Applications of Agents and Multi-Agent Systems | 2018
Daniel Canedo; Alina Trifan; António J. R. Neves
Monitoring classrooms using cameras is a non-invasive approach to digitizing students’ behaviour. Understanding students’ attention span, and what types of behaviour may indicate a lack of attention, is fundamental for understanding and consequently improving the dynamics of a lecture. Recent studies provide useful information regarding classrooms and their students’ behaviour throughout a lecture. In this paper we start by presenting an overview of the state of the art on this topic, covering what we consider to be the most robust and efficient Computer Vision techniques for monitoring classrooms. After this analysis, we propose an agent that is theoretically capable of tracking students’ attention and outputting that data. The main goal of this paper is to contribute to the development of an autonomous agent able to provide information to both teachers and students, and we present preliminary results on this topic. We believe such an autonomous agent offers the best solution for monitoring classrooms, since it uses the state-of-the-art approaches best suited to each individual role.
International Conference on Pattern Recognition Applications and Methods | 2016
António J. R. Neves; Fred Gomes; Paulo Dias; Alina Trifan
Robotic soccer represents an innovative and appealing test bed for the most recent advances in multi-agent systems, artificial intelligence, perception, navigation and biped walking. The main sensorial element of a soccer robot is its perception system, most of the time based on a digital camera, through which the robot analyses the surrounding world and acts accordingly. Until now, the vision system of a soccer robot could only be validated through the way the robot and its teammates interpret the surroundings, relative to their own positions. In this paper we propose an external monitoring vision system that can act as a ground-truth system for the validation of the objects of interest in a robotic soccer game, mainly the robots and the ball. The system we present consists of two to four digital cameras strategically positioned above the soccer field. We present preliminary results regarding the accuracy of the detection of a soccer ball, which show that such a system can indeed provide ground-truth ball positions on the field during a robotic soccer game.
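An overhead ground-truth camera typically maps each pixel detection onto field coordinates through a planar homography, and the per-camera estimates can then be fused. The homography matrix and the simple averaging rule below are illustrative assumptions, not the system described in the paper:

```python
import numpy as np

def pixel_to_field(H, pixel):
    """Project a pixel detection onto the field plane through a
    planar homography H (the matrix used below is made up)."""
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

def fuse(detections):
    """Average the field positions reported by the overhead cameras."""
    return np.mean(detections, axis=0)

# Toy homography: a pure pixel-to-metre scale, for illustration only.
H = np.diag([0.01, 0.01, 1.0])
cams = [pixel_to_field(H, (400, 300)),   # camera 1 sees the ball here
        pixel_to_field(H, (404, 296))]   # camera 2, slightly offset
print(fuse(cams))  # fused field position in metres
```

In practice each camera would carry its own calibrated homography, and fusion would weight detections by confidence rather than averaging uniformly.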
International Conference on Pattern Recognition Applications and Methods | 2016
Alina Trifan; António J. R. Neves
Local feature descriptors and detectors have been widely used in computer vision in recent years for solving object detection and recognition tasks. Research efforts have focused on reducing the complexity of these descriptors and improving their accuracy. However, these descriptors had not previously been tested on raw image data. This paper presents a study on the use of two of the best-known and most widely used feature descriptors, SURF and SIFT, directly on raw CFA images acquired by a digital camera. We are interested in understanding whether the number and quality of the keypoints obtained from a raw image are comparable to those obtained from the grayscale images normally used by these transforms. The results that we present show that the number and positions of the keypoints obtained from raw CFA images are similar to those obtained from native grayscale images, and likewise to those obtained from grayscale images produced by interpolating a CFA image.
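The intuition behind running detectors directly on raw data can be mimicked with a toy experiment: a monotone per-pixel attenuation, standing in for a CFA color filter, preserves local extrema, so a local-extremum-based detector finds essentially the same keypoint positions. The trivial detector and the 0.9 attenuation factor are assumptions for illustration, not SIFT or SURF:

```python
import numpy as np

def local_maxima(img, thresh=50):
    """Toy keypoint detector: pixels strictly greater than their four
    neighbours and above a threshold (stand-in for SIFT/SURF)."""
    pts = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            v = img[y, x]
            if (v > thresh and v > img[y - 1, x] and v > img[y + 1, x]
                    and v > img[y, x - 1] and v > img[y, x + 1]):
                pts.append((y, x))
    return pts

rng = np.random.default_rng(0)
gray = rng.integers(0, 255, size=(32, 32)).astype(np.int32)
# Simulated raw view: a monotone per-pixel attenuation (assumption).
cfa = (gray * 0.9).astype(np.int32)
kp_gray, kp_cfa = local_maxima(gray), local_maxima(cfa)
print(len(kp_gray), len(kp_cfa))  # comparable keypoint counts
```

Because the attenuation is monotone, every keypoint found in the simulated raw image is also found in the grayscale one; real CFA mosaics interleave three filters, which is why the paper's empirical comparison is needed.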
Advances in Computer Science : an International Journal | 2016
António J. R. Neves; Alina Trifan; Bernardo Cunha; José Luís Azevedo
Archive | 2013
R. G. Dias; António J. R. Neves; José Luís Azevedo; Bernardo Cunha; João Paulo da Silva Cunha; Paulo Dias; A. Domingos; L. Ferreira; Pedro Fonseca; Nuno Lau; Eurico Pedrosa; Ac Pereira; Rui Serra; João de Abreu e Silva; P. Soares; Alina Trifan