Joanna Isabelle Olszewska
University of Gloucestershire
Publications
Featured research published by Joanna Isabelle Olszewska.
Neurocomputing | 2015
Joanna Isabelle Olszewska
In this paper, we present a new optical character recognition (OCR) approach which allows real-time, automatic extraction and recognition of digits in images and videos. Our method relies on active contours in order to robustly extract optical characters from real-world visual scenes. Recognition of the detected characters is based on template matching. Our system has shown excellent results when applied to the automated identification of team players' numbers in sport datasets and has outperformed state-of-the-art methods.
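The template-matching stage can be illustrated with a minimal sketch (not the paper's implementation): a character crop, assumed to have been extracted upstream by the active-contour step, is compared against a small bank of digit templates using normalised cross-correlation. The templates below are synthesised with cv2.putText purely for illustration.

import cv2
import numpy as np

def make_digit_template(digit, size=48):
    # Render a single digit onto a blank canvas (a stand-in for real templates).
    img = np.zeros((size, size), dtype=np.uint8)
    cv2.putText(img, str(digit), (8, size - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 1.4, 255, 3, cv2.LINE_AA)
    return img

templates = {d: make_digit_template(d) for d in range(10)}

def recognise_digit(crop):
    # Compare an extracted character crop against every digit template.
    crop = cv2.resize(crop, (48, 48))
    scores = {}
    for digit, tmpl in templates.items():
        # Normalised cross-correlation: 1.0 means a perfect match.
        scores[digit] = float(cv2.matchTemplate(crop, tmpl, cv2.TM_CCOEFF_NORMED).max())
    best = max(scores, key=scores.get)
    return best, scores[best]

digit, score = recognise_digit(make_digit_template(7))  # pretend this crop came from a video frame
print(f"recognised digit: {digit} (score {score:.2f})")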
International Conference on Agents and Artificial Intelligence | 2016
Joanna Isabelle Olszewska
In applications involving multiple conversational agents, each agent has its own view of a visual scene, so all the agents must establish common visual landmarks in order to coordinate their understanding of the space and to coherently share generated spatial descriptions of this scene. Whereas natural language processing approaches contribute to defining the common ground through dialogues between these agents, we propose in this paper a computer-vision system to determine the object of reference for both agents efficiently and automatically. Our approach consists of processing each agent's view by computing the related visual interest points and then matching them in order to extract the salient and meaningful landmark. Our approach has been successfully tested on real-world data, and its performance and design allow its use for embedded robotic system communication.
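As an illustration only (not the paper's pipeline), the idea of deriving a shared landmark from matched interest points can be sketched with ORB features and brute-force matching in OpenCV; the two "views" below are synthetic overlapping crops of one random texture standing in for the agents' camera images.

import cv2
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, (240, 320), dtype=np.uint8)  # toy shared scene texture, not real camera data
view_a = scene[:, :280]   # agent A's view
view_b = scene[:, 40:]    # agent B's view, overlapping A's

orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(view_a, None)
kp_b, des_b = orb.detectAndCompute(view_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:30]

# The centroid of the strongest matches gives a crude shared landmark that
# each agent can express in its own image coordinates.
pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
print("landmark in view A:", pts_a.mean(axis=0))
print("landmark in view B:", pts_b.mean(axis=0))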
Computer Analysis of Images and Patterns | 2015
Joanna Isabelle Olszewska
In this work, we propose a new method for fully automatic detection and recognition of textureless objects present in complex visual scenes. While most approaches only deal with shape matching, our approach considers objects both in terms of low-level features and high-level information, and represents objects' view-based templates as trees. Multi-level matching increases the algorithm's robustness, while the new tree structure of the template reduces its computational burden. We have evaluated our algorithm on the CMU dataset, consisting of objects under arbitrary viewpoints and in a cluttered environment. Our proposed approach has shown excellent performance, outperforming state-of-the-art methods.
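A coarse-to-fine tree of view-based templates can be sketched as follows; this is only a toy illustration of the general idea (templates as numpy patches, matching by normalised correlation), not the paper's data structure.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class TemplateNode:
    name: str
    template: np.ndarray            # template at this level of detail
    children: list = field(default_factory=list)

def ncc(a, b):
    # Normalised cross-correlation between two equally sized patches.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def match_tree(node, patch, threshold=0.5):
    # Coarse-to-fine matching: only descend into a branch whose coarse
    # template already scores above the threshold, pruning the rest.
    score = ncc(node.template, patch)
    if score < threshold or not node.children:
        return node.name, score
    best = max((match_tree(c, patch, threshold) for c in node.children),
               key=lambda r: r[1])
    return best if best[1] >= score else (node.name, score)

# Toy templates purely for illustration.
root = TemplateNode("object", np.eye(8),
                    [TemplateNode("view-front", np.eye(8)),
                     TemplateNode("view-side", np.fliplr(np.eye(8)))])
print(match_tree(root, np.eye(8)))   # -> ('view-front', 1.0)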
International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2015
Joanna Isabelle Olszewska
A visual, three-dimensional (3D) scene is usually grounded by two-dimensional (2D) views. In order to develop a system able to automatically understand such a 3D scene and to provide high-level specifications of what the scene contains, we propose a new computational formalism which allows reasoning simultaneously about the 3D scene and its 2D views. In particular, our approach formalizes both 3D directional relations and 3D far/close spatial relations among objects of interest in the scene. For this purpose, qualitative spatial relations based on the clock model are computed in each of the 2D views capturing the scene and are reconstructed in the 3D space in a semantically meaningful, spherical representation. Our resulting 3D qualitative spatial relations have been successfully tested on a real-world dataset and show excellent performance in terms of accuracy and efficiency compatible with real-time applications.
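The 2D clock-model relation itself is simple to compute; the sketch below is an illustration of the general convention (assuming image coordinates with y pointing down and 12 o'clock meaning "above"), not the paper's formalism. It maps the direction from a reference object to a target onto one of the twelve hour positions.

import math

def clock_relation(ref, target):
    # Direction from ref to target mapped onto the 12 hours of the clock model:
    # 12 o'clock is 'above', 3 o'clock 'to the right', and so on (illustrative convention).
    dx = target[0] - ref[0]
    dy = ref[1] - target[1]                         # flip y so 'up' is positive
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = 12 o'clock, clockwise
    hour = int(round(angle / 30.0)) % 12
    return 12 if hour == 0 else hour

print(clock_relation((100, 100), (150, 100)))  # target to the right -> 3
print(clock_relation((100, 100), (100, 40)))   # target above        -> 12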
International Conference on Agents and Artificial Intelligence | 2017
Joanna Isabelle Olszewska
Intelligent-agent navigation remotely controlled by means of natural language commands is of great help for robots operating in rescue activities or assistive aid. Since full conversation between the human commander and the agent could be limited in such situations, we propose to build human/robot dialogues directly on semantically meaningful instructions such as directional spatial relations, in particular represented by the clock model, to efficiently communicate orders to the agent so that it successfully reaches a target's position. Experiments within a real-world, simulated scenario have demonstrated the usefulness and effectiveness of our developed approach.
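To make the instruction format concrete, here is a minimal, purely illustrative conversion of a clock-model command into a new heading for the agent (assuming 12 o'clock means straight ahead and 30 degrees per hour mark); it is not the dialogue system described in the paper.

def heading_after_instruction(current_heading_deg, clock_hour):
    # Illustrative assumption: 12 o'clock = straight ahead, hours increase clockwise.
    relative = (clock_hour % 12) * 30.0   # 30 degrees per hour mark
    return (current_heading_deg + relative) % 360.0

print(heading_after_instruction(90.0, 2))   # "target at 2 o'clock" while heading 90 -> 150.0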
International Conference on Agents and Artificial Intelligence | 2016
Joanna Isabelle Olszewska
Reliable detection of objects of interest in complex visual scenes is of prime importance for video-surveillance applications. While most vision approaches deal with tracking visible or partially visible objects in single or multiple video streams, we propose a new approach to automatically detect all objects of interest that are part of an analyzed scene, even those entirely hidden in a camera view while still being present in the scene. For that, we have developed an innovative artificial-intelligence framework embedding a computer-vision process that fully integrates symbolic knowledge-based reasoning. Our system has been evaluated on standard datasets consisting of video streams with real-world objects evolving in cluttered outdoor environments under difficult lighting conditions. Our proposed approach shows excellent performance both in detection accuracy and robustness, and outperforms state-of-the-art methods.
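One very small symbolic rule of the kind such a framework could apply (an illustrative assumption, not the paper's knowledge base): an object known to be in the scene and visible in at least one view, but missing from another overlapping view, is flagged as hidden in that view.

def infer_hidden_objects(detections_per_view, scene_inventory):
    # Illustrative rule only. detections_per_view: view name -> set of detected object ids.
    # scene_inventory: object ids known from prior knowledge to be present in the scene.
    visible_somewhere = set().union(*detections_per_view.values())
    report = {}
    for view, detected in detections_per_view.items():
        hidden = (scene_inventory & visible_somewhere) - detected
        report[view] = {"visible": detected, "hidden": hidden}
    return report

views = {
    "cam_front": {"person_1", "car_3"},
    "cam_side":  {"person_1"},           # car_3 occluded by a wall in this view
}
print(infer_hidden_objects(views, scene_inventory={"person_1", "car_3"}))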
International Conference on Agents and Artificial Intelligence | 2016
Joanna Isabelle Olszewska
Detecting visible as well as invisible objects of interest in real-world scenes is crucial in new-generation video-surveillance. For this purpose, we design a fully intelligent system incorporating semantic, symbolic, and grounded information. In particular, we conceptualize temporal representations and use them, together with spatial and visual information, in our multi-view tracking system, which applies them for automated reasoning and the induction of knowledge about the multiple views of the studied scene, in order to automatically detect salient or hidden objects of interest. Tests on standard datasets demonstrated the efficiency and accuracy of our proposed approach.
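The temporal side can be illustrated with a toy helper (an assumed representation for illustration, not the paper's formalism) that turns a per-frame visibility record into symbolic intervals a reasoner could combine with spatial facts.

def occlusion_intervals(track):
    # Toy helper: track is a list of booleans, one per frame, True when the object is visible.
    intervals, start, state = [], 0, track[0]
    for t, visible in enumerate(track[1:], start=1):
        if visible != state:
            intervals.append(("visible" if state else "hidden", start, t - 1))
            start, state = t, visible
    intervals.append(("visible" if state else "hidden", start, len(track) - 1))
    return intervals

print(occlusion_intervals([True, True, False, False, True]))
# -> [('visible', 0, 1), ('hidden', 2, 3), ('visible', 4, 4)]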
Biomedical Engineering Systems and Technologies | 2016
Simon Nash; Mark Rhodes; Joanna Isabelle Olszewska
Although face recognition applications are growing, robust face recognition is still a challenging task due, e.g., to variations in face poses, facial expressions, or lighting conditions. In this paper, we propose a new method which allows both automatic face detection and recognition and incorporates an interactive selection of facial features in conjunction with our new pose-correction algorithm. Our resulting system, called iFR, successfully recognizes faces across pose while being computationally efficient and outperforming standard approaches, as demonstrated in tests carried out on publicly available standard datasets.
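As a rough illustration of what a pose-correction step can look like (a common eye-alignment normalisation, assumed here for illustration rather than taken from the iFR paper), the face is rotated and scaled so the eyes land on a fixed horizontal line before recognition.

import cv2
import numpy as np

def align_face(image, left_eye, right_eye, out_size=(128, 128)):
    # Illustrative eye-alignment: rotate and scale so the eye line becomes horizontal
    # at a canonical position in the output crop.
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # tilt of the eye line
    eye_dist = np.hypot(rx - lx, ry - ly)
    scale = (0.5 * out_size[0]) / max(eye_dist, 1e-6)  # desired inter-eye distance
    centre = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(centre, angle, scale)
    # Shift the eye midpoint to a fixed location in the output image.
    M[0, 2] += out_size[0] * 0.5 - centre[0]
    M[1, 2] += out_size[1] * 0.35 - centre[1]
    return cv2.warpAffine(image, M, out_size)

face = np.zeros((200, 200), dtype=np.uint8)            # dummy image for the example
aligned = align_face(face, left_eye=(70, 95), right_eye=(130, 105))
print(aligned.shape)                                    # (128, 128)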
International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2016
Joanna Isabelle Olszewska; J. Toman
This paper tackles the single-source shortest-path problem in the challenging context of navigation through real-world, natural environments such as a ski area, where traditional on-site signposts may be limited or unavailable. For this purpose, we propose a novel approach for planning the shortest path in a directed acyclic graph (DAG) built on geo-location data mapped from available web databases through Google Maps and/or Google Earth. Our new path-planning algorithm, called OPEN, is run against this resulting graph and provides the optimal path in a computationally efficient way. Our approach was demonstrated on real-world cases, and it outperforms state-of-the-art path-planning algorithms.
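For reference, the classical way to solve single-source shortest paths on a weighted DAG is to relax edges in topological order, which runs in O(V + E); the sketch below shows that textbook routine with made-up ski-area waypoints, not the OPEN algorithm itself.

from collections import defaultdict

def dag_shortest_path(edges, source, target):
    # edges: iterable of (u, v, weight); the graph is assumed acyclic and target reachable.
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order.
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: float("inf") for n in nodes}
    dist[source], prev = 0.0, {}
    for u in order:                      # relax outgoing edges in topological order
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v], prev[v] = dist[u] + w, u
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Made-up ski-area waypoints, purely for illustration.
edges = [("base", "lift_top", 12), ("base", "ridge", 20),
         ("lift_top", "ridge", 5), ("ridge", "summit", 8)]
print(dag_shortest_path(edges, "base", "summit"))  # (25.0, ['base', 'lift_top', 'ridge', 'summit'])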
International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management | 2015
Joanna Isabelle Olszewska
Efficiently building an ontology is a crucial task for most applications involving knowledge representation. In particular, applications dealing with dynamic processes that directly shape the ontological domain need the conceptualization of complex activities within this domain. For this purpose, we propose to develop an OWL ontology based on UML activity diagrams. Indeed, the Unified Modeling Language (UML) is a well-known visual language widely adopted for software specification and documentation. UML provides structural as well as behavioural notations, such as activity diagrams, which describe the flow of control and data through the various stages of a procedure. Our approach has been successfully validated in a case study of an ontology for a publication-repository domain.
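A minimal sketch of the general mapping idea, using rdflib and an invented example namespace (it does not reproduce the paper's ontology): each UML activity becomes an OWL class, and each control-flow edge is asserted with a flowsTo object property.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/pubrepo#")   # illustrative namespace only

def activity_diagram_to_owl(activities, flows):
    g = Graph()
    g.bind("ex", EX)
    g.add((EX.Activity, RDF.type, OWL.Class))
    g.add((EX.flowsTo, RDF.type, OWL.ObjectProperty))
    for name in activities:                      # each UML activity -> an OWL class
        g.add((EX[name], RDF.type, OWL.Class))
        g.add((EX[name], RDFS.subClassOf, EX.Activity))
    for src, dst in flows:                       # each control-flow edge -> a flowsTo assertion
        g.add((EX[src], EX.flowsTo, EX[dst]))
    return g

# Invented publication-repository activities, for illustration only.
g = activity_diagram_to_owl(
    activities=["SubmitPaper", "ReviewPaper", "PublishPaper"],
    flows=[("SubmitPaper", "ReviewPaper"), ("ReviewPaper", "PublishPaper")],
)
print(g.serialize(format="turtle"))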