Erland Jungert
Swedish Defence Research Agency
Publications
Featured research published by Erland Jungert.
international conference on pattern recognition | 1988
Erland Jungert
In earlier work it has been demonstrated that symbolic projections can be used as a spatial knowledge structure. Here it is shown how the basic language used in the symbolic projections can be extended. The motivation for this extension is the need for a means to perform spatial reasoning in, e.g., digitized images or maps.
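The underlying idea of symbolic projection can be sketched as follows, assuming point-like objects and Chang-style 2D strings in which `<` encodes strict ordering along an axis and `=` encodes equal coordinates (a simplification; the extended language discussed in the paper adds further operators):

```python
# Minimal sketch of symbolic projection via 2D strings, assuming objects are
# named points; "<" means strictly before along the axis, "=" means equal.

def projection(objects, axis):
    """Build the 1D symbolic string for one axis (0 = x, 1 = y)."""
    ordered = sorted(objects.items(), key=lambda kv: kv[1][axis])
    parts = [ordered[0][0]]
    for (_, prev_pos), (name, pos) in zip(ordered, ordered[1:]):
        parts.append("=" if pos[axis] == prev_pos[axis] else "<")
        parts.append(name)
    return "".join(parts)

def two_d_string(objects):
    """Return the pair of projection strings (u along x, v along y)."""
    return projection(objects, 0), projection(objects, 1)

# Example: a tiny "map" with three landmarks.
objs = {"a": (0, 2), "b": (3, 0), "c": (3, 5)}
u, v = two_d_string(objs)
print(u, v)  # a<b=c b<a<c
```

Spatial reasoning then reduces to string matching: for instance, "is a west of c?" becomes checking whether `a` precedes `c` in the x-projection string.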
Archive | 1996
Shi-Kuo Chang; Erland Jungert; Genoveffa Tortora
Contents:
Introduction, S.K. Chang et al.
Iconic Indexing by 2D Strings, S.K. Chang et al.
A Normalized Index for Image Databases, G. Petraglia et al.
Representation of 3D Symbolic and Binary Pictures Using 3D Strings, S.K. Chang and Y. Li
Spatial Knowledge Representation for Iconic Image Database, S.-Y. Lee and F.-J. Hsu
Ordering Information and Symbolic Projection, C. Schlieder
Determining the Views of a Moving Object Using Symbolic Projections, E. Jungert
A Logical Framework for Spatio-Temporal Indexing of Image Sequences, A. Del Bimbo and E. Vicario
A Generalized Approach for Image Indexing and Retrieval Based on 2D Strings, E.G.M. Petrakis and S.C. Orphanoudakis
Using 2D String Decision Trees for Symbolic Picture Retrieval, D.J. Buehrer et al.
Efficient Image Retrieval Algorithms for Large Spatial Databases, J.C.R. Tseng et al.
A Prototype Multimedia Database System Incorporating Iconic Indexing, T. Arndt and R. Kuppanna
A Two- and Three-Dimensional Ship Database Application, J. Hildebrandt and K. Tang
IEEE Transactions on Multimedia | 2004
Shi-Kuo Chang; Gennaro Costagliola; Erland Jungert; Francesco Orciuoli
Sensor data fusion imposes a number of novel requirements on query languages and query processing techniques. A spatial/temporal query language called ΣQL has been proposed to support the retrieval and fusion of multimedia information from multiple sources and databases. In this paper we investigate fusion techniques, multimedia data transformations and ΣQL query processing techniques for sensor data fusion. Fusion techniques, including fusion by the merge operation, the detection of moving objects, and the incorporation of belief values, have been developed. An experimental prototype has been implemented and tested to demonstrate the feasibility of these techniques.
IEEE Transactions on Software Engineering | 1985
Shi-Kuo Chang; Erland Jungert; Stefano Levialdi; Genoveffa Tortora; Tadao Ichikawa
This paper describes a generic image processing language IPL, and a programming environment supporting the language primitives for an image information system. The central notion of IPL is that it allows the user to navigate through the image database and manipulate images using generalized icons. The image processing language IPL consists of three subsets: the logical image processing language LIPL, the interactive image processing language IIPL, and the physical image processing language PIPL. This paper presents the main concepts of this generic language, some examples, and a scenario.
Information Fusion | 2007
Shi-Kuo Chang; Erland Jungert; Xin Li
Previous approaches to query processing do not allow queries to automatically combine results obtained from different information sources, i.e. they do not support information fusion. This work therefore introduces an approach to information fusion using a progressive query language and an interactive reasoner. The system consists of a query processor with fusion capability and a reasoner with learning capability. The query processor first executes a query to produce initial results. If these are uninformative, the reasoner, guided by the user, applies a rule to create a more elaborate query and returns it to the query processor to produce a more informative answer. What is novel in our approach is that application-dependent information fusion rules can be initially specified by the user and subsequently learned by the reasoner. Examples of progressive queries are drawn from multi-sensor information fusion applications.
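The processor-reasoner loop described above can be sketched in a few lines; the sources, the "informative" test and the refinement rule below are all invented for illustration and are not the paper's actual API:

```python
# Hypothetical sketch of the progressive-query loop: the query processor
# returns initial results, and if they are uninformative the reasoner applies
# a user-supplied, application-dependent rule to elaborate the query.

def progressive_query(query, process, informative, rules):
    results = process(query)
    for rule in rules:                  # reasoner: apply rules until satisfied
        if informative(results):
            break
        query = rule(query, results)    # elaborate the query
        results = process(query)
    return results

# Toy information sources: a camera reports positions, a radar reports speeds.
camera = {"v1": {"pos": (10, 20)}}
radar = {"v1": {"speed": 80}}
SOURCES = {"camera": camera, "radar": radar}

def process(query):
    fused = {}
    for s in query["sources"]:          # merge attributes from each source
        fused.update(SOURCES[s].get(query["target"], {}))
    return fused

informative = lambda r: {"pos", "speed"} <= r.keys()
add_radar = lambda q, r: {**q, "sources": q["sources"] + ["radar"]}

print(progressive_query({"sources": ["camera"], "target": "v1"},
                        process, informative, [add_radar]))
# -> {'pos': (10, 20), 'speed': 80}
```

The camera-only answer lacks a speed attribute, so the reasoner's rule widens the query to include the radar, and the fused answer is returned.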
Lecture Notes in Computer Science | 2002
Shi-Kuo Chang; Gennaro Costagliola; Erland Jungert
To support the retrieval and fusion of multimedia information from multiple real-time sources and databases, a novel approach for sensor-based query processing is described. The sensor dependency tree is used to facilitate query optimization. Through query refinement, one or more sensors may provide feedback information to the other sensors. The approach is also applicable to evolutionary queries that change in time and/or space, depending upon the temporal/spatial coordinates of the query originator.
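The feedback idea can be illustrated with a toy sketch (the sensors, detections and region format below are invented, not taken from the paper): a wide-area sensor reports candidate positions, and its output narrows the query put to a second, higher-resolution sensor.

```python
# Toy sketch of sensor feedback during query refinement: a coarse wide-area
# scan constrains the field of view of a second sensor. All sensors, data and
# the (x0, y0, x1, y1) region format are invented for illustration.

def radar_scan(region):
    # pretend the radar reports candidate detections; keep those in the region
    hits = [(12, 40), (55, 8), (140, 90)]
    x0, y0, x1, y1 = region
    return [(x, y) for (x, y) in hits if x0 <= x <= x1 and y0 <= y <= y1]

def ir_classify(point):
    # pretend the IR camera classifies whatever it is pointed at
    return {"point": point, "label": "vehicle"}

def fused_query(region):
    # the radar's answer refines (narrows) the query put to the IR camera
    return [ir_classify(p) for p in radar_scan(region)]

print(fused_query((0, 0, 100, 100)))
```

Only detections inside the queried region are passed on, so the second sensor never examines the discarded candidate at (140, 90).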
Journal of Visual Languages and Computing | 1999
Daniel Hernández; Erland Jungert
In this paper, we introduce a framework to represent the course of motion of point-like objects qualitatively, i.e. by making only as many distinctions as necessary in a given context. The need to describe the course of motion arises in many applications, such as robotics and motion planning, automatic control and surveillance systems, image sequence analysis and indexing, and animation. We use a bottom-up approach that starts from basic descriptional units for orientation, length and velocity, integrated in the concept of qualitative motion vectors (QMVs), and the operations on them. In order to describe sequences of QMVs we explore qualitative means of obtaining ordering information and of classifying significant turns. We then move on to issues of generalization, discerning global, local and 'turn-based' variants. Finally, we develop techniques for identifying higher-level motion patterns such as closed loops and spirals.
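A minimal sketch of a qualitative motion vector in this spirit, with invented quantization thresholds (the paper's context-dependent distinctions are not reproduced here): orientation is quantized into eight compass sectors and speed into three classes.

```python
import math

# Hedged sketch of a qualitative motion vector (QMV): a displacement between
# two time-stamped points is reduced to a compass sector and a speed class.
# The eight-sector scheme and the speed thresholds are illustrative choices.

DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def qmv(p0, p1, dt):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # shift by half a sector so each compass label is centered on its direction
    sector = DIRS[int((angle + math.pi / 8) // (math.pi / 4)) % 8]
    speed = math.hypot(dx, dy) / dt
    klass = "slow" if speed < 1 else "medium" if speed < 5 else "fast"
    return sector, klass

print(qmv((0, 0), (3, 3), 1.0))  # ('NE', 'medium')
```

A course of motion is then just a sequence of such pairs, on which ordering and turn classification can be defined symbolically.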
Archive | 1990
Tadao Ichikawa; Erland Jungert; Robert R. Korfhage
Contents:
I. Theory
1. Diagram Understanding: The Symbolic Descriptions Behind the Scenes
II. Design Systems
2. The Interface Construction Set
3. A Visual Environment for the Design of Distributed Systems
4. Generation of Visual Languages for Development of Knowledge-Based Systems
III. Visual Programming
5. Visual Man-Machine Interface for Program Design and Production
6. Hi-Visual Iconic Programming Environment
7. Alex: An Alexical Programming Language
IV. Algorithm Animation
8. Visual Programming of Program Visualizations: A Gestural Interface for Animating Algorithms
9. Principles of Aladdin and Other Algorithm Animation Systems
V. Simulation Animation
10. Simulacrum: A System Behavior Example Editor
11. Action Graphics: A Spreadsheet-Based Language for Animated Simulation
12. Animation Using Behavior Functions
VI. Applications
13. A Visual Environment for Liver Simulation Studies
14. A Spatial Knowledge Structure for Visual Information Systems
15. The Use of Visual Representations in Information Retrieval Applications
16. Generalizing the Sheet Language Paradigm
Epilogue
Storage and Retrieval for Image and Video Databases | 2004
Tobias Horney; Jörgen Ahlberg; Christina Grönwall; Martin Folkesson; Karin Silvervarg; Jorgen Fransson; Lena M. Klasen; Erland Jungert; Fredrik Lantz; Morgan Ulvklo
We present an approach to a general decision support system. The aim is to cover the complete process for automatic target recognition, from sensor data to the user interface. The approach is based on a query-based information system, and includes tasks like feature extraction from sensor data, data association, data fusion and situation analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude. The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown but detected) target. The attributes include orientation, size, speed, temperature, etc. These estimates are used to select the models of interest in the matching step, where the target is matched with a number of target models, returning a likelihood value for each model. Several methods and sensor data types are used in both steps. The user communicates with the system via a visual user interface, where, for instance, the user can mark an area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query language developed for this type of application, and an ontological system decides which algorithms should be invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers are given back to the user. The user does not need to have any detailed technical knowledge about the sensors (or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
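The two processing steps can be sketched as follows; the model library, attribute set, tolerance and scoring rule are invented for illustration and do not reflect the actual system:

```python
# Hedged sketch of the two-step recognition described above: estimated target
# attributes first prune the model library, then the surviving models are
# matched and each given a likelihood. All models and numbers are invented.

MODELS = {
    "tank":  {"length": 7.0, "max_speed": 60},
    "truck": {"length": 9.0, "max_speed": 90},
    "car":   {"length": 4.5, "max_speed": 180},
}

def recognize(est_length, est_speed, tol=2.0):
    # step 1: attribute estimates select the models of interest
    candidates = {m: a for m, a in MODELS.items()
                  if abs(a["length"] - est_length) <= tol
                  and est_speed <= a["max_speed"]}
    # step 2: the matching step returns a likelihood per surviving model
    return {m: 1.0 / (1.0 + abs(a["length"] - est_length))
            for m, a in candidates.items()}

print(recognize(est_length=7.5, est_speed=55))
```

For the estimates above, the car model is pruned in step 1 (length mismatch), and the tank scores higher than the truck in step 2 because its length is closer to the estimate.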
International Journal of Emergency Management | 2006
Erland Jungert; Niklas Hallberg; Amund Hunstad
Societies have always been challenged by crises, disasters and difficult times, although Western society has long been considered safe. In recent years, our perception of the world has changed due to terrorist attacks and other large-scale disasters. Handling uncertain situations in which conditions can change rapidly requires effective crisis management. To support crisis management, command and control (C2) systems can be used. However, a solid architecture for these systems is needed if they are to meet the requirements of crisis management, e.g., support for inter-organisational and situational awareness, including crisis, organisational and security awareness. The objective of this paper is to outline an architecture for C2 systems supporting network-centric crisis management. The cornerstones of this architecture are the C2 model, the service structure, the service allocation bridges and the distributed ontologies. The information flow and the IT security aspects are covered as well.