Publications


Featured research published by Jennifer Simonjan.


International Conference on Distributed Smart Cameras | 2013

Ella: Middleware for multi-camera surveillance in heterogeneous visual sensor networks

Bernhard Dieber; Jennifer Simonjan; Lukas Esterle; Bernhard Rinner; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández

Despite significant interest in the research community, the development of multi-camera applications is still quite challenging. This paper presents Ella - a dedicated publish/subscribe middleware system that facilitates distribution, component reuse and communication for heterogeneous multi-camera applications. We present the key components of this middleware system and demonstrate its applicability based on an autonomous multi-camera person tracking application. Ella is able to run on resource-limited and heterogeneous VSNs. We present performance measurements on different hardware platforms as well as operating systems.
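As a rough sketch of the publish/subscribe pattern the paper builds on (this is not Ella's actual API; the Broker class, method names and topic below are invented for illustration), decoupled camera modules might exchange data like this:

```python
# Minimal publish/subscribe sketch illustrating the decoupling idea behind a
# middleware like Ella. This is NOT Ella's API; all names are hypothetical.
from collections import defaultdict
from typing import Any, Callable


class Broker:
    """Routes published events to all callbacks subscribed to a topic."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(payload)


# Example wiring: a detector module publishes detections, a tracker consumes
# them -- neither module needs to know about the other.
broker = Broker()
broker.subscribe("detections", lambda det: print(f"tracker received {det}"))
broker.publish("detections", {"camera": "cam_3", "bbox": (12, 40, 80, 160)})
```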


IEEE Computer | 2015

Self-Aware and Self-Expressive Camera Networks

Bernhard Rinner; Lukas Esterle; Jennifer Simonjan; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández Domínguez; Peter R. Lewis

Smart cameras perform on-board image analysis, adapt their algorithms to changes in their environment, and collaborate with other networked cameras to analyze the dynamic behavior of objects. A proposed computational framework adopts the concepts of self-awareness and self-expression to more efficiently manage the complex tradeoffs among performance, flexibility, resources, and reliability. The Web extra at http://youtu.be/NKe31_OKLz4 is a video demonstrating CamSim, a smart camera simulation tool that enables users to test self-adaptive and self-organizing smart-camera techniques without deploying a smart-camera network.


Self-aware Computing Systems | 2016

Self-aware Object Tracking in Multi-Camera Networks

Lukas Esterle; Jennifer Simonjan; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández Domínguez; Bernhard Rinner

This chapter discusses another example of self-aware and self-expressive systems: a multi-camera network for object tracking. It provides a detailed description of how the concepts of self-awareness and self-expression can be implemented in a real network of smart cameras. In contrast to traditional cameras, smart cameras are able to perform image analysis on-board and collaborate with other cameras in order to analyse the dynamic behaviour of objects in partly unknown environments. Self-aware and self-expressive smart cameras are even able to reason about their current state and to adapt their algorithms in response to changes in their environment and the network. Self-awareness and self-expression allow them to manage the trade-off among performance, flexibility, resources and reliability at runtime. Due to the uncertainties and dynamics in the network, a fixed configuration of the cameras is infeasible. We adopt the concepts of self-awareness and self-expression to autonomously monitor the state and progress of each camera in the network and to adapt its behaviour to changing conditions. In this chapter we focus on describing the building blocks for self-aware camera networks and demonstrate the key characteristics in a multi-camera object tracking application, both in simulation and in a real camera network. The proposed application implements the goal sharing with time-awareness capability pattern, including meta-self-awareness capabilities as discussed in Chapter 5. Furthermore, the distributed camera network employs the middleware system described in Chapter 11 to facilitate distributed coordination of tracking responsibilities. Moreover, the application uses socially inspired techniques and mechanisms discussed in Chapter 7.
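A minimal sketch of the monitor-and-adapt loop that self-awareness and self-expression imply, assuming hypothetical state variables and strategy names (this is not the chapter's implementation):

```python
# Illustrative sketch (not the chapter's actual implementation) of the basic
# self-awareness/self-expression loop: the camera observes its own state and
# adapts its behaviour in response. All names and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class CameraState:
    tracking_confidence: float  # 0..1, how well the current tracker performs
    cpu_load: float             # 0..1, fraction of the processing budget used


def self_express(state: CameraState) -> str:
    """Pick a tracking strategy based on the camera's awareness of its state."""
    if state.tracking_confidence < 0.3:
        return "handover"        # ask neighbouring cameras to take over
    if state.cpu_load > 0.8:
        return "lightweight"     # fall back to a cheaper tracking algorithm
    return "full"                # keep running the full tracker


# Example: a camera that is overloaded but still tracking well.
print(self_express(CameraState(tracking_confidence=0.7, cpu_load=0.9)))  # lightweight
```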


International Conference on Pervasive Computing | 2016

Autonomous, lightweight calibration of visual sensor networks with dense coverage

Jennifer Simonjan; Bernhard Rinner

We present an algorithm for autonomous network calibration of visual sensor networks, which are becoming increasingly pervasive as they can be found in many everyday environments. The proposed algorithm works in a fully decentralized way and minimizes the use of cost-intensive vision algorithms. To achieve network calibration, our approach relies on jointly detected objects and geometric relations between camera nodes. Distances and angles are the only information that needs to be exchanged between nodes. The process works iteratively until the cameras have determined the relative position and orientation of their neighbors. Preliminary results are demonstrated using our visual sensor network simulator.
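As an illustration of the kind of geometry such a calibration can rest on, the following noise-free sketch recovers the relative pose of one camera from two jointly observed objects described by distance and bearing; the function names are hypothetical and this is not the paper's full iterative algorithm:

```python
# Hedged sketch of a single pairwise calibration step: two cameras that both
# observe the same two objects (distance + bearing in their own local frames)
# can recover their relative position and orientation. Noise-free illustration.
import math


def polar_to_xy(distance: float, bearing: float) -> tuple[float, float]:
    """Convert a (distance, bearing) observation into local x/y coordinates."""
    return distance * math.cos(bearing), distance * math.sin(bearing)


def relative_pose(obs_a, obs_b):
    """Pose of camera B in camera A's frame from two jointly observed objects.

    obs_a, obs_b: lists of two (distance, bearing) tuples for the same objects,
    given in the same order. Returns (x, y, orientation) of camera B in A's frame.
    """
    (a1, a2) = [polar_to_xy(*o) for o in obs_a]
    (b1, b2) = [polar_to_xy(*o) for o in obs_b]
    # Rotation that aligns the object baseline seen by B with the one seen by A.
    angle_a = math.atan2(a2[1] - a1[1], a2[0] - a1[0])
    angle_b = math.atan2(b2[1] - b1[1], b2[0] - b1[0])
    theta = angle_a - angle_b
    # Translation: where object 1 would land if B's frame were rotated into A's.
    rx = b1[0] * math.cos(theta) - b1[1] * math.sin(theta)
    ry = b1[0] * math.sin(theta) + b1[1] * math.cos(theta)
    return a1[0] - rx, a1[1] - ry, theta


# Example: camera B sitting at (2, 0) in A's frame, rotated by 90 degrees,
# both cameras observing objects at A-frame positions (1, 1) and (3, 2).
obs_from_a = [(math.sqrt(2), math.atan2(1, 1)), (math.sqrt(13), math.atan2(2, 3))]
obs_from_b = [(math.sqrt(2), math.atan2(1, 1)), (math.sqrt(5), math.atan2(-1, 2))]
print(relative_pose(obs_from_a, obs_from_b))  # approximately (2.0, 0.0, 1.571)
```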


Self-aware Computing Systems | 2016

Middleware Support for Self-aware Computing Systems

Jennifer Simonjan; Bernhard Dieber; Bernhard Rinner

The implementation of a distributed self-aware computing system (SACS) typically requires a substantial software infrastructure. A middleware system with dedicated services for self-awareness and self-expression can therefore support the development of SACS applications. In this chapter we show the advantages of using a middleware system as the basis for a self-aware computing system. We identify requirements for middleware systems to support the development of self-aware applications. By providing facilities for communication, decoupling and transparency, middleware systems can provide essential features needed in SACS. We compare different middleware paradigms and their suitability to support self-awareness in distributed applications. We argue that the publish/subscribe paradigm is very well suited for this application area since it supports modularisation and decoupling. Units can be added to and removed from existing applications and may well be reused in new applications. Thus, SACS can be constructed by recombining existing publish/subscribe modules. In addition, we present details of publish/subscribe and introduce our middleware implementation called Ella. We describe how different aspects of a SACS and patterns for self-aware applications can be represented using Ella. We present different communication paradigms in Ella (broadcasting, peer-to-peer) as well as decoupling mechanisms provided by the middleware. We argue that SACS applications can be developed (i) faster, (ii) more efficiently and (iii) more reliably with Ella. Finally, Chapter 13 presents a self-aware and self-expressive multi-camera application which has been implemented with Ella.


International Conference on Pervasive Computing | 2015

Towards large-scale pervasive smart camera networks

Jennifer Simonjan

Pervasive or ubiquitous computing refers to sensors and other electronic devices that are invisibly embedded into our everyday environment. Typical examples use wireless sensor networks to provide meaningful information to humans without requiring their interaction. The main focus is to make the environment more intelligent without disturbing the humans in it. Visual sensor networks (VSNs) are also a type of ubiquitous network, since cameras are omnipresent, being embedded into everyday objects such as smartphones. Cameras are becoming smaller and cheaper, enabling an invisible integration into our everyday environment. The size of the sensors is of particular interest in order to enable dense sensor coverage without being disturbing. Small cameras, such as embedded cameras, have various resource constraints including processing power, memory size and sensor resolution. Thus, building an invisible, pervasive VSN brings up certain challenges.


International Conference on Nanoscale Computing and Communication | 2018

Nano-cameras: a key enabling technology for the internet of multimedia nano-things

Jennifer Simonjan; Josep Miquel Jornet; Ian F. Akyildiz; Bernhard Rinner

Nanotechnology is enabling the development of a new generation of devices which are able to sense, process and communicate, while being in the scale of tens to hundreds of cubic nanometers. Such small, imperceptible devices enhance not only current applications but enable entirely new paradigms. This paper introduces the concept of nano-cameras, which are built upon nanoscale photodetectors, lenses and electronic circuitry. The state-of-the-art in nanoscale photodetectors and lenses is presented and the expected performance of nano-cameras is numerically evaluated through simulation studies. Finally, the open challenges towards integrating nano-cameras in practical applications and ultimately building the Internet of Multimedia Nano-Things are discussed.
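As a rough, generic back-of-the-envelope illustration of why lens dimensions constrain such devices (this is not the paper's actual evaluation or model), the classical diffraction limit already ties angular resolution to aperture size:

```python
# Generic illustration (not the paper's evaluation): the Rayleigh criterion
# theta ~ 1.22 * wavelength / aperture ties angular resolution to aperture size,
# so sub-wavelength optics are heavily diffraction-limited. The small-angle
# formula is only indicative once the aperture approaches the wavelength.
import math

WAVELENGTH_M = 550e-9  # green light, roughly the middle of the visible band

for aperture_m in (1e-3, 10e-6, 500e-9):  # 1 mm, 10 um, 500 nm apertures
    theta_rad = 1.22 * WAVELENGTH_M / aperture_m
    print(f"aperture {aperture_m:9.1e} m -> "
          f"diffraction limit {math.degrees(theta_rad):8.3f} deg")
```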


International Conference on Distributed Smart Cameras | 2017

Self-calibration and Cooperative State Estimation in a Resource-aware Visual Sensor Network

Jennifer Simonjan; Melanie Schranz; Bernhard Rinner

In this paper we present an algorithm which enables distributed visual sensor networks to autonomously calibrate the network and dynamically build clusters to achieve cooperative object tracking based on state estimation. A main focus is on resource awareness and efficiency, since we aim for low-power embedded smart camera networks. We do not require any human intervention or a-priori information about the network topology to achieve calibration and tracking. Camera nodes first estimate relative positions and orientations and then use the common coordinate system to enable cooperative state estimation. For that purpose, cameras dynamically build clusters depending on their available resources. New nodes joining the network are discovered, and failing nodes do not prevent the others from performing their tasks. Compared to other methods, our approach is able to handle not only sensor measurement errors but also faulty camera positions gathered during the network calibration process.
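One way to picture the cooperative state-estimation step (a hedged sketch, not the paper's algorithm) is an inverse-variance weighted fusion of the cluster members' estimates, so that cameras with noisier measurements or more uncertain positions contribute less:

```python
# Hedged sketch (not the paper's algorithm) of a fusion step in cooperative
# state estimation: cluster members report an object-position estimate with an
# uncertainty, and the estimates are combined by inverse-variance weighting.
from dataclasses import dataclass


@dataclass
class Estimate:
    x: float          # estimated object position (world frame), x coordinate
    y: float          # estimated object position (world frame), y coordinate
    variance: float   # scalar uncertainty of this camera's estimate


def fuse(estimates: list[Estimate]) -> tuple[float, float]:
    """Inverse-variance weighted average of the cluster members' estimates."""
    weights = [1.0 / e.variance for e in estimates]
    total = sum(weights)
    fused_x = sum(w * e.x for w, e in zip(weights, estimates)) / total
    fused_y = sum(w * e.y for w, e in zip(weights, estimates)) / total
    return fused_x, fused_y


# Example: the less uncertain camera (variance 0.1) dominates the fused result.
print(fuse([Estimate(1.0, 2.0, 0.1), Estimate(1.6, 2.4, 1.0)]))
```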


Distributed Computing in Sensor Systems | 2017

Distributed Visual Sensor Network Calibration Based on Joint Object Detections

Jennifer Simonjan; Bernhard Rinner


Archive | 2015

Cover Feature: Self-Aware and Self-Expressive Systems

Bernhard Rinner; Lukas Esterle; Jennifer Simonjan; Georg Nebehay; Roman P. Pflugfelder; Gustavo Fernández Domínguez; Peter R. Lewis

Collaboration


Dive into Jennifer Simonjan's collaboration.

Top Co-Authors

Bernhard Rinner (Alpen-Adria-Universität Klagenfurt)
Georg Nebehay (Austrian Institute of Technology)
Lukas Esterle (Vienna University of Technology)
Roman P. Pflugfelder (Austrian Institute of Technology)
Bernhard Dieber (Alpen-Adria-Universität Klagenfurt)
Gustavo Fernández (Austrian Institute of Technology)
Melanie Schranz (Alpen-Adria-Universität Klagenfurt)
Ian F. Akyildiz (Georgia Institute of Technology)