Aydin Arpa
Sarnoff Corporation
Publication
Featured research published by Aydin Arpa.
Battlespace digitization and network-centric warfare. Conference | 2002
Rakesh Kumar; Harpreet S. Sawhney; Aydin Arpa; Supun Samarasekera; Manoj Aggarwal; Stephen Charles Hsu; David Nistér; Keith J. Hanna
In a typical security and monitoring system, a large number of networked cameras are installed at fixed positions around a site under surveillance. There is generally no global view or map that shows the guard how the views of different cameras relate to one another. Individual cameras may offer pan, tilt, and zoom capabilities, and the guard may be able to follow an intruder with one camera and then pick him up with another, but such tracking can be difficult, and the handoff between cameras is disorienting. The guard has no ability to continuously shift his viewpoint. Moreover, current systems do not scale with the number of cameras: the system becomes more unwieldy as cameras are added. In this paper, we present the system and key algorithms for remote immersive monitoring of an urban site using a blanket of video cameras. The guard monitors the world through a live 3D model that is constantly updated from different directions by the multiple video streams. The world can be monitored remotely from any virtual viewpoint: the observer can view the entire scene from afar for a bird's-eye view, or fly and zoom in to see activity of interest up close. A 3D site model of the urban area is constructed and used as the glue for combining the multiple video streams. In addition, each video camera has smart image processing associated with it, which detects moving and new objects in the scene and recovers their 3D geometry as well as the pose of the camera with respect to the world model. Each video stream is overlaid on the 3D model using the recovered pose. Virtual views of the scene are generated by combining the various video streams, the background 3D model, and the recovered 3D geometry of foreground objects. These moving objects are highlighted on the 3D model and serve as cues for the operator to direct his viewpoint.
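The core registration step described above, overlaying a video stream on the 3D site model using the recovered camera pose, amounts to projecting model points into the image through the standard pinhole camera model. The following is a minimal sketch of that projection (not the authors' code; the function name, intrinsics, and pose values are illustrative assumptions):

```python
import numpy as np

def project_to_image(points_world, K, R, t):
    """Project Nx3 world points to pixel coordinates given camera
    intrinsics K and recovered pose (rotation R, translation t)."""
    pts_cam = points_world @ R.T + t          # world -> camera coordinates
    pts_img = pts_cam @ K.T                   # apply intrinsic matrix
    return pts_img[:, :2] / pts_img[:, 2:3]   # perspective divide

# Hypothetical camera: 1000-pixel focal length, principal point (320, 240),
# aligned with the world frame (identity rotation, zero translation).
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A model point 10 m in front of the camera and 1 m to its right.
pixels = project_to_image(np.array([[1.0, 0.0, 10.0]]), K, R, t)
print(pixels)  # [[420. 240.]]
```

With this mapping in both directions, a pixel from each camera can be draped onto the model surface it sees (projective texturing), which is how multiple live streams can be fused into one virtual viewpoint.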
Archive | 2002
Aydin Arpa; Keith J. Hanna; Rakesh Kumar; Supun Samarasekera; Harpreet S. Sawhney; Manoj Aggarwal; David Nistér; Stephen Charles Hsu
Archive | 2005
Supun Samarasekera; Keith J. Hanna; Harpreet S. Sawhney; Rakesh Kumar; Aydin Arpa; Vincent Paragano; Thomas Germano; Manoj Aggarwal
Eurographics | 2002
Harpreet S. Sawhney; Aydin Arpa; Rakesh Kumar; Supun Samarasekera; Manoj Aggarwal; Steven C. Hsu; David Nistér; Keith J. Hanna
Archive | 2004
Aydin Arpa; Keith J. Hanna; Supun Samarasekera; Rakesh Kumar; Harpreet S. Sawhney
Archive | 2004
Supun Samarasekera; Rakesh Kumar; Keith J. Hanna; Harpreet S. Sawhney; Aydin Arpa; Manoj Aggarwal; Vincent Paragano
Archive | 2005
Supun Samarasekera; Vincent Paragano; Harpreet S. Sawhney; Manoj Aggarwal; Keith J. Hanna; Rakesh Kumar; Aydin Arpa; Philip Miller
Archive | 2005
Vincent Paragano; Rakesh Kumar; Keith J. Hanna; Hui Cheng; Aydin Arpa
Archive | 2005
Supun Samarasekera; Vincent Paragano; Harpreet S. Sawhney; Manoj Aggarwal; Keith J. Hanna; Rakesh Kumar; Aydin Arpa; Philip Miller
Archive | 2005
Manoj Aggarwal; Aydin Arpa; Thomas Germano; Keith J. Hanna; Rakesh Kumar; Vincent Paragano; Supun Samarasekera; Harpreet S. Sawhney