Gauri M. Jog
Georgia Institute of Technology
Publications
Featured research published by Gauri M. Jog.
Advanced Engineering Informatics | 2011
Ioannis Brilakis; Man-Woo Park; Gauri M. Jog
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel-path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining, and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. Combining the 2D coordinates using the known camera configuration (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
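The per-frame 3D computation described above amounts to stereo triangulation: given one 2D observation of the same entity in each calibrated view, solve for the 3D point. A minimal sketch in Python/NumPy, assuming the two cameras' 3×4 projection matrices are already known from calibration (the function name is illustrative, not from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    projections x1, x2 in two views with 3x4 projection matrices
    P1, P2. Each observation contributes two rows to A, and the
    point is the null vector of A."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null vector = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

For example, with a first camera at the origin and a second translated one unit along x, a point at depth 5 on the first camera's optical axis projects to (0, 0) and (-0.2, 0), and `triangulate` returns (0, 0, 5).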
Journal of Computing in Civil Engineering | 2013
Christian Koch; Gauri M. Jog; Ioannis Brilakis
Potholes, as a severe type of pavement distress, are currently identified and assessed manually in pavement-maintenance programs. This manual process is time-consuming and labor-intensive. Existing methods for automated pothole detection either rely on expensive and high-maintenance range sensors or make use of acceleration data, which only apply when the pothole is on the tires’ path. The authors’ previous work has proposed and validated a camera-based pothole-detection method. However, this method is limited to single frames and cannot determine the severity of potholes. This paper presents a novel method that addresses these issues by incrementally updating a representative texture template for intact pavement regions and using a vision tracker to reduce the computational effort, improve the detection reliability, and count potholes efficiently. The improved method was implemented and tested on real data. The results indicate a significant capability and performance increase of this method over its predecessor.
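The incremental template update described above can be illustrated with a simple exponential running average over a texture descriptor, plus a distance test that flags distressed regions. This is a hedged sketch only: the descriptor, the learning rate `alpha`, and the threshold are assumptions, not the paper's actual parameters.

```python
import numpy as np

def update_template(template, patch, alpha=0.1):
    """Incrementally update the intact-pavement texture template with
    a new texture descriptor via an exponential running average.
    alpha is an assumed learning rate."""
    if template is None:
        return np.asarray(patch, float)
    return (1 - alpha) * template + alpha * np.asarray(patch, float)

def is_distressed(region_desc, template, thresh=1.0):
    """Flag a region whose texture descriptor deviates from the
    intact-pavement template by more than an assumed Euclidean
    distance threshold."""
    diff = np.asarray(region_desc, float) - template
    return float(np.linalg.norm(diff)) > thresh
```

Regions flagged as distressed would then be handed to the vision tracker, so each physical pothole is counted once across frames rather than re-detected in every frame.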
International Conference on Computing in Civil Engineering, American Society of Civil Engineers | 2012
Gauri M. Jog; Christian Koch; Mani Golparvar-Fard; Ioannis Brilakis
Current pavement condition assessment methods are predominantly manual and time consuming. Existing pothole recognition and assessment methods rely on 3D surface reconstruction, which incurs high equipment and computational costs, or on acceleration data, which provides only preliminary results. This paper presents an inexpensive solution that automatically detects and assesses the severity of potholes using vision-based data for both 2D recognition and 3D reconstruction. The combination of these two techniques improves recognition results by exploiting the visual and spatial characteristics of potholes, and measures the properties (width, number, and depth) used to assess their severity. The number of potholes is deduced with 2D recognition, whereas the width and depth of the potholes are obtained with 3D reconstruction. The proposed method is validated on several actual potholes. The results show that this inexpensive, vision-based method holds promise to improve automated pothole detection and severity assessment.
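Once width and depth have been measured from the 3D reconstruction, mapping them to a severity class is a simple thresholding step. The sketch below is purely illustrative: the cut-off values are assumptions, not the paper's (agencies typically take such thresholds from standards such as ASTM D6433).

```python
def pothole_severity(depth_mm, width_mm):
    """Map a measured pothole depth and width to a severity class.
    The cut-off values below are illustrative assumptions only."""
    if depth_mm > 50 or width_mm > 750:
        return "high"
    if depth_mm > 25 or width_mm > 450:
        return "medium"
    return "low"
```

For example, a shallow 10 mm, 300 mm-wide pothole rates "low" under these assumed cut-offs, while a 60 mm-deep one rates "high".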
Journal of Computing in Civil Engineering | 2014
Burcin Becerik-Gerber; Mohsin Siddiqui; Ioannis Brilakis; Omar El-Anwar; Nora El-Gohary; Tarek Mahfouz; Gauri M. Jog; Shuai Li; Amr Kandil
This paper presents an exploratory analysis to identify civil engineering challenges that can be addressed with further data sensing and analysis (DSA) research. An initial literature review was followed by a web-based survey to solicit expert opinions in each civil engineering subdiscipline to select challenges that can be addressed by civil engineering DSA research. A total of 10 challenges were identified, and evidence of the economic, environmental, and societal impacts of these challenges is presented through a review of the literature. The challenges presented in this paper are high building energy consumption, crude estimation of sea level, increased soil and coastal erosion, inadequate water quality, untapped and depleting groundwater, increasing traffic congestion, poor infrastructure resilience to disasters, poor and degrading infrastructure, need for better mining and coal ash waste disposal, and low construction site safety. The paper aims to assist the civil engineering research community ...
Advanced Engineering Informatics | 2011
Gauri M. Jog; Habib Fathi; Ioannis Brilakis
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of computational time required impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution that finds F by combining two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640×480) involving scenes of the built environment.
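The normalized eight-point algorithm at the core of this pipeline can be sketched in Python/NumPy as follows. This is a standard textbook formulation (Hartley normalization, linear solve, rank-2 enforcement), not the paper's exact implementation, and the SURF correspondence step that would feed it is omitted:

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: translate points to their centroid and
    scale so the mean distance from the origin is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_eight_point(x1, x2):
    """Estimate F from >= 8 correspondences x1[i] <-> x2[i]
    (N x 2 pixel coordinates) with the normalized eight-point
    algorithm."""
    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 (singularity) constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and fix the scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

On noise-free synthetic correspondences the epipolar residuals x2ᵀ F x1 come out at machine precision; in practice the SURF matches supply the (noisier) input correspondences.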
28th International Symposium on Automation and Robotics in Construction | 2011
Man-Woo Park; Gauri M. Jog; Ioannis Brilakis
Vision-based tracking can provide the spatial location of project-related entities such as equipment, workers, and materials in a large-scale, congested construction site. It tracks entities in a video stream by inferring their motion. To initiate the process, the pixel areas of the entities to be tracked across subsequent video frames must first be determined. To fully automate the process, this paper presents an automated way of initializing trackers using the Semantic Texton Forests (STFs) method. The STFs method simultaneously segments the image and classifies the segments based on low-level semantic information and context information. In this paper, the STFs method is tested on wheel loader recognition. In the experiments, wheel loaders are further divided into several parts, such as wheels and body, to help learn the context information. The results show 79% accuracy in recognizing the pixel areas of the wheel loader. These results indicate that the STFs method has the potential to automate the initialization process of vision-based tracking.
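The hand-off from recognition to tracking can be sketched as follows: given a per-pixel class map (such as the output of a semantic segmentation like STFs), extract the bounding box of the pixels classified as the target entity and use it as the tracker's initial region. The function below is an illustrative assumption about that glue step, not code from the paper:

```python
import numpy as np

def tracker_init_region(label_map, target_class):
    """Given a per-pixel class map (H x W integer array), return the
    bounding box (x_min, y_min, x_max, y_max) of the pixels labeled
    target_class, to hand to a 2D tracker as its initial region.
    Returns None if the entity was not found in this frame."""
    ys, xs = np.nonzero(label_map == target_class)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

A real pipeline would also merge the per-part labels (wheels, body) into one entity mask before taking the bounding box.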
The 2011 ASCE International Workshop on Computing in Civil Engineering, American Society of Civil Engineers | 2011
Gauri M. Jog; Shuai Li; Burcin Becerik-Gerber; Ioannis Brilakis
The objective of this study was to identify challenges in civil and environmental engineering that can potentially be solved using data sensing and analysis research. The challenges were recognized through extensive literature review in all disciplines of civil and environmental engineering. The literature review included journal articles, reports, expert interviews, and magazine articles. The challenges were ranked by comparing their impact on cost, time, quality, environment and safety. The result of this literature review includes challenges such as improving construction safety and productivity, improving roof safety, reducing building energy consumption, solving traffic congestion, managing groundwater, mapping and monitoring the underground, estimating sea conditions, and solving soil erosion problems. These challenges suggest areas where researchers can apply data sensing and analysis research.
International Workshop on Computing in Civil Engineering 2009 | 2009
Gauri M. Jog; Ioannis Brilakis
Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily change due to external factors such as wind, vibration, or an unintentional push from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, and the Kruppa equations, and variations of these, to achieve calibration. However, most of these methods do not consider all constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally meant for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which encodes the epipolar constraints; the intrinsic and extrinsic properties of the cameras are then recovered from this calculation. Test results are presented along with recommendations for further improvement.
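The recovery of extrinsic parameters from the fundamental matrix can be illustrated with the standard essential-matrix decomposition (a textbook procedure, not necessarily the paper's exact one), assuming the intrinsic matrices K1, K2 are known:

```python
import numpy as np

def pose_from_fundamental(F, K1, K2):
    """Form the essential matrix E = K2^T F K1 and decompose it into
    the four candidate relative poses (R, t) via SVD; the physically
    valid pose is the one placing triangulated points in front of
    both cameras (that cheirality check is omitted here)."""
    E = K2.T @ F @ K1
    U, _, Vt = np.linalg.svd(E)
    # Flip the sign if needed so the recovered matrices are proper
    # rotations (det = +1).
    if np.linalg.det(U @ Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation is recovered only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Note that the translation is recovered only up to an unknown scale, which is why stereo metric applications still need at least one known distance (e.g., the camera baseline) to fix real-world units.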
Automation in Construction | 2011
Gauri M. Jog; Ioannis Brilakis; Demos C. Angelides
Archive | 2011
Gauri M. Jog; Man-Woo Park; Ioannis Brilakis