Junqiao Zhao
Tongji University
Publications
Featured research published by Junqiao Zhao.
Computers, Environment and Urban Systems | 2014
Filip Biljecki; Hugo Ledoux; J.E. Stoter; Junqiao Zhao
The level of detail in 3D city modelling, despite its usefulness and importance, is still an ambiguous and undefined term. It is used for the communication of how thoroughly real-world features have been acquired and modelled, as we demonstrate in this paper. Its definitions vary greatly between practitioners, standards and institutions. We fundamentally discuss the concept, and we provide a formal and consistent framework to define discrete and continuous levels of detail (LODs), by determining six metrics that constitute it, and by discussing their quantification and their relations. The resulting LODs are discretisations of functions of metrics that can be specified in an acquisition–modelling specification form that we introduce. The advantages of this approach over existing paradigms are formalisation, consistency, continuity, and finer specification of LODs. As an example of the realisation of the framework, we derive a series of 10 discrete LODs. We give a proposal for the integration of the framework within the OGC standard CityGML (through the Application Domain Extension).
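The acquisition–modelling specification form described above is essentially a structured record of the value each metric takes for a given LOD, and the discrete LODs are samples of a continuous specification. The Python sketch below is only an illustration of that idea under our own assumptions; the field names are hypothetical placeholders, not the six metrics defined in the paper.

from dataclasses import dataclass

@dataclass
class LodSpecification:
    """Illustrative acquisition-modelling specification for one discrete LOD;
    the field names are hypothetical placeholders, not the paper's six metrics."""
    minimum_feature_size_m: float   # smallest real-world feature that is modelled
    positional_accuracy_m: float    # accuracy with which features are acquired
    texture_resolution_cm: float    # appearance-related metric (placeholder)

def derive_discrete_lods(metric_fn, n=10):
    """Discretise a continuous specification: sample a function mapping a
    continuous LOD parameter in [0, 1] to a specification at n evenly spaced points."""
    return [metric_fn(i / (n - 1)) for i in range(n)]

# Toy usage: finer features and better accuracy as the LOD parameter grows.
lods = derive_discrete_lods(lambda t: LodSpecification(
    minimum_feature_size_m=2.0 * (1.0 - t) + 0.1,
    positional_accuracy_m=1.0 * (1.0 - t) + 0.05,
    texture_resolution_cm=50.0 * (1.0 - t) + 1.0), n=10)
print(lods[0], lods[-1], sep="\n")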
Transactions in GIS | 2016
Sjors Donkers; Hugo Ledoux; Junqiao Zhao; J.E. Stoter
Although the international standard CityGML has five levels of detail (LODs), the vast majority of available models are the coarse ones (up to LOD2, i.e. block-shaped buildings with roofs). LOD3 and LOD4 models, which contain architectural details such as balconies, windows and rooms, rarely exist because, unlike coarser LODs, their construction requires several datasets that must be acquired with different technologies, and often extensive manual work is needed. In this article we investigate an alternative to obtaining CityGML LOD3 models: the automatic conversion from already existing architectural models (stored in the IFC format). Existing conversion algorithms mostly focus on the semantic mappings and convert all the geometries, which yields CityGML models having poor usability in practice (spatial analysis, for instance, is not possible). We present a conversion algorithm that accurately applies the correct semantics from IFC models and that constructs valid CityGML LOD3 buildings by performing a series of geometric operations in 3D. We have implemented our algorithm and we demonstrate its effectiveness with several real-world datasets. We also propose specific improvements to both standards to foster their integration in the future.
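The semantic-mapping step the article describes can be pictured as a lookup from IFC entity types to CityGML LOD3 boundary-surface classes. The sketch below is a simplified illustration under our own assumptions, not the paper's mapping table; real conversions also inspect geometry and attributes before assigning a class.

# Illustrative (not the paper's) mapping from IFC entity types to
# CityGML LOD3 semantic classes.
IFC_TO_CITYGML = {
    "IfcWall": "WallSurface",
    "IfcWallStandardCase": "WallSurface",
    "IfcRoof": "RoofSurface",
    "IfcSlab": "RoofSurface",   # may also be a floor or ground surface, context dependent
    "IfcWindow": "Window",
    "IfcDoor": "Door",
}

def map_semantics(ifc_entity_type: str) -> str:
    """Return the CityGML class for an IFC entity type, with a generic fallback."""
    return IFC_TO_CITYGML.get(ifc_entity_type, "ClosureSurface")

print(map_semantics("IfcWindow"))   # Window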
Computers & Graphics | 2010
Qing Zhu; Junqiao Zhao; Zhiqiang Du; Yeting Zhang
Aiming at the fundamental issue of the optimal design of discrete levels of detail (LODs) for the visualization of complicated 3D building facades, this paper presents a new quantitative analytical method for perceptible 3D details based on a perceptual metric. First, the perceptual metric is defined as a quantitative indicator of the visual perceptibility of facade details at a given viewing distance. Then, based on properties of the human visual system, an algorithm employing the 2D discrete wavelet transform and the contrast sensitivity function is developed to extract the value of the perceptual metric from a rendered image of the facade. Finally, a perceptual metric function is defined from the perceptual metric values extracted at equally spaced viewing distances, and a minimum detail redundancy model is proposed for the optimal design of discrete LODs. This method provides quantitative guidance for generating discrete LODs, and the experimental results demonstrate its effectiveness and potential.
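As a rough illustration of this kind of measure (not the authors' exact algorithm), the sketch below weights 2D wavelet subbands of a rendered facade image by a contrast sensitivity function. The Mannos-Sakrison CSF and the per-level frequency assignment are assumptions made here for the example; in the paper the relation to viewing distance is central.

import numpy as np
import pywt  # PyWavelets

def csf_mannos_sakrison(f_cpd):
    """Contrast sensitivity (Mannos-Sakrison approximation), f in cycles/degree."""
    a = 0.0192 + 0.114 * f_cpd
    return 2.6 * a * np.exp(-(0.114 * f_cpd) ** 1.1)

def perceptual_metric(gray_image, peak_freq_cpd=8.0, levels=4):
    """Sum CSF-weighted detail energy over wavelet levels. Assumes the finest
    level corresponds to peak_freq_cpd and each coarser level to half that
    frequency; in practice this depends on viewing distance and display size."""
    coeffs = pywt.wavedec2(gray_image.astype(float), 'haar', level=levels)
    score = 0.0
    for k, (cH, cV, cD) in enumerate(reversed(coeffs[1:]), start=1):
        f = peak_freq_cpd / (2 ** (k - 1))       # k = 1 is the finest level
        weight = csf_mannos_sakrison(f)
        score += weight * (np.abs(cH).mean() + np.abs(cV).mean() + np.abs(cD).mean())
    return score

img = np.random.rand(256, 256)   # stand-in for a rendered facade image (grayscale)
print(perceptual_metric(img))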
Archive | 2011
Qing Zhu; Junqiao Zhao; Zhiqiang Du; Yeting Zhang; Weiping Xu; Xiao Xie; Yulin Ding; Fei Wang; Tingsong Wang
In recent years, the integration of semantics into 3D city models has become a consensus. The CityGML standard laid the foundation for the storage and application of semantics, which has boosted progress in semantic 3D city modeling. This paper reports an extended semantic model based on CityGML and its visual applications, developed in the context of a three-dimensional GIS project in China. First, the concepts Room, Corridor and Stair are derived from the concept Space, which corresponds to the notion of Room in CityGML; these concepts benefit indoor navigation applications. The model also supports geological models, which enables underground analysis. Second, a semi-automatic data integration tool is developed. The types of semantic concept are defined based on the Technical Specification for Three-Dimensional City Modeling of China, which leads to an adaptive way to assign semantics to pure geometry. To better visualize the semantically enriched models, two fundamental techniques, data reduction and selective representation, are then introduced. The results show that semantics not only improve the performance of exploration tasks but also enhance the efficiency of spatial cognition. Finally, two exploration cases are presented. The first is indoor navigation, where the semantic model is used to extract the geometric path and a semantics-enhanced navigation routine greatly enriches ordinary navigation applications. The second is a unified profiler, where semantics are incorporated to fill cross-sections correctly and to ensure topological and semantic consistency.
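The indoor navigation case can be thought of as building a graph over the semantic space concepts (Room, Corridor, Stair) and searching it for a path. The following is a minimal sketch under our own assumptions, not the implementation described in the paper; the identifiers and distances are invented for illustration.

import heapq

# Hypothetical adjacency of semantic spaces; edge weights are walking distances.
ADJACENCY = {
    "Room_101": {"Corridor_1": 3.0},
    "Corridor_1": {"Room_101": 3.0, "Stair_A": 10.0, "Room_102": 4.0},
    "Stair_A": {"Corridor_1": 10.0, "Corridor_2": 6.0},
    "Corridor_2": {"Stair_A": 6.0, "Room_201": 5.0},
    "Room_102": {"Corridor_1": 4.0},
    "Room_201": {"Corridor_2": 5.0},
}

def shortest_path(start, goal, graph=ADJACENCY):
    """Dijkstra over the semantic space graph, returning (cost, path)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

print(shortest_path("Room_101", "Room_201"))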
Archive | 2014
Junqiao Zhao; J.E. Stoter; Hugo Ledoux
Three-dimensional (3D) city models based on the OGC CityGML standard have become increasingly available in the past few years. Although GIS applications call for standardized, geometrically and topologically rigorous 3D models, many existing visually convincing 3D city datasets contain weak or invalid geometry. These defects prohibit the downstream use of such models, so the models have to be repaired manually, which is complex and labour-intensive. Although model repair is already a popular research topic for CAD models and is becoming important in GIS, existing research focuses either on particular defects or on a particular geometric primitive. This paper therefore proposes a framework that covers the full set of validation requirements and provides ways to repair a CityGML model according to these requirements. First, the validity criteria for the CityGML geometric model are defined, which guarantee both rigorous geometry for analytical use and a flexible representation of geographic features. Then, a recursive repair framework aimed at obtaining a valid CityGML geometric model is presented. The geometric terms adopted in this paper are compliant with the ISO 19107 standard. Future work will implement the framework.
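A recursive validate-and-repair loop of the kind the paper outlines can be sketched as follows. The primitive hierarchy, the validator and the repair callbacks below are placeholders, not the paper's or ISO 19107's actual rules.

def repair(primitive, validate, fix, children=lambda p: ()):
    """Bottom-up repair sketch: repair constituent primitives first (e.g. the
    polygons of a shell before the shell, the shells of a solid before the solid),
    then validate and fix the primitive itself. Assumes child fixes happen in place;
    validate, fix and children are hypothetical callbacks."""
    for child in children(primitive):
        repair(child, validate, fix, children)
    attempts = 0
    while not validate(primitive) and attempts < 5:
        primitive = fix(primitive)   # a fix can introduce new defects, so re-validate
        attempts += 1
    return primitive

# Toy usage: a "ring" is valid when closed; the fix closes it.
ring = [(0, 0), (1, 0), (1, 1)]
print(repair(ring, validate=lambda r: r[0] == r[-1], fix=lambda r: r + [r[0]]))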
International Symposium on Neural Networks | 2017
Jiqian Li; Yan Wu; Junqiao Zhao; Linting Guan; Chen Ye; Tao Yang
With the rapid development of driverless cars, pedestrian detection has become a canonical instance of object detection. Although recent deep learning detectors such as RPN+BF and MS-CNN have shown excellent performance for pedestrian detection, there is still room for improvement, and the importance of the receptive field of the final features has been recognized by previous leading deep learning pedestrian detectors. Applying dilated convolution to feature learning for pedestrian detection, we construct a pedestrian detection framework built on a region proposal network and boosted decision trees. The pipeline of the proposed framework can be summarized as follows. First, a fine-tuned RPN with a specified aspect ratio is used to obtain candidate boxes and scores. Second, the designed dilated-convolution feature extraction model is used to compute features; since different dilation factors provide different receptive field scales, the features of different layers are concatenated with the dilated convolutional features to form the final features. Finally, the candidate boxes are sent to the boosted decision trees to be classified using the scores and features. We evaluate our method on the Caltech Pedestrian Detection Benchmark; compared with other state-of-the-art detection methods, the proposed framework with dilated convolution performs better.
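A minimal PyTorch sketch of the multi-dilation feature extraction and concatenation idea (not the paper's exact network) could look like the following; the channel counts and dilation factors are arbitrary choices for illustration.

import torch
import torch.nn as nn

class MultiDilationFeatures(nn.Module):
    """Extract features with several dilation factors and concatenate them,
    so the final feature map mixes receptive fields of different scales."""
    def __init__(self, in_channels=256, branch_channels=64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                      padding=d, dilation=d)   # padding=d keeps the spatial size
            for d in dilations
        ])

    def forward(self, x):
        return torch.cat([torch.relu(branch(x)) for branch in self.branches], dim=1)

features = MultiDilationFeatures()(torch.randn(1, 256, 32, 32))
print(features.shape)   # torch.Size([1, 192, 32, 32])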
International Journal of Computational Intelligence Systems | 2018
Linting Guan; Yan Wu; Junqiao Zhao
Recent deep convolutional neural network-based object detectors have shown promising performance when detecting large objects, but they are still limited in detecting small or partially occluded ones, in part because such objects convey limited information due to the small areas they occupy in images. Consequently, it is difficult for deep neural networks to extract sufficient distinguishing fine-grained features for high-level feature maps, which are crucial for the network to precisely locate small or partially occluded objects. There are two ways to alleviate this problem: the first is to use lower-level but larger feature maps to improve location accuracy, and the second is to use context information to increase classification accuracy. In this paper, we combine both methods by first constructing larger and more meaningful feature maps in top-down order and concatenating them, and subsequently fusing multilevel contextual information through pyramid pooling to construct context-aware features. We propose a unified framework called the Semantic Context Aware Network (SCAN) to enhance object detection accuracy. SCAN is simple to implement and can be trained end to end. We evaluate the proposed network on the KITTI challenge benchmark and demonstrate an improvement in precision.
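The two ingredients described above, top-down construction of larger feature maps and pyramid pooling of context, can be illustrated with the PyTorch sketch below; the channel counts, pooling bin sizes and fusion order are assumptions made for the example, not SCAN's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidContext(nn.Module):
    """Pool the feature map at several grid sizes, upsample back and concatenate,
    so each location also sees coarse, image-level context."""
    def __init__(self, channels=128, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins
        self.reduce = nn.ModuleList([nn.Conv2d(channels, channels // len(bins), 1)
                                     for _ in bins])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [x]
        for bin_size, conv in zip(self.bins, self.reduce):
            p = F.adaptive_avg_pool2d(x, bin_size)
            p = F.interpolate(conv(p), size=(h, w), mode='bilinear', align_corners=False)
            pooled.append(p)
        return torch.cat(pooled, dim=1)

# Top-down step: upsample a deep (semantic) map and concatenate it with a
# shallower (higher-resolution) map before adding pyramid context.
deep = torch.randn(1, 128, 16, 16)
shallow = torch.randn(1, 128, 32, 32)
merged = torch.cat([F.interpolate(deep, size=shallow.shape[2:], mode='bilinear',
                                  align_corners=False), shallow], dim=1)
context_aware = PyramidContext(channels=256, bins=(1, 2, 4))(merged)
print(context_aware.shape)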
Cognitive Systems Research | 2018
Tao Yang; Yan Wu; Junqiao Zhao; Linting Guan
Semantic image segmentation is one of the most challenging tasks in computer vision. In this paper, we propose a highly fused convolutional network, which consists of three parts: feature downsampling, combined feature upsampling and multiple predictions. We adopt a strategy of multiple upsampling steps and combine the feature maps of pooling layers with their corresponding unpooling layers. We then produce multiple pre-outputs, each generated from an unpooling layer by one-step upsampling, and concatenate these pre-outputs to obtain the final output. As a result, the proposed network makes intensive use of feature information by fusing and reusing feature maps. In addition, when training the model, we add multiple soft cost functions on the pre-outputs and final outputs, which reduces the attenuation of the loss signal as it is back-propagated. We evaluate our model on three major segmentation datasets: CamVid, PASCAL VOC and ADE20K. We achieve state-of-the-art performance on the CamVid dataset, as well as considerable improvements on the PASCAL VOC and ADE20K datasets.
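One way to picture the multiple pre-outputs with soft cost functions is as auxiliary losses on intermediate predictions summed with the loss on the final output; the weight, class count and shapes below are arbitrary illustration values, not the paper's settings.

import torch
import torch.nn.functional as F

def multi_output_loss(pre_outputs, final_output, target, aux_weight=0.4):
    """Cross-entropy on the final prediction plus down-weighted cross-entropy on
    each intermediate pre-output (all assumed upsampled to the target size)."""
    loss = F.cross_entropy(final_output, target)
    for pre in pre_outputs:
        loss = loss + aux_weight * F.cross_entropy(pre, target)
    return loss

# Toy usage: three pre-outputs and one final output for a 12-class segmentation task.
target = torch.randint(0, 12, (2, 64, 64))
outputs = [torch.randn(2, 12, 64, 64) for _ in range(4)]
print(multi_output_loss(outputs[:3], outputs[3], target))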
International Conference on Intelligent Computing | 2017
Yan Wu; Tao Yang; Junqiao Zhao; Linting Guan; Jiqian Li
Autonomous cars have achieved unprecedented improvements in object detection thanks to the high performance of deep convolutional neural networks, and research is now devoted to more complex traffic scene parsing. In this paper, we present a novel traffic scene parsing algorithm by learning a fully combined convolutional network (FCCN). Our network improves the upsampling part of a fully convolutional network: we add five unpooling layers after the final convolution layer, each corresponding to an earlier pooling layer. We then combine each pair of pooling and unpooling layers and add convolution layers after the combined layer. Since we find it is still hard to learn fine details or edge features of target objects, we propose a soft cost function for further improvement. Our cost function adds soft weights to the different target objects: the weight of the background is fixed at one, while the weights of the target objects are calculated dynamically and constrained to be larger than two. We evaluate our work on the CamVid dataset. The results show that our FCCN achieves a considerable improvement in segmentation performance.
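A rough PyTorch sketch of such a soft cost function (our reading of the description, not the authors' code) keeps the background weight at one and gives each target class a dynamically computed weight clamped above two; the rarity-based weighting rule below is an assumption made for the example.

import torch
import torch.nn.functional as F

def soft_weighted_loss(logits, target, background_class=0, min_target_weight=2.0):
    """Per-class weighted cross-entropy: background weight is 1, target-class
    weights grow as a class becomes rarer in the batch and are clamped to be at
    least min_target_weight. The rarity-based rule is a hypothetical choice."""
    num_classes = logits.shape[1]
    counts = torch.bincount(target.flatten(), minlength=num_classes).float()
    freq = counts / counts.sum().clamp(min=1)
    weights = (1.0 / freq.clamp(min=1e-6)).sqrt()   # rarer class -> larger weight
    weights = weights.clamp(min=min_target_weight)
    weights[background_class] = 1.0                 # background fixed at 1
    return F.cross_entropy(logits, target, weight=weights)

logits = torch.randn(2, 12, 64, 64)
target = torch.randint(0, 12, (2, 64, 64))
print(soft_weighted_loss(logits, target))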
International Conference on Control and Automation | 2017
Hong Wu; Chen Ye; Junqiao Zhao
Being "fast enough" is a key issue for intelligent vehicle motion planning. The traditional straightforward high-dimensional search approach to motion planning often suffers from huge state spaces, high time complexity and inefficiency. In this paper, we propose an increment-dimensional heuristic search to address this problem. Our method employs a stepped-up heuristic search to reduce the number of expanded states and improve the execution efficiency of the search algorithm, while the continuous motion planning in this scheme still provides a high-quality trajectory for the vehicle. In experiments, quantitative evaluation shows that the proposed algorithm expands more than 90% fewer states and runs in about one tenth of the time of the straightforward heuristic search method. It turns out to be a very good trade-off between execution efficiency and trajectory quality in real-world scenarios. In practice, this algorithm is implemented in the decision-making module of TiEV (Tongji Intelligent Electronic Vehicle).
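A common way to realise this kind of stepped-up search (our illustration, not necessarily the TiEV implementation) is to run a cheap low-dimensional search first and use its cost-to-go values as the heuristic of the full high-dimensional search, as in the sketch below.

import heapq

def grid_cost_to_go(grid, goal):
    """Dijkstra over a 2D occupancy grid (0 = free, 1 = blocked), returning the
    cost-to-go from every reachable free cell to the goal; this table then serves
    as a heuristic for the higher-dimensional (x, y, heading, ...) search."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    queue = [(0.0, goal)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if d > dist.get((r, c), float('inf')):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return dist

def heuristic(state, cost_to_go):
    """Heuristic for the high-dimensional planner: look up the low-dimensional
    cost-to-go of the state's (row, col) cell; unreachable cells get infinity."""
    return cost_to_go.get((state[0], state[1]), float('inf'))

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
ctg = grid_cost_to_go(grid, (2, 0))
print(heuristic((0, 0, 0.0), ctg))   # cost-to-go from cell (0, 0)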