Guangwen Yang
Tsinghua University
Publications
Featured research published by Guangwen Yang.
International Journal of Remote Sensing | 2013
Peng Gong; Jie Wang; Le Yu; Yongchao Zhao; Yuanyuan Zhao; Lu Liang; Z. C. Niu; Xiaomeng Huang; Haohuan Fu; Shuang Liu; Congcong Li; Xueyan Li; Wei Fu; Caixia Liu; Yue Xu; Xiaoyi Wang; Qu Cheng; Luanyun Hu; Wenbo Yao; Han Zhang; Peng Zhu; Ziying Zhao; Haiying Zhang; Yaomin Zheng; Luyan Ji; Yawen Zhang; Han Chen; An Yan; Jianhong Guo; Liang Yu
We have produced the first 30 m resolution global land-cover maps using Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) data. We have classified over 6600 scenes of Landsat TM data after 2006, and over 2300 scenes of Landsat TM and ETM+ data before 2006, all selected from the green season. These images cover most of the world's land surface except Antarctica and Greenland. Most of these images came from the United States Geological Survey in level L1T (orthorectified). Four freely available classifiers were employed, including the conventional maximum likelihood classifier (MLC), the J4.8 decision tree classifier, the Random Forest (RF) classifier, and the support vector machine (SVM) classifier. A total of 91,433 training samples were collected by traversing each scene and finding the most representative and homogeneous samples. A total of 38,664 test samples were collected at preset, fixed locations based on a globally systematic unaligned sampling strategy. Two software tools, Global Analyst and Global Mapper, developed by extending the functionality of Google Earth, were used in developing the training and test sample databases by referencing the Moderate Resolution Imaging Spectroradiometer enhanced vegetation index (MODIS EVI) time series for 2010 and high-resolution images from Google Earth. A unique land-cover classification system was developed that can be crosswalked to the existing United Nations Food and Agriculture Organization (FAO) land-cover classification system as well as the International Geosphere-Biosphere Programme (IGBP) system. Using the four classification algorithms, we obtained the initial set of global land-cover maps. The SVM produced the highest overall classification accuracy (OCA) of 64.9% assessed with our test samples, with RF (59.8%), J4.8 (57.9%), and MLC (53.9%) ranked from second to fourth. We also estimated the OCAs using a subset of our test samples (8629), each of which represented a homogeneous area greater than 500 m × 500 m. Using this subset, we found the OCA for the SVM to be 71.5%. As a consistent source for estimating the coverage of global land-cover types, estimation from the test samples shows that only 6.90% of the world is planted for agricultural production. The total area of cropland is 11.51% if unplanted croplands are included. Forests, grasslands, and shrublands cover 28.35%, 13.37%, and 11.49% of the world, respectively. Impervious surfaces cover only 0.66% of the world. Inland waterbodies, barren lands, and snow and ice cover 3.56%, 16.51%, and 12.81% of the world, respectively.
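A minimal sketch, not the authors' pipeline, of the classify-and-assess step described above: train an SVM and a Random Forest on labelled spectral samples and report the overall classification accuracy (OCA) on an independent test set. The arrays here are hypothetical stand-ins for the paper's training and test sample databases.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 91,433 training and 38,664 test samples:
# each row holds the TM/ETM+ band values of one sample pixel, each label a class id.
X_train, y_train = rng.random((1000, 6)), rng.integers(0, 10, 1000)
X_test,  y_test  = rng.random((400, 6)),  rng.integers(0, 10, 400)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF",  RandomForestClassifier(n_estimators=100))]:
    clf.fit(X_train, y_train)
    oca = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: overall classification accuracy = {oca:.3f}")
```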
Advances in Atmospheric Sciences | 2013
Lijuan Li; Pengfei Lin; Yongqiang Yu; Bin Wang; Tianjun Zhou; Li Liu; Jiping Liu; Qing Bao; Shiming Xu; Wenyu Huang; Kun Xia; Ye Pu; Li Dong; Si Shen; Yimin Liu; Ning Hu; Mimi Liu; Wenqi Sun; Xiangjun Shi; Weipeng Zheng; Bo Wu; Mirong Song; Hailong Liu; Xuehong Zhang; Guoxiong Wu; Wei Xue; Xiaomeng Huang; Guangwen Yang; Zhenya Song; Fangli Qiao
This study mainly introduces the development of the Flexible Global Ocean-Atmosphere-Land System Model: Grid-point Version 2 (FGOALS-g2) and the preliminary evaluations of its performances based on results from the pre-industrial control run and four members of historical runs according to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) experiment design. The results suggest that many obvious improvements have been achieved by the FGOALS-g2 compared with the previous version, FGOALS-g1, including its climatological mean states, climate variability, and 20th century surface temperature evolution. For example, FGOALS-g2 better simulates the frequency of tropical land precipitation, East Asian Monsoon precipitation and its seasonal cycle, MJO and ENSO, which are closely related to the updated cumulus parameterization scheme, as well as the alleviation of uncertainties in some key parameters in shallow and deep convection schemes, cloud fraction, cloud macro/microphysical processes and the boundary layer scheme in its atmospheric model. The annual cycle of sea surface temperature along the equator in the Pacific is significantly improved in the new version. The sea ice salinity simulation is one of the unique characteristics of FGOALS-g2, although it is somehow inconsistent with empirical observations in the Antarctic.
Science China Information Sciences | 2016
Haohuan Fu; Junfeng Liao; Jinzhe Yang; Lanning Wang; Zhenya Song; Xiaomeng Huang; Chao Yang; Wei Xue; Fangfang Liu; Fangli Qiao; Wei Zhao; Xunqiang Yin; Chaofeng Hou; Chenglong Zhang; Wei Ge; Jian Zhang; Yangang Wang; Chunbo Zhou; Guangwen Yang
The Sunway TaihuLight supercomputer is the world’s first system with a peak performance greater than 100 PFlops. In this paper, we provide a detailed introduction to the TaihuLight system. In contrast with other existing heterogeneous supercomputers, which include both CPU processors and PCIe-connected many-core accelerators (NVIDIA GPU or Intel Xeon Phi), the computing power of TaihuLight is provided by a homegrown many-core SW26010 CPU that includes both the management processing elements (MPEs) and computing processing elements (CPEs) in one chip. With 260 processing elements in one CPU, a single SW26010 provides a peak performance of over three TFlops. To alleviate the memory bandwidth bottleneck in most applications, each CPE comes with a scratch pad memory, which serves as a user-controlled cache. To support the parallelization of programs on the new many-core architecture, in addition to the basic C/C++ and Fortran compilers, the system provides a customized Sunway OpenACC tool that supports the OpenACC 2.0 syntax. This paper also reports our preliminary efforts on developing and optimizing applications on the TaihuLight system, focusing on key application domains, such as earth system modeling, ocean surface wave modeling, atomistic simulation, and phase-field simulation.
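A back-of-envelope check connecting the per-chip and system-level figures above. The node count is not stated in the abstract; it is assumed here from public TOP500 listings.

```python
# Assumed values: ~3 TFlops peak per SW26010 (from the abstract) and 40,960
# compute nodes (assumption taken from public TOP500 data, one chip per node).
peak_per_chip_tflops = 3.06
nodes = 40_960
system_peak_pflops = peak_per_chip_tflops * nodes / 1000
print(f"Estimated system peak: {system_peak_pflops:.0f} PFlops")  # well above 100 PFlops
```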
International Workshop on Peer-to-Peer Systems | 2004
Shuming Shi; Guangwen Yang; Dingxing Wang; Jin Yu; Shaogang Qu; Ming Chen
This paper discusses large-scale keyword searching on top of peer-to-peer (P2P) networks. The state-of-the-art keyword searching techniques for unstructured and structured P2P systems are query flooding and inverted list intersection, respectively. However, it has been demonstrated that P2P-based large-scale full-text searching is not feasible using either of the two techniques. We propose in this paper a new index partitioning and building scheme, multi-level partitioning (MLP), and discuss its implementation on top of P2P networks. MLP can dramatically reduce bisection bandwidth consumption and end-user latency compared with the partition-by-keyword scheme. Compared with partition-by-document, it needs to broadcast a query to only a moderate number of peers to generate precise results.
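A minimal, single-process sketch of the multi-level partitioning idea described above (simplified, not the paper's exact protocol): documents are spread across peer groups, and inside each group the inverted index is partitioned by keyword, so a multi-keyword query is resolved once per group and intersected locally instead of shipping full posting lists across the network. The group and peer counts are arbitrary placeholders.

```python
from collections import defaultdict

GROUPS, PEERS_PER_GROUP = 4, 8

def doc_group(doc_id):            # level 1: partition documents across groups
    return hash(doc_id) % GROUPS

def term_peer(term):              # level 2: partition by keyword inside a group
    return hash(term) % PEERS_PER_GROUP

# index[group][peer][term] -> set of document ids
index = [[defaultdict(set) for _ in range(PEERS_PER_GROUP)] for _ in range(GROUPS)]

def add_document(doc_id, text):
    g = doc_group(doc_id)
    for term in set(text.lower().split()):
        index[g][term_peer(term)][term].add(doc_id)

def search(*terms):
    """Union over groups of the local intersections of the query terms."""
    hits = set()
    for g in range(GROUPS):
        postings = [index[g][term_peer(t)][t] for t in terms]
        hits |= set.intersection(*postings)
    return hits

add_document("d1", "peer to peer keyword searching")
add_document("d2", "distributed keyword index partitioning")
print(search("keyword", "searching"))   # {'d1'}
```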
International Conference on Parallel Processing | 2003
Shuming Shi; Jin Yu; Guangwen Yang; Dingxing Wang
We discuss techniques for performing distributed page ranking on top of structured peer-to-peer networks. Distributed page ranking is needed because the size of the Web grows at a remarkable speed and centralized page ranking is not scalable. Open System PageRank is presented based on the traditional PageRank used by Google. We then propose some distributed page-ranking algorithms, partially prove their convergence, and discuss some of their interesting properties. Indirect transmission is introduced to reduce communication overhead between page rankers and to achieve scalable communication. The relationship between convergence time and bandwidth consumed is also discussed. Finally, we verify some of the discussions with experiments based on real datasets.
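A simplified, single-process sketch (not the paper's algorithm) of distributing PageRank iterations across peers: each peer owns a subset of pages and, per iteration, routes rank contributions to the peers owning the link targets before updating its own pages. The tiny link graph and peer assignment are hypothetical.

```python
DAMPING = 0.85

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
owner = {"a": 0, "b": 0, "c": 1, "d": 1}          # page -> owning peer id
rank = {p: 1.0 / len(links) for p in links}

for _ in range(50):
    # messages[peer][page] accumulates rank contributions routed to that peer
    messages = {peer: {} for peer in set(owner.values())}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            dst = messages[owner[target]]
            dst[target] = dst.get(target, 0.0) + share
    for page in rank:                              # each peer updates its own pages
        incoming = messages[owner[page]].get(page, 0.0)
        rank[page] = (1 - DAMPING) / len(links) + DAMPING * incoming

print({p: round(r, 3) for p, r in sorted(rank.items())})
```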
Journal of Physics: Condensed Matter | 1999
J B Wang; Guangwen Yang
The phase transformation between diamond and graphite in the preparation of diamond by pulsed-laser-induced liquid-solid interface reaction (PLIIR) was studied by calculating the probability of phase transition of the carbon atoms over a potential barrier in the pressure-temperature (P-T) phase diagram of carbon. It is found that the probability of phase transition from graphite to diamond is as high as 10⁻³-10⁻⁴ in the region of the pressure-temperature phase diagram where the pressure and temperature are in the range of 10 GPa to 15 GPa and 4000 K to 5000 K, respectively. The distribution of the probability of the phase transformation from graphite to diamond was obtained in the corresponding pressure-temperature region, in which diamonds are prepared by PLIIR. In addition, the dependence of the probability for the transformation of graphite to diamond on temperature was investigated and found to be in agreement with the Arrhenius rule.
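Illustrative only: an Arrhenius/Boltzmann-type barrier-crossing estimate of the kind discussed above. The barrier height and attempt factor below are hypothetical placeholders, not values from the paper; they merely show how the probability grows with temperature in the 4000-5000 K range.

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
BARRIER_EV = 3.3        # hypothetical effective barrier (eV), not from the paper
PREFACTOR = 1.0         # hypothetical dimensionless attempt factor

def transition_probability(temperature_k):
    """Boltzmann factor for crossing the graphite -> diamond barrier."""
    return PREFACTOR * math.exp(-BARRIER_EV / (K_B * temperature_k))

for t in (4000, 4500, 5000):
    print(f"T = {t} K: P ~ {transition_probability(t):.1e}")
```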
Advances in Atmospheric Sciences | 2013
Lijuan Li; Bin Wang; Li Dong; Li Liu; Si Shen; Ning Hu; Wenqi Sun; Yong Wang; Wenyu Huang; Xiangjun Shi; Ye Pu; Guangwen Yang
The Grid-point Atmospheric Model of IAP LASG version 2 (GAMIL2) has been developed by upgrading the deep convection parameterization, cumulus cloud fraction, and two-moment cloud microphysical scheme, as well as changing some of the large uncertain parameters. In this paper, its performance is evaluated, and the results suggest that there are some significant improvements in GAMIL2 compared with the previous version, GAMIL1, for example, in the components of the energy budget at the top of the atmosphere (TOA) and at the surface; the geographic distribution of shortwave cloud radiative forcing (SWCF); the ratio of stratiform to total rainfall; the response of the atmospheric circulation to the tropical ocean; and the eastward propagation and spatiotemporal structures of the Madden-Julian Oscillation (MJO). Furthermore, the indirect aerosol effect (IAE) is −0.94 W m⁻², within the range of 0 to −2 W m⁻² given by the IPCC Fourth Assessment Report (2007). The influence of uncertain parameters on the MJO and radiation fluxes is also discussed.
Grid Computing | 2007
Yongwei Wu; Yulai Yuan; Guangwen Yang; Weimin Zheng
Due to the dynamic nature of grid environments, scheduling algorithms always need the assistance of long-time-ahead load prediction to make decisions on how to use grid resources efficiently. In this paper, we present and evaluate a new hybrid model, which predicts the n-step-ahead load status using interval values. This model integrates an autoregressive (AR) model with confidence interval estimations to forecast the future load of a system. Meanwhile, two filtering technologies from the signal-processing field are also introduced into this model to eliminate data noise and enhance prediction accuracy. The results of experiments conducted on a real grid environment demonstrate that this new model is more capable of predicting the n-step-ahead load in a computational grid than previous works. The proposed hybrid model performs well for prediction advance times of up to 50 minutes, with significantly fewer prediction errors than the conventional AR model. It also achieves an interval length acceptable to the task scheduler.
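A minimal sketch of the idea behind the hybrid predictor described above (not the paper's model): fit an autoregressive model to a host-load series by least squares, forecast n steps ahead recursively, and report an interval from the in-sample residual spread. The filtering stages and the paper's exact interval construction are omitted, and the load series here is synthetic.

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares AR(order) fit; returns (intercept+coefficients, residual std)."""
    X = np.column_stack([np.ones(len(series) - order)] +
                        [series[order - k - 1:len(series) - k - 1] for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, np.std(y - X @ coeffs)

def forecast(series, coeffs, steps):
    """Recursive n-step-ahead point forecasts."""
    history, out = list(series), []
    for _ in range(steps):
        lags = history[-1:-len(coeffs):-1]          # most recent values first
        out.append(coeffs[0] + np.dot(coeffs[1:], lags))
        history.append(out[-1])
    return np.array(out)

rng = np.random.default_rng(1)
load = 0.5 + 0.1 * np.sin(np.arange(300) / 10) + 0.02 * rng.standard_normal(300)

coeffs, resid_std = fit_ar(load, order=4)
points = forecast(load, coeffs, steps=10)
lower, upper = points - 1.96 * resid_std, points + 1.96 * resid_std
print(np.round(np.column_stack([points, lower, upper]), 3))
```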
ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 2013
Chao Yang; Wei Xue; Haohuan Fu; Lin Gan; Linfeng Li; Yangtong Xu; Yutong Lu; Jiachang Sun; Guangwen Yang; Weimin Zheng
Developing highly scalable algorithms for global atmospheric modeling is becoming increasingly important as scientists seek to understand the behavior of the global atmosphere at extreme scales. Nowadays, heterogeneous architectures based on both processors and accelerators are becoming an important solution for large-scale computing. However, large-scale simulation of the global atmosphere brings a severe challenge to the development of highly scalable algorithms that fit well into state-of-the-art heterogeneous systems. Although successes have been achieved with GPU-accelerated computing in some top-level applications, studies that fully exploit heterogeneous architectures in global atmospheric modeling are still rarely seen, due in large part to both the computational difficulties of the mathematical models and the requirement of high accuracy for long-term simulations. In this paper, we propose a peta-scalable hybrid algorithm that is successfully applied in a cubed-sphere shallow-water model in global atmospheric simulations. We employ an adjustable partition between CPUs and GPUs to achieve a balanced utilization of the entire hybrid system, and present a pipe-flow scheme to conduct conflict-free inter-node communication on the cubed-sphere geometry and to maximize communication-computation overlap. Systematic optimizations for multithreading on both the GPU and CPU sides are performed to enhance computing throughput and improve memory efficiency. Our experiments demonstrate nearly ideal strong and weak scalability on up to 3,750 nodes of Tianhe-1A. The largest run sustains a performance of 0.8 PFlops in double precision (32% of the peak performance), using 45,000 CPU cores and 3,750 GPUs.
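A toy illustration (not the paper's scheme) of the adjustable CPU-GPU partition idea: split the rows of a local subdomain between CPU and GPU in proportion to their measured throughputs so that both sides finish a time step at roughly the same time. The throughput numbers below are hypothetical calibration values.

```python
def split_rows(total_rows, cpu_rate, gpu_rate):
    """Return (cpu_rows, gpu_rows) balancing work by relative throughput."""
    gpu_rows = round(total_rows * gpu_rate / (cpu_rate + gpu_rate))
    return total_rows - gpu_rows, gpu_rows

# Hypothetical throughputs (grid rows processed per second) from a calibration run.
cpu_rows, gpu_rows = split_rows(total_rows=1024, cpu_rate=1.0, gpu_rate=7.0)
print(cpu_rows, gpu_rows)   # 128 896 -> the GPU takes ~7/8 of the subdomain
```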
International Conference on Parallel Processing | 2011
Yifeng Geng; Shimin Chen; Yongwei Wu; Ryan Wu; Guangwen Yang; Weimin Zheng
MapReduce is an important programming model for processing and generating large data sets in parallel. It is commonly applied in applications such as web indexing, data mining, and machine learning. As an open-source implementation of MapReduce, Hadoop is now widely used in industry. Virtualization, which is easy to configure and economical to use, shows great potential for cloud computing. With the increasing number of cores per CPU and the adoption of virtualization techniques, one physical machine can host more and more virtual machines, but I/O devices normally do not increase as rapidly. Since MapReduce systems are often used to run I/O-intensive applications, decreased data redundancy and load imbalance, which increase I/O interference in a virtual cloud, become serious problems. This paper builds a model and defines metrics to analyze the data allocation problem in virtual environments theoretically. We then design a location-aware file block allocation strategy that retains compatibility with native Hadoop. Our model simulations and experiments on a real system show that the new strategy achieves better data redundancy and load balance, reducing I/O interference. The execution times of applications such as RandomWriter, Text Sort, and Word Count are reduced by up to 33% and by 10% on average.
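A simplified sketch (not the paper's strategy) of location-aware block placement in a virtualized cluster: when choosing replica targets, prefer virtual machines that sit on distinct physical hosts so replicas do not share one disk, which helps preserve redundancy and reduce I/O interference. The VM-to-host mapping and load figures are hypothetical.

```python
def place_block(replicas, vm_to_host, host_load):
    """Pick `replicas` VMs on distinct physical hosts, least-loaded hosts first."""
    chosen, used_hosts = [], set()
    for vm, host in sorted(vm_to_host.items(), key=lambda kv: host_load[kv[1]]):
        if host not in used_hosts:
            chosen.append(vm)
            used_hosts.add(host)
            host_load[host] += 1          # account for the new replica
        if len(chosen) == replicas:
            break
    return chosen

vm_to_host = {"vm1": "hostA", "vm2": "hostA", "vm3": "hostB", "vm4": "hostC"}
host_load = {"hostA": 2, "hostB": 0, "hostC": 1}
print(place_block(3, vm_to_host, host_load))   # ['vm3', 'vm4', 'vm1']
```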