Xianyao Chen
State Oceanic Administration
Publication
Featured research published by Xianyao Chen.
Advances in Adaptive Data Analysis | 2009
Norden E. Huang; Zhaohua Wu; Steven R. Long; Kenneth C. Arnold; Xianyao Chen; Karin Blank
Instantaneous frequency (IF) is necessary for understanding the detailed mechanisms of nonlinear and nonstationary processes. Historically, IF was computed from the analytic signal (AS) through the Hilbert transform. This paper offers an overview of the difficulties involved in using the AS and presents two new methods that overcome them. The first approach is to compute the quadrature (defined here as a simple 90° shift of phase angle) directly. The second approach is designated as the normalized Hilbert transform (NHT), which consists of applying the Hilbert transform to the empirically determined FM signals. Additionally, we also introduce alternative methods to compute local frequency: the generalized zero-crossing (GZC) and the Teager energy operator (TEO) methods. Through careful comparisons, we found that the NHT and direct quadrature gave the best overall performance. While the TEO method is the most localized, it is limited to data from linear processes; the GZC method is the m...
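Two of the local-frequency routes the abstract compares can be sketched in a few lines: the classical analytic-signal path (Hilbert transform, then the derivative of the unwrapped phase) and the Teager energy operator. This is a minimal illustration on a synthetic mono-component tone, not the authors' NHT or direct-quadrature implementation; the signal, sampling rate, and variable names are invented for the demo.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic mono-component signal: a 5 Hz cosine sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * 5.0 * t)

# Classical route: analytic signal via the Hilbert transform,
# then IF as the time derivative of the unwrapped phase.
analytic = hilbert(x)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.gradient(phase, t) / (2 * np.pi)   # in Hz

# Teager energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1).
# For a pure discrete tone cos(Omega*n) this equals sin^2(Omega) exactly.
teo = x[1:-1] ** 2 - x[:-2] * x[2:]

print(np.median(inst_freq))   # close to 5 Hz away from the edges
```

Note the well-known caveat the paper elaborates on: the Hilbert route only yields a meaningful IF for mono-component signals, which is why it is paired with EMD in practice.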
Science | 2014
Xianyao Chen; Ka Kit Tung
Deep-sea warming slows down global warming. Global warming seems to have paused over the past 15 years while the deep ocean takes the heat instead. The thermal capacity of the oceans far exceeds that of the atmosphere, so the oceans can store up to 90% of the heat buildup caused by increased concentrations of greenhouse gases such as carbon dioxide. Chen and Tung used observational data to trace the pathways of recent ocean heating. They conclude that the deep Atlantic and Southern Oceans, but not the Pacific, have absorbed the excess heat that would otherwise have fueled continued warming. Science, this issue p. 897

The slowdown in global warming over the beginning of the 21st century has resulted from heat transport into the deep ocean. A vacillating global heat sink at intermediate ocean depths is associated with different climate regimes of surface warming under anthropogenic forcing: the latter part of the 20th century saw rapid global warming as more heat stayed near the surface, while in the 21st century surface warming slowed as more heat moved into the deeper ocean. In situ and reanalyzed data are used to trace the pathways of ocean heat uptake. In addition to the shallow La Niña–like patterns in the Pacific that were the previous focus, we found that the slowdown is mainly caused by heat transported to deeper layers in the Atlantic and the Southern Oceans, initiated by a recurrent salinity anomaly in the subpolar North Atlantic. Cooling periods associated with the latter deeper heat-sequestration mechanism historically lasted 20 to 35 years.
Advances in Adaptive Data Analysis | 2009
Zhaohua Wu; Norden E. Huang; Xianyao Chen
A multi-dimensional ensemble empirical mode decomposition (MEEMD) for multi-dimensional data (such as images or solids with variable density) is proposed here. The decomposition is based on applying ensemble empirical mode decomposition (EEMD) to slices of the data in each and every dimension involved. The final reconstruction of the corresponding intrinsic mode function (IMF) is based on a comparable minimal-scale combination principle. For two-dimensional spatial data or images, f(x,y), we consider the data (or image) as a collection of one-dimensional series in both the x-direction and the y-direction. Each of the one-dimensional slices is decomposed through EEMD, and slices of comparable scale are reconstructed into two-dimensional pseudo-IMF-like components. This new two-dimensional data is further decomposed, but now the data is considered as a collection of one-dimensional series in the y-direction along locations in the x-direction. In this way, we obtain a collection of two-dimensional components. Thes...
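The slice-wise bookkeeping described above (decompose every x-slice, then every y-slice of each resulting component) can be sketched as follows. To keep the example self-contained, a simple moving-average split stands in for EEMD: each 1-D slice is divided into a detail part and a smooth part. The block therefore shows only the MEEMD data flow, not a real EEMD; the function names and the two-component split are illustrative assumptions.

```python
import numpy as np

def decompose_1d(sig, width=5):
    """Placeholder two-component split standing in for EEMD:
    a smooth part (moving average) plus the residual detail.
    A real MEEMD would return several IMFs per slice."""
    kernel = np.ones(width) / width
    smooth = np.convolve(sig, kernel, mode="same")
    return np.array([sig - smooth, smooth])   # [detail, smooth]

def meemd_2d(data):
    """Slice-wise 2-D decomposition in the spirit of MEEMD:
    decompose every row (x-slices), then decompose every column
    of each row-component, giving component combinations C[i][j]."""
    n_rows, n_cols = data.shape
    # Step 1: decompose each row; row_comps[i] is a 2-D pseudo-component.
    row_comps = np.stack([decompose_1d(data[r]) for r in range(n_rows)], axis=1)
    # Step 2: decompose each column of every row-component.
    out = []
    for comp in row_comps:
        col_comps = np.stack([decompose_1d(comp[:, c]) for c in range(n_cols)],
                             axis=2)
        out.append(col_comps)
    return np.array(out)   # shape (2, 2, n_rows, n_cols)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
comps = meemd_2d(img)
# Completeness check: the components sum back to the original image.
print(np.allclose(comps.sum(axis=(0, 1)), img))
```

The final MEEMD step, combining components of comparable minimal scale across the two passes, is omitted here for brevity.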
Advances in Adaptive Data Analysis | 2010
Gang Wang; Xianyao Chen; Fangli Qiao; Zhaohua Wu; Norden E. Huang
Empirical Mode Decomposition (EMD) has been widely used to analyze non-stationary and nonlinear signals by decomposing data into a series of intrinsic mode functions (IMFs) and a trend function through sifting processes. For lack of a firm mathematical foundation, the implementation of EMD is still empirical and ad hoc. In this paper, we prove mathematically that EMD, as practiced now, only gives an approximation to the true envelope. As a result, there is a potential conflict between the strict definition of an IMF and its empirical implementation through natural cubic splines. It is found that the amplitude of an IMF is closely connected with the interpolation function defining the upper and lower envelopes: adopting the cubic spline function, the upper (lower) envelope of the resulting IMF is proved to be a unitary cubic spline line as long as the extrema are sparsely distributed compared with the sampling data. Furthermore, when the natural spline boundary condition is adopted, the unitary cubic spline line degenerates into a straight line. Unless the amplitude of the IMF is a strictly monotonic function, the slope of the straight line will be zero. This explains why the amplitude of an IMF tends to become constant as the number of siftings increases ad infinitum. Therefore, to get physically meaningful IMFs, the number of siftings for each IMF should be kept low, as in the practice of EMD. Strictly speaking, the resolution of these difficulties would be either to change the EMD implementation method and eschew the spline, or to define the stoppage criterion more objectively and leniently. Short of a full resolution of the conflict, we should realize that EMD as implemented now yields an approximation with respect to cubic...
Indoor Air | 2010
Xianyao Chen; Philip K. Hopke
Limonene ozonolysis was examined under conditions relevant to indoor environments in terms of temperatures, air-exchange rates, and reagent concentrations. The secondary organic aerosols (SOA) produced and the particle-bound reactive oxygen species (ROS) were studied; even in situations where the product of the two reagent concentrations was held constant, the specific concentration combinations played an important role in determining the total SOA formed. Ozone/limonene concentration ratios between 1 and 2 produced the maximum SOA concentration. The two enantiomers, R-(+)-limonene and S-(-)-limonene, were found to have similar SOA yields. The measured ROS concentrations, for limonene and ozone concentrations relevant to prevailing indoor levels, ranged from 5.2 to 14.5 nmol/m³ equivalent of H2O2. Particle samples aged for 24 h in a freezer lost a discernible fraction of their ROS compared to fresh samples: the residual ROS concentrations were around 83-97% of the values obtained from the analysis of samples immediately after collection. Based on ROS measurements under various conditions, the ROS formed from limonene ozonolysis could be separated into three categories: short-lived, highly reactive, and volatile; semi-volatile and relatively stable; and non-volatile and low-reactivity species. Such chemical and physical characterization of the ROS in terms of reactivity and volatility provides useful insights into the nature of ROS. PRACTICAL IMPLICATIONS A better understanding of the formation mechanism of secondary organic aerosols generated by indoor chemistry allows us to evaluate and predict exposure in such environments. Measurements of particle-bound ROS shed light on the potential adverse health effects associated with exposure to particles.
Advances in Adaptive Data Analysis | 2011
Norden E. Huang; Xianyao Chen; Men-Tzung Lo; Zhaohua Wu
As the original definition of the Hilbert spectrum was given in terms of total energy and amplitude, there is a mismatch between the Hilbert spectrum and the traditional Fourier spectrum, which is defined in terms of energy density. Rigorous definitions of the Hilbert energy and amplitude spectra are given here in terms of energy and amplitude density in the time-frequency space. Unlike Fourier spectral analysis, where the resolution is fixed once the data length and sampling rate are given, the time-frequency resolution can be arbitrarily assigned in Hilbert spectral analysis (HSA). Furthermore, HSA can also provide zooming ability for detailed examination of the data in a specific frequency range with full resolution power. These complications have made the conversion between Hilbert and Fourier spectral results difficult, and the conversion formula has been elusive until now. We derive a simple relationship between them in this paper. The conversion factor turns out to be simply the sampling rate for the full-resolution case. In the case of zooming, there is an additional multiplicative factor. The conversion factors have been tested in various cases including white noise, the delta function, and signals from natural phenomena. With the introduction of this conversion, we can compare HSA and Fourier spectral analysis results quantitatively.
Journal of Climate | 2015
Xianyao Chen; John M. Wallace
ENSO-like variability is examined using a set of univariate indices based on unfiltered monthly global sea surface temperature (SST), sea level pressure (SLP), outgoing longwave radiation (OLR), sea level, and three-dimensional ocean temperature (OT) fields. These indices, many of which correspond to the leading principal components (PCs) of the respective global fields, are highly correlated with each other. In combination with their spatial regression patterns, they provide a comprehensive description of ENSO-like variability in the atmosphere and ocean across time scales ranging from months to decades, from 1950 onward. The SLP and SST indices are highly correlated with one another back to the late nineteenth century. The interdecadal-scale shifts in the prevailing polarity of ENSO that occurred in the 1940s, the 1970s, and around the year 2000 are clearly evident in low-pass-filtered time series of these indices. On the basis of empirical mode decomposition, ENSO-like variability is partiti...
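For readers unfamiliar with how such indices are built, the leading principal component of a gridded field can be obtained from the SVD of the anomaly data matrix. The sketch below uses a synthetic field with a planted "index" signal; the field sizes, noise level, and signal are invented for illustration, and no claim is made about the paper's actual preprocessing (e.g., area weighting or detrending).

```python
import numpy as np

# Synthetic gridded field: a fixed spatial pattern modulated by an
# index-like time series, plus noise (sizes are arbitrary).
rng = np.random.default_rng(42)
n_time, n_grid = 600, 200                       # months x grid points
pattern = rng.standard_normal(n_grid)           # fixed spatial pattern
signal = np.sin(np.linspace(0, 12 * np.pi, n_time))  # planted "index"
field = np.outer(signal, pattern) + 0.1 * rng.standard_normal((n_time, n_grid))

# EOF analysis: remove the time mean, then take the SVD.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = u[:, 0] * s[0]      # leading principal component (the index)
eof1 = vt[0]              # its spatial regression pattern

# PC1 should track the planted index signal (up to an arbitrary sign).
corr = np.corrcoef(pc1, signal)[0, 1]
print(abs(corr) > 0.95)
```

The sign ambiguity of SVD modes is why such indices are conventionally normalized and sign-fixed against a reference region before interpretation.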
Advances in Adaptive Data Analysis | 2011
Zhaohua Wu; Norden E. Huang; Xianyao Chen
In this paper, we present some general considerations about data analysis from the perspective of a physical scientist and advocate physical, instead of mathematical, analysis of data. These considerations have accompanied our development of novel adaptive, local analysis methods, especially the empirical mode decomposition, its major variation, the ensemble empirical mode decomposition, and their preliminary mathematical explanations. A particular emphasis is on the advantages and disadvantages of the mathematical and physical constraints associated with various analysis methods. We argue, using data analysis in a given temporal domain of observation as an example, that the mathematical constraints imposed on data may lead to difficulties in understanding the physics behind the data. With such difficulties in mind, we promote adaptive, local analysis methods, which satisfy the fundamental physical principle that the subsequent evolution of a system cannot change the system's past evolution. We also argue, using the ensemble empirical mode decomposition as an example, that noise can be helpful in extracting physically meaningful signals hidden in noisy data.
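The noise-helps argument behind EEMD rests on ensemble averaging: noise added independently to each ensemble member cancels as 1/√N, while any signal common to all members survives. The toy below isolates just that averaging step (a real EEMD would run EMD on each noisy copy and average the resulting IMFs); all numbers are illustrative.

```python
import numpy as np

# A clean signal plus independently drawn white noise per ensemble member.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 1000))
n_ensemble, noise_std = 400, 0.2

members = signal + noise_std * rng.standard_normal((n_ensemble, signal.size))
ensemble_mean = members.mean(axis=0)

# The residual noise in the ensemble mean shrinks roughly as
# noise_std / sqrt(n_ensemble), i.e. ~0.01 here instead of 0.2.
residual = np.std(ensemble_mean - signal)
print(residual < 2 * noise_std / np.sqrt(n_ensemble))
```

This is why EEMD can afford to perturb the data with noise: the perturbation regularizes the sifting while the averaging removes it again.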
Climate Dynamics | 2013
Xianyao Chen; Yuanling Zhang; Min Zhang; Ying Feng; Zhaohua Wu; Fangli Qiao; Norden E. Huang
This study proposes a new, more precise and detailed method to examine the performance of IPCC AR4 models in simulating the nonlinear variability of global ocean heat content (OHC) on the annual time scale during 1950-1999. The method is based on the intercomparison of the modulated annual cycle (MAC) of OHC and its instantaneous frequency (IF), derived by Empirical Mode Decomposition and the Hilbert-Huang Transform. In addition to indicating the general agreement in gross features globally between models and observations, our results point out problems both in the observations and in the modeling. In the well-observed Northern Hemisphere, the models exhibit extremely good skill in capturing the nonlinear annual variability of OHC: the simulated MACs are highly correlated with observations (>0.95), and the IFs of the MACs vary coherently with each other. However, in the sparsely observed Southern Hemisphere (SH), even though the simulated MACs correlate highly with observations, the IFs show significant differences. This comparison shows that the IF variability of the MACs in the SH is coherent among the models but not with observations, revealing problems in the objectively analyzed dataset built from sparse observations. In the well-observed tropical region, the models lack coherence with the observations, indicating inadequate model physics in the tropics. These results illustrate that the proposed method can be used routinely to identify problems both in models and in observations of the global ocean as a critical component of global climate change.
Advances in Adaptive Data Analysis | 2009
Norden E. Huang; Zhaohua Wu; Jorge E. Pinzón; Claire L. Parkinson; Steven R. Long; Karin Blank; Per Gloersen; Xianyao Chen
Global climate variability is currently a topic of high scientific and public interest, with potential ramifications for the Earth's ecological systems and policies governing the world economy. Across the broad spectrum of global climate variability, the least well understood time scale is that of decade-to-century [1]. The bases for investigating past changes across that period band are the records of annual mean Global Surface Temperature Anomaly (GSTA) time series, produced variously in many painstaking efforts [2-5]. However, due to incipient instrument noise, the uneven distribution of sensors spatially and temporally, data gaps, land urbanization, and bias corrections to sea surface temperature, noise and uncertainty continue to exist in all data sets [1, 2, 6-8]. Using the Empirical Mode Decomposition method as a filter, we can reduce this noise and uncertainty and produce a cleaner annual mean GSTA dataset. The noise in the climate dataset is thus reduced by one-third, and the difference between the new and the commonly used but unfiltered time series ranges up to 0.1506°C, with a standard deviation up to 0.01974°C and an overall mean difference of only 0.0001°C. Considering that the total increase of the global mean temperature over the last 150 years is only around 0.6°C, we believe this difference of 0.1506°C is significant.